https://arxiv.org/abs/2002.03448
Kelly Criterion: From a Simple Random Walk to Lévy Processes
The original Kelly criterion provides a strategy to maximize the long-term growth of winnings in a sequence of simple Bernoulli bets with an edge, that is, when the expected return on each bet is positive. The objective of this work is to consider more general models of returns and the continuous time, or high frequency, limits of those models.
\section{Introduction} Consider repeatedly engaging in a game of chance where one side has an edge and seeks to optimize the betting in a way that ensures maximal long-term growth rate of the overall wealth. This problem was posed and analyzed by John Kelly \cite{Kelly56} at Bell Labs in 1956; the solution was implemented and tested in a variety of settings by the successful mathematician, gambler, and hedge fund manager Ed Thorp \cite{Thorp06} over the period from the 1960s to the early 2000s. As a motivating example, consider betting on a biased coin toss where the return $r$ is a random variable with distribution \begin{equation} \label{return1} \mathbb{P}(r=1) = p,\ \ \ \mathbb{P}(r=-1)=1- p; \end{equation} in what follows, we refer to this as the {\em simple Bernoulli model}. The condition to have an edge in this setting becomes $1/2<p\leq 1$ or, equivalently, \begin{equation} \label{edge1} \mathbb{E}[r] =2p-1>0. \end{equation} We plan on making a long sequence of bets on this biased coin, resulting in an iid sequence of returns $\{r_k\}_{k\geq 1}$ with the same distribution as $r$, and ask how much we should bet so as to maximize long-term wealth, given that we are compounding our returns. Assume we are betting with a fixed exposure $f$, that is, each bet involves a fixed fraction $f$ of the overall wealth, and $f \in [0,1]$. Practically, $f \geq 0$ means {\bf no shorting} and $f \leq 1$ means {\bf no leverage}, which we refer to as the {\bf NS-NL} condition. Then, starting with the initial amount $W_0$, the total wealth at time $n=1,2,3,\ldots$ is the following function of $f$: \begin{equation*} W_n^f = W_0 \prod_{k=1}^n\big(1 + f r_k\big). \end{equation*} For the long-term compounder wishing to maximize their wealth, a natural and equivalent goal is to find the strategy $f=f^*$ maximizing the long-term growth rate \begin{equation} \label{rate-dt} g_r(f):= \lim_{n\rightarrow \infty}\frac{1}{n} \ln \frac{W_n^f}{W_0}.
\end{equation} By direct computation, $$ g_r(f) = \lim_{n\rightarrow \infty}\frac{1}{n}\sum_{k=1}^n \ln(1+fr_k) = \mathbb{E}\ln(1+fr)=p\ln(1+f) + (1-p)\ln(1-f), $$ where the second equality follows by the law of large numbers, and therefore, after solving $g_r'(f^*)=0$, \begin{equation} \label{optimal1} f^* = 2p-1,\ \ \max_{f \in [0,1]} g_r(f)= g_r(f^*)=p\ln \frac{p}{1-p}+\ln(2-2p); \end{equation} note that the edge condition \eqref{edge1} ensures that $f^*$ is an admissible strategy and $g_r(f^*)>0$. For further discussion of this result, see \cite{Thorp06}. Our objective in this paper is to derive analogues of \eqref{optimal1} in the following situations: \begin{enumerate} \item the returns have a more general distribution; \item the compounding is continuous in time; \item the compounding is high frequency, leading to a continuous-time limit. \end{enumerate} In particular, we consider several scenarios when the returns are described by L\'evy processes, which addresses some of Thorp's questions regarding fat-tailed distributions in finance \cite{Thorp08}. In what follows, we write $\xi\overset{d}{=}\eta$ to indicate equality in distribution for two random variables, and $X\overset{{\mathcal{L}}}{=} Y$ to indicate equality in law (as function-valued random elements) for two random processes. For $x>0$, $\lfloor x \rfloor$ denotes the largest integer less than or equal to $x$. To simplify the notations, we always assume that $W_0=1$. \section{Discrete Compounding: General Distribution of Returns} Assume that the returns on each bet are independent random variables $r_k,\ k\geq 1,$ with the same distribution as a given random variable $r$, and let \begin{equation} \label{wealth2} W_n^f = \prod_{k=1}^n\big(1 + f r_k\big) \end{equation} denote the corresponding wealth process. We also keep the NS-NL condition on admissible strategies: $f\in [0,1]$.
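As an aside, the closed-form solution \eqref{optimal1} of the introduction is easy to confirm numerically. Below is a minimal sketch; the value $p=0.6$ and the grid size are illustrative choices, not part of the model:

```python
import math

def g_bernoulli(f, p):
    # long-term growth rate g_r(f) = p ln(1+f) + (1-p) ln(1-f)
    return p * math.log(1.0 + f) + (1.0 - p) * math.log(1.0 - f)

def kelly_fraction(p, grid=10**5):
    # maximize g_r over a grid in [0, 1); f = 1 is excluded since g_r(1) = -inf
    fs = (i / grid for i in range(grid))
    return max(fs, key=lambda f: g_bernoulli(f, p))

p = 0.6
f_star = kelly_fraction(p)       # closed form: f* = 2p - 1 = 0.2
g_star = g_bernoulli(f_star, p)  # closed form: ln 2 + p ln p + (1-p) ln(1-p)
```

The grid search recovers $f^*=2p-1$ and the optimal rate $\ln 2+p\ln p+(1-p)\ln(1-p)$ to grid accuracy.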
For the wealth process $W^f$ to be well-defined, we need the random variable $r$ to have the following properties: \begin{align} \label{r1} &\mathbb{P}(r\geq -1)=1;\\ \label{r2} &\mathbb{P}(r>0)>0,\ \mathbb{P}(r<0)>0;\\ \label{r3} &\mathbb{E}|\ln(1+r)|<\infty. \end{align} Condition \eqref{r1} quantifies the idea that a loss in a bet should not be more than $100\%$. Condition \eqref{r2} is basic non-degeneracy: both gains and losses are possible. Condition \eqref{r3} is a minimal requirement to define the long-term growth rate of the wealth process. The key object in this section will be the function \begin{equation} \label{F(f)} g_r(f)=\mathbb{E}\ln(1+fr). \end{equation} In particular, the following result shows that $g_r(f)$ is the long term growth rate of the wealth process $W^f$. \begin{prop} \label{prop:LLN} If \eqref{r1} and \eqref{r3} hold and $g_r(f)\not=0$, then, for every $f\in[0,1]$, the wealth process $W^f$ has an asymptotic representation \begin{equation} \label{eq:LLN-g1} W^f_n=\exp\Big( n g_r(f)\big(1+\varepsilon_n\big)\Big), \end{equation} where \begin{equation} \label{asymp0} \lim_{n\to \infty} \varepsilon_n=0 \end{equation} with probability one. \end{prop} \begin{proof} By \eqref{wealth2}, we have \eqref{eq:LLN-g1} with \begin{equation} \label{err0} \varepsilon_n=\frac{1}{ng_r(f)}\sum_{k=1}^n\bigg( \ln(1+fr_k)-g_r(f)\bigg), \end{equation} and then \eqref{asymp0} follows by \eqref{r3} and the strong law of large numbers. \end{proof} A stronger version of \eqref{r3} leads to a more detailed asymptotic of $W_n^f$. \begin{thm} \label{th:asympt1} Assume that \eqref{r1} holds and \begin{equation} \label{r3-3} \mathbb{E}|\ln(1+r)|^2<\infty. 
\end{equation} Then, for every $f\in[0,1]$, the wealth process $W^f$ has an asymptotic representation \begin{equation} \label{eq:LLN-g2} W^f_n=\exp\Big( n g_r(f)+\sqrt{n}\big(\sigma_r(f)\zeta_n+ \epsilon_n\big)\Big), \end{equation} where $\zeta_n,\ n\geq 1, $ are standard Gaussian random variables, \begin{equation*} \sigma_r(f)=\Big(\mathbb{E}\big[\ln^2(1+fr)\big]-g_r^2(f)\Big)^{1/2}, \end{equation*} and \begin{equation*} \lim_{n\to \infty}\epsilon_n=0 \end{equation*} in probability. \end{thm} \begin{proof} With $\varepsilon_n$ from \eqref{err0}, the result follows by the Central Limit Theorem: $$ ng_r(f)\,\varepsilon_n= \sqrt{n}\left(\frac{1}{\sqrt{n}} \sum_{k=1}^n\bigg(\ln(1+fr_k)-g_r(f)\bigg)\right)= \sqrt{n}\big(\sigma_r(f)\zeta_n+ \epsilon_n\big). $$ \end{proof} Because the Central Limit Theorem gives convergence in distribution, the random variables $\zeta_n$ in \eqref{eq:LLN-g2} can indeed depend on $n$. Additional assumptions about the distribution of $r$ \cite[Theorem 1]{Zolotarev-AE} lead to higher-order asymptotic expansions and the possibility of having $\lim_{n\to \infty}\epsilon_n=0$ with probability one. The following properties of the function $g_r$ are immediate consequences of the definition and the assumptions \eqref{r1}--\eqref{r3}: \begin{prop} \label{prop:BasicF} The function $f\mapsto g_r(f)$ is continuous on the closed interval $[0,1]$ and infinitely differentiable in $(0,1)$. In particular, \begin{equation} \label{DF} \frac{dg_r}{df}(f)=\mathbb{E}\left[\frac{r}{1+fr}\right],\ \ \frac{d^2g_r}{df^2}(f)=-\mathbb{E}\left[\frac{r^2}{(1+fr)^2}\right]<0. \end{equation} \end{prop} \begin{cor} \label{cor1} The function $g_r$ achieves its maximal value on $[0,1]$ at a point $f^*\in [0,1]$ and $g_r(f^*)\geq 0$. If $ g_r(f^*)> 0$, then $f^*$ is unique. \end{cor} \begin{proof} Note that $g_r(0)=0$ and, by \eqref{DF}, the function $g_r$ is strictly concave on $[0,1]$.
\end{proof} While concavity of $g_r$ implies that $g_r$ achieves a unique global maximal value at a point $f^{**}$, it is possible that the domain of the function $g_r$ is bigger than the interval $[0,1]$ and $f^{**}\notin [0,1]$. A simple way to exclude the possibility $f^{**}<0$ is to consider returns $r$ that are not bounded from above: $\mathbb{P}(r>c)>0$ for all $c>0$; in this case, the function $g_r(f)=\mathbb{E}\ln(1+fr)$ is not defined for $f<0$. Similarly, if $\mathbb{P}(r<-1+\delta)>0$ for all $\delta>0$, then the function $g_r$ is not defined for $f>1$, excluding the possibility $f^{**}>1.$ Below are more general sufficient conditions to ensure that the point $f^*\in [0,1]$ from Corollary \ref{cor1} is the point of global maximum of $g_r$: $f^*=f^{**}$. \begin{prop} \label{prop:glob1} If \begin{align} \label{global1-l} &\lim_{f\to 0+} \mathbb{E}\left[\frac{r}{1+fr}\right]>0 \ \ \ \ \ {\rm and}\\ \label{global1-r} &\lim_{f\to 1-} \mathbb{E}\left[\frac{r}{1+fr}\right]<0, \end{align} then there is a unique $f^*\in (0,1)$ such that $$ g_r(f)< g_r(f^*) $$ for all $f$ in the domain of $g_r$. \end{prop} \begin{proof} Together with the intermediate value theorem, conditions \eqref{global1-l} and \eqref{global1-r} imply that there is a unique $f^*\in (0,1)$ such that $$ \frac{dg_r}{df}(f^*)=0. $$ It remains to use strict concavity of $g_r$. \end{proof} Because $r\geq -1$, the expected value $\mathbb{E}[r]$ is always defined, although $\mathbb{E}[r]=+\infty$ is a possibility. Thus, by \eqref{DF}, condition \eqref{global1-l} is equivalent to the intuitive idea of an edge: $$ \mathbb{E}[r]>0, $$ which, similar to \eqref{edge1}, guarantees that $g_r(f)>0$ for some $f\in (0,1)$. Condition \eqref{global1-r} can be written as $$ \mathbb{E}\left[\frac{r}{1+r}\right]<0, $$ with the convention that the left-hand side can be $-\infty$.
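For a discrete distribution of returns, the two conditions of Proposition \ref{prop:glob1} can be checked mechanically. The sketch below, with illustrative probabilities and payoffs of our own choosing, evaluates $\mathbb{E}[r]$ and $\mathbb{E}[r/(1+r)]$ exactly:

```python
from fractions import Fraction as F  # exact arithmetic keeps the sign checks honest

def edge_conditions(outcomes):
    """outcomes: list of (return r, probability p) with r > -1.
    Returns (E[r], E[r/(1+r)]); an interior optimum f* in (0,1) requires
    E[r] > 0 and E[r/(1+r)] < 0 (conditions global1-l and global1-r)."""
    e_r = sum(p * r for r, p in outcomes)
    e_ratio = sum(p * r / (1 + r) for r, p in outcomes)
    return e_r, e_ratio

# two-point return: gain b = 1/2 w.p. p = 3/5, loss a = 1/2 w.p. 2/5
e_r, e_ratio = edge_conditions([(F(1, 2), F(3, 5)), (F(-1, 2), F(2, 5))])
interior = e_r > 0 and e_ratio < 0   # both conditions hold: f* lies in (0, 1)

# a model with a = 1/10, b = 1/2, p = 1/2 (considered below) violates the
# second condition, so the unconstrained optimum is leveraged: f* > 1
e_r2, e_ratio2 = edge_conditions([(F(1, 2), F(1, 2)), (F(-1, 10), F(1, 2))])
```

In the first model both conditions hold, while in the second $\mathbb{E}[r/(1+r)]>0$, so the global maximizer of $g_r$ lies outside $[0,1]$.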
This condition does not appear in the simple Bernoulli model, but is necessary in general to ensure that the edge is not too big and leveraged gambling ($f^*>1$) does not lead to an optimal strategy. As an example, consider the {\em general Bernoulli model} with \begin{equation} \label{GenBern} \mathbb{P}(r=-a)=1-p,\ \ \mathbb{P}(r=b)=p,\ \ 0<a\leq 1,\ b>0,\ 0<p<1. \end{equation} The function $$ g_r(f)=p\ln (1+fb) + (1-p)\ln(1-fa) $$ is defined on $(-1/b, 1/a)$, achieves the global maximum at $$ f^*=\frac{p}{a}-\frac{1-p}{b}, $$ and $$ g_r(f^*)=p\ln p +(1-p)\ln (1-p) +\ln\frac{a+b}{a} + (p-1)\ln \frac{b}{a}; $$ we know that $g_r(f^*)\geq 0$, even though it is not at all obvious from the above expression. The NS-NL condition $f^*\in [0,1]$ becomes $$ \frac{a}{a+b}\leq p \leq \min\left( \frac{ab}{a+b}\left(1+\frac{1}{b}\right), 1\right), $$ and it is now easy to come up with a model in which $f^*>1$: for example, take $$ a=0.1, \ b=0.5,\ p=0.5 $$ so that $f^*=4$. Given that a gain and a loss in each bet are equally likely, but the amount of a gain is five times as much as that of a loss, a large value of $f^*$ is not surprising, although the economic and financial implications of this type of leveraged betting are potentially very interesting and should be the subject of a separate investigation. Because of the logarithmic function in the definition of $g_r$, the distribution of $r$ can have a rather heavy right tail and still satisfy \eqref{r3}. For example, consider \begin{equation} \label{Ch0} r=\eta^2-1, \end{equation} where $\eta$ has standard Cauchy distribution with probability density function $$ h_{\eta}(x)=\frac{1}{\pi(1+x^2)},\ \ \ -\infty< x<+\infty. $$ Then $$ g_r(f)=\frac{2}{\pi}\int_0^{+\infty} \frac{\ln\big((1-f)+fx^2\big)}{1+x^2}\, dx= 2\ln\big(\sqrt{f}+\sqrt{1-f}\big), $$ where the second equality follows from \cite[(4.295.7)]{Gradshtein-Ryzhyk}. As a result, we get a closed-form answer $$ f^*=\frac{1}{2},\ g_r(f^*)=\ln 2.
$$ A general way to ensure \eqref{r1}--\eqref{r3} is to consider \begin{equation} \label{expo-model} r=e^{\xi}-1 \end{equation} for some random variable $\xi$ such that $\mathbb{P}(\xi>0)>0,\ \mathbb{P}(\xi<0)>0$, and $\mathbb{E}|\xi|<\infty$; note that \eqref{Ch0} is a particular case, with $\xi=\ln\eta^2$. Then \eqref{global1-l} and \eqref{global1-r} become, respectively, \begin{align} \label{global1-l-e} &\mathbb{E}e^{\xi}>1 \ \ \ \ \ {\rm and}\\ \label{global1-r-e} &\mathbb{E}e^{-\xi}>1. \end{align} For example, if $\xi$ is normal with mean $\mu\in \mathbb{R}$ and variance $\sigma^2>0$, then $$ \mathbb{E}e^{\xi}=e^{\mu+(\sigma^2/2)}, \ \ \mathbb{E}e^{-\xi}=e^{-\mu+(\sigma^2/2)}, $$ and \eqref{global1-l-e}, \eqref{global1-r-e} are equivalent to \begin{equation} \label{log-nrmal} -\frac{\sigma^2}{2}<\mu<\frac{\sigma^2}{2}, \end{equation} which, when interpreted in terms of returns, can indeed be considered a ``reasonable'' edge condition: large values of $|\mu|$ create a bias in one direction. Note that the corresponding $f^*$ is not available in closed form, but can be evaluated numerically. \section{Continuous Compounding and a Case for L\'{e}vy Processes} \label{sec:CC} Continuous time compounding includes discrete compounding as a particular case and makes it possible to consider more general types of return processes. The objective of this section is to show that continuous time compounding that leads to a non-trivial and non-random long-term growth rate of the resulting wealth process effectively forces the return process to have independent increments. The two main examples of such processes are sums of iid random variables from the previous section and L\'{e}vy processes.
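Before developing the continuous-time theory, we note that the numerical evaluation of $f^*$ mentioned at the end of the previous section is straightforward. A crude Monte Carlo sketch for the log-normal model \eqref{expo-model} follows; the sample size, seed, and the parameters $\mu=0.1$, $\sigma=1$ are illustrative assumptions:

```python
import math
import random

rng = random.Random(0)
mu, sigma = 0.1, 1.0   # edge condition (log-nrmal): -sigma^2/2 < mu < sigma^2/2
# sample r = e^xi - 1 > -1, with xi normal, so every f in [0, 1] is admissible
rs = [math.exp(rng.gauss(mu, sigma)) - 1.0 for _ in range(20000)]

def mc_growth_rate(f):
    # Monte Carlo estimate of g_r(f) = E ln(1 + f r)
    return sum(math.log(1.0 + f * r) for r in rs) / len(rs)

grid = [i / 100 for i in range(101)]
f_star = max(grid, key=mc_growth_rate)
g_star = mc_growth_rate(f_star)
```

With the edge condition satisfied, the estimated maximizer lands strictly inside $(0,1)$ and the estimated optimal growth rate is positive, in line with Proposition \ref{prop:glob1}.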
Writing \eqref{wealth2} as \begin{equation} \label{wealth1-dt} W_{n+1}^f-W_n^f=\big(fW_n^f\big)\,r_{n+1}, \end{equation} we see that a natural continuous time version of \eqref{wealth1-dt} is \begin{equation} \label{wealth1-ct} dW_{t}^f=fW_t^fdR_{t} \end{equation} for a suitable process $R=R_t,\ t\geq 0$ on a stochastic basis \begin{equation*} \mathbb{F}=\Big(\Omega, \mathcal{F},\ \{\mathcal{F}_t\}_{t\geq 0}, \mathbb{P}\Big) \end{equation*} satisfying the usual conditions \cite[Definition I.1.1]{Protter}. We interpret \eqref{wealth1-ct} as an integral equation \begin{equation} \label{wealth1-ct-i} W_{t}^f=1+f\int_0^tW_s^fdR_{s}; \end{equation} recall that $W_0^f=1$ is the standing assumption. Then the Bichteler-Dellacherie theorem \cite[Theorem III.22]{Protter} implies that the process $R$ must be a semi-martingale (a sum of a martingale and a process of bounded variation) with trajectories that, at every point, are continuous from the right and have limits from the left. Furthermore, if we allow the process $R$ to have discontinuities, then, by \cite[Theorem II.36]{Protter}, we need to modify \eqref{wealth1-ct-i} further: \begin{equation*} W_{t}^f=1+f\int_0^tW_{s-}^fdR_{s}, \end{equation*} where $$ W_{s-}=\lim_{\varepsilon\to 0, \varepsilon>0} W_{s-\varepsilon}, $$ and, assuming $R_0=0$, the process $W^f$ becomes the Dol\'{e}ans-Dade exponential \begin{equation} \label{DDE-1} W^f_t=\exp\left(fR_t-\frac{f^2\langle R^c\rangle_t}{2} \right)\prod_{0<s\leq t} (1+f\triangle R_s)\,e^{-f\triangle R_s}. \end{equation} In \eqref{DDE-1}, $\langle R^c\rangle$ is the quadratic variation process of the continuous martingale component of $R$ and $\triangle R_s=R_s-R_{s-}$. A natural analog of \eqref{r1} is \begin{equation} \label{r1-ct} \triangle R_s\geq -1, \end{equation} and then \eqref{DDE-1} becomes \begin{equation} \label{DDE-2} W^f_t=\exp\left(fR_t-\frac{f^2\langle R^c\rangle_t}{2}+ \sum_{0<s\leq t}\Big( \ln (1+f\triangle R_s) -f\triangle R_s\Big)\right).
\end{equation} To proceed, let us assume that the trajectories of $R$ are continuous: $\triangle R_s=0$ for all $s$, so that \begin{equation*} W^f_t=\exp\left(fR_t-\frac{f^2\langle R^c\rangle_t}{2}\right). \end{equation*} If, similar to \eqref{rate-dt}, we define the long-term growth rate $g_R(f)$ by \begin{equation} \label{gr-ct0} g_R(f)=\lim_{t\to \infty} \frac{\ln W^f_t}{t}, \end{equation} then we need the limits \begin{equation} \label{trip0-cont} \mu:=\lim_{t\to \infty} \frac{R_t}{t},\ \ \ \sigma^2:=\lim_{t\to \infty} \frac{\langle R^c\rangle_t}{t} \end{equation} to exist with probability one and with non-random numbers $\mu,\sigma^2.$ Being a semi-martingale without jumps, the process $R$ has a representation \begin{equation} \label{sm-cont} R_t=A_t+R^c_t, \end{equation} where $A$ is a process of bounded variation; cf. \cite[Theorem II.2.34]{LimitTheoremsforStochasticProcesses}. Then \eqref{trip0-cont} implies that, for large $t$, \begin{equation} \label{linear0} A_t\approx \mu t,\ \ \langle R^c\rangle_t\approx \sigma^2 t, \end{equation} that is, a natural way to achieve \eqref{trip0-cont} is to consider the process $R$ of the form \begin{equation*} R_t=\mu t+\sigma\,B_t, \end{equation*} where $\sigma>0$ and $B=B_t$ is a standard Brownian motion. Then \begin{equation} \label{DDE-3-cont} W^f_t=\exp\left(f\mu t+f \sigma\,B_t-\frac{f^2\sigma^2 t}{2}\right), \end{equation} and we come to the following conclusion: {\em continuous time compounding with a continuous return process effectively implies that the wealth process is a geometric Brownian motion}. The long-term growth rate \eqref{gr-ct0} becomes \begin{equation} \label{gr-ctL-cont} g_R(f)=f\mu-\frac{f^2\sigma^2}{2}, \end{equation} so that \begin{equation*} f^*=\frac{\mu}{\sigma^2},\ \ g_R(f^*)=\frac{\mu^2}{2\sigma^2}, \end{equation*} and the NS-NL condition is \begin{equation*} 0<\mu<\sigma^2.
\end{equation*} Even though these results are not especially sophisticated, we will see in the next section (Theorem \ref{prop0}) that the process \eqref{DDE-3-cont} naturally appears as the continuous-time, or high frequency, limit of discrete-time compounding for a large class of returns. On the other hand, if we assume that the process $R$ is purely discontinuous, with jumps $\triangle R_{k}=r_k$ at times $s=k\in \{1,2,3,\ldots\}$, then $$ R_t=0,\ t\in (0,1),\ R_t=\sum_{k=1}^{\lfloor t \rfloor} r_k,\ t\geq 1, $$ and \eqref{DDE-1} becomes \eqref{wealth2}. Accordingly, we will now investigate the general case \eqref{DDE-1} when the process $R$ has both a continuous component and jumps. To this end, we use \cite[Proposition II.1.16]{LimitTheoremsforStochasticProcesses} and introduce the jump measure $\mu^R=\mu^R(dx,ds)$ of the process $R$ by putting a point mass at every point in space-time where the process $R$ has a jump: \begin{equation} \label{JumpMeasure} \mu^R(dx,ds) = \sum_{s > 0}\delta_{(\triangle R_s,s)}(dx, ds); \end{equation} note that both the time $s$ and size $\triangle R_s$ of the jump can be random. In particular, with \eqref{r1-ct} in mind, \begin{equation} \label{sum1} \sum_{0<s\leq t} \Big(\ln (1+f\triangle R_s) -f\triangle R_s\Big)= \int_0^t-\!\!\!\!\!\!\int_{-1}^{+\infty} \big(\ln(1+fx)-fx\big)\mu^R(dx,ds); \end{equation} here and below, \begin{equation} \label{zint} -\!\!\!\!\!\!\int_a^b, \ \ \ a<0<b, \end{equation} stands for $$ \int\limits_{(a,0)\bigcup(0,b)}. 
$$ By \cite[Proposition II.2.9 and Theorem II.2.34]{LimitTheoremsforStochasticProcesses}, and keeping in mind \eqref{r1-ct}, we get the following generalization of \eqref{sm-cont}: \begin{equation} \label{sm-general} \begin{split} R_t&=A_t+R^c_t+ \int_0^t-\!\!\!\!\!\!\int_{1}^{+\infty} x\mu^R(dx,ds)\\ &+ \int_0^t-\!\!\!\!\!\!\int_{-1}^1 x\big(\mu^R(dx,ds)-\nu(dx,s)da_s\big), \end{split} \end{equation} where $a=a_t$ is a predictable non-decreasing process and $\nu=\nu(dx,t)$ is a non-negative random time-dependent measure on $(-1,0)\bigcup(0,+\infty)$ with the property \begin{equation*} -\!\!\!\!\!\!\int_{-1}^{+\infty}\min(1,x^2)\nu(dx,t)\leq 1 \end{equation*} for all $t\geq 0$ and $\omega\in \Omega$. Moreover, \begin{align} \label{trip-A} A_t&=\int_0^t \mu_s\, da_s \ {\rm \ for\ some\ predictable \ process} \ \mu=\mu_t,\\ \label{trip-B} \langle R^c\rangle_t&=\int_0^t \sigma^2_s\,da_s\ {\rm \ for\ some\ predictable \ process} \ \sigma=\sigma_t, \end{align} and the process $$ t \mapsto \int_0^t-\!\!\!\!\!\!\int_{-1}^{+\infty} h(x)\big(\mu^R(dx,ds)-\nu(dx,s)da_s\big) $$ is a martingale for every bounded measurable function $h=h(x)$ such that $\limsup_{x\to 0}|h(x)|/|x|<\infty$. To proceed, we assume that $$ \mathbb{E} \int_0^t-\!\!\!\!\!\!\int_{-1}^{+\infty} |\ln(1+x)|\, \nu(dx,s)\,da_s<\infty,\ t>0, $$ which is a generalization of condition \eqref{r3}. Then, by \cite[Theorem II.1.8]{LimitTheoremsforStochasticProcesses}, the process $$ t\mapsto \int_0^t-\!\!\!\!\!\!\int_{-1}^{+\infty} \ln(1+x)\big(\mu^R(dx,ds)-\nu(dx,s)da_s\big) $$ is a martingale.
Next, we combine \eqref{DDE-2}, \eqref{sum1}, and \eqref{sm-general}, and re-arrange the terms so that the logarithm of the wealth process becomes \begin{equation} \label{logW-g} \begin{split} \ln W_t^f&=fA_t+fR^c_t-\frac{f^2}{2}\langle R^c\rangle_t -f\int_0^t-\!\!\!\!\!\!\int_{-1}^1x\nu(dx,s)da_s\\ &+ \int_0^t-\!\!\!\!\!\!\int_{-1}^{+\infty} \ln(1+fx)\nu(dx,s)da_s+M^f_t, \end{split} \end{equation} where $$ M^f_t=\int_0^t-\!\!\!\!\!\!\int_{-1}^{+\infty} \ln(1+fx)\big(\mu^R(dx,ds)-\nu(dx,s)da_s\big). $$ In general, for equality \eqref{logW-g} to hold, we need to make an additional assumption \begin{equation} \label{nu-int1} -\!\!\!\!\!\!\int_{-1}^{1}x\nu(dx,t)<\infty \end{equation} for all $t\geq 0$ and $\omega\in \Omega$. In the particular case \eqref{wealth2}, \begin{itemize} \item $a_s=\lfloor s \rfloor$ is the step function, with unit jumps at positive integers, so that $da_s$ is the collection of point masses at positive integers; \item $\nu(dx,s)=F^R(dx),$ where $F^R$ is the distribution of the random variable $r$, so that \eqref{nu-int1} holds automatically; \item $\mu_t=\int_{-1}^1 x\,F^R(dx)$, $R^c_t=0$, $\sigma_t=0$; \item $M^f_t=\sum_{0<k\leq t}\big(\ln(1+fr_k)-g_r(f)\big)$; \item condition \eqref{LogVar-Levy} is \eqref{r3-3}. \end{itemize} A natural way to reconcile \eqref{linear0} with \eqref{trip-A}, \eqref{trip-B} is to take $\mu_t=\mu$, $\sigma_t=\sigma$ for some non-random numbers $\mu\in \mathbb{R}$, $\sigma\geq 0$, and a non-random non-decreasing function $a=a_t$ with the property \begin{equation} \label{limit-a} \lim_{t\to+\infty}\frac{a_t}{t}=1.
\end{equation} Then, to have a non-random almost-sure limit $$ \lim_{t\to \infty} \frac{1}{t}\int_0^t\int_{-1}^{+\infty} \varphi(x) \nu(dx,s)da_s $$ for a sufficiently rich class of non-random test functions $\varphi$, we have to assume that there exists a non-random non-negative measure $F^R=F^R(dx)$ on $(-1,0)\bigcup(0,+\infty)$ such that \begin{equation} \label{intF1-1} -\!\!\!\!\!\!\int_{-1}^{+\infty}\min( |x|,1)\, F^R(dx)<\infty \end{equation} and, for large $s$, $$ \nu(dx,s)\approx F^R(dx). $$ As a result, if \begin{equation} \label{LevyMeasure0} \nu(dx,s)= F^R(dx) \end{equation} for all $s$, then \begin{equation} \label{triple-nr1} A_t=\mu a_t,\ \langle R^c\rangle_t=\sigma^2a_t,\ \nu(dx,t)=F^R(dx)a_t \end{equation} are all non-random, and \cite[Theorem II.4.15]{LimitTheoremsforStochasticProcesses} implies that $R$ is a {\em process with independent increments}. Furthermore, \eqref{triple-nr1} and the strong law of large numbers for martingales imply \begin{equation*} \mathbb{P}\left(\lim_{t\to \infty} \frac{R^c_t}{t}=0\right)=1; \end{equation*} cf. \cite[Corollary 1 to Theorem II.6.10]{LSh-M}. Similarly, if \begin{equation} \label{LogVar-Levy} -\!\!\!\!\!\!\int_{-1}^{+\infty} \ln^2(1+x)\, F^R(dx)<\infty, \end{equation} then $M^f$ is a square-integrable martingale and \begin{equation*} \mathbb{P}\left( \lim_{t\to \infty} \frac{M^f_t}{t}=0\right)=1. \end{equation*} Writing $$ \bar{\mu}=\mu-\int_{-1}^{1}x F^R(dx) $$ the long-term growth rate \eqref{gr-ct0} becomes \begin{equation} \label{gr-ctL} g_R(f)=f\bar{\mu}-\frac{f^2\sigma^2}{2}+ -\!\!\!\!\!\!\int_{-1}^{\infty} \ln(1+fx) F^R(dx), \end{equation} which does include both \eqref{F(f)} and \eqref{gr-ctL-cont} as particular cases. By direct computation, the function $f\mapsto g_R(f)$ is concave and the domain of the function contains $[0,1]$. Similar to Proposition \ref{prop:glob1}, we have the following result. 
\begin{thm} \label{th-LevyRate} Consider continuous-time compounding with return process \begin{equation} \label{Return-Levy0} \begin{split} R_t&=A_t+R_t^c+ \int_0^t-\!\!\!\!\!\!\int_{1}^{+\infty} x\mu^R(dx,ds)\\ &+ \int_0^t-\!\!\!\!\!\!\int_{-1}^1 x\big(\mu^R(dx,ds)-\nu(dx,s)da_s\big), \end{split} \end{equation} where the random measure $\mu^R$ is from \eqref{JumpMeasure}, and assume that equalities \eqref{limit-a} and \eqref{triple-nr1} hold. If $F^R$ satisfies \eqref{intF1-1}, \eqref{LogVar-Levy}, and \begin{align*} &\lim_{f\to 0+} -\!\!\!\!\!\!\int_{-1}^{\infty} \frac{x}{1+fx}\, F^R(dx)>-\bar{\mu},\\ &\lim_{f\to 1-} -\!\!\!\!\!\!\int_{-1}^{\infty} \frac{x}{1+fx}\, F^R(dx)<\sigma^2-\bar{\mu}, \end{align*} then the long-term growth rate is given by \eqref{gr-ctL}, and there exists a unique $f^*\in (0,1)$ such that $$ g_R(f)< g_R(f^*) $$ for all $f$ in the domain of $g_R$. \end{thm} By the Lebesgue decomposition theorem, the measure corresponding to the function $a=a_t$ has discrete, absolutely continuous, and singular components. With \eqref{limit-a} in mind, a natural choice of the discrete component is $a_t=\lfloor t \rfloor$, which, as we saw, corresponds to the discrete compounding discussed in the previous section. A natural choice of the absolutely continuous component is \begin{equation*} a_t=t. \end{equation*} Then \begin{equation*} A_t=\mu t,\ R^c_t=\sigma B_t,\ \nu(dx,t)da_t=F^R(dx)\,dt, \end{equation*} where $B$ is a standard Brownian motion. By \cite[Corollary II.4.19]{LimitTheoremsforStochasticProcesses}, we conclude that the process $R$ has independent and stationary increments, that is, {\em $R$ is a L\'{e}vy process}. In this case, equality \eqref{Return-Levy0} is known as the L\'{e}vy-It\^{o} decomposition of the process $R$; cf. \cite[Theorem 19.2]{Sato}. We do not consider the singular case in this paper and leave it for future investigation.
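For a concrete choice of the triple $(\mu,\sigma^2,F^R)$ with a finite jump measure, the rate \eqref{gr-ctL} and its maximizer can be computed directly. The numbers below are entirely illustrative assumptions of our own, picked so that the hypotheses of Theorem \ref{th-LevyRate} hold:

```python
import math

# Illustrative parameters (assumptions, not from the text): drift mu,
# diffusion sigma^2, and a finite Levy measure F^R with an atom at
# x = -0.5 (intensity 0.4, |x| <= 1, hence compensated) and an atom at
# x = 2.0 (intensity 0.05, a "large" jump, not compensated).
mu, sigma2 = 0.1, 0.04
small_jumps = [(-0.5, 0.4)]   # (jump size x, intensity F^R({x})), |x| <= 1
large_jumps = [(2.0, 0.05)]   # |x| > 1
mu_bar = mu - sum(x * lam for x, lam in small_jumps)   # = 0.3

def g_R(f):
    # growth rate (gr-ctL): f*mu_bar - f^2*sigma^2/2 + sum_x ln(1+f*x) F^R({x})
    return (f * mu_bar - 0.5 * f * f * sigma2
            + sum(lam * math.log(1.0 + f * x)
                  for x, lam in small_jumps + large_jumps))

f_star = max((i / 1000 for i in range(1001)), key=g_R)
```

With these parameters the derivative of $g_R$ is positive at $0+$ and negative at $1-$, so the grid search finds an interior maximizer (around $f^*\approx 0.72$) with a strictly positive optimal rate.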
\section{Continuous Limit of Discrete Compounding} \subsection{A (Simple) Random Walk Model} Following the methodology in \cite[Section 7.1]{Thorp06}, we consider compounding a large number $n$ of bets in a time period $[0,T]$. The returns $r_{n,1}, r_{n,2},\ldots $ of the bets are \begin{equation} \label{return-nk} r_{n,k} = \frac{\mu}{n} + \frac{\sigma}{\sqrt{n}}\,\xi_{n,k} \end{equation} for some $\mu>0$, $\sigma>0$ and independent identically distributed random variables $\xi_{n,k},\ k=1,2,\ldots,$ with mean $0$ and variance $1$. The classical simple random walk corresponds to $\mathbb{P}(\xi_{n,k}=\pm 1)=1/2$ and can be considered a {\em high frequency} version of \eqref{return1}. Similar to \eqref{r1}, we need $r_{n,k}\geq -1$, which, in general, can only be achieved with {\em uniform boundedness} of $\xi_{n,k}$: \begin{equation} \label{eq:bnd1} |\xi_{n,k}|\leq C_0 \end{equation} for some constant $C_0>0$, and then, with no loss of generality, we assume that $n$ is large enough so that \begin{equation} \label{small-r} |r_{n,k}|\leq \frac{1}{2}. \end{equation} Similar to \eqref{edge1}, a condition to have an edge is $$ \mathbb{E}[r_{n,k}] = \frac{\mu}{n} > 0, $$ and, similar to \eqref{wealth2}, given $n$ bets per unit time period, with exposure $f \in [0,1]$ in each bet, we get the following formula for the total wealth $W_t^{n,f}$ at time $t\in (0,T]$: \begin{equation} \label{Wntf} W_t^{n,f} = \prod_{k=1}^{\lfloor nt \rfloor}\big(1+fr_{n,k}\big); \end{equation} recall that $\lfloor nt \rfloor$ denotes the largest integer not exceeding $nt$ and that $W_0=1$ is the standing assumption. Let $B=B_t,\ t\geq 0,$ be a standard Brownian motion on a stochastic basis $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\geq 0},\mathbb{P})$ satisfying the usual conditions, and define the process \begin{equation} \label{wealth2-cont} W^f_t= \exp\left(\left(f \mu - \frac{f^2\sigma^2}{2}\right)t + f \sigma B_t\right). \end{equation} Note that \eqref{wealth2-cont} is a particular case of \eqref{DDE-3-cont}.
\begin{thm} \label{prop0} For every $T>0$ and every $f\in [0,1]$, the sequence of processes $ \big(W_t^{n,f},\ n\geq 1,\ t\in [0,T]\big)$ converges in law to the process $W^f=W^f_t,\ t\in [0,T]$. \end{thm} \begin{proof} Writing $$ Y^{n,f}_t= \ln W_t^{n,f}, $$ the objective is to show weak convergence, as $n\to \infty$, of $Y^{n,f}$ to the process $$ Y^f_t=\left( f \mu - \frac{f^2\sigma^2}{2}\right)t + f \sigma B_t,\ \ t\in [0,T]. $$ The proof relies on the method of predictable characteristics for semimartingales from \cite{LimitTheoremsforStochasticProcesses}. More specifically, we make suitable changes in the proof of Corollary VII.3.11. By \eqref{Wntf}, $$ Y^{n,f}_t=\sum_{k=1}^{\lfloor nt \rfloor} \ln(1+fr_{n,k}). $$ Then \eqref{return-nk} and \eqref{eq:bnd1} imply \begin{equation*} \mathbb{E}\bigg( Y^{n,f}_t - \mathbb{E}Y^{n,f}_t\bigg)^4 \leq \frac{C_0^4\sigma^4}{n^2}\big(nT+3nT(nT-1)\big) \leq {3C_0^4\sigma^4T^2}, \end{equation*} from which uniform integrability of the family $\{Y^{n,f}_t,\ n\geq 1, \ t\in [0,T]\}$ follows. Then, by \cite[Theorem VII.3.7]{LimitTheoremsforStochasticProcesses}, it suffices to establish the following: \begin{align} \label{mean0} \lim_{n\to \infty}& \sup_{t \leq T} \left|\lfloor nt \rfloor \mathbb{E}\big[\ln(1+fr_{n,1})\big] - \left(f\mu-\frac{f^2\sigma^2}{2}\right)t \right| =0,\\ \label{var0} \lim_{n\to \infty} & \lfloor nt \rfloor \left(\mathbb{E} \big(\ln(1+fr_{n,1})\big)^2 - \bigg(\mathbb{E}\big[\ln(1+fr_{n,1})\big]\bigg)^2 \right)= f^2 \sigma^2 t,\ t\in [0,T],\\ \label{jump0} \lim_{n\to \infty} &\lfloor nt \rfloor \mathbb{E} \big[\phi\big(\ln(1+fr_{n,1})\big)\big] = 0,\ t\in [0,T]. \end{align} Equality \eqref{jump0} must hold for all functions $\phi=\phi(x), \ x\in \mathbb{R},$ that are continuous and bounded on $\mathbb{R}$ and satisfy $\phi(x)=o(x^2), \ x\to 0$, that is, \begin{equation} \label{phi0} \lim_{x\to 0} \frac{\phi(x)}{x^2}=0.
\end{equation} Equalities \eqref{mean0} and \eqref{var0} follow from $$ r_{n,1}^2=\frac{\sigma^2}{n}\, \xi_{n,1}^2 + \frac{2\mu\sigma\xi_{n,1}}{n^{3/2}}+\frac{\mu^2}{n^2}, $$ together with \eqref{small-r} and the elementary inequality $$ \left| \ln(1+x)-x-\frac{x^2}{2} \right| \leq |x|^3,\ |x|\leq \frac{1}{2}. $$ In particular, \begin{equation*} \mathbb{E} \big[\big(\ln(1+fr_{n,1})\big)^2\big] = \frac{f^2\sigma^2}{n}+o(1/n), \ n\to +\infty. \end{equation*} To establish \eqref{jump0}, note that \eqref{phi0} and \eqref{return-nk} imply $$ \phi\big(\ln (1+fr_{n,1})\big) = o(1/n), \ n\to +\infty. $$ \end{proof} Similar to \eqref{rate-dt}, we define the long-term continuous time growth rate \begin{equation*} g(f)=\lim_{t\to \infty} \frac{1}{t}\ln W^f_t. \end{equation*} Then a simple computation shows that $$ g(f)=f\mu - \frac{f^2\sigma^2}{2}, $$ and so \begin{equation} \label{optf-rv} f^*=\frac{\mu}{\sigma^2} \end{equation} achieves the maximal long-term continuous time growth rate \begin{equation} \label{optf-rv1} g(f^*)=\frac{\mu^2}{2\sigma^2}. \end{equation} The NS-NL condition $f^*\in [0,1]$ holds if $0\leq \mu\leq \sigma^2$, which, to the order $1/n$, is consistent with \eqref{global1-l} and \eqref{global1-r}, when applied to \eqref{return-nk}: $$ \mathbb{E}[r_{n,k}]=\frac{\mu}{n},\ \ \mathbb{E}\left[\frac{r_{n,k}}{1+r_{n,k}}\right] =\frac{\mu-\sigma^2}{n}+o(n^{-1}). $$ The wealth process \eqref{wealth2-cont} is that of someone who is ``continuously'' placing bets, that is, adjusting the positions instantaneously, and, for large $n$, it is a good approximation of the high frequency betting \eqref{Wntf}.
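As a sanity check on this approximation, one can simulate the high-frequency betting \eqref{Wntf} with $\xi_{n,k}=\pm 1$ and compare the realized growth rate of the log-wealth with $g(f^*)=\mu^2/(2\sigma^2)$. All numerical values below are illustrative assumptions:

```python
import math
import random

def realized_growth_rate(mu, sigma, f, n, T, rng):
    # log-wealth per unit time for n bets per period over horizon T,
    # with returns r_{n,k} = mu/n + (sigma/sqrt(n)) xi_{n,k}, xi = +-1
    logw = 0.0
    for _ in range(int(n * T)):
        xi = 1.0 if rng.random() < 0.5 else -1.0
        logw += math.log(1.0 + f * (mu / n + sigma / math.sqrt(n) * xi))
    return logw / T

mu, sigma = 0.1, 0.5
f = mu / sigma ** 2   # continuous-time Kelly fraction f* = 0.4
rate = realized_growth_rate(mu, sigma, f, n=50, T=2000.0, rng=random.Random(1))
# for a long horizon, rate should be close to g(f*) = mu^2/(2 sigma^2) = 0.02
```

The fluctuation of the realized rate around $g(f^*)$ is of order $f^*\sigma/\sqrt{T}$, so a long horizon $T$ is needed for the comparison to be meaningful.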
In general, when the returns are given by \eqref{return-nk}, a direct optimization of \eqref{Wntf} with respect to $f$ will not lead to a closed-form expression for the corresponding optimal strategy $f_n^*$, but Theorem \ref{prop0} implies that, for sufficiently large $n$, \eqref{optf-rv} is an approximation of $f_n^*$ and \eqref{optf-rv1} is an approximation of the corresponding long-term growth rate. As an illustration, consider the high-frequency version of the simple Bernoulli model \eqref{return1}: \begin{equation} \label{return1-hf} \mathbb{P}\left( r_{n,k}=\frac{\mu}{n}\pm \frac{\sigma}{\sqrt{n}}\right)=\frac{1}{2}, \end{equation} which, for fixed $n$, is a particular case of the general Bernoulli model \eqref{GenBern} with $p=1/2$, $$ a=\frac{\sigma}{\sqrt{n}}-\frac{\mu}{n},\ b=\frac{\sigma}{\sqrt{n}}+\frac{\mu}{n}. $$ Then, by direct computation, $$ f_n^*=\frac{\mu}{\sigma^2-(\mu^2/n)}\to \frac{\mu}{\sigma^2},\ n\to \infty, $$ and $$ \lim_{n\to \infty} g_{r_n}(f_n^*)=\frac{\mu^2}{2\sigma^2}. $$ Even though the analysis of the proof of Theorem \ref{prop0} shows that the convergence is uniform in $f$ on compact subsets of $(0,1)$, the proof that $\lim_{n\to \infty} f^*_n=f^*$ would require a version of Theorem \ref{prop0} with $T=+\infty$, which, for now, is not available. With natural modifications, Theorem \ref{prop0} extends to the setting \eqref{expo-model}. \begin{thm} \label{prop0-1} Assume that \begin{equation*} r_{n,k}+1=\exp\left(\frac{b}{n}+\frac{\sigma}{\sqrt{n}}\,\xi_{n,k}\right), \end{equation*} where $b \in \mathbb{R}$, $\sigma>0$, and, for each $n\geq 1$, $k\leq n$, the random variables $\xi_{n,k}$ are independent and identically distributed, with zero mean, unit variance, and, for every $a>0$, \begin{equation} \label{moment1} \lim_{n\to\infty}n\, \mathbb{E}\big[|\xi_{n,1}|^2I(|\xi_{n,1}|>a\sqrt{n})\big]=0. \end{equation} Then the conclusion of Theorem \ref{prop0} holds with $$ \mu=b+\frac{\sigma^2}{2}.
$$ \end{thm} \begin{proof} Even though a formal Taylor expansion suggests $$ r_{n,k}=\frac{\mu}{n} + \frac{\sigma}{\sqrt{n}}\,\xi_{n,k}+o(1/n), $$ we cannot apply Theorem \ref{prop0} directly because the random variables $\xi_{n,k}$ are not necessarily uniformly bounded. Still, condition \eqref{moment1} makes it possible to verify conditions \eqref{mean0}--\eqref{jump0}. \end{proof} Condition \eqref{moment1} is clearly satisfied when $\xi_{n,k},\ k=1,2,\ldots,$ are iid standard normal, which corresponds to \begin{equation} \label{discr-ret-gauss} r_{n,k}=\frac{P_{k/n} - P_{(k-1)/n}}{P_{(k-1)/n}} \end{equation} and \begin{equation} \label{GBM-returns} P_t=e^{bt+\sigma B_t}. \end{equation} Thus, while the exponential model \eqref{expo-model} with log-normal returns is not solvable in closed form, the high-frequency version leads to the (approximately) optimal strategy \begin{equation} \label{st-line} f^*=\frac{b}{\sigma^2}+\frac{1}{2}, \end{equation} and, under \eqref{log-nrmal}, the NS-NL condition holds: $f^*\in (0,1)$. Numerical experiments with $\sigma=1$ and $n=10$ show that the values of the corresponding optimal $f^*_{10}$ are very close to those given by \eqref{st-line} for all $b\in (-1/2,1/2)$. Informally, both Theorems \ref{prop0} and \ref{prop0-1} can be considered as particular cases of the delta method for the Donsker theorem with drift: if the sequence of processes $$ t\mapsto \sum_{k=1}^{\lfloor nt \rfloor} \xi_{n,k} $$ converges, as $n\to \infty$, to the process $t\mapsto bt+\sigma B_t$ and $\varphi=\varphi(x)$ is a suitable function with $\varphi(0)=0$, then one would expect the sequence of processes $$ t\mapsto \sum_{k=1}^{\lfloor nt \rfloor} \varphi\big(\xi_{n,k}\big) $$ to converge to the process $$ t\mapsto \left(\varphi'(0)b+\frac{\varphi''(0)\sigma^2}{2}\right)t+|\varphi'(0)|\sigma B_t.
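The numerical experiment just mentioned is easy to reproduce in outline. The sketch below (our own code; the function names and the quadrature scheme are our assumptions, not from the paper) approximates $\mathbb{E}\ln(1+fr_{n,1})$ for log-normal returns by quadrature against the standard normal density and then maximizes over a grid of exposures:

```python
import math

def expected_log_growth(f, b, sigma, n, xi_max=8.0, m=1001):
    """Trapezoidal approximation of E[ln(1 + f*r)], where
    r = exp(b/n + sigma*xi/sqrt(n)) - 1 and xi is standard normal."""
    h = 2 * xi_max / (m - 1)
    total = 0.0
    for i in range(m):
        xi = -xi_max + i * h
        w = math.exp(-xi * xi / 2) / math.sqrt(2 * math.pi)  # N(0,1) density
        r = math.exp(b / n + sigma * xi / math.sqrt(n)) - 1
        term = w * math.log(1 + f * r)
        total += term if 0 < i < m - 1 else 0.5 * term       # trapezoid ends
    return h * total

def optimal_f(b, sigma, n):
    """Grid search for the exposure maximizing the expected log-growth."""
    return max((i / 200 for i in range(1, 200)),
               key=lambda f: expected_log_growth(f, b, sigma, n))

b, sigma, n = 0.2, 1.0, 10
f_grid = optimal_f(b, sigma, n)     # numerical optimum for n = 10
f_line = b / sigma**2 + 0.5         # the straight-line approximation above
```

For $b=0.2$ and $\sigma=1$ the grid optimum is indeed close to $b/\sigma^2+1/2=0.7$, in line with the experiments reported above.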
$$ \subsection{Beyond the Log-Normal Limit} With the results of Section \ref{sec:CC} in mind, we consider the following generalization of \eqref{discr-ret-gauss}, \eqref{GBM-returns}: $$ r_{n,k} = \frac{P_{k/n} - P_{(k-1)/n}}{P_{(k-1)/n}}, \ k=1,2,\ldots, $$ where the process $P=P_t,\ t\geq 0$, has the form $P_t=e^{R_t}$, and $R=R_t$ is a L\'evy process. In other words, \begin{equation} \label{dctr-gen} r_{n,k}={e}^{R_{k/n}-R_{(k-1)/n}} - 1. \end{equation} As in \eqref{Return-Levy0}, the process $R=R_t$ can be decomposed into drift, diffusion/small-jump, and large-jump components according to the L\'{e}vy-It\^{o} decomposition \cite[Theorem 19.2]{Sato}: \begin{equation} \label{Levy-main} R_t={\mu}t+\sigma\,B_t+ \int_0^t-\!\!\!\!\!\!\int_{-1}^1 x\big(\mu^R(dx,ds)-F^R(dx)ds\big)+ \int_0^t\int_{|x|>1} x\mu^R(dx,ds); \end{equation} we continue to use the notation $-\!\!\!\!\!\int$ first introduced in \eqref{zint}. Now that the process $R_t$ is exponentiated, \begin{itemize} \item there is no need to assume that $\triangle R_t\geq -1$; \item the analog of \eqref{LogVar-Levy} becomes $\mathbb{E}|R_1|<\infty$. \end{itemize} Equality \eqref{Levy-main} has a natural interpretation in terms of financial risks \cite{Thorp03}: the drift represents the edge (``guaranteed'' return), the diffusion and small jumps represent small fluctuations of returns, and the large-jump component represents (sudden) large changes in returns. Similar to \eqref{Wntf}, the corresponding wealth process is \begin{equation} \label{wp-100} W_{t}^{n,f} = \prod_{k = 1}^{\lfloor nt \rfloor} \big(1 + f r_{n,k}\big). \end{equation} We have the following generalization of Theorem \ref{prop0-1}. \begin{thm} Consider the family of processes $W^{n,f}=W_t^{n,f}$, $t\in [0,T]$, $n\geq 1$, $f\in [0,1],$ defined by \eqref{wp-100}.
If $r_{n,k}$ is given by \eqref{dctr-gen}, with $P_t=e^{R_t}$, and $R=R_t$ is a L\'{e}vy process with representation \eqref{Levy-main} and $\mathbb{E}|R_1|< \infty$, then, for every $f\in [0,1]$ and $T>0$, $$ \lim_{n\to \infty}W^{n,f} \overset{{\mathcal{L}}}{=} W^f $$ in $\mathbb{D}((0,T))$, where \begin{equation} \label{WP-generalLP} \begin{split} W^f_t &=\exp\left(f R_t + \frac{ f(1-f)\sigma^2}{2}\, t\right.\\ &+ \left. \int_0^t -\!\!\!\!\!\!\int_{\mathbb{R}} \Big[\ln\big(1+f(e^x - 1)\big) - f x\Big] \mu^R(dx, ds)\right). \end{split} \end{equation} \end{thm} \begin{proof} By \eqref{dctr-gen} and \eqref{wp-100}, \begin{equation*} \ln W_t^{n,f} = \sum_{k=1}^{\lfloor nt \rfloor}\ln\bigg( 1 + f \big( {e}^{R_{k/n}-R_{(k-1)/n}} - 1\big)\bigg). \end{equation*} \underline{Step 1:} For $s \in \big(\frac{k-1}{n}, \frac{k}{n}\big]$, let \begin{equation} \label{rnks} r_s^{n,k} = {e}^{R_s - R_{(k-1)/n}} - 1, \end{equation} and apply It\^o's formula \cite[Theorem II.32]{Protter} to the process $$ s\mapsto \ln\big(1+f r_s^{n,k}\big),\ \ s \in \Bigg(\frac{k-1}{n}, \frac{k}{n}\Bigg]. $$ The result is \begin{equation*} \begin{split} \ln\big(1+f r_s^{n,k}\big) &= \int_{\frac{k-1}{n}}^s \frac{f (1 + r_{u-}^{n,k})}{1+f r_{u-}^{n,k}}\, dR_u + \frac{\sigma^2}{2}\int_{\frac{k-1}{n}}^s \frac{f(1-f)(1+r_{u-}^{n,k})}{\big(1+fr_{u-}^{n,k}\big)^2}\, du \\ & + \int_{\frac{k-1}{n}}^s -\!\!\!\!\!\!\int_{\mathbb{R}} \bigg[\ln\big(1-f+f{e}^{x} (r_{u-}^{n,k}+ 1)\big) \\ & - \ln(1+fr_{u-}^{n,k}) - x \frac{f(1+r_{u-}^{n,k})}{1+f r_{u-}^{n,k}}\bigg]\mu^R(dx, du).
\end{split} \end{equation*} \underline{Step 2:} Putting $s = \frac{k}{n}$ in the above equality and summing over $k$, we derive the following expression for $ \ln W_t^{n,f}$: \begin{align} \notag \ln W_t^{n,f} &= \sum_{k=1}^{\lfloor nt \rfloor} \bigg(\int_{\frac{k-1}{n}}^{\frac{k}{n}} h^{(1)}_{n,k}(s) \, dR_s + \int_{\frac{k-1}{n}}^{\frac{k}{n}} h^{(2)}_{n,k}(s) \, ds + \int_{\frac{k-1}{n}}^{\frac{k}{n}} -\!\!\!\!\!\!\int_{\mathbb{R}} h^{(3)}_{n,k}(s,x) \mu^R(dx, ds)\bigg)\\ \label{main-integrals} &= \int_0^t H^{(1)}_{n,t}(s) \, dR_s + \int_0^t H^{(2)}_{n,t}(s) \, ds + \int_0^t-\!\!\!\!\!\!\int_{\mathbb{R}} H^{(3)}_{n,t}(s,x) \mu^R(dx, ds)\,, \end{align} where \begin{equation*} \begin{split} \displaystyle h^{(1)}_{n,k}(s) & = \frac{f(1+r_{s-}^{n,k})}{1+fr_{s-}^{n,k}}\,,\ \ h^{(2)}_{n,k}(s) = \frac{\sigma^2 f(1-f)}{2}\frac{1+r_{s-}^{n,k}}{(1+fr_{s-}^{n,k})^2}\,, \\ h^{(3)}_{n,k}(s,x)&= \ln\big(1-f+fe^x(r_{s-}^{n,k}+1) \big) - \ln(1+fr_{s-}^{n,k}) - fx\frac{1+r_{s-}^{n,k}}{1+fr_{s-}^{n,k}}\,;\\ H^{(i)}_{n,t}(s) &= \sum_{k=1}^{\lfloor nt \rfloor} h^{(i)}_{n,k}(s) \mathbf{1}_{(\frac{k-1}{n}, \frac{k}{n}]}(s), \ i = 1, 2;\ \ H^{(3)}_{n,t}(s,x) = \sum_{k=1}^{\lfloor nt \rfloor} h^{(3)}_{n,k}(s,x)\mathbf{1}_{(\frac{k-1}{n}, \frac{k}{n}]}(s). \end{split} \end{equation*} \underline{Step 3:} Because $$ \lim_{n\to \infty,\, k/n\to s} R_{(k-1)/n}=R_{s-}, $$ equality \eqref{rnks} implies $$ \lim_{n\to +\infty,\, k/n\to s} r^{n,k}_{s-}= 0 $$ for all $s$. Consequently, we have the following convergence in probability: \begin{align*} \lim_{n\to +\infty} H^{(1)}_{n,t}(s)=f,\ & \lim_{n\to +\infty} H^{(2)}_{n,t}(s)=\frac{\sigma^2 f(1-f)}{2},\\ &\lim_{n\to +\infty} H^{(3)}_{n,t}(s,x)=\ln\big(1+f(e^x-1)\big)-fx. \end{align*} To pass to the corresponding limits in \eqref{main-integrals}, we need suitable bounds on the functions $H^{(i)}$, $i=1,2,3$.
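Before turning to the bounds, the pointwise limits just stated can be spot-checked numerically; the snippet below (an illustration of ours, with hypothetical helper names) evaluates $h^{(1)}$, $h^{(2)}$, $h^{(3)}$ at small values of $r$:

```python
import math

def h1(f, r):
    # h^{(1)}: f*(1+r)/(1+f*r)
    return f * (1 + r) / (1 + f * r)

def h2(f, r, sigma):
    # h^{(2)}: (sigma^2 f(1-f)/2) * (1+r)/(1+f*r)^2
    return sigma**2 * f * (1 - f) / 2 * (1 + r) / (1 + f * r)**2

def h3(f, r, x):
    # h^{(3)}: ln(1-f+f e^x (r+1)) - ln(1+f r) - f x (1+r)/(1+f r)
    return (math.log(1 - f + f * math.exp(x) * (r + 1))
            - math.log(1 + f * r)
            - f * x * (1 + r) / (1 + f * r))

f, sigma, x = 0.4, 0.3, -0.7
lim1 = f
lim2 = sigma**2 * f * (1 - f) / 2
lim3 = math.log(1 + f * (math.exp(x) - 1)) - f * x
```

As $r\to 0$, the three functions approach $f$, $\sigma^2 f(1-f)/2$, and $\ln(1+f(e^x-1))-fx$, matching the limits displayed above.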
Using the inequalities $$ 0<\frac{1+y}{1+ay}\leq \frac{1}{a},\ \ 0<\frac{1+y}{(1+ay)^2} \leq \frac{1}{4a(1-a)}, \ \ \ y>-1,\ a\in (0,1), $$ we conclude that $$ 0<h^{(1)}_{n,k}(s)\leq 1,\ 0<h^{(2)}_{n,k}(s)\leq \sigma^2, $$ and therefore \begin{equation} \label{ubound1-2} 0<H^{(1)}_{n,t}(s)\leq 1,\ 0<H^{(2)}_{n,t}(s)\leq \sigma^2. \end{equation} Similarly, for $f\in (0,1)$ and $y > -1$, \begin{equation} \label{H3bnd00} \left|\ln \frac{1-f+fe^x(y+1)}{1+fy}-fx\frac{1+y}{1+fy}\right|\leq 2\big(|x|\wedge |x|^2\big), \end{equation} so that $$ |h^{(3)}_{n,k}(s,x)|\leq 2\big(|x|\wedge |x|^2\big) $$ and \begin{equation} \label{ubound3} \big|H^{(3)}_{n,t}(s,x)\big| \leq 2\big(|x|\wedge |x|^2\big). \end{equation} To verify \eqref{H3bnd00}, fix $f\in (0,1)$ and $y>-1$, and define the function $$ z(x) = \ln \frac{1-f+fe^x(y+1)}{1+fy},\ x\in \mathbb{R}. $$ By direct computation, \begin{equation*} \begin{split} z(0) &= 0, \\ z'(x) &= \frac{fe^x(y+1)}{1-f + fe^x(y+1)}=1-\frac{1-f}{1-f + fe^x(y+1)},\\ z'(0) &= \frac{f(y+1)}{1+fy}, \end{split} \end{equation*} so that, using Taylor's formula, \begin{equation} \label{H3bnd} \ln \frac{1-f+fe^x(y+1)}{1+fy}-fx\frac{1+y}{1+fy} =z(x)-z(0)-xz'(0)=\int_0^x(x-u)z''(u)du. \end{equation} It remains to notice that $$ 0\leq z'(x)\leq 1, \ \ 0\leq z''(x)\leq 1, $$ and then \eqref{H3bnd00} follows from \eqref{H3bnd}. With \eqref{ubound1-2} and \eqref{ubound3} in mind, the dominated convergence theorem \cite[Theorem IV.32]{Protter} makes it possible to pass to the limit in probability in \eqref{main-integrals}; the convergence in the space $\mathbb{D}$ then follows from the general results of \cite[Section IX.5.12]{LimitTheoremsforStochasticProcesses}. \end{proof} The following is a representation of the long-term growth rate of the limiting wealth process $W^f$. \begin{thm} \label{th:gr-rate-LP-gen} Let $R=R_t$ be a L\'{e}vy process with representation \eqref{Levy-main}.
If $\mathbb{E}|R_1| < \infty$, then the process $W^f=W^f_t$ defined in \eqref{WP-generalLP} satisfies \begin{equation} \label{GR-gen-LP} \begin{split} \lim_{t\to +\infty} \frac{\ln W_t^f}{t}&=f \bigg(\mu + \int_{|x|>1} x F^R(dx)\bigg) + \frac{ f(1-f) \sigma^2}{2}\\ &+ \int_{\mathbb{R}}\big[\ln\big(1 + f ({e}^x - 1) \big) - f x \big] F^R(dx). \end{split} \end{equation} \end{thm} \begin{proof} By \eqref{WP-generalLP}, $$ \frac{\ln W_t^f}{t}= f \frac{R_t}{t} + \frac{ f(1-f)\sigma^2}{2}+ \frac{1}{t}\int_0^t \int_{\mathbb{R}} \Big[\ln\big(1+f(e^x - 1)\big) - f x\Big] \mu^R(dx, ds). $$ It remains to apply the law of large numbers for L\'evy processes \cite[Theorem 36.5]{Sato}. \end{proof} If, in addition, we assume that $$ -\!\!\!\!\!\!\int_{-1}^1 |x|F^R(dx)<\infty, $$ that is, the small-jump component of $R$ has bounded variation, then, after a change of variables and re-arrangement of terms, \eqref{GR-gen-LP} becomes \eqref{gr-ctL}. On the other hand, equality \eqref{gr-ctL} is derived for a wider class of return processes that includes L\'{e}vy processes as a particular case. Similar to Proposition \ref{prop:glob1}, we also have the following result. \begin{thm} \label{th-LevyRate1} In the setting of Theorem \ref{th:gr-rate-LP-gen}, denote the right-hand side of \eqref{GR-gen-LP} by $g_R(f)$ and assume that \begin{align*} &\lim_{f\to 0+} \int_{\mathbb{R}} \left(\frac{e^x - 1}{1+f(e^x - 1)} - x\right)\, F^R(dx)>-\bigg(\mu + \frac{\sigma^2}{2} + \int_{|x|>1} x F^R(dx)\bigg),\\ &\lim_{f\to 1-} \int_{\mathbb{R}} \left(\frac{e^x - 1}{1+f(e^x - 1)} - x\right)\, F^R(dx)< -\bigg(\mu + \int_{|x|>1} x F^R(dx)\bigg). \end{align*} Then there exists a unique $f^*\in (0,1)$ such that $$ g_R(f)< g_R(f^*) $$ for all $f\neq f^*$ in the domain of $g_R$. \end{thm} \section{Continuous Limit of Random Discrete Compounding} The objective of this section is to analyze high frequency limits for betting {\em in business time}.
In other words, the number of bets is not known a priori, so that a natural model of the corresponding wealth process is \begin{equation} \label{weath-mgen} W_{t}^{n,f}=\prod_{k=1}^{\lfloor \Lambda_{n,t}\rfloor} (1+fr_{n,k}) \end{equation} where, for each $n$, the process $t\mapsto \Lambda_{n,t}$ is a subordinator, that is, a non-decreasing L\'{e}vy process, independent of all $r_{n,k}$. To study \eqref{weath-mgen}, we will follow the methodology in \cite{KZZ}, where convergence of processes is derived after {\em assuming} a suitable convergence of the random variables. The main result in this connection is as follows. \begin{thm} \label{th:Levy-gen0} Consider the following objects: \begin{itemize} \item random variables $X_{n,k},\ n,k\geq 1$, such that $\{X_{n,k},\ k\geq 1\}$ are iid for each $n$, with mean zero and, for some $\beta\in (0,1]$, $m_n:=\Big(\mathbb{E}|X_{n,1}|^{\beta}\Big)^{1/\beta}<\infty$; \item random processes $\Lambda_n=\Lambda_{n,t}$, $n\geq 1,\ t\geq 0,$ such that, for each $n$, $\Lambda_n$ is a subordinator independent of $\{X_{n,k},\ k\geq 1\}$ with the properties $\Lambda_{n,0}=0$, and, for some numbers $0<\delta,\delta_1\leq 1$ and $C_n>0$, $\Big(\mathbb{E}\Lambda_{n,t}^{\delta}\Big)^{1/\delta} \leq C_n t^{\delta_1/\delta}$. \end{itemize} Assume that there exist infinitely divisible random variables $\bar{Y}$ and $\bar{U}$ such that $$ \lim_{n\to \infty} \sum_{k=1}^n X_{n,k} \overset{d}{=} \bar{Y},\ \lim_{n\to \infty} \frac{\Lambda_{n,1}}{n} \overset{d}{=} \bar{U}. $$ If \begin{equation} \label{dens} \sup_{n} \Big(C_n m_n^{\beta}\Big)<\infty, \end{equation} then, as $n\to \infty$, the sequence of processes $$ t\mapsto \sum_{k=1}^{\lfloor \Lambda_{n,t}\rfloor} X_{n,k},\ t\in [0,T], $$ converges, in the Skorokhod topology, to the process $Z=Z_t$ such that $Z_t=Y_{U_t}$, where $Y$ and $U$ are independent L\'{e}vy processes satisfying $Y_1\overset{d}{=}\bar{Y}$ and $U_1\overset{d}{=}\bar{U}$.
\end{thm} The proof is a word-for-word repetition of the arguments leading to \cite[Theorem 1]{KZZ}: the result of \cite{GnF}, together with the assumptions of the theorem, implies $$ \lim_{n\to \infty} \sum_{k=1}^{\lfloor \Lambda_{n,1}\rfloor} X_{n,k}\overset{d}{=} Z_1, $$ and therefore the convergence of finite-dimensional distributions for the corresponding processes; together with condition \eqref{dens}, this implies the convergence in the Skorokhod space. Because we deal exclusively with L\'{e}vy processes, it is possible to avoid the heavy machinery from \cite{LimitTheoremsforStochasticProcesses}. We now consider the wealth process \eqref{weath-mgen} and apply Theorem \ref{th:Levy-gen0} with $$ X_{n,k}= \ln (1+fr_{n,k})-\mathbb{E}\ln (1+fr_{n,k}). $$ On the one hand, convergence to infinitely divisible distributions other than normal is a very diverse area, with a variety of conditions and conclusions; cf. \cite[Chapter XVII, Section 5]{FellerII} or a summary in \cite[Section 16.2]{Klenke}. On the other hand, the optimal strategy \eqref{optf-rv} seems to persist. For example, assume that the returns $r_{n,k}$ are as in \eqref{return-nk}, and let $\Lambda_{n,t}=S_{n^{\alpha}t}$, where $\alpha\in (0,1]$ and $S=S_t$ is the L\'{e}vy process such that $S_1$ has the $\alpha$-stable distribution with both scale and skewness parameters equal to $1$. Recall that an $\alpha$-stable L\'{e}vy process $L^{\alpha}=L^{\alpha}_t$ satisfies the following equality in distribution (as processes): \begin{equation} \label{scale-L} L^{\alpha}_{\gamma t}\overset{{\mathcal{L}}}{=} \gamma^{1/\alpha}L^{\alpha}_t, \ \gamma>0. \end{equation} Then $$ \Lambda_{n,t}\overset{{\mathcal{L}}}{=} nS_{t} $$ and, in the notation of Theorem \ref{th:Levy-gen0}, $\bar{Y}$ is normal with mean zero and variance $f^2\sigma^2$.
Keeping in mind that $$ \mathbb{E}\ln (1+fr_{n,k}) = \mathbb{E}\ln (1+fr_{n,1})= \Big(f\mu-\frac{f^2\sigma^2}{2}\Big)n^{-1}+o(n^{-1}), $$ we repeat the arguments from \cite[Example 1]{KZZ} to conclude that $$ \lim_{n\to \infty} \ln W^{n,f}_t \overset{{\mathcal{L}}}{=} \Big(f\mu-\frac{f^2\sigma^2}{2}\Big)S_t + Z_t, $$ where $Z_1$ has a symmetric $2\alpha$-stable distribution. By \eqref{scale-L}, $$ S_t\overset{d}{=} t^{1/\alpha}S_1,\ \lim_{t\to +\infty} t^{-1/\alpha}Z_t \overset{d}{=} \lim_{t\to +\infty} t^{-1/(2\alpha)}Z_1 \overset{d}{=} 0, $$ and the ``natural'' long term growth rate becomes $$ \lim_{t\to \infty}t^{-1/\alpha}\Big(\lim_{n\to \infty} \ln W^{n,f}_t\Big) \overset{d}{=} \Big(f\mu-\frac{f^2\sigma^2}{2}\Big)S_1, $$ which is random, but, for each realization of $S$, is still maximized by $f^*$ from \eqref{optf-rv}. Therefore, if the time over which we compound our wealth is random, then the growth rate is also random, as we do not know when we will stop compounding, yet it is still maximized by a deterministic fraction. Note that, for the purpose of this computation, the (stochastic) dependence between the processes $S$ and $Z$ is not important. \section{Conclusions and Further Directions} The NS-NL condition $f^*\in [0,1]$ can fail in many situations. Even in the simple Bernoulli model, if $p<1/2$, then the short position $f^*=2p-1$ achieves positive long-time wealth growth: $$ g_r(f^*)=p\ln \frac{p}{1-p}+\ln(2-2p)=\ln 2 +p\ln p+(1-p)\ln(1-p)>0. $$ Note that $-p\ln p-(1-p)\ln(1-p)$ is the Shannon entropy of the Bernoulli distribution, and the largest value of the entropy is $\ln 2$, corresponding to $p=1/2$. When the edge is too big (cf. \eqref{GenBern}), then $f^*>1$, that is, leveraged gambling leads to bigger long-time wealth growth than any NS-NL strategy.
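These closed-form claims for the sub-fair coin are easy to verify numerically; the sketch below (our own code, names are illustrative) checks that $f^*=2p-1$ maximizes $g_r$ over a grid of exposures and that $g_r(f^*)=\ln 2-H(p)>0$, where $H(p)$ is the Shannon entropy:

```python
import math

def g(f, p):
    """Long-term growth rate p*ln(1+f) + (1-p)*ln(1-f) of the simple
    Bernoulli model with exposure f; f < 0 is a short position."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.3                      # p < 1/2: the edge is on the short side
f_star = 2 * p - 1           # = -0.4, the optimal short position
entropy = -p * math.log(p) - (1 - p) * math.log(1 - p)
```

Here $g(f^*, p)$ agrees with $\ln 2 - H(p) \approx 0.082 > 0$, and no other exposure in $(-1,1)$ does better.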
The economic and financial implications of $f^*\notin [0,1]$ are beyond the scope of our investigation and must be studied in a broader context of risk tolerance: even when $f^*\in (0,1)$, a certain fraction of $f^*$ can be a smarter strategy, cf. \cite[Section 7.3]{Thorp06}. A related observation, to be further studied in the future, is that high-frequency betting can lead to a more aggressive strategy than the ``low frequency'' counterpart. For example, comparing \eqref{return1} and \eqref{return1-hf}, we see that $\mu=2p-1$ and $\sigma^2=4p(1-p) < 1$ when $p\not=1/2$. As a result, by \eqref{optf-rv}, the optimal strategy for \eqref{return1-hf} with large $n$ is $f^*\approx (2p-1)/(4p(1-p))> 2p-1$; recall that $f^*=2p-1$ is the optimal strategy for the simple Bernoulli model \eqref{return1}. On the other hand, numerical simulations suggest that, in the log-normal model \eqref{discr-ret-gauss}, \eqref{GBM-returns}, high-frequency compounding does not always lead to larger $f^*$. Other problems warranting further investigation include \begin{enumerate} \item A dynamic strategy $f=f(t)$ with a predictable process $f$; \item A portfolio of bets, with a vector of strategies $\mathbf{f}=(f_1,\ldots, f_N)$. \end{enumerate} \bibliographystyle{amsplain}
https://arxiv.org/abs/2209.11307
A combinatorial bound on the number of distinct eigenvalues of a graph
The smallest possible number of distinct eigenvalues of a graph $G$, denoted by $q(G)$, has a combinatorial bound in terms of unique shortest paths in the graph. In particular, $q(G)$ is bounded below by $k$, where $k$ is the number of vertices of a unique shortest path joining any pair of vertices in $G$. Thus, if $n$ is the number of vertices of $G$, then $n-q(G)$ is bounded above by the size of the complement (with respect to the vertex set of $G$) of the vertex set of the longest unique shortest path joining any pair of vertices of $G$. The purpose of this paper is to commence the study of the minor-monotone floor of $n-k$, which is the minimum of $n-k$ among all graphs of which $G$ is a minor. Accordingly, we prove some results about this minor-monotone floor.
\section{Motivation and background} The Inverse Eigenvalue Problem for a Graph (IEPG) starts with a pattern of zero and nonzero constraints for a real symmetric matrix, described by a graph $G$ on $n$ vertices, and asks what spectra are possible for the set of $n \times n$ matrices $\mathcal{S}(G)$ that exhibit this pattern. Two narrower questions about the possible spectra of matrices in $\mathcal{S}(G)$ yield important matrix-theoretic graph parameters that motivate what is studied here. Each of these parameters has a natural combinatorial bound. Firstly, the highest possible nullity, denoted by $\mathrm{M}(G)$, is also equal to the maximum multiplicity that can be obtained by any eigenvalue, and this is bounded by a process called zero forcing, giving $\mathrm{M}(G) \le \mathrm{Z}(G)$, where $\mathrm{Z}(G)$ is the zero forcing number of $G$. (See, for example, \cite{HLS2022} for the definition of the zero forcing number.) Secondly, the lowest possible number of distinct eigenvalues is denoted by $q(G)$, and this is bounded below by the number of vertices in a unique shortest path, which we denote by $\usp{G}$, thus giving $q(G) \ge \usp{G}$ \cite[Theorem 3.2]{MinimumDistinct}. (The parameter $q(G)$ has been studied, for example, in \cite{F10,MinimumDistinct,Analysis,Nordhaus}.) As stated, these combinatorial bounds are in opposite directions, but in fact the matrix parameter that will be of interest is $n - q(G)$, which can be thought of as the maximum number of ordered eigenvalue coincidences, and is therefore called the \emph{maximum spectral equality}. Expressed this way, the combinatorial bound is $n - q(G) \le n - \usp{G}$, involving a combinatorial quantity $n - \usp{G}$ that will be called the \emph{spectator number} of $G$ and will be denoted by $\uspc{G}$.
The motivation for this project concerns the behavior of the four graph parameters $\mathrm{M}(G)$, $\mathrm{Z}(G)$, $n - q(G)$, and $n - \usp{G}$ with respect to subgraphs, and more generally with respect to graph minors. The set $\mathcal{S}(G)$ is a manifold, an open subset of a vector space, and the process of removing an edge from the graph, forcing a zero entry in the matrix, collapses a coordinate direction of the enveloping vector space, reducing the dimension of $\mathcal{S}(G)$ by one. One might expect, generically, that removing degrees of freedom in the matrix would not allow greater nullity and would also not allow more eigenvalue coincidences. In other words, one might expect both $\mathrm{M}(G)$ and $n - q(G)$ to be weakly decreasing as edges are deleted, and might expect a similar subgraph monotonicity for the related combinatorial bounds $\mathrm{Z}(G)$ and $n - \usp{G}$. In most cases this expected behavior is observed to hold, but there are counterexamples---an extreme example being that the graph with the fewest edges of all on $n$ vertices, the edgeless graph $\overline{K_n} = nK_1$, is the only graph that achieves a value as high as $n$ for either $\mathrm{M}(G)$ or $\mathrm{Z}(G)$ and is the only graph that achieves a value as high as $n - 1$ for either $n - q(G)$ or $n - \usp{G}$. (This extreme example is also maximally disconnected, but there also exist, for any of these four parameters, connected counterexample graphs to which an edge can be added while decreasing the value of the parameter. See, e.g., Example 2.9 in \cite{Parameters} for an example for $\mathrm{M}(G)$ and $\mathrm{Z}(G)$, and see, e.g., Figures 6.1 and 6.2 of \cite{MinimumDistinct} for an example for $n-q(G)$ and $n-\usp{G}$.) 
For three of these four parameters, namely $\mathrm{M}(G)$, $\mathrm{Z}(G)$, and $n - q(G)$, there is an established variant of the graph parameter that not only exhibits the generically expected behavior of weakly decreasing as edges are deleted, but also is monotone with respect to graph minors more generally. The purpose of this paper is to complete the set of four by introducing, and commencing the study of, a canonically chosen modification of $n - \usp{G}$ that achieves monotonicity with respect to graph minors. Thus, we study the minor-monotone floor of $n - \usp{G}=\uspc{G}$, that is, the minimum of $\uspc{H}$ among all graphs $H$ containing $G$ as a minor. We denote this minor-monotone floor by $\uspcf{G}$ and call it the \emph{spectator floor} of $G$. One of our main results is that, in order to find a graph $G'$ containing $G$ as a minor and such that $\uspcf{G}=\uspc{G'}$, one need only add edges to $G$. In \cref{Minor Operations and the Spectator Floor}, we will prove the following. \begin{theorem} \label{lem:no decontract-intro} For every graph $G$, there is a graph $G'$ with the same number of vertices as $G$ such that $G$ is a subgraph of $G'$ and such that $\uspcf{G}=\uspc{G'}$. \end{theorem} Another main result is that the spectator floor is additive over connected components. We prove the following result in \cref{Disconnected Graphs}. \begin{theorem} \label{prop:components-intro} For any graph $G=G_1\sqcup G_2$ with no edges connecting $G_1$ to $G_2$, \[\uspcf{G}=\uspcf{G_1}+\uspcf{G_2}.\] \end{theorem} \Cref{trees} gives some results about the spectator floor of trees. In \cref{calculation of spectator floor}, we present an algorithm to calculate the spectator floor of simple graphs. In \cref{sec:minimal}, we study graphs with small spectator floors. In particular, we determine the minor-minimal graphs (whether or not parallel edges are allowed) that have spectator floor $k$, when $k=1$ and when $k=2$.
In \cref{minor max graphs}, we characterize the minor-maximal graphs with a given spectator floor, subject to a restriction on the number of vertices in the graph and the number of parallel edges between any pair of vertices. Finally, \cref{further questions} presents some questions for further research. Before moving on to \cref{Minor Operations and the Spectator Floor}, we continue this introduction with some additional preliminary information. \subsection{Definitions} We allow a graph $G$ to have multiple edges (unless $G$ is explicitly stated to be a simple graph) but not to have loops at its vertices. Moreover, we assume all graphs are nonempty, that is, every graph has at least one vertex. The multigraph convention for matrix entries is that each edge of $G$ contributes additively a non-zero amount to the matrix entry, which in particular allows the contributions from multiple edges to cancel. The convention is also that diagonal entries are unconstrained. Concretely, then, given a graph whose vertices are the index set $\{1, \dots, n\}$, the space $\mathcal{S}(G)$ of matrices conforming to the pattern of $G$ is the set of all real symmetric $n \times n$ matrices $A = [a_{ij}]$ such that \begin{itemize} \item if $i \ne j$ and there is no edge in $G$ connecting $i$ to $j$, then $a_{ij} = 0$, \item if $i \ne j$ and there is exactly one edge in $G$ connecting $i$ to $j$, then $a_{ij} \ne 0$, \item if $i \ne j$ and there is more than one edge in $G$ connecting $i$ to $j$, then $a_{ij}$ is unconstrained, and \item diagonal entries $a_{ii}$ are unconstrained. \end{itemize} The following definitions lead up to the promised naming and explanation of $n - \usp{G}$, together with its minor-monotone floor. Let $G$ be a graph on $n$ vertices. We start with some standard definitions.
\begin{itemize} \item A \emph{walk} in $G$ from $u$ to $v$ is a sequence of edges $(e_1, \dots, e_k)$ from $G$ and a sequence of vertices $(u=v_1, v_2, \dots, v_{k}, v_{k+1}=v)$ from $G$ such that each edge $e_i$ has endpoints $v_i$ and $v_{i+1}$. The \emph{length} of the walk is the length $k \ge 0$ of the sequence of edges. \item A \emph{path} in $G$ is a walk all of whose vertices are distinct. The length of a path is the number of edges $k$, but the \emph{order} of a path is the number of vertices $k + 1$. \item A \emph{shortest path} in $G$ is a path in $G$ from $u$ to $v$ of order $k + 1$ such that no path in $G$ from $u$ to $v$ has order $k$ or smaller. \item A \emph{unique shortest path} in $G$ is a shortest path $P$ in $G$ from $u$ to $v$ of order $k + 1$ such that every path in $G$ from $u$ to $v$ of order exactly $k + 1$ is identical to $P$. The graph $G$ may be a multigraph, but no edge in a unique shortest path can have other edges parallel to it, because two paths with a different sequence of edges are not considered identical, even if the sequence of vertices is the same. \end{itemize} We now introduce the following definitions. \begin{itemize} \item A \emph{parade} in $G$ is a unique shortest path in $G$ that achieves the largest possible order for a unique shortest path in $G$. \item The \emph{parade number} of a graph $G$, denoted $\usp{G}$, is the number of vertices in some parade in $G$. \item The \emph{spectator number} of a graph $G$, denoted $\uspc{G}$, is the number of vertices outside some parade in $G$, hence $\uspc{G}=n-\usp{G}$. \item The \emph{spectator floor} of a graph $G$, denoted $\uspcf{G}$, is the minor-monotone floor of $\uspc{G}$; in other words, the minimum value of $\uspc{H}$ over the set of all graphs $H$ of which $G$ is a minor. 
\end{itemize} \subsection{Matrix-theoretic graph parameters related to \texorpdfstring{$q(G)$}{q(G)}} The graph parameters $q_M(G)$ and $q_S(G)$, introduced in \cite{SSP2017}, satisfy $q(G) \le q_M(G) \le q_S(G)$. For these variants of $q(G)$, matrices are restricted to those satisfying the Strong Multiplicity Property (SMP) or the Strong Spectral Property (SSP), respectively. A consequence of \cite{SSP2020} is that the maximum SMP spectral equality, $n - q_M(G)$, and the maximum SSP spectral equality, $n - q_S(G)$, are minor-monotone graph parameters. A consequence of \cite{SSP2017} is that $n - q_M(G)$ and $n - q_S(G)$ each take the sum over components of a disconnected graph, in contrast, for example, to minor-monotone matrix nullity graph parameters, which tend to take the maximum over components. Since $q(G) \le q_M(G) \le q_S(G)$, we also have the inequalities \[ n - q_S(G) \le n - q_M(G) \le n - q(G). \] The existence of a minor-monotone lower bound invites inquiry into the minor-monotone floor of the maximum spectral equality, which by general properties of minor-monotone floors and ceilings shares the same lower bound: \[ n - q_M(G) \le \lfloor n - q(G) \rfloor \le n - q(G). \] \medskip \noindent {\bf Combinatorial bounds.} Given a graph $G$ on $n$ vertices, if the vertices $u$ and $v$ are connected by a unique shortest path on $k$ vertices, then it is straightforward to show, for any matrix $A \in \mathcal{S}(G)$, that the powers $I = A^0, A^1, A^2, \dots, A^{k-1}$ form a linearly independent set in the vector space of symmetric $n \times n$ matrices, implying $q(G) > k - 1$: if $A$ had at most $k-1$ distinct eigenvalues, then its minimal polynomial, whose degree equals the number of distinct eigenvalues of the symmetric matrix $A$, would yield a vanishing nontrivial linear combination of these powers. This yields the result that $q(G) \ge k$, where $k$ is the number of vertices in any unique shortest path. This bound suggests the definition of the \emph{spectator number} given above.
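The linear-independence claim can be illustrated concretely. The sketch below (our own code with hypothetical names, using exact rational arithmetic) builds a matrix in $\mathcal{S}(P_4)$ for the path on $4$ vertices, whose endpoints are joined by a unique shortest path on $k=4$ vertices, and verifies that $I, A, A^2, A^3$ are linearly independent, so that $q(P_4)\ge 4$:

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rank(rows):
    """Rank over the rationals via Gaussian elimination."""
    rows = [list(r) for r in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                c = rows[i][col] / rows[r][col]
                rows[i] = [a - c * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

n = 4
# A matrix in S(P_4): arbitrary nonzero weights on the path edges,
# zeros elsewhere off the diagonal (the diagonal is unconstrained).
A = [[Fraction(0)] * n for _ in range(n)]
for i in range(n - 1):
    A[i][i + 1] = A[i + 1][i] = Fraction(i + 1)

P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
powers = []
for _ in range(n):                  # flatten I, A, A^2, A^3 into vectors
    powers.append([P[i][j] for i in range(n) for j in range(n)])
    P = matmul(P, A)
```

The flattened powers have full rank $4$, reflecting the triangular structure of the entries $(A^j)_{1,j+1}$ along the unique shortest path.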
In particular, the spectator number is the minimum cardinality of a vertex set that is complementary, relative to the set of all vertices of $G$, to the set of vertices in a unique shortest path in $G$. The above inequality $q(G) \ge k$, satisfied whenever there exist $k$ vertices forming a unique shortest path, guarantees that the spectator number is an upper bound for the maximum spectral equality $n - q(G)$, just as the combinatorial parameter $\mathrm{Z}(G)$ is an upper bound for the matrix parameter $\mathrm{M}(G)$. Maximum nullity gives a bound on maximum spectral equality, \[ \mathrm{M}(G) - 1 \le n - q(G), \] because any one eigenvalue of multiplicity $k$ induces $k - 1$ eigenvalue equalities on its own. In a parallel way, but for entirely different and combinatorial reasons, the quantity $\mathrm{Z}(G) - 1$ is a lower bound for the spectator number. Unlike zero forcing, whose computation is in general NP-hard \cite{Hardness}, the spectator number can be computed in polynomial time for any graph (see, e.g., \cref{USP-in-poly-time}). \medskip \noindent {\bf Bounds for minor-monotone floors.} The minor-monotone floor of a graph parameter on a graph $G$ is the minimum of the parameter over the infinite collection of graphs $H$ of which $G$ is a minor. When a parameter $\beta(G)$ is minor-monotone, it is known that for any fixed $k$ the class of graphs with $\beta(G) \ge k$ can be recognized in polynomial time, and in fact can be recognized in linear time in the special case that the complete list of minor-minimal graphs for $\beta(G) \ge k$ is known and all of them happen to be planar graphs \cite{B93}. In \cref{sec:minimal}, we show that the minor-minimal graphs with spectator floor $1$ and $2$ are all planar. The parameter $\uspcf{G}$ extends in a natural way to multigraphs, signed graphs up to negation, and signed multigraphs up to negation, all of which are categories in which minor-minimal sets are guaranteed to be finite.
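Returning to the unsigned case, the polynomial-time computation of the parade number mentioned above can be sketched in a few lines (our own code, not from the paper; simple graphs only, where a shortest $u$-$v$ path is unique precisely when the shortest-path count equals one):

```python
from collections import deque

def parade_number(adj):
    """Largest order of a unique shortest path in a simple graph,
    given as adjacency lists {v: [neighbors]}; O(V(V+E)) overall."""
    best = 1
    for s in adj:
        dist, cnt = {s: 0}, {s: 1}   # BFS distance and shortest-path count
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    cnt[w] = cnt[u]
                    q.append(w)
                elif dist[w] == dist[u] + 1:
                    cnt[w] += cnt[u]
        for v in dist:
            if cnt[v] == 1:          # the shortest s-v path is unique
                best = max(best, dist[v] + 1)
    return best

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
```

The spectator number is then the number of vertices minus this value; for example, $C_5$ has parade number $3$ and spectator number $2$.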
The signed variant is in terms of \emph{monotone shortest paths}; a path of length $k$ from vertex $u$ to vertex $v$ in a signed graph $G$ is a monotone shortest path if there is no path from $u$ to $v$ of length shorter than $k$, and if every other path from $u$ to $v$ of length $k$ has the same product of edge signs. \section{Minor Operations and the Spectator Floor} \label{Minor Operations and the Spectator Floor} A \emph{minor} of a graph $H$ is obtained from $H$ by a sequence of the following operations: \begin{enumerate} \item Deletion of an isolated vertex. \item Deletion of an edge, denoted $H\backslash e$. \item Contraction of an edge $e$ with no edges in parallel with it, denoted $H/e$. (If the minor is to be a simple graph, then the edge cannot be in a triangle.) \end{enumerate} A minor obtained by performing exactly one of these operations is called an \emph{elementary minor}. When calculating the spectator floor, we take the minimum of the spectator number over all graphs that can be made by performing the three minor operations above in reverse. The following terminology will be useful to describe the operation that reverses contraction. \begin{definition}\label{def:decontraction} If $G$ and $H$ are graphs and $e\in E(H)$ such that $G=H/e$, then we call the operation used to obtain $H$ from $G$ a \emph{decontraction} of a vertex. \end{definition} Note that there may be many ways to decontract a vertex, so decontraction is not well-defined without additional context. The main result of this section is \cref{lem:no decontract}, which tells us that in order to calculate the spectator floor, it is not necessary to add vertices or to perform decontraction. The only necessary operation is adding edges. The following lemmas, \cref{lem:no-delete-vertex,lem:isolated,lem:no-isolated,lem:still-usp,lem:no-endpoints,lem:e not in pert}, are all in support of the main result.
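For experimenting with these operations on small examples, edge deletion and edge contraction can be implemented directly. The sketch below (our illustration; simple graphs stored as adjacency dicts, with the merged vertex keeping the label of the first endpoint) drops any loops or parallel edges created by contraction, so the result is again a simple graph.

```python
def delete_edge(adj, e):
    """H \\ e: remove the edge e, keeping all vertices."""
    u, v = e
    H = {x: set(nbrs) for x, nbrs in adj.items()}
    H[u].discard(v)
    H[v].discard(u)
    return H

def contract_edge(adj, e):
    """H / e: identify the endpoints of e into one vertex (labeled u),
    discarding the resulting loop and merging any parallel edges."""
    u, v = e
    H = {}
    for x, nbrs in adj.items():
        if x == v:
            continue
        new = {u if w == v else w for w in nbrs}
        new.discard(x)                          # no loops
        H[x] = new
    H[u] |= {w for w in adj[v] if w != u}       # v's neighbors move to u
    return H

# Contracting one edge of the 4-cycle yields the triangle K_3.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(contract_edge(C4, (0, 1)))
```

Vertex deletion is omitted as trivial; note that, as remarked in item 3 above, contracting an edge of a triangle forces the merging of parallel edges.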
The lemma below tells us that when taking a minor, it is not necessary to perform step 1 (deletion of an isolated vertex), and that we may order the steps so that all instances of step 3 (contraction) appear before all instances of step 2 (edge deletion). \begin{lemma} \label{lem:no-delete-vertex} Let $G$ and $H$ be graphs such that $G$ is a minor of $H$. If $H$ has no isolated vertices, then there are sets of edges $C$ and $D$ such that $G\cong H/C\backslash D$. \end{lemma} \begin{proof} We need to show that it is not necessary to delete any isolated vertices in order to obtain $G$ from $H$. Since $H$ has no such vertices, an isolated vertex can only appear after all edges incident with that vertex have been deleted. But, in that case, all but one of the edges incident with the vertex can be deleted and the remaining edge contracted instead. (This procedure works since we are assuming $G$ is nonempty.) \end{proof} The next lemma tells us that the presence of isolated vertices does not affect the spectator floor value. \begin{lemma} \label{lem:isolated} Let $G+v$ be a graph with an isolated vertex $v$, and let $G$ be the graph obtained from $G+v$ by deleting $v$. Then $\uspcf{G+v}=\uspcf{G}$. Moreover, if $G$ is a minor of a graph $H$ and $\uspcf{G}=\uspc{H}$, then there is a graph $H^+$, with exactly one more vertex than $H$, such that $\uspcf{G+v}=\uspc{H^+}$ and $G+v$ is a minor of $H^+$. \end{lemma} \begin{proof} Since $G$ is a minor of $G+v$, we have $\uspcf{G}\leq\uspcf{G+v}$. Let $P$ be a parade in $H$. Let $H^+$ be obtained from $H$ by adding a vertex $v$ of degree $1$ adjacent to one of the endpoints of $P$, and let $P^+$ be the path in $H^+$ obtained from $P$ by adding the vertex $v$. Since $v$ has degree $1$ and is attached to an endpoint of $P$, the path $P^+$ is again a unique shortest path in $H^+$. Since $P$ is a parade in $H$ and $\uspc{H}=\uspcf{G}$, it has $|V(H)|-\uspcf{G}$ vertices. Therefore, $P^+$ has $|V(H^+)|-\uspcf{G}$ vertices. Thus, $\uspc{H^+}\leq\uspcf{G}$. But $G$ is a minor of $H^+$. Therefore, the reverse inequality holds, and we have $\uspc{H^+}=\uspcf{G}$.
Since $G+v$ is a minor of $H^+$, we have $\uspcf{G+v}\leq\uspc{H^+}=\uspcf{G}\leq\uspcf{G+v}$. Therefore, we have equality, and the result holds. \end{proof} The next lemma tells us that if $H$ is a graph that realizes the spectator floor value of a graph $G$ with no isolated vertices, then $H$ also has no isolated vertices. \begin{lemma} \label{lem:no-isolated} Suppose $G$ is a graph without isolated vertices, and suppose $G$ is a minor of $H$. If $\uspcf{G}=\uspc{H}$, then $H$ has no isolated vertices. \end{lemma} \begin{proof} Suppose otherwise for a contradiction. Let $W$ be the set of isolated vertices of $H$. It is not difficult to see that deletion of such a vertex results in a graph with spectator number reduced by $1$. (This is because $G$ is neither empty nor $K_1$. Therefore, $H\neq K_1$.) Therefore, $\uspc{H-W}=\uspc{H}-|W|$. Since no vertex of $G$ is isolated, $G$ is a minor of $H-W$. This contradicts the assumption that $\uspcf{G}=\uspc{H}$, since it gives $\uspcf{G}\leq\uspc{H-W}<\uspc{H}$. \end{proof} The next definition makes precise what it means to contract an edge relative to a path; it will be used repeatedly in the proofs that follow. \begin{definition} \label{def:P/e} Let $P$ be a path and $e$ an edge in a graph $G$. We define $P/e$ to be the subgraph of $G/e$ obtained from $P$ by contracting $e$. More precisely, if $v$ and $w$ are the endpoints of $e$ and $v'$ is the vertex that results from contracting $e$, then: \begin{itemize} \item If $P$ contains $e$, then $P/e$ is the path whose vertex set is $(V(P)-\{v,w\})\cup\{v'\}$ and whose edge set is $E(P)-\{e\}$. \item If $P$ contains exactly one endpoint of $e$ (say $v$), then $P/e$ is the path whose vertex set is $(V(P)-\{v\})\cup\{v'\}$ and whose edge set is $E(P)$. \item If $P$ contains both endpoints of $e$, but not $e$ itself, then $P/e$ is the graph (with exactly one cycle) whose vertex set is $(V(P)-\{v,w\})\cup\{v'\}$ and whose edge set is $E(P)$. \item If $P$ contains neither endpoint of $e$, then $P/e=P$.
\end{itemize} \end{definition} The next lemma tells us that the property of being a unique shortest path is preserved after contraction of an edge in the path. \begin{lemma} \label{lem:still-usp} Let $G=H/e$. If $P$ is a unique shortest path in $H$ and $P$ contains $e$, then $P/e$ is a unique shortest path in $G$. \end{lemma} \begin{proof} Let $P$ be a unique shortest path in $H$ with endpoints $x$ and $y$ and $\operatorname{length}(P)=\ell$. (We reserve $v$ and $w$ for the endpoints of $e$, as in \cref{def:P/e}.) Let $Q_1,Q_2,\ldots,Q_N$ be all of the other paths from $x$ to $y$. Since $P$ is a unique shortest path, each $Q_i$ has $\operatorname{length}(Q_i)\geq \ell+1$. For a path $Q_i$ that includes both endpoints of $e$ but does not include $e$, $Q_i/e$ is not a path in $G$; on the other hand, every path that connects the images of $x$ and $y$ in $G=H/e$ is $P/e$ or $Q_i/e$ for some $i$. Clearly, $\operatorname{length}(P/e)=\ell-1$. For each $Q_i/e$ that is a path, we have either $\operatorname{length}(Q_i/e)=\operatorname{length}(Q_i)\geq\ell+1$ or $\operatorname{length}(Q_i/e)=\operatorname{length}(Q_i)-1\geq \ell$. In either case, $P/e$ is the unique shortest path connecting the images of $x$ and $y$. \end{proof} The next lemma tells us that if contracting an edge in a parade increases the spectator number of the graph, then the resulting contracted parade is no longer a parade after contraction. \begin{lemma} \label{lem:no-endpoints} Let $G=H/e$. If $\uspc{H}<\uspc{G}$ and $P$ is a parade in $H$ and $P$ contains edge $e$, then $P/e$ is not a parade in $G$. \end{lemma} \begin{proof} Suppose otherwise: $\uspc{H}<\uspc{G}$, $P$ is a parade in $H$ containing the edge $e$, and $P/e$ is a parade in $G$. Then \[\usp{G}=|P/e|=|P|-1=\usp{H}-1.\] However, since $\uspc{H}<\uspc{G}$ and $|H|=|G|+1$, we also have \[|H|-\usp{H}<|G|-\usp{G}, \qquad\text{so}\qquad |G|+1-\usp{H}<|G|-\usp{G},\] which gives $1+\usp{G}<\usp{H}$, that is, $\usp{G}<\usp{H}-1$, and hence \[\usp{G}\leq \usp{H}-2.\] This is a contradiction. \end{proof} The next lemma tells us that contracting an edge contained in a parade cannot increase the spectator number of the graph.
\begin{lemma} \label{lem:e not in pert} If $P$ is a parade in $H$ and $P$ contains both of the endpoints of edge $e$, then $\uspc{H}\geq\uspc{H/e}$. \end{lemma} \begin{proof} Note first that $P$ must contain the edge $e$: since the graph is simple, if $P$ did not contain $e$, the subpath of $P$ between the endpoints of $e$ would have length at least $2$, and replacing it by $e$ would produce a shorter path, so $P$ would not be a unique shortest path. Suppose for a contradiction that $\uspc{H}<\uspc{H/e}$, and write $G=H/e$. We know from \cref{lem:no-endpoints} that $P/e$ is not a parade in $G$. But $P/e$ is still a unique shortest path by \cref{lem:still-usp}, so it must not be a longest unique shortest path anymore. However, there must be some parade in $G$; call it $Q/e$, where $Q$ denotes the corresponding uncontracted subgraph of $H$. Suppose $|P|=\ell$ in $H$. Then $|Q|\leq \ell$ because $P$ was a parade in $H$. And since $P/e$ is not long enough to be a parade in $G$, $|Q/e|>\ell-1$. Putting these together, we get \[\ell-1<|Q/e|\leq |Q|\leq \ell.\] This implies that \[|Q/e|= |Q|= \ell.\] (That is, $Q/e=Q$, and $Q$ does not contain $e$.) Hence $\usp{G}=\usp{H}$. However, as in the proof of \cref{lem:no-endpoints}, $\uspc{H}<\uspc{G}$ implies $\usp{G}\leq \usp{H}-2$. This is a contradiction. \end{proof} Finally, we have the main result of this section, which tells us that for any graph $G$, we can always find a supergraph $G'$ with the same number of vertices that realizes the spectator floor value of $G$. This is one of the major results of the paper and supports further results in other sections. We now prove \cref{lem:no decontract-intro}, restated below. \begin{theorem} \label{lem:no decontract} For every graph $G$, there is a graph $G'$ with the same number of vertices as $G$ such that $G$ is a subgraph of $G'$ and such that $\uspcf{G}=\uspc{G'}$. \end{theorem} \begin{proof} By \cref{lem:isolated}, it suffices to consider the case where $G$ has no isolated vertices. Let $H$ be a graph such that $G$ is a minor of $H$ and such that $\uspcf{G}=\uspc{H}$. By \cref{lem:no-isolated}, $H$ has no isolated vertices.
Therefore, by \cref{lem:no-delete-vertex}, there are sets $C,D$ of edges such that $G\cong H/C\backslash D$, without the necessity to delete vertices. Now, suppose that $H$ is such that $|C|$ is minimal among all such graphs. Suppose for a contradiction that $C\neq\emptyset$. Choose an edge $e\in C$, and consider the graph $H/e$. Let $u$ and $v$ be the endpoints of $e$, and let $v'$ be the vertex that results from contracting $e$. By minimality of $C$, we have $\uspc{H}<\uspc{H/e}$. (Otherwise, the set of edges that need to be contracted from $H/e$ to obtain $G$ is smaller than $C$.) Let $P$ be a parade of $H$. By \cref{lem:e not in pert}, $P$ does not contain both endpoints of $e$. Therefore, $P$ is also a path in $H/e$. However, since $H/e$ has one fewer vertex than $H$, the fact that $\uspc{H}<\uspc{H/e}$ implies that $\usp{H/e}\leq\usp{H}-2$. This implies that $P$ is not a parade in $H/e$. Since $P$ is longer than the parades of $H/e$, there must be at least one path $Q\neq P$ in $H/e$ with the same length and endpoints as $P$. Let $\{P_1,\ldots,P_t\}$ be the set of all such paths $Q$. Since $P$ is a parade in $H$, these paths $P_1,\ldots,P_t$ cannot be paths in $H$. Therefore, for each of these paths $P_i$ for $1\leq i\leq t$, there is a path $P_i'$ in $H$, containing $e$, such that $P_i'/e=P_i$. \begin{claim} \label{3vertices} $|V(P)|=\usp{H}\geq3$. \end{claim} \begin{subproof} Contracting an edge cannot result in an empty graph. Therefore, $|V(H/e)|\geq1$, implying that $\usp{H/e}\geq1$. Thus, since $\usp{H}\geq\usp{H/e}+2$, we have $\usp{H}\geq3$. \end{subproof} We will add an edge $f$ to $H/e$ to obtain a graph $F$ such that $\uspc{F}\leq\uspc{H}$, contradicting the minimality of $H$. Let $x=v_0,e_1,v_1,\ldots,e_r,v_r=y$ be the succession of vertices and edges of $P$. We claim that each $P_i$ has only one subpath that diverges from $P$. In other words, we claim the following.
\begin{claim} \label{1divergence} For each integer $i$ with $1\leq i\leq t$, there are nonnegative integers $m(i)< n(i)\leq r$, such that $P_i$ contains the vertices and edges $x=v_0,e_1,v_1,\ldots,e_{m(i)},v_{m(i)},v_{n(i)},e_{n(i)+1},\ldots,e_r,v_r=y$ but does not contain the vertices and edges $e_{m(i)+1},v_{m(i)+1},\ldots,v_{n(i)-1},e_{n(i)}$. \end{claim} \begin{subproof} Suppose for a contradiction that there are nonnegative integers $m<n\leq p<q$ such that $P_i$ contains the vertices $v_m$, $v_n$, $v_p$, and $v_q$ but such that the subpaths of $P$ and $P_i$ from $v_m$ to $v_n$ are internally disjoint and such that the subpaths of $P$ and $P_i$ from $v_p$ to $v_q$ are internally disjoint. Without loss of generality, let $v'$ (obtained by contracting $e$) be on the subpath of $P_i$ from $v_m$ to $v_n$. (This includes the possibility that $v'=v_m$ or $v'=v_n$, in which case $v'$ is in $P$.) Let $Q$ be the graph consisting of the subpath of $P$ from $v_0$ to $v_p$, the subpath of $P_i$ from $v_p$ to $v_q$, and the subpath of $P$ from $v_q$ to $v_r$. Note that $Q$ contains a path in $H$ from $v_0$ to $v_r$ and therefore must have more vertices than $P$. Thus, the subpath of $P_i$ from $v_p$ to $v_q$ is longer than the subpath of $P$ from $v_p$ to $v_q$. Since $P$ and $P_i$ have the same length, this implies that the subpath of $P_i$ from $v_m$ to $v_n$ is shorter than the subpath of $P$ from $v_m$ to $v_n$. Let $R$ be the graph consisting of the subpath of $P$ from $v_0$ to $v_m$, the subpath of $P_i$ from $v_m$ to $v_n$, and the subpath of $P$ from $v_n$ to $v_r$. Thus $R$ contains a path in $H/e$ from $v_0$ to $v_r$ that is shorter than $P$. This implies the existence of a path in $H$ from $v_0$ to $v_r$ that is no longer than $P$, a contradiction. \end{subproof} Recall that, for each $i$ with $1\leq i\leq t$, the path $P_i$ in $H/e$ is $P_i'/e$, where $P_i'$ is a path in $H$ containing $e$, and recall that $v'$ is the resulting vertex when $e$ is contracted.
By \cref{1divergence}, there is exactly one subpath of each $P_i'$ whose intersection with $P$ is the endpoints of the subpath. Let $p(i)$ be the length (number of edges) of the subpath of $P_i$ from $v_{m(i)}$ to $v'$, and let $q(i)$ be the length of the subpath of $P_i$ from $v'$ to $v_{n(i)}$. Let the sequence of vertices of $P_i$ be $x=v_0,v_1,\ldots,v_{m(i)},u_1,\ldots,u_{p(i)-1},u_{p(i)}=v'=w_{q(i)},w_{q(i)-1},\ldots,w_1,v_{n(i)},\ldots,v_r=y$. (See \cref{fig:PandP'}; note that it is possible $v_{m(i)}=v'$ or $v_{n(i)}=v'$, in which case $p(i)=0$ or $q(i)=0$, respectively.) \begin{figure}[!htbp] \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v_0) at (-5,3)[label=left:$x= v_0$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v_1) at (-4,3)[label=above:$v_1$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (vmi) at (-3,3)[label=above:$v_{m(i)}$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (vmi+1) at (-2,3)[label=below:$v_{m(i)+1}$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (vni-1) at (2,3)[label=below:$v_{n(i)-1}$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (vni) at (3,3)[label=above:$v_{n(i)}$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (vr-1) at (4,3)[label=above:$v_{r-1}$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v_r) at (5,3)[label=right:$v_r= y$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (u_1) at (-2,2)[label=left:$u_1$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (up-1) at (-1,1)[label=left:$u_{p(i)-1}$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v') at (0,0)[label=below:$u_{p(i)}= v'= w_{q(i)}$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (wq-1) at (1,1)[label=right:$w_{q(i)-1}$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (w_1) at (2,2)[label=right:$w_1$] {}; \draw(v_0.center) to (v_1.center) to (-3.7,3); \draw[dotted,semithick] (-3.6,3) to (-3.4,3); \draw (-3.3,3) to (vmi) to (vmi+1) 
to (-1,3); \draw[dotted,semithick] (-.25,3) to (.25,3); \draw (1,3) to (vni-1) to (vni) to (3.3,3); \draw[dotted,semithick] (3.6,3) to (3.4,3); \draw (3.7,3) to (vr-1) to (v_r); \draw(vni) to (w_1);\draw (vmi) to (u_1) to (-1.7,1.7); \draw[dotted,semithick] (-1.6,1.6) to (-1.4,1.4); \draw (-1.3,1.3) to (up-1) to (v') to (wq-1) to (1.3,1.3); \draw[dotted,semithick] (1.4,1.4) to (1.6,1.6); \draw (1.7,1.7) to (w_1) to (vni); \end{tikzpicture}\] \caption{The paths $P$ and $P_i$} \label{fig:PandP'} \end{figure} Based on this labeling of the vertices, we see that $|P|=|P_i|=1+m(i)+p(i)-1+q(i)+r-(n(i)-1)=m(i)+p(i)+q(i)+r-n(i)+1$. Recall that $P_i$ and $P_j$ have the same length, for $i,j\in\{1,\ldots,t\}$. Therefore, for all $i,j\in\{1,\ldots,t\}$, we have \[m(i)+p(i)+q(i)-n(i)=m(j)+p(j)+q(j)-n(j).\] \begin{claim} For all $i,j\in\{1,\ldots,t\}$, we have $m(i)+p(i)=m(j)+p(j)$ and $q(i)-n(i)=q(j)-n(j)$. \end{claim} \begin{subproof} Suppose for a contradiction that $m(i)+p(i)<m(j)+p(j)$. Since $m(i)+p(i)+q(i)-n(i)=m(j)+p(j)+q(j)-n(j)$, we have $q(i)-n(i)>q(j)-n(j)$. Consider the subgraph of $H/e$ consisting of the subpath of $P_i$ from $x$ to $v'$ and the subpath of $P_j$ from $v'$ to $y$. This subgraph contains a path from $x$ to $y$ whose number of vertices is at most $m(i)+p(i)+q(j)+r-n(j)+1<m(j)+p(j)+q(j)+r-n(j)+1=|P|$. (This subgraph \emph{is} such a path if the intersection of the paths used to form it consists only of $v'$.) Regardless of whether this path contains $v'$, it implies the existence of a path in $H$ from $x$ to $y$ whose length is no longer than that of $P$, a contradiction. Thus, we have proved the first equality of the claim. The second equality follows from the first equality and the fact that $m(i)+p(i)+q(i)-n(i)=m(j)+p(j)+q(j)-n(j)$. \end{subproof} \begin{claim} \label{mlessn} For all $i,j\in\{1,\ldots,t\}$, we have $m(i)<n(j)$. \end{claim} \begin{subproof} Suppose for a contradiction that $m(i)\geq n(j)$. 
It follows from \cref{1divergence} that the number of vertices on the subpaths of $P$ and $P_i$ from $v_{m(i)}$ to $v_{n(i)}$ must be equal. Similarly, the number of vertices on the subpaths of $P$ and $P_j$ from $v_{m(j)}$ to $v_{n(j)}$ must be equal. The number of vertices of the subpath of $P$ from $v_{m(i)}$ to $v_{n(i)}$ is $n(i)-m(i)+1$, and the number of vertices of the subpath of $P$ from $v_{m(j)}$ to $v_{n(j)}$ is $n(j)-m(j)+1$. The number of vertices of the subpath of $P_i$ from $v_{m(i)}$ to $v_{n(i)}$ is $p(i)+q(i)+1$, and the number of vertices of the subpath of $P_j$ from $v_{m(j)}$ to $v_{n(j)}$ is $p(j)+q(j)+1$. Thus, we have $n(i)-m(i)=p(i)+q(i)$ and $n(j)-m(j)=p(j)+q(j)$. Consider the subgraph of $H/e$ consisting of the subpath of $P_j$ from $x$ to $v'$ and the subpath of $P_i$ from $v'$ to $y$. This subgraph has at most $m(j)+p(j)+q(i)+r-n(i)+1$ vertices. Therefore, it contains a path $\widehat{P}$ from $x$ to $y$ such that $|\widehat{P}|\leq m(j)+p(j)+q(i)+r-n(i)+1$. This path must be at least as long as $P$. Therefore, $r+1\leq m(j)+p(j)+q(i)+r-n(i)+1$, implying that $n(i)-m(j)\leq p(j)+q(i)$. On the other hand, $n(i)-m(j)\geq n(i)-m(i)+n(j)-m(j)=p(i)+q(i)+p(j)+q(j)$. This leads to a contradiction unless $p(i)=q(j)=0$. Thus, $v_{m(i)}=v'=v_{n(j)}$ is a vertex of $P$. This implies $m(i)=n(j)$. Let $u$ be the endpoint of $e$ not on $P$. Consider the subgraph of $H$ consisting of the subpath of $P_j'$ from $x$ to $u$ and the subpath of $P_i'$ from $u$ to $y$. This subgraph contains a path from $x$ to $y$ whose number of vertices is at most $m(j)+p(j)+q(i)+r-n(i)+1=m(i)+p(i)+q(j)+r-n(j)+1=r+1=|P|$, a contradiction. \end{subproof} Let $m=\max\{m(i):1\leq i\leq t\}$ and $n=\min\{n(i):1\leq i\leq t\}$. By \cref{mlessn}, we have $m<n$. Recall from \cref{3vertices} that $|P|\geq3$. Therefore, either $n\geq2$ or $m+2\leq r$. (Otherwise, $m<n<2$, which implies $m=0$. If $m=0$ and $m+2>r$, then $r<2$, implying $|P|=r+1<3$.)
Without loss of generality, we assume $m+2\leq r$, reversing the order of the vertices and edges of $P$ if necessary. Thus, $v_{m+2}$ is a vertex of $P$. Construct the graph $F$ from $H/e$ by adding the edge $f$ joining $v_m$ and $v_{m+2}$. We claim that the resulting path $P_F$ whose sequence of vertices is $v_0,\ldots,v_m,v_{m+2},\ldots,v_r$ is the unique shortest path between $x$ and $y$ in $F$. Suppose otherwise for a contradiction. Then $F$ contains a path $P_F'\neq P_F$ from $x$ to $y$ with $|P_F'|\leq|P_F|=|P|-1$. Suppose $P_F'$ is a path in $H/e$. Then $H$ contains a path $Q$ from $x$ to $y$ with at most $|P|$ vertices such that either $Q=P_F'$ or $Q/e=P_F'$. Since $P$ is the unique shortest path from $x$ to $y$ in $H$, we must have $Q=P$. Thus, $P_F'=Q=P$ since $P$ does not contain $e$. This contradicts the assumption that $|P_F'|\leq|P|-1$. Therefore, $P_F'$ is not a path in $H/e$. Thus, $P_F'$ contains the edge $f$. Let $P_F''$ be the path obtained from $P_F'$ by replacing edge $f$ with the subpath of $P$ from $v_m$ to $v_{m+2}$. Then $|P_F''|=|P_F'|+1\leq|P|$. Since $P_F''$ is a path in $H/e$, this implies that $P_F''=P_i$ for some $i\in\{1,\ldots,t\}$. Since $v_m$ is a vertex of $P_i$, we must have $m(i)=m$. Since $v_{m+2}$ is a vertex of $P_i$, we must have $n(i)\in\{m+1,m+2\}$. But then $P_F'=P_F$, a contradiction. Thus, we have shown that $P_F$ is the unique shortest path between $x$ and $y$ in $F$. Therefore, $\usp{F}\geq|V(P)|-1=\usp{H}-1$. Thus, $\uspc{F}=|V(H/e)|-\usp{F}=|V(H)|-1-\usp{F}\leq|V(H)|-1-\usp{H}+1=\uspc{H}$, contradicting the minimality of $H$. \end{proof} \section{Disconnected Graphs} \label{Disconnected Graphs} In this section, we establish that the spectator floor is additive over disconnected graph components. We proved \cref{prop:components} independently of \cref{lem:no decontract} and have decided to include both proofs in their entirety. However, it is interesting to note that either result could be used to prove the other.
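\cref{lem:no decontract} turns the spectator floor, a priori a minimum over an infinite family of graphs, into a finite computation: it suffices to minimize the spectator number over the supergraphs of $G$ on the same vertex set. The following Python sketch (our illustration; exponential in the number of non-edges, so suitable only for small graphs) does exactly this, with the BFS-based spectator-number routine repeated so that the block is self-contained.

```python
from collections import deque
from itertools import combinations

def spectator_number(adj):
    """n minus the number of vertices in a longest unique shortest path."""
    usp = 1
    for s in adj:
        dist, count = {s: 0}, {s: 1}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], count[w] = dist[u] + 1, count[u]
                    q.append(w)
                elif dist[w] == dist[u] + 1:
                    count[w] += count[u]
        for v, d in dist.items():
            if count[v] == 1:
                usp = max(usp, d + 1)
    return len(adj) - usp

def spectator_floor(adj):
    """Minimize the spectator number over supergraphs on the same
    vertex set; no decontraction is needed, by the theorem above."""
    non_edges = [(u, v) for u, v in combinations(adj, 2) if v not in adj[u]]
    best = spectator_number(adj)
    for r in range(1, len(non_edges) + 1):
        for extra in combinations(non_edges, r):
            H = {x: set(nbrs) for x, nbrs in adj.items()}
            for u, v in extra:
                H[u].add(v)
                H[v].add(u)
            best = min(best, spectator_number(H))
    return best

# For the star K_{1,3}, no supergraph on 4 vertices has a unique shortest
# path on all 4 vertices (the center is adjacent to everything, so the
# diameter stays at most 2), and the floor is 1.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(spectator_floor(star))  # 1
```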
We will now prove \cref{prop:components-intro} restated below. \begin{theorem} \label{prop:components} For any graph $G=G_1\sqcup G_2$ with no edges connecting $G_1$ to $G_2$, \[\uspcf{G}=\uspcf{G_1}+\uspcf{G_2}\] \end{theorem} \begin{proof} Let $G=G_1\sqcup G_2$, where $G$ contains no edges connecting $G_1$ to $G_2$. The statement is trivially true when either $G_1$ or $G_2$ is empty, so assume that neither $G_1$ nor $G_2$ is empty. For $i=1,2$, let $F_i$ be a graph that realizes $\uspcf{G_i}$; that is, $F_i$ is such that $G_i$ is a minor of $F_i$, and $F_i$ has a unique shortest path $P_i$ with $|F_i|-\uspcf{G_i}$ vertices. Let $F$ be $F_1\sqcup F_2$ with an edge $e$ added to make a path $P$ that connects $P_1$ and $P_2$ end-to-end. Then $G$ is a minor of $F$, and $P$ is a unique shortest path in $F$ with \[|F_1|-\uspcf{G_1}+|F_2|-\uspcf{G_2}=|F|-\uspcf{G_1}-\uspcf{G_2}\] vertices. Hence $\uspc{F}\leq \uspcf{G_1}+\uspcf{G_2}$ and so $\uspcf{G}\leq \uspcf{G_1}+\uspcf{G_2}$. Now, contrary to the statement of the result, suppose that $\uspcf{G}< \uspcf{G_1}+\uspcf{G_2}$. Then there is some graph $H$, of which $G$ is a minor, such that $\uspc{H}=\uspcf{G}<\uspcf{G_1}+\uspcf{G_2}$. Since $G$ is a minor of $H$, we can build $H$ from $G$ by a sequence of decontracting vertices, adding edges, and/or adding isolated vertices. We will now construct a partition of $H$ into three sets: a subgraph $H_1$, a subgraph $H_2$, and a set of ``bridge'' edges $B$. Let us start by defining $H_1$ as $G_1$ and $H_2$ as $G_2$. Next, each time a vertex is decontracted while transforming $G$ into $H$, the new vertex and edges produced by the decontraction remain in the same set as the vertex that was decontracted. Whenever a new edge is added, if it connects two vertices in the same $H_i$, that edge will be added to $H_i$. If the new edge connects a vertex from $H_1$ to a vertex from $H_2$, the edge will be added to the bridge edge set $B$.
Whenever a new isolated vertex is added, it will be placed arbitrarily into either $H_1$ or $H_2$. At the end of this process, we have that all the vertices of $H$ are in either $H_1$ or $H_2$, all the edges of $H$ are in $H_1$, $H_2$, or $B$, and no vertex or edge is in more than one of these. Furthermore, $G_1$ is a minor of $H_1$ and $G_2$ is a minor of $H_2$. Let $P$ be a parade in $H$, that is, a unique shortest path containing $\usp{H}$ vertices. $P$ cannot be entirely within either $H_1$ or $H_2$ for the following reasons. Suppose, without loss of generality, that $P$ is a subgraph of $H_1$. Since $G$ is a minor of $H_1\sqcup G_2$, that means $\uspcf{G}\leq \uspcf{H_1\sqcup G_2}$. Since $H_1$ and $G_2$ are disjoint, by the same arguments used for $G,G_1,G_2$ previously, we can conclude that $\uspcf{H_1\sqcup G_2}\leq \uspcf{H_1}+\uspcf{G_2}$. Hence, $\uspcf{G}\leq \uspcf{H_1}+\uspcf{G_2}$. Moreover, since $P$ is a unique shortest path in $H$ contained in $H_1$, it is also a unique shortest path in $H_1$, so $\usp{H_1}\geq\usp{H}$. This implies that \begin{align*} \uspcf{G}\leq \uspc{H_1}+\uspcf{G_2} &=|H_1|-\usp{H_1}+\uspcf{G_2}\\ &\leq|H_1|-\usp{H}+\uspcf{G_2}\\ &=|H_1|+|H_2|-\usp{H}+\uspcf{G_2}-|H_2|\\ &=|H|-\usp{H}+\uspcf{G_2}-|H_2|\\ &=\uspc{H}+\uspcf{G_2}-|H_2| \end{align*} By definition, $\uspcf{G}=\uspc{H}$, so the above inequality simplifies to $0\leq \uspcf{G_2}-|H_2|$. Furthermore, $\uspcf{G_2}\leq \uspc{G_2}=|G_2|-\usp{G_2}$. Hence, $0\leq |G_2|-\usp{G_2}-|H_2|$. However, since $G_2$ is a minor of $H_2$, the quantity $|G_2|-|H_2|\leq 0$. Hence, we have $0\leq -\usp{G_2}$, which implies that $G_2$ is an empty graph, in contradiction to our assumption that neither $G_1$ nor $G_2$ is empty. Returning to our discussion of the graph $H$, we now know that the path $P$ with $\usp{H}$ vertices must contain vertices from both $H_1$ and $H_2$, and so $P$ must contain at least one edge in $B$. We will now show that this leads to a contradiction as well. Suppose that $P$ contains $m$ edges in $B$, where $m\geq 1$. Assume without loss of generality that $P$ has at least one end in $H_1$.
Then we can break $P$ up into disjoint subpaths $Q_1,Q_2,...,Q_j$ in $H_1$ and $R_1,R_2,...,R_k$ in $H_2$, such that $P$ consists of $Q_1$ followed by $R_1$ followed by $Q_2$ followed by $R_2$, etc., with these subpaths linked together by edges in the bridge $B$. (See \cref{disconnected_diagram}.) Note that if $m$ is even, then $j=k+1$ and $m=2k$, whereas if $m$ is odd, then $j=k$ and $m=2k-1$. \begin{figure}[!htbp] \[\begin{tikzpicture}[x=1cm, y=1cm] \fill[gray!20,rounded corners] (-5,2) rectangle (-1,6); \fill[gray!20] (-5,1) rectangle (-1,3); \fill[gray!20,rounded corners] (5,2) rectangle (1,6); \fill[gray!20] (5,1) rectangle (1,3); \shade[ left color = gray!20, right color = white, shading angle = 0 ] (5,0.5) rectangle (1,1); \shade[ left color = gray!20, right color = white, shading angle = 0 ] (-5,0.5) rectangle (-1,1); \draw[rounded corners] (-1,1) -- (-1,6) -- (-5,6) -- (-5,1); \draw[rounded corners] (1,1) -- (1,6) -- (5,6) -- (5,1); \draw[dotted,semithick] (1,1) -- (1,0.5); \draw[dotted,semithick] (5,1) -- (5,0.5); \draw[dotted,semithick] (-1,1) -- (-1,0.5); \draw[dotted,semithick] (-5,1) -- (-5,0.5); \node at (-3.5,6)[label=above:$H_1$] {}; \node at (3.5,6)[label=above:$H_2$] {}; \node at (0,6)[label=above:$B$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a1) at (-3,5.5)[label=below:$a_1$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b1) at (-1.5,5.5)[label=below left:$b_1$]{}; \draw (a1) to (-2.5,5.5); \draw[dotted, semithick] (-2.5,5.5) to (-2,5.5); \draw (-2,5.5) to (b1); \node at (-4,5.5) {$Q_1$}; \node at (-4,4) {$Q_2$}; \node at (-4,2) {$Q_3$}; \node at (4,5) {$R_1$}; \node at (4,3) {$R_2$}; \node at (4,1) {$R_3$}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (c1) at (1.5,5.5)[label=below right:$c_1$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (d1) at (1.5,4.5)[label=below right:$d_1$]{}; \draw (c1) to (2.5,5.5); \draw (2.5,4.5) to (d1); \draw (2.5,5.5) arc (90:45:0.5); \draw (2.5,4.5) arc (-90:-45:0.5); 
\draw[dotted,semithick] (2.5,5.5) arc (90:-90:0.5); \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a2) at (-1.5,4.5)[label=below left:$a_2$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b2) at (-1.5,3.5)[label=below left:$b_2$]{}; \draw (a2) to (-2.5,4.5); \draw (-2.5,3.5) to (b2); \draw (-2.5,4.5) arc (90:135:0.5); \draw (-2.5,3.5) arc (-90:-135:0.5); \draw[dotted,semithick] (-2.5,4.5) arc (90:270:0.5); \draw (b1) to[in=150,out=30] (c1); \draw (a2) to[in=150,out=30] (d1); \node[vertex][fill,inner sep=1pt,minimum size=1pt] (c2) at (1.5,3.5)[label=below right:$c_2$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (d2) at (1.5,2.5)[label=below right:$d_2$]{}; \draw (c2) to (2.5,3.5); \draw (2.5,2.5) to (d2); \draw (2.5,3.5) arc (90:45:0.5); \draw (2.5,2.5) arc (-90:-45:0.5); \draw[dotted,semithick] (2.5,3.5) arc (90:-90:0.5); \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a3) at (-1.5,2.5)[label=below left:$a_3$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b3) at (-1.5,1.5)[label=below left:$b_3$]{}; \draw (a3) to (-2.5,2.5); \draw (-2.5,1.5) to (b3); \draw (-2.5,2.5) arc (90:135:0.5); \draw (-2.5,1.5) arc (-90:-135:0.5); \draw[dotted,semithick] (-2.5,2.5) arc (90:270:0.5); \draw (b2) to[in=150,out=30] (c2); \draw (a3) to[in=150,out=30] (d2); \node[vertex][fill,inner sep=1pt,minimum size=1pt] (c3) at (1.5,1.5)[label=below right:$c_3$]{}; \draw (b3) to[in=150,out=30] (c3); \draw (c3) to (2.5,1.5); \draw (2.5,1.5) arc (90:45:0.5); \draw[dotted,semithick] (2.5,1.5) arc (90:0:0.5); \draw[dashed] (b1) to[out=-45,in=45] (a2); \draw[dashed] (b2) to[out=-45,in=45] (a3); \draw[dashed] (d1) to[out=-135,in=135] (c2); \draw[dashed] (d2) to[out=-135,in=135] (c3); \end{tikzpicture}\] \caption{A generalized diagram of the subgraph of $H$ induced by the path $P$. 
Dashed lines indicate edges that will be added to form the graphs $H_1'$, $H'$, $H_2''$, and $H''$.}\label{disconnected_diagram} \end{figure} Furthermore, let us label the first vertex of each $Q_i$ by the name $a_i$, the last vertex of each $Q_i$ by $b_i$, the first vertex of each $R_i$ by $c_i$, and the last vertex of each $R_i$ by $d_i$. Hence, each vertex $b_i$ is connected to the vertex $c_i$ by an edge in $B$, except possibly for $b_j$, and each vertex $d_i$ is connected to the vertex $a_{i+1}$ by an edge in $B$, except possibly for $d_k$. Let $p$ stand for the number of vertices in $P$, $q$ the total number of vertices in $Q_1,Q_2,\ldots,Q_j$, and $r$ the total number of vertices in $R_1,R_2,\ldots,R_k$. Hence $p=q+r$. By assumption, $\uspc{H}=|H|-p<\uspcf{G_1}+\uspcf{G_2}$. Since $|H|-p=|H_1|-q+|H_2|-r$, it must be the case that either $|H_1|-q<\uspcf{G_1}$ or $|H_2|-r<\uspcf{G_2}$ (since, if both of these were false, it would contradict the original inequality). However, we will now show that neither of these is true. Observe that since $P$ is a unique shortest path in $H$, there cannot be an edge in $H_1$ connecting any $b_i$ to $a_{i+1}$, because that would form a shorter path. Likewise, there cannot be an edge in $H_2$ connecting any $d_i$ to $c_{i+1}$. Let $H_1'$ (respectively, $H'$) be formed by taking $H_1$ (respectively, $H$) and adding an edge from each $b_i$ to $a_{i+1}$, except for $b_j$, the last one. (Note that if $j=1$, then $H_1'=H_1$.) These edges are indicated by dashed lines in \cref{disconnected_diagram}. Since $P$ was a unique shortest path in $H$ and none of these new edges were present previously, we now have a shorter unique shortest path in $H'$ connecting the endpoints of $P$. Let us call the portion of this new unique shortest path that lies in $H_1'$ by the name $P'$. (If $j=1$, we instead let $P'$ be the portion of $P$ in $H_1$.)
Since any subpath of a unique shortest path is also a unique shortest path, $P'$ is a unique shortest path with $q$ vertices contained in $H_1'$, hence $\uspc{H_1'}\leq |H_1'|-q=|H_1|-q$. Furthermore, since $G_1$ is a minor of $H_1'$, we have $\uspcf{G_1}\leq \uspc{H_1'}\leq |H_1|-q$. Now, let $H_2''$ (respectively, $H''$) be formed by taking $H_2$ (respectively, $H$) and adding an edge from each $d_i$ to $c_{i+1}$, except for $d_k$, the last one. (Note that if $k=1$, then $H_2''=H_2$.) These edges are indicated by dashed lines in \cref{disconnected_diagram}. Since $P$ was a unique shortest path in $H$ and none of these new edges were present previously, we now have a shorter unique shortest path in $H''$ connecting the endpoints of $P$. Let us call the portion of this new unique shortest path that lies in $H_2''$ by the name $P''$. (If $k=1$, we instead let $P''$ be the portion of $P$ in $H_2$.) Since any subpath of a unique shortest path is also a unique shortest path, $P''$ is a unique shortest path with $r$ vertices contained in $H_2''$, hence $\uspc{H_2''}\leq |H_2''|-r=|H_2|-r$. Furthermore, since $G_2$ is a minor of $H_2''$, we have $\uspcf{G_2}\leq \uspc{H_2''}\leq |H_2|-r$. Thus $|H_1|-q\geq\uspcf{G_1}$ and $|H_2|-r\geq\uspcf{G_2}$, contradicting the conclusion above that at least one of these inequalities must fail. \end{proof} The following is a direct consequence of the preceding theorem. \begin{cor} \label{cor:components} For any graph $G$ that consists of connected components $G_1, G_2, \ldots, G_n$, \[\uspcf{G}=\uspcf{G_1}+\uspcf{G_2}+\cdots+\uspcf{G_n}\] \end{cor} \section{Trees} \label{trees} This section contains results that apply to trees, with the exception of \cref{diam.bound} and \cref{lem:diameter}, which offer bounds that apply to all graphs, and were inspired by the consideration of trees.
We begin by noting that since $\usp{G}$ is the number of vertices in the longest unique shortest path and $\diam(G)$ is the length of the longest shortest path in $G$, which may or may not be unique, we have the following: \begin{obs}\label{diam.bound} For any graph $G$, $\usp{G} \leq \diam(G) + 1$ and so $\uspc{G} \geq |G| - \diam(G) - 1$. \end{obs} Next, we parlay \cref{diam.bound} into an exact formula for the spectator floor of a tree. \begin{theorem}\label{tree.uspcf} For a tree $T$, $\uspcf{T}=|T|-\diam(T)-1$. \end{theorem} \begin{proof} Let $T$ be a tree. Since all paths in a tree are unique, $\usp{T}=\diam(T)+1$, and so $\uspc{T}=|T|-\diam(T)-1$. Suppose that $\uspcf{T}<|T|-\diam(T)-1$. Then there exists some graph $G$ which has $T$ as a minor, such that $\uspcf{T}=\uspc{G}<|T|-\diam(T)-1$. The bound $\usp{G} \leq \diam(G)+1$ implies that $|G|-\diam(G)-1\leq \uspc{G}$. Then we have \[|G|-\diam(G)-1<|T|-\diam(T)-1\] and rearranging this, we get \[|G|-|T|<\diam(G)-\diam(T).\] The quantity on the left, $|G|-|T|$, is the number of decontractions performed to transform $T$ into $G$. Note that one decontraction can increase the diameter of a graph by at most 1. Furthermore, adding an edge cannot increase the diameter of a graph. Therefore, the quantity $\diam(G)-\diam(T)$ must be less than or equal to the number of decontractions performed to transform $T$ into $G$. This is a contradiction. \end{proof} The next theorem is important in establishing \cref{cor:diam-paths}, which tells us the conditions under which a tree is minor minimal for a given value of the spectator floor. First we need two definitions. A \emph{diametric path} of a graph is a path whose length is the diameter. The endpoints of such a path are called a \emph{diametric pair} of vertices. 
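The formula in \cref{tree.uspcf} is easy to check computationally. Below is a minimal pure-Python sketch (our own illustration, not part of the implementation discussed later in the paper; the function names are ours), which computes the diameter of a tree by breadth-first search and then applies the formula $\uspcf{T}=|T|-\diam(T)-1$.

```python
from collections import deque

def eccentricity(adj, s):
    """Largest BFS distance from vertex s."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def tree_spectator_floor(edges):
    """Spectator floor of a tree T with at least one edge,
    via the formula uspcf(T) = |T| - diam(T) - 1."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    diam = max(eccentricity(adj, s) for s in adj)
    return len(adj) - diam - 1

# The star K_{1,3} has 4 vertices and diameter 2, so its spectator floor is 1;
# a path has spectator floor 0, since it is its own unique shortest path.
```

For example, a tree with one center vertex and three legs of length $2$ (the long $Y$ considered later in the paper) has $7$ vertices and diameter $4$, so this routine returns spectator floor $2$.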
\begin{theorem} For any tree $T$, there exists a non-empty elementary minor of $T$ with the same spectator floor as $T$ if and only if there exists an edge $e$ of $T$ such that contracting $e$ reduces the diameter of $T$. \end{theorem} \begin{proof} The reverse implication is the easy direction: Suppose that $e$ is an edge such that the contracted tree $T^\prime = T/e$ has diameter less than the diameter of $T$. The distance between vertices in a tree is given by the unique path that joins them, and so contracting $e$ in $T$ reduces the distance between any pair of vertices by exactly $1$ in the case that $e$ belongs to the unique path joining the pair, and leaves the distance unchanged otherwise. In particular, the difference between the diameter of $T^\prime$ and the diameter of $T$ can be at most one, and so $\diam(T^\prime) = \diam(T) - 1$. The number of vertices also differs by exactly $1$, and so the elementary minor $T^\prime$ of $T$ satisfies \[ \uspcf{T^\prime}=|T^\prime|-\diam(T^\prime)-1 = |T|-\diam(T)-1 = \uspcf{T}. \] For the forward implication, there are three sorts of elementary minors. Since $T$ is a connected tree, deletion of an isolated vertex can only happen in the case $T=K_1$, leaving the empty graph as a minor. Contraction of an edge $e$ that leaves the spectator floor unchanged must reduce the diameter by $1$ by the same calculation as above. This leaves only the case of edge deletion. Assume by way of contradiction, then, firstly that there does exist a specific edge $e$ whose deletion produces a disjoint union of trees $T_1$ and $T_2$ satisfying \[ \uspcf{T_1 \cup T_2} = \uspcf{T_1} + \uspcf{T_2} = \uspcf{T}, \] and secondly that no edge contraction reduces the diameter of $T$.
Using the diameter formula to substitute for $\uspcf{T_1}$, $\uspcf{T_2}$, and $\uspcf{T}$ produces an equation \[ |T_1| - \diam(T_1) - 1 + |T_2| - \diam(T_2) - 1 = |T| - \diam(T) - 1 \] whose simplification \[ \diam(T) = \diam(T_1) + \diam(T_2) + 1 \] implies the strict inequality \[ \diam(T) > \diam(T_1) \] since $\diam(T_2) \ge 0$. If no edge contraction reduces the diameter of $T$, then in particular the contraction $T^\prime = T/e$ has the same diameter as $T$. Let $p^\prime$ and $q^\prime$ be a diametric pair of vertices in $T^\prime$; then $e$ cannot be part of the unique path that joins their preimages $p$ and $q$ in $T$. It follows that $p$ and $q$ are in the same component $T_1$ or $T_2$ of $T\setminus e$. Without loss of generality, both are in $T_1$, and thus the diameter of $T_1$ is at least the diameter of $T$: \[ \diam(T) \le \diam(T_1). \] This contradicts the previous strict inequality and completes the proof. \end{proof} \begin{cor} \label{cor:diam-paths} A tree $T$ is minor-minimal for the spectator floor if and only if no edge lies in the intersection of all diametric paths in $T$. \end{cor} \begin{proof} The contraction of an edge $e$ reduces the diameter of $T$ if and only if $e$ lies in the intersection of all diametric paths in $T$, and $T$ is minor-minimal for the spectator floor if and only if no elementary minor of $T$ has the same spectator floor. \end{proof} As a result of \cref{cor:diam-paths}, we have the following, which tells us that star graphs are minor-minimal for a given value of the spectator floor. \begin{cor} \label{cor:K1k+2} For $k\geq1$, the graph $K_{1,k+2}$ is minor-minimal among graphs with spectator floor $k$. \end{cor} \begin{proof} By \cref{tree.uspcf}, $\uspcf{K_{1,k+2}}=k+3-2-1=k$. Since $k+2\geq3$, no edge of $K_{1,k+2}$ lies in all of the diametric paths of $K_{1,k+2}$. Therefore, by \cref{cor:diam-paths}, $K_{1,k+2}$ is minor-minimal among graphs with spectator floor $k$.
\end{proof} In the final result of this section, we slightly improve upon the bound of \cref{diam.bound}. \begin{theorem} \label{lem:diameter} For every graph $G$, we have $\uspcf{G}\geq|G|-\diam(G)-1$. Moreover, if $\usp{G}\leq\diam(G)$, then $\uspcf{G}\geq|G|-\diam(G)$. \end{theorem} \begin{proof} By \cref{lem:no decontract}, there is a supergraph $G'$ of $G$ such that $\uspcf{G}=\uspc{G'}$ and $|G|=|G'|$. Since $G'$ is obtained from $G$ by adding edges but no vertices, we have $\diam(G')\leq\diam(G)$. Recalling \cref{diam.bound}, we have $\uspcf{G}=\uspc{G'}\geq|G'|-\diam(G')-1\geq|G|-\diam(G)-1$. Now, suppose $\usp{G}\leq\diam(G)$. Consider a parade in $G'$, and let $u$ and $v$ be its endpoints. We will show that the length of this parade in $G'$ is at most $\diam(G)-1$. If the distance between $u$ and $v$ in $G$ is at most $\diam(G)-1$, then the distance between $u$ and $v$ in $G'$ must be at most $\diam(G)-1$ also. On the other hand, consider the case that the distance in $G$ between $u$ and $v$ is $\diam(G)$. Since $\usp{G}\leq\diam(G)$, every parade in $G$ has at most $\diam(G)$ vertices, which implies that every parade in $G$ has length at most $\diam(G)-1$. Thus, the shortest paths joining $u$ and $v$ in $G$ are not unique. These paths are all still present in $G'$. Thus, since $u$ and $v$ are joined by a unique shortest path in $G'$, this path must have length at most $\diam(G)-1$. Thus, we have $\usp{G'}\leq\diam(G)$, implying that $\uspcf{G}=\uspc{G'}\geq|G'|-\diam(G)=|G|-\diam(G)$. \end{proof} \section{Calculation of the Spectator Floor of Simple Graphs} \label{calculation of spectator floor} If $A$ is the adjacency matrix of a graph $G$ and $k$ is a nonnegative integer, then the $(i, j)$-entry of $A^k$ counts the number of distinct walks of length exactly $k$ from vertex $i$ to vertex $j$ in $G$, including for example any paths of order $k + 1$. This leads to the following efficient way of computing the parade number of a given graph. 
\begin{obs}\label{USP-in-poly-time} Let $G$ be a connected graph on $n$ vertices with adjacency matrix $A$, and let $k$ range from $0$ to $n$. Then \[\usp{G} = 1 + \max\{k: (A+2I)^k\text{ has a 1 in some entry}\}.\] \end{obs} We use $(A + 2I)^k$ rather than $A^k$ in order to include a contribution of at least $2A^j$ for all $j < k$, which ensures that an entry equal to $1$ represents not just a unique walk but a unique shortest walk, and therefore a unique shortest path. The calculation terminates once every entry is strictly greater than $1$. Recall that the binomial expansion of $(A+2I)^k$ includes all powers of $A$ from $A^0=I$ through $A^k$, so the $(i,j)$-entry of $(A+2I)^k$ is a weighted accumulation of the number of walks from $i$ to $j$ of length at most $k$. The need for $2I$ rather than $I$ can be seen by considering $K_1$; the parade number of $K_1$ is one, but the adjacency matrix is $[0]$ and $([0] + I)^k = [1]$ for all $k$. Note that the choice of the multiplier 2 is arbitrary; any integer greater than one is sufficient. By Theorem \ref{lem:no decontract}, in order to compute the spectator floor of $G$, we need only compare the spectator number of $G$ to the spectator number of each supergraph of $G$. Moreover, by Corollary \ref{cor:components}, the spectator floor sums over connected components. This leads to Algorithm \ref{algo:spec-floor}, which calculates the spectator floor of every connected graph on at most $N$ vertices. Once the spectator floor has been calculated for all connected graphs on at most $N$ vertices, one can then use Algorithm \ref{algo:minor-minimal} to determine which of those graphs are minor-minimal with respect to the spectator floor. These algorithms were implemented in a SageMath \cite{sagemath} program to calculate the spectator floor of simple graphs and to determine which simple graphs are minor-minimal, with the code available at \cite{spec_floor_github_repo}.
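As a concrete illustration of \cref{USP-in-poly-time}, the following is a minimal pure-Python sketch (our own illustration with hypothetical helper names, not the SageMath program cited above). It forms $M=A+2I$, examines the powers $M^0,M^1,\ldots,M^n$, and returns $1$ plus the largest $k$ for which $M^k$ has an entry equal to $1$.

```python
def mat_mul(X, Y):
    """Multiply two square integer matrices."""
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def parade_number(edges, n):
    """Parade number usp(G) of a connected (multi)graph on vertices 0..n-1,
    computed as 1 + max{k : (A + 2I)^k has an entry equal to 1}."""
    # Build M = A + 2I; parallel edges simply increase the entries of A.
    M = [[2 if i == j else 0 for j in range(n)] for i in range(n)]
    for u, v in edges:
        M[u][v] += 1
        M[v][u] += 1
    power = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # M^0 = I
    best = 0
    for k in range(n + 1):
        if any(entry == 1 for row in power for entry in row):
            best = k
        power = mat_mul(power, M)
    return 1 + best
```

On the path $P_4$ every path is a unique shortest path, so the routine returns $4$; on the cycle $C_4$ only single edges are unique shortest paths, so it returns $2$. It also returns $1$ on $K_1$, in line with the discussion of the multiplier above.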
Note that while calculating the spectator floor for a fixed graph on a small number of vertices is relatively quick, the program takes over four days to run over all connected simple graphs on at most 10 vertices, and it would take over 400 days to process the connected simple graphs on 11 vertices, since the complex nested nature of the subgraph-supergraph relationship does not allow for a simple parallelization of Algorithm \ref{algo:spec-floor}. Since this program takes so long to run and produces a large data set, that data and functions to access that data are also available in the GitHub repository at \cite{spec_floor_github_repo}. \SetKwComment{Comment}{/* }{ */} \RestyleAlgo{ruled} \begin{algorithm}[!h] \caption{Calculate $\uspcf{G}$ for all connected graphs on at most $N$ vertices}\label{algo:spec-floor} \KwData{$N$, maximum number of vertices} \KwResult{All connected graphs on up to $N$ vertices with their spectator floor} \emph{$L \gets$ empty list}\; \For{$n\leq N$} { \For{$e = n(n-1)/2, \dots, n-1$} { \ForAll{connected graphs $G$ on $n$ vertices and $e$ edges} { $\usp{G} \gets 1 + \max\{k: (A+2I)^k\text{ has a 1 in some entry}\}$; $\uspc{G} \gets n - \usp{G}$; $\uspcf{G} \leftarrow \uspc{G}$\Comment*[r]{Initialize the value of $\uspcf{G}$} \ForAll{supergraphs $\widehat{G}$ of $G$} { \If{$\uspc{\widehat{G}} < \uspcf{G}$} { $\uspcf{G} \leftarrow \uspc{\widehat{G}}$; } } append $(G, \uspcf{G})$ pair to $L$; } } } \Return{$L$} \end{algorithm} \RestyleAlgo{ruled} \begin{algorithm}[!h] \caption{Identify minor-minimal graphs}\label{algo:minor-minimal} \KwData{The graph-spectator floor pairs resulting from Algorithm \ref{algo:spec-floor}} \KwResult{All minor-minimal (w.r.t.\ $\uspcf{G}$) connected graphs on up to $N$ vertices} \emph{$L \gets$ empty list}\; \For{$n\leq N$} { \For{$e = n(n-1)/2, \dots, n-1$} { \ForAll{connected graphs $G$ on $n$ vertices and $e$ edges} { Determine the minors of $G$; \ForEach{minor $H$ of $G$} {\If{$\uspcf{H} = 
\uspcf{G}$} { skip to the next graph $G$\Comment*[r]{$G$ is not minor-minimal} } } append $G$ to $L$\Comment*[r]{$G$ is minor-minimal} } } } \Return{$L$} \end{algorithm} \section{Graphs of Spectator Floor 0, 1, and 2} \label{sec:minimal} In this section, we give the complete list of minor-minimal graphs of spectator floor 0, 1, and 2. Along the way, we characterize those graphs with $\uspcf{G} > 0$, $\uspcf{G} > 1$, and $\uspcf{G} > 2$, allowing quick recognition of such graphs. We begin with graphs of spectator floor 0. \begin{prop} \label{lem:path floor} Let $G$ be a graph. Then $\uspcf{G}=0$ if and only if $G$ is a disjoint union of paths. \end{prop} \begin{proof} By definition of $\uspc{G}$ and $\uspcf{G}$, it is clear that a path $P$ has $\uspc{P}=0$ and $\uspcf{P}=0$. First suppose $G$ is the disjoint union of paths $P_1,P_2,\ldots,P_t$, and let the endpoints of $P_i$ be $x_i$ and $y_i$. Form a supergraph $G'$ of $G$ by adding edges joining $y_i$ and $x_{i+1}$, for $1\leq i\leq t-1$. Since $G'$ is a path, we have $\uspc{G'}=0$ and therefore $\uspcf{G}=0$. Now, suppose $\uspcf{G}=0$. There must be a graph $G'$ such that $G$ is a minor of $G'$ and such that all vertices of $G'$ are contained in a unique shortest path $P$. Any edge added in parallel to an edge of $P$ causes $P$ to no longer be unique. If any other edge is added, it causes $P$ to not be a shortest path. Therefore, $G'$ must be a path. Since $G$ is a minor of a path, $G$ must be a disjoint union of paths. \end{proof} We now turn our attention to minor-minimal graphs with spectator floor $1$. The first result is a corollary that follows from \cref{lem:path floor}. \begin{cor} \label{cor:min-multi-1} The complete list of minor-minimal graphs with spectator floor $1$ is $C_2$ and $K_{1,3}$. The complete list of minor-minimal simple graphs with spectator floor $1$ is $K_3$ and $K_{1,3}$.
\end{cor} \begin{proof} For the first statement, if $G$ does not have spectator floor 0, then by \cref{lem:path floor} $G$ is not a disjoint union of paths, and so $G$ either has a vertex of degree at least $3$, or $G$ contains a cycle. If $G$ has a vertex of degree at least $3$, then it contains $K_{1,3}$ as a subgraph; if $G$ contains a cycle, then $G$ can be contracted to $C_2$ (for multigraphs) or $K_3$ (for simple graphs). Conversely, each of $C_2$, $K_3$, and $K_{1,3}$ has spectator floor $1$, and each of their proper minors (taken in the appropriate setting) is a disjoint union of paths and so has spectator floor $0$; hence each is minor-minimal. \end{proof} Next we begin our investigation of minor-minimal graphs with spectator floor 2. The following lemma will be used later to prove that certain graphs have spectator floor 2. \begin{lemma} \label{lem:figures} All of the graphs in \cref{fig:sharevertex,fig:shareedges,fig:one-cycle,fig:one-cycle2} have spectator floor $1$. \end{lemma} \begin{proof} Consider the graphs in \cref{fig:sharevertex,fig:shareedges,fig:one-cycle,fig:one-cycle2}. (The vertex labels and captions will be used in the proof of \cref{thm:min-multi-2} below.) If we ignore the vertex labels, we note that all of these graphs are subgraphs of the graph in \cref{fig:sharevertex}. By \cref{lem:path floor}, all of these graphs have spectator floor at least $1$. It is clear that the graph in \cref{fig:sharevertex} has spectator number $1$. Therefore, all of the graphs in \cref{fig:sharevertex,fig:shareedges,fig:one-cycle,fig:one-cycle2} have spectator floor at most $1$.
\end{proof} \begin{figure}[!htbp] \centering \begin{subfigure}[b]{.47 \textwidth} \centering \begin{tikzpicture}[x=.9cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (1,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (c) at (2,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (u) at (3,0) [label=below:$u$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (w) at (4,0) [label=below:$w$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (d) at (5,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (e) at (6,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (f) at (7,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (3.5,1) [label=above:$v$] {}; \path (a) edge (b) (c) edge (u) (u) edge (w) (w) edge (d) (e) edge (f) (u) edge[bend right=20] (v) (u) edge[bend left=20] (v) (w) edge[bend right=20] (v) (w) edge[bend left=20] (v); \draw (b) to (1.3,0); \draw (1.7,0) to (c); \draw [dotted,semithick] (1.4,0) -- (1.65,0); \draw (d) to (5.3,0); \draw (5.7,0) to (e); \draw [dotted,semithick] (5.4,0) -- (5.65,0); \end{tikzpicture} \caption{$G$ if two cycles share exactly one vertex} \label{fig:sharevertex} \end{subfigure} \begin{subfigure}[b]{.47\textwidth} \centering \begin{tikzpicture}[x=.9cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (1,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (c) at (2,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (u) at (3,0) [label=below:$u$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (4,0) [label=below:$v$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (d) at (5,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (e) at (6,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (f) at (7,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (w) at (3.5,1) 
[label=above:$w$] {}; \path (a) edge (b) (c) edge (u) (u) edge (v) (w) edge (v) (v) edge (d) (e) edge (f) (u) edge[bend right=20] (w) (u) edge[bend left=20] (w); \draw (b) to (1.3,0); \draw (1.7,0) to (c); \draw [dotted,semithick] (1.4,0) -- (1.65,0); \draw (d) to (5.3,0); \draw (5.7,0) to (e); \draw [dotted,semithick] (5.4,0) -- (5.65,0); \end{tikzpicture} \caption{$G$ if it has at least two cycles} \label{fig:shareedges} \end{subfigure} \begin{subfigure}[b]{.47\textwidth} \centering \begin{tikzpicture}[x=.9cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (1,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (c) at (2,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (u) at (3,0) [label=below:$u$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (d) at (4,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (e) at (5,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (f) at (6,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (3,1) [label=above:$v$] {}; \path (a) edge (b) (c) edge (u) (u) edge (d) (e) edge (f) (u) edge[bend right=20] (v) (u) edge[bend left=20] (v); \draw (b) to (1.3,0); \draw (1.7,0) to (c); \draw [dotted,semithick] (1.4,0) -- (1.65,0); \draw (d) to (4.3,0); \draw (4.7,0) to (e); \draw [dotted,semithick] (4.4,0) -- (4.65,0); \end{tikzpicture} \caption{One possibility if $G$ has exactly one cycle} \label{fig:one-cycle} \end{subfigure} \begin{subfigure}[b]{.47\textwidth} \centering \begin{tikzpicture}[x=.9cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (1,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (c) at (2,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (u) at (3,0) [label=below:$u$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (4,0) [label=below:$v$] {}; \node[vertex][fill,inner sep=1pt,minimum 
size=1pt] (d) at (5,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (e) at (6,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (f) at (7,0){}; \path (a) edge (b) (c) edge (u) (v) edge (d) (e) edge (f) (u) edge[bend right=20] (v) (u) edge[bend left=20] (v); \draw (b) to (1.3,0); \draw (1.7,0) to (c); \draw [dotted,semithick] (1.4,0) -- (1.65,0); \draw (d) to (5.3,0); \draw (5.7,0) to (e); \draw [dotted,semithick] (5.4,0) -- (5.65,0); \end{tikzpicture} \caption{Another possibility if $G$ has exactly one cycle} \label{fig:one-cycle2} \end{subfigure} \caption{Some graphs with spectator floor 1} \label{fig:specfloor1} \end{figure} Our first pair of results for spectator floor 2 concern simple graphs. The lemma below gives a list of minor-minimal simple graphs with spectator floor 2, and in the following \cref{prop:minimaml-simple} we will also show that this is the complete list of such graphs. \begin{lemma} \label{lem:simple-minimal} The following graphs are minor-minimal among simple graphs with spectator floor $2$: $K_3\sqcup K_3$, $K_3\sqcup K_{1,3}$, $K_{1,3}\sqcup K_{1,3}$, $C_4$, $K_{1,4}$, the $3$-sun (see \cref{fig:3sun}), and the long $Y$ (see \cref{fig:longY}).
\end{lemma} \begin{figure}[!htbp] \begin{subfigure}[b]{.47\textwidth} \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0) [label=above:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (1,0) [label=left:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (c) at (2,0) [label=below:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (d) at (3,0) [label=below:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (e) at (1.5,1) [label=above:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (f) at (1.5,2) [label=below:$$] {}; \path (a) edge (b) (b) edge (c) (c) edge (d) (b) edge (e) (c) edge (e) (e) edge (f); \end{tikzpicture}\] \caption{The $3$-sun} \label{fig:3sun} \end{subfigure} \begin{subfigure}[b]{.47\textwidth} \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0) [label=above:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (0,-1) [label=left:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (c) at (0,-2) [label=below:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (d) at (-.7,.7) [label=below:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (e) at (-1.4,1.4) [label=above:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (f) at (.7,.7) [label=below:$$] {}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (g) at (1.4,1.4) [label=above:$$] {}; \path (a) edge (b) (b) edge (c) (a) edge (d) (d) edge (e) (a) edge (f) (f) edge (g); \end{tikzpicture}\] \caption{The long $Y$} \label{fig:longY} \end{subfigure} \caption{Two graphs from \cref{lem:simple-minimal}} \end{figure} \begin{proof} It follows from \cref{prop:components} that a disconnected graph is minor-minimal among simple graphs with spectator floor $2$ if and only if it has exactly two components, each of which has spectator floor $1$.
Therefore, by \cref{cor:min-multi-1}, $K_3\sqcup K_3$, $K_3\sqcup K_{1,3}$, and $K_{1,3}\sqcup K_{1,3}$ are all minor-minimal among simple graphs with spectator floor $2$. The fact that $K_{1,4}$ is minor-minimal among simple graphs with spectator floor $2$ follows directly from \cref{cor:K1k+2}. It follows from \cref{tree.uspcf,cor:diam-paths} that the long $Y$ is also minor-minimal among simple graphs with spectator floor $2$. Note that $\usp{C_4}=\diam(C_4)=2$. By the second statement of \cref{lem:diameter}, we have $\uspcf{C_4}\geq4-2=2$. Since $\uspcf{C_4}\leq\uspc{C_4}=2$, we have $\uspcf{C_4}=2$. To see that $C_4$ is minor-minimal, note that deletion of any edge results in a path, which has spectator floor $0$, and that contraction of any edge results in a triangle, which has spectator floor $1$. Let $G$ be the $3$-sun, and note that $|G|=6$ and $\diam(G)=3$. By the first statement in \cref{lem:diameter}, we have $\uspcf{G}\geq6-3-1=2$. Since $\uspcf{G}\leq\uspc{G}=2$, we have $\uspcf{G}=2$. To see that the $3$-sun is minor-minimal, first recall from \cref{lem:isolated} that isolated vertices have no effect on the spectator floor of a graph. If we delete any edge from the $3$-sun and then disregard any isolated vertices that may result, we obtain a subgraph of a graph of the form given in \cref{fig:sharevertex}. Thus, deletion of any edge of the $3$-sun results in a graph with spectator floor at most $1$. We now consider the effect of contracting an edge from the $3$-sun. Since we want to show that the $3$-sun is minor-minimal among simple graphs, we should immediately simplify once if any parallel edges result from the contraction. One can easily check that, if any edge is contracted and the resulting graph is simplified, then the result is a subgraph of a graph of the form given in \cref{fig:sharevertex}. Thus, the resulting graph has spectator floor at most $1$.
\end{proof} We are now prepared to show that the list of graphs in \cref{lem:simple-minimal} is in fact the complete list of minor-minimal simple graphs with spectator floor 2. \begin{prop} \label{prop:minimaml-simple} The complete list of minor-minimal simple graphs with spectator floor $2$ is $K_3\sqcup K_3$, $K_3\sqcup K_{1,3}$, $K_{1,3}\sqcup K_{1,3}$, $C_4$, $K_{1,4}$, the long $Y$, and the $3$-sun. \end{prop} \begin{proof} Suppose for a contradiction that $G$ is a minor-minimal simple graph with spectator floor $2$ that is not one of the graphs listed in the statement. If $G$ is not connected, then by \cref{prop:components}, $G$ is the disjoint union of graphs $G_1$ and $G_2$, each with spectator floor $1$. The minor-minimal simple graphs with spectator floor $1$ are $K_3$ and $K_{1,3}$. Therefore, $G$ is $K_3\sqcup K_3$, $K_3\sqcup K_{1,3}$, or $K_{1,3}\sqcup K_{1,3}$, a contradiction. Thus, $G$ is connected. Since $G$ is minor-minimal and is not $C_4$ or $K_{1,4}$, it does not contain $C_4$ or $K_{1,4}$ as a minor. Therefore, the maximum degree of $G$ is $3$, and $G$ has no cycle of length greater than $3$. We now show that $G$ has at most one triangle. If $G$ has two disjoint triangles, then $G$ has $K_3\sqcup K_3$ as a minor, a contradiction. If two triangles of $G$ share exactly one vertex, then that vertex has degree $4$. If two triangles share an edge, then $G$ contains $C_4$. Therefore, $G$ has at most one triangle. If $G$ has two vertices of degree $3$ that are not contained in a triangle, then by contracting a path joining the vertices, we obtain $K_{1,4}$ as a minor. Thus, either \begin{itemize} \item[(i)] $G$ has exactly one triangle, and all vertices not in the triangle have degree $1$ or $2$, or \item[(ii)] $G$ is a tree with at most one vertex of degree $3$, and all other vertices of $G$ have degree $1$ or $2$. \end{itemize} We first consider case (i). If every vertex of the triangle has degree $3$, then $G$ has the $3$-sun as a minor, a contradiction. Otherwise, $G$ consists of a path with one additional vertex forming a triangle with two vertices in the path.
Then $\uspc{G}=1$, and we have a contradiction. Now we consider case (ii). If $G$ has no vertex of degree $3$, then $G$ is a path, and $\uspc{G}=0$, a contradiction. Thus, we may assume that $G$ has exactly one vertex $v$ of degree $3$. If all three vertices adjacent to $v$ have degree $2$, then $G$ has the long $Y$ as a minor, a contradiction. Otherwise, $G$ consists of a path with one additional vertex adjacent to one vertex on the path. Then $\uspc{G}=1$, and we have a contradiction. \end{proof} Our next pair of results for spectator floor 2 concern the more general case of multigraphs. We first need to define the graphs $H_1$ through $H_5$. \begin{definition} \label{def:minor-minimal} Let $H_1$ be obtained from $K_3$ by doubling every edge. Let $H_2$ be obtained from $K_{1,3}$ by doubling two of the edges. Let $H_3$ be the $1$-sum of $K_3$ and $C_2$, and let $H_4$ be obtained by contracting one edge of the triangle in the $3$-sun. Let $H_5$ be the graph shown in \cref{fig:rocket}. \begin{figure}[!htbp] \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0.5){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (0,1.5){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (c) at (1,1){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (d) at (2,1){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (e) at (3,1){}; \path (a) edge (c) (b) edge (c) (d) edge (c) (e) edge[bend right=20] (d) (e) edge[bend left=20] (d); \end{tikzpicture}\] \caption{$H_5$} \label{fig:rocket} \end{figure} \end{definition} The lemma below gives a list of minor-minimal (multi)graphs with spectator floor 2, and in the following \cref{thm:min-multi-2} we will also show that this is the complete list of such graphs.
\begin{lemma} \label{lem:multi-minimal} The following graphs are all minor-minimal among graphs with spectator floor $2$: $C_2\sqcup C_2$, $C_2\sqcup K_{1,3}$, $K_{1,3}\sqcup K_{1,3}$, $C_4$, $H_1$, $H_2$, $H_3$, $H_4$, $H_5$, $K_{1,4}$, and the long $Y$. \end{lemma} \begin{proof} It follows from \cref{prop:components} that a disconnected graph is minor-minimal among graphs with spectator floor $2$ if and only if it has exactly two components, each of which has spectator floor $1$. Therefore, by \cref{cor:min-multi-1}, $C_2\sqcup C_2$, $C_2\sqcup K_{1,3}$, and $K_{1,3}\sqcup K_{1,3}$ are all minor-minimal among graphs with spectator floor $2$. Every single-edge deletion and every single-edge contraction of $C_4$, $K_{1,4}$, and the long $Y$ is a simple graph. Therefore, \cref{lem:simple-minimal} implies that these graphs are minor-minimal among graphs with spectator floor $2$. One can check that $\diam(H_1)=1$, that $\diam(H_2)=\diam(H_3)=2$, and that $\diam(H_4)=\diam(H_5)=3$. One can also check that $\usp{H_1}=1$, that $\usp{H_2}=\usp{H_3}=2$, and that $\usp{H_4}=\usp{H_5}=3$. It follows that, if $G\in\{H_1,H_2,H_3,H_4,H_5\}$, then $\uspc{G}=2$. Thus, $\uspcf{G}\leq2$. Moreover, since $\usp{G}=\diam(G)$, \cref{lem:diameter} implies that $\uspcf{G}\geq|G|-\diam(G)=2$. Therefore, $\uspcf{G}=2$. To show that $H_1$, $H_2$, $H_3$, $H_4$, and $H_5$ are minor-minimal, we must show that every single-edge deletion and every single-edge contraction of each of these graphs has spectator floor less than $2$. Since every edge of $H_1$ is in parallel with another edge, no edge can be contracted. If an edge is deleted from $H_1$, then the resulting graph is a subgraph of a graph of the form given in \cref{fig:sharevertex}, which has spectator number $1$.
If $G\in\{H_2,H_3,H_4,H_5\}$, then one can check that every single-edge deletion and every single-edge contraction of $G$ is a subgraph of a graph of the form given in \cref{fig:sharevertex}, which has spectator number $1$. \end{proof} Next, we have a couple of technical lemmas that will support the proof of \cref{thm:min-multi-2}. \begin{lemma} \label{lem:max2edges} Let $G$ be a graph such that two or more edges join vertices $v$ and $w$. If $e$ is one of these edges, then $\usp{G\backslash e}\geq \usp{G}$ and $\uspc{G\backslash e}\leq \uspc{G}$. Moreover, if three or more edges join vertices $v$ and $w$, then $\usp{G\backslash e}=\usp{G}$ and $\uspc{G\backslash e}=\uspc{G}$. \end{lemma} \begin{proof} No unique shortest path in $G$ contains $e$ since there is at least one other edge in parallel with $e$. Therefore, every unique shortest path in $G$ is also a unique shortest path in $G\backslash e$. The only case where a unique shortest path in $G\backslash e$ is not a unique shortest path in $G$ is if $G\backslash e$ has exactly one edge joining $v$ and $w$ and this path contains that edge. Therefore, the set of unique shortest paths of $G$ is a subset of the set of unique shortest paths of $G\backslash e$, implying that $\usp{G\backslash e}\geq \usp{G}$ and $\uspc{G\backslash e}\leq \uspc{G}$. Moreover, if three or more edges join $v$ and $w$ in $G$, then two or more edges join $v$ and $w$ in $G\backslash e$. Therefore, the set of unique shortest paths of $G$ is equal to the set of unique shortest paths of $G\backslash e$, implying that $\usp{G\backslash e}=\usp{G}$ and $\uspc{G\backslash e}=\uspc{G}$. \end{proof} \begin{lemma} \label{lem:max2edgesfloor} Let $G$ be a graph such that three or more edges join vertices $v$ and $w$. If $e$ is one of these edges, then $\uspcf{G\backslash e}=\uspcf{G}$. \end{lemma} \begin{proof} Since $G\backslash e$ is a minor of $G$, we have $\uspcf{G\backslash e}\leq\uspcf{G}$. 
Now, let $H$ be a graph containing $G\backslash e$ as a minor such that $\uspcf{G\backslash e}=\uspc{H}$. By \cref{lem:no decontract}, we may assume that $G\backslash e$ is obtained from $H$ by deleting edges. Therefore, there are at least two edges joining $v$ and $w$ in $H$. Add an additional edge joining $v$ and $w$ in $H$ to form the graph $H^+$, which contains $G$ as a minor. By \cref{lem:max2edges}, we have $\uspc{H}=\uspc{H^+}$. We then have $\uspcf{G\backslash e}=\uspc{H}=\uspc{H^+}\geq\uspcf{G}$. \end{proof} In the final result of this section, we prove that the list of graphs in \cref{lem:multi-minimal} is in fact the complete list of minor-minimal (multi)graphs of spectator floor 2. \begin{theorem} \label{thm:min-multi-2} The complete list of minor-minimal graphs with spectator floor $2$ is $C_2\sqcup C_2$, $C_2\sqcup K_{1,3}$, $K_{1,3}\sqcup K_{1,3}$, $C_4$, $H_1$, $H_2$, $H_3$, $H_4$, $H_5$, $K_{1,4}$, and the long $Y$. \end{theorem} \begin{proof} Let $G$ be a minor-minimal graph with spectator floor $2$, and suppose for a contradiction that $G$ is not one of the graphs given in the statement of the result. By \cref{lem:max2edgesfloor}, there are at most two edges joining each pair of vertices of $G$. \begin{claim} \label{connected} $G$ is connected. \end{claim} \begin{subproof} If $G$ is not connected, then by \cref{prop:components}, $G$ is the disjoint union of graphs $G_1$ and $G_2$, each with spectator floor $1$. By \cref{cor:min-multi-1}, the minor-minimal graphs with spectator floor $1$ are $C_2$ and $K_{1,3}$. Therefore, $G$ is $C_2\sqcup C_2$, $C_2\sqcup K_{1,3}$, or $K_{1,3}\sqcup K_{1,3}$, each of which is a graph given in the statement of the result. \end{subproof} Since $G$ does not contain $C_4$ as a minor, we have the following. \begin{claim} \label{cycle} $G$ has no cycle of length at least $4$. \end{claim} Since $G$ does not contain $K_{1,4}$ as a minor, we have the following.
\begin{claim} \label{neighborhood} $G$ has no vertex with a neighborhood of cardinality at least $4$. \end{claim} \begin{claim} \label{vertex} $G$ has no pair of disjoint cycles and no pair of cycles that share exactly one vertex. \end{claim} \begin{subproof} Since $C_2\sqcup C_2$ is not a minor of $G$, there is no pair of disjoint cycles in $G$. Since $H_3$ is not a minor of $G$, if two cycles share exactly one vertex, both cycles must have length $2$. Now, suppose for a contradiction that $G$ contains two copies of $C_2$ as subgraphs and that these copies share exactly one vertex $v$. Let $u$ and $w$ be the other vertices in these copies of $C_2$. Because $H_2$ is not a minor of $G$, we have $N_G(v)=\{u,w\}$. Because $H_4$ is not a minor of $G$, we have $|N_G(u)-\{v,w\}|\leq1$ and $|N_G(w)-\{u,v\}|\leq1$. Moreover, since $C_4$ and $H_1$ are not minors of $G$, there is no path from $u$ to $w$ in $G-v$ except possibly at most one edge joining $u$ and $w$. Finally, because $H_5$ is not a minor of $G$, no vertex in $V(G)-\{u,v,w\}$ has a neighborhood of cardinality greater than $2$. Therefore, $G$ is a subgraph of the graph in \cref{fig:sharevertex}. This graph has spectator number $1$; thus $\uspcf{G}\leq1$, a contradiction. \end{subproof} \begin{claim} \label{triangles} No pair of triangles of $G$ share exactly one edge. \end{claim} \begin{subproof} Otherwise, the union of these triangles contains a cycle of length $4$, violating \cref{cycle}. \end{subproof} \begin{claim} \label{onecycle} $G$ has at most one cycle. \end{claim} \begin{subproof} Suppose for a contradiction that $G$ has more than one cycle. We know that each pair of vertices of $G$ is joined by at most two edges. Therefore, \cref{cycle,vertex,triangles} imply that $G$ contains a triangle with vertices $u$, $v$, and $w$ with a second edge joining $u$ and $w$. By \cref{vertex}, there is no path in $G-w$ from $u$ to $v$ other than the edge $uv$.
Similarly, there is no path in $G-u$ from $w$ to $v$ other than the edge $wv$. We can also see that there is no path from $u$ to $w$ in $G-v$ other than the two edges joining $u$ and $w$. This is because $G$ has no cycle of length at least $4$ and at most two edges joining $u$ and $w$. Therefore, there are subgraphs $G_u$, $G_v$, and $G_w$ of $G$ such that, for each $i\in\{u,v,w\}$ and each vertex $x$ in $G_i$, every path from $x$ to $\{u,v,w\}$ has endpoints $x$ and $i$. (In particular, $i\in V(G_i)$ for each $i\in\{u,v,w\}$.) Such a path must moreover be unique; otherwise, \cref{vertex} is violated. Thus, each of $G_u$, $G_v$, and $G_w$ is a tree. Moreover, if we denote by $F$ the set of four edges both of whose endpoints are in $\{u,v,w\}$, then $G$ is the graph whose vertex set is $V(G_u)\sqcup V(G_v)\sqcup V(G_w)$ and whose edge set is $E(G_u)\sqcup E(G_v)\sqcup E(G_w)\sqcup F$. For $i\in\{u,v,w\}$, vertex $i$ has at most one neighbor in $G_i$. Otherwise, $G$ has $K_{1,4}$ as a minor. Similarly, since $G$ does not have $K_{1,4}$ as a minor, no vertex in $V(G_i)-\{i\}$ has more than two neighbors. If $u$ and $w$ have neighbors in $G_u$ and $G_w$, respectively, then $G$ contains $H_4$ as a minor. Therefore, either $N_G(u)=\{v,w\}$ or $N_G(w)=\{u,v\}$. Without loss of generality, let $N_G(w)=\{u,v\}$. Then $G$ is a subgraph of a graph of the form given in \cref{fig:shareedges}. This graph has spectator number $1$; thus $\uspcf{G}\leq1$, a contradiction. \end{subproof} Therefore, one of the following holds. \begin{itemize} \item[(i)] $G$ is a tree, \item[(ii)] $G$ has exactly one cycle, which is a triangle, or \item[(iii)] $G$ has exactly one cycle, which is a $C_2$. \end{itemize} In cases (i) and (ii), $G$ is a simple graph. Since $G$ is not any of the graphs listed above, and since $G$ is connected, \cref{prop:minimaml-simple} implies that $G$ is the $3$-sun. However, by contracting an edge of the triangle in the $3$-sun, we obtain $H_4$. Therefore, $G$ has exactly one cycle, which is a $C_2$.
Let $u$ and $v$ be the vertices of $C_2$. Because $H_5$ is not a minor of $G$, every vertex in $V(G)-\{u,v\}$ has degree $1$ or $2$. Since $K_{1,4}$ is not a minor of $G$, we have $|N_G(u)-\{v\}|\leq2$ and $|N_G(v)-\{u\}|\leq2$. Moreover, either $|N_G(u)-\{v\}|<2$ or $|N_G(v)-\{u\}|<2$. Without loss of generality, let $|N_G(v)-\{u\}|<2$. If $|N_G(v)-\{u\}|=0$, then $G$ is a subgraph of a graph of the form shown in \cref{fig:one-cycle}. Thus, we have $\uspcf{G}\leq1$, a contradiction. If $|N_G(v)-\{u\}|=1$, then since $H_4$ is not a minor of $G$, we must also have $|N_G(u)-\{v\}|=1$. Therefore, $G$ is a subgraph of a graph of the form given in \cref{fig:one-cycle2}, which is isomorphic to a subgraph of a graph of the form shown in \cref{fig:shareedges}. The graph in \cref{fig:shareedges} has spectator number $1$. Therefore, $\uspcf{G}\leq1$, a contradiction. Since every case leads to a contradiction, the result holds. \end{proof} \section{Minor Maximal Graphs} \label{minor max graphs} To this point in the paper, we have discussed minor minimal graphs at some length; in this sense, we have only looked downwards. We shall now look upwards and consider the possibilities of minor maximal graphs of a given spectator floor. We begin with the observation that, without additional restrictions, there are no minor maximal graphs with a given spectator floor. This is so because, given any graph $G$, if we add a new isolated vertex to $G$ in order to obtain $G'$, then $G$ is a minor of $G'$, and by \cref{prop:components}, $G'$ has the same spectator floor as $G$. Hence, we can construct an infinite chain of graphs which are above $G$ and have the same spectator floor. Therefore, in order to obtain any meaningful information, we must search for minor maximal graphs amongst subsets of graphs which are restricted in some way.
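The invariants discussed here are easy to experiment with on small examples. The following sketch is ours and not part of the paper's formal development; the function names are our own. It computes the spectator number of a small multigraph as the number of vertices off a longest unique shortest path, and brute-forces the spectator floor of a simple graph by minimizing the spectator number over all simple supergraphs on the same vertex set, a restriction we justify by \cref{lem:no decontract} together with the observation (cf.\ \cref{lem:max2edges}) that adding parallel edges cannot lower the spectator number.

```python
from collections import deque
from itertools import combinations

def _dist_and_count(adj, s):
    """BFS from s: distances and numbers of shortest paths,
    counting edge multiplicities (parallel edges)."""
    dist, count = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for v, mult in adj[u].items():
            if v not in dist:
                dist[v], count[v] = dist[u] + 1, 0
                q.append(v)
            if dist[v] == dist[u] + 1:
                count[v] += mult * count[u]
    return dist, count

def spectator_number(vertices, edges):
    """uspc(G): number of vertices NOT on a longest unique shortest
    path (a longest parade).  `edges` is a list of pairs; repeated
    pairs represent parallel edges."""
    adj = {v: {} for v in vertices}
    for u, v in edges:
        adj[u][v] = adj[u].get(v, 0) + 1
        adj[v][u] = adj[v].get(u, 0) + 1
    longest = 1  # a single vertex is a (trivial) unique shortest path
    for s in vertices:
        dist, count = _dist_and_count(adj, s)
        longest = max([longest] +
                      [dist[t] + 1 for t in dist if count[t] == 1])
    return len(vertices) - longest

def spectator_floor(vertices, edges):
    """uspcf(G) for a small simple graph G, by brute force: minimize
    uspc over all simple supergraphs on the same vertex set.
    (Edge additions suffice by the 'no decontract' lemma; parallel
    edges never help, so simple supergraphs are enough.)"""
    present = {frozenset(e) for e in edges}
    optional = [e for e in combinations(sorted(vertices), 2)
                if frozenset(e) not in present]
    return min(spectator_number(vertices, list(edges) + list(extra))
               for r in range(len(optional) + 1)
               for extra in combinations(optional, r))
```

On the $4$-cycle, for instance, this computation gives spectator number $2$ and spectator floor $2$, consistent with $C_4$ being minor-minimal of spectator floor $2$ in \cref{thm:min-multi-2}.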
The most natural restriction to make is on the number of vertices and on the number of parallel edges allowed, for which we can completely characterize the minor maximal graphs. In order to present the result, we must first present some new definitions. \begin{definition}\label{def:crowded_parade} If $p\geq2$, then a \emph{crowded $p$-parade} is a graph $G$ with a parade of $p$ vertices that has the following properties: \begin{itemize} \item Every vertex outside the parade is adjacent to exactly two vertices in the parade, and those two parade vertices are adjacent to each other. \item Any two vertices outside the parade that are adjacent to a common parade vertex are also adjacent to each other. \end{itemize} A \emph{crowded $1$-parade} is a graph whose simplification is a complete graph. \end{definition} \begin{definition}\label{def:sat_crowded_parade} If $p\geq2$, an \emph{$m$-saturated crowded $p$-parade} is a crowded $p$-parade in which every edge that is not in the parade lies in a class of exactly $m$ pairwise parallel edges (counting the edge itself). An \emph{$m$-saturated crowded $1$-parade} is a graph such that every pair of vertices is joined by exactly $m$ edges.
\end{definition} \begin{figure}[!htbp] \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill] (a) at (0,0){}; \node[vertex][fill] (b) at (2,0){}; \node[vertex][fill] (c) at (4,0){}; \node[vertex][fill] (d) at (6,0){}; \node[vertex][fill] (e) at (8,0){}; \node[vertex][fill] (f) at (10,0){}; \node[vertex][fill] (g) at (12,0){}; \node[vertex][fill] (bc) at (3,1.5){}; \node[vertex][fill] (cd1) at (5,2){}; \node[vertex][fill] (cd2) at (5,1){}; \node[vertex][fill] (de) at (7,1.5){}; \node[vertex][fill] (fg1) at (11,2){}; \node[vertex][fill] (fg2) at (10.4,1.5){}; \node[vertex][fill] (fg3) at (11.6,1.5){}; \draw (a) to (g); \draw[double distance=2] (b) to (bc) to (c); \draw[double distance=2] (cd1) to (c) to (cd2) to (d) to (cd1); \draw[double distance=2] (cd1) to (cd2); \draw[double distance=2] (d) to (de) to (e); \draw[double distance=2] (bc) to (cd1) to (de) to (cd2) to (bc); \draw[double distance=2] (f) to (fg1) to (g); \draw[double distance=2] (f) to (fg2) to (g); \draw[double distance=2] (f) to (fg3) to (g); \draw[double distance=2] (fg1) to (fg2) to (fg3) to (fg1); \end{tikzpicture}\] \caption{An example of a 2-saturated crowded 7-parade.} \label{fig:crowded_parade} \end{figure} With these definitions in hand, we will show that a graph is minor maximal amongst graphs with a given number of vertices, a given bound on the number of parallel edges, and a given spectator floor value if and only if the graph is a saturated crowded parade. Before presenting this characterization of minor maximal graphs, however, we first have two technical lemmas that we will use repeatedly in the proof of the characterization. In the first lemma, we use the notation $d_S(x,y)$ to refer to the distance between any vertices $x$ and $y$ within any graph $S$. \begin{lemma}\label{lem:maximals} Let $G$ be a graph that is minor maximal amongst graphs with $n$ vertices, at most $m$ parallel edges between any given vertices, and spectator floor $k$. Let $a$ and $b$ be the endpoints of a parade of $n-k$ vertices in $G$.
Let $H$ be $G$ with one edge added between some vertices $v$ and $w$ where no edge was present in $G$. Then at least one of the following is true: \[d_H(a,b)=d_G(a,v)+1+d_G(w,b)\] \[d_H(a,b)=d_G(a,w)+1+d_G(v,b)\] \end{lemma} \begin{proof} By \cref{lem:no decontract}, there is a graph $G'$ with the same number of vertices as $G$ such that $G$ is a subgraph of $G'$ and such that $\uspcf{G}=\uspc{G'}$. Further, we may also assume that $G'$ has at most $m$ parallel edges between any given vertices. (It follows from \cref{lem:max2edges} that adding an edge in parallel with an edge in $G$ can only increase the spectator number.) Since we assumed that $G$ is maximal amongst such graphs, we conclude that $G'=G$. Hence $\uspc{G}=k$. Now consider the graph $H$, which is the same as $G$ but with an edge added between $v$ and $w$, which are some vertices of $G$ that are not adjacent in $G$. Since $G$ is a proper subgraph of $H$, and $H$ is a graph with $n$ vertices and at most $m$ parallel edges between any given vertices, and $G$ is maximal amongst graphs with $n$ vertices, at most $m$ parallel edges between any given vertices, and spectator floor $k$, we conclude that $H$ does not have spectator floor $k$. Furthermore, since $G$ is a minor of $H$, we have $\uspcf{H}\geq\uspcf{G}=k$, and so in fact $\uspcf{H}>\uspcf{G}$. Thus, $\uspc{H}\geq\uspcf{H}>\uspc{G}$. Since $\uspc{G}=k$, $G$ has a parade $P$ of $n-k$ vertices. Let us call the endpoints of $P$ by the names $a$ and $b$. Since $H$ still contains $P$ but $\uspc{H}\neq k$, we conclude that $P$ is no longer a longest unique shortest path in $H$. There cannot be a longer unique shortest path than $P$ in $H$, however, since that would result in $\uspc{H}<\uspc{G}$, so it must be that $P$ is not a unique shortest path in $H$. There are two possibilities: either $P$ is still a shortest path in $H$ but no longer unique, or $P$ is no longer a shortest path in $H$.
Either way, there must be a new shortest path in $H$ between $a$ and $b$ that is of the same or shorter length than $P$; let us call this new path $Q$. The path $Q$ must contain the edge $\{v,w\}$, since that edge is the only difference between $G$ and $H$. Then, there are two possibilities. If $Q$ connects $a,v,w,b$ in that order, then we have \[d_H(a,b)=d_G(a,v)+1+d_G(w,b).\] Otherwise, if $Q$ connects $a,w,v,b$ in that order, then we have \[d_H(a,b)=d_G(a,w)+1+d_G(v,b).\] \end{proof} We now present another technical lemma to be used in proving the characterization of minor maximal graphs. This lemma tells us that minor maximal graphs must be connected. \begin{lemma}\label{lem:maxls_are_connected} Let $G$ be a graph that is minor maximal amongst graphs with $n$ vertices, at most $m$ parallel edges between any given vertices, and spectator floor $k$. Then $G$ is connected. \end{lemma} \begin{proof} Suppose to the contrary that $G$ is not connected, but rather consists of connected components $G_1,G_2,\ldots,G_r$. We know from the proof of \cref{lem:maximals} that $\uspc{G}=\uspcf{G}=k$, and from \cref{prop:components} we know that \[\uspcf{G}=\uspcf{G_1}+\uspcf{G_2}+\cdots+\uspcf{G_r}.\] For each $i=1,2,\ldots,r$, let $H_i$ be a supergraph of $G_i$ that realizes $\uspcf{G_i}$ without adding any additional vertices; that is, $\uspc{H_i}=\uspcf{G_i}$. We know such $H_i$ exist from \cref{lem:no decontract}. Then let $H$ be the supergraph of $H_1 \cup H_2 \cup \cdots \cup H_r$ obtained by adding $r-1$ edges so as to join one parade from each of the $H_i$ into one long parade. Then we have that $H$ is a supergraph of $G$ and $H$ still has $n$ vertices. Furthermore, due to the way we constructed $H$, we have \begin{align*} \uspc{H} &=\uspc{H_1}+\uspc{H_2}+\cdots+\uspc{H_r} \\ & =\uspcf{G_1}+\uspcf{G_2}+\cdots+\uspcf{G_r} \\ & =\uspcf{G}=\uspc{G}. \end{align*} Since $H$ is a supergraph of $G$, it follows that $\uspcf{H}\geq \uspcf{G}=\uspc{G}$.
Furthermore, since $\uspc{H}=\uspc{G}$, it follows that $\uspcf{H}\leq \uspc{G}$. Hence $\uspcf{H}=\uspc{G}=k$. Thus, since $H$ has $n$ vertices and spectator floor $k$, and is a supergraph of $G$, and since we assumed $G$ was maximal amongst such graphs, it must be that $G=H$. Since we assumed that $G$ was disconnected and $H$ is connected by construction, this is a contradiction. \end{proof} Before presenting the main results of this section, we prove a lemma that takes care of a special case. \begin{lemma} \label{lem:1-parade} Let $m\geq2$. Then $G$ is minor maximal amongst graphs with $n$ vertices, at most $m$ parallel edges between any given pair of vertices, and spectator floor $n-1$, if and only if $G$ is an $m$-saturated crowded $1$-parade. \end{lemma} \begin{proof} Note that there is exactly one $m$-saturated crowded $1$-parade $H$ with $n$ vertices. Since $m\neq1$, no edge in $H$ is a unique shortest path. This implies that the unique shortest paths of $H$ each contain only one vertex. Therefore, $\uspc{H}=n-1$. Moreover, by \cref{lem:no decontract}, $\uspcf{H}$ is realized by a graph obtained from $H$ by adding edges; every such added edge is parallel to existing edges, and by \cref{lem:max2edges} such additions do not change the spectator number. Hence $\uspcf{H}=n-1$. Every graph containing $H$ as a minor either has more than $n$ vertices or a pair of vertices with more than $m$ edges joining them. Therefore, $H$ is maximal. Conversely, note that every graph $G$ with $n$ vertices and at most $m$ parallel edges between any given pair of vertices is a minor of $H$. Therefore, the only minor maximal graph amongst graphs with $n$ vertices, at most $m$ parallel edges between any given pair of vertices, and spectator floor $n-1$ is $H$. \end{proof} We now present the first of the two main results of this section, which together provide a complete characterization of the minor maximal graphs with a given spectator floor value. Note that complete graphs are both $1$-saturated crowded $1$-parades and $1$-saturated crowded $2$-parades. This gives some intuition for the reason the first sentence of the theorem is needed. \begin{theorem} Let $k\leq n-2$ and $m\geq1$, or let $k=n-1$ and $m\geq2$.
If $G$ is minor maximal amongst graphs with $n$ vertices, at most $m$ parallel edges between any given vertices, and spectator floor $k$, then $G$ is an $m$-saturated crowded $(n-k)$-parade. \end{theorem} \begin{proof} By \cref{lem:1-parade}, the result holds when $k=n-1$. Therefore, we may assume that $k\leq n-2$. Suppose that $G$ is minor maximal amongst graphs with $n$ vertices, at most $m$ parallel edges between any given vertices, and spectator floor $k$. We will show that $G$ is an $m$-saturated crowded $(n-k)$-parade. As was shown in the proof of \cref{lem:maximals}, $\uspc{G}=k$ and so $G$ has a parade $P$ of $n-k\geq2$ vertices. We will call the endpoints of $P$ by the names $a$ and $b$. We will now show that $P$ has the properties in the definition of a crowded $(n-k)$-parade, \cref{def:crowded_parade}. Let $v$ be a vertex outside the parade $P$. We will consider cases based on the number of vertices in $P$ that are adjacent to $v$. Suppose $v$ is adjacent to 3 or more vertices in $P$. Then two of the parade vertices that $v$ is adjacent to have a distance of 2 or more within the parade. Hence going through $v$ would provide a path of the same or lesser length between those two vertices as compared to the parade. This contradicts the properties of a parade. So $v$ cannot be adjacent to 3 or more vertices in the parade. Now suppose that $v$ is adjacent to 2 vertices in $P$, and those two vertices have a distance of 2 or more along the parade. The same argument from the last paragraph applies; this is a contradiction. Hence, if $v$ is adjacent to exactly 2 vertices in the parade, then those two vertices must also be adjacent to each other. Next, suppose that $v$ is adjacent to exactly 1 vertex in $P$. There are two cases: either $v$ is adjacent to an endpoint of $P$, or not. Suppose $v$ is adjacent to an endpoint of the parade; without loss of generality, let us assume $v$ is adjacent to $a$. 
Let $x$ be the vertex of $P$ that is adjacent to $a$, and let $H$ be the graph made from $G$ by adding edge $\{v,x\}$, as shown in \cref{fig:maxl_pf_1}. \begin{figure}[!htbp] \begin{subfigure}[b]{.47\textwidth} \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0)[label=below:$a$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (x) at (1,0)[label=below:$x$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (y1) at (2,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (y2) at (3,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (4,0)[label=below:$b$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (0,1)[label=left:$v$]{}; \draw (a) to (y1); \draw (y1) to (2.3,0); \draw[dotted,semithick] (2.4,0) to (2.6,0); \draw(2.7,0) to (y2); \draw (y2) to (b); \draw (a) to (v); \end{tikzpicture}\] \caption{The subgraph of $G$ induced by $P\cup\{v\}$.} \end{subfigure}\begin{subfigure}[b]{.47\textwidth} \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0)[label=below:$a$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (x) at (1,0)[label=below:$x$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (y1) at (2,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (y2) at (3,0){}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (4,0)[label=below:$b$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (0,1)[label=left:$v$]{}; \draw (a) to (y1); \draw (y1) to (2.3,0); \draw[dotted,semithick] (2.4,0) to (2.6,0); \draw(2.7,0) to (y2); \draw (y2) to (b); \draw (a) to (v); \draw (v) to (x); \end{tikzpicture}\] \caption{The subgraph of $H$ induced by $P\cup\{v\}$.} \end{subfigure} \caption{}\label{fig:maxl_pf_1} \end{figure} Then by \cref{lem:maximals}, one of the following is true: \[d_H(a,b)=d_G(a,v)+1+d_G(x,b)=1+d_G(a,x)+d_G(x,b) \geq 1+d_G(a,b)\] \[d_H(a,b)=d_G(a,x)+1+d_G(v,b)=1+d_G(a,v)+d_G(v,b)\geq 1+d_G(a,b)\] However, 
since $G$ is a subgraph of $H$, we must have $d_H(a,b)\leq d_G(a,b)$. This is a contradiction. Now we consider the situation where $v$ is adjacent to exactly one vertex of $P$, and that vertex is not an endpoint of $P$. Let $w$ be the vertex of $P$ adjacent to $v$, and let $x$ and $y$ be the vertices of $P$ adjacent to $w$, where $x$ is closer to $a$ than $y$ is. Now consider the graph $X$, which is the same as $G$ but with an edge from $v$ to $x$ added, and the graph $Y$, which is the same as $G$ but with an edge from $v$ to $y$ added. $G$, $X$, and $Y$ are as shown in \cref{fig:maxl_pf_2}. \begin{figure}[!htbp] \centering \begin{subfigure}[b]{.3\textwidth} \[\begin{tikzpicture}[x=0.8cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0)[label=below:$a$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (x) at (1,0)[label=below:$x$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (w) at (2,0)[label=below:$w$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (y) at (3,0)[label=below:$y$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (4,0)[label=below:$b$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (2,1)[label=left:$v$]{}; \draw (a) to (0.3,0); \draw [dotted,semithick] (0.4,0) to (0.6,0); \draw (0.7,0) to (x); \draw (x) to (y); \draw (y) to (3.3,0); \draw[dotted,semithick] (3.4,0) to (3.6,0); \draw (3.7,0) to (b); \draw (v) to (w); \end{tikzpicture}\] \caption{The subgraph of $G$ induced by $P\cup\{v\}$.} \end{subfigure}\;\;\;\;\begin{subfigure}[b]{.3\textwidth} \[\begin{tikzpicture}[x=0.8cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0)[label=below:$a$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (x) at (1,0)[label=below:$x$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (w) at (2,0)[label=below:$w$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (y) at (3,0)[label=below:$y$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at
(4,0)[label=below:$b$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (2,1)[label=left:$v$]{}; \draw (a) to (0.3,0); \draw [dotted,semithick] (0.4,0) to (0.6,0); \draw (0.7,0) to (x); \draw (x) to (y); \draw (y) to (3.3,0); \draw[dotted,semithick] (3.4,0) to (3.6,0); \draw (3.7,0) to (b); \draw (v) to (w); \draw (v) to (x); \end{tikzpicture}\] \caption{The subgraph of $X$ induced by $P\cup\{v\}$.} \end{subfigure}\;\;\;\;\begin{subfigure}[b]{.3\textwidth} \[\begin{tikzpicture}[x=0.8cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0)[label=below:$a$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (x) at (1,0)[label=below:$x$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (w) at (2,0)[label=below:$w$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (y) at (3,0)[label=below:$y$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (4,0)[label=below:$b$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (2,1)[label=left:$v$]{}; \draw (a) to (0.3,0); \draw [dotted,semithick] (0.4,0) to (0.6,0); \draw (0.7,0) to (x); \draw (x) to (y); \draw (y) to (3.3,0); \draw[dotted,semithick] (3.4,0) to (3.6,0); \draw (3.7,0) to (b); \draw (v) to (w); \draw (v) to (y); \end{tikzpicture}\] \caption{The subgraph of $Y$ induced by $P\cup\{v\}$.} \end{subfigure} \caption{}\label{fig:maxl_pf_2} \end{figure} For $X$, \cref{lem:maximals} tells us that one of \cref{eqn:QX1,eqn:QX2}, below, is true. \begin{align}\label{eqn:QX1} \begin{split} d_X(a,b)&=d_G(a,v)+1+d_G(x,b)\\ &=d_G(a,v)+d_G(v,w)+d_G(x,b)\\ &\geq d_G(a,w)+d_G(x,b)\\ &=d_G(a,b)+1 \end{split} \end{align} Note that the final strict inequality in \cref{eqn:QX2} holds because an off-parade path between two parade vertices is strictly longer than the parade route.
\begin{align}\label{eqn:QX2} \begin{split} d_X(a,b)&=d_G(a,x)+1+d_G(v,b)\\ &=d_G(a,x)+1+d_G(x,v)-d_G(x,v)+d_G(v,b)\\ &\geq d_G(a,x)+d_G(x,v)+d_G(v,b)-1\\ &> d_G(a,b)-1 \end{split} \end{align} Since $G$ is a subgraph of $X$, we must have $d_X(a,b)\leq d_G(a,b)$, therefore \cref{eqn:QX1} is a contradiction, and so \cref{eqn:QX2} is true and implies $d_X(a,b)=d_G(a,b)$. For $Y$, \cref{lem:maximals} tells us that one of the following is true: \begin{equation}\label{eqn:QY1} d_Y(a,b)=d_G(a,v)+1+d_G(y,b) \end{equation} \begin{equation}\label{eqn:QY2} d_Y(a,b)=d_G(a,y)+1+d_G(v,b) \end{equation} However, \cref{eqn:QY2} leads to a contradiction by the same logic as for \cref{eqn:QX1}. Hence \cref{eqn:QY1} is true and by the same logic as for \cref{eqn:QX2} it implies $d_Y(a,b)=d_G(a,b)$. When we replace the left-hand sides of \cref{eqn:QX2,eqn:QY1} with $d_G(a,b)$ and add them together, we get the following. Note that the strict inequality holds because an off-parade path between two parade vertices is strictly longer than the parade route. \begin{align}\label{eqn:QXQY} \begin{split} 2d_G(a,b)&=d_G(a,v)+d_G(v,b)+d_G(a,x)+d_G(y,b)+2\\ &>d_G(a,b)+d_G(a,x)+d_G(y,b)+2\implies\\ d_G(a,b)&>d_G(a,x)+d_G(y,b)+2\\ &=d_G(a,b) \end{split} \end{align} which is a contradiction. Thus we conclude that in $G$, a vertex $v$ that is not in $P$ cannot be adjacent to exactly 1 vertex in $P$. For the last case, let us consider vertices of $G$ that are not in $P$ and are not adjacent to any vertices in $P$. We will show these cannot exist. First note that $G$ must be connected because of \cref{lem:maxls_are_connected}. Let $d_G(v,P)$ denote the distance from any vertex $v$ to the path $P$; in other words, \[d_G(v,P):=\min_{p\in P}\{d_G(v,p)\}.\] Since $G$ is connected, $d_G(v,P)$ is finite for all $v\in V(G)$. For any vertex $v$ that is not in $P$ and not adjacent to $P$, either $d_G(v,P)=2$ or $d_G(v,P)>2$.
If $d_G(v,P)>2$, then there is a path of length $d_G(v,P)$ from $v$ to a vertex $p\in P$, and so there is some vertex $v'$ along that path that has $d_G(v',P)=2$. Hence, if there are any vertices of $G$ that are not in $P$ and are not adjacent to $P$, then there is some vertex $v$ with $d_G(v,P)=2$. We will show this leads to a contradiction. Since $d_G(v,P)=2$, there is some vertex, call it $w$, that is adjacent to $v$ and adjacent to $P$. We have already shown in this proof that if $w$ is adjacent to any vertex of $P$, then it is adjacent to exactly two vertices of $P$, which are also adjacent to each other. Let us call the two vertices of $P$ that $w$ is adjacent to by the names $x$ and $y$, and the endpoints of $P$ by the names $a$ and $b$. As before, we shall assume that $a$ is the endpoint that is closer to $x$. Now consider the graph $X$, which is the same as $G$ but with an edge from $v$ to $x$ added, and the graph $Y$, which is the same as $G$ but with an edge from $v$ to $y$ added. The situation is shown in \cref{fig:maxl_pf_3}.
\begin{figure}[!htbp] \centering \begin{subfigure}[b]{.3\textwidth} \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0)[label=below:$a$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (x) at (0.85,0)[label=below:$x$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (w) at (1.5,1)[label=right:$w$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (y) at (2.15,0)[label=below:$y$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (3,0)[label=below:$b$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (1.5,2)[label=right:$v$]{}; \draw (a) to (0.3,0); \draw [dotted,semithick] (.33,0) to (.52,0); \draw (.55,0) to (x); \draw (y) to (2.45,0); \draw[dotted,semithick] (2.48,0) to (2.67,0); \draw (2.7,0) to (b); \draw (v) to (w) to (x) to (y) to (w); \end{tikzpicture}\] \caption{The subgraph of $G$ induced by $P\cup\{v,w\}$.} \end{subfigure}\;\;\;\;\begin{subfigure}[b]{.3\textwidth} \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at (0,0)[label=below:$a$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (x) at (0.85,0)[label=below:$x$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (w) at (1.5,1)[label=right:$w$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (y) at (2.15,0)[label=below:$y$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (3,0)[label=below:$b$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (1.5,2)[label=right:$v$]{}; \draw (a) to (0.3,0); \draw [dotted,semithick] (.33,0) to (.52,0); \draw (.55,0) to (x); \draw (y) to (2.45,0); \draw[dotted,semithick] (2.48,0) to (2.67,0); \draw (2.7,0) to (b); \draw (v) to (w) to (x) to (y) to (w); \draw (v) to (x); \end{tikzpicture}\] \caption{The subgraph of $X$ induced by $P\cup\{v,w\}$.} \end{subfigure}\;\;\;\;\begin{subfigure}[b]{.3\textwidth} \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill,inner sep=1pt,minimum size=1pt] (a) at 
(0,0)[label=below:$a$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (x) at (0.85,0)[label=below:$x$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (w) at (1.5,1)[label=left:$w$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (y) at (2.15,0)[label=below:$y$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (b) at (3,0)[label=below:$b$]{}; \node[vertex][fill,inner sep=1pt,minimum size=1pt] (v) at (1.5,2)[label=left:$v$]{}; \draw (a) to (0.3,0); \draw [dotted,semithick] (.33,0) to (.52,0); \draw (.55,0) to (x); \draw (y) to (2.45,0); \draw[dotted,semithick] (2.48,0) to (2.67,0); \draw (2.7,0) to (b); \draw (v) to (w) to (x) to (y) to (w); \draw (v) to (y); \end{tikzpicture}\] \caption{The subgraph of $Y$ induced by $P\cup\{v,w\}$.} \end{subfigure} \caption{}\label{fig:maxl_pf_3} \end{figure} These graphs are not the same as the previous $X$ and $Y$, but all of the arguments from before regarding \cref{eqn:QX1,eqn:QX2,eqn:QY1,eqn:QY2} still apply. Most of \cref{eqn:QXQY} applies as well, with the exception of the last line, where we used the fact that $d_G(a,x)+d_G(y,b)+2=d_G(a,b)$, which is no longer true. Now we have $d_G(a,x)+d_G(y,b)+2=d_G(a,b)+1$, which still leads to a contradiction when applied to the penultimate line of \cref{eqn:QXQY}. Thus we conclude that there are no vertices of $G$ that are not in $P$ and not adjacent to $P$. Every vertex of $G$ which is not in $P$ must be adjacent to two adjacent vertices in $P$. This fulfills the first condition for $G$ to be a crowded $(n-k)$-parade. We will now show that the second condition is also true. Let $v$ and $v'$ be two vertices outside $P$ that are adjacent to a common vertex $w$ in $P$, and suppose $v,v'$ are not adjacent to each other. Let $F$ be the graph made from $G$ by adding an edge between $v$ and $v'$. 
Now we can assume without loss of generality that the first equation from \cref{lem:maximals} is true (if instead the second equation holds, simply swap the names of $a$ and $b$). Then we have the following. Note that to get from line 3 to line 4 of the next equation, we are applying the fact that an off-parade path between two parade vertices is strictly longer than the parade route. \begin{align*} d_F(a,b)&=d_G(a,v)+1+d_G(v',b)\\ &=d_G(a,v)+d_G(v,w)-d_G(v,w)+1-d_G(w,v')+d_G(w,v')+d_G(v',b)\\ &=d_G(a,v)+d_G(v,w)-1+d_G(w,v')+d_G(v',b)\\ &\geq d_G(a,w)+d_G(w,b)+1\\ &=d_G(a,b)+1 \end{align*} However, $G$ is a subgraph of $F$, so we also have $d_F(a,b)\leq d_G(a,b)$. This is a contradiction. Hence, $G$ is a crowded $(n-k)$-parade. We will now argue that $G$ is $m$-saturated. For any edge $e$ of $G$ that is not in $P$, if there are fewer than $m$ parallel copies of $e$, then we can add another parallel copy of $e$ without changing any distances in $G$, and without creating any new paths between the endpoints of $P$ that have the same length as $P$. Hence, adding such an edge does not change the spectator floor value of the graph. Given that $G$ is maximal amongst graphs with $n$ vertices, at most $m$ parallel copies of each edge, and spectator floor $k$, $G$ must already contain all such edges. Hence $G$ is an $m$-saturated crowded $(n-k)$-parade.\end{proof} We will now present the second of the two main results in this section, which is the converse of the last theorem, showing that we have a complete characterization of the minor maximal graphs. \begin{theorem}\label{thm:maxl_2} Let $k\leq n-2$ and $m\geq1$, or let $k=n-1$ and $m\geq2$. If $G$ is an $m$-saturated crowded $(n-k)$-parade, then $G$ is minor maximal amongst graphs with $n$ vertices, at most $m$ parallel edges between any given vertices, and spectator floor $k$. \end{theorem} \begin{proof} By \cref{lem:1-parade}, the result holds when $k=n-1$.
Therefore, we may assume that $k\leq n-2$. Suppose that $G$ is an $m$-saturated crowded $(n-k)$-parade, but $G$ is not minor maximal amongst graphs with $n$ vertices, at most $m$ parallel edges between any given vertices, and spectator floor $k$. Then there is some other graph, call it $G'$, which is minor maximal on that set and of which $G$ is a proper subgraph. Since $G'$ is minor maximal, it must be an $m$-saturated crowded $(n-k)$-parade as well. Let us call the parade for which $G$ fulfills the definition of a crowded parade, \cref{def:crowded_parade}, by the name $P$. First note that if $m>1$, then there are no unique shortest paths in $G'$ except $P$ and its subpaths; hence for $m>1$, $P$ is still the parade for which $G'$ is a crowded parade. Now suppose $m=1$ (in other words, we are working with simple graphs only) and suppose $P$ is either not a parade in $G'$ or not a parade for which $G'$ is crowded. Let us call the parade for which $G'$ is a crowded parade by the name $P'$, and the endpoints of $P$ by the names $a,b$ and of $P'$ by the names $a',b'$. Since $P\neq P'$, at least one of $a',b'$ is not in $P$. Since $G$ is a subgraph of $G'$, $d_{G'}(v,w)\leq d_G(v,w)$ for all pairs of vertices $v,w$. So $d_G(a',b')\geq d_{G'}(a',b')=d_G(a,b)=n-k-1$. A generalized figure of graph $G$ is shown in \cref{fig:gen_crowded_parade}, where path $P$ consists of the vertices labeled $v_1=a$ to $v_{n-k}=b$. Nodes labeled $K_{m_i}$ represent complete subgraphs on $m_i$ vertices, and bold edges represent a complete set of edges between the connected structures. It should be understood, however, that it is possible for some $m_i$ to be 0, in which case $K_{m_i}$ is an empty graph, and there are no edges connecting $K_{m_{i-1}}$, $K_{m_{i}}$, and $K_{m_{i+1}}$. For example, in the previous \cref{fig:crowded_parade}, we had $(m_1,m_2,m_3,m_4,m_5,m_6)=(0,1,2,1,0,3)$.
\begin{figure}[!htbp] \[\begin{tikzpicture}[x=1cm, y=1cm] \node[vertex][fill] (a) at (0,0)[label=below:${v_1=a}$]{}; \node[vertex][fill] (b) at (2,0)[label=below:$v_2$]{}; \node[vertex][fill] (c) at (4,0)[label=below:$v_3$]{}; \node (d) at (6,0){}; \node[vertex][fill] (e) at (8,0)[label=below:$v_{n-k-2}$]{}; \node[vertex][fill] (f) at (10,0)[label=below:$v_{n-k-1}$]{}; \node[vertex][fill] (g) at (12,0)[label=below:${v_{n-k}=b}$]{}; \node[vertex][] (ab) at (1,1.5){$K_{m_1}$}; \node[vertex][] (bc) at (3,1.5){$K_{m_2}$}; \node[vertex][] (cd) at (5,1.5){$K_{m_3}$}; \node[vertex][] (ef) at (9,1.5){$K_{m_{n-k-2}}$}; \node[vertex][] (fg) at (11,1.5){$K_{m_{n-k-1}}$}; \draw (a) to (c); \draw(c) to (5,0); \draw[dotted,semithick] (5.5,0) to (6.5,0); \draw (7,0) to (e); \draw (e) to (g); \draw[very thick] (ab) to (bc) to (cd); \draw[very thick](cd) to (6,1.5); \draw[very thick, dotted] (6.5,1.5) to (7.5,1.5); \draw[very thick] (7.75,1.5) to (ef); \draw[very thick] (ef) to (fg); \draw[very thick] (a) to (ab) to (b) to (bc) to (c) to (cd); \draw[very thick] (e) to (ef) to (f) to (fg) to (g); \draw[very thick] (cd) to (5.4,.9); \draw[very thick, dotted] (5.5, .75) to (5.65,.525); \draw[very thick] (5.75,.375) to (d); \draw[very thick] (7,1.5) to (7.3,1.05); \draw[very thick, dotted] (7.4,.9) to (7.55, .675); \draw[very thick] (7.65,.525) to (e); \end{tikzpicture}\] \caption{A generalized figure of $G$, a simple crowded $(n-k)$-parade.} \label{fig:gen_crowded_parade} \end{figure} For graph $G$, since $d_G(a',b')\geq n-k-1$, it must be the case that either $a'$ or $b'$ is in $K_{m_1}$ or $K_{m_{n-k-1}}$, which are on opposite ends of the graph. Furthermore, in order for the path $P'$ from $a'$ to $b'$ to be a unique shortest path in $G'$, it is necessary that $K_{m'_2}$ and $K_{m'_{n-k-2}}$ be empty, where by $K_{m'_i}$ we mean the subgraph in $G'$ corresponding to $K_{m_i}$ in $G$.
If these were not empty, then there would be many alternate routes of equal length connecting $a'$ to $b'$. Since $K_{m'_2}$ and $K_{m'_{n-k-2}}$ are empty, and $G'$ is simple, there are graph symmetries identifying any vertex in $K_{m'_1}$ with any other vertex in $K_{m'_1}$ as well as $v_1$. Likewise, there are graph symmetries identifying any vertex in $K_{m'_{n-k-1}}$ with any other vertex in $K_{m'_{n-k-1}}$ and $v_{n-k}$. Thus, there is a graph symmetry identifying $P$ with $P'$. Therefore, $P$ is still a parade in $G'$. Moreover, we can assume that $P$ is the parade for which $G'$ is a crowded parade. In any case, we now have that both $G$ and $G'$ are crowded $(n-k)$-parades on the same number of vertices and are crowded on the same parade $P$, and $G$ is a proper subgraph of $G'$. Thus $G'$ contains at least one edge $e$ that is not in $G$. There are three cases: either $e$ connects two vertices in $P$, or it connects a vertex in $P$ to a vertex not in $P$, or it connects two vertices not in $P$. If $e$ connects two vertices in $P$, then $P$ would not be a parade of $n-k$ vertices in $G'$, so this is a contradiction. If $e$ connects a vertex in $P$ to a vertex not in $P$, then this provides an alternate path of the same or shorter length between the endpoints of $P$, in contradiction to $P$ being a parade. Finally, if $e$ connects two vertices not in $P$, this also provides an alternate path of the same or shorter length between the endpoints of $P$. In any case, we get a contradiction. The theorem is thus proven. \end{proof} \section{Further Questions} \label{further questions} There are many additional questions that one may consider in this line of research. In this paper, we have characterized the minor-minimal graphs $G$ with $\uspcf{G}=k$ for $k=1$ and $k=2$. For larger $k$, consider the following. \begin{question} Can we characterize the minor-minimal graphs $G$ with $\uspcf{G}=k$ for $k=3$? $k=4$? Etc.
\end{question} Algorithmic questions related to the spectator number and spectator floor of a graph have not been considered in this paper, but we hope to work on some of these questions in the future. \begin{question} Can the minor-monotone floor of the spectator number be computed in polynomial time? \end{question} \begin{question} In the algorithm for calculating the minor-monotone floor of the spectator number, what are the optimal edges to add? \end{question} \begin{question} Can the minor-monotone floor of the spectator number be computed with a ``greedy algorithm''? (That is, can we add a set of edges to a graph $G$ to obtain a supergraph $H$ such that each time we add an edge the spectator number is weakly decreasing? strictly decreasing?) \end{question} A related, but distinct question is the following. \begin{question} If $\uspc{G}=k$ and $\uspcf{G}=m$, can we always find a supergraph $F$ of $G$ that achieves $\uspc{F}=i$ for all $i$ such that $m\leq i \leq k$? \end{question} Finally, one can consider how well this bound relates to our original motivation. \begin{question} This problem originated as a bound on the inverse eigenvalue problem. How good a bound is this, and when does it fail? \end{question} \section*{Acknowledgements} This work started at the MRC workshop ``Finding Needles in Haystacks: Approaches to Inverse Problems using Combinatorics and Linear Algebra'', which took place in June 2021 with support from the National Science Foundation and the American Mathematical Society. The authors are grateful to the organizers of this meeting. In particular, this material is based upon work supported by the National Science Foundation under Grant Number DMS 1916439.
https://arxiv.org/abs/1209.5360
Balanced Allocations and Double Hashing
Double hashing has recently found more common usage in schemes that use multiple hash functions. In double hashing, for an item $x$, one generates two hash values $f(x)$ and $g(x)$, and then uses combinations $(f(x) +k g(x)) \bmod n$ for $k=0,1,2,...$ to generate multiple hash values from the initial two. We first perform an empirical study showing that, surprisingly, the performance difference between double hashing and fully random hashing appears negligible in the standard balanced allocation paradigm, where each item is placed in the least loaded of $d$ choices, as well as several related variants. We then provide theoretical results that explain the behavior of double hashing in this context.
\section{Introduction} \label{sec:introduction} The standard balanced allocation paradigm works as follows: suppose $n$ balls are sequentially placed into $n$ bins, where each ball is placed in the least loaded of $d$ uniform independent choices of the bins. Then the maximum load (that is, the maximum number of balls in a bin) is $\frac{\log \log n}{\log d} + O(1)$, much lower than the $\frac{\log n}{\log \log n} (1 +o(1))$ obtained when each ball is placed according to a single uniform choice \cite{ABKU}. The assumption that each ball obtains $d$ independent uniform choices is a strong one, and a reasonable question, tackled by several other works, is how much randomness is needed for these types of results (see related work below). Here we consider a novel approach, examining balanced allocations in conjunction with {\em double hashing}. In the well-known technique of standard double hashing for open-addressed hash tables, the $j$th ball obtains two hash values, $f(j)$ and $g(j)$. For a hash table of size $n$, $f(j) \in [0,n-1]$ and $g(j) \in [1,n-1]$. Successive locations $h(j,k) = (f(j)+kg(j)) \bmod n$, $k=0,1,2,\ldots$, are tried until an empty slot is found. As discussed later in this introduction, double hashing is extremely conducive to both hardware and software implementations and is used in many deployed systems. In our context, we use the double hashing approach somewhat differently. The $j$th ball again obtains two hash values $f(j)$ and $g(j)$. The $d$ choices for the $j$th ball are then given by $h(j,k) = (f(j) + k g(j)) \bmod n$, $k=0,1,\ldots,d-1$, and the ball is placed in the least loaded. We generally assume that $f(j)$ is uniform over $[0,n-1]$, $g(j)$ is uniform over all numbers in $[1,n-1]$ relatively prime to $n$, and all hash values are independent. (It is convenient to consider $n$ a prime, or take $n$ to be a power of 2 so that the $g(j)$ are uniformly chosen random odd numbers, to ensure the $h(j,k)$ values are distinct.)
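To make the placement rule concrete, the following is a minimal Python sketch of the process just described (our own illustration; the function and variable names are not from any particular implementation). We take $n$ to be a power of 2 and draw $g(j)$ from the odd residues, so that the $d$ choices for each ball are distinct:

```python
import random

def double_hash_choices(f, g, d, n):
    """Return the d bin choices (f + k*g) mod n for k = 0, ..., d-1."""
    return [(f + k * g) % n for k in range(d)]

def balanced_allocation(n, d, rng):
    """Throw n balls into n bins; each ball goes to the least loaded of
    d choices generated by double hashing.  Here n is a power of 2, so
    an odd g is invertible mod n and the d choices are distinct."""
    loads = [0] * n
    for _ in range(n):
        f = rng.randrange(n)        # uniform over [0, n-1]
        g = rng.randrange(1, n, 2)  # uniform odd value in [1, n-1]
        choices = double_hash_choices(f, g, d, n)
        best = min(choices, key=lambda b: loads[b])
        loads[best] += 1
    return loads

rng = random.Random(0)
loads = balanced_allocation(2 ** 10, 3, rng)
print(max(loads))
```

For $n = 2^{10}$ and $d = 3$ the printed maximum load is a small constant, consistent with the $\frac{\log \log n}{\log d} + O(1)$ bound.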
One might expect that limiting the space of random choices available to the balls in this way would change the behavior of this random process significantly. We show that this is not the case, both in theory and in practice. Specifically, by ``essentially indistinguishable'', we mean that, empirically, for any constant $i$ and sufficiently large $n$ the fraction of bins of load $i$ is well within the difference expected by experimental variance for the two methods; in other words, in practice, even for reasonable $n$, one cannot readily distinguish the two methods. By ``vanishing'' we mean that, analytically, for any constant $i$ the asymptotic fraction of bins of load $i$ for double hashing differs only by $o(1)$ terms from fully independent choices with high probability. A related key result is that $O(\log \log n)$ bounds on the maximum load hold for double hashing as well. Surprisingly, the difference between $d$ fully independent choices and $d$ choices using double hashing is essentially indistinguishable for sufficiently large $n$ and vanishing asymptotically.\footnote{To be clear, we do not mean that there is {\em no} difference between double hashing and fully random hashing in this setting; there clearly is, and we note a simple example further in the paper.
As we show, analytically in the limit for large $n$ the difference is vanishing (Theorem~\ref{mainthm} and Corollary~\ref{cormain}), and for finite $n$ the results from our experiments demonstrate the difference is essentially indistinguishable (Section~\ref{sec:sims}).} As an initial example of empirical results, Table~\ref{table_example} below shows the fraction of bins of load $x$ for various $x$ taken over 10000 trials, with $n=2^{14}$ balls thrown into $n$ bins using $d=3$ and $d=4$ choices, using both double hashing and fully random hash values (where for our proxy for ``random'' we utilize the standard approach of simply generating successive random values using the drand48 function in C initially seeded by time). Most values are given to five decimal places. The performance difference is essentially indistinguishable, well within what one would expect simply from variance from the sampling process. \begin{table*}[thp] \begin{subfigure}[h]{0.45\textwidth} \centering \begin{tabular}{|c|c|c|} \hline Load & Fully Random & Double Hashing \\ \hline 0 & 0.17693 & 0.17691 \\ 1 & 0.64664 & 0.64670 \\ 2 & 0.17592 & 0.17589 \\ 3 & 0.00051 & 0.00051 \\ \hline \end{tabular} \caption{3 choices, $n=2^{14}$ balls and bins} \end{subfigure} \qquad \begin{subfigure}[h]{0.45\textwidth} \centering \begin{tabular}{|c|c|c|} \hline Load & Fully Random & Double Hashing \\ \hline 0 & 0.14081 & 0.14081 \\ 1 & 0.71840 & 0.71841 \\ 2 & 0.14077 & 0.14076 \\ 3 & $2.25 \cdot 10^{-5}$ & $2.29\cdot 10^{-5}$ \\ \hline \end{tabular} \caption{4 choices, $n=2^{14}$ balls and bins} \end{subfigure} \caption{An initial example showing the performance of double hashing compared to fully random hashing. In our tables, the row with load $x$ gives the fraction of the bins that have load $x$ over all trials. So over 10000 trials of throwing $n=2^{14}$ balls into $2^{14}$ bins using 3 choices and double hashing, the fraction of bins with load 0 was $0.17691$. 
\label{table_example}} \end{table*} More extensive empirical results appear in Appendix~\ref{sec:sims}. In particular, we also consider two extensions to the standard paradigm: V\"{o}cking's extension (sometimes called $d$-left hashing), where the $n$ bins are split into $d$ subtables of size $n/d$ laid out left to right, the $d$ choices consist of one uniform independent choice in each subtable, and ties for the least loaded bin are broken to the left \cite{vocking}; and the continuous variation, where the bins represent queues, and the balls represent customers that arrive as a Poisson process and have exponentially distributed service requirements \cite{Mitzenmacher}. We again find empirically that replacing fully random choices with double hashing leads to essentially indistinguishable results in practice.\footnote{We encourage the reader to examine these experimental results. However, because we recognize some readers are as a rule uninterested in experimental results, we have moved them to an appendix.} In this paper, we provide theoretical results explaining why this would be the case. There are multiple methods available that can yield $O(\log \log n)$ bounds on the maximum load when $n$ balls are thrown into $n$ bins in the setting of fully random choices. We therefore first demonstrate how some previously used methods, including the layered induction approach of \cite{ABKU} and the witness tree approach of \cite{vocking}, readily yield $O(\log \log n)$ bounds; this asymptotic behavior is, arguably, unsurprising (at least in hindsight). We then examine the key question of why the difference in empirical results is vanishing, a much stronger requirement. For the case of fully random choices, the asymptotic fraction of bins of each possible load can be determined using fluid limit methods that yield a family of differential equations describing the process behavior \cite{Mitzenmacher}. 
It is not a priori clear, however, why the method of differential equations should necessarily apply when using double hashing, and the primary result of this paper is to explain why it in fact applies. The argument depends technically on the idea that the ``history'' engendered by double hashing in place of $d$ fully random hash functions has only a vanishing (that is, $o(1)$) effect on the differential equations that correspond to the limiting behavior of the bin loads. We believe this resolution suggests that double hashing will be found to obtain the same results as fully random hashing in other hash-based structures, which may be important in practical settings. We argue these results are important for multiple reasons. First, we believe the fact that moving from fully random hashing to double hashing does not change performance for these particular balls and bins problems is interesting in its own right. But it also has practical applications; multiple-choice hashing is used in several hardware systems (such as routers), and double hashing both requires less (pseudo-)randomness and is extremely conducive to implementation in hardware \cite{chunkstash,heileman2005caching}. (As we discuss below, it may also be useful in software systems.) Both the fact that double hashing does not change performance, and the fact that one can very precisely determine the performance of double hashing for load balancing simply using the same fluid limit equations as have been used under the assumption of fully random hashing, are therefore of major importance for designing systems that use multiple-choice methods (and convincing system designers to use them).
Finally, as mentioned, these results suggest that using double hashing in place of fully random choices may similarly yield the same performance in other settings that make use of multiple hash functions, such as for cuckoo hashing or in error-correcting codes, offering the same potential benefits for these problems. We have explored this issue further in a subsequent (albeit already published) paper \cite{MT}, where there remain further open questions. In particular, we have not yet found how to use the fluid limit analysis used here for these other problems. It has also been remarked to us that all of our arguments here apply beyond double hashing; any hashing scheme where the $d$ choices for a ball are made so that they are pairwise independent and uniform would yield the same result by the same argument. That is, if for a given ball with $d$ choices $h_1,h_2,\ldots,h_d$, for any distinct bins $b_1$ and $b_2$ we have for all $1\leq i,j \leq d, i \neq j$: $$\Pr(h_i = b_1) = 1/n \mbox{ and }$$ $$\Pr(h_i = b_1 \mbox{ and } h_j = b_2) = \frac{1}{n(n-1)},$$ then our results apply. (Note that $\frac{1}{n(n-1)}$ is the value consistent with $\Pr(h_i = b_1) = 1/n$ after summing over the $n-1$ possible values of $b_2$, and it is the probability achieved by double hashing; see also the corresponding calculation in Section~\ref{sec:witness}.) Unfortunately, we do not know of any actual scheme besides double hashing in practical use with these properties; hence we focus on double hashing throughout.
Specific recent related works include results on standard one-choice balls and bins problems \cite{SHF}, hashing with linear probing with limited independence \cite{ppr}, and tabulation hashing \cite{pt}; other works involving balls and bins with less randomness include \cite{godfrey,peres}. As another example, Woelfel shows that a variation of V\"{o}cking's results holds using simple hash functions that utilize a collection of $k$-wise independent hash functions for small $k$, and a random vector requiring $o(n)$ space \cite{Woelfel}. Another related work in the balls and bins setting is the paper of Kenthapadi and Panigrahy \cite{kpbalanced}, who consider a setting where balls are not allowed to choose any two bins, but are forced to choose two bins corresponding to an edge on an underlying random graph. In the same paper, they also show that two random choices that yield $d$ bins suffice for $O(\log \log n)$ bounds on the maximum load similar to those one obtains with $d$ fully random choices, where in their case each random choice gives a contiguous block of $d/2$ bins. Interestingly, the average length of an unsuccessful search sequence for standard double hashing in an open address hash table, when the table load is a constant $\alpha$, has been shown to be, up to lower order terms, $1/(1-\alpha)$, showing that double hashing has essentially the same performance as random probing (where each ball would have its own random permutation of the bins to examine, in order, until finding an empty bin) when using traditional hash tables \cite{bradford2007probabilistic,guibas1978analysis,lueker1993more}. These results appear to have been derived using different techniques than we utilize here; it could be worthwhile to construct a general analysis that applies to both settings. A few papers have recently suggested using double hashing in schemes where one would use multiple hash functions and shown little or no loss in performance.
For Bloom filters, Kirsch and Mitzenmacher \cite{kirsch:bbb}, starting from the empirical analysis by Dillinger and Manolios \cite{dillinger3312bfp}, prove that using double hashing has negligible effects on Bloom filter performance. This result is closest in spirit to our current work; indeed, the type of analysis here can be used to provide an alternative argument for this phenomenon, although the case of Bloom filters is inherently simpler. Several available online implementations of Bloom filters now use this approach, suggesting that double hashing can be significantly beneficial in software as well as hardware implementations.\footnote{See, for example, \url{http://leveldb.googlecode.com/svn/trunk/util/bloom.cc}, \url{https://github.com/armon/bloomd}, and \url{http://hackage.haskell.org/packages/archive/bloomfilter/1.0/doc/html/bloomfilter.txt}.} Bachrach and Porat use double hashing in a variant of min-wise independent sketches \cite{bachrach2010fast}. The reduction in randomness stemming from using double hashing to generate multiple hash values can be useful in other contexts. For example, it is used in \cite{MVad} to improve results where pairwise independent hash functions are sufficient for suitably random data; using double hashing requires fewer hash values to be generated (two in place of a larger number), which means less randomness in the data is required. Finally, in work subsequent to the original draft of this paper \cite{MT}, we have empirically examined double hashing for other algorithms such as cuckoo hashing, and again found essentially no empirical difference between fully random hashing and double hashing in this and other contexts. However, theoretical results for these settings that prove this lack of difference are as yet very limited.
Arguably, the main difference between our work and other related work is that in our setting with double hashing we find the empirical results are essentially indistinguishable in practice, and we focus on examining this phenomenon. \section{Initial Theoretical Results} We now consider formal arguments for the excellent behavior of double hashing. We begin with some simpler but coarser arguments that have been previously used in multiple-choice hashing settings, based on majorization and witness trees. While our witness tree argument dominates our majorization argument, we present both, as they may be useful in considering future variations, and they highlight how these techniques apply in these settings. In the following section, we then consider the fluid limit methodology, which best captures the result we desire here, namely that the load distributions are essentially the same with fully random hashing and double hashing. However, the fluid limit methodology captures results about the fraction of bins with load $i$, for every constant value $i$, and does not readily provide $O(\log \log n)$ bounds (without specialized additional work, which often depends on the techniques used below). The reader conversant with balanced allocation results utilizing majorization and witness trees may choose to skip this section. \subsection{A Majorization Argument} We first note that using double hashing with two choices and using random hashing with two distinct hash values per ball are equivalent. With this we can provide a simple argument, showing the seemingly obvious fact that using double hashing with $d > 2$ choices is at least as good as using 2 random choices. This in turn shows that double hashing maintains a $\log \log n +O(1)$ maximum load in the standard balls and bins setting.
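Before turning to the formal argument, the claim can also be probed with a quick simulation; the following sketch (our own code, much smaller than the experiments reported above) compares the empirical load distributions of fully random hashing and double hashing with $d$ choices:

```python
import random
from collections import Counter

def run_trial(n, d, scheme, rng):
    """Place n balls into n bins, d choices per ball; scheme is 'random'
    (d independent uniform bins) or 'double' (double hashing with n a
    power of 2 and g odd).  Returns the vector of bin loads."""
    loads = [0] * n
    for _ in range(n):
        if scheme == 'random':
            choices = [rng.randrange(n) for _ in range(d)]
        else:
            f = rng.randrange(n)
            g = rng.randrange(1, n, 2)
            choices = [(f + k * g) % n for k in range(d)]
        best = min(choices, key=lambda b: loads[b])
        loads[best] += 1
    return loads

def load_fractions(n, d, scheme, trials, rng):
    """Fraction of bins with each load, aggregated over trials."""
    counts = Counter()
    for _ in range(trials):
        counts.update(run_trial(n, d, scheme, rng))
    total = n * trials
    return {load: cnt / total for load, cnt in sorted(counts.items())}

rng = random.Random(1)
fr = load_fractions(4096, 3, 'random', 10, rng)
fd = load_fractions(4096, 3, 'double', 10, rng)
print(fr, fd)
```

With $n = 2^{12}$, $d = 3$, and 10 trials per scheme, the two resulting distributions agree to within sampling noise, in line with Table~\ref{table_example}.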
Our approach uses a standard majorization and coupling argument, where the coupling links the random choices made by the processes when using double hashing and using random hashing while maintaining the fidelity of both individual processes. (See, e.g., \cite{ABKU,Steger}, or \cite{majorization} for more background on majorization.) Let $\vec{x}=(x_1,\ldots,x_n)$ be a vector with elements in non-increasing order, so $x_1 \geq x_2 \geq \cdots \geq x_n$, and similarly for $\vec{y}=(y_1,\ldots,y_n)$. We say that $\vec{x}$ majorizes $\vec{y}$ if $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i$ and, for $j < n$, $\sum_{i=1}^j x_i \geq \sum_{i=1}^j y_i$. For two Markovian processes $X$ and $Y$, we say that $X$ stochastically majorizes $Y$ if there is a coupling of the processes $X$ and $Y$ so that at each step under the coupling the vector representing the state of $X$ majorizes the vector representing the state of $Y$. We note that because we use the loads of the bins as the state, the balls and bins processes we consider are Markovian. We make use of the following simple and standard lemma. (See, for example, \cite[Lemma 3.4]{ABKU}.) \begin{lemma} \label{lem:maj0} If $\vec{x}$ majorizes $\vec{y}$ for vectors $\vec{x}$, $\vec{y}$ of positive integers, and $e_i$ represents a unit vector with a 1 in the $i$th entry and 0 elsewhere, then $\vec{x}+e_i$ majorizes $\vec{y}+e_j$ for $j \geq i$. \end{lemma} \begin{theorem} Let process $X$ be the process where $m$ balls are placed into $n$ bins with two distinct random choices, and $Y$ be the corresponding scheme with $d > 2$ choices using double hashing. Then $X$ stochastically majorizes $Y$. \end{theorem} \begin{proof} At each time step, we let $\vec{x}(t)$ and $\vec{y}(t)$ be the vectors corresponding to the loads sorted in decreasing order.
We inductively claim that $\vec{x}(t)$ majorizes $\vec{y}(t)$ at all time steps under the coupling of the processes where if the $a$th and $b$th bins in the sorted order for $X$ are chosen, the $a$th and $b$th bins in the sorted order for $Y$ are chosen as the first two choices, and then the remaining choices are determined by double hashing. That is, the $d$ hash choices are such that the gap between successive choices is $b-a$, so the choices are $a$, $b$, $2b-a$, $3b-2a$, and so on (modulo the size of the table). Clearly $\vec{x}(0)$ majorizes $\vec{y}(0)$ as the vectors are equal. It is simple to check that this process maintains the majorization using Lemma~\ref{lem:maj0}, as the coordinate that increases in $\vec{y}(t)$ at each step is deeper in the sorted order than the coordinate that increases in $\vec{x}(t)$. \end{proof} As the process with two random choices stochastically majorizes the process with $d > 2$ choices from double hashing under this coupling, we see that $$\Pr(x_1 \geq c) \geq \Pr(y_1 \geq c)$$ for any value $c$. Since the seminal result of \cite{ABKU} shows that using two choices gives a maximum load of $\log \log n +O(1)$ with high probability, we therefore have this corollary. \begin{corollary} The maximum load using $d > 2$ choices and double hashing for $n$ balls and $n$ bins is $\log \log n +O(1)$ with high probability. \end{corollary} We note that similarly, when using double hashing, we can show that using $d$ choices stochastically majorizes using $d+1$ choices. \subsection{A Witness Tree Argument} \label{sec:witness} It is well known that using $d>2$ choices performs better than using 2 choices for multiple-choice hashing; while the maximum load remains $O(\log \log n)$, the constant factor depends on $d$, and can be important in practice. Our simple majorization argument does not provide this type of bound, so to achieve it, we next utilize the witness tree approach, following closely the work of V\"{o}cking \cite{vocking}. (See also \cite{simplified} for related arguments.)
While we discuss the case of insertions only, the arguments also apply in settings with deletions; see \cite{vocking} for more details. Similarly, here we consider only the standard balls and bins setting of $n$ balls and $n$ bins with $d \geq 3$ being a constant, but similar results for $m = cn$ balls for some constant $c$ can also be derived by simply changing the ``base case'' at the leaves of the witness tree accordingly, and similar results for V\"{o}cking's scheme can be derived by using the ``unbalanced'' witness tree used by V\"{o}cking \cite{vocking} in place of the balanced one. These methods allow us to prove statements of the following form: \begin{theorem} \label{thm:vresult} Suppose $n$ balls are placed into $n$ bins using the balanced allocation scheme with double hashing as described above. Then with $d$ choices the maximum load is $\log \log n / \log d + O(d)$ with high probability. \end{theorem} We note that, while V\"{o}cking obtains a bound of $\log \log n / \log d + O(1)$, we have an $O(d)$ term that appears necessary to handle the leaves in our witness tree. (A similar issue appears to arise in \cite{Woelfel}.) For constant $d$ these are asymptotically the same; however, an $O(1)$ additive term is more pleasing both theoretically and potentially in practice. How we deviate from V\"{o}cking's argument is explained below. \begin{proof} Following \cite{vocking}, we define a witness tree, which is a tree-ordered (multi)set of balls. Each node in the tree represents a ball, inserted at a certain time; the $i$th inserted ball corresponds to time $i$ in the natural way. The ball represented by the root $r$ is placed at time $t$, and a child node must have been inserted at a time previous to its parent. A leaf node in V\"{o}cking's argument is {\em activated} if each of the $d$ locations of the corresponding ball contains at least three balls when it is inserted.
An edge $(u,v)$, where $v$ is the $i$th child of $u$, is activated if the $i$th location of $u$'s ball is the same as one of the locations of $v$'s ball. A witness tree is activated if all of its leaf nodes and edges are activated. Following V\"{o}cking's approach, we first bound the probability that a witness tree is activated for the simpler case where the nodes of the witness trees represent distinct balls. The argument then can be generalized to deal with witness trees where the same ball may appear multiple times. As this follows straightforwardly using the technical approach in \cite{vocking}, we do not provide the full argument here. We now explain where we must deviate from V\"{o}cking's argument. The original argument utilizes the fact that at most $n/3$ bins have load at least 3, deterministically. As leaf nodes in V\"{o}cking's argument are required to have all $d$ choices of bins have load at least 3 to be activated, a leaf node corresponding to a ball with $d$ choices of bins is activated with probability at most $3^{-d}$, and a collection of $q$ leaf nodes are all activated with probability at most $3^{-dq}$. However, this argument will not apply in our case, because the choices of bins are not independent when using double hashing, and depending on which bins are loaded, we can obtain very different results. For example, consider a case where the first $n/3$ bins have load at least 3. The fraction of choices using double hashing where all $d$ bins have load at least 3 is significantly more than $3^{-d}$, which would be the probability if $n/3$ bins with load 3 were randomly distributed. Indeed, for a newly placed ball $j$, if $f(j)$ and $g(j)$ are both less than $n/(3(d+1))$, all $d$ choices will have load at least 3, and this occurs with probability at least $(9(d+1)^2)^{-1}$. While such a configuration is unlikely, the deterministic argument used by V\"{o}cking no longer applies. We modify the argument to deal with this issue.
In our double hashing setting, let us call a leaf active if either \vspace{-0.05in} \begin{itemize} \item Some ball in the past has two or more of the bins at this leaf among its $d$ choices. \vspace{-0.1in} \item All the $d$ bins chosen by this ball have previously been chosen by at least $4d$ previous balls. \end{itemize} \vspace{-0.05in} The probability that any previous ball has hit two or more of the bins at the leaf is $O(d^4n^{-1})$: there are ${d \choose 2}$ pairs of bins from the $d$ choices at the leaf; at most $d(d-1)$ pairs of positions within the $d$ choices where that pair could occur in any previous ball; at most $n$ possible previous balls; and each bad choice that leads that previous ball to have a specific pair of bins in a specific pair of positions occurs with probability $1/(n(n-1))$. Once we exclude this case, we can consider only balls that hit at most one of the $d$ bins associated with the leaf. For any time corresponding to a leaf, we bound the probability that any specific bin has been chosen by $4d$ or more previous balls. We note by symmetry that the probability any specific ball chooses a specific bin is $d/n$. The probability in question is then at most $${n \choose 4d}\left (\frac{d}{n} \right)^{4d} \leq \frac{d^{4d}}{(4d)!} < \left ( \frac{e}{4} \right )^d,$$ which is less than $\frac{1}{3}$ whenever $d \geq 3$; here the second inequality uses $(4d)! > (4d/e)^{4d}$, so the middle term is at most $(e/4)^{4d} \leq (e/4)^d$ since $e/4 < 1$. Further, once we consider the case of previous balls that choose two or more bins at this leaf separately, the events that the $d$ bins chosen by this ball have previously been chosen by $4d$ previous balls are negatively correlated. Hence, we find the probability a specific leaf node is activated is less than $3^{-d}$. However, following \cite{vocking}, we need to consider a collection of $q$ leaves and show the probability that they are all active is at most $3^{-dq}$.
We will do this below by using Azuma's inequality to show that the fraction of choices of hash values from double hashing that lead to an activated leaf is less than $3^{-d}$ with high probability. As balls corresponding to leaves independently choose their hash values, this result suffices. Let $S$ be the set of pairs of hash values that generate $d$ values that would activate a leaf at time $n$. By the negative correlation argument above, a pair of hash values activates a leaf with probability at most $((e/4)^d)^d + O(d^4/n)$, so $\mathbb{E}[|S|] < \left ( \left ( \frac{e}{4} \right )^{d} \right )^{d} n(n-1) + cd^4(n-1)$ for some constant $c$. Since $((e/4)^d)^d < 3^{-d}$ for $d \geq 3$, we have $\mathbb{E}[|S|] < (3^{-d} - \gamma)n(n-1)$ for some constant $\gamma > 0$ and large enough $n$. Consider the Doob martingale obtained by revealing the bins for the balls one at a time. Each ball can change the final value of $|S|$ by at most $dn$, since the bin where any ball is placed is involved in less than $dn$ choices of pairs. Azuma's inequality (e.g., \cite[Section 12.5]{MU}) then yields $$\Pr(|S| > 3^{-d}n(n-1)) \leq \mbox{exp}(-\delta n)$$ for a constant $\delta$ that depends on $d$ and $\gamma$. It follows readily that the fraction of pairs of hash values that activate a leaf is at most $3^{-d}$ with very high probability throughout the process; by conditioning on this event, we can continue with V\"{o}cking's argument. (The conditioning only adds an exponentially small additional probability to the probability that the maximum load exceeds our bound.) Specifically, we note that for there to be a bin of load $L+4d$, there must be an activated witness tree of depth $L$. We can bound the probability that some witness tree (with distinct balls) of depth $L$ is activated. The probability that an edge is activated is the probability that a ball chooses a specific bin, which as previously noted is $d/n$. As all balls are distinct, the probability that a witness tree of $m$ balls has all edges activated is $(d/n)^{m-1}$, and as we have shown, the probability of all leaves being activated is bounded above by $3^{-dq}$, where $q=d^L$ is the number of leaves.
Following \cite{vocking}, as there are at most $n^m$ ways of choosing the balls for the witness tree, the probability that there exists an active witness tree is at most \begin{eqnarray*} n^m \left ( \frac{d}{n} \right)^{m-1} 3^{-dq} & \leq & n \cdot d^{2q} \cdot 3^{-dq} \\ & \leq & n \cdot 2^{-q} \\ & = & n \cdot 2^{-d^{L}}. \end{eqnarray*} Hence choosing $L \geq \log_d \log_2 n + \log_d(1+\alpha)$ guarantees a maximum load of at most $L + 4d$ with probability $1 - O(n^{-\alpha})$. \end{proof} \section{The Fluid Limit Argument} \label{sec:fluid} We now consider the fluid limit approach of \cite{MitzenmacherThesis}. (A useful survey of this approach appears in \cite{Diaz}.) The fluid limit approach gives equations that describe the asymptotic fraction of bins with each possible integer load, and concentration around these values follows from martingale bounds (e.g., \cite{EK,Kurtz,Wormald}). Values can easily be determined numerically, and prove highly accurate even for small numbers of balls and bins. We show that the same equations apply even in the setting of double hashing, giving a theoretical justification for our empirical findings in Appendix~\ref{sec:sims}. This approach can easily be extended to other multiple choice processes (such as V\"{o}cking's scheme and the queuing setting). We emphasize that the fluid limit approach does not, in itself, yield bounds of the type that the maximum load is $O(\log \log n)$ with high probability; rather, it says that for any constant integer $i$, the fraction of bins of load $i$ is concentrated around the value obtained by the fluid limit. One generally has to do additional work -- similar in nature to the arguments in the preceding sections -- to obtain $O(\log \log n)$ bounds. As we already have an $O(\log \log n)$ bound from alternative techniques, here our focus is on showing the fluid limits are the same under double hashing and fully random hashing, which explains our empirical findings.
(We show one could achieve an $O(\log \log n)$ bound from the results of this section -- in fact, a bound of $\log_d \log_2 n + O(1)$ -- in Appendix~\ref{sec:followon}.) The standard balls and bins fluid limit argument runs as follows. Let $X_i(t)$ be a random variable denoting the number of bins with load {\em at least} $i$ after $tn$ balls have been thrown; hence $X_0(0) =n$ and $X_i(0) = 0$ for all $i \geq 1$. Let $x_i(t) = X_i(t)/n$. For $X_i$ to increase when a ball is thrown, all of its choices must have load at least $i-1$, but not all of them can have load at least $i$. Hence for $i \geq 1$ $$\mathbb{E}[X_i(t + 1/n) - X_i(t)] = (x_{i-1}(t))^d - (x_{i}(t))^d.$$ Let $\Delta(x_i) = x_i(t + 1/n) - x_i(t)$ and $\Delta(t) = 1/n$. Then the above can be written as: $$\mathbb{E} \left [ \frac{\Delta(x_i)}{\Delta(t)} \right ] = (x_{i-1}(t))^d - (x_{i}(t))^d.$$ In the limit as $n$ grows, we can view the limiting version of the above equation as $$\frac{dx_i}{dt} = x_{i-1}^d - x_{i}^d,$$ where we remove the $t$ on the right hand side as the meaning is clear. Again, previous works \cite{EK,Kurtz,Wormald} justify how the Markovian load balancing process converges to the solution of the differential equations.\footnote{In particular, the technical conditions corresponding to Wormald's result \cite[Theorem 1]{Wormald} hold, and this theorem gives the appropriate convergence; we explain further in our Theorem~\ref{mainthm}.} Specifically, it follows from Wormald's theorem \cite[Theorem 1]{Wormald} that $$X_i(t) = nx_i(t) + o(n)$$ with probability $1-o(1)$, or equivalently that the fraction of bins of load at least $i$ is within $o(1)$ of the result of the limiting differential equations with probability $1-o(1)$. These equations allow us to compute the limiting fraction of bins of each load numerically, and these results closely match our simulations, as for example shown in Table~\ref{table_6}.
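As a sanity check (a sketch, not code from the paper), the differential equations can be integrated numerically with simple Euler steps; for $d=3$ and $T=1$ this reproduces the fluid limit values reported in Table~\ref{table_6}:

```python
# Euler integration of dx_i/dt = x_{i-1}^d - x_i^d with x_0(t) = 1 and
# x_i(0) = 0 for i >= 1, for d = 3 choices over T = 1 (n balls in n bins).
d, T, steps, imax = 3, 1.0, 100000, 8
dt = T / steps
x = [1.0] + [0.0] * imax          # x[i] ~ fraction of bins with load >= i
for _ in range(steps):
    deriv = [x[i - 1] ** d - x[i] ** d for i in range(1, imax + 1)]
    for i in range(1, imax + 1):
        x[i] += dt * deriv[i - 1]
print([round(v, 5) for v in x[1:4]])   # tail fractions for loads >= 1, 2, 3
```

Note that the computed tail fractions sum to (essentially) 1, as they must: the mean load after $n$ balls are thrown into $n$ bins is exactly 1.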
\begin{table*}[thp] \centering \begin{tabular}{|c|c|c|c|} \hline Tail load & Fluid Limit & Fully Random & Double Hashing \\ \hline $\geq 1$ & 0.8231 & 0.8231 & 0.8231 \\ $\geq 2$ & 0.1765 & 0.1764 & 0.1764 \\ $\geq 3$ & 0.00051 & 0.00051 & 0.00051 \\ \hline \end{tabular} \caption{3 choices, fluid limit ($n=\infty$) vs. $n=2^{14}$ balls and bins} \label{table_6} \end{table*} Given our empirical results, it is natural to conclude that these differential equations must also describe the behavior of the process when we use double hashing in place of standard hashing. The question is how to justify this, as the equations were derived utilizing the independence of the choices, which does not hold for double hashing. We now prove that, for a constant number of choices $d$, constant load values $i$, and a constant time $T$ (corresponding to $Tn$ total balls), the loads of the bins chosen by double hashing behave essentially the same as though the choices were independent, in that, with high probability over the entire course of the process, $$\mathbb{E}[X_i(t + 1/n) - X_i(t)] = (x_{i-1}(t))^d - (x_{i}(t))^d +o(1);$$ that is, the gap is only in $o(1)$ terms. This suffices for \cite[Theorem 1]{Wormald} (specifically, condition (ii) of \cite[Theorem 1]{Wormald} allows such $o(1)$ differences). The result is that double hashing has no effect on the fluid limit analysis. (Again, we emphasize our restriction to constant choices $d$, constant load values $i$, and constant time parameter $T$.) Our approach is inspired by the work of Bramson, Lue, and Prabhakar \cite{Bramson}, who use a similar approach to obtain asymptotic independence results in the queueing setting. However, there the concern was with limiting independence in equilibrium with general service time distributions, and the choices of queues were assumed to be purely random. We show that this methodology can be applied to the double hashing setting.
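For intuition, a small simulation in the spirit of our experiments (a sketch only: ties are broken by smallest index rather than randomly, and $n = 2^{14}$ is not prime, neither of which noticeably changes the outcome) compares fully random choices to double hashing:

```python
import random

def simulate(n, d, double_hashing, seed=0):
    """Throw n balls into n bins, each placed in the least loaded of d
    choices (ties broken by smallest index); return tail fractions."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(n):
        if double_hashing:
            f, g = rng.randrange(n), rng.randrange(1, n)
            choices = [(f + k * g) % n for k in range(d)]
        else:
            choices = [rng.randrange(n) for _ in range(d)]
        load[min(choices, key=lambda b: load[b])] += 1
    return [sum(1 for v in load if v >= i) / n for i in (1, 2, 3)]

n = 2 ** 14
print(simulate(n, 3, double_hashing=False))   # fully random choices
print(simulate(n, 3, double_hashing=True))    # double hashing
```

Both runs should produce tail fractions close to the values in Table~\ref{table_6}.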
\begin{lemma} \label{lem:mainlemma} When using double hashing, with high probability over the entire course of the process, $$\mathbb{E}[X_i(t + 1/n) - X_i(t)] = (x_{i-1}(t))^d - (x_{i}(t))^d +o(1).$$ \end{lemma} \begin{proof} We define the {\em ancestry list} of a bin $b$ at time $t$ as follows. The list begins with the balls $z_1,z_2,\ldots,z_{g(b,t)}$ that have had bin $b$ as one of their choices, where $g(b,t)$ is the number of balls that have chosen bin $b$ up to time $t$. Note that each $z_i$ is associated with a corresponding time $t_i$ and $d-1$ other bin choices. For each $z_i$, we recursively add the list of balls that have chosen each of those $d-1$ bins up to time $t_i$, and so on recursively. We also think of the bins associated with these balls as being part of the ancestry list, where the meaning is clear. It is clear that the ancestry list gives all the necessary information to determine the load of the bin $b$ at time $t$ (assuming the information regarding choices is presented in such a way as to include how placement will occur in case of ties; e.g., the bin choices are ordered by priority). We note that the ancestry list holds more information (and more balls and bins) than the witness trees used by V\"{o}cking (and by us in Section~\ref{sec:witness}). In what follows below let us assume $n$ is prime for convenience (we explain the differences if $n$ is not prime in footnotes). We claim that for asymptotic independence of the load among a collection of $d$ bins at a specific time when a new ball is placed, it suffices to show that these ancestry lists are small. Specifically, we begin by showing in Lemma~\ref{lem:branching} that all ancestry lists contain only $O(\log n)$ associated bins with high probability. We then show as a consequence in Lemma~\ref{lem:small} that the ancestry lists of the bins associated with a newly placed ball have no bins in common with high probability.
This last fact allows us to complete the main lemma, Lemma~\ref{lem:mainlemma}. \begin{lemma} \label{lem:branching} The number of bins in the ancestry list of every bin after the first $Tn$ steps is at most $O(\log n)$ with high probability. \end{lemma} \begin{proof} We view the growth of the ancestry list as a variation of the standard branching process, by going backward in time. Let $B_{0} = 1$ correspond to the size of an initial ancestry list of a bin $b$, consisting of the bin itself. If the $(Tn)$th ball thrown has $b$ as one of its $d$ choices, then $d-1$ additional bins are added to the ancestry list, and we then have $B_{1} = d$; otherwise we have no change and $B_{1} = 1$. (Note that when measuring the size of the ancestry list in bins, each bin is counted only once, even if it is associated with multiple balls.) If the $(Tn-1)$st ball thrown has a bin in the ancestry list as one of its $d$ choices, then (at most) $d-1$ bins are added to the ancestry list, and we set $B_2 = B_1 + d-1$; otherwise, we have $B_2 = B_1$. We continue to add to the ancestry list, with at each step $B_i = B_{i-1} + d-1$ or $B_i = B_{i-1}$, depending on whether or not the $(Tn-i+1)$st ball has one of its choices among the bins on the ancestry list. This process is {\em almost} equivalent to a Galton-Watson branching process where in each generation, each existing element produces 1 offspring with probability $1-d/n$ (or equivalently, moves itself into the next generation), or produces $d$ offspring (adding $d-1$ new elements) with probability $d/n$. The one issue is that the productions of offspring are not independent events; at most $d-1$ elements are added at each step in the process. (There is also the issue that perhaps fewer than $d-1$ elements are added when elements are added to the ancestry list; for our purposes, it is pessimistic to assume $d-1$ offspring are produced.)
Without this dependence concern, standard results on branching processes would give that $\mathbb{E}[B_{Tn}] = (1+d(d-1)/n)^{Tn} \leq e^{Td(d-1)}$, which is a constant. Further, we could apply (Chernoff-like) tail bounds from Karp and Zhang \cite[Theorem 1]{KZ}, which state the following: for a supercritical finite time branching process $\left \{ Z_n \right \}$ over $n$ time steps starting with $Z_0 =1$, with mean offspring per element $\mathbb{E}[Z_1] = \rho >1$, and with $\mathbb{E}[e^{Z_1}] < \infty$, there exist constants $c_1$ and $c_2$ such that $$\Pr(Z_n > \gamma \rho^n) < c_1 e^{-c_2 \gamma}.$$ In our setting, that would give that there exist constants $c_1$ and $c_2$ such that $$\Pr(B_{Tn} > \gamma (1+d(d-1)/n)^{Tn} ) < c_1 e^{-c_2 \gamma}.$$ This would give our desired $O(\log n)$ high probability bound on the size of the ancestry list. To deal with this small deviation, it suffices to consider a modified Galton-Watson process where each element produces $d$ offspring with probability $d'/n$; we shall see that $d'= d+1$ suffices. Let $B'$ be the resulting size of this Galton-Watson process. From the above we have that $B' < c \log n$ with high probability for some suitable constant $c$. Our original desired ancestry list process is dominated by a process where $B_i = \min(B_{i-1} + d-1,n)$ with probability $\min(B_{i-1}d/n,1)$ and $B_i = B_{i-1}$ otherwise, and this process is in turn dominated, for values of $B_i$ up to $c \log n$, by a Galton-Watson branching process where the constant $d'$ satisfies $$1-(1-d'/n)^x \geq dx/n$$ for all $1 \leq x \leq c \log n$, so that at every stage the Galton-Watson process is more likely to have at least $d-1$ new offspring (and may have more). We see $d' = d+1$ suffices, as $$1-(1-(d+1)/n)^x = x(d+1)/n - O(d^2x^2/n^2),$$ which is greater than $dx/n$ for $n$ sufficiently large when $x$ is $O(\log n)$.
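Both numeric facts used in this argument -- the bound on $\mathbb{E}[B_{Tn}]$ and the choice $d' = d+1$ -- can be checked directly for concrete parameter values (an illustrative check; the particular constants $T$, $d$, $n$, $c$ below are arbitrary):

```python
import math

# Illustrative check of two numeric facts used above; the particular
# values of T, d, n, c are arbitrary.
T, d, n, c = 1, 3, 10 ** 6, 100
# (i) mean of the branching process: (1 + d(d-1)/n)^{Tn} <= e^{Td(d-1)}
assert (1 + d * (d - 1) / n) ** (T * n) <= math.exp(T * d * (d - 1))
# (ii) domination with d' = d+1: 1 - (1 - (d+1)/n)^x >= dx/n
#      for all 1 <= x <= c log n
for x in range(1, int(c * math.log(n)) + 1):
    assert 1 - (1 - (d + 1) / n) ** x >= d * x / n
```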
The straightforward step by step coupling of the processes yields that $$\Pr(B_{Tn} > c \log n) \leq \Pr(B' > c \log n),$$ giving our desired bound. We also suggest a slightly cleaner alternative, which may prove useful for other variations: embed the branching process in a continuous time branching process. We scale time so that balls are thrown as a Poisson process of rate $n$ per unit time over $T$ time units. Each element therefore generates $d-1$ new offspring at time instants that are exponentially distributed with mean $1/d$ (the average time before a ball hits any given bin on the ancestry list). Again, assuming $d-1$ new offspring is a pessimistic bound. If we let $C_t$ be the number of elements at time $t$ (starting from 1 element at time 0), it is well known (see, e.g., \cite[p.108 eq. (4)]{AN}, and note that generating $d-1$ new offspring is equivalent to ``dying'' and generating $d$ offspring) that for such a process, $$\mathbb{E}[C_t] = e^{td(d-1)}.$$ In our case, we run to a fixed time $T$ and $\mathbb{E}[C_T] = e^{Td(d-1)}$, a constant. Indeed, in this specific case, the generating function for the distribution of the number of elements is known (see, e.g., \cite[p.109]{AN}), allowing us to directly apply a Chernoff bound. Specifically, $$\mathbb{E}[s^{C_t}] = se^{-dt}[1-(1-e^{-d(d-1)t})s^{d-1}]^{-1/(d-1)}.$$ Hence we have \begin{eqnarray*} \Pr(C_T > \gamma e^{Td(d-1)}) & = & \Pr(e^{C_T} > e^{\gamma e^{Td(d-1)}}) \\ & \leq & e^{-\gamma e^{Td(d-1)}} \mathbb{E}[e^{C_T}] \\ & \leq & c_3 e^{-c_4 \gamma} \end{eqnarray*} for constants $c_3$ and $c_4$ that depend on $d$ and $T$. Hence, this gives that the size of the ancestry list as viewed from the setting of the continuous branching process is $O(\log n)$ with high probability. The last concern is that running the continuous process for time $T$ does not guarantee that $Tn$ balls are thrown; this can be dealt with by thinking of the process as running for a slightly longer time $T' > T$.
That is, choose $T'= T +\epsilon$ for a small constant $\epsilon$. Standard Chernoff bounds on the Poisson random variables then guarantee that at least $Tn$ balls are thrown with high probability, and the sizes of the ancestry lists are stochastically monotonically increasing in the number of balls thrown. Changing to $T'$ time units maintains that each ancestry list is $O(\log n)$ with high probability. Finally, by choosing the constant in the $O(\log n)$ term appropriately, we can achieve a high enough probability to apply a union bound so that this holds for all ancestry lists simultaneously with high probability. \end{proof} We now use Lemma~\ref{lem:branching} to show the following. \begin{lemma} \label{lem:small} The bins in the ancestry lists of the $d$ choices are disjoint with probability $1-\eta$ for $\eta = O(d^2 \log^2 n/n) = o(1)$. \end{lemma} \begin{proof} Let ${\cal F}$ be the event that the ancestry lists of the $d$ choices are not disjoint, and let ${\cal E}$ be the event that some pair of the $d$ choices were previously chosen by the same ball. If ${\cal E}$ occurs, the ancestry lists are clearly not disjoint. Hence we bound $$\Pr({\cal F}) \leq \Pr({\cal E}) + \Pr({\cal F} \mid \neg{\cal E}).$$ Consider any two of the $d$ bins chosen by the ball being placed. Each of the up to $Tn$ previous balls has $O(d^2)$ ways of choosing those two bins as two of its $d$ choices (e.g., picking them as the 2nd and 4th choices), and the probability of choosing those two bins for each possible pair of choice positions is $O(1/n^2)$.\footnote{If $n$ is not prime, this probability is $O(1/(n\phi(n)))$, where $\phi$ is the Euler totient function counting the number of numbers less than $n$ that are relatively prime to $n$. We note $\phi(n)$ is usually $\Omega(n)$ and is always $\Omega(n/\log \log n)$, so this does not affect our argument substantially.} There are ${d \choose 2}$ pairs of bins, so by a union bound $\Pr({\cal E})$ is $O(Td^4/n)$.
Now suppose that no pair of the $d$ bins were previously chosen by the same ball. Suppose the bins for each of the ancestry lists of the $d$ choices are ordered in some fixed fashion (say according to decreasing ball time, randomly permuted for each ball). We consider the probability that the $i$th bin in the ancestry list of one bin matches the $j$th bin in another. Since the lists do not share any ball in common, the $j$th bin in the second list matches the $i$th bin in the first list with probability only $O(1/n)$, as even conditioned on the value of the $i$th bin on the first list, the $j$th bin on the second list is uniform over $\Omega(n)$ possibilities.\footnote{Again, for $n$ not prime, we may use $\Omega(\phi(n))$ possibilities.} We now condition on all of the $d$ ancestry lists being of size $O(\log n)$; from Lemma~\ref{lem:branching}, this can be made to occur with any inverse polynomial probability by choosing the constant factor in the $O(\log n)$ term, so we assume this bound on ancestry list sizes. In this case, the probability of a match among any of the $d$ lists is only $O(d^2 \log^2 n/n)$ in total, where the $d^2$ factor is from the ${d \choose 2}$ possible pairs of ancestry lists, and the $\log^2 n$ term follows from the bound on the size of the ancestry lists. Hence $\Pr({\cal F} | \neg{\cal E})$ is $O(d^2 \log^2 n/n)$, and the total probability that the ancestry lists of the $d$ choices are {\em not} disjoint is $\eta = O(d^2 \log^2 n/n) = o(1)$. \end{proof} We now show that this yields Lemma~\ref{lem:mainlemma}. To clarify this, consider bins $b_1,b_2,\ldots,b_d$ that were chosen by a ball at some time $t+1/n$. (Recall our scaling of time.) The probability that all $d$ bins have load at least $i$ at that time is equivalent to the probability that each bin $b_j$ has a corresponding ancestry list $A_j$ showing that it has load $i$ at some time $u_j \leq t$.
Fix a collection of ancestry lists $A_j$, and let $E_j$ be the event defined by ``bin $b_j$ has ancestry list $A_j$''. If these ancestry lists have disjoint sets of bins, then the corresponding balls in each ancestry list occur at different times and have no intersecting bins, and as such $$ \Pr \left ( \cap_j E_j \right ) = \prod_j \Pr(E_j).$$ For constant $i$, $t$, and $d$, the probability that all $d$ bins have load at least $i$ is constant. Hence, if the probability that the ancestry lists for the $d$ bins intersect at any bin is $\eta = o(1)$, we have asymptotic independence. Specifically, let $\cal X$ be the set of collections of $d$ ancestry lists for bins $b_1,b_2,\ldots,b_d$ that yield that each bin has load at least $i$ at time $t$, let $\cal Y$ be the subset of collections in $\cal X$ where the $d$ ancestry lists have no bins in common, and for a collection $Z$ in $\cal X$ let $E_j(Z)$ be the corresponding event defined by ``bin $b_j$ has ancestry list $A_j$ in collection $Z$''. Then \begin{eqnarray*} \sum_{Z \in {\cal X}} \Pr \left ( \cap_j E_j(Z) \right ) &= & \left [ \sum_{Z \in {\cal Y}} \Pr \left ( \cap_j E_j(Z) \right ) \right ] + o(1) \\ & = & \sum_{Z \in {\cal Y}} \left (\prod_j \Pr (E_j(Z)) \right ) + o(1) \\ & = & \sum_{Z \in {\cal X}} \left (\prod_j \Pr (E_j(Z)) \right ) + o(1). \end{eqnarray*} Here the first line uses that the $d$ ancestry lists intersect somewhere with probability $o(1)$; the second line uses that for collections in $\cal Y$ the probability of the intersection is the product of the probabilities; and the third line holds because the collections $Z$ in ${\cal X} - {\cal Y}$ have total probability $o(1)$. Hence, up to an $o(1)$ term, the behavior is the same as if the $d$ choices were independent (with respect to all bins having load at least $i$). Thus $$\mathbb{E}[X_i(t + 1/n) - X_i(t)] = (x_{i-1}(t))^d - (x_{i}(t))^d +o(1)$$ as needed.
\end{proof} As a result of Lemma~\ref{lem:mainlemma}, we have the following theorem, generalizing the differential equations approach for balanced allocations to the setting of double hashing. \begin{theorem} \label{mainthm} Let $i$, $d$, and $T$ be constants. Suppose $Tn$ balls are sequentially thrown into $n$ bins with each ball having $d$ choices obtained from double hashing and each ball being placed in the least loaded bin (ties broken randomly). Let $X_i(T)$ be the number of bins of load at least $i$ after the balls are thrown. Let $x_i(t)$ be determined by the family of differential equations $$\frac{dx_i}{dt} = x_{i-1}^d - x_{i}^d,$$ where $x_0(t) = 1$ for all time and $x_i(0) = 0$ for $i \geq 1$. Then with probability $1-o(1)$, $$\frac{X_i(T)}{n} = x_i(T) + o(1).$$ \end{theorem} \begin{proof} This follows from the fact that $$\mathbb{E}[X_i(t + 1/n) - X_i(t)] = (x_{i-1}(t))^d - (x_{i}(t))^d +o(1),$$ and applying Wormald's result \cite[Theorem 1]{Wormald}. We remark that Theorem 1 of \cite{Wormald} includes other technical conditions that we briefly consider here. The first condition is that $|X_i(t + 1/n) - X_i(t)|$ is bounded by a constant; all such values here are bounded by 1. The second (and only challenging) condition exactly corresponds to our statement that $\mathbb{E}[X_i(t + 1/n) - X_i(t)] = (x_{i-1}(t))^d - (x_{i}(t))^d +o(1)$ over the course of the process. The third condition is that the functions on the right hand side, that is $(x_{i-1}(t))^d - (x_{i}(t))^d$, are continuous and satisfy a Lipschitz condition on an open neighborhood containing the path of the process. These functions are continuous on the domain where all $x_i \in [0,1]$ up to the value $i$ being considered, and they satisfy the Lipschitz condition as \begin{eqnarray*} |(x_{i-1}(t))^d - (x_{i}(t))^d| \! \! & \leq & \! \! |x_{i-1}(t) - x_{i}(t)| \sum_{j=0}^{d-1} (x_{i-1}(t))^{j}(x_i(t))^{d-1-j} \\ \! \! & \leq & \! \!
d|(x_{i-1}(t)) - (x_{i}(t))|, \end{eqnarray*} taking note that all $x_i,x_{i-1}$ values are in the interval $[0,1]$. Hence the conditions for Wormald's theorem are met. \end{proof} The following corollary, based on the known fact that the result of Theorem~\ref{mainthm} also holds in the setting of fully random hashing \cite{MitzenmacherThesis}, states that the difference between fully random hashing and double hashing is vanishing. \begin{corollary} \label{cormain} Let $i$, $d$, and $T$ be constants. Consider two processes, where in each $Tn$ balls are sequentially thrown into $n$ bins with each ball having $d$ choices and each ball being placed in the least loaded bin (ties broken randomly). In one process, the $d$ choices are fully random; in the other, the $d$ choices are made by double hashing. Then with probability $1-o(1)$, the fractions of bins with load $i$ in the two processes differ by an $o(1)$ additive term. \end{corollary} Given the results for the differential equations, it is perhaps unsurprising that one can use these methods to obtain, for example, a maximum load of $\log \log n/ \log d+ O(1)$ for $n$ balls in $n$ bins, using the related layered induction approach of \cite{ABKU}. While we suggest this is not the main point (given Theorem~\ref{thm:vresult}), we provide further details in Appendix~\ref{sec:followon}. \section{Conclusion} We have first demonstrated empirically that using double hashing with balanced allocation processes (e.g., the power of (more than) two choices), surprisingly, does not noticeably change performance when compared with fully random hashing. We have then shown that previous methods readily provide $O(\log \log n)$ bounds for this approach. However, explaining why the fraction of bins of load $k$ for each $k$ appears the same requires revisiting the fluid limit model for such processes. We have shown, interestingly, that the same family of differential equations applies for the limiting process.
Our argument should extend naturally to other similar processes; for example, the analysis can be made to apply in a straightforward fashion to the differential equations for V\"{o}cking's $d$-left scheme \cite{MV}. This opens the door to the interesting possibility that double hashing can be suitable for other problems or analyses where this type of fluid limit analysis applies, such as low-density parity-check codes \cite{LMSS}. Here, however, the asymptotic independence required was aided by the fact that we were looking at the history of the process, allowing us to tie the ancestry lists to a corresponding branching process. Whether similar asymptotic independence can be derived for other problems remains to be seen. For other problems, such as cuckoo hashing, the fluid limit analysis, while an important step, may not offer a complete analysis. Even for load balancing problems, fluid limits do not straightforwardly apply in the heavily loaded case where the number of balls is superlinear in the number of bins \cite{Steger}, and it is unclear how double hashing performs in that setting. So again, determining more generally where double hashing can be used in place of fully random hashing without significantly changing performance may offer challenging future questions. \section*{Acknowledgments} The author thanks George Varghese for the discussions which led to the formulation of this problem, and thanks Justin Thaler for both helpful conversations and offering several suggestions for improving the presentation of results. \bibliographystyle{plain}
\title{The product structure of squaregraphs}
% Source: https://arxiv.org/abs/2203.03772
\begin{abstract}
A squaregraph is a plane graph in which each internal face is a $4$-cycle and each internal vertex has degree at least $4$. This paper proves that every squaregraph is isomorphic to a subgraph of the semi-strong product of an outerplanar graph and a path. We generalise this result for infinite squaregraphs, and show that this is best possible in the sense that ``outerplanar graph'' cannot be replaced by ``forest''.
\end{abstract}
\section{Introduction} \label{Introduction} \footnotetext[3]{School of Mathematics, Monash University, Melbourne, Australia (\texttt{\{robert.hickingbotham,david.wood\}@monash.edu}). Research of R.H.\ supported by an Australian Government Research Training Program Scholarship. Research of D.W.\ supported by the Australian Research Council.} \footnotetext[4]{Institute of Theoretical Informatics, Karlsruhe Institute of Technology, Germany (\texttt{\{paul.jungeblut,laura.merker2\}@kit.edu}).} \renewcommand{\thefootnote}{\arabic{footnote}} A \defn{squaregraph} is a plane graph\footnote{A \defn{plane graph} is a graph embedded in the plane with no crossings. The word `face' refers to the subgraph on the boundary of the face. A graph is \defn{outerplanar} if it is isomorphic to a plane graph where every vertex is on the outer-face.} in which each internal face is a $4$-cycle and each internal vertex has degree at least $4$. These graphs were introduced in 1973 by \citet{SZP73}. They have many interesting structural and metric properties. For example, \citet{BCE10} showed that squaregraphs are median graphs and are thus partial cubes, and that every squaregraph can be isometrically embedded\footnote{A graph $H$ can be \defn{isometrically embedded} into a graph $G$ if there exists an isomorphism $\phi$ from $V(H)$ to a subgraph of $G$ such that $\dist_H(u,v)=\dist_G(\phi(u),\phi(v))$ for all $u,v\in V(H)$.} into the cartesian product\footnote{The following are the standard graph products. For graphs $ G $ and $ H $, the \defn{cartesian product} $ G \boxempty H $ is the graph with vertex-set $ V(G) \times V(H) $ with an edge between two vertices $ (v,w) $ and $ (v',w') $ if $ v=v' $ and $ ww' \in E(H) $, or $ w=w' $ and $ vv' \in E(G) $. The \defn{direct product} $ G \times H $ is the graph with vertex-set $ V(G) \times V(H) $ with an edge between two vertices $ (v,w) $ and $ (v',w') $ if $ vv' \in E(G) $ and $ ww' \in E(H) $. 
The \defn{strong product} $ G \boxtimes H := (G\boxempty H)\cup (G\times H)$. } of five trees. See the survey by \citet{BC08} for background on metric graph theory. The primary contribution of this paper is the following product structure theorem for squaregraphs, as illustrated in \cref{fig:squaregraph-product}. For graphs $G$ and $H$, the \defn{semi-strong product} \defn{$ G \Bow H $} is the graph with vertex-set $ V(G) \times V(H) $ with an edge between two vertices $ (v,w) $ and $ (v',w') $ if $ v=v' $ and $ ww' \in E(H) $, or $ vv' \in E(G) $ and $ ww' \in E(H) $; see \citep{GRW76,HLL21} for example. Note that \[G \times H \,\sse\, G \,\Bow\, P \,\sse\, G \boxtimes H.\] We write \defn{$H \subsetsim G$} to mean that $H$ is isomorphic to a subgraph of $G$. \begin{restatable}{thm}{squaregraphs} \label{squaregraphs} For every squaregraph $G$ there is an outerplanar graph $H$ and a path $P$ such that $G\subsetsim H\Bow P$. \end{restatable} Note that since a path is bipartite, $H \Bow P$ is also bipartite. \begin{figure}[h] \centering \includegraphics{product} \caption{A squaregraph $ G $ (left) isomorphic to a subgraph of the semi-strong product $ H \Bow P $ of an outerplanar graph $ H $ and a path $P$ (right).} \label{fig:squaregraph-product} \end{figure} We in fact prove a more general sufficient condition for a plane graph to have such a product structure which implies \cref{squaregraphs}; see \cref{srtw2-bfs} in \cref{SectionUB}. The second contribution of this paper is to show that \cref{squaregraphs} is best possible in the sense that ``outerplanar graph'' cannot be replaced by ``forest''. Moreover, this lower bound holds for strong products. In fact, we prove that for every integer $\ell\in\mathbb{N}$ there is a squaregraph $G$ such that for any graph $H$ and path $P$, if $G\subsetsim H\boxtimes P\boxtimes K_\ell$ then $H$ contains a cycle (and is therefore not a forest).
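The containments $G \times H \subseteq G \Bow H \subseteq G \boxtimes H$ can be verified directly from the definitions on small examples; the following sketch (the graphs chosen here are illustrative, not from the paper) builds each product edge set explicitly:

```python
from itertools import combinations, product

# Toy graphs (illustrative): G a 3-vertex path, H a single edge.
Gn, Ge = [0, 1, 2], {frozenset({0, 1}), frozenset({1, 2})}
Hn, He = ["a", "b"], {frozenset({"a", "b"})}

def adj(E, x, y):
    return frozenset({x, y}) in E

def edges(rule):
    """Edge set on V(G) x V(H) of the product defined by `rule`."""
    V = list(product(Gn, Hn))
    return {frozenset({u, v}) for u, v in combinations(V, 2) if rule(u, v)}

direct = edges(lambda u, v: adj(Ge, u[0], v[0]) and adj(He, u[1], v[1]))
semi_strong = edges(lambda u, v: (u[0] == v[0] and adj(He, u[1], v[1]))
                    or (adj(Ge, u[0], v[0]) and adj(He, u[1], v[1])))
cartesian = edges(lambda u, v: (u[0] == v[0] and adj(He, u[1], v[1]))
                  or (u[1] == v[1] and adj(Ge, u[0], v[0])))
strong = cartesian | direct

assert direct <= semi_strong <= strong   # direct <= semi-strong <= strong
print(len(direct), len(semi_strong), len(strong))
```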
This result actually follows from a stronger lower bound for bipartite graphs, which has other interesting consequences; see \cref{BipartiteLower} in \cref{SectionLB}. Also note that \cref{squaregraphs} cannot be strengthened by replacing ``outerplanar graph'' by ``graph with bounded pathwidth''. Indeed, \citet{BDJMW} showed that for every $k \in\mathbb{N}$ there is a tree $T$ (which is a squaregraph) such that for any graph $H$ and path $P$, if $T\subsetsim H \boxtimes P$ then $\pw(H)\geq k$. In \cref{squaregraphs} it is natural to ask whether there is such an outerplanar graph $H$ independent of $G$. This leads to the study of infinite squaregraphs, previously investigated by \citet{BCE10}. Our final contribution is an extension of \cref{squaregraphs} in which we show that every (possibly infinite) squaregraph is isomorphic to a subgraph of $O\Bow \overrightarrow{P}$, where $O$ is the universal outerplanar graph and $\overrightarrow{P}$ is the 1-way infinite path; see \cref{Infinite}. Before proving the above results, we provide further motivation by putting \cref{squaregraphs} in context. The study of the product structure of graph classes emerged with the following seminal result by \citet{DJMMUW20}, now called the \emph{Planar Graph Product Structure Theorem}. This result describes planar graphs in terms of the strong product of graphs with bounded treewidth\footnote{A \defn{tree-decomposition} of a graph $G$ is a collection $(B_x\subseteq V(G):x\in V(T))$ of subsets of $V(G)$ (called \defn{bags}) indexed by the nodes of a tree $T$, such that (a) for every edge $uv\in E(G)$, some bag $B_x$ contains both $u$ and $v$, and (b) for every vertex $v\in V(G)$, the set $\{x\in V(T):v\in B_x\}$ induces a non-empty subtree of $T$. The \defn{width} of a tree-decomposition is the size of the largest bag minus~$1$. The \defn{treewidth} of a graph $G$, denoted by \defn{$\tw(G)$}, is the minimum width of a tree-decomposition of $G$. 
A \defn{path-decomposition} of a graph $G$ is a tree decomposition $(B_x\subseteq V(G):x\in V(T))$ where $T$ is a path. The \defn{pathwidth} of a graph $G$, denoted by \defn{$\pw(G)$}, is the minimum width of a path-decomposition of $G$.} and a path. A connected graph has treewidth at most 1 if and only if it is a tree. Treewidth measures how similar a graph is to a tree and is an important parameter in algorithmic and structural graph theory; see \cite{HW17,Reed97}. Graphs with bounded treewidth are considered to be a relatively simple class of graphs. \begin{thm}[\cite{UWY,DJMMUW20}]\label{PGPST} For every planar graph $G$ there is a graph $H$ of treewidth at most $6$ and a path $P$ such that $G\subsetsim H \boxtimes P$. \end{thm} The original version of the Planar Graph Product Structure Theorem by \citet{DJMMUW20} had ``treewidth at most $8$'' instead of ``treewidth at most 6''. \citet{UWY} proved \cref{PGPST} with ``treewidth at most $6$''. Since outerplanar graphs have treewidth at most $2$, \Cref{squaregraphs} is stronger than \cref{PGPST} in the case of squaregraphs. \Cref{squaregraphs} is also stronger than \cref{PGPST} in the sense that \Cref{squaregraphs} uses $\Bow$ whereas \cref{PGPST} uses $\boxtimes$. That said, as explained in \cref{Preliminaries}, it is well-known that in the case of bipartite planar graphs $G$, the proof of \cref{PGPST} can be adapted to show that $G\subsetsim H\Bow P$. Product structure theorems are useful since they reduce problems on a complicated class of graphs (such as planar graphs or squaregraphs) to a simpler class of graphs (bounded treewidth graphs such as outerplanar graphs). 
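The footnoted definitions of tree- and path-decompositions can be verified mechanically. The following Python sketch (our own illustration; the helper names are hypothetical) checks conditions (a) and (b) from the footnote and computes the width of a given decomposition. Product structure theorems pair such a low-width factor $H$ with a path $P$.

```python
from collections import deque

def is_tree_decomposition(graph_edges, tree_adj, bags):
    """Check conditions (a) and (b) of a tree-decomposition:
    (a) every edge of G lies inside some bag;
    (b) for each vertex v of G, the tree nodes whose bag contains v
        induce a non-empty connected subtree of T."""
    if not all(any({u, v} <= bag for bag in bags.values()) for u, v in graph_edges):
        return False
    for v in set().union(*bags.values()):
        nodes = {x for x, bag in bags.items() if v in bag}
        start = next(iter(nodes))
        seen, queue = {start}, deque([start])
        while queue:  # BFS inside the subtree induced by `nodes`
            x = queue.popleft()
            for y in tree_adj[x]:
                if y in nodes and y not in seen:
                    seen.add(y)
                    queue.append(y)
        if seen != nodes:
            return False
    return True

def width(bags):
    """Width of a decomposition: largest bag size minus one."""
    return max(len(bag) for bag in bags.values()) - 1

# A path-decomposition of the path 1-2-3-4: T is itself a path and the
# width is 1, witnessing that paths have pathwidth (hence treewidth) <= 1.
graph_edges = [(1, 2), (2, 3), (3, 4)]
tree_adj = {0: [1], 1: [0, 2], 2: [1]}
bags = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}}
assert is_tree_decomposition(graph_edges, tree_adj, bags)
assert width(bags) == 1
```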
They have been the key tool to resolve several open problems regarding queue layouts~\citep{DJMMUW20}, nonrepetitive colourings~\citep{DEJWW20}, centered colourings~\citep{DFMS21}, clustered colourings~\citep{DEMWW22}, adjacency labellings~\citep{BGP20,DEJGMM21,EJM}, vertex rankings~\citep{BDJM}, twin-width~\citep{BKW}, odd colourings~\citep{DMO}, and infinite graphs \cite{HMSTW}. Similar product structure theorems are known for other classes including graphs with bounded Euler genus~\cite{DJMMUW20,DHHW}, apex-minor-free graphs~\cite{DJMMUW20}, $(g,d)$-map graphs~\cite{DMW}, $(g,\delta)$-string graphs~\cite{DMW}, $(g,k)$-planar graphs~\cite{DMW}, powers of planar graphs~\cite{DMW,HW21b}, $k$-semi-fan-planar graphs~\cite{HW21b} and $k$-fan-bundle planar graphs~\cite{HW21b}. \subsection{Preliminaries} \label{Preliminaries} We consider undirected simple graphs $G$ with vertex-set $V(G)$ and edge-set $E(G)$. Unless stated otherwise, graphs are finite. Undefined terms and notation can be found in Diestel's textbook~\citep{Diestel5}. For $m,n \in \mathbb{Z}$ with $m \leq n$, let $[m,n]:=\{m,m+1,\dots,n\}$ and $[n]:=[1,n]$. Let $P_n$ denote a path on $n$ vertices. For graphs $G$ and $H$, the \defn{complete join $G+H$} is the graph obtained from the disjoint union of~$G$ and~$H$ by adding all edges between~$G$ and~$H$. For a graph $G$ with $A,B\sse V(G)$, let \defn{$G[A,B]$} be the subgraph of $G$ with $V(G[A,B]):=A \cup B$ and $E(G[A,B]):=\{uv \in E(G):u \in A, v\in B\}$. A \defn{matching} $M$ in a graph $G$ is a set of edges in $G$ such that no two edges in $M$ have a common endvertex. A matching $M$ \defn{saturates} a set $S\sse V(G)$ if every vertex in $S$ is incident to some edge in $M$. A \defn{model} of $H$ in $G$ is a function $\mu$ with domain $V(H)$ such that: $\mu(v)$ is a connected subgraph of $G$; $\mu(v)\cap \mu(w)=\emptyset$ for all distinct $v,w\in V(H)$; and $\mu(v)$ and $\mu(w)$ are adjacent for every edge $vw \in E(H)$.
If, for some $s \in \mathbb{N}_0$, there is a model $\mu$ of $H$ in $G$ such that $|V(\mu(v))|\leq s$ for each $v \in V(H)$, then $H$ is an \defn{$s$-small minor} of $G$. In a plane graph $G$, a vertex is \defn{outer} if it is on the outer-face of $G$ and is \defn{inner} otherwise. Let \defn{$I_G$} denote the set of inner vertices in $G$. Let $G$ be a graph. A \defn{partition} of $G$ is a set $\Pcal$ of sets of vertices in $G$ such that each vertex of $G$ is in exactly one element of $\Pcal$. Each element of $\Pcal$ is called a \defn{part}. The \defn{quotient} of $\Pcal$ (with respect to $G$) is the graph, denoted by \defn{$G/\Pcal$}, with vertex set $\Pcal$ where distinct parts $A,B\in \mathcal{P}$ are adjacent in $G/\Pcal$ if and only if some vertex in $A$ is adjacent in $G$ to some vertex in $B$. An \defn{$H$-partition} of $G$ is a partition $\Pcal=(A_x:x \in V(H))$ where $H\cong G/\Pcal$. For an $H$-partition $(A_x:x\in V(H))$ of $G$, for each subgraph $J\sse G$ the quotient $\tilde{H}$ of the partition $(A_x\cap V(J):x\in V(H),A_x\cap V(J)\neq\emptyset)$ is called the \defn{sub-quotient} for $J$. Note that $\tilde{H}$ is a subgraph of $H$. A \defn{layering} of a graph $G$ is an ordered partition $\mathcal{L}:=(L_0,L_1,\dots)$ of $V(G)$ such that for every edge $vw \in E(G)$, if $v \in L_i$ and $w \in L_j$, then $|i-j|\leq 1$. $\mathcal{L}$ is a \defn{\textsc{bfs}-layering} (of $G$) if $L_0 = \{r\}$ for some \defn{root vertex} $r\in V(G)$ and $L_i=\{v\in V(G):\dist_G(v,r)=i\}$ for all $i\geq 1$. A path $P$ is \defn{vertical} (with respect to $\mathcal{L}$) if $|V(P)\cap L_i|\leq 1$ for all $i\geq 0$. A \defn{layered partition} $(\Pcal,\mathcal{L})$ of a graph $G$ consists of a partition $\Pcal$ and a layering~$\mathcal{L}$ of $G$. If $\Pcal$ is an $H$-partition, then $(\Pcal,\mathcal{L})$ is a \defn{layered $H$-partition}. If $\Pcal=(A_x:x\in V(H))$, then the \defn{width} of $(\Pcal,\mathcal{L})$ is $\max\{|A_x\cap L|:x\in V(H), L \in \mathcal{L}\}$. 
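These layering definitions are easy to experiment with. The following Python sketch (illustrative only; the example graph and helper names are ours) computes a \textsc{bfs}-layering, checks the layering condition, and evaluates the width of a layered partition; the example below attains width $1$.

```python
from collections import deque

def bfs_layering(adj, root):
    """bfs-layering: L_0 = {root}, L_i = vertices at distance i from root."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    layers = [set() for _ in range(max(dist.values()) + 1)]
    for v, d in dist.items():
        layers[d].add(v)
    return layers

def is_layering(adj, layers):
    """Every edge joins vertices in the same or consecutive layers."""
    index = {v: i for i, layer in enumerate(layers) for v in layer}
    return all(abs(index[u] - index[v]) <= 1 for u in adj for v in adj[u])

def partition_width(parts, layers):
    """Width of a layered partition: max |A intersect L| over parts A, layers L."""
    return max(len(part & layer) for part in parts for layer in layers)

# The 4-cycle 0-1-2-3-0 (a squaregraph), rooted at 0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
layers = bfs_layering(adj, 0)
assert layers == [{0}, {1, 3}, {2}]
assert is_layering(adj, layers)
# Each layer is an independent set (the 4-cycle is bipartite).
assert all(v not in adj[u] for layer in layers for u in layer for v in layer)
# A partition into two vertical paths, 0-1 and 3-2, has width 1.
assert partition_width([{0, 1}, {2, 3}], layers) == 1
```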
Layered partitions of width at most $1$ are \defn{thin}. Layered partitions were introduced by \citet{DJMMUW20} who observed the following connection to strong products (which follows directly from the definitions). \begin{obs}[\cite{DJMMUW20}]\label{OrthogonalPartitions} For all graphs $G$ and $H$, $G \subsetsim H\boxtimes P\boxtimes K_{\ell}$ for some path $P$ if and only if $G$ has a layered $H$-partition $(\Pcal,\mathcal{L})$ with width at most $\ell$. \end{obs} We have the following analogous observation for $\Bow$ (which also follows directly from the definitions). \begin{obs}\label{BowPartitions} For all graphs $G$ and $H$, $G \subsetsim (H\boxtimes K_\ell) \Bow P$ for some path $P$ if and only if $G$ has a layered $H$-partition $(\Pcal,\mathcal{L})$ with width at most $\ell$, such that each $L \in \mathcal{L}$ is an independent set in $G$. \end{obs} In \cref{BowPartitions} we may use $G \subsetsim (H \boxtimes K_{\ell}) \Bow P $ instead of $G \subsetsim H\boxtimes K_{\ell} \boxtimes P$ when each $L\in\Lcal$ is an independent set, since no edges in $G$ correspond to edges in $H\boxtimes K_{\ell} \boxtimes P$ of the form $ (v,x,w)(v',y,w) $ where $vv'\in E(H)$, $x,y\in V(K_{\ell})$ and $w \in V(P)$. As mentioned in \cref{Introduction}, it is well-known that in the case of bipartite planar graphs $G$, the proof of \cref{PGPST} can be adapted to show that $G\subsetsim H\Bow P$ for some graph $H$ of treewidth at most $6$ and for some path $P$. To see this, we may assume that $G$ is edge-maximal bipartite planar. Thus $G$ is connected, and each face is a 4-cycle. Let $\Lcal=(L_0,L_1,\dots)$ be a \textsc{bfs}-layering of $G$. So each $L_i$ is an independent set. Each face can be written as $(a,b,c,d)$ where $a\in L_i$ and $b,d\in L_{i+1}$ and $c\in L_i \cup L_{i+2}$, for some $i\geq 0$. Let $G'$ be the planar triangulation obtained from $G$ by adding the edge $bc$ across each such face. Thus $(L_0,L_1,\dots)$ is a layering of $G'$. 
The proof of \cref{PGPST} shows that $G'$ has a partition $\Pcal$ such that $\tw(G'/\Pcal)\leq 6$ and $(\Pcal,\Lcal)$ is a thin layered partition. By construction, $(\Pcal,\Lcal)$ is a layered partition of $G$. By \cref{BowPartitions}, $G \subsetsim H \Bow P$ with $H:=G'/\Pcal$. A \defn{red-blue colouring} of a bipartite graph $G$ is a proper vertex $2$-colouring of $G$ with colours `red' and `blue'. \section{Sufficient Conditions} \label{SectionUB} In this section we prove \cref{squaregraphs}. We first prove the following, more general sufficient condition for a plane graph to be isomorphic to a subgraph of the strong or semi-strong product of an outerplanar graph and a path. Afterwards, we show that this more general result implies \cref{squaregraphs}. \begin{thm}\label{srtw2-bfs} Let~$G$ be a plane graph with inner vertices~$I_G$. If $ G $ has a layering $\mathcal{L}= (L_0,L_1,\dots,L_n)$ such that $G[L_{i - 1},L_i]$ has a matching saturating $L_{i - 1}\cap I_G$ for each $i \in [n]$, then $G \subsetsim H \boxtimes P$ for some outerplanar graph $H$ and path $P$. Moreover, if each $L_i \in \mathcal{L}$ is an independent set, then $G \subsetsim H \Bow P$. \end{thm} \begin{proof} By \cref{OrthogonalPartitions,BowPartitions}, it suffices to show that $G$ has a thin layered $H$-partition $\Pcal$ (with respect to $\mathcal{L}$) for some outerplanar graph $H$. For each $i \in [n]$, let $E_{i}$ be a matching in $G[L_{i-1},L_i]$ that saturates $ L_{i - 1}\cap I_G$. For vertices $u \in L_{i-1}$ and $v \in L_i$ and an edge $uv\in E_{i}$, we say that $u$ is the \defn{parent} of $v$ and $v$ is the \defn{child} of $u$. Observe that each vertex $u \in L_{i-1}\cap I_G$ has exactly one child and each vertex $v \in L_i$ has at most one parent. Let $J$ be the subgraph of $G$ where $V(J)=V(G)$ and $E(J)=\bigcup_{i \in [n]} E_i$. Let $X$ be a connected component of $J$. Choose the maximum $j \in [0,n]$ such that there exists some vertex $v \in V(X)\cap L_j$.
Vertex~$v$ must be outer because each vertex in $L_j \cap I_G$ is adjacent in~$J$ to some vertex in~$L_{j+1}$. As illustrated in \cref{fig:path-contraction}, since each vertex in $X$ has at most one child and at most one parent, $X$ is a vertical path with respect to $\mathcal{L}$. \begin{figure}[!ht] \centering \includegraphics[page = 3]{path-contraction-leveled} \qquad \includegraphics[page = 6]{path-contraction-leveled} \caption{Left: A squaregraph with a \textsc{bfs}-layering and a partition $ \Pcal $ into vertical paths (thick orange). The vertical paths are constructed from matchings between consecutive layers, where the leftmost vertex in $ L_i $ is chosen for each inner vertex in $ L_{i-1} $. Right: The lower endpoint of each path is on the outer-face, so when each path is contracted we obtain an outerplanar graph.} \label{fig:path-contraction} \end{figure} Let $\Pcal$ be the partition of $G$ determined by the connected components of $J$. Let $H=G/\Pcal$ be the quotient of $\Pcal$. Since each part in $\Pcal$ is a vertical path with respect to $\mathcal{L}$, it follows that $(\Pcal,\mathcal{L})$ is a thin layered $H$-partition. It remains to show that $H$ is outerplanar. Since each part in $\Pcal$ is connected, $H$ is a minor of $G$ and is therefore planar. Since each part of $\Pcal$ contains a vertex on the outer-face, contracting each part of $\Pcal$ into a single vertex gives a plane embedding of $H$ with each vertex on the outer-face; see \cref{fig:path-contraction}. Therefore $H$ is outerplanar. \end{proof} We now work towards showing that squaregraphs satisfy the conditions for \cref{srtw2-bfs}. A plane graph $G$ is \defn{leveled} if the edges are straight line-segments and vertices are placed on a sequence of horizontal lines, $(L_0,L_1,\dots)$, called \defn{levels}, such that each edge joins two vertices in consecutive levels.
If, in addition, we allow straight-line edges between consecutive vertices on the same level, then $G$ is \defn{weakly leveled}. Observe that the levels in a weakly leveled plane graph $G$ define a layering of $G$. Leveled plane graphs were first introduced by \citet{STT81}, and have since been well studied \cite{BDDEW19}. For a weakly leveled plane graph $G$ with levels $(L_0,L_1,\dots)$ and a vertex $v\in L_i$, the \defn{up-degree} of $v$ is $|N_G(v)\cap L_{i-1}|$ and the \defn{down-degree} of $v$ is $|N_G(v)\cap L_{i+1}|$. We now give a more natural condition that forces our desired matching between two consecutive levels. \begin{lem}\label{WeaklyLeveledUp} Let~$G$ be a weakly leveled plane graph with inner vertices $I_G$. If each vertex in $I_G$ has down-degree at least $2$, then $G \subsetsim H \boxtimes P$ for some outerplanar graph $H$ and path $P$. Moreover, if $G$ is a leveled plane graph, then $G \subsetsim H \Bow P$. \end{lem} \begin{proof} Let $(L_0,L_1,\dots)$ be the levels of $G$. Observe that if $G$ is a leveled plane graph, then $L_i$ is an independent set for all $i\geq 0$. For each $ i \in [n]$, let $E_i$ be the set of edges in $G[L_{i - 1},L_i]$ between each vertex $v \in L_{i-1}\cap I_G$ and its leftmost neighbour in $ L_i $; see \cref{fig:path-contraction}. For the sake of contradiction, suppose there exists a vertex $u\in L_{i - 1}\cup L_i$ that is incident to two edges in $E_i$. By construction, each vertex in $L_{i-1}\cap I_G$ is incident to at most one edge in $E_i$ so $u\in L_i$. Let $x$ and $y$ be the endpoints in $L_{i-1}$ of these two edges, where $x$ is to the left of $y$. Since $x$ has down-degree at least $2$, $x$ is adjacent to a vertex $v$ that is to the right of $u$. However, this contradicts $G$ being weakly leveled plane since $uy$ and $vx$ cross; see \cref{fig:LevelCrossing}. Therefore, $E_i$ is a matching that saturates $L_{i-1}\cap I_G$. The claim therefore follows by \cref{srtw2-bfs}. \end{proof} \begin{figure}[h!]
\centering \includegraphics[width=0.21\textwidth]{LevelCrossing2} \caption{Contradiction in the proof of \Cref{WeaklyLeveledUp}.} \label{fig:LevelCrossing} \end{figure} We are ready to prove \cref{squaregraphs} which we restate here for convenience. \squaregraphs* \begin{proof} We may assume that $G$ is connected (since if each component of $G$ has the desired product structure, then so does $G$). By taking a \textsc{bfs}-layering of $G$ rooted at any vertex $r$ on the outer-face, \citet{BDDEW19} showed that $G$ is isomorphic to a leveled plane graph. Without loss of generality, assume $G$ is leveled plane with corresponding levels $(L_0,L_1,\dots)$. Below we show that every inner vertex in $G$ has up-degree at most 2. Since each inner vertex has degree at least $4$, each inner vertex has down-degree at least $2$. The result thus follows from \cref{WeaklyLeveledUp}. For the sake of contradiction, suppose there exists an inner vertex with up-degree at least $3$. Let $ i \in [n] $ be minimum such that there is a vertex $ v \in L_i\cap I_G $ with up-degree at least $3$. Let $ u_1, u_2, u_3 $ be neighbours of $v$ in $L_{i-1}$ ordered left to right. Since the levels are defined by a \textsc{bfs}-layering, there is a $(u_1,r)$-path and a $(u_3,r)$-path, neither of which contains $ u_2 $; see \cref{fig:three-parents}. Hence, $ u_2 $ is an inner vertex of $G$ and thus has degree at least $4$. However, by planarity, $ v $ is the only neighbour of $ u_2 $ in $ L_i $. Since $ u_2 $ has no neighbours in $L_{i-1}$ (as $G$ is leveled plane), $u_2$ has at least three neighbours in $ L_{i - 2} $, which contradicts the minimality of $ i $, as required. \end{proof} \begin{figure}[!ht] \centering \includegraphics{three-parents} \caption{Vertex $ v \in L_i $ with three neighbours $ u_1, u_2, u_3 $ in the preceding layer $ L_{i - 1} $. Since $ u_2 $ is an inner vertex, it has degree at least 4.} \label{fig:three-parents} \end{figure} We now give an application of \cref{squaregraphs}.
A colouring $\phi$ of a graph $G$ is \defn{nonrepetitive} if for every path $v_1,\dots,v_{2h}$ in $G$, there exists $i \in [h]$ such that $\phi(v_i)\neq \phi(v_{i+h})$. The \defn{nonrepetitive chromatic number}, $\pi(G)$, is the minimum number of colours in a nonrepetitive colouring of $G$. Nonrepetitive colourings were introduced by \citet{AGHR02} and have since been widely studied; see the survey \citep{Wood21}. \citet{KP-DM08} showed that $\pi(G)\leq 4^{\tw(G)}$ for every graph $G$. Building upon this result, \citet{DEJWW20} proved the following: \begin{lem}[\citep{DEJWW20}]\label{NonrepProduct} For any graph $H$ and path $P$, if $G \subsetsim H \boxtimes P$ then $\pi(G)\leq 4^{\tw(H)+1}$. \end{lem} Using (a variation of) \cref{PGPST,NonrepProduct}, \citet{DEJWW20} resolved a long-standing conjecture of \citet{AGHR02} by showing that planar graphs $G$ have bounded nonrepetitive chromatic number; in particular, $\pi(G) \leq 768$. When $G$ is a squaregraph, \Cref{squaregraphs,NonrepProduct} imply that $\pi(G)\leq 4^3=64$. \section{Tightness}\label{SectionLB} In this section, we show that \cref{squaregraphs} is tight by proving a lower bound for the product structure of bipartite graphs. The \defn{row treewidth} of a graph $G$ is the minimum integer $k$ such that $G\subsetsim H \boxtimes P$ for some graph $H$ with treewidth $k$ and path $P$ \cite{BDJMW}. \cref{PGPST} says that every planar graph has row treewidth at most $6$. \citet{DJMMUW20} showed that the maximum row treewidth of planar graphs is at least $3$. They in fact proved the following stronger result. \begin{thm}[\cite{DJMMUW20}] \label{rtwLB} For all $k,\ell\in\mathbb{N}$ with $k\geq 2$ there is a graph $G$ with pathwidth $k$ such that for any graph $H$ and path $P$, if $G \subsetsim H \boxtimes P \boxtimes K_{\ell}$ then $K_{k+1}\subsetsim H$ and thus $H$ has treewidth at least $k$. Moreover, if $k=2$ then $G$ is outerplanar, and if $k=3$ then $G$ is planar. 
\end{thm} \cref{squaregraphs} says that squaregraphs have row treewidth at most 2. We show that this bound is tight by proving \cref{BipartiteLower} which is an analogous result to \cref{rtwLB} for bipartite graphs. As an introduction to the key ideas in the proof of \cref{BipartiteLower}, we first establish \cref{LowerBoundSubgraph} which is a slight generalisation of \cref{rtwLB}. We need the following lemma for finding long paths in quotient graphs. \begin{lem}\label{lem:BaseCaseLB} Let $a\in\N$, let $ G $ be a graph, and let $(A_x : x\in V(H))$ be an $H$-partition of $G$ such that $|A_x|\leq a$ for all $x \in V(H)$. Then for every $w \in V(H)$ and $ n \in \N $, there is a sufficiently large $ n' \in \N $ such that if $ G $ contains a path on $ n' $ vertices, then $ H-w$ contains a path on $ n $ vertices. \end{lem} \begin{proof} Let $m$ be sufficiently large compared to $n$ and let $n':=(a+1)am+a$. Suppose $G$ contains a path $P$ on $n'$ vertices. Let $G'=G-A_w$. Since $|V(P)\cap A_w|\leq a$, $P$ is split into at most $a + 1$ disjoint subpaths in $G'$. Thus, there is a path $P_{\max}$ in $G'$ with at least $am$ vertices. Let $\tilde{H}$ be the sub-quotient of $H$ with respect to $P_{\max}$. Observe that $\tilde{H}$ is connected and that $|V(\tilde{H})|\geq am/a = m $. Moreover, $\tilde{H}\sse H-w$ since $A_w\cap V(P_{\max})=\emptyset$. Now $\tilde{H}$ has maximum degree at most $2a$ since every vertex in $P_{\max}$ has degree at most~$2$. Thus, since $m$ is sufficiently large, $\tilde{H}$ contains a path on at least $n$ vertices, as required. \end{proof} The following result generalises \cref{rtwLB} (which is the $n= 2$ case). \begin{prop}\label{LowerBoundSubgraph} For all $k, \ell,n \in \N$ there exists a graph $G$ with pathwidth at most $k+1$ such that for any graph $H$ and path $P$, if $G \subsetsim H \boxtimes P \boxtimes K_{\ell}$ then $P_n+K_k\subsetsim H$. \end{prop} \begin{proof} We proceed by induction on $k\geq 1$. Let $ n' $ be sufficiently large compared to $n$.
Let~$G^{(1)}$ be the graph obtained from a path on $n'$ vertices by adding a dominant vertex $v$. Observe that $ G^{(1)} $ has radius 1 and pathwidth at most $2$. Suppose $ G^{(1)} \subsetsim H \boxtimes P \boxtimes K_{\ell}$ for some graph~$H$ and path~$P$. By \cref{OrthogonalPartitions}, there is a layered $H$-partition $(A_x : x \in V(H))$ of~$G^{(1)}$ of width~$\ell$. Let $w\in V(H)$ be such that $v \in A_w$. Since $G^{(1)}$ has radius $1$, every layering of $G^{(1)}$ consists of at most three layers, so $|A_x|\leq 3\ell$ for all $x \in V(H)$. By \cref{lem:BaseCaseLB} and since $n'$ is sufficiently large, $ H-w $ contains a path on $ n $ vertices. As~$v$ is dominant in $ G^{(1)} $, $w$ is also dominant in $ H $. Thus $P_n+K_1 \subsetsim H$. Now suppose $k > 1$ and let $G^{(k-1)}$ be a graph that satisfies the induction hypothesis for $k-1$. Let~$G^{(k)}$ be obtained by taking $3\ell$ disjoint copies of~$G^{(k-1)}$ and adding a dominant vertex~$v$. Then $G^{(k)}$ has pathwidth at most $k+1$. As in the base case, let $(A_x : x \in V(H))$ be a layered~$H$-partition of~$G^{(k)}$ of width~$\ell$. Let $w \in V(H)$ be such that $v \in A_w$. Since $G^{(k)}$ has radius $1$, every layering of $G^{(k)}$ consists of at most three layers, so $|A_w-\{v\}|\leq 3\ell-1$. Thus, there is a copy of $G^{(k-1)}$ that contains no vertices from~$A_w$. Now consider the sub-quotient $\tilde{H}$ of $H$ with respect to this copy of $G^{(k-1)}$. By induction, $P_n+K_{k-1}\subsetsim \tilde{H}$. Since~$v$ is dominant in $G^{(k)}$, $w$ is dominant in $H$ and thus $P_n+K_k\subsetsim H$, as required. \end{proof} Note that in \cref{LowerBoundSubgraph}, the graph $G^{(1)}$ is outerplanar and the graph $G^{(2)}$ is planar for every $n \in \N$. We now prove our main lower bound which is a bipartite version of \cref{LowerBoundSubgraph}.
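Before stating it, here is a small sanity check (a Python sketch, purely illustrative) of the radius argument used in the proof above: for the graph $G^{(1)}$, a path plus a dominant vertex, every \textsc{bfs}-layering has at most three layers. (The proof needs this for all layerings, which follows since the diameter is at most $2$; the code checks \textsc{bfs}-layerings only.)

```python
from collections import deque

def bfs_layer_count(adj, root):
    """Number of layers in the bfs-layering rooted at `root`."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist.values()) + 1

# G^(1): a path 0-1-...-9 plus a dominant vertex "v".
n = 10
adj = {i: ({i - 1, i + 1} & set(range(n))) | {"v"} for i in range(n)}
adj["v"] = set(range(n))

assert bfs_layer_count(adj, "v") == 2                  # radius 1 from the dominant vertex
assert max(bfs_layer_count(adj, r) for r in adj) == 3  # at most three layers from any root
```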
\begin{thm} \label{BipartiteLower} For all $i,j,k, \ell,n \in \N$ where $i+j=k$, there exists a bipartite graph $G$ with pathwidth at most $k+1$ such that for any graph $H$ and path $P$, if $G \subsetsim H \boxtimes P \boxtimes K_{\ell}$ then $P_n+K_{i,j}$ is a $2$-small minor of $H$. \end{thm} \begin{proof} Let $P_n=(a_1,\dots,a_n)$ be a path on $n$ vertices. Let $B=\{b_1,\dots,b_i\}$ and $C=\{c_1,\dots,c_j\}$ be the bipartition of $V(K_{i,j})$. We proceed by induction on $k$ with the following hypothesis: for every $i,j,k, \ell,n \in \N$ where $i+j=k$, there exists a red-blue coloured connected bipartite graph $G$, such that for any graph $H$, if $(A_x : x \in V(H))$ is a layered $H$-partition of $G$ of width $\ell$, then $H$ contains a model $\mu$ of $P_n+K_{i,j}$ such that for each $u \in V(P_n+K_{i,j})$ we have $|V(\mu(u))|\leq 2$ and $\bigcup_{a \in V(\mu(u))}A_a$ contains: \begin{enumerate} \item a red vertex when $u\in B$; \item a blue vertex when $u \in C$; and \item a red and a blue vertex when $u \in V(P_n)$. \end{enumerate} The claimed theorem follows by \cref{OrthogonalPartitions}. For~$k = 1$ we may assume that $i=1$ and $j=0$. Let $ n' $ be sufficiently large and let~$G^{(1,0)}$ be the bipartite graph obtained from a red-blue coloured path $P_G=(u_1,\dots,u_{n'})$ on $n'$ vertices plus a red vertex $v$ adjacent to all the blue vertices in $V(P_G)$. Observe that $ G^{(1,0)} $ has radius 2 and pathwidth at most $2$. Let $(A_x : x \in V(H))$ be a layered $H$-partition of~$G^{(1,0)}$ of width~$\ell$. Let $w\in V(H)$ be such that $v \in A_w$. Then $A_w$ contains a red vertex. Since $G^{(1,0)}$ has radius $2$, every layering of $G^{(1,0)}$ has at most five layers, so $|A_x|\leq 5\ell$ for all $x \in V(H)$. By \cref{lem:BaseCaseLB} and since $n'$ is sufficiently large, $ H-w $ contains a path $P_H=(a_1',\dots,a_{2n}')$ on $ 2n $ vertices. 
Now for every edge $a_t' a_{t+1}'\in E(P_H)$, there exists $s \in [n'-1]$ such that $u_s,u_{s+1}\in A_{a_t'}\cup A_{a_{t+1}'}$. As such, $A_{a_t'}\cup A_{a_{t+1}'}$ contains a red and a blue vertex. For all $t \in [n]$, let $\mu(a_t)=H[\{a_{2t-1}',a_{2t}'\}]$ and let $\mu(b_1)=H[\{w\}]$. Then $\mu$ is a model of $P_n+K_{1,0}$ in $H$ which satisfies the induction hypothesis. Now suppose $k > 1$ and that there is a red-blue coloured connected bipartite graph $G^{(i-1,j)}$ such that for any graph $H$, if $(A_x : x \in V(H))$ is a layered $H$-partition of $G^{(i-1,j)}$ of width $\ell$, then $H$ contains a model $\tilde{\mu}$ of $P_n+K_{i-1,j}$ where $|V(\tilde{\mu}(u))|\leq 2$ for all $u \in V(P_n+K_{i-1,j})$ and $\bigcup_{a \in V(\tilde{\mu}(u))}A_a$ contains a red vertex when $u\in B$; a blue vertex when $u \in C$; and a red and a blue vertex when $u \in V(P_n)$. Let~$G^{(i,j)}$ be obtained by taking $5\ell$ disjoint copies of~$G^{(i-1,j)}$ and adding a red vertex~$v$ that is adjacent to all the blue vertices. Then $G^{(i,j)}$ has radius $2$ and pathwidth at most $k+1$. As in the base case, let $(A_x : x \in V(H))$ be a layered~$H$-partition of~$G^{(i,j)}$ of width~$\ell$. Let $w \in V(H)$ be such that $v \in A_w$. Then $A_w$ contains a red vertex. Since $G^{(i,j)}$ has radius $2$, $|A_w-\{v\}|\leq 5\ell-1$. Thus, there is a copy of $G^{(i-1,j)}$ that contains no vertices from~$A_w$. Now consider the sub-quotient $\tilde{H}$ of $H$ with respect to this copy of $G^{(i-1,j)}$. By induction, $\tilde{H}$ contains a model $\tilde{\mu}$ which satisfies the induction hypothesis. Let $\mu(b_i)=H[\{w\}]$ and $\mu(u)=\tilde{\mu}(u)$ for all $u \in V(P_n+K_{i-1,j})$. Since~$v$ is adjacent to all the blue vertices in $G^{(i,j)}$ and $\bigcup_{a\in V(\mu(u))}A_a$ contains a blue vertex whenever $u \in V(P_n)\cup C$, the vertex $w$ is adjacent in $H$ to some vertex of $\mu(u)$ for each such $u$. Thus $\mu$ is a model of $P_n+K_{i,j}$ in $H$ which satisfies the induction hypothesis, as required.
\end{proof} \begin{figure}[!h] \centering \includegraphics[width=0.85\textwidth]{BipartiteLowerBound} \caption{The graphs $G^{(1,0)}$ and $G^{(1,1)}$ from \cref{BipartiteLower}.} \label{fig:BipartiteLower} \end{figure} We now highlight several consequences of \cref{BipartiteLower}. First, when $i=1$ and $j=0$, the graph $G^{(1,0)}$ is an outerplanar squaregraph as illustrated in \cref{fig:BipartiteLower}. Since $P_2+K_{1,0}$ is a $3$-cycle, we have the following: \begin{cor} For every $\ell \in \N$, there exists a squaregraph $G$ such that for any graph $H$ and path $P$, if $G \subsetsim H \boxtimes P \boxtimes K_{\ell}$ then $H$ contains a cycle of length at most 6. \end{cor} Thus \Cref{squaregraphs} is best possible in the sense that ``outerplanar graph'' cannot be replaced by ``forest''. Second, when $i=j=1$, the graph $G^{(1,1)}$ is a bipartite planar graph, as illustrated in \cref{fig:BipartiteLower}. Since $P_2+K_{1,1}\cong K_4$ which has treewidth $3$, we have the following: \begin{cor} For every $\ell \in \N$, there exists a bipartite planar graph $G$ such that for any graph $H$ and path $P$, if $G \subsetsim H \boxtimes P \boxtimes K_{\ell}$ then $H$ contains a 2-small minor of $K_4$ and thus $\tw(H)\geq 3$. \end{cor} Therefore, the maximum row treewidth of bipartite planar graphs is at least $3$. We conclude this section with the following open problem: what is the maximum row treewidth of bipartite planar graphs? As in the case of (non-bipartite) planar graphs, the answer is in $\{3,4,5,6\}$. \section{Infinite Squaregraphs} \label{Infinite} In this section by `graph' we mean a graph $G$ with $V(G)$ finite or countably infinite. \citet{HMSTW} showed how \cref{PGPST} can be used to construct a graph that contains every planar graph as a subgraph and has several interesting properties. Here we adapt their methods to construct an analogous graph that contains every squaregraph as a subgraph. 
\citet{BCE10} gave several equivalent definitions of an infinite squaregraph. The following definition suits our purposes. Let $G$ be a locally finite\footnote{A graph $G$ is \defn{locally finite} if every vertex of $G$ has finite degree.} graph. For every vertex $v$ of $G$ and every $r\in\mathbb{N}$ the subgraph $G[ \{ w\in V(G): \dist_G(v,w)\leq r\} ]$ is called a \defn{ball}. Since $G$ is locally finite, every ball is finite. An infinite graph $G$ is a \defn{squaregraph} if it is locally finite and every ball in $G$ is a squaregraph. Let $\overrightarrow{P}$ be the 1-way infinite path, which has vertex-set $\mathbb{N}_0$ and edge-set $\{ \{i,i+1\} : i \in\mathbb{N}_0 \}$. It is well known that there is a \defn{universal} outerplanar graph $O$. This means that $O$ is outerplanar and every outerplanar graph is isomorphic to a subgraph of $O$. See Theorem~4.14 in \citep{HMSTW} for an explicit definition of $O$. \begin{thm} \label{InfiniteSquaregraph} Every squaregraph is isomorphic to a subgraph of $O\Bow \overrightarrow{P}$. \end{thm} \cref{InfiniteSquaregraph} follows from \cref{squaregraphs} and the next lemma, which is an adaptation of Lemma~5.3 in \citep{HMSTW}. \begin{lem} Let $H$ be a graph. Let $G$ be a locally finite graph such that $B\subsetsim H\Bow \overrightarrow{P}$ for every ball $B$ in $G$. Then $G\subsetsim H \Bow \overrightarrow{P}$. \end{lem} \begin{proof}[Proof Sketch] Fix $v\in V(G)$. For $n\in\mathbb{N}_0$, let $V_n :=\{w\in V(G): \dist_G(v,w)=n\}$ and $G_n:=G[ V_0\cup V_1 \cup\dots \cup V_n]$. So $G_n$ is a finite ball in $G$. By assumption, $G_n \subsetsim H \Bow \overrightarrow{P}$. Let $X_n$ be the set of all thin layered $H$-partitions $(\Pcal,\mathcal{L})$ of $G_n$, such that $L$ is an independent set in $G_n$ for each $L \in \mathcal{L}$. By \cref{BowPartitions}, $X_n\neq\emptyset$. Since $G_n$ is finite and connected, $X_n$ is finite. 
For each $n\in\mathbb{N}$ and for each $(\Pcal,\mathcal{L})\in X_n$, if $\Pcal':= \{ Y \setminus V_n : Y \in \Pcal, Y \setminus V_n\neq\emptyset\}$ and $\Lcal':= \{ L \setminus V_n : L \in \Lcal, L \setminus V_n\neq\emptyset\}$ then $(\Pcal',\Lcal')\in X_{n-1}$ (since $G_{n-1}$ is connected). By K\H{o}nig's Lemma, there is an infinite sequence $(\Pcal_0,\Lcal_0), (\Pcal_1,\Lcal_1), (\Pcal_2,\Lcal_2), \dots$ where $\Pcal_{n-1}=\Pcal'_{n}$ and $\Lcal_{n-1}=\Lcal'_{n}$ for each $n\in\mathbb{N}$. By construction, $\Pcal_{n-1}$ is a `sub-partition' of $\Pcal_{n}$ and $\Lcal_{n-1}$ is a `sub-partition' of $\Lcal_{n}$. Let $\Pcal:= \bigcup_{n\in\mathbb{N}_0} \Pcal_n$ and $\Lcal:= \bigcup_{n\in\mathbb{N}_0} \Lcal_n$. Then $(\Pcal,\Lcal)$ is a thin layered $H$-partition of $G$; see \citep{HMSTW} for details. By \cref{BowPartitions}, $G\subsetsim H \Bow \overrightarrow{P}$. \end{proof} \subsection*{Acknowledgement} This research was initiated at the workshop, \emph{Geometric Graphs and Hypergraphs}, 30 August -- 3 September 2021, organised by Torsten Ueckerdt and Lena Yuditsky. Thanks to the organisers and other participants for creating a productive environment. \fontsize{11}{12}\selectfont \let\oldthebibliography=\thebibliography \let\endoldthebibliography=\endthebibliography \renewenvironment{thebibliography}[1]{\begin{oldthebibliography}{#1}\setlength{\parskip}{0.3ex}\setlength{\itemsep}{0.3ex}}{\end{oldthebibliography}} \bibliographystyle{DavidNatbibStyle}
https://arxiv.org/abs/1310.4112
Subalgebras of the Fomin-Kirillov algebra
The Fomin-Kirillov algebra $\mathcal E_n$ is a noncommutative quadratic algebra with a generator for every edge of the complete graph on $n$ vertices. For any graph $G$ on $n$ vertices, we define $\mathcal E_G$ to be the subalgebra of $\mathcal E_n$ generated by the edges of $G$. We show that these algebras have many parallels with Coxeter groups and their nil-Coxeter algebras: for instance, $\mathcal E_G$ is a free $\mathcal E_H$-module for any $H\subseteq G$, and if $\mathcal E_G$ is finite-dimensional, then its Hilbert series has symmetric coefficients. We determine explicit monomial bases and Hilbert series for $\mathcal E_G$ when $G$ is a simply-laced finite Dynkin diagram or a cycle, in particular showing that $\mathcal E_G$ is finite-dimensional in these cases. We also present conjectures for the Hilbert series of $\mathcal E_{\tilde{D}_n}$, $\mathcal E_{\tilde{E}_6}$, and $\mathcal E_{\tilde{E}_7}$, as well as for which graphs $G$ on six vertices $\mathcal E_G$ is finite-dimensional.
\section{Introduction} The Fomin-Kirillov algebra $\mathcal E_n$ \cite{FK} is a certain noncommutative algebra with generators $x_{ij}$ for $1 \leq i < j \leq n$ that satisfy a simple set of quadratic relations. \iffalse \begin{definition}The \emph{Fomin-Kirillov algebra} $\mathcal E_n$ is the quadratic algebra (say, over $\mathbb{Q}$) with generators $x_{ij}=-x_{ji}$ for $1 \leq i < j \leq n$ with the following relations: \begin{itemize} \item $x_{ij}^2 = 0$ for distinct $i,j$; \item $x_{ij}x_{kl} = x_{kl}x_{ij}$ for distinct $i,j,k,l$; \item $x_{ij}x_{jk}+x_{jk}x_{ki}+x_{ki}x_{ij}=0$ for distinct $i,j,k$. \end{itemize} \end{definition} \fi While it was originally introduced as a tool to study the structure constants for Schubert polynomials, since then the Fomin-Kirillov algebra and its generalizations have received much attention from the perspectives of both combinatorics and algebra: see, for instance, \cite{Bazlov, FP, Kirillov, KirillovMaeno, Lenart, LenartMaeno, Majid, MPP, MS, P, Vendramin}. But despite its simple presentation, even some basic questions about $\mathcal E_n$ have eluded an answer thus far, such as whether or not it is finite-dimensional for $n \geq 6$. In order to better understand the structure of $\mathcal E_n$, we consider the following subalgebras: \[ \parbox{14cm}{for any graph $G$ on vertices $1, 2, \dots, n$, the \emph{Fomin-Kirillov algebra ${\mathcal E_G}$ of $G$} is the subalgebra of $\mathcal E_n$ generated by $x_{ij}$ for all edges $\overline{ij}$ in $G$.} \] While this definition might seem hopelessly ingenuous, our initial computations using the algebra package \texttt{bergman} \cite{bergman} revealed that, remarkably, whenever $G$ is a graph with at most five vertices, $\mathcal E_G$ has a one-dimensional top degree component, and in fact its Hilbert series has symmetric coefficients. (See the Appendix for these computations.) 
Our current study is therefore dedicated to an investigation of these beautiful, yet mysterious algebras. \medskip The first half of this paper is devoted to proving structural properties of Fomin-Kirillov algebras. We prove that any finite-dimensional $\mathcal E_G$ has a Hilbert series with symmetric coefficients, as well as that $\mathcal E_G$ is a free $\mathcal E_H$-module whenever $H$ is a subgraph of $G$. We also demonstrate that the Fomin-Kirillov algebras exhibit a striking amount of structure, much of which parallels Coxeter groups and nil-Coxeter algebras: for instance, we describe analogues of minimal coset representatives, descent sets, Bruhat order, and long words. The key tools for proving these facts can be derived from a braided Hopf algebra structure on $\mathcal E_n$ \cite{FP, MS}---in this context, the subalgebras $\mathcal E_G$ are \emph{(left) coideal subalgebras}. Coideal subalgebras are important objects in the study of Hopf algebras, and they sometimes possess freeness and symmetry properties analogous to the ones that we exhibit for $\mathcal E_G$ \cite{Masuoka}. There also exists a certain symmetric bilinear form on $\mathcal E_n$ whose nondegeneracy (known for $n \leq 5$ \cite{Grana}) implies that $\mathcal E_n$ is a special type of braided Hopf algebra called a Nichols algebra \cite{AS, MS}. Though Nichols algebras have been studied since \cite{Nichols}, the many parallels to Coxeter groups demonstrated here have not been observed in the literature on Nichols algebras to our knowledge. In our exposition below, we will not assume familiarity with braided Hopf algebras or Nichols algebras. We refer the reader to \cite{AS} for more details about these objects. \medskip A particularly exciting part of this study is the abundance of graphs $G$ for which $\mathcal E_G$ is finite-dimensional---a much larger class than that of finite Coxeter groups---and the combinatorial mysteries awaiting discovery here. 
In the second half of this paper, we present results obtained so far about these finite-dimensional algebras. Specifically, we determine explicit monomial bases and Hilbert series for $\mathcal E_G$ when $G$ is a simply-laced Dynkin diagram or a cycle. Here, another surprising connection between the Fomin-Kirillov algebras and Coxeter groups appears: for $G$ a simply-laced Dynkin diagram, the dimension of $\mathcal E_G$ is equal to the order of the Weyl group of $G$ divided by the index of connection. In the case when $G$ is a cycle on $n$ vertices, we describe $\mathcal E_G$ explicitly as a quotient of the nil-Coxeter algebra of the affine symmetric group, showing that it is essentially a $q=0$ version of the twisted product of the group algebra of the symmetric group and the ring of coinvariants. We also present intriguing conjectures for the Hilbert series of $\mathcal E_{\tilde{D}_n}$, $\mathcal E_{\tilde{E}_6}$, and $\mathcal E_{\tilde{E}_7}$, as well as for which graphs $G$ on six vertices $\mathcal E_G$ is finite-dimensional. \medskip This paper is organized as follows. In Section 2, we introduce $\mathcal E_n$ and briefly describe some examples of $\mathcal E_G$. In Section 3, we discuss some structural properties of $\mathcal E_n$ and prove that whenever $\mathcal E_G$ is finite-dimensional, its Hilbert series has symmetric coefficients. In Section 4, we prove that when $H$ is a subgraph of $G$, $\mathcal E_G$ is a free $\mathcal E_H$-module. We also show that $\mathcal E_n$ has a tensor product decomposition with factors given by certain complementary $\mathcal E_G$. In Section 5, we discuss Coxeter groups and nil-Coxeter algebras, as well as their relationships and similarities to the subalgebras $\mathcal E_G$. In Section 6, we describe $\mathcal E_G$ for $G$ a simply-laced Dynkin diagram, computing its Hilbert series in each case. 
In Section 7, we describe $\mathcal E_G$ when $G$ is a cycle (that is, an affine Dynkin diagram of type $\tilde{A}_{n-1}$). Finally, we close in Section 8 with some open questions and conjectures to guide further research. We also include an Appendix containing the Hilbert series of $\mathcal E_G$ for all connected graphs $G$ on at most five vertices. \section{Preliminaries} We begin with the definition of the Fomin-Kirillov algebra \cite{FK}. \begin{defn} The \emph{Fomin-Kirillov algebra} $\mathcal E_n$ is the quadratic algebra (say, over $\mathbb Q$) with generators $x_{ij}=-x_{ji}$ for $1 \leq i < j \leq n$ with the following relations: \begin{itemize} \item $x_{ij}^2 = 0$ for distinct $i,j$; \item $x_{ij}x_{kl} = x_{kl}x_{ij}$ for distinct $i,j,k,l$; \item $x_{ij}x_{jk}+x_{jk}x_{ki}+x_{ki}x_{ij}=0$ for distinct $i,j,k$. \end{itemize} \end{defn} Let $V$ be the vector space spanned by the generators $x_{ij}$. Then $\mathcal E_n$ is a quotient of the tensor algebra $T(V) = \bigoplus_{n \geq 0} V^{\otimes n}$, the free associative algebra on the generators of $\mathcal E_n$. Since the relations are homogeneous, $\mathcal E_n$ is graded with respect to the usual degree. We will denote the degree $d$ part by $\mathcal E_n^d$. Note that $\mathcal E_n$ has another grading with respect to the symmetric group $\S_n$: define the $\S_n$-degree of $x_{ij}$ to be $\sigma_{ij} \in \S_n$, the transposition switching $i$ and $j$, and extend this $\S_n$-degree to all monomials in $T(V)$ by multiplicativity. Since each of the relations in $\mathcal E_n$ is homogeneous with respect to $\S_n$-degree, this gives an $\S_n$-grading on $\mathcal E_n$. We will write $\sigma_P$ for the $\S_n$-degree of a homogeneous element $P \in \mathcal E_n$. (We will always specify when we mean $\S_n$-degree; the use of ``degree'' unqualified will refer to the usual notion of degree.) There is a third grading related to the support of each monomial. 
For a monomial $m \in \mathcal E_n$, define $\Pi(m)$ to be the coarsest set partition of $[n]$ for which $i$ and $j$ lie in the same part if $x_{ij}$ or $x_{ji}$ appears in $m$. For instance, $\Pi(x_{12}x_{23}x_{45}x_{31}) = 123|45$. Then all of the relations of $\mathcal E_n$ are homogeneous with respect to $\Pi$. Note that $\Pi(m_1m_2)$ is the common coarsening of $\Pi(m_1)$ and $\Pi(m_2)$. The set of relations of $\mathcal E_n$ is also symmetric with respect to the indices. In other words, for all $\sigma \in \S_n$, there exists an automorphism of $\mathcal E_n$ given by $\sigma(x_{ij}) = x_{\sigma(i)\sigma(j)}$. Also note that $\mathcal E_n$ is isomorphic to its opposite algebra: the linear map $\operatorname{rev}\colon T(V) \to T(V)$ sending any monomial to the product of the same generators but in the reverse order preserves the set of relations and thus gives an antiautomorphism of $\mathcal E_n$. \begin{rmk} The natural category for $\mathcal E_n$ is the \emph{Yetter-Drinfeld category} over $\mathbb Q[S_n]$, which is the braided monoidal category consisting of $S_n$-graded $S_n$-modules $M = \bigoplus_{\sigma \in S_n} M_\sigma$ satisfying $\sigma(M_\pi) \subset M_{\sigma\pi\sigma^{-1}}$. \end{rmk} \subsection{Hilbert series} \label{sec-hilbert} Let $\mathcal H_n(t)$ be the Hilbert series of $\mathcal E_n$. Values of $\mathcal H_n(t)$ for small values of $n$ are given as follows. (To simplify expressions, we will often write $[k] = 1+t+t^2+\cdots+t^{k-1}$.) \begin{align*} \mathcal H_1(t) =&\; 1\\ \mathcal H_2(t) =&\; [2]\\ \mathcal H_3(t) =&\; [2]^2[3]\\ \mathcal H_4(t) =&\; [2]^2[3]^2[4]^2\\ \mathcal H_5(t) =&\; [4]^4[5]^2[6]^4\\ \mathcal H_6(t) =&\; 1+15t+125t^2+765t^3+3831t^4+16605t^5+64432t^6\\ &+228855t^7+755777t^8+2347365t^9+6916867t^{10}+\cdots \end{align*} A closed form for $\mathcal H_n$ is not known for $n \geq 6$. It is not even known whether $\mathcal E_n$ is finite-dimensional for $n \geq 6$. 
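For instance, since each factor $[k]$ evaluates to $k$ at $t=1$, the closed forms above immediately give the total dimensions in the finite-dimensional cases:
\begin{align*}
\dim \mathcal E_3 &= \mathcal H_3(1) = 2^2\cdot 3 = 12,\\
\dim \mathcal E_4 &= \mathcal H_4(1) = 2^2\cdot 3^2\cdot 4^2 = 576,\\
\dim \mathcal E_5 &= \mathcal H_5(1) = 4^4\cdot 5^2\cdot 6^4 = 8294400.
\end{align*}
The rapid growth of these dimensions gives some indication of the difficulty of the case $n=6$.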
\subsection{Subalgebras} In order to better understand $\mathcal E_n$, we will study its subalgebras. Such algebras are mentioned by Kirillov in the introduction of \cite{Kirillov}, and analogues for other root systems are also studied in unpublished work of Bazlov and Kirillov \cite{Kirillov2}. \begin{defn} For any graph $G$ with vertex set $[n]$, the \emph{Fomin-Kirillov algebra} ${\mathcal E_G}$ of $G$ is the subalgebra of $\mathcal E_n$ generated by $x_{ij}$ for all edges $\overline{ij}$ in $G$. \end{defn} We will write $\mathcal E_G^d$ for the degree $d$ part of $\mathcal E_G$ and $\mathcal E_G^+ = \bigoplus_{d \geq 1} \mathcal E_G^d$ for the positive degree part of $\mathcal E_G$. \medskip Note that by this definition, $\mathcal E_n=\mathcal E_{K_n}$. If graphs $G$ and $G'$ are isomorphic, then so are the algebras $\mathcal E_G$ and $\mathcal E_{G'}$ since we can apply the automorphism of $\mathcal E_n$ that permutes the indices appropriately. Moreover, since $\mathcal E_n$ is a subalgebra of $\mathcal E_{n+1}$ (as can be seen from considering $\Pi$-degree), $\mathcal E_G$ does not depend on the choice of $n$---that is, it does not change if we add or remove isolated vertices. Finally, if $G$ and $H$ are graphs on disjoint vertex sets, then all variables in $\mathcal E_G$ commute with all variables in $\mathcal E_H$, so $\mathcal E_{G+H} \cong \mathcal E_G \otimes \mathcal E_H$. \medskip Given a subalgebra $\mathcal E_G$, we will write $\mathcal H_G(t)$ for its Hilbert series. Values of $\mathcal H_G$ for all connected graphs $G$ with at most five vertices are given in the Appendix. Computations were performed using the algebra package \texttt{bergman} \cite{bergman}. Although one might not necessarily expect $\mathcal H_G(t)$ to be particularly nice in general, a quick glance at the Appendix shows that in fact $\mathcal H_G(t)$ is a product of cyclotomic polynomials for all $G$ with at most five vertices. 
\subsection{Examples} \label{sec-examples} Note that it is not easy to give a presentation of $\mathcal E_G$: while $\mathcal E_n$ is defined by quadratic relations, $\mathcal E_G$ will usually have minimal relations that are not quadratic, possibly of much higher degree. (We may sometimes omit the word ``minimal'' when referring to minimal relations.) To illustrate this, we start with a few examples. \begin{ex}[The Dynkin diagram $A_3$]\label{ex-a3} Let $G=A_3$ be the path with two edges, as shown in Figure~\ref{fig-a3}. The only quadratic relations in $\mathcal E_{A_3}$ are those that come from the definition of $\mathcal E_3$, namely $\mathsf{a}^2=\mathsf{b}^2=0$. However, $\mathcal E_{A_3}$ has other relations: in $\mathcal E_3$, there is a quadratic relation involving the third edge $\mathsf{c}$: \begin{equation} \label{eq1} 0=\mathsf{ab}+\mathsf{bc}+\mathsf{ca} \tag{$*$} \end{equation} Multiplying \eqref{eq1} on the right by $\mathsf{a}$ gives \[0=\mathsf{aba}+\mathsf{bca}+\mathsf{caa}=\mathsf{aba}+\mathsf{bca},\] while multiplying \eqref{eq1} on the left by $\mathsf{b}$ gives \[0=\mathsf{bab}+\mathsf{bbc}+\mathsf{bca}=\mathsf{bab}+\mathsf{bca}.\] Equating these, we deduce that $\mathsf{aba}=\mathsf{bab}$ in $\mathcal E_{A_3}$, which we call a \emph{braid relation}. We will see in Theorem~\ref{thm-a} that these generate all relations, so that \[\mathcal E_{A_3} = \langle \mathsf{a},\mathsf{b} \mid \mathsf{a}^2=\mathsf{b}^2=0, \;\; \mathsf{aba}=\mathsf{bab}\rangle.\] Then $\mathcal E_{A_3}$ is the \emph{nil-Coxeter algebra} of type $A_2$. 
It has basis $\{\mathrm{id}, \mathsf{a}, \mathsf{b}, \mathsf{ab}, \mathsf{ba}, \mathsf{aba}\}$ and Hilbert series \[\mathcal H_{A_3}(t) = 1+2t+2t^2+t^3 = (1+t)(1+t+t^2) = [2][3].\] \end{ex} \begin{figure} \begin{center} \begin{tikzpicture} \node[v] (i) at (0,0) {}; \node[v] (j) at (2,0) {}; \node[v] (k) at (4,0) {}; \draw[thick, ->] (i) -- node[above]{$\mathsf{a}$} (j); \draw[thick, ->] (j) -- node[above]{$\mathsf{b}$} (k); {\draw[dashed, thick, ->] (k.south) to [out = -135, in=-45] node[below]{$\mathsf{c}$} (i.south);} \end{tikzpicture} \end{center} \caption{\label{fig-a3} The Dynkin diagram $A_3$ consists of the two solid edges. The label on an edge directed from vertex $i$ to vertex $j$ represents the generator $x_{ij}$.} \end{figure} \begin{ex}[The star on four vertices] \label{ex-star3} Consider the star graph $K_{1,3}$ on four vertices, as shown on the left of Figure~\ref{fig-star}. A presentation of $\mathcal E_{K_{1,3}}$ is given by three quadratic relations: \[\mathsf{a}^2=\mathsf{b}^2=\mathsf{c}^2=0;\] three braid relations: \[\mathsf{aba}+\mathsf{bab}=\mathsf{bcb}+\mathsf{cbc}=\mathsf{cac}+\mathsf{aca}=0;\] and two \emph{claw relations}: \[\mathsf{abca}+\mathsf{bcab}+\mathsf{cabc}=\mathsf{acba}+\mathsf{bacb}+\mathsf{cbac}=0.\] \end{ex} The braid and claw relations are special cases of the following \emph{cyclic relations}. \begin{lemma} \label{lemma-cyclic}\cite[Lemma 7.2]{FK} For $m=3, \ldots, n,$ and any distinct $a_1, \ldots, a_m \in [n]$ the following relation holds in $\mathcal E_n$: \[\sum_{i=2}^m x_{a_1, a_i}x_{a_1, a_{i+1}}\cdots x_{a_1, a_m}x_{a_1, a_2}x_{a_1,a_3}\cdots x_{a_1, a_i}=0.\] \end{lemma} However, even star graphs have minimal relations that are not of this type. 
\begin{ex}[The star on five vertices] \label{ex-star4} For the star graph $K_{1,4}$ on five vertices, as shown in the middle of Figure~\ref{fig-star}, $\mathcal E_{K_{1,4}}$ has a presentation consisting of four quadratic relations, six braid relations, eight claw relations, six quartic cyclic relations, and three sextic relations of the following form: \[\mathsf{abacdc}-\mathsf{abcdca}+\mathsf{acdcba}+\mathsf{bacdcb}-\mathsf{bcdcab}-\mathsf{cabadc}+\mathsf{cdabac}-\mathsf{cdcaba}+\mathsf{dabacd}-\mathsf{dcabad}=0.\] \end{ex} \begin{figure} \begin{center} \vc{ \begin{tikzpicture} \node[v] (1) at (0,0){}; \node[v] (2) at (-1,1){}; \node[v] (3) at (1,1){}; \node[v] (5) at (1, -1){}; \draw[thick, ->] (1) -- node[above right]{$\mathsf{a}$} (2); \draw[thick, ->] (1) -- node[below right]{$\mathsf{b}$} (3); \draw[thick, ->] (1) -- node[below left]{$\mathsf{c}$} (5); \end{tikzpicture} } \quad\quad\quad \vc{ \begin{tikzpicture} \node[v] (1) at (0,0){}; \node[v] (2) at (-1,1){}; \node[v] (3) at (1,1){}; \node[v] (4) at (-1,-1){}; \node[v] (5) at (1, -1){}; \draw[thick, ->] (1) -- node[above right]{$\mathsf{a}$} (2); \draw[thick, ->] (1) -- node[below right]{$\mathsf{b}$} (3); \draw[thick, ->] (1) -- node[below left]{$\mathsf{c}$} (5); \draw[thick, ->] (1) --node[above left]{$\mathsf{d}$} (4); \end{tikzpicture} } \quad\quad\quad \vc{ \begin{tikzpicture} \node[v] (1) at (0,0){}; \node[v] (2) at (1.5,0){}; \node[v] (3) at (1.5,-1.5){}; \node[v] (4) at (0,-1.5){}; \draw[thick, ->] (1) -- node[above]{$\mathsf{a}$} (2); \draw[thick, ->] (2) -- node[right]{$\mathsf{b}$} (3); \draw[thick, ->] (3) -- node[below]{$\mathsf{c}$} (4); \draw[thick, ->] (4) --node[left]{$\mathsf{d}$} (1); \end{tikzpicture} } \end{center} \caption{\label{fig-star} On the left, the star $K_{1,3}$ on four vertices. In the middle, the star $K_{1,4}$ on five vertices. 
On the right, the $4$-cycle $\tilde{A}_3$.} \end{figure} \begin{ex}[The 4-cycle $\tilde{A}_3$] \label{ex-a3tilde} For the 4-cycle $\tilde{A}_3$, as shown on the right of Figure~\ref{fig-star}, $\mathcal E_{\tilde{A}_3}$ has a presentation consisting of four quadratic relations, four braid relations, and the following three relations: \[\mathsf{abc}+\mathsf{bcd}+\mathsf{cda}+\mathsf{dab} = 0,\] \[\mathsf{cba}+\mathsf{dcb}+\mathsf{adc}+\mathsf{bad} = 0,\] \[\mathsf{abda}+\mathsf{bcab}+\mathsf{cdbc}+\mathsf{dacd}+\mathsf{acbd} + \mathsf{bdac} = 0.\] \end{ex} The complexity of the relations in $\mathcal E_G$ increases quickly as the number of edges increases. Despite this, the Fomin-Kirillov algebras often seem to be relatively well-behaved. \section{Structure} In this section, we will describe some structural properties of $\mathcal E_n$ and $\mathcal E_G$. In particular, we will show that if $\mathcal E_G$ is finite-dimensional, then its Hilbert series has symmetric coefficients. \subsection{A bilinear form} The Fomin-Kirillov algebra $\mathcal E_n$ admits an action on itself defined as follows. \begin{prop} \label{prop-delta} \cite{FK} There exists a unique linear map $\Delta_{ab}\colon \mathcal E_n \to \mathcal E_n$ satisfying \[\Delta_{ab}(x_{ij}) = \begin{cases} 1, &\text{if $i=a$, $j=b$;}\\-1, & \text{if $i=b$, $j=a$;}\\0,&\text{otherwise;}\end{cases}\] and $\Delta_{ab}(PQ) = \Delta_{ab}(P)\cdot Q +\sigma_{ab}(P)\cdot \Delta_{ab}(Q)$. The operators $\Delta_{ab}$ satisfy the relations of $\mathcal E_n$, so they describe an action of $\mathcal E_n$ on itself. \end{prop} We will write $\Delta_P$ for the operator corresponding to an element $P \in \mathcal E_n$. In other words, if $P = x_{i_1j_1}x_{i_2j_2}\cdots x_{i_kj_k}$, then we let $\Delta_P = \Delta_{i_1j_1}\Delta_{i_2j_2}\cdots \Delta_{i_kj_k}$, and then we extend to all of $\mathcal E_n$ by linearity. 
Note that the operators $\Delta_P$ intertwine the automorphisms $\sigma \in \S_n$ in the following way: $\sigma(\Delta_P(Q))=\Delta_{\sigma(P)}(\sigma(Q))$. We can think of $\Delta_{ab}$ as having degree $-1$ and $\S_n$-degree $\sigma_{ab}$: if $P$ is homogeneous with respect to both degree and $\S_n$-degree, then $\Delta_{ab}P$ has degree $\deg P-1$ and $\S_n$-degree $\sigma_{ab}\sigma_P$. Similarly, there exists a dual (right) action of $\mathcal E_n$ on itself, defined as follows. The proof is essentially the same as that of Proposition~\ref{prop-delta} (and can also be deduced from Proposition~\ref{prop-pairing}). \begin{prop} \label{prop-nabla} There exists a unique linear map $\nabla_{ab}\colon \mathcal E_n \to \mathcal E_n$ (acting on the right) satisfying $(x_{ij})\nabla_{ab} =\Delta_{ab}(x_{ij}) $ and $(PQ)\nabla_{ab} = P\cdot (Q)\nabla_{ab} + (P)(\sigma_Q\nabla_{ab})\cdot Q$, where $\sigma_Q\nabla_{ab} = \nabla_{\sigma_Q(a)\sigma_Q(b)}$. The operators $\nabla_{ab}$ satisfy the relations of $\mathcal E_n$, so they describe an action of $\mathcal E_n$ on itself. \end{prop} We will similarly write $\nabla_P$ for the operator corresponding to an element $P \in \mathcal E_n$. The following lemma will be useful for performing calculations involving $\Delta_{ab}$ and $\nabla_{ab}$. It follows easily from repeated use of the Leibniz rules given in Propositions~\ref{prop-delta} and \ref{prop-nabla}. \begin{lemma} \label{lemma-leibniz} For a monomial $P= p_1 \cdots p_d \in \mathcal E_n$, write $P = P_k^L p_k P_k^R$. Then \begin{align*} \Delta_{ab}(P) &= \sum_{k=1}^d \Delta_{ab}(p_k) \cdot \sigma_{ab}(P_k^L) P_k^R, \text{ and}\\ (P)\nabla_{ab} &= \sum_{k=1}^d (p_k)(\sigma_{P_k^R}\nabla_{ab}) \cdot P_k^LP_k^R. \end{align*} \end{lemma} \begin{ex} \label{ex-delta} Here is a brief example of how to apply the $\Delta_{ij}$ and $\nabla_{ij}$ operators. 
\begin{align*} \Delta_{12}(x_{12}x_{23}x_{31}) &= \Delta_{12}(x_{12})\cdot x_{23}x_{31} + x_{21}\cdot \Delta_{12}(x_{23})\cdot x_{31} + x_{21}x_{13}\cdot \Delta_{12}(x_{31})\\ &= x_{23}x_{31},\\ (x_{12}x_{23}x_{31})\nabla_{12} &= x_{12}x_{23} \cdot (x_{31})\nabla_{12} + x_{12} \cdot (x_{23})\nabla_{32} \cdot x_{31} + (x_{12})\nabla_{23} \cdot x_{23}x_{31}\\ &= -x_{12}x_{31}. \end{align*} \end{ex} These actions are dual in the following sense. \begin{prop} \label{prop-pairing} If $P$ and $Q$ are homogeneous of the same degree, then $\Delta_P(Q) = \Delta_Q(P) = (P)\nabla_Q = (Q)\nabla_P$. This defines a symmetric bilinear form $\langle P, Q \rangle$ on $\mathcal E_n$. With respect to this form, the operators $\Delta_P$ and $\nabla_P$ are adjoint to right and left multiplication by $P$, respectively. \end{prop} \begin{proof} We induct on the degree $d$ of $P$ and $Q$. Choose monomials $P=p_1\cdots p_d$ and $Q=q_1\cdots q_d$, and write $P' = P_d^L= p_1\cdots p_{d-1}$ and $Q=Q_k^Lq_kQ_k^R$ as in Lemma~\ref{lemma-leibniz}. Then \[\Delta_P(Q) = \Delta_{P'} \Delta_{p_d}(Q) = \sum_{k=1}^d \Delta_{P'} (\sigma_{p_d}(Q_k^L)\cdot Q_k^R) \cdot \Delta_{p_d}(q_k).\] By induction, this equals \begin{equation} \label{eq-pairing} \sum_{k=1}^d\Delta_{ \sigma_{p_d}Q_k^L}\Delta_{Q_k^R}(P')\cdot \Delta_{q_k}(p_d) =\sum_{k=1}^d\Delta_{Q_k^L}(\sigma_{p_d}(\Delta_{Q_k^R}(P'))) \cdot \Delta_{q_k}(p_d). \tag{$*$} \end{equation} The $k$th term in the sum is only nonzero if $p_d = \pm q_k$, that is, if $\sigma_{p_d} = \sigma_{q_k}$. We therefore find that $\Delta_P(Q)$ equals \[\sum_{k=1}^d\Delta_{Q_k^L}(\sigma_{q_k}(\Delta_{Q_k^R}(P'))) \cdot \Delta_{q_k}(p_d) = \Delta_Q(P' \cdot p_d) = \Delta_Q(P).\] Also by induction, the left side of \eqref{eq-pairing} equals \[ \sum_{k=1}^d(P')\nabla_{ \sigma_{p_d}Q_k^L}\nabla_{Q_k^R}\cdot (p_d)\nabla_{q_k} = (P' \cdot p_d)\nabla_Q = (P)\nabla_Q. 
\] All that remains is to show the adjointness properties: \begin{align*} \langle P_1P_2, Q\rangle &= \Delta_{P_1P_2}(Q) = \Delta_{P_1}(\Delta_{P_2}(Q)) = \langle P_1, \Delta_{P_2}(Q)\rangle\\[2mm] \langle P_1P_2, Q\rangle &= (Q)\nabla_{P_1P_2} = ((Q)\nabla_{P_1})\nabla_{P_2} = \langle P_2, (Q)\nabla_{P_1} \rangle. \qedhere \end{align*} \end{proof} \begin{ex} By Example~\ref{ex-delta} and Proposition~\ref{prop-pairing}, \[\langle x_{12}x_{13}x_{12}, x_{12}x_{23}x_{31} \rangle = \langle x_{12}x_{13}, \Delta_{12}(x_{12}x_{23}x_{31}) \rangle = \langle x_{12}x_{13}, x_{23}x_{31} \rangle.\] Similarly, \[\langle x_{12}x_{13}, x_{23}x_{31}\rangle = \langle (x_{12}x_{13})\nabla_{23}, x_{31} \rangle = \langle -x_{13}, x_{31} \rangle = 1.\] \end{ex} Note that if $P$ and $Q$ are both homogeneous with respect to both the usual degree and $\S_n$-degree, then $\langle P, Q \rangle = 0$ unless $P$ and $Q$ have the same degree and $\sigma_P = \sigma_Q^{-1}$. \theoremstyle{plain} \newtheorem*{conj-nichols}{Conjecture \ref{conj-nichols}} \begin{conj} \label{conj-nichols} \cite{MS} The bilinear form $\langle\cdot,\cdot\rangle$ is nondegenerate on $\mathcal E_n$. \end{conj} This conjecture is equivalent to $\mathcal E_n$ being a special type of braided Hopf algebra called a Nichols algebra, which in this case is the quotient of $\mathcal E_n$ by the kernel of the bilinear form (see \cite{AS}). It is known that Conjecture~\ref{conj-nichols} holds for $n \leq 5$ \cite{Grana}. \subsection{Coproduct} The Fomin-Kirillov algebra has the structure of a braided Hopf algebra. This was noted in \cite{MS} and can be derived from the Hopf algebra structure of the twisted version of the algebra described in \cite{FP}. For more information about braided Hopf algebras, see \cite{AS}. We describe only the coproduct here, as we will need it later. 
The tensor product $\mathcal E_n \otimes \mathcal E_n$ has a braided product structure given by \[(P_1 \otimes Q_1)(P_2 \otimes Q_2) = (P_1 \sigma_{Q_1}(P_2)) \otimes (Q_1Q_2)\] for monomials $P_1$, $P_2$, $Q_1$, and $Q_2$. Then the coproduct $\Delta\colon \mathcal E_n \to \mathcal E_n \otimes \mathcal E_n$ is defined to be the braided homomorphism such that $\Delta(x_{ij}) = x_{ij} \otimes 1 + 1 \otimes x_{ij}$. Let $\mathcal E_n^\vee$ be the graded dual of $\mathcal E_n$, that is, the direct sum of the duals of each graded piece of $\mathcal E_n$. Then $\Delta$ defines an action of $\mathcal E_n^\vee$ on $\mathcal E_n$ as follows: if $p^\vee \in \mathcal E_n^\vee$ and $Q \in \mathcal E_n$, we let $p^\vee * Q = \sum p^\vee(Q_{(1)}^i) \cdot Q_{(2)}^i$, where $\Delta(Q) = \sum Q_{(1)}^i \otimes Q_{(2)}^i$. \begin{ex} We calculate $\Delta(x_{12}x_{23})$ to be \[(x_{12} \otimes 1+1 \otimes x_{12})(x_{23} \otimes 1 + 1 \otimes x_{23}) = x_{12}x_{23} \otimes 1 + x_{12}\otimes x_{23} + x_{13} \otimes x_{12} + 1 \otimes x_{12}x_{23}.\] Note that $(1 \otimes x_{12})(x_{23} \otimes 1) = x_{13} \otimes x_{12}$ due to the braiding. Let $\{x_{ij}^\vee\} \subset \mathcal E_n^\vee$ be the dual basis to $\{x_{ij}\} \subset \mathcal E_n^1$. Then $x_{13}^\vee* x_{12}x_{23}=x_{12}$. \end{ex} \begin{rmk} If $x_{ij}^\vee$ is an element of the dual basis as above, then $x_{ij}^\vee * Q = \operatorname{rev}((\operatorname{rev} Q)\nabla_{ij})$. \end{rmk} \subsection{Properties of subalgebras} It is important to note how our subalgebras $\mathcal E_G$ behave with respect to the operators and bilinear form described above. The following lemma follows easily from the definitions of these operators. \begin{lemma} \label{lemma-delta} \begin{enumerate}[(a)] \item If $\overline{ij} \not \in G$, then $\Delta_{ij}(\mathcal E_G)=0$. \item For any $\nabla_{ij}$ and any graph $G$, $(\mathcal E_G) \nabla_{ij} \subset \mathcal E_G$. 
\item The coproduct $\Delta$ sends any element of $\mathcal E_G$ into $\mathcal E_n \otimes \mathcal E_G$. \item The left action of $\mathcal E_n^\vee$ on $\mathcal E_n$ restricts to an action on $\mathcal E_G$. \item If $H$ is a subgraph of $G$, then $\Delta(\mathcal E_H^+\mathcal E_G) \subset \mathcal E_H^+\mathcal E_n \otimes \mathcal E_G + \mathcal E_n \otimes \mathcal E_H^+\mathcal E_G$. \end{enumerate} \end{lemma} In the language of braided Hopf algebras, Lemma~\ref{lemma-delta}(c) says that $\mathcal E_G$ is a \emph{left coideal subalgebra} of $\mathcal E_n$. Likewise, Lemma~\ref{lemma-delta}(e) implies that $\mathcal E_H^+\mathcal E_n$ is a \emph{coideal} of $\mathcal E_n$. One important consequence is the following. \begin{lemma} \label{lemma-orthogonal} Let $G_1$ be a graph on $n$ vertices and $G_2$ its complement. Then the left ideal $\mathcal E_n\mathcal E_{G_1}^+$ is orthogonal to $\mathcal E_{G_2}$ with respect to $\langle \cdot, \cdot \rangle$. \end{lemma} \begin{proof} This follows using the adjointness of right multiplication by $x_{ij}$ and the left action of $\Delta_{ij}$ for $\overline{ij} \in G_1$ together with Lemma~\ref{lemma-delta}(a). \end{proof} \subsection{Finite dimensionality} \label{sec-finitedim} In this section, we will show that if $\mathcal E_G$ is finite-dimensional, then its Hilbert series must have symmetric coefficients. (This was proven for $\mathcal E_n$ in \cite{MS}.) We begin with a definition motivated by the theory of Coxeter groups. \begin{definition} For $w\in \mathcal E_n$, the \emph{right descent set} of $w$, denoted $R(w)$, is the graph containing edge $\overline{ij}$ whenever $wx_{ij} = 0$. Similarly, define the \emph{left descent set} $L(w)$ as the graph containing $\overline{ij}$ whenever $x_{ij}w=0$. \end{definition} We now use these descent sets to prove a key lemma regarding the action of $\mathcal E_n^\vee$ on $\mathcal E_n$. 
\begin{lemma} \label{lemma-integral} For all $P \in \mathcal E_n$, $\mathcal E_n^\vee * P$ is a left $\mathcal E_{L(P)}$-module. \end{lemma} \begin{proof} Let $Q=q^\vee * P$ be an arbitrary element of $\mathcal E_n^\vee * P$. For $\overline{ij} \in L(P)$, we compute $\sigma_{ij}q^\vee * x_{ij}P$, where $\sigma_{ij}q^\vee \in \mathcal E_n^\vee$ is given by $(\sigma_{ij}q^\vee)(R) = q^\vee(\sigma_{ij}R)$ for all $R \in \mathcal E_n$. If $\Delta(P) = \sum_k P_{(1)}^k \otimes P_{(2)}^k$, then \begin{align*} \Delta(x_{ij}P) &= \Delta(x_{ij})\Delta(P)\\ &= (x_{ij}\otimes 1 + 1 \otimes x_{ij}) \cdot \sum_k P_{(1)}^k \otimes P_{(2)}^k\\ &= \sum_k x_{ij}P_{(1)}^k \otimes P_{(2)}^k + \sum_k \sigma_{ij}P_{(1)}^k \otimes x_{ij}P_{(2)}^k. \end{align*} Thus \begin{align*} \sigma_{ij}q^\vee * x_{ij}P &= \sum_k (\sigma_{ij}q^\vee)(x_{ij}P^k_{(1)})\cdot P_{(2)}^k + \sum_k (\sigma_{ij}q^\vee)(\sigma_{ij}P_{(1)}^k)\cdot x_{ij}P_{(2)}^k\\ &=-\sum_kq^\vee(x_{ij}\sigma_{ij}P^k_{(1)}) \cdot P^k_{(2)} + x_{ij}\cdot \sum_k q^\vee(P_{(1)}^k) \cdot P_{(2)}^k\\ &=-r^\vee * P + x_{ij}Q, \end{align*} where $r^\vee \in \mathcal E_n^\vee$ is given by $r^\vee(R) =q^\vee(x_{ij}\sigma_{ij}R)$. Since $\overline{ij} \in L(P)$, $x_{ij}P=0$, so $x_{ij}Q = r^\vee * P$. \end{proof} One can similarly show that $\mathcal E_n^\vee * P$ is a right $\mathcal E_{R(P)}$-module. (See Proposition~\ref{prop-monicmcr} for a similar calculation.) As a direct consequence of Lemma~\ref{lemma-integral}, we have the following result. \begin{prop} \label{prop-monic} Suppose that $G \subset L(P)$ for some $P \in \mathcal E_n$. Then $\mathcal E_G \subset \mathcal E_n^\vee * P$, and $\mathcal E_G$ is finite-dimensional. If $P \in \mathcal E_G$, then $\mathcal E_n^\vee * P = \mathcal E_G$, and $P$ spans the top degree part of $\mathcal E_G$. \end{prop} \begin{proof} We may assume that $P$ is homogeneous of degree $d$. Then the only component of $\Delta(P)$ that lies in $\mathcal E_n^d \otimes \mathcal E_n^0$ is $P \otimes 1$. 
Thus $1 \in \mathcal E_n^\vee * P$. By Lemma~\ref{lemma-integral}, we must have that $\mathcal E_G \subset \mathcal E_n^\vee * P$. But clearly every element of $\mathcal E_n^\vee * P$ has degree at most $d$, so $\mathcal E_G$ has bounded degree and is therefore finite-dimensional. If $P \in \mathcal E_G$, then by Lemma~\ref{lemma-delta}, $\mathcal E_n^\vee * P \subset \mathcal E_G$, so we must have $\mathcal E_n^\vee * P = \mathcal E_G$. But the highest degree part of $\mathcal E_n^\vee * P$ has degree $d$, and since the homogeneous part of $\Delta(P)$ in $\mathcal E_n^0 \otimes \mathcal E_n^d$ is $1 \otimes P$, it follows that $P$ spans $\mathcal E_G^d$. \end{proof} We can now prove the main theorem of this section. \begin{thm} \label{thm-finitedim} Let $G$ be a graph such that $\mathcal E_G$ is finite-dimensional. Then \begin{enumerate}[(a)] \item the top degree component of $\mathcal E_G$ is spanned by a single monomial $w_0^G$ of degree $d_0$; \item for any $Q \in \mathcal E_G$, there exists $p^\vee \in \mathcal E_n^\vee$ such that $Q = p^\vee * w_0^G$; \item for any nonzero $Q \in \mathcal E_G$, there exists a monomial $P \in \mathcal E_G$ such that $PQ = w_0^G$; \item the coefficients of the Hilbert series $\mathcal H_G(t)$ are symmetric; \item the subwords of $w_0^G$ span $\mathcal E_G$; and \item $\operatorname{rev}(w_0^G) = \pm w_0^G$. \end{enumerate} \end{thm} \begin{proof} These all follow easily from Proposition~\ref{prop-monic}: For (a), any element $w_0^G$ of top degree $d_0$ satisfies $x_{ij}w_0^G=0$ for all $\overline{ij} \in G$, so $w_0^G$ spans $\mathcal E_G^{d_0}$. For (b), this is equivalent to $\mathcal E_n^\vee * w_0^G = \mathcal E_G$. For (c), let $P \in \mathcal E_G$ be a maximum degree monomial such that $PQ$ is nonzero. Then $x_{ij}PQ=0$ for all $\overline{ij} \in G$, so $PQ$ must be a multiple of $w_0^G$. 
For (d), the bilinear form $\mathcal E_G^d \otimes \mathcal E_G^{d_0-d} \to \mathbb Q$ that sends $P \otimes Q$ to the coefficient of $w_0^G$ in $PQ$ is nondegenerate by (c), so $\mathcal E_G^d$ and $\mathcal E_G^{d_0-d}$ have the same dimension. For (e), any element of $\mathcal E_n^\vee * w_0^G = \mathcal E_G$ lies in the span of the subwords of $w_0^G$ by the definition of the $*$ action. For (f), $\operatorname{rev}(w_0^G) = cw_0^G$ for some constant $c$, and since $\operatorname{rev}$ is an involution, $c=\pm 1$. \end{proof} \begin{ex} For $G=K_{1,3}$, as in Example~\ref{ex-star3}, the lexicographically minimal choice for $w_0^G$ is $\mathsf{abacabac}$. For $G=K_{1,4}$, as in Example~\ref{ex-star4}, the lexicographically minimal choice is $w_0^G=\mathsf{abacabacdabacabacdabadcabacd}$. \end{ex} \begin{rmk} Theorem~\ref{thm-finitedim} implies that $\mathcal E_G$ is a \emph{Frobenius algebra} when finite-dimensional: its Frobenius form is the bilinear form given in the proof of part (d). \end{rmk} The results in Theorem~\ref{thm-finitedim} are analogues of known results about finite Coxeter groups. See Section~\ref{sec-coxeter} for more discussion of this relationship. \section{Tensor product decomposition} \subsection{Subgraphs} We begin this section by describing the relationship between $\mathcal E_G$ and $\mathcal E_H$ when $H$ is a subgraph of $G$. \begin{thm} \label{thm-subgraph} Let $H$ be a subgraph of $G$. Then $\mathcal E_G$ is a free (left or right) $\mathcal E_H$-module. Specifically, $\mathcal E_G \cong \mathcal E_H \otimes (\mathcal E_G/\mathcal E_H^+\mathcal E_G)$ as left $\mathcal E_H$-modules and $\mathcal E_G \cong (\mathcal E_G/\mathcal E_G\mathcal E_H^+) \otimes \mathcal E_H$ as right $\mathcal E_H$-modules. \end{thm} \begin{proof} We prove just that $\mathcal E_G$ is a free left $\mathcal E_H$-module; the other result follows by passing to the opposite algebra.
Let $I=\mathcal E_H^+\mathcal E_G$, and let the projection map be $\pi\colon \mathcal E_G\to \mathcal E_G/I$. We first claim that if $f\colon \mathcal E_G/I \to \mathcal E_G$ is any degree-preserving ($\mathbb Q$-linear) section and $\mu$ is the multiplication map, then $\varphi = \mu \circ (\mathrm{id} \otimes f) \colon \mathcal E_H \otimes (\mathcal E_G/I) \to \mathcal E_G$ is surjective. We will prove by induction that $\mathcal E_G^d$ lies in the image of $\varphi$. Note that the image of $\varphi$ is clearly closed under left multiplication by $\mathcal E_H$. Then if $\mathcal E_G^d$ lies in the image, so does the degree $d+1$ part of $I$. Since any element of $\mathcal E_G^{d+1}$ differs from an element in the image of $f$ by an element of $I$ of degree $d+1$, it follows that $\mathcal E_G^{d+1}$ also lies in the image, completing the induction. Next we show that $\varphi$ is injective. Choose bases $\{h_i\}$ of $\mathcal E_H$ and $\{\bar{g}_j\}$ of $\mathcal E_G/I$, and suppose that $\varphi(\sum_{i,j} c_{ij} h_i \otimes \bar{g}_j)= \sum_{i,j} c_{ij} h_ig_j= 0$ for some constants $c_{ij}$ not all zero, where $g_j = f(\bar{g}_j)$. By restricting to the degree $d$ part, we may assume that $\deg h_i + \deg \bar{g}_j = d$ for all $i$ and $j$. Find $i'$ such that some $c_{i'j}$ is nonzero with $h_{i'}$ of minimum degree $d'$. Let $\{h_{i}^\vee \mid \deg h_i \leq d\} \subset \mathcal E_H^\vee$ be the dual basis to $\{h_i \mid \deg h_i \leq d\} \subset \mathcal E_H$, and extend each $h_i^\vee$ to an element of $\mathcal E_n^\vee$ arbitrarily. We claim that \[0 = \pi(h_{i'}^\vee * \textstyle\sum_{i,j} c_{ij}h_ig_j) = \textstyle\sum_j c_{i'j}\bar{g}_j.\] This will be a contradiction since the $\bar{g}_j$ are linearly independent. To see why the claim is true, consider any term $c_{ij}h_ig_j$ with $c_{ij}$ nonzero. Then $\Delta(h_ig_j) = \Delta(h_i) \cdot \Delta(g_j)$. 
Since $\mathcal E_n \otimes I$ is a right ideal of $\mathcal E_n \otimes \mathcal E_G$ and $\Delta(h_i)$ is congruent to $h_i \otimes 1$ modulo $\mathcal E_n \otimes I$, we find that \[\pi(h_{i'}^\vee * h_ig_j) = (h_{i'}^\vee \otimes \pi)(\Delta(h_ig_j))= (h_{i'}^\vee \otimes \pi)((h_i \otimes 1) \cdot \Delta(g_j)).\] Since $\deg h_i \geq d'$, $(h_i \otimes 1) \cdot \Delta(g_j)$ can only have a nonzero component in $\mathcal E_n^{d'} \otimes \mathcal E_G^{d-d'}$ when $\deg h_i = d'$, in which case this component is $(h_i \otimes 1)(1 \otimes g_j) = h_i \otimes g_j$. Applying $h_{i'}^\vee \otimes \pi$ then gives 0 unless $i = i'$, in which case it gives $\bar{g}_j$, as desired. This completes the proof. \end{proof} \begin{cor} Let $H$ be a subgraph of $G$. Then $\mathcal H_H(t)$ divides $\mathcal H_G(t)$, and their quotient has positive coefficients. \end{cor} \begin{proof} The quotient is the Hilbert series of $\mathcal E_G/\mathcal E_G\mathcal E_H^+$. \end{proof} \begin{rmk} As noted in Lemma~\ref{lemma-delta}(c), $\mathcal E_G$ is a left coideal subalgebra of the braided Hopf algebra $\mathcal E_n$. Compare Theorem~\ref{thm-subgraph} to the results of \cite{Masuoka}, which gives several conditions that imply that a Hopf algebra is a free module over a (left) coideal subalgebra. \end{rmk} In light of Theorem~\ref{thm-subgraph}, we make the following definition. \begin{defn} \label{defn-mcr} A subset $M \subset \mathcal E_G$ is a set of left (resp. right) \emph{minimal coset representatives} for $\mathcal E_H$ if it is a basis of $\mathcal E_G$ as a right (resp. left) $\mathcal E_H$-module. \end{defn} Equivalently, by Theorem~\ref{thm-subgraph}, the projections of left (resp. right) minimal coset representatives give a basis of $\mathcal E_G/\mathcal E_G\mathcal E_H^+$ (resp. $\mathcal E_G/\mathcal E_H^+\mathcal E_G$). 
For this reason, we will sometimes abuse terminology and consider left minimal coset representatives to be elements of $\mathcal E_G/\mathcal E_G\mathcal E_H^+$. \begin{ex} Let $G=A_3$ be the path with two edges as in Example~\ref{ex-a3}, and let $H$ be the subgraph containing only edge $\mathsf{a}$. Then since $\mathcal E_H$ has basis $\{\mathrm{id}, \mathsf{a}\}$ and $\mathcal E_G$ has basis $\{\mathrm{id}, \mathsf{a}, \mathsf{b}, \mathsf{ab}, \mathsf{ba}, \mathsf{aba}\}$, a set of left minimal coset representatives is $\{\mathrm{id}, \mathsf{b}, \mathsf{ab}\}$. \end{ex} We use the term ``minimal coset representatives'' by analogy to the case of Coxeter groups and their corresponding nil-Coxeter algebras (see Section~\ref{sec-coxeter} for more details). We will typically take the elements of $M$ to be represented by monomials. Using Theorem~\ref{thm-subgraph}, we can prove the following lemma, which will be useful in the next section. \begin{lemma} \label{lemma-intersect} Let $H$ be a subgraph of $G$. Then $\mathcal E_H^+\mathcal E_n \cap \mathcal E_G = \mathcal E_H^+\mathcal E_G$. \end{lemma} \begin{proof} Let $M$ and $N$ be sets of right minimal coset representatives for $\mathcal E_H$ in $\mathcal E_G$ and for $\mathcal E_G$ in $\mathcal E_n$, respectively. Then any element $x \in \mathcal E_n$ can be written uniquely in the form $x=\sum h_{m,n}mn$, where $h_{m,n} \in \mathcal E_H$, $m \in M$, and $n \in N$. Thus $\{mn \mid m \in M, n \in N\}$ is a set of right minimal coset representatives for $\mathcal E_H$ in $\mathcal E_n$. If $x \in \mathcal E_H^+\mathcal E_n$, then each $h_{m,n}$ has positive degree, while if $x \in \mathcal E_G$, then $h_{m,n}=0$ unless $n$ is a constant. Thus if both hold, then we can write $x=\sum h_{m,n}m$ with $h_{m,n} \in \mathcal E_H^+$, so $x \in \mathcal E_H^+\mathcal E_G$. 
\end{proof} \subsection{Finite rank} In the event that $\mathcal E_G$ has finite rank as an $\mathcal E_H$-module, we can show that there is essentially a unique minimal coset representative of maximum degree. (If $\mathcal E_G$ itself is finite-dimensional, then this follows from Theorem~\ref{thm-finitedim}.) The general result will follow from the following proposition, akin to Proposition~\ref{prop-monic}. \begin{prop} \label{prop-monicmcr} Let $H$ be a subgraph of $G$. Suppose $P \in \mathcal E_G$ such that $P \not\in \mathcal E_H^+\mathcal E_G$ but $Px_{ij} \in \mathcal E_H^+\mathcal E_G$ for all $x_{ij} \in \mathcal E_G$. Then $P$ spans the top degree part of $\mathcal E_G/\mathcal E_H^+\mathcal E_G$. \end{prop} \begin{proof} Let $(\mathcal E_n^\vee)^H$ be the set of all $q^\vee \in \mathcal E_n^\vee$ such that $q^\vee(\mathcal E_H^+\mathcal E_n) = 0$. By Lemma~\ref{lemma-delta}(e), $q^\vee * \mathcal E_H^+\mathcal E_G \subset \mathcal E_H^+\mathcal E_G$, so $(\mathcal E_n^\vee)^H$ gives a left action $*$ on $\mathcal E_G/\mathcal E_H^+\mathcal E_G$. We may assume that $P$ is homogeneous of degree $d$. We claim that $(\mathcal E_n^\vee)^H * P$ spans $\mathcal E_G/\mathcal E_H^+\mathcal E_G$. First, since $P\not\in \mathcal E_H^+\mathcal E_G$, by Lemma~\ref{lemma-intersect}, $P \not\in \mathcal E_H^+\mathcal E_n$, so there exists a homogeneous element $q^\vee \in (\mathcal E_n^\vee)^H$ such that $q^\vee(P)=1$. Then since the component of $\Delta(P)$ in $\mathcal E_n^d \otimes \mathcal E_G^0$ is $P \otimes 1$, $q^\vee* P=1$, so $1 \in (\mathcal E_n^\vee)^H * P$. Then the claim will follow if we can show that the span of $(\mathcal E_n^\vee)^H * P$ in $\mathcal E_G/\mathcal E_H^+\mathcal E_G$ is a right $\mathcal E_G$-module. Let $Q = q^\vee * P \in (\mathcal E_n^\vee)^H * P$, and let $x_{ij}\in \mathcal E_G$. 
Write $\Delta(P) = \sum_k P^k_{(1)} \otimes P^k_{(2)}$, so that \[\Delta(Px_{ij}) = \sum_k P^k_{(1)}\sigma_{P^k_{(2)}}(x_{ij}) \otimes P^k_{(2)} + \sum_k P^k_{(1)} \otimes P^k_{(2)}x_{ij}.\] Then \begin{align*} q^\vee * (Px_{ij}) &= \sum_k q^\vee(P^k_{(1)}\sigma_{P^k_{(2)}}(x_{ij})) \cdot P^k_{(2)} + \sum_k q^\vee(P^k_{(1)}) \cdot P^k_{(2)}x_{ij}\\ &= -r^\vee * P + Qx_{ij}, \end{align*} where $r^\vee \in \mathcal E_n^\vee$ is defined (for $\S_n$-homogeneous $R$) by $r^\vee(R) = -q^\vee(R \sigma_R^{-1}\sigma_P(x_{ij}))$. If $R$ lies in $\mathcal E_H^+\mathcal E_n$, then so does $R \sigma_R^{-1}\sigma_P(x_{ij})$. Hence $q^\vee \in (\mathcal E_n^\vee)^H$ implies $r^\vee \in (\mathcal E_n^\vee)^H$. Then since $Px_{ij} \in \mathcal E_H^+\mathcal E_G$, it follows that $r^\vee * P$ and $Qx_{ij}$ are congruent modulo $\mathcal E_H^+\mathcal E_G$. Thus $(\mathcal E_n^\vee)^H * P$ spans $\mathcal E_G/\mathcal E_H^+\mathcal E_G$. Since the component of $\Delta(P)$ in $\mathcal E_n^0 \otimes \mathcal E_G^d$ is $1 \otimes P$, the top degree part of $(\mathcal E_n^\vee)^H * P$ is spanned by $P$, which gives the result. \end{proof} As an easy consequence, we get the following theorem analogous to Theorem~\ref{thm-finitedim}. \begin{thm} Let $H$ be a subgraph of $G$ such that $\mathcal E_G$ has finite rank as an $\mathcal E_H$-module, and let $M$ be a set of right minimal coset representatives. Then \begin{enumerate}[(a)] \item $M$ has a unique element $m_0$ of top degree; \item for any $m \in M$, $m = q^\vee * m_0$ in $\mathcal E_G/\mathcal E_H^+\mathcal E_G$ for some $q^\vee \in \mathcal E_n^\vee$ with $q^\vee(\mathcal E_H^+\mathcal E_n) = 0$; and \item for any $m \in M$, there exists $g \in \mathcal E_G$ such that $m_0 = mg$ in $\mathcal E_G/\mathcal E_H^+\mathcal E_G$. 
\end{enumerate} \end{thm} \begin{proof} Any $m \in M$ of maximum degree (which exists since $\mathcal E_G$ has finite rank) satisfies $mx_{ij} \in \mathcal E_H^+\mathcal E_G$ for all $x_{ij} \in \mathcal E_G$, so by Proposition~\ref{prop-monicmcr}, it spans the top degree part of $\mathcal E_G/\mathcal E_H^+\mathcal E_G$ and is therefore unique. Parts (b) and (c) then follow as in the proof of Theorem~\ref{thm-finitedim}(b) and (c). \end{proof} \subsection{Complementary graphs} \label{sec-complementary} In some cases, the tensor product decomposition described in Theorem~\ref{thm-subgraph} is particularly simple. \begin{thm} \label{thm-tensor} Let $G$ be a graph, and let $G_1$ and $G_2$ be complementary subgraphs of $G$ such that any two vertices in the same connected component of $G_2$ have the same neighbors in $G_1$. Then the multiplication map $\mu\colon \mathcal E_{G_1} \otimes \mathcal E_{G_2} \to \mathcal E_G$ is an isomorphism of $\mathcal E_{G_1}$-$\mathcal E_{G_2}$-bimodules. In particular, $\mathcal H_{G_1}(t)\cdot \mathcal H_{G_2}(t) = \mathcal H_G(t)$. \end{thm} As a special case of this theorem, we have the following corollary. \begin{cor} \label{cor-tensor} Let $G_1$ be a complete multipartite graph on $n$ vertices, and let $G_2$ be its complement, a disjoint union of complete graphs. Then $\mathcal E_n \cong \mathcal E_{G_1} \otimes \mathcal E_{G_2}$. \end{cor} The case when $G_1 = K_{1, n-1}$ and $G_2 = K_{n-1}$ was proven in \cite{FP, MS}. \begin{proof}[Proof of Theorem~\ref{thm-tensor}] By our choice of $G_1$ and $G_2$, if $\overline{ij} \in G_1$ and $\overline{jk} \in G_2$, then $\overline{ik} \in G_1$. We first show that $\mu$ is surjective. By Theorem~\ref{thm-subgraph}, it suffices to show that the map $\mathcal E_{G_1} \to \mathcal E_G/\mathcal E_G\mathcal E_{G_2}^+$ is surjective. Choose a monomial $p_1\cdots p_d \in \mathcal E_G^d$. We show by induction on $d$ that it lies in the image of this map.
Since the image is closed under left multiplication by $\mathcal E_{G_1}$, we are done if $p_1 \in \mathcal E_{G_1}$. So suppose $p_1 \in \mathcal E_{G_2}$. By induction, we may assume that $p_2 \in \mathcal E_{G_1}$. If $p_1$ commutes with $p_2$, then $p_1p_2 \cdots p_d = p_2 p_1 \cdots p_d$, and then we are again done because $p_2 \in \mathcal E_{G_1}$. Otherwise, if $p_1 = x_{jk}$ and $p_2=x_{ij}$, then rewrite $x_{jk}x_{ij} = x_{ij}x_{ik} + x_{ik}x_{jk}$. Since $x_{ij}, x_{ik} \in \mathcal E_{G_1}$, we are again done by induction. To show that $\mu$ is injective, by Theorem~\ref{thm-subgraph}, it suffices to show that the map $\mathcal E_{G_2} \to \mathcal E_G/\mathcal E_{G_1}^+\mathcal E_G$ is injective, that is, that $\mathcal E_{G_2}$ intersects $\mathcal E_{G_1}^+\mathcal E_G$ trivially. But this holds because the $\Pi$-degree of any $\Pi$-homogeneous element of $\mathcal E_{G_2}$ has each part contained in a connected component of $G_2$, while this is not the case for any (nonzero) element of $\mathcal E_{G_1}^+\mathcal E_G$. \end{proof} In fact, it appears that the class of complementary graphs $G_1$ and $G_2$ for which $\mathcal E_G \cong \mathcal E_{G_1} \otimes \mathcal E_{G_2}$ is much more general than Corollary~\ref{cor-tensor} implies. Using Theorem~\ref{thm-tensor} and computations for small graphs, we can prove the following partial result on when such a tensor product decomposition holds. \begin{cor} \label{cor-complementary} Let $G_1$ be a graph on $n$ vertices and $G_2$ its complement. The class of graphs $G_1$ for which $\mathcal E_n \cong \mathcal E_{G_1} \otimes \mathcal E_{G_2}$ holds (as in Theorem~\ref{thm-tensor}) contains all graphs with at most five vertices and is closed under disjoint unions and complementation. \end{cor} \begin{proof} We checked the claim for graphs with at most five vertices using \texttt{bergman}.
Closure under complementation follows since $\mathcal E_n$ and all $\mathcal E_G$ are isomorphic to their opposite algebras. For disjoint unions, suppose that the tensor product decomposition exists for graphs $G_1 \cup G_2 = K_m$ and $H_1 \cup H_2 = K_n$. Then the complement of $G_1+H_1$ in $K_{m+n}$ is $L=(G_2+H_2) \cup K_{m,n}$. By Theorem~\ref{thm-tensor}, \[ \mathcal E_{m+n} \cong \mathcal E_{K_m+K_n} \otimes \mathcal E_{K_{m,n}} \cong \mathcal E_{G_1+H_1} \otimes \mathcal E_{G_2+H_2}\otimes \mathcal E_{K_{m,n}} \cong \mathcal E_{G_1+H_1} \otimes \mathcal E_L.\qedhere\] \end{proof} However, the tensor product decomposition does not hold in general. \begin{figure} \begin{tabular}{ccc} \begin{tikzpicture}[scale=0.7] \node[v] (1) at (-0.5,0){}; \node[v] (2) at (1,0){}; \node[v] (3) at (2,1){}; \node[v] (4) at (2,-1){}; \node[v] (5) at (3, 0){}; \node[v] (6) at (4.5, 0){}; \draw(1) node[above]{1} -- (2)node[above]{2} -- (3)node[above]{3} -- (5)node[above]{5} -- (6)node[above]{6} (2)--(4)node[below]{4}--(5) (2)--(5); \end{tikzpicture} &\quad& \begin{tikzpicture}[scale=0.7] \node[v] (1) at (-0.5,0){}; \node[v] (2) at (1,0){}; \node[v] (3) at (2,1){}; \node[v] (4) at (2,-1){}; \node[v] (5) at (3, 0){}; \node[v] (6) at (4.5, 0){}; \draw(1) node[above]{2} -- (2)node[above]{6} -- (3)node[above]{3} -- (5)node[above]{1} -- (6)node[above]{5} (2)--(4)node[below]{4}--(5) (3)--(4) (2)--(5); \end{tikzpicture}\\ \begin{tikzpicture}[scale=0.7] \node[v] (1) at (0,0){}; \node[v] (2) at (18:1.5){}; \node[v] (3) at (90:1.5){}; \node[v] (4) at (162:1.5){}; \node[v] (5) at (234:1.5){}; \node[v] (6) at (306:1.5){}; \draw (2) node[right]{2}--(3)node[above]{1}--(4)node[left]{5}--(5)node[below]{4}--(6)node[below]{3}--(2)--(1)node[below]{6}--(4); \end{tikzpicture} &\quad& \begin{tikzpicture}[scale=0.7] \node[v] (1) at (0,0){}; \node[v] (2) at (18:1.5){}; \node[v] (3) at (90:1.5){}; \node[v] (4) at (162:1.5){}; \node[v] (5) at (234:1.5){}; \node[v] (6) at (306:1.5){}; \draw (2) 
node[right]{3}--(3)node[above]{1}--(4)node[left]{4}--(5)node[below]{2}--(6)node[below]{5}--(2)--(1)node[below]{6}--(4) (1)--(3); \end{tikzpicture} \end{tabular} \caption{\label{fig-counter} Either of the graphs on the left can be used for $G_1$ in Proposition~\ref{prop-counter}. On the right are their complements.} \end{figure} \begin{prop} \label{prop-counter} Let $G_1$ be either of the two graphs on the left of Figure~\ref{fig-counter} and $G_2$ its complement (shown on the right). Then $\mathcal E_6 \not \cong \mathcal E_{G_1} \otimes \mathcal E_{G_2}$. \end{prop} \begin{proof} Note that $G_1$ has the property that its complement $G_2$ is isomorphic to $G_1$ but with an extra edge connecting two twin vertices. Hence by Theorem~\ref{thm-tensor}, $\mathcal H_{G_2}(t) = \mathcal H_{G_1}(t) \cdot (1+t)$. Then if $\mathcal E_6 \cong \mathcal E_{G_1} \otimes \mathcal E_{G_2}$, we would be able to solve for $\mathcal H_{G_1}(t)$ using the first few terms of the Hilbert series for $\mathcal E_6$ as given in Section~\ref{sec-hilbert} to find that \begin{align*} \mathcal H_{G_1}(t) &= \left(\frac{\mathcal H_6(t)}{1+t}\right)^{1/2}\\ &= \left(\frac{1 + 15t + 125t^2 + 765t^3 + 3831t^4 + 16605t^5 + 64432t^6 + 228855t^7+\cdots}{1+t}\right)^{1/2}\\ &= \left(1+14t+111t^2+654t^3+3177t^4+13428t^5+51004t^6+177851t^7+\cdots\right)^{1/2}\\ &= 1+7t+31t^2+110t^3+338t^4+938t^5+2408t^6+\tfrac{11623}{2}t^7+\cdots, \end{align*} which is impossible since the coefficient of $t^7$ is not an integer. \end{proof} It would be interesting to try to classify for which graphs and their complements a tensor product decomposition as in Corollary~\ref{cor-tensor} holds. \subsection{Computation} We include a brief discussion of a method for computing minimal coset representatives. This method of computation was used to obtain some of the results in the next section as well as the conjectures we will present later. Let $G$ be a graph and $e$ an edge not in $G$. Denote $G \cup \{e\}$ by $G'$.
Suppose that we wish to compute a set of minimal coset representatives for $\mathcal E_{G}$ in $\mathcal E_{G'}$, that is, a basis for $\mathcal E_{G'}/\mathcal E_{G'}\mathcal E_{G}^+$. Unfortunately, without prior knowledge of the relations of $\mathcal E_{G'}$ (which would naively require an expensive noncommutative Gr\"obner basis calculation for $\mathcal E_n$), one cannot easily determine whether an element of $\mathcal E_{G'}$ lies in $\mathcal E_{G'}\mathcal E_{G}^+$. We can, however, give a simple sufficient condition for an element of $\mathcal E_{G'}$ not to lie in $\mathcal E_{G'}\mathcal E_{G}^+$. Let $H$ be the complement of $G'$ and $H' = H \cup \{e\}$ the complement of $G$. By Lemma~\ref{lemma-orthogonal}, every element of $\mathcal E_{G'}\mathcal E_{G}^+$ is orthogonal to $\mathcal E_{H'}$. Hence, any element of $\mathcal E_{G'}$ that pairs nontrivially with some element of $\mathcal E_{H'}$ does not lie in $\mathcal E_{G'}\mathcal E_{G}^+$. But we need not even pair with all elements of $\mathcal E_{H'}$: any element of $\mathcal E_{H'}\mathcal E_H^+$ is orthogonal to every element of $\mathcal E_{G'}$. Hence we need only pair with elements that are linearly independent in $\mathcal E_{H'}/\mathcal E_{H'}\mathcal E_H^+$. This suggests the following algorithm to calculate linearly independent sets of minimal coset representatives for $\mathcal E_G$ in $\mathcal E_{G'}$ and for $\mathcal E_H$ in $\mathcal E_{H'}$ simultaneously: \begin{alg}\label{alg-mcr} Let $G$ and $H$ be graphs and $e$ an edge such that $G \sqcup H \sqcup \{e\}=K_n$. Let $G' = G \cup \{e\}$ and $H' = H \cup \{e\}$, and set $M^0 = N^0 = \{\mathrm{id}\}$. For $d \geq 0$: \begin{itemize} \item Construct a matrix with rows indexed by $x_{ij}p$ for $x_{ij} \in \mathcal E_{G'}$ and $p \in M^d$, columns indexed by $x_{kl}q$ for $x_{kl} \in \mathcal E_{H'}$ and $q \in N^d$, and entries $\langle x_{ij}p, x_{kl}q \rangle$.
\item Set $M^{d+1}$ to be the indices of a maximal set of linearly independent rows and likewise $N^{d+1}$ for columns. \end{itemize} Then $M^d$ and $N^d$ are subsets of degree $d$ minimal coset representatives for $\mathcal E_{G}$ inside $\mathcal E_{G'}$ and $\mathcal E_H$ inside $\mathcal E_{H'}$. (In other words, $M^d$ is linearly independent modulo $\mathcal E_{G'}\mathcal E_G^+$, and likewise $N^d$ modulo $\mathcal E_{H'}\mathcal E_H^+$.) \end{alg} \begin{rmk} Algorithm~\ref{alg-mcr} can be used to give a lower bound on $\mathcal H_{G'}(t)/\mathcal H_G(t)$, but this will not in general be exact. However, it is usually quite accurate for small graphs. For instance, one can show that if Conjecture~\ref{conj-nichols} holds and $\mathcal E_{H'} \otimes \mathcal E_G \cong \mathcal E_n$ (as in Theorem~\ref{thm-tensor}), then Algorithm~\ref{alg-mcr} will give a complete set of minimal coset representatives for $\mathcal E_G$ in $\mathcal E_{G'}$. In particular, the algorithm is exact for all graphs on at most five vertices. \end{rmk} \section{Coxeter groups and nil-Coxeter algebras} \label{sec-coxeter} In this section, we will describe various ways in which the subalgebras $\mathcal E_G$ share similar properties to Coxeter groups, or more specifically, to their nil-Coxeter algebras. \subsection{Definitions} We first recall the definition of (simply-laced) Coxeter groups and nil-Coxeter algebras as well as some of their basic properties. For further details, see \cite{hump} or \cite{BjornerBrenti}. Let $D$ be a graph, which we will refer to in this context as a \emph{simply-laced Dynkin diagram}. 
\begin{defn} Given a simply-laced Dynkin diagram $D$, the \emph{Coxeter group} $(W, S)$ of $D$ is the group generated by $S$, the set of \emph{simple reflections} $s_i$ for each vertex $i$ of $D$, and relations \begin{itemize} \item $s_i^2 = 1$ for any vertex $i$ of $D$; \item $s_is_j = s_js_i$ for nonadjacent vertices $i,j$ of $D$; \item $s_is_js_i = s_js_is_j$ for adjacent vertices $i,j$ of $D$ (called a \emph{braid relation}). \end{itemize} \end{defn} \begin{defn} Given an element $w \in W$, a \emph{reduced word} (or \emph{reduced expression}, or \emph{reduced decomposition}) $s_{i_1}s_{i_2}\cdots s_{i_\ell}$ for $w$ is a minimum length expression of $w$ as a product of generators $s_i$. The \emph{length} of $w$, denoted $\ell(w)$, is the length of any reduced word for $w$. We say that $w = u \cdot v$ is a \emph{reduced factorization} if $\ell(w) = \ell(u)+\ell(v)$. \end{defn} Given a Coxeter group, one can define the corresponding nil-Coxeter algebra as follows. \begin{definition} Given a simply-laced Dynkin diagram $D$, the \emph{nil-Coxeter algebra $\mathcal N$} is the associative algebra with a generator $t_i$ for each vertex $i$ of $D$ and relations \begin{itemize} \item $t_i^2 = 0$ for any vertex $i$ of $D$; \item $t_{i}t_{j} = t_j t_i$ for nonadjacent vertices $i,j$ of $D$; \item $t_{i}t_{j}t_{i} = t_{j}t_{i}t_{j}$ for adjacent vertices $i,j$ of $D$. \end{itemize} \end{definition} Note the similarity of this definition to that of $\mathcal E_n$. Given an element $w \in W$, we can define an element $t_w \in \mathcal N$ by choosing any reduced word $w = s_{i_1}s_{i_2} \cdots s_{i_\ell}$ and letting $t_w = t_{i_1}t_{i_2} \cdots t_{i_\ell}$. The element $t_w$ does not depend on the choice of reduced word. 
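In type $A$, the classical divided difference operators $\partial_i f = (f - s_i f)/(x_i - x_{i+1})$ on polynomials satisfy exactly these relations; this underlies the divided difference representation mentioned later. The following sketch (ours, not part of the source; the helper \texttt{d} is purely illustrative) verifies all three defining relations with \texttt{sympy}:

```python
import sympy as sp

x = sp.symbols('x1:5')  # variables x1, x2, x3, x4

def d(i, f):
    """Divided difference operator: d_i(f) = (f - s_i(f)) / (x_i - x_{i+1}),
    where s_i swaps x_i and x_{i+1}."""
    swapped = f.subs({x[i - 1]: x[i], x[i]: x[i - 1]}, simultaneous=True)
    return sp.cancel((f - swapped) / (x[i - 1] - x[i]))

f = (x[0] + 2 * x[1]) ** 2 * x[2] + x[3] ** 3  # an arbitrary test polynomial

# t_i^2 = 0:
assert sp.expand(d(1, d(1, f))) == 0
# t_i t_j = t_j t_i for nonadjacent vertices (|i - j| > 1):
assert sp.expand(d(1, d(3, f)) - d(3, d(1, f))) == 0
# t_i t_j t_i = t_j t_i t_j for adjacent vertices:
assert sp.expand(d(1, d(2, d(1, f))) - d(2, d(1, d(2, f)))) == 0
```

Since these relations are operator identities, the checks succeed for any choice of test polynomial.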
\begin{prop} The nil-Coxeter algebra $\mathcal N$ has basis $\{t_w \mid w \in W\}$ with multiplication given by \[t_ut_v = \begin{cases} t_{uv},&\text{if $\ell(u)+\ell(v) = \ell(uv)$;}\\ 0,& \text{otherwise.}\end{cases}\] \end{prop} For this reason, we will sometimes abuse notation and use the same letters to refer to both elements of $W$ and elements of $\mathcal N$. \subsection{Line graphs} We briefly describe how to relate the subalgebra $\mathcal E_G$ to a certain (twisted) nil-Coxeter algebra. \begin{definition} Given a graph $G$, let $L(G)$ denote the line graph of $G$, which we will think of as a simply-laced Dynkin diagram. Given a directed graph $G'$, the \emph{twisted nil-Coxeter algebra} of $L(G')$ is the associative algebra with a generator for each edge of $G'$ and relations \begin{itemize} \item $\mathsf{e} = \mathsf{f}$ for edges $\mathsf{e},\mathsf{f}$ of $G'$ with the same ends and direction; \item $\mathsf{e} = -\mathsf{f}$ for any directed 2-cycle $\mathsf{e},\mathsf{f}$ in $G'$; \item $\mathsf{e}^2 = 0$ for any edge $\mathsf{e}$ of $G'$; \item $\mathsf{ef} = \mathsf{fe}$ for edges $\mathsf{e}, \mathsf{f}$ of $G'$ that do not share an end; \item $\mathsf{efe} = \mathsf{fef}$ for edges $\mathsf{e}, \mathsf{f}$ of $G'$ that form a directed path; \item $\mathsf{efe} = -\mathsf{fef}$ for edges $\mathsf{e}, \mathsf{f}$ of $G'$ that share one end but do not form a directed path. \end{itemize} We refer to these last two relations as \emph{positive} and \emph{negative braid relations}, respectively. \end{definition} Note that we have abused notation slightly since the algebra depends on $G'$ and not just the line graph $L(G')$. The nonzero monomials of the twisted nil-Coxeter algebra of $L(G')$ are in bijection with the nonzero monomials of the nil-Coxeter algebra of $L(G)$, where $G$ is the underlying simple undirected graph of $G'$. 
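The fact that the line graph of a cycle is again a cycle, which is what makes the construction tractable in the affine case treated below, is easy to check computationally. A small sketch (the helper \texttt{line\_graph} is ours, purely illustrative):

```python
from collections import Counter
from itertools import combinations

def line_graph(edges):
    """Vertices of L(G) are the edges of G; two are adjacent in L(G)
    exactly when the corresponding edges of G share an endpoint."""
    e = [frozenset(edge) for edge in edges]
    return [(i, j) for i, j in combinations(range(len(e)), 2) if e[i] & e[j]]

# The line graph of the n-cycle is again an n-cycle: it has n vertices,
# n edges, and every vertex has degree exactly 2.
n = 6
cycle = [(i, (i + 1) % n) for i in range(n)]
lg = line_graph(cycle)
deg = Counter(v for edge in lg for v in edge)
assert len(lg) == n and all(deg[v] == 2 for v in range(n))
```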
Since the generators of $\mathcal E_G$ satisfy the same versions of the braid relation satisfied by the generators of the twisted nil-Coxeter algebra, we have the following proposition. \begin{proposition}\label{p Coxeter to FK} For an undirected graph $G$, let $G'$ be any directed graph whose underlying simple undirected graph is $G$. The algebra $\mathcal E_G$ is a quotient of the twisted nil-Coxeter algebra of $L(G')$. Moreover, if the edges of $G$ can be directed so that the indegree and outdegree at each vertex are at most one, then $\mathcal E_G$ is a quotient of the nil-Coxeter algebra of $L(G)$. \end{proposition} It is important to note that the twisted nil-Coxeter algebra of $L(G')$ is almost always infinite-dimensional since, among the Dynkin diagrams of finite type, only those of type $A_n$ are line graphs. Nevertheless, we will use Proposition~\ref{p Coxeter to FK} in our discussion of $\mathcal E_{\tilde{A}_{n-1}}$, utilizing the fact that the line graph of a cycle is again a cycle. \subsection{Analogy to Fomin-Kirillov algebras} Many of the results proved about $\mathcal E_G$ in the previous sections are analogues of the following facts about Coxeter groups and nil-Coxeter algebras. Throughout this section, let $D$ be a simply-laced Dynkin diagram, $(W, S)$ its Coxeter group, and $\mathcal N$ its nil-Coxeter algebra. There is a notion of \emph{Bruhat order} in $W$ defined as follows. Let $T = \{wsw^{-1} \mid w \in W, s \in S\}$ be the set of \emph{reflections} of $W$. The \emph{Bruhat order $<$} of $W$ is the transitive closure of the relations $tw < w$ for $t\in T$ such that $\ell(tw) < \ell(w)$. The \emph{left weak order $<_L$} of $W$ is the transitive closure of the relations $sw <_L w$ for $s\in S$ such that $\ell(sw) < \ell(w)$. 
An equivalent way of formulating Bruhat order is that $v \leq w$ if some reduced word for $w$ contains a substring that gives a reduced word for $v$; in fact, if this is true for some reduced word for $w$, then it is true for any reduced word for $w$. We can also formulate left weak order similarly: $v \leq_L w$ if there exists a reduced word for $w$ that has a final substring that is a reduced word for $v$ (but this depends on the reduced word for $w$). The analogue of Bruhat order in the Fomin-Kirillov algebra setting is, for $P, Q \in \mathcal E_G$: \[ Q \leq P \text{\quad if } Q \in \mathcal E_n^\vee * P. \] From the definition of the action of $\mathcal E_n^\vee$, it follows that if $Q \leq P$, then $Q$ lies in the span of the subwords of $P$. \begin{rmk} If Conjecture~\ref{conj-nichols} holds, then $Q \leq P$ if and only if $Q=(P)\nabla_x$ for some $x \in \mathcal E_n$. \end{rmk} The analogue of left weak order in the Fomin-Kirillov algebra setting is, for $P, Q \in \mathcal E_G$: \[ Q \leq_L P \text{\quad if there exists } x \in \mathcal E_G \text{ such that } xQ=P. \] Theorem~\ref{thm-finitedim} is then the analogue of the following facts about $W$ and $\mathcal N$: if $W$ is finite, then: \begin{enumerate}[(a)] \item there is a unique element $w_0 \in W$ of maximum length (so the top degree part of $\mathcal N$ is one-dimensional); \item $w\leq w_0$ for all $w\in W$; \item $w \leq_L w_0$ for all $w\in W$; \item the Poincar\'e series of $W$ (and hence the Hilbert series of $\mathcal N$) is symmetric; \item every element of $W$ is a subword of any reduced expression for $w_0$; and \item $w_0 = w_0^{-1}$. \end{enumerate} \begin{rmk} Although $Q \leq P$ is well-defined (independent of the choice of expression for $P$), the span of the subwords of $P$ is not well-defined in general. For instance, in $\mathcal E_3$, $x_{12}x_{23}x_{12} = x_{12}x_{13}x_{23}$, but clearly $x_{13}$ does not lie in the span of the subwords of $x_{12}x_{23}x_{12}$. 
However, Theorem~\ref{thm-finitedim} shows that this notion is well-defined when $P = w_0^G$. Likewise, in $\mathcal E_G$, $Q \leq_L P$ does not imply $Q \leq P$ in general: for instance, $x_{13}x_{23} \leq_L x_{12}x_{13}x_{23}$, but $x_{13}x_{23} \not\leq x_{12}x_{13}x_{23}$. But again, Theorem~\ref{thm-finitedim} shows that these notions coincide when $P = w_0^G$. \end{rmk} The notion of descent sets introduced in Section~\ref{sec-finitedim} is also the direct analogue of a notion from Coxeter groups: the \emph{left descent set} $L(w)$ of an element $w \in W$ is the set of all $s_i \in S$ such that $\ell(s_iw)<\ell(w)$ (or in $\mathcal N$, $s_iw=0$). (Define $R(w)$ similarly.) Then Proposition~\ref{prop-monic} is the analogue of the following statement about Coxeter groups: \begin{itemize} \item if $W$ has an element $w$ such that $L(w) = S$, then $W$ is finite and $w$ is the longest element of $W$. \end{itemize} We can also now prove the following results about descent sets in $\mathcal E_G$, which are again analogues of facts about Coxeter groups. \begin{prop} If $w \in \mathcal E_G$, then $L(w),R(w)\subset G$. \end{prop} \begin{proof} Suppose $e = \overline{ij} \in L(w)$. Then Corollary~\ref{cor-tensor} implies that $\mathcal E_e \otimes \mathcal E_H \cong \mathcal E_n$, where $H=K_n \backslash \{e\}$. Thus left multiplication by $x_{ij}$ is injective on $\mathcal E_H$, so since $x_{ij}w = 0$, we must have that $w \not\in \mathcal E_H$. Thus $G \not\subset H$, so $e \in G$. \end{proof} \begin{proposition}\label{p descent set longword} Let $w \in \mathcal E_G$, and suppose $H \subset L(w)$. Then one can write $w = w_0^H g$ for some $g \in \mathcal E_G$. (Similarly, if $H' \subset R(w)$, then $w = g' w_0^{H'}$ for some $g' \in \mathcal E_G$.) \end{proposition} Recall that by Proposition~\ref{prop-monic}, $\mathcal E_H$ is necessarily finite-dimensional.
\begin{proof} By Theorem~\ref{thm-subgraph}, there is a unique expression for $w$ of the form $w= \sum_{m \in M} h_mm$ for some $h_m \in \mathcal E_H$, where $M$ is a set of right minimal coset representatives for $\mathcal E_H$ in $\mathcal E_G$. For any $\overline{ij} \in H$, since $\mathcal E_G$ is a free left $\mathcal E_H$-module, $0=x_{ij}w=\sum_{m \in M} (x_{ij}h_m)m$ implies that $x_{ij} h_m=0$ for all $m\in M$. Hence by Proposition~\ref{prop-monic}, $h_m = c_mw_0^H$ for some constant $c_m$, so $w = w_0^H (\sum_{m \in M} c_mm)$. \end{proof} Along similar lines, we can prove the following result. \begin{prop} Let $w \in \mathcal E_G$, and suppose $w_0^Hw = 0$ for some $H \subset G$ with $\mathcal E_H$ finite-dimensional. Then $w \in \mathcal E_H^+\mathcal E_G$. \end{prop} \begin{proof} As in the proof of Proposition~\ref{p descent set longword}, we can write $w= \sum_{m \in M} h_mm$. Multiplying by $w_0^H$ gives $0= \sum_{m \in M} (w_0^H h_m)m$. Since $M$ is a set of minimal coset representatives, $w_0^H h_m=0$ for all $m$. But then each $h_m$ has degree at least 1, so $w \in \mathcal E_H^+\mathcal E_G$. \end{proof} Finally, Coxeter groups have a notion of parabolic subgroups: for $J \subseteq S$, the \emph{parabolic subgroup} $W_J$ is the subgroup of $W$ generated by the set $J$. The pair $(W_J, J)$ is equal to the Coxeter group of the induced subgraph of $D$ with vertex set $J$. Let $\mathcal N_J$ denote the nil-Coxeter algebra of $W_J$. The analogues of parabolic subgroups for $\mathcal E_G$ are the subalgebras $\mathcal E_H$ as $H$ ranges over subgraphs of $G$. However, the analogue of the fact about parabolic subgroups just mentioned does not hold: the minimal relations satisfied by the generators of $\mathcal E_H$ are not easily determined from those of $\mathcal E_G$. Despite this, we still have that Theorem~\ref{thm-subgraph} is the analogue of the following fact about parabolic subgroups: \begin{itemize} \item Each right (resp. left) coset $W_Jw$ (resp.
$wW_J$) of $W_J$ contains a unique element of minimal length called a \emph{minimal coset representative}. Let $\leftexp{J}{W}$ (resp. $W^J$) denote the set of all such elements. Then $\mathcal N$ is a free left (resp. right) $\mathcal N_J$-module and as a left (resp. right) $\mathcal N_J$-module has basis $\leftexp{J}{W}$ (resp. $W^J$). \end{itemize} Although there exist many parallels between Coxeter groups and Fomin-Kirillov algebras, we do not yet have a good analogue for many of the geometric notions associated with Coxeter groups, such as reflections or root systems. For instance, for finite Coxeter groups, the length of the long word equals the number of positive roots in the corresponding root system, but we know of no combinatorial object that determines the maximum degree that occurs in $\mathcal E_G$. \section{Dynkin diagrams} In this section, we will describe $\mathcal E_G$ when $G$ is a (simply-laced) Dynkin diagram of finite type. In particular, we will show that any minimal relation of $\mathcal E_G$ in these cases is of one of three types: it is either a \emph{quadratic relation} inherited from $\mathcal E_n$, a \emph{braid relation} of the form $\mathsf{aba}+\mathsf{bab}=0$ between two edges that share a vertex, or a \emph{claw relation} of the form $\mathsf{abca}+\mathsf{bcab}+\mathsf{cabc}=0$ among three edges that share a vertex. (See the examples in Section~\ref{sec-examples} and Lemma~\ref{lemma-cyclic}.) \subsection{The case $\mathcal E_{A_n}$} Define $A_n$ to be the path on $n$ vertices (that is, the Dynkin diagram of type $A_n$). Then, in accordance with Example~\ref{ex-a3}, we have the following theorem. \begin{thm} \label{thm-a} The only minimal relations in $\mathcal E_{A_n}$ are the quadratic and braid relations. Its Hilbert series is \[\mathcal H_{A_n}(t) = [2][3][4] \cdots [n].\] In other words, $\mathcal E_{A_n} \cong \mathcal N_n$, the \textit{nil-Coxeter algebra} of type $A_{n-1}$. 
\end{thm} This follows easily from the existence of the divided difference representation of $\mathcal E_n$ given in \cite{FK}, but we present a different proof as it will be related to our investigation of $\mathcal E_{\tilde{A}_{n-1}}$ later. \begin{proof} Since $\ensuremath{\mathcal N}_{n}$ is defined using the quadratic and braid relations, $\mathcal E_{A_n}$ is a quotient of $\ensuremath{\mathcal N}_{n}$. We will show that the projection $\Theta\colon \ensuremath{\mathcal N}_{n} \to \mathcal E_{A_n}$ is an isomorphism by showing that the image under $\Theta$ of the basis $\{w\mid w \in \S_n\}$ of $\ensuremath{\mathcal N}_{n}$ is linearly independent. We prove this by the following pairing computation: \begin{equation*}\label{e pairing Sn} \text{ for $w,v\in \S_n$,} \qquad\qquad \langle \Theta(w), \operatorname{rev}(\Theta(v)) \rangle = \begin{cases} 1 & \text{ if } w = v,\\ 0 & \text{ if } w\neq v. \end{cases} \end{equation*} This pairing is zero unless the $\S_n$-degrees of $\Theta(w)$ and $\Theta(v)$ coincide, which gives the second case. To prove $\langle \Theta(w), \operatorname{rev}(\Theta(w)) \rangle = 1$, we induct on the length of $w$. Let $P = \Theta(w) = p_1\cdots p_d$ be an expression for $\Theta(w)$ as a product of generators of $\mathcal E_{A_n}$, and write $P = P^L_jp_jP^R_j$, where $P^L_j = p_1\cdots p_{j-1}$ and $P^R_j = p_{j+1}\cdots p_d$. Then by Lemma \ref{lemma-leibniz} and Proposition \ref{prop-pairing}, \[ \langle P,\operatorname{rev} (P)\rangle =\sum_{j=1}^d (p_j) (\sigma_{P^R_j}\nabla_{p_d}) \cdot \langle P^L_j P^R_j, \operatorname{rev}(P^L_d) \rangle.\] By the strong exchange condition for $\S_n$, there is a unique index $j$ for which $P^L_jP^R_j$ has the same $\S_n$-degree as $P^L_d$; clearly this index is $j=d$. Hence only the $j=d$ term in the sum can be nonzero, so we find that $\langle P, \operatorname{rev}(P) \rangle = \langle P^L_d, \operatorname{rev}(P^L_d)\rangle = 1$ by induction.
\end{proof} \subsection{The case $\mathcal E_{D_n}$} The next simplest case is that of the Dynkin diagram of type $D_n$ (which we will simply call $D_n$). We denote the edges of $D_n$ by $\mathsf{a}$, $\mathsf{b}$, $\mathsf{1}$, \dots, $\mathsf{n-3}$ as shown in Figure~\ref{fig-d}. (We will use these labels to denote both the edges of the graph and the corresponding variables.) \begin{figure} \begin{center} \begin{tikzpicture}[scale = 1.2] \node[v] (a) at (150:1){}; \node[v] (b) at (-150:1){}; \node[v] (1) at (0,0){}; \node[v] (2) at (1,0){}; \node[v] (3) at (2,0){}; \node[v] (4) at (4,0){}; \node[v] (5) at (5,0){}; \node at (0,-1.6){}; \draw[thick, ->] (5)--node[above]{${\scriptstyle \mathsf{n-3}}$}(4); \draw[thick] (4)--(3.5,0); \draw[thick, ->] (2.5,0)--(3); \draw[thick, ->] (3)--node[above]{${\scriptstyle \mathsf{2}}$}(2); \draw[thick, ->] (2)--node[above]{${\scriptstyle \mathsf{1}}$}(1); \draw[thick, ->] (1)--node[above]{${\scriptstyle \mathsf{a}}$}(a); \draw[thick, ->] (1)--node[below]{${\scriptstyle \mathsf{b}}$}(b); \draw[thick, loosely dotted] (2.7,0)--(3.4,0); \end{tikzpicture} \qquad \begin{tikzpicture}[scale = 1.2] \node[v] (a) at (150:1){}; \node[v] (b) at (-150:1){}; \node[v] (1) at (0,0){}; \node[v] (2) at (1,0){}; \node[v] (3) at (2,0){}; \node[v] (4) at (4,0){}; \node[v] (5) at (5,0){}; \node at (0,-1.6){}; \draw[thick, ->] (5)--node[below, near end]{${\scriptstyle \mathsf{(n-3)'}}$}(4); \draw[thick, loosely dotted] (2.7,0)--(3.4,0); \draw[thick, ->] (5) to [out = 165, in=15, near end] node[above]{${\scriptstyle \mathsf{3'}}$}(3); \draw[thick, ->] (5) to [out =150, in=35, near end] node[above]{${\scriptstyle \mathsf{2'}}$}(2); \draw[thick, ->] (5) to [out =135, in=45, near end] node[above]{${\scriptstyle \mathsf{1'}}$}(1); \draw[thick, ->] (5) to [out =120, in=35, near end] node[above]{${\scriptstyle \mathsf{a'}}$}(a); \draw[thick, ->] (5) to [out =-135, in=-15] node[above]{${\scriptstyle \mathsf{b'}}$}(b); \draw[dashed] (4)--(3.5,0) 
(2.5,0)--(3)--(2)--(1)--(a) (1)--(b); \end{tikzpicture} \end{center} \caption{\label{fig-d} On the left, the Dynkin diagram $D_n$ with edge labels. On the right, the star $K_{1,n-1}$ with primed edge labels to be used in Lemma~\ref{lemma-d3}.} \end{figure} The main result of this section is the following theorem. \begin{thm} \label{thm-d} The only minimal relations in $\mathcal E_{D_n}$ are the quadratic, braid, and claw relations. Its Hilbert series is \[\mathcal H_{D_n}(t) = [n][n-1] \cdot [4][6][8]\cdots[2n-4].\] \end{thm} In order to prove Theorem~\ref{thm-d}, we will construct a set of minimal coset representatives for $\mathcal E_{D_{n-1}}$ in $\mathcal E_{D_{n}}$. For $n \geq 3$, let \begin{align*} M_n=\{&\mathrm{id}, \quad \mathsf{n-3}, \quad \mathsf{(n-4)(n-3)}, \quad \dots, \quad \mathsf{(n-3)!_{\scriptscriptstyle\swarrow} }, \\ &\mathsf{a(n-3)!_{\scriptscriptstyle\swarrow} }, \quad \mathsf{b(n-3)!_{\scriptscriptstyle\swarrow} },\quad \mathsf{ab(n-3)!_{\scriptscriptstyle\swarrow} }, \quad \mathsf{ba(n-3)!_{\scriptscriptstyle\swarrow} }, \quad \mathsf{aba(n-3)!_{\scriptscriptstyle\swarrow} }, \\ & \mathsf{1aba(n-3)!_{\scriptscriptstyle\swarrow} },\quad \mathsf{2!_{\scriptscriptstyle\searrow} aba(n-3)!_{\scriptscriptstyle\swarrow} },\quad \dots,\quad \mathsf{(n-3)!_{\scriptscriptstyle\searrow} aba(n-3)!_{\scriptscriptstyle\swarrow} } \}, \end{align*} where $\mathsf{i!_{\scriptscriptstyle\swarrow}}=\mathsf{12\cdots(i-1)i}$ and $\mathsf{i!_{\scriptscriptstyle\searrow}}=\mathsf{i(i-1)\cdots 21}$. Note that $M_n$ can be given the structure of a (graded) partially ordered set in which $m'$ covers $m$ if there exists an edge variable $\mathsf{e}$ such that $m' = \mathsf{e}\,m$. Thus we can label the edges in the Hasse diagram of $M_n$ by generators of $\mathcal E_{D_n}$, and each element $m$ of $M_n$ is (up to sign) the right-to-left product of the edge labels along a saturated chain that starts at the minimal element $\mathrm{id}$ and ends at $m$. 
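The degrees of the elements of $M_n$ can be read off directly from the definition above, which makes the counting claims easy to check by machine. The following sketch (our own verification aid, not part of the paper; the helper names are ours) confirms that $|M_n| = 2n$, that the degree generating function of $M_n$ is $[n](1+t^{n-2})$, and that this is consistent with the Hilbert series of Theorem~\ref{thm-d} in the sense that $[n](1+t^{n-2})\cdot\mathcal H_{D_{n-1}}(t) = \mathcal H_{D_n}(t)$:

```python
# Polynomials are coefficient lists; bracket(m) is the q-integer [m].
def bracket(m):
    return [1] * m  # [m] = 1 + t + ... + t^(m-1)

def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def M_degrees(n):
    # id, (n-3), ..., (n-3)!:            degrees 0, 1, ..., n-3
    # a..., b..., ab..., ba..., aba...:  degrees n-2, n-2, n-1, n-1, n
    # i! aba (n-3)!, i = 1, ..., n-3:    degrees n+1, ..., 2n-3
    return (list(range(n - 2)) + [n - 2, n - 2, n - 1, n - 1, n]
            + list(range(n + 1, 2 * n - 2)))

def H_D(n):
    # [n][n-1][4][6]...[2n-4]; for n = 3 this is [3][2], the series of E_{A_3}
    h = mul(bracket(n), bracket(n - 1))
    for m in range(4, 2 * n - 3, 2):
        h = mul(h, bracket(m))
    return h

for n in range(4, 10):
    degs = M_degrees(n)
    assert len(degs) == 2 * n
    h_M = [0] * (2 * n - 2)
    for d in degs:
        h_M[d] += 1
    assert h_M == mul(bracket(n), [1] + [0] * (n - 3) + [1])  # [n](1 + t^(n-2))
    assert mul(h_M, H_D(n - 1)) == H_D(n)
print("M_n checks pass for n = 4..9")
```

This checks only the numerology, of course; the algebraic content is in the lemmas below.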
See Figure~\ref{fig-mcr} for the case $M_5$. \begin{figure} \begin{center} \begin{tikzpicture} \node[v] (1) at (0,0){}; \node[v] (2) at (0,1){}; \node[v] (3) at (0,2){}; \node[v] (4) at (-.75, 2.75){}; \node[v] (5) at (.75, 2.75){}; \node[v] (6) at (-.75, 3.75){}; \node[v] (7) at (.75, 3.75){}; \node[v] (8) at (0, 4.5){}; \node[v] (9) at (0, 5.5){}; \node[v] (10) at (0,6.5){}; \draw(1)node[left]{$\mathrm{id}$}--(2)node[left]{$\mathsf{2}$}--(3)node[left]{$\mathsf{12}$}--(4)node[left]{$\mathsf{a12}$}--(6)node[left]{$\mathsf{ba12}$}--(8)node[left]{$\mathsf{aba12}$}--(9)node[left]{$\mathsf{1aba12}$}--(10)node[left]{$\mathsf{21aba12}$} (3)--(5)node[right]{$\mathsf{b12}$}--(7)node[right]{$\mathsf{ab12}$}--(8); \end{tikzpicture} \end{center} \caption{\label{fig-mcr} Hasse diagram for $M_5$, the (left) minimal coset representatives for $D_4$ inside $D_5$.} \end{figure} When $n=3$, $D_3=A_3$, and $\mathcal E_{D_3}$ has basis $M_3 = \{\mathrm{id}, \mathsf{a}, \mathsf{b}, \mathsf{ab}, \mathsf{ba}, \mathsf{aba}\}$ by Theorem~\ref{thm-a}. For $n \geq 4$, we claim that $M_n$ is a set of (left) minimal coset representatives for $\mathcal E_{D_{n-1}}$ in $\mathcal E_{D_{n}}$. We proceed in two steps. Let $I_n = \mathcal E_{D_n}\mathcal E_{D_{n-1}}^+$. \begin{lemma} \label{lemma-d1} The set $M_n$ spans $\mathcal E_{D_n}/I_n$. \end{lemma} In other words, we can write any element of $\mathcal E_{D_n}$ in a \emph{normal form} as a linear combination of monomials, each of which is a product of a minimal coset representative and a monomial in $\mathcal E_{D_{n-1}}$. The proof of this lemma essentially gives a straightening algorithm for $\mathcal E_{D_n}$. \begin{proof} We induct on $n$. Assume $n \geq 4$. Consider any monomial $m \in \mathcal E_{D_n}$. If $m$ does not contain any instance of the variable $\mathsf{n-3}$, then it either lies in $I_n$ or is the identity element $\mathrm{id} \in M_n$. 
If $m$ contains exactly one occurrence of $\mathsf{n-3}$, then for it not to lie in $I_n$, it must equal $A\mathsf{(n-3)}$ for some $A \in \mathcal E_{D_{n-1}}$. By induction, $A$ is congruent modulo $I_{n-1}$ to a linear combination of elements in $M_{n-1}$. Since $\mathsf{n-3}$ commutes with $\mathcal E_{D_{n-2}}$, we have $I_{n-1}\cdot \mathsf{(n-3)} \subset I_n$, so $m$ is congruent modulo $I_n$ to a linear combination of elements of $M_{n-1} \cdot \mathsf{(n-3)} \subset M_n$. If $m$ has more than one occurrence of $\mathsf{n-3}$, as above we may assume that it ends with a substring of the form $\mathsf{(n-3)}A\mathsf{(n-3)}$, where $A \in \mathcal E_{D_{n-1}}$. By the previous paragraph, we may assume that $A \in M_{n-1}$. If $A=\mathrm{id}$, this clearly vanishes. Otherwise, either $A=B\mathsf{(n-4)}$ for some $B \in M_{n-2}$ or $A = \mathsf{(n-4)!_{\scriptscriptstyle\searrow} aba(n-4)!_{\scriptscriptstyle\swarrow} }$. In the first case, by the braid relation \[\mathsf{(n-3)}A\mathsf{(n-3)} = B\mathsf{(n-3)(n-4)(n-3)} = B\mathsf{(n-4)(n-3)(n-4)} \in I_n,\] so $m$ will also lie in $I_n$. It remains to consider the case when $m$ ends in $X=\mathsf{(n-3)!_{\scriptscriptstyle\searrow} aba(n-3)!_{\scriptscriptstyle\swarrow} }$. We claim that $\mathsf{a}X=X\mathsf{b}$, $\mathsf{b}X=X\mathsf{a}$, and $\mathsf{i}X = X\mathsf{i}$ for $\mathsf{i} = \mathsf{1}, \mathsf{2}, \dots, \mathsf{n-4}$. This shows that either $m = X \in M_n$ or $m \in I_n$, which will complete the proof. To show $X\mathsf{a}=\mathsf{b}X$, since $\mathsf{a}$ and $\mathsf{b}$ commute with $\mathsf{2}, \mathsf{3}, \dots, \mathsf{n-3}$, it suffices to show that $\mathsf{1aba1a} = \mathsf{b1aba1}$. This follows from the quadratic, braid, and claw relations because \[\mathsf{1aba1a} = \mathsf{1ab1a1} = (\mathsf{ab1a}+\mathsf{b1ab})\mathsf{a1} = \mathsf{ab1aa1}+\mathsf{b1aba1} = \mathsf{b1aba1}.\] Similarly, $X\mathsf{b}=\mathsf{a}X$. 
Finally, for $\mathsf{i}=\mathsf{1}, \mathsf{2}, \dots, \mathsf{n-4}$, since $\mathsf{i}$ commutes with $\mathsf{i+2}, \dots, \mathsf{n-3}$, it suffices to show that $\mathsf{(i+1)!_{\scriptscriptstyle\searrow} aba(i+1)!_{\scriptscriptstyle\swarrow} i} = \mathsf{i(i+1)!_{\scriptscriptstyle\searrow} aba(i+1)!_{\scriptscriptstyle\swarrow} }$. This holds because \begin{multline*} \mathsf{(i+1)!_{\scriptscriptstyle\searrow} aba(i-1)!_{\scriptscriptstyle\swarrow} i(i+1)i} = \mathsf{(i+1)!_{\scriptscriptstyle\searrow} aba(i-1)!_{\scriptscriptstyle\swarrow} (i+1)i(i+1)} \\ = \mathsf{(i+1)i(i+1)(i-1)!_{\scriptscriptstyle\searrow} aba(i+1)!_{\scriptscriptstyle\swarrow} } = \mathsf{i(i+1)i(i-1)!_{\scriptscriptstyle\searrow} aba(i+1)!_{\scriptscriptstyle\swarrow} }.\qedhere \end{multline*} \end{proof} Note that in order to put an element of $\mathcal E_{D_n}$ into normal form, we used only the quadratic, braid, and claw relations. \medskip To show linear independence, we first show that the highest degree minimal coset representative is nonzero. \begin{lemma}\label{lemma-d3} The highest degree element $X = \mathsf{(n-3)!_{\scriptscriptstyle\searrow} aba(n-3)!_{\scriptscriptstyle\swarrow} }\in M_n$ does not lie in $I_n$. \end{lemma} Let $K_{1, n-1}$ be the star centered at the end vertex in $D_n$, and label its edges using primed symbols as in Figure~\ref{fig-d}. (Note that, for convenience, we let $\mathsf{(n-3)} = \mathsf{(n-3)'}$.) Define $\mathsf{i'!_{\scriptscriptstyle\searrow} }$ and $\mathsf{i'!_{\scriptscriptstyle\swarrow} }$ analogously to the unprimed versions. \begin{proof} By Lemma~\ref{lemma-orthogonal}, $I_n$ is orthogonal to $\mathcal E_{K_{1, n-1}}$. Hence we need only show that $X$ is not orthogonal to some element of $\mathcal E_{K_{1, n-1}}$. Let $X' = \mathsf{(n-3)'!_{\scriptscriptstyle\searrow} a'b'a'(n-3)'!_{\scriptscriptstyle\swarrow} }$. We will show that $\langle X, X' \rangle = (-1)^{n-1}$. 
For $n=3$, an easy computation shows that $\langle \mathsf{aba}, \mathsf{aba}\rangle = 1$. For $n\geq 4$, let $Y = \mathsf{(n-4)!_{\scriptscriptstyle\searrow} aba(n-4)!_{\scriptscriptstyle\swarrow} }$ and $Y' = \mathsf{(n-4)'!_{\scriptscriptstyle\searrow} a'b'a'(n-4)'!_{\scriptscriptstyle\swarrow} }$, so that $X=\mathsf{(n-3)}Y\mathsf{(n-3)}$ and $X'=\mathsf{(n-3)}Y'\mathsf{(n-3)}$. Since \[ \Delta_{\mathsf{n-3}}(X') = Y'\mathsf{(n-3)}+ \sigma_{\mathsf{n-3}}(\mathsf{(n-3)}Y') = Y'\mathsf{(n-3)} - \mathsf{(n-3)}\sigma_{\mathsf{n-3}}(Y'), \] we have \[ \langle X, X' \rangle = \langle \mathsf{(n-3)}Y, \Delta_{\mathsf{n-3}}(X') \rangle = \langle\mathsf{(n-3)}Y, Y'\mathsf{(n-3)}\rangle - \langle \mathsf{(n-3)}Y, \mathsf{(n-3)}\sigma_{\mathsf{n-3}}(Y') \rangle. \] But $\mathsf{(n-3)}Y$ ends in $\mathsf{n-4}$ (or $\mathsf{a}$ if $n=4$), which does not occur in $Y'\mathsf{(n-3)}$, so the first term vanishes. Thus \[ \langle X, X' \rangle = -\langle \mathsf{(n-3)}Y, \mathsf{(n-3)}\sigma_{\mathsf{n-3}}(Y') \rangle = - \langle (\mathsf{(n-3)}Y)\nabla_{\mathsf{n-3}}, \sigma_{\mathsf{n-3}}(Y')\rangle. \] But in fact $(\mathsf{(n-3)}Y)\nabla_{\mathsf{n-3}} = Y$. To see this, note that none of the edges in $Y$ contain the end vertex in $D_n$, meaning $\nabla_{\mathsf{n-3}}$ will not act on any variable in $Y$, and also that $\sigma_{Y}(\mathsf{n-3}) = \mathsf{n-3}$. It follows that $\langle X, X' \rangle = - \langle Y, \sigma_{\mathsf{n-3}}(Y')\rangle$. But note that the edges appearing in $\sigma_{\mathsf{n-3}}(Y')$ are contained in the star centered at the end vertex of $D_{n-1}$. Thus $\langle Y, \sigma_{\mathsf{n-3}}(Y')\rangle$ is the computation for $n-1$, so the result follows by induction. \end{proof} \begin{lemma}\label{lemma-d2} The set $M_n$ is linearly independent in $\mathcal E_{D_n}/I_n$. \end{lemma} \begin{proof} Note that the elements of $M_n$ all differ in either degree or $\S_n$-degree. Therefore, they are linearly independent unless one of them lies in $I_n$. 
But each element of $M_n$ is a right factor of the longest element $X$, so since $X$ does not lie in the left ideal $I_n$, none of the elements do. \end{proof} \begin{proof} [Proof of Theorem~\ref{thm-d}] By Lemmas~\ref{lemma-d1} and \ref{lemma-d2}, $M_n$ is a set of minimal coset representatives for $\mathcal E_{D_{n-1}}$ in $\mathcal E_{D_{n}}$. Since the only relations needed in Lemma~\ref{lemma-d1} were the quadratic, braid, and claw relations, Lemma~\ref{lemma-d2} implies that these generate all relations in $\mathcal E_{D_n}$. We can now prove \[\mathcal H_{D_n}(t) = [n][n-1] \cdot [4][6][8]\cdots [2n-4]\] by induction on $n$. The case $n=3$ follows from Theorem~\ref{thm-a}. For $n>3$, since $M_n$ is a set of minimal coset representatives and $\mathcal H_{M_n}(t) = [n](1+t^{n-2}) = \frac{[n][2n-4]}{[n-2]}$, we have \begin{align*} \mathcal H_{D_n}(t) &= \mathcal H_{M_n}(t) \cdot \mathcal H_{D_{n-1}}(t)\\ &= \frac{[n][2n-4]}{[n-2]} \cdot [n-1][n-2] \cdot [4][6] \cdots [2n-6]\\ &= [n][n-1] \cdot [4][6] \cdots [2n-4]. \qedhere \end{align*} \end{proof} \subsection{The cases $\mathcal E_{E_n}$ for $n=6$, $7$, and $8$} In this section, we will describe $\mathcal E_{E_n}$ for $n=6$, $7$, and $8$, where $E_n$ is the Dynkin diagram of type $E_n$ as shown in Figure~\ref{fig-e}. \begin{figure} \begin{center} \dr{(0,0)node[v]{}--(1,0)node[v]{}--(2,0)node[v]{}--(3,0)node[v]{}--(4,0)node[v]{} (2,0)--(2,1)node[v]{}} \quad\quad\quad \dr{(0,0)node[v]{}--(1,0)node[v]{}--(2,0)node[v]{}--(3,0)node[v]{}--(4,0)node[v]{}--(5,0)node[v]{} (2,0)--(2,1)node[v]{}} \\[5mm] \dr{(0,0)node[v]{}--(1,0)node[v]{}--(2,0)node[v]{}--(3,0)node[v]{}--(4,0)node[v]{}--(5,0)node[v]{}--(6,0)node[v]{} (2,0)--(2,1)node[v]{}} \end{center} \caption{\label{fig-e} The Dynkin diagrams $E_6$, $E_7$, and $E_8$.} \end{figure} \begin{thm} The only minimal relations in $\mathcal E_{E_6}$, $\mathcal E_{E_7}$, and $\mathcal E_{E_8}$ are the quadratic, braid, and claw relations. 
Their Hilbert series are \begin{align*} \mathcal H_{E_6}(t) &= \frac{[4][5][6]^2[8][9]}{[3]},\\ \mathcal H_{E_7}(t) &= \frac{[6]^2[8][9][10][12][14]}{[3]},\\ \mathcal H_{E_8}(t) &= \frac{[6][8][10][12][14][15][18][20][24]}{[3][5]}. \end{align*} \end{thm} Our proof of this theorem is largely computational. For this reason, we only summarize the basic strategy. \begin{proof} For each $n=6, 7, 8$, we first compute using \texttt{bergman} a purported set of minimal coset representatives for $\mathcal E_{E_{n-1}}$ in $\mathcal E_{E_n}$ (note $E_5=D_5$) assuming that the only relations are the quadratic, braid, and claw relations. This gives an upper bound on $\mathcal H_{E_n}(t)/\mathcal H_{E_{n-1}}(t)$. We then use Algorithm~\ref{alg-mcr} to compute a subset of the minimal coset representatives for $\mathcal E_{E_{n-1}}$ in $\mathcal E_{E_n}$. This gives a lower bound on $\mathcal H_{E_n}(t)/\mathcal H_{E_{n-1}}(t)$. In each case, the upper and lower bounds coincide, giving the desired Hilbert series. \end{proof} This method does not work for $E_9$ (also known as the affine Dynkin diagram $\tilde{E}_8$) since $\mathcal E_{E_9}$ appears to require relations other than the quadratic, braid, and claw relations and is, in any case, beyond our current computational abilities. \subsection{Connection to Weyl groups} If we combine the results on $A_n$, $D_n$, and $E_n$ ($n \leq 8$) with known results about Weyl groups, we arrive at the following theorem. \begin{thm}\label{thm-weyl} Let $G$ be the graph of a simply-laced Dynkin diagram. Then the relations in $\mathcal E_G$ are generated by quadratic, braid, and claw relations, and \[\mathcal H_G(t) = \frac{W_G(t)}{C_G(t)},\] where $W_G(t)$ is the Poincar\'e series of the corresponding Weyl group $W$ and $C_G(t)$ is the characteristic polynomial of a Coxeter element in $W$. Moreover, $\dim \mathcal E_G = |W|/f$, where $f=C_G(1)$ is the index of connection (the index of the root lattice in the weight lattice). 
\end{thm} However, we know of no explanation of this fact that does not require explicitly computing the requisite Hilbert series first. \section{The case ${\affineA{n-1}}$} In this section, we will describe $\mathcal E_{{\affineA{n-1}}}$, where ${\affineA{n-1}}$ is the cycle on $n$ vertices (named after the affine Dynkin diagram). Label the edges $\mathsf{0}, \mathsf{1}, \dots, \mathsf{n-1}$ in order as shown in Figure~\ref{fig-cyc}. For convenience, we will take these labels modulo $n$ so that for any integers $i$ and $j$, the edges $\mathsf{i}$ and $\mathsf{j}$ are equal if and only if $i \equiv j \pmod n$. \begin{figure} \begin{center} \begin{tikzpicture}[scale = 1.5] \node[v] (0) at (90:1){}; \node[v] (1) at (54:1){}; \node[v] (2) at (18:1){}; \node[v] (3) at (-18:1){}; \node[v] (k-1) at (-90:1){}; \node[v] (k) at (-126:1){}; \node[v] (k+1) at (-162:1){}; \node[v] (n-1) at (126:1){}; \draw[thick, ->] (0)--node[above]{${\scriptstyle\mathsf{0}}$}(1); \draw[thick, ->] (1)--node[right]{${\scriptstyle\mathsf{1}}$}(2); \draw[thick, ->] (2)--node[right]{${\scriptstyle\mathsf{2}}$}(3); \draw[thick, ->] (0)--node[below left]{${\scriptstyle\mathsf{a}}$}(2); \draw[thick, ->] (n-1)--node[above, near start]{${\scriptstyle\mathsf{n-1}}$}(0); \draw[thick, ->] (0)--node[left]{${\scriptstyle\mathsf{b}}$}(k); \draw[thick, ->] (k-1)--node[below left, near start]{${\scriptstyle\mathsf{n-k-1}}$}(k); \draw[thick, ->] (k)--node[below left]{${\scriptstyle\mathsf{n-k}}$}(k+1); \draw[thick, loosely dotted] (-36:1) to [bend left = 18] (-72:1) (180:1) to [bend left = 18](144:1); \end{tikzpicture} \end{center} \caption{\label{fig-cyc} The $n$-cycle ${\affineA{n-1}}$ with edges labeled $\mathsf{0}, \dots, \mathsf{n-1}$. 
The additional edges $\mathsf{a}$ and $\mathsf{b}$ will be used in the proofs of Lemma~\ref{l elem sym vanish} and Proposition~\ref{p elem sym vanish}.} \end{figure} Since the line graph of ${\affineA{n-1}}$ is isomorphic to itself, Proposition~\ref{p Coxeter to FK} shows that $\mathcal E_{\affineA{n-1}}$ is a quotient of the nil-Coxeter algebra $\widetilde\mathcal N_n$ of type ${\affineA{n-1}}$. The main result of this section will be to describe the kernel of the quotient map explicitly as well as a set of minimal coset representatives for $\mathcal E_{A_n}$ in $\mathcal E_{\affineA{n-1}}$. As a corollary, we will obtain the following result. (For a more precise statement, see Theorem~\ref{t cycle}.) \begin{thm} \label{thm-cyc} The algebra $\mathcal E_{\affineA{n-1}}$ has a presentation consisting of quadratic relations, braid relations, and $n-1$ additional relations of degrees $k(n-k)$ for $1 \leq k \leq n-1$. The Hilbert series of $\mathcal E_{\affineA{n-1}}$ is given by \[ \mathcal H_{\affineA{n-1}}(t)=[n]\cdot\prod^{n-1}_{k=1}[k(n-k)]. \] In particular, $\dim \mathcal E_{\affineA{n-1}} = n! \cdot (n-1)!$, and the top degree of $\mathcal E_{\affineA{n-1}}$ is $\binom{n+1}{3}$. \end{thm} It will often be more convenient to work with an extended version of $\mathcal E_{\affineA{n-1}}$ defined as follows: $\widehat{\mathcal E}_{\affineA{n-1}}$ is the twisted algebra $\Pi\cdot\mathcal E_{\affineA{n-1}}$ generated by $\Pi\cong\mathbb Z/ n\mathbb Z$ and $\mathcal E_{\affineA{n-1}}$, where the generator $\pi \in \Pi$ satisfies $\pi \mathsf{i}=\mathsf{(i+1)}\pi$. Enlarging the algebra $\mathcal E_{\affineA{n-1}}$ to $\widehat{\mathcal E}_{\affineA{n-1}}$ is quite harmless---if $B$ is any basis of $\mathcal E_{\affineA{n-1}}$, then $\{\pi^i b\mid 0 \leq i \leq n-1,\; b\in B\}$ is a basis of $\widehat{\mathcal E}_{\affineA{n-1}}$. 
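The numerical consequences stated in Theorem~\ref{thm-cyc} follow from the Hilbert series alone, and can be verified mechanically. The following sketch (ours, not from the paper) checks that evaluating $[n]\prod_{k=1}^{n-1}[k(n-k)]$ at $t=1$ gives $n!\,(n-1)!$ and that its degree is $\binom{n+1}{3}$:

```python
from math import comb, factorial

for n in range(2, 10):
    # dimension: the Hilbert series evaluated at t = 1, where [m] gives m
    dim = n
    for k in range(1, n):
        dim *= k * (n - k)
    assert dim == factorial(n) * factorial(n - 1)
    # top degree: each factor [m] contributes degree m - 1
    top = (n - 1) + sum(k * (n - k) - 1 for k in range(1, n))
    assert top == comb(n + 1, 3)
print("dimension and top degree agree for n = 2..9")
```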
We may think of $\pi$ as having degree 0 and $\S_n$-degree equal to the automorphism of ${\affineA{n-1}}$ sending each vertex to the next adjacent vertex clockwise. Hence $\sigma_\pi(\mathsf{i}) = \mathsf{i+1} = \pi \mathsf{i} \pi^{-1}$. We begin by presenting some background on the (extended) affine symmetric group. We will then explicitly describe the relations of $\mathcal E_{\affineA{n-1}}$. This will allow us to find minimal coset representatives for $\mathcal E_{A_n} = \mathcal N_n$ in $\mathcal E_{\affineA{n-1}}$ and thereby prove the main result. \subsection{Background} Here we review some background on the geometry of the extended affine symmetric group. What follows is somewhat cursory; for a thorough treatment of this material, see \cite{Haiman}. The \emph{affine symmetric group} $(\widetilde\S_n, K)$ is the Coxeter group with simple reflections $K=\{s_0, s_1, \dots, s_{n-1}\}$ whose Dynkin diagram is an $n$-cycle. Here $\S_n = (\S_n, S)$ is the symmetric group generated by $S=\{s_1, \dots, s_{n-1}\}$. The \emph{extended affine symmetric group} $\ensuremath{\widehat \S_n}$ is the semidirect product $\Pi \ltimes \ensuremath{\widetilde{\S}_n}$, where $\Pi$ is the cyclic group of order $n$ generated by $\pi$ satisfying $\pi s_i = s_{i+1}\pi$ (indices taken modulo $n$). The groups $\S_n$ and $\ensuremath{\widetilde{\S}_n}$ can be realized as affine reflection groups as follows. Let $\{\epsilon_1, \dots, \epsilon_n\}$ be the standard basis of $\mathbb R^n$ with the usual inner product $(\cdot,\cdot)$. Let $V \subset \mathbb R^n$ be the subspace spanned by $\alpha_i = \epsilon_i-\epsilon_{i+1}$ for $i = 1, \dots, n-1$. We identify $V$ in the natural way with $\mathbb R^n/\mathbb R\varepsilon$, where $\varepsilon=\epsilon_1+\cdots+\epsilon_n$. The $\alpha_i$ are the simple roots of the root system $\Phi = \Phi^+ \cup \Phi^-$ of type $A_{n-1}$, where $\Phi^+ = \{\epsilon_i-\epsilon_j \mid 1 \leq i < j \leq n\}$.
For a root $\alpha \in \Phi$ and $k \in \mathbb Z$, denote by $h_{\alpha- k\delta}$ the (affine) hyperplane $\{x \in V \mid (x, \alpha)=k\}$, and let $s_{\alpha-k\delta}$ be the reflection over $h_{\alpha-k\delta}$. Then the map sending $s_i$ to the reflection $s_{\alpha_i}$ gives a faithful representation of $\S_n$. Denoting the highest root by $\bar\alpha = \epsilon_1-\epsilon_n$ and sending $s_0$ to the reflection $s_{\alpha_0}$ over the hyperplane $h_{\alpha_0} = h_{-\bar\alpha+\delta}=\{x \in V \mid (x, \epsilon_1-\epsilon_n)=1\}$ extends this to a faithful representation of $\widetilde\S_n$. The connected components of $V-\bigcup_{\alpha \in \Phi^+} h_{\alpha}$ are called \emph{chambers}, and the connected components of $V-\bigcup_{\alpha\in\Phi^+, k\in \mathbb Z} h_{\alpha-k\delta}$ are called \emph{alcoves}. The actions of $\S_n$ and $\ensuremath{\widetilde{\S}_n}$ are simply transitive on chambers and alcoves, respectively. We define \begin{align*} \mathbf C_0 &= \{x \in V \mid (\alpha_i, x)>0, \text{ for } i=1, \dots, n-1\}\\ \mathbf A_0 &= \{x \in \mathbf C_0 \mid (\bar\alpha, x)<1\} \end{align*} to be the \emph{fundamental chamber} and \emph{fundamental alcove}. A point in the closure of $\mathbf C_0$ is called \emph{dominant}. The set of all affine transformations that preserve the set of alcoves can be identified with $\ensuremath{\widehat \S_n}$: let $Y$ denote the weight lattice $\mathbb Z^n/\mathbb Z\varepsilon \subset V$. Then $\ensuremath{\widehat \S_n} = Y \rtimes \S_n$, where elements of $Y$ are treated as translations. In this context, $\ensuremath{\widetilde{\S}_n} = \mathbb Z\Phi \rtimes \S_n$, where $\mathbb Z\Phi \subset Y$ is the root lattice. We will denote elements of $Y \subset \ensuremath{\widehat \S_n}$ using the multiplicative notation $y^\lambda = y_1^{\lambda_1}\cdots y_n^{\lambda_n}$ for $\lambda = \lambda_1\epsilon_1 + \cdots + \lambda_n\epsilon_n \in Y$. Note that $y_1y_2\cdots y_n = \mathrm{id}$.
Here $\Pi$ is the stabilizer of $\mathbf A_0$, and we will take $\pi = y_1s_1s_2\cdots s_{n-1}$ as a generator of $\Pi$. Recall that if $w$ is an element of a Coxeter group, then the \emph{length} $\ell(w)$ of $w$ is the minimal length of an expression for $w$ as a product of simple reflections. For $w \in \ensuremath{\widehat \S_n}$, we set $\ell(w) = \ell(v)$, where $v \in \ensuremath{\widetilde{\S}_n}$ is the unique element such that $w = \pi^k v$ for some $k$. Equivalently, $\ell(w)$ is the number of hyperplanes $h_{\alpha-k\delta}$ separating $\mathbf A_0$ from $w^{-1}(\mathbf A_0)$. We will sometimes abuse notation by identifying $\S_n$, $\ensuremath{\widetilde{\S}_n}$, and $\ensuremath{\widehat \S_n}$ with their nil-Coxeter counterparts $\mathcal N_n$, $\ensuremath{\widetilde{\mathcal N}}_n$, and $\ensuremath{\widehat{\mathcal N}}_n$. Note that a monomial $y^\lambda$, when considered as an element of $\ensuremath{\widehat{\mathcal N}}_n$, is assumed to be a reduced expression (and therefore nonzero). We will write $\widetilde\Theta\colon \ensuremath{\widetilde{\mathcal N}}_n \to \mathcal E_{\affineA{n-1}}$ and $\widehat\Theta\colon \ensuremath{\widehat{\mathcal N}}_n \to \widehat \mathcal E_{\affineA{n-1}}$ for the canonical surjections sending $s_i \mapsto \mathsf{i}$. We will sometimes abuse notation by writing $\mathsf{i}$ for $s_i \in \mathcal N_n$ or omitting $\widetilde\Theta$ or $\widehat\Theta$ when convenient. We will also write $w_0$ for the longest element of $\S_n$. \subsection{Relations} In this section, we will describe the extra relations that occur in $\mathcal E_{\affineA{n-1}}$. Consider the translations $y_1, \dots, y_n \in \widehat\S_n$ as described above. Let us write each translation $y_{i_1}y_{i_2}\cdots y_{i_k} \in \widehat\S_n$ (for $1 \leq i_1<\cdots <i_k\leq n$) as a reduced word---we will see below that this has length $k(n-k)$. Let $e_k(y_1, \dots, y_n)$ be the sum of these words in $\widehat\mathcal N_n$. 
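The claim that each $y_{i_1}\cdots y_{i_k}$ has length $k(n-k)$ can also be checked computationally in the window model of affine permutations, where an element of $\ensuremath{\widehat \S_n}$ is recorded by its window $(w(1), \dots, w(n))$ and its length (the number of separating hyperplanes) is given by the standard inversion formula $\ell(w) = \sum_{1\leq i<j\leq n}\left|\lfloor (w(j)-w(i))/n\rfloor\right|$. The sketch below is ours, not from the paper, and assumes this standard formula:

```python
from itertools import combinations

def length(window):
    # Inversion formula for the length of an (extended) affine permutation
    # given by its window (w(1), ..., w(n)).  Python's // floors toward
    # minus infinity, which is exactly the floor in the formula.
    n = len(window)
    return sum(abs((window[j] - window[i]) // n)
               for i, j in combinations(range(n), 2))

def y_window(n, idx):
    # Window of the translation y_{i_1} ... y_{i_k}: add n in each
    # coordinate i in idx (0-indexed positions).
    return [i + 1 + (n if i in idx else 0) for i in range(n)]

for n in range(2, 8):
    for k in range(1, n):
        for idx in combinations(range(n), k):
            assert length(y_window(n, set(idx))) == k * (n - k)
print("lengths k(n-k) confirmed for n = 2..7")
```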
\begin{proposition}\label{p elem sym vanish} The image of $e_k(y_1, \dots, y_n)$ vanishes in $\widehat{\mathcal E}_{\affineA{n-1}}$ for $k = 1, \dots, n-1$. Therefore, in $\mathcal E_{\affineA{n-1}}$, \[R_k = R_k({\affineA{n-1}}) = \pi^{-k} e_k(y_1, \dots, y_n) = 0.\] \end{proposition} Note that $\operatorname{rev}(R_k) = R_{n-k}$. Before we prove this result, we will give an explicit description of $R_k$ in terms of the generators of $\mathcal E_{{\affineA{n-1}}}$. Consider a \emph{Grassmannian permutation} $w \in \S_n$ with $w_1<w_2<\dots<w_k$ and $w_{k+1}<w_{k+2}<\dots<w_n$, i.e., $w$ has at most one descent, which occurs at position $k$. Equivalently, $w$ is a minimal coset representative of $\S_k \times \S_{n-k}$ in $\S_n$. We associate to $w$ the partition $\lambda = (w_k-k, \dots, w_1-1)$ and write $w = \gamma(\lambda)$. This gives a bijection between such $w$ and partitions $\lambda$ with at most $k$ parts, each of size at most $n-k$. There is a nice description of the set of reduced words of a Grassmannian permutation called the $\delta$-rule, due to Winkel \cite{Winkel}. For a partition $\lambda$ as above, let $\delta_\lambda$ be the tableau of shape $\lambda$ with entry $k+j-i$ in the box at row $i$, column $j$. The $\delta$-rule says that the reduced words of $\gamma(\lambda)$ are obtained by starting with the tableau $\delta_\lambda$ and successively removing outer corners, recording the removed entries, until all entries have been removed. For example, $s_6s_2s_4s_5s_3s_4$ is the reduced expression for $\gamma(3,2,1,0)$ corresponding to the sequence \[ \begin{array}{cc} {\tiny \delta_{(3,2,1,0)}\;=\;\tableau{4&5&6\\3&4\\2}\quad\to\quad \tableau{4&5\\3&4\\2} \quad\to\quad \tableau{4&5\\3&4} \quad\to\quad \tableau{4&5\\3} \quad\to\quad \tableau{4\\3} \quad\to\quad \tableau{4}}\,.
\end{array} \] Given a partition $\mu \subset \lambda$, define $\gamma(\lambda/\mu) = \gamma(\lambda) \gamma(\mu)^{-1}$, and let $\delta_{\lambda/\mu}$ be the skew subtableau of $\delta_\lambda$ of shape $\lambda/\mu$. It follows from the $\delta$-rule that the reduced expressions for $\gamma(\lambda/\mu)$ are obtained by removing entries of $\delta_{\lambda/\mu}$ just like the $\delta$-rule. It is easy to check from the definitions that $y_1 \cdots y_k = \pi^{k}\cdot \gamma(\Omega)$, where $\Omega=(n-k)^k$, the partition with $k$ parts of size $n-k$. Now given any $1 \leq i_1 < i_2 < \dots < i_k \leq n$, let $\lambda = (i_k-k, \dots, i_1-1)$. Then $y_{i_1}\cdots y_{i_k}$ has the reduced factorization \begin{equation} \label{e y reduced} y_{i_1}\cdots y_{i_k} = \gamma(\lambda) y_1 \cdots y_k \gamma(\lambda)^{-1} = \gamma(\lambda) \cdot \pi^k \cdot \gamma(\Omega/\lambda). \tag{$\spadesuit$} \end{equation} Therefore \[ \pi^{-k} y_{i_1}\cdots y_{i_k} = \pi^{-k} \gamma(\lambda) \pi^k \cdot \gamma(\Omega/\lambda) = \sigma_{\pi}^{-k}(\gamma(\lambda)) \cdot \gamma(\Omega/\lambda). \] Summing over all $\lambda \subset \Omega$ gives $R_k$. \begin{ex} If $n=5$, then \begin{align*} R_1 &= \mathsf{4321} + \mathsf{0\cdot 432} + \mathsf{10 \cdot 43} + \mathsf{210 \cdot 4} + \mathsf{3210},\\ R_2 &= \mathsf{342312} + \mathsf{0 \cdot 34231} + \mathsf{10 \cdot 3421} + \mathsf{40 \cdot 3423} + \mathsf{210 \cdot 321}\\ & \qquad + \mathsf{410 \cdot 342} + \mathsf{4210 \cdot 32} + \mathsf{0410 \cdot 34} + \mathsf{04210 \cdot 3} + \mathsf{104210},\\ R_3 &= \mathsf{234123} + \mathsf{0 \cdot 23412} + \mathsf{10 \cdot 2312} + \mathsf{40 \cdot 2341} + \mathsf{410 \cdot 231}\\ & \qquad + \mathsf{340 \cdot 234} + \mathsf{0410 \cdot 21} + \mathsf{3410 \cdot 23} + \mathsf{30410 \cdot 2} + \mathsf{430410},\\ R_4 &= \mathsf{1234}+ \mathsf{0 \cdot 123} + \mathsf{40 \cdot 12} + \mathsf{340 \cdot 1} + \mathsf{2340}. \end{align*} See also Example~\ref{ex-a3tilde} for the case $n=4$. 
\end{ex} We now prove the following special case of Proposition~\ref{p elem sym vanish}. \begin{lemma} \label{l elem sym vanish} The following relations hold in $\mathcal E_{{\affineA{n-1}}}$: \begin{align*} R_1&= \sum_{i=0}^{n-1} \mathsf{(i-1)(i-2)\cdots 0(n-1)(n-2)\cdots(i+1)}=0,\\ R_{n-1}&= \sum_{i=0}^{n-1} \mathsf{(i+1)(i+2)\cdots(n-1)01\cdots(i-1)}=0. \end{align*} \end{lemma} \begin{proof} We prove the $R_1$ relation, the $R_{n-1}$ relation being similar. We induct on $n$. For the base case $n=3$, $R_1 = \mathsf{21} + \mathsf{02} + \mathsf{10}$ is one of the quadratic relations of $\mathcal E_{\tilde{A}_2}$. For $n>3$, consider the additional edge $\mathsf{a}$ as drawn in Figure~\ref{fig-cyc}, and let $C$ be the cycle $\mathsf{2}, \mathsf{3}, \dots, \mathsf{n-1}, \mathsf{a}$. For $i=2, 3, \dots, n-1$, let \[X_i = \mathsf{(i-1)(i-2)\cdots 2\cdot a\cdot (n-1)(n-2)\cdots (i+1)}.\] Then by induction, $R_1(C) = \mathsf{(n-1)(n-2) \cdots 2} + \sum_{i=2}^{n-1} X_i=0$. Using the commutation relations and the quadratic relation $\mathsf{0a} + \mathsf{a1} = \mathsf{10}$, \[\mathsf{0}\cdot X_i + X_i \cdot \mathsf{1} = \mathsf{(i-1)\cdots 2}\cdot (\mathsf{0a}+\mathsf{a1}) \cdot \mathsf{(n-1)\cdots (i+1)} = \mathsf{(i-1)\cdots 210(n-1) \cdots (i+1)}.\] It now follows easily that $R_1({\affineA{n-1}})=\mathsf{0} \cdot R_1(C) + R_1(C) \cdot \mathsf{1}=0$, as desired. \end{proof} Using Lemma~\ref{l elem sym vanish}, we can complete the proof of Proposition~\ref{p elem sym vanish}. \begin{proof}[Proof of Proposition \ref{p elem sym vanish}] Lemma \ref{l elem sym vanish} establishes the cases $k=1$ and $k= n-1$. Suppose that $n>3$ and $k \in \{2,\dots, n-2\}$. To prove that $e_k(y_1,\dots, y_n)$ is zero in $\widehat{\mathcal E}_{\affineA{n-1}}$, write \[ e_k(y_1,\dots, y_n) = e_k(y_1, \dots, y_{n-1}) + e_{k-1}(y_1, \dots, y_{n-1})y_n. \] We will show that \begin{equation}\label{e ek split} - e_k(y_1, \dots, y_{n-1}) = e_{k-1}(y_1, \dots, y_{n-1})y_n
\tag{$*$} \end{equation} in $\widehat{\mathcal E}_{{\affineA{n-1}} \cup \{\mathsf{b}\}}$, where $\mathsf{b}$ is the additional edge shown in Figure~\ref{fig-cyc}. Maintain the notation of \eqref{e y reduced} with $\Omega = (n-k)^k$ and $\Omega^L = (n-k-1)^k$. The left side of \eqref{e ek split} contains terms $y_{i_1}\cdots y_{i_k}$ for which $i_k \neq n$. Then $\lambda_1 = i_k-k < n-k$, so $\gamma(\Omega/\lambda) = \mathsf{(n-k)(n-k+1)\cdots (n-1)} \cdot \gamma(\Omega^L/\lambda)$ is a reduced factorization (corresponding to first removing the rightmost column of $\delta_{\Omega/\lambda}$), which yields \begin{equation} \label{e y reduced ik less n} -y_{i_1} \cdots y_{i_k} = -\gamma(\lambda) \pi^k \mathsf{(n-k)(n-k+1)\cdots (n-1)} \gamma(\Omega^L/\lambda).\tag{$**$} \end{equation} Using the $R_k$ relation of the cycle $\mathsf{n-k}, \dots, \mathsf{n-1}, \mathsf{b}$, which is \[ -\mathsf{(n-k)\cdots (n-1)} = \sum_{i=1}^k \mathsf{(n-i+1)\cdots (n-1) \cdot b \cdot (n-k) \cdots (n-i-1)}, \] we can expand the right side of \eqref{e y reduced ik less n} into $k$ monomials. The left side of \eqref{e ek split} then expands into $k\binom{n-1}{k}$ monomials of the form \begin{multline*} \gamma(\lambda) \pi^k \mathsf{(n-i+1) \cdots (n-1) \cdot b \cdot (n-k) \cdots (n-i-1)} \gamma(\Omega^L/\lambda)\\ =\gamma(\lambda) \mathsf{(k-i+1) \cdots (k-1)}\cdot \pi^k \mathsf{ b \cdot (n-k) \cdots (n-i-1)} \gamma(\Omega^L/\lambda), \end{multline*} where $\lambda \subset \Omega^L$ and $1 \leq i \leq k$. But $\gamma(\lambda)\mathsf{(k-i+1)\cdots (k-1)}$ ranges over all minimal coset representatives $\S_{n-1}^J$ for the parabolic subgroup $\S_{k-1} \times \S_1 \times \S_{n-k-1}$ (generated by $J=\{s_1, \dots, s_{k-2}, s_{k+1}, \dots, s_{n-2}\}$) in $\S_{n-1}$: indeed, $\gamma(\lambda)$ are the minimal coset representatives for $\S_k \times \S_{n-k-1}$ in $\S_{n-1}$, while $\mathsf{(k-i+1)\cdots (k-1)}$ are the ones for $\S_{k-1} \times \S_1$ in $\S_k$. 
A similar argument shows that each $\mathsf{(n-k)\cdots (n-i-1)}\gamma(\Omega^L/\lambda)$ is a reduced expression. Hence \[-e_k(y_1, \dots, y_{n-1}) = \sum_{w \in \S_{n-1}^J} w\pi^k\mathsf{b}w',\] where $w'$ is the unique element of $\S_n$ such that the $\S_n$-degree of $w\pi^k \mathsf{b}w'$ is the identity. Similarly, the right side of \eqref{e ek split} contains terms $y_{i_1}\cdots y_{i_k}$ for which $i_k = n$ so that $\lambda_1 = n-k$. Then $\lambda^B=(\lambda_2, \dots, \lambda_k) \subset \Omega^B = (n-k)^{k-1}$, so we find (by removing the top row of $\delta_\lambda$ last) that $\gamma(\lambda) = \gamma(\lambda^B) \cdot \mathsf{(n-1)(n-2) \cdots k}$. Thus \begin{align*} y_{i_1}\cdots y_{i_k} &= \gamma(\lambda^B)\mathsf{(n-1)(n-2)\cdots k}\pi^k \gamma(\Omega^B/\lambda^B)\\ &= \gamma(\lambda^B)\pi^k\mathsf{(n-k-1)(n-k-2)\cdots 0} \gamma(\Omega^B/\lambda^B). \end{align*} Then using the $R_1$ relation of the cycle $\mathsf{0}, \mathsf{1}, \dots, \mathsf{n-k-1}, \mathsf{b}$, the right side of \eqref{e ek split} expands into $(n-k)\binom{n-1}{k-1}$ monomials of the form \begin{multline*} \gamma(\lambda^B)\pi^k \mathsf{(i-1) \cdots 0\cdot b \cdot (n-k-1) \cdots (i+1)}\gamma(\Omega^B/\lambda^B) \\ = \gamma(\lambda^B) \mathsf{(k+i-1) \cdots k} \cdot \pi^k \mathsf{b \cdot(n-k-1) \cdots (i+1)}\gamma(\Omega^B/\lambda^B) \end{multline*} for $\lambda^B \subset \Omega^B$ and $0 \leq i \leq n-k-1$. But as above, $\gamma(\lambda^B) \mathsf{(k+i-1) \cdots k}$ ranges over $\S_{n-1}^J$ since $\gamma(\lambda^B)$ are the minimal coset representatives for $\S_{k-1} \times \S_{n-k}$ in $\S_{n-1}$ while $\mathsf{(k+i-1) \cdots k}$ are the ones for $\S_1 \times \S_{n-k-1}$ in $\S_{n-k}$. Thus as above we have \[e_{k-1}(y_1, \dots, y_{n-1})y_n = \sum_{w \in \S_{n-1}^J} w\pi^k\mathsf{b}w' = -e_k(y_1, \dots, y_{n-1}).\qedhere\] \end{proof} \begin{example} We illustrate the proof of Proposition \ref{p elem sym vanish} in the case $n = 5$, $k=2$.
There, using the relation $\mathsf{210} = \mathsf{10b} + \mathsf{0b2} + \mathsf{b21}$: \begin{align*} \pi^{-2} e_1(y_1, y_2, y_3, y_4)y_5 \quad = & \phantom{{}+{}} \mathsf{210321} + \mathsf{421032} + \mathsf{042103} + \mathsf{104210}\\[1.5mm] =&\phantom{{}+{}}\mathsf{10b321 + 410b32 + 0410b3 + 10410b}\\ &+\mathsf{0b2321 + 40b232 + 040b23 + 1040b2}\\ &+\mathsf{b21321 + 4b2132 + 04b213 + 104b21}. \end{align*} Similarly, using the relation $-\mathsf{34} = \mathsf{4b} + \mathsf{b3}$: \begin{align*} -\pi^{-2}e_2(y_1, y_2, y_3, y_4) \quad = & - \mathsf{342312 - 034231 - 103421 - 403423 - 410342 - 041034}\\[1.5mm] =&\phantom{{}+{}}\mathsf{4b2312 + 04b231 + 104b21 + 404b23 + 1404b2 + 04104b}\\ &+ \mathsf{b32312 + 0b3231 + 10b321 + 40b323 + 140b32 + 0410b3}\\[1.5mm] =&\phantom{{}+{}}\mathsf{4b2132 + 04b213 + 104b21 + 040b23 + 1040b2 + 10410b}\\ &+ \mathsf{b21321 + 0b2321 + 10b321 + 40b232 + 410b32 + 0410b3}. \end{align*} The last equality uses only braid and commutation relations. We see directly that the twelve terms appearing in the two expansions are the same. \end{example} \subsection{Primitive elements} Let $I_0 \subset \widehat\mathcal N_n$ be the (two-sided) ideal generated by the relations $R_k$, or equivalently by $e_k(y_1, \dots, y_n)$. By Proposition~\ref{p elem sym vanish}, $\widehat\mathcal E_{\affineA{n-1}}$ is a quotient of $\widehat \mathcal N_n/I_0$. To show that they are isomorphic, we first describe a basis of $\widehat \mathcal N_n/I_0$ in terms of a subset of $\widehat\S_n$ which we call \emph{primitive elements}, previously studied in \cite{LJantzen, Xi, SW, BlasiakFactor, BlasiakCyclage}. These primitive elements will turn out to form a set of minimal coset representatives of $\mathcal N_n = \mathcal E_{A_n}$ in $\widehat\mathcal E_{\affineA{n-1}}$. We will work with a geometric description of primitive elements from \cite{LJantzen}.
Let a \emph{box} be a connected component of $H - \bigcup_{i \in [n-1], k \in \mathbb Z} h_{\alpha_i - k\delta}$. We denote by $\mathbf{B}_0$ the box containing $\mathbf{A}_0$, which lies between the hyperplanes $h_{\alpha_i}$ and $h_{\alpha_i - \delta}$ for $i \in [n-1]$. An element $w$ in $\ensuremath{\widehat \S_n}$ is \emph{primitive} if $w^{-1}(\mathbf{A}_0) \subseteq \mathbf{B}_0$. Let $D^S$ denote the set of primitive elements of $\ensuremath{\widehat \S_n}$. Note that since $\mathbf B_0$ is contained in the fundamental chamber $\mathbf C_0$, every primitive element is an element of $\widehat\S_n^S$, that is, a (left) minimal coset representative of $\S_n$ in $\widehat\S_n$. The elements of $D^S$ are in bijection with elements of $\S_n$: for any $w \in \S_n$, there is a unique $y^\lambda$ such that $y^\lambda w \in D^S$. Also, since $\pi$ stabilizes $\mathbf A_0$, $v \in D^S$ if and only if $\pi v \in D^S$. \begin{example}\label{ex SL3} The primitive elements of $\widehat{\S}_3$, expressed as products of simple reflections (top line) and via the decomposition $\widehat{\S}_3 = Y\rtimes \S_3$ (bottom line), are: \[ { \setlength\arraycolsep{15pt} \begin{array}{cccccc} \mathrm{id} & \pi & \pi^2 & \pi s_0& \pi^2 s_0 & \pi^3 s_0\\ \mathrm{id}&y_1s_1s_2&y_1y_2s_2s_1&y_2s_2&y_1y_3s_1&y_1^2y_2s_1s_2s_1 \end{array} } \] \end{example} \subsection{Spanning} We will now show how to construct a spanning set of $\widehat \mathcal N_n/I_0$ from the primitive elements. We will see in Theorem~\ref{t cycle} that it is actually a basis. We will first need the following lemma about reduced factorizations. \begin{lemma} \label{lemma-redfac} Suppose $\lambda \in Y$ is a dominant weight and $w \in \widehat\S_n^S$. Then $y^\lambda \cdot w^{-1}$ is a reduced factorization. \end{lemma} \begin{proof} It suffices to show that no hyperplane that separates $y^{-\lambda}(\mathbf A_0)$ and $\mathbf A_0$ also separates $\mathbf A_0$ and $w^{-1}(\mathbf A_0)$.
Since $(-\lambda, \alpha) \leq 0$ for any $\alpha \in \Phi^+$, any hyperplane separating $y^{-\lambda}(\mathbf A_0)$ from $\mathbf A_0$ has the form $h_{\alpha-k\delta}$, where $\alpha\in \Phi^+$ and $k \leq 0$. Likewise, $w^{-1}(\mathbf A_0)$ lies in $\mathbf C_0$, so any hyperplane separating $\mathbf A_0$ and $w^{-1}(\mathbf A_0)$ has the form $h_{\alpha-k\delta}$ for some $\alpha \in \Phi^+$ and $k>0$. \end{proof} Let $\mathcal N_n^d$ be the degree $d$ part of $\mathcal N_n$, and let $\mathcal N_n^+ = \bigoplus_{d>0} \mathcal N_n^d$. \begin{lemma} \label{lemma-primitive span} If $x \in \widehat\S_n^S$ but $x \not\in D^S$, then $x \in I_0+\widehat\mathcal N_n\mathcal N_n^+$. \end{lemma} \begin{proof} By our choice of $x$, $x^{-1}(\mathbf A_0)$ lies in a box $\mathbf B = y^\lambda(\mathbf B_0) \neq \mathbf B_0$ contained in the fundamental chamber. Hence by Lemma~\ref{lemma-redfac}, $x^{-1}$ has a reduced factorization of the form $y^\lambda\cdot v^{-1}$, where $\lambda$ is a nonzero dominant weight and $v \in D^S$. Thus it suffices to show that the image of $y^{\lambda}$ lies in $I_0 + \mathcal N_n^+\widehat\mathcal N_n$: indeed, since $\operatorname{rev}(R_k) = R_{n-k}$, the anti-automorphism $\operatorname{rev}$ preserves $I_0$, so this will give $x = \operatorname{rev}(x^{-1}) = v \cdot \operatorname{rev}(y^\lambda) \in I_0 + \widehat\mathcal N_n\mathcal N_n^+$. Choose any $k$ such that $\lambda_k>\lambda_{k+1}$. The only term of $e_k(y_1, \dots, y_n)$ that corresponds to a dominant weight is $y^\mu = y_1\cdots y_k$; hence all other terms lie in $\mathcal N_n^+\widehat\mathcal N_n$ and so $y^\mu \in I_0 + \mathcal N_n^+\widehat\mathcal N_n$. By Lemma~\ref{lemma-redfac}, $y^{\lambda} = y^\mu \cdot y^{\lambda-\mu}$ is a reduced factorization, so the result follows. \end{proof} \begin{prop} \label{prop-primitive span} The image of $\{vw \mid v \in D^S, w \in \mathcal N_n\}$ spans $\widehat\mathcal N_n/I_0$. \end{prop} \begin{proof} Since $\{xw \mid x \in \widehat\S_n^S, w \in \mathcal N_n\}$ is a basis for $\widehat\mathcal N_n$, it spans $\widehat\mathcal N_n/I_0$. Suppose $w \in \mathcal N_n^{d}$.
By Lemma~\ref{lemma-primitive span}, if $x \not \in D^S$, then $xw \in I_0 + \widehat\mathcal N_n\mathcal N_n^+w \subset I_0 + \widehat\mathcal N_n\mathcal N_n^{d+1}$. Hence $\widehat\mathcal N_n\mathcal N_n^d / \widehat\mathcal N_n\mathcal N_n^{d+1}$ is spanned by $\{vw \mid v \in D^S, w \in \mathcal N_n^d\}$ and elements of $I_0$. (In particular, $\widehat\mathcal N_n\mathcal N_n^{\ell(w_0)}$ is spanned by $\{vw_0 \mid v \in D^S\}$.) The desired result then follows from a straightforward induction on $\ell(w_0)-d$. \end{proof} \subsection{Linear independence} In this section, we will prove that the images of the primitive elements under $\widehat\Theta$ are linearly independent as (left) minimal coset representatives of $\mathcal E_{A_n} = \mathcal N_n$ in $\widehat \mathcal E_{\affineA{n-1}}$ by computing appropriate evaluations of the bilinear form $\langle \cdot, \cdot \rangle$. First we recall some facts about Bruhat order in $\widetilde\S_n$. A product of simple reflections $s_{i_1}s_{i_2}\cdots s_{i_d}$ can be visualized by the alcove walk (see, e.g., \cite{Ram}) \[ \mathbf D_0 = \mathbf{A}_0,\quad \mathbf D_1 = s_{i_1}(\mathbf{A}_0),\quad \mathbf D_2 = s_{i_1}s_{i_2}(\mathbf{A}_0),\quad\dots,\quad \mathbf D_d = s_{i_1}s_{i_2}\cdots s_{i_d}(\mathbf{A}_0). \] Here $\mathbf D_j$ is the reflection of $\mathbf D_{j-1}$ across one of its facets, namely the one contained in the hyperplane $h_{\beta^j}= s_{i_1}\cdots s_{i_{j-1}}(h_{\alpha_{i_j}})$. Suppose we omit a simple reflection $s_{i_j}$ from the product $s_{i_1}\cdots s_{i_d}$, giving the product $s_{i_1}\cdots \widehat{s_{i_j}}\cdots s_{i_d}$. Then the resulting alcove walk is obtained from that of $s_{i_1}\cdots s_{i_d}$ by reflecting the second part of the walk across $h_{\beta^j}$. More precisely, its alcove walk is \[ \mathbf{D}_0, \dots,\,\mathbf{D}_{j-1},\, s_{\beta^j}(\mathbf{D}_{j+1}),\,\dots,\, s_{\beta^j}(\mathbf{D}_d). 
\] It is well known that $s_{i_1}\cdots s_{i_d}$ is a reduced expression if and only if its alcove walk never crosses a given hyperplane more than once. We claim that \begin{equation}\label{etext crossed twice} \parbox{14cm}{ If $s_{i_1}\cdots s_{i_d}$ and $s_{i_1}\cdots \widehat{s_{i_j}}\cdots s_{i_d}$ are reduced expressions, then $h_{\beta^j}$ is either the first or last hyperplane parallel to $h_{\beta^j}$ crossed in the alcove walk of $s_{i_1}\cdots s_{i_d}$. } \tag{$\diamondsuit$} \end{equation} If not, then the alcove walk of $s_{i_1}\cdots s_{i_d}$ crosses $h_{\beta^j+\delta}$ and $h_{\beta^j-\delta}$. But then the alcove walk of $s_{i_1}\cdots \widehat{s_{i_j}}\cdots s_{i_d}$ crosses one of them twice (since $s_{\beta^j}(h_{\beta^j+\delta}) = h_{\beta^j-\delta}$). \begin{lemma}\label{l rho pairing} Suppose $w = p_1p_2\cdots p_d \in \widetilde\S_n$ is a reduced expression with $p_1\cdots p_{\ell(w_0)} = w_0$ and $w^{-1}w_0 = p_dp_{d-1} \cdots p_{\ell(w_0)+1}$ primitive. If $j \in [d]$ is an index such that $p_1\cdots \widehat{p_j} \cdots p_d$ is a reduced expression and $p_j = \sigma_{p_{j+1}\cdots p_d}(p_d)$, then $j=d$. \end{lemma} \begin{proof} Since $p_j = \sigma_{p_{j+1}\cdots p_d}(p_d)$, it follows that $p_j \cdots p_{d-1}$ and $p_{j+1} \cdots p_{d}$ have the same $\S_n$-degree. Hence $p_1 \cdots \widehat{p_j} \cdots p_d$ and $p_1 \cdots p_{d-1}$ have the same $\S_n$-degree, so they differ by a translation in $\widetilde \S_n$. Defining $\beta^j$ as in the discussion above, we have that $p_1 \cdots \widehat{p_j} \cdots p_d = s_{\beta^j}w$ and $p_1 \cdots p_{d-1} = s_{\beta^d}w$, so the hyperplanes $h_{\beta^j}$ and $h_{\beta^d}$ are parallel. By \eqref{etext crossed twice} and the assumptions of the lemma, $h_{\beta^j}$ must be the first or last hyperplane parallel to $h_{\beta^j}$ crossed in the alcove walk of $p_1 p_2\cdots p_d$. If it is the last, then $d=j$ and we are done, so assume it is the first. 
Since $p_1\cdots p_{\ell(w_0)}$ is a reduced expression for $w_0$, the first $\ell(w_0)$ hyperplanes crossed in the alcove walk of $p_1\cdots p_d$ contain all $\ell(w_0)$ different hyperplane directions. Therefore, $j\leq \ell(w_0)$. Set $y=p_1\cdots \widehat{p_j}\cdots p_{\ell(w_0)}$; this expression is reduced, being a prefix of the reduced expression $p_1\cdots \widehat{p_j}\cdots p_d$. Therefore $\ell(y)=\ell(w_0)-1$, so it follows that $y= s_cw_0$ for some simple reflection $s_c \in \S_n$. Hence $s_{\beta^j}w = p_1 \cdots \widehat{p_j} \cdots p_d = s_cw$, so $\beta^j = \alpha_c$. Since $w^{-1}w_0$ is primitive, $w_0w(\mathbf A_0) \subset \mathbf B_0$, or equivalently $w(\mathbf A_0) \subset w_0(\mathbf B_0)$. The region $w_0(\mathbf{B}_0)$ is also a box and lies between $h_{\alpha_i}$ and $h_{\alpha_i+\delta}$ for $i\in [n-1]$. We conclude that the only hyperplane separating $\mathbf{A}_0$ and $w(\mathbf{A}_0)$ and parallel to $h_{\alpha_c}$ is $h_{\alpha_c}$ itself. Since $h_{\beta^d}$ is parallel to $h_{\beta^j} = h_{\alpha_c}$ and is crossed in the walk from $\mathbf{A}_0$ to $w(\mathbf{A}_0)$, we must have $h_{\beta^d} = h_{\alpha_c} = h_{\beta^j}$; as a reduced walk crosses each hyperplane at most once, $j=d$, as desired. \end{proof} We can now prove the following result along the same lines as the proof of Theorem~\ref{thm-a}. For $w, w' \in \widetilde \mathcal N_n$, write $\langle w, w' \rangle = \langle \widetilde\Theta(w), \widetilde\Theta(w') \rangle$. \begin{prop}\label{p primitive pairing} If $v \in \ensuremath{\widetilde{\mathcal N}}_n$ is primitive, then $\langle w_0\operatorname{rev} (v), vw_0\rangle=1$. \end{prop} \begin{proof} First note that if $p_1\cdots p_d$ is not a reduced expression in $\ensuremath{\widetilde{\S}_n}$, then $\langle p_1\cdots p_d, Q\rangle=0$ for all $Q\in \mathcal E_n$. (This is because $p_1\cdots p_d = 0$ in $\widetilde\mathcal N_n$ and hence also in $\mathcal E_{\affineA{n-1}}$.) Let $p_1\cdots p_d=w_0\operatorname{rev}(v)$ as in Lemma \ref{l rho pairing}. If $v = \mathrm{id}$, then $\langle w_0, w_0 \rangle = 1$ by the proof of Theorem~\ref{thm-a}.
Otherwise, by Lemma \ref{lemma-leibniz}, Proposition \ref{prop-pairing}, and Lemma \ref{l rho pairing}, \begin{align*} \langle p_1\cdots p_d,\;p_d \cdots p_1\rangle &=\sum_{j=1}^d (p_j) (\sigma_{p_{j+1} \cdots p_d}\nabla_{p_d}) \cdot \langle p_1 \cdots \widehat{p_j} \cdots p_d,\; p_{d-1}p_{d-2}\cdots p_1 \rangle\\ &=\langle p_1\cdots p_{d-1},\; p_{d-1} \cdots p_1\rangle. \end{align*} But $p_1 \cdots p_{d-1} = w_0\operatorname{rev} (v')$ for some primitive element $v'$: since $v$ is primitive, the alcove walk of $\operatorname{rev}(v)$ does not leave $\mathbf B_0$, so neither does that of $\operatorname{rev}(v')$. The result then follows by induction. \end{proof} It is now straightforward to prove that the primitive elements satisfy an appropriate linear independence property. \begin{prop} \label{prop-primitive ind} The images of the primitive elements $D^S \subset \widehat\mathcal N_n$ under $\widehat\Theta$ form a subset of the (left) minimal coset representatives of $\mathcal N_n = \mathcal E_{A_n}$ in $\widehat\mathcal E_{\affineA{n-1}}$, i.e., they are linearly independent modulo $\widehat\mathcal E_{\affineA{n-1}} \mathcal E_{A_n}^+$. \end{prop} \begin{proof} We need to show that if $\sum_{v \in D^S} c_v\widehat\Theta(v) \in \widehat\mathcal E_{\affineA{n-1}} \mathcal E_{A_n}^+$ for some $c_v \in \mathbb Q$, then all the $c_v$ vanish. We may assume that all $v$ lie in $\widetilde\mathcal N_n$. Multiplying the above expression by $w_0$, we have that \begin{equation}\label{e linear ind FK} \sum_{ v\in D^S} c_{v} \widehat{\Theta} (v w_0) =0. \tag{$\dagger$} \end{equation} Since all elements of $D^S$ have different $\S_n$-degree, $\langle w_0 \operatorname{rev}(v'), vw_0 \rangle=0$ unless $v=v'$, in which case it equals 1 by Proposition~\ref{p primitive pairing}. Thus pairing \eqref{e linear ind FK} with $\widehat\Theta(w_0 \operatorname{rev}(v))$ gives $c_v=0$.
\end{proof} \subsection{Proof of main theorem} Combining the results of the previous sections, we can now prove Theorem~\ref{thm-cyc}. It is an immediate consequence of the following more explicit result. \begin{thm} \label{t cycle} The algebra $\widehat\mathcal E_{\affineA{n-1}}$ is isomorphic to $\widehat\mathcal N_n/I_0$, where $I_0$ is the ideal generated by $e_k(y_1, \dots, y_n)$ for $k=1, \dots, n-1$. Moreover, $D^S$ is a set of (left) minimal coset representatives for $\mathcal N_n = \mathcal E_{A_{n}}$ in $\widehat\mathcal E_{\affineA{n-1}}$. \end{thm} \begin{proof} By Proposition~\ref{p elem sym vanish}, $\widehat\mathcal E_{\affineA{n-1}}$ is a quotient of $\widehat\mathcal N_n/I_0$. By Proposition~\ref{prop-primitive span}, $\{vw \mid v \in D^S, w \in \mathcal N_n\}$ spans $\widehat\mathcal N_n/I_0$, and by Proposition~\ref{prop-primitive ind} and Theorem~\ref{thm-subgraph}, its image is linearly independent in $\widehat\mathcal E_{\affineA{n-1}}$. It follows that it must be a basis of both. \end{proof} \begin{cor} The Hilbert series of $\mathcal E_{\affineA{n-1}}$ is given by \[ \mathcal{H}_{\affineA{n-1}}(t)=[n]\cdot\prod^{n-1}_{i=1}[i(n-i)]. \] \end{cor} \begin{proof} By Remark 1.5 and Lemma 2.2 of \cite{SW}, the length generating function for the subset $D^S\subset \ensuremath{\widehat{\mathcal N}}_n$ is given by \[ \sum_{v\in D^S }t^{\ell(v)}= n\cdot \prod_{i=1}^{n-1}\frac{1-t^{i(n-i)}}{1-t^{i}}. \] Hence by Theorem \ref{t cycle}, the Hilbert series of $\widehat{\mathcal E}_{\affineA{n-1}}$ is given by \[ \sum_{\substack{v \in D^S\\ w\in\ensuremath{\mathcal N}_n}} t^{\ell(v w)} = [n]!\cdot n\cdot \prod_{i=1}^{n-1}\frac{1-t^{i(n-i)}}{1-t^i} =n\cdot\prod_{i=1}^{n}\frac{1-t^i}{1-t}\cdot \prod_{i=1}^{n-1}\frac{1-t^{i(n-i)}}{1-t^i} =n\cdot [n]\cdot\prod_{i=1}^{n-1}[i(n-i)]. \] Since the Hilbert series of $\widehat{\mathcal E}_{\affineA{n-1}}$ is equal to $n$ times the Hilbert series of $\mathcal E_{\affineA{n-1}}$, the desired result follows. 
\end{proof} \begin{rmk}\label{r coinvariants} Let $I$ be the ideal of $\mathbb Q[y_1, \dots, y_n]$ generated by symmetric polynomials of positive degree. The quotient of $\mathbb Q [y_1, y_2, \dots, y_n]$ by $I$ is called the \emph{ring of coinvariants} and is known to have dimension $n!$. The reference \cite{BlasiakCyclage} identifies a subalgebra $\pH_n$ of the extended affine Hecke algebra $\eH_n$ of type $A$. The subalgebra $\pH_n$ is a $q$-analogue of $\mathbb Q\S_n \ltimes \mathbb Q [y_1,\dots,y_n]$ and inherits a canonical basis from that of $\eH_n$. Let $\mathcal{I}$ denote the two-sided ideal of $\pH_n$ generated by the symmetric polynomials of positive degree in the Bernstein generators. Corollary $6.7$ of \cite{BlasiakCyclage} states that the $\pH_n$-module $\pH_n \ensuremath{C^{\prime}}_{w_0}/ \mathcal{I}\ensuremath{C^{\prime}}_{w_0}$ has canonical basis $\{\ensuremath{C^{\prime}}_{vw_0} \mid v\in D^S\}$, where $\ensuremath{C^{\prime}}_w$ denotes the canonical basis element labeled by $w\in\ensuremath{\widehat \S_n}$. The basis $\{vw_0 \mid v\in D^S\}$ of $\ensuremath{\widehat{\mathcal N}}_n w_0/I_0 w_0$ is essentially the $q=0$ specialization of this canonical basis. (It is not exactly the $q=0$ specialization because the canonical basis of \cite{BlasiakCyclage} is for the $G=GL_n$ extended version of the affine Weyl group, whereas here we have used the $G=SL_n$ version.) \end{rmk} \section{Questions and conjectures} We conclude with some questions and conjectures to guide further research. \begin{question} What is the Hilbert series $\mathcal H_G(t)$ of $\mathcal E_G$? \end{question} Towards this end, we have a conjecture for the Hilbert series of the affine Dynkin diagrams $\tilde{D}_n$ as shown in Figure~\ref{fig-dntilde}. This conjecture was obtained by using Algorithm~\ref{alg-mcr} to compute minimal coset representatives for $\mathcal E_{D_n}$ in $\mathcal E_{\tilde D_n}$. Here, $[n]!! = [n][n-2][n-4]\cdots$. 
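The polynomial identity used in the proof of the corollary above, $[n]!\cdot n\cdot\prod_{i=1}^{n-1}\frac{1-t^{i(n-i)}}{1-t^i} = n\cdot[n]\cdot\prod_{i=1}^{n-1}[i(n-i)]$, can also be checked mechanically for small $n$; a sketch in Python, with coefficient-list polynomials and function names of our own choosing:

```python
def pmul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def bracket(k):
    """[k] = 1 + t + ... + t^(k-1)."""
    return [1] * k

def series(i, m):
    """(1 - t^(i*m)) / (1 - t^i) = 1 + t^i + ... + t^(i*(m-1))."""
    p = [0] * ((m - 1) * i + 1)
    for j in range(m):
        p[j * i] = 1
    return p

for n in range(3, 8):
    lhs = [n]                              # overall factor n
    for i in range(1, n + 1):              # times [n]! = [1][2]...[n]
        lhs = pmul(lhs, bracket(i))
    for i in range(1, n):                  # times prod (1-t^{i(n-i)})/(1-t^i)
        lhs = pmul(lhs, series(i, n - i))
    rhs = pmul([n], bracket(n))            # n * [n]
    for i in range(1, n):                  # times prod [i(n-i)]
        rhs = pmul(rhs, bracket(i * (n - i)))
    assert lhs == rhs

# At t = 1 the Hilbert series gives dim E_{A~_{n-1}} = n * prod i(n-i):
# 12 for n = 3 and 144 for n = 4, matching the 3-cycle and 4-cycle
# rows of the appendix table.
assert sum(pmul(bracket(3), pmul(bracket(2), bracket(2)))) == 12
```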
\begin{figure} \begin{center} \begin{tikzpicture} \draw (150:1)node[v]{}--(0,0)node[v]{}--(1,0)node[v]{}--(1.5,0) (-150:1)node[v]{}--(0,0); \draw[thick, loosely dotted] (1.5,0)--(2.5,0); \draw (2.5,0)--(3,0)node[v]{}--(4,0)node[v]{}--+(30:1)node[v]{} (4,0)--+(-30:1)node[v]{}; \end{tikzpicture} \end{center} \caption{\label{fig-dntilde}The affine Dynkin diagram $\tilde D_n$ with $n+1$ vertices.} \end{figure} \begin{conj} \label{conj-dtilde} The Hilbert series for $\mathcal E_{\tilde{D}_n}$ is \[ \frac{[2n-2]!!}{[2n-3]!!} \cdot \frac{[n][n+1]}{[2]^2[n-1][n-2]} \cdot \left[\frac{n^2-n}{2}\right]^2 \cdot \left[\frac{n^2-n-2}{2}\right]^2 \cdot \prod_{i=1}^{n-3} [i(2n-i-1)]. \] In particular, $\dim \mathcal E_{\tilde D_n} = ((n+1)!)^2\cdot 2^{2n-8}$, and the top degree of $\mathcal E_{\tilde D_n}$ is $\frac13n(n-1)(2n-1)$. \end{conj} For small values of $n$ (including $n=3$, for which $\tilde{D}_3=\tilde{A}_3$), this gives the following: \begin{align*} n=3&&&[3]^2[4]^2\\ n=4&&&\frac{[4]^2[5]^2[6]^4}{[2]^2[3]^2}\\ n=5&&&\frac{[6]^2[8]^2[9]^2[10]^2[14]}{[2][3]^2[7]}\\ n=6&&&\frac{[6]^2[8][10]^2[14]^2[15]^2[18][24]}{[2][3][5]^2[9]}\\ n=7&&&\frac{[4][8]^2[10][12]^2[20]^2[21]^2[22][30][36]}{[2][3][5]^2[9][11]} \end{align*} \begin{figure} \begin{center} \dr{(-2,0)node[v]{}--(-1,0)node[v]{}--(0,0)node[v]{}--(1,0)node[v]{}--(2,0)node[v]{} (0,0)--(0,1)node[v]{}--(0,2)node[v]{}} \qquad\quad \dr{(-3,0)node[v]{}--(-2,0)node[v]{}--(-1,0)node[v]{}--(0,0)node[v]{}--(1,0)node[v]{}--(2,0)node[v]{}--(3,0)node[v]{} (0,0)--(0,1)node[v]{} (0,2)node{}}\\[5mm] \dr{(-2,0)node[v]{}--(-1,0)node[v]{}--(0,0)node[v]{}--(1,0)node[v]{}--(2,0)node[v]{}--(3,0)node[v]{} --(4,0)node[v]{}--(5,0)node[v]{} (0,0)--(0,1)node[v]{}} \end{center} \caption{\label{fig-entilde}The affine Dynkin diagrams $\tilde E_6$, $\tilde E_7$, and $\tilde E_8$.} \end{figure} We also have conjectures regarding the affine Dynkin diagrams $\tilde{E}_6$ and $\tilde{E}_7$, as shown in Figure~\ref{fig-entilde}.
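As a numerical consistency check on the $\tilde D_n$ conjecture, one can evaluate the conjectured product at $t=1$, where each $[k]$ contributes a factor $k$ and $[2n-2]!!/[2n-3]!!$ contributes $(2n-2)!!/(2n-3)!!$. A sketch in Python (helper names are ours); the resulting values match the appendix table for $\tilde D_3 = \tilde A_3$ (the $4$-cycle, $144$) and for $\tilde D_4$ (the four-edge star, $14400$), and for $3 \le n \le 7$ they all equal $((n+1)!)^2\cdot 2^{2n-8}$:

```python
from fractions import Fraction
from math import prod, factorial

def double_factorial(k):
    return prod(range(k, 0, -2))

def dim_D_tilde(n):
    """Value at t = 1 of the conjectured Hilbert series of E_{D~_n}."""
    v = Fraction(double_factorial(2 * n - 2), double_factorial(2 * n - 3))
    v *= Fraction(n * (n + 1), 4 * (n - 1) * (n - 2))  # [n][n+1]/([2]^2[n-1][n-2])
    v *= ((n * n - n) // 2) ** 2 * ((n * n - n - 2) // 2) ** 2
    v *= prod(i * (2 * n - i - 1) for i in range(1, n - 2))
    return v

assert dim_D_tilde(3) == 144        # [3]^2[4]^2 at t = 1: the 4-cycle
assert dim_D_tilde(4) == 14400      # the D~4 star row in the appendix table
assert dim_D_tilde(5) == 2073600    # the displayed n = 5 expansion at t = 1
assert all(dim_D_tilde(n) == Fraction(factorial(n + 1) ** 2 * 4 ** n, 256)
           for n in range(3, 8))    # = ((n+1)!)^2 * 2^(2n-8)
```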
\begin{conj} \label{conj-etilde} The Hilbert series for $\mathcal E_{\tilde{E}_6}$ and $\mathcal E_{\tilde{E}_7}$ are: \begin{align*} \mathcal H_{\tilde{E}_6}(t) &= \frac{[6][9][12][14]^2[16]^2[21][22][30]^2}{[3]^2[4][7][11]},\\ \mathcal H_{\tilde{E}_7}(t) &= \frac{[6][8][10][12][14][18][24][27][32][34][48][49][52][66][75]}{[3][4][5][7][9][11][13][17]}. \end{align*} \end{conj} As for the remaining affine Dynkin diagram, $\mathcal E_{\tilde{E}_8}$ appears finite-dimensional but too large for us to confidently infer a possible Hilbert series. Even more basic is the following question. \begin{question} For which graphs $G$ is $\mathcal E_G$ finite-dimensional? \end{question} While $\mathcal E_G$ is finite-dimensional for all graphs $G$ on at most five vertices, it appears that some graphs on six vertices may have $\mathcal E_G$ infinite-dimensional. Although we are not yet able to prove that any $\mathcal E_G$ is infinite-dimensional, our computations do suggest the following conjecture. \begin{conj} \label{conj-infinite} Let $G$ be a graph on six vertices. Then $\mathcal E_G$ is infinite-dimensional if and only if it contains a subgraph isomorphic to one of the graphs shown in Figure~\ref{fig-infinite}.
\end{conj} \begin{figure} \begin{center} \dr{(0,0)node[v]{}--(90:1)node[v]{} (0,0)--(18:1)node[v]{} (0,0)--(162:1)node[v]{} (0,0)--(234:1)node[v]{} (0,0)--(306:1)node[v]{}} \quad\quad\quad \dr{(1,0)node[v]{}--(0,0)node[v]{}--(-1,0)node[v]{}--(-1,-1)node[v]{}--(0,-1)node[v]{}--(0,0)--(0,1)node[v]{}} \quad\quad\quad \dr{(1,0)node[v]{}--(60:1)node[v]{}--(120:1)node[v]{}--(180:1)node[v]{}--(240:1)node[v]{}--(300:1)node[v]{}--(1,0)--(-1,0)} \quad\quad\quad \dr{(1,0)node[v]{}--(60:1)node[v]{}--(120:1)node[v]{}--(180:1)node[v]{}--(240:1)node[v]{}--(300:1)node[v]{}--(1,0) (60:1)--(-60:1) (120:1)--(-120:1)}\\[.5cm] \dr{(18:1)node[v]{}--(90:1)node[v]{}--(162:1)node[v]{}--(234:1)node[v]{}--(306:1)node[v]{}--(18:1)--(0,0)node[v]{}--(162:1)}\quad\quad\quad \dr{(0,0)node[v]{}--(1,0)node[v]{}--(2.5,0)node[v]{}--(1.75,.8)node[v]{}--(1,0)--(1.75,-.8)node[v]{}--(2.5,0)--(3.5,0)node[v]{}}\quad\quad\quad \dr{(-1,0)node[v]{}--(0,0)node[v]{}--(1,1)node[v]{}--(2,0)node[v]{}--(1,-1)node[v]{}--(0,0) (1,1)--(1,0)node[v]{}--(1,-1)} \end{center} \caption{\label{fig-infinite}Minimal graphs $G$ on six vertices for which $\mathcal E_G$ appears to be infinite-dimensional. See Conjecture~\ref{conj-infinite}.} \end{figure} In particular, this would imply that $\mathcal E_6$ is infinite-dimensional. When $\mathcal E_G$ is finite-dimensional, it appears that its Hilbert series is especially nice. \begin{conj} If $\mathcal E_G$ is finite-dimensional, then $\mathcal H_G(t)$ is a product of cyclotomic polynomials. \end{conj} This holds for all Hilbert series we have been able to compute, and it also appears plausible for those Hilbert series that we cannot compute in full. Note that the corresponding statement is also true for all finite Coxeter groups. In Section~\ref{sec-coxeter}, we discussed many similarities between Coxeter groups and Fomin-Kirillov algebras. \begin{question} What other results about Coxeter groups have analogues for Fomin-Kirillov algebras? 
Are there corresponding notions to reflections/root systems/etc.? \end{question} We have studied the cases when $G$ is a Dynkin diagram of finite type because $\mathcal E_G$ appears to be relatively simple in these cases. However, despite results such as Theorem~\ref{thm-weyl}, we know of no direct explanation for the apparent connection between $\mathcal E_G$ and the corresponding Weyl group in these cases. \begin{question} Is there a uniform explanation for the structure of $\mathcal E_G$ when $G$ is a (simply-laced) Dynkin diagram of finite type? What if $G$ is an affine Dynkin diagram? \end{question} Our next question is related to the discussion in Section~\ref{sec-complementary}. By Corollary~\ref{cor-complementary}, there is a large class of graphs for which $\mathcal E_{G_1} \otimes \mathcal E_{G_2} \cong \mathcal E_n$, but by Proposition~\ref{prop-counter}, this does not hold for all pairs of complementary graphs. \begin{question} For which complementary graphs $G_1$ and $G_2$ is it true that $\mathcal E_{G_1} \otimes \mathcal E_{G_2} \cong \mathcal E_n$? In general, is the multiplication map $\mathcal E_{G_1} \otimes \mathcal E_{G_2} \to \mathcal E_n$ always injective? What is the structure of $\mathcal E_n$ as an $\mathcal E_{G_1}$-$\mathcal E_{G_2}$-bimodule? \end{question} Our next set of questions concerns relations in $\mathcal E_G$. \begin{question} Is there a straightforward description of the relations of $\mathcal E_G$? \end{question} In all of the examples we have discussed above, the minimal relations have a very special form: their $\S_n$-degrees are always automorphisms of the support of the relation (the graph containing the edges that appear in the relation). \begin{question} Must the $\S_n$-degree of a minimal relation be an automorphism of the support of the relation? 
\end{question} Also for many of the examples considered above, the minimal coset representatives of $\mathcal E_H$ inside $\mathcal E_G$ are relatively simple: the choice of minimal coset representatives is unique up to scalar factors (if we stipulate that they be representable by monomials). This allows us to describe a poset structure on the minimal coset representatives using the analogue of weak order. Then Theorem~\ref{thm-subgraph} suggests the following question. \begin{question} For which graphs $H \subset G$ does $\mathcal E_G/\mathcal E_H^+\mathcal E_G$ have a unique monomial basis (up to scalar factors)? \end{question} Finally, recall Conjecture~\ref{conj-nichols}. \begin{conj-nichols} \cite{MS} The bilinear form $\langle\cdot,\cdot\rangle$ is nondegenerate on $\mathcal E_n$. \end{conj-nichols} The quotient of $\mathcal E_n$ by the kernel of this bilinear form is a certain type of braided Hopf algebra called a \emph{Nichols algebra} (see \cite{AS}). Nichols algebras exist in much greater generality than the examples just mentioned. For instance, the analogues of Fomin-Kirillov algebras for types other than $A$ as defined in \cite{KirillovMaeno} can be described in terms of Nichols algebras \cite{Bazlov}. As such, it would be interesting to investigate the following question. \begin{question} To what extent can one extend the results of this paper to Fomin-Kirillov algebras of other types or more general Nichols algebras? \end{question} \section{Acknowledgments} The authors would like to thank John Stembridge for his valuable input throughout the duration of this research, as well as Thomas Lam and Sergey Fomin for interesting discussions. \section{Appendix} Here we give the Hilbert series $\mathcal H_G(t)$ for any connected graph on at most five vertices along with its top degree and total dimension. Note that $[k] = 1+t+t^2+\dots+t^{k-1}$. 
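The dimension column in the table below is simply each product evaluated at $t=1$, where each $[k]$ contributes a factor $k$. A quick check of a few rows in Python (the encoding of a product as $(k, \text{exponent})$ pairs is ours):

```python
from fractions import Fraction

def q_dim(factors):
    """Evaluate a product of brackets [k]^e at t = 1, where [k] -> k."""
    v = Fraction(1)
    for k, e in factors:
        v *= Fraction(k) ** e
    return v

assert q_dim([(2, 1), (3, 1), (4, 1)]) == 24                      # the path on 4 vertices
assert q_dim([(2, -2), (3, -2), (4, 2), (5, 2), (6, 4)]) == 14400 # the 4-edge star
assert q_dim([(2, -1), (4, 4), (5, 2), (6, 4)]) == 4147200        # the 4147200 row
```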
\bigskip \tikzset{every picture/.append style={scale=0.6}} \begin{longtable}{c>{\qquad$}c<{$\qquad}cc} $G$&\mbox{Hilb}&deg&dim\\ \hline\hline \dr{(0,0) node[v]{} -- (1,0)node[v]{}} &[2]&1&2\\ \hline \dr{(0,0) node[v]{} -- (1,0)node[v]{} -- (2,0)node[v]{}} &[2][3]&3&6\\ \dr{(0,0) node[v]{} -- (1,0)node[v]{} -- +(120:1)node[v]{} -- cycle} &[2]^2[3]&4&12\\ \hline \dr{(0,0) node[v]{} -- (1,0) node[v]{} -- (2,0) node[v]{} -- (3,0) node[v]{}} &[2][3][4]&6&24\\ \dr{(0,0) node[v]{} -- +(150:1) node[v]{} +(-150:1) node[v]{}--(0,0)--(1,0) node[v]{}} &[3][4]^2&8&48\\ \dr{(0,0) node[v]{} -- +(150:1) node[v]{} -- +(-150:1) node[v]{}--(0,0)--(1,0) node[v]{}} &[2][3][4]^2&9&96\\ \dr{(0,0)node[v]{}--(1,0)node[v]{}--(1,1)node[v]{}--(0,1)node[v]{}--cycle} &[3]^2[4]^2&10&144\\ \dr{(0,0)node[v]{}--(1,0)node[v]{}--(1,1)node[v]{}--(0,1)node[v]{}--cycle--(1,1)} &[2][3]^2[4]^2&11&288\\ \dr{(0,0)node[v]{}--(1,0)node[v]{}--(1,1)node[v]{}--(0,1)node[v]{}--cycle--(1,1) (1,0)--(0,1)} &[2]^2[3]^2[4]^2&12&576\\ \hline \dr{(0,0)node[v]{}--(1,0)node[v]{}--(2,0)node[v]{}--(3,0)node[v]{}--(4,0)node[v]{}} &[2][3][4][5]&10&120\\ \dr{(0,0) node[v]{} -- +(150:1) node[v]{} +(-150:1) node[v]{}--(0,0)--(1,0) node[v]{}--(2,0)node[v]{}} &[4]^2[5][6]&15&480\\ \dr{(0,0)node[v]{} -- ++(45:1)node[v]{} -- +(45:1)node[v]{} +(135:1)node[v]{} -- +(0,0) -- +(-45:1)node[v]{}} &[2]^{-2}[3]^{-2}[4]^2[5]^2[6]^4&28&14400\\ \dr{(0,0)node[v]{} -- ++(150:1)node[v]{} -- ++(-90:1)node[v]{} -- (0,0) -- (1,0)node[v]{} -- (2,0)node[v]{}} &[2][4]^2[5][6]&16&960\\ \dr{(0,0)node[v]{}--(-1,0)node[v]{}--+(-150:1)node[v]{}--(-1,-1)node[v]{}--(-1,0)(-1,-1)--(0,-1)node[v]{}}&[4]^2[5][6]^2&20&2880\\ \dr{(0,0)node[v]{}--++(36:1)node[v]{}--++(-36:1)node[v]{}--++(-108:1)node[v]{}--++(-1,0)node[v]{}--cycle}&[4]^2[5][6]^2&20&2880\\ \dr{(0,0)node[v]{}--(0,1)node[v]{}--(-1,1)node[v]{}--(-1,0)node[v]{}--(0,0)--(1,0)node[v]{}}&[2]^{-1}[4]^2[5][6]^3&24&8640\\ \dr{(0,0)node[v]{} -- ++(-150:1)node[v]{} -- ++(-150:1)node[v]{} -- ++(90:1)node[v]{} -- 
++(-30:1) -- ++(-30:1)node[v]{}} &[2]^{-1}[3]^{-2}[4]^2[5]^2[6]^4&29&28800\\ \dr{(2,0)node[v]{}--(1,0)node[v]{}--(0,0)node[v]{}--(0,1)node[v]{}--(1,1)node[v]{}--(1,0)node[v]{} (0,0)--(1,1)}&[4]^2[5][6]^3&25&17280\\ \dr{(0,0)node[v]{}--++(150:1)node[v]{}--++(0,-1)node[v]{}--++(30:1)--++(30:1)node[v]{}--++(0,-1)node[v]{}--cycle}&[3]^{-2}[4]^2[5]^2[6]^4&30&57600\\ \dr{(0,0)node[v]{}--(-1,0)node[v]{}--(-1,1)node[v]{}--(0,1)node[v]{}--(30:1)node[v]{}--(0,0)--(0,1)node[v]{}} &[2]^{-1}[3]^{-1}[4]^3[5][6]^4&30&69120\\ \dr{(2,0)node[v]{}--(1,0)node[v]{}--(0,0)node[v]{}--(0,1)node[v]{}--(1,1)node[v]{}--(1,0)node[v]{} (1,0)--(0,1)}&[2]^{-1}[3]^{-1}[4]^2[5]^2[6]^4&31&86400\\ \dr{(0,0)node[v]{}--(1,0)node[v]{}--(1,1)node[v]{}--(0,1)node[v]{}--cycle--(1,1) (.5,.5)node[v]{}} &[2]^{-3}[3]^{-1}[4]^4[5]^2[6]^4&35&345600\\ \dr{(1,0)node[v]{}--(0,0)node[v]{}--(-1,0)node[v]{}--(-1,1)node[v]{}--(0,1)node[v]{}--(0,0)--(-1,1) (-1,0)--(0,1)} &[3]^{-1}[4]^2[5]^2[6]^4&32&172800\\ \dr{(120:1)node[v]{}--(-1,0)node[v]{}--(0,0)node[v]{}--(120:1)--(60:1)node[v]{}--(1,0)node[v]{}--(0,0)--(60:1)} &[2]^{-1}[3]^{-1}[4]^3[5]^2[6]^4&34&345600\\ \dr{(0,0)node[v]{}--(1,0)node[v]{}--(1,1)node[v]{}--(0,1)node[v]{}--cycle--(1,1) (.5,.5)node[v]{}--(1,0)} &[2]^{-2}[3]^{-1}[4]^4[5]^2[6]^4&36&691200\\ \dr{(0,0)node[v]{}--(-1,0)node[v]{}--(0,1)node[v]{}--(0,0)--(-1,1)node[v]{}--(0,1)--(30:1)node[v]{}--(0,0)} &[2]^{-2}[3]^{-1}[4]^4[5]^2[6]^4&36&691200\\ \dr{(-1,0)node[v]{}--(-1,1)node[v]{}--(0,1)node[v]{}--(30:1)node[v]{}--(0,0)node[v]{}--(-1,0)--(0,1)--(0,0)--(-1,1)} &[2]^{-1}[3]^{-1}[4]^4[5]^2[6]^4&37&1382400\\ \dr{(0,0)node[v]{}--(1,0)node[v]{}--(1,1)node[v]{}--(0,1)node[v]{}--(0,0)--(1,1) (1,0)--(0,1) (.5,.5)node[v]{}} &[2]^{-2}[4]^4[5]^2[6]^4&38&2073600\\ \dr{(306:1)node[v]{}--(18:1)node[v]{}--(90:1)node[v]{}--(162:1)node[v]{}--(234:1)node[v]{} --(18:1)--(162:1)--(306:1)--(90:1)--(234:1)} &[2]^{-1}[4]^4[5]^2[6]^4&39&4147200\\ 
\dr{(306:1)node[v]{}--(18:1)node[v]{}--(90:1)node[v]{}--(162:1)node[v]{}--(234:1)node[v]{} --(18:1)--(162:1)--(306:1)--(90:1)--(234:1)--(306:1)} &[4]^4[5]^2[6]^4&40&8294400 \end{longtable}
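In the table, each Hilbert series is given as a product of factors $[k]^{e}$ with $[k]=1+t+\dots+t^{k-1}$, so the top degree is the sum of $e(k-1)$ over the factors and the total dimension is the value at $t=1$, namely the product of $k^{e}$. A short Python sketch (our own illustration; the helper name is ours, not from the paper) checks two of the tabulated entries:

```python
from fractions import Fraction
from math import prod

def deg_and_dim(factors):
    """Top degree and total dimension of a product of factors [k]^e,
    where [k](t) = 1 + t + ... + t^(k-1); factors is a list of (k, e)."""
    deg = sum((k - 1) * e for k, e in factors)   # each [k] has degree k - 1
    dim = prod((Fraction(k) ** e for k, e in factors), start=Fraction(1))
    return deg, dim

# [2][3] (the path on three vertices): degree 3, dimension 6
assert deg_and_dim([(2, 1), (3, 1)]) == (3, 6)
# [2]^{-2}[3]^{-2}[4]^2[5]^2[6]^4: degree 28, dimension 14400
assert deg_and_dim([(2, -2), (3, -2), (4, 2), (5, 2), (6, 4)]) == (28, 14400)
```

Exact rational arithmetic is used so that the factors with negative exponents still produce an integer dimension.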
https://arxiv.org/abs/1310.4112
Subalgebras of the Fomin-Kirillov algebra
https://arxiv.org/abs/1204.5704
A variant of Touchard's Catalan number identity
It is well known that the Catalan number C_n counts dissections of a regular (n+2)-gon into triangles. Here we count such dissections by the number of triangles that contain two sides of the polygon among their three edges, leading to a combinatorial interpretation of the identity C_n = sum_{1<=k<=n/2} 2^{n-2k} n-choose-2k C_k (k(n+2))/(n(n-1)), and illustrating its connection with Touchard's identity.
\section{Introduction} \vspace*{-5mm} Consider a regular polygon of $n+2$ sides with one side designated the base. It is a classic result that the number of ways to insert noncrossing diagonals connecting nonadjacent vertices of the polygon so as to dissect it into triangles is the Catalan number $C_{n}$ (see the illustration in Figure \ref{fig}). Each such dissection contains $n-1$ diagonals and $n$ triangles. When $n\ge 2$, each triangle may have 0, 1, or 2 sides in common with the polygon. Let $u_{n,k}$ denote the number of dissections in which precisely $k$ triangles contain two sides of the polygon. In any dissection, the number of such 2-polygon-side triangles ranges from a minimum of 2 (provided $n\ge 2$) to a maximum of $\lfloor(n+2)/2\rfloor$. Our main result is that $u_{n,k+1}=2^{n-2k} {n \choose 2k} C_{k}\,\frac{k(n+2)}{n(n-1)}$, yielding the apparently new identity \begin{equation}\label{main} \hspace*{30mm} C_{n} =\sum_{1 \,\le\,k \,\le\, n/2} 2^{n-2k} {n \choose 2k} C_{k}\,\frac{k(n+2)}{n(n-1)},\hspace*{20mm} n\ge 2. \end{equation} This identity is reminiscent of Touchard's identity \cite{A091894}, \begin{equation}\label{touchard} \hspace*{38mm} C_{n+1} =\sum_{0 \,\le\,k \,\le\, n/2} 2^{n-2k} {n \choose 2k} C_{k},\hspace*{27mm} n\ge 0, \end{equation} and indeed we will see a connection between them. To obtain an expression for $u_{n,k}$, it is convenient to color the base of the polygon blue and the remaining edges black, and let $v_{n,k}$ denote the number of dissections in which $k$ triangles contain two \emph{black} edges of the polygon. In Section 2, we express the \{$u_{n,k}$\} directly in terms of the \{$v_{n,k}$\}. In Section 3, we show bijectively that $v_{n+1,k+1}$ is actually the summand $2^{n-2k} {n \choose 2k} C_{k}$ in (\ref{touchard}), incidentally giving another combinatorial interpretation of Touchard's identity. Section 4 then establishes the main result. 
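Both \eqref{main} and \eqref{touchard} are easy to confirm numerically for small $n$. The following Python sketch (our own check, not part of the paper; the helper names are ours) verifies them with exact arithmetic:

```python
from fractions import Fraction
from math import comb

def catalan(n):
    # C_n = binom(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

def touchard_rhs(n):
    # right-hand side of Touchard's identity, valid for n >= 0
    return sum(2 ** (n - 2 * k) * comb(n, 2 * k) * catalan(k)
               for k in range(n // 2 + 1))

def main_rhs(n):
    # right-hand side of the new identity, valid for n >= 2
    return sum(Fraction(2 ** (n - 2 * k) * comb(n, 2 * k) * catalan(k)
                        * k * (n + 2), n * (n - 1))
               for k in range(1, n // 2 + 1))

assert all(touchard_rhs(n) == catalan(n + 1) for n in range(20))
assert all(main_rhs(n) == catalan(n) for n in range(2, 20))
```

The `Fraction` arithmetic keeps the factor $k(n+2)/(n(n-1))$ exact, so each summand is compared without rounding.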
\section[A relation between u(n,k) and v(n,k)]{\protect{A relation between $\mbf{u_{n,k}}$ and $\mbf{v_{n,k}}$}} \vspace*{-5mm} Clearly, $u_{1,1}=1,\ u_{2,2}=2,\ u_{3,2}=5$. For $n\ge 4$, let us count the contribution to $u_{n,k}$ according to the positive vertex $r$ of the triangle that contains the base, after labelling the vertices of the polygon $r=-1,0,1,...,n$ counterclockwise from the left endpoint of the base as illustrated in Figure \ref{fig}. We find that the contribution to $u_{n,k}$ for both $r=1$ and $r=n$ is $v_{n-1,k-1}$, and for $2\le r \le n-1$, the contribution is $ \sum_{k-\frac{n-r+1}{2} \le\, j\, \le \frac{r}{2}}v_{r-1,j}\,v_{n-r,k-j}. $ Hence, \[ u_{n,k} = 2 v_{n-1,k-1} + \sum_{r=2}^{n-1}\:\sum_{k-\frac{n-r+1}{2} \le\, j\, \le \frac{r}{2}}v_{r-1,j}\,v_{n-r,k-j}, \] valid for $n \ge 4 ,\ 2 \le k \le \frac{n + 2}{2}$. Similarly, we find a recurrence for $v_{n,k}$, \[ v_{n,k} = 2 v_{n-1,k} + \sum_{r=2}^{n-1}\:\sum_{k-\frac{n-r+1}{2} \le\, j\, \le \frac{r}{2}}v_{r-1,j}\,v_{n-r,k-j}, \] that involves the same double sum. Eliminating the double sum in the two equations leads to the relation \begin{equation}\label{eq2} u_{n,k} = v_{n,k} + 2v_{n-1,k-1}-2v_{n-1,k}, \end{equation} which in fact holds for all $n,k$. \section{A bijection} \label{bij} \vspace*{-5mm} It is well known that $2^{n-1-2k} {n-1 \choose 2k} C_{k}$ is the number of Dyck paths that contain $k\ DDU$'s, where $U$ denotes an upstep and $D$ a downstep. (See \cite{dyck2004} for a bijective proof.) We now present a bijection from polygon dissections to Dyck paths which makes it visually obvious that the triangles containing two black sides, taken in clockwise order from the base, except that the last one is ignored, correspond to the $DDU$'s, taken left to right, in the Dyck path. 
This bijection is simply the composition of the following 3 well known bijections, (1) the Erdelyi-Etherington bijection from triangle-dissections of a polygon to binary trees \cite[p.\,171]{ec2}, (2) the standard bijection from binary trees to ordered trees (Knuth's ``natural'' correspondence \cite[Section 2.3.2]{acp1}), and (3) the (trivial) ``glove'' bijection from ordered trees to Dyck paths. Here is an illustration. \setcounter{equation}{0} \vspace*{-1mm} \begin{equation}\label{fig}\notag \textrm{Figure 1} \end{equation} \begin{center} \begin{pspicture}(-1,-4)(10,2.5 \psset{unit=1.4cm} \psdots(-2,0)(-1.62,1.18)(-.62,1.9)(.62,1.9)(1.62,1.18)(2,0)(1.62,-1.18)(.62,-1.9)(-.62,-1.9)(-1.62,-1.18) \pspolygon(-2,0)(-1.62,1.18)(-.62,1.9)(.62,1.9)(1.62,1.18)(2,0)(1.62,-1.18)(.62,-1.9)(-.62,-1.9)(-1.62,-1.18)(-2,0) \psline[linecolor=blue,linewidth=1pt](-.62,-1.9)(.62,-1.9) \psline(-.62,-1.9)(-.62,1.9) \psline(.62,-1.9)(.62,1.9) \psline(-.62,1.9)(-2,0)(-.62,-1.9) \psline(.62,-1.9)(1.62,1.18)(1.62,-1.18) \psline(-.62,-1.9)(.62,1.9) \rput(-.8,-2.1){\textrm{{\footnotesize $-1$}}} \rput(.75,-2.1){\textrm{{\footnotesize $0$}}} \rput(1.8,-1.18){\textrm{{\footnotesize $1$}}} \rput(2.2,0){\textrm{{\footnotesize $2$}}} \rput(-1.8,-1.18){\textrm{{\normalsize $n$}}} \rput(1.75,1.5){$\ddots$} \rput(-2.2,0.1){\vdots} \rput(2.9,-.5){$\longrightarrow$} \psline[linecolor=red,linewidth=1pt](0,-2.4)(0,-1.4)(-.3,0.5)(-1,0)(-1.5,1) \psline[linecolor=red,linewidth=1pt](-1,0)(-1.5,-1) \psline[linecolor=red,linewidth=1pt](0,-1.4)(1,0)(1.3,-.5)(1.8,0) \rput(0,-2.8){\textrm{{\footnotesize a dissection into $n=8$ triangles}}} \rput(6,-2.8){\textrm{{\footnotesize left-planted binary tree}}} \psset{dotscale=1.5} \psdots[linecolor=red](0,-2.4)(0,-1.4)(-.3,0.5)(-1,0)(-1.5,1)(-1.5,-1)(1,0)(1.3,-.5)(1.8,0) \psset{unit=1.0cm} \psdots[linecolor=red](5,1)(6,0)(7,1)(7,-1)(8,-2)(9,-3)(9,-1)(10,0)(9,1) \psline[linecolor=red](5,1)(6,0)(7,1) \psline[linecolor=red](6,0)(7,-1)(8,-2)(9,-3) 
\psline[linecolor=red](8,-2)(9,-1)(10,0)(9,1) \psset{unit=1.4cm} \psset{dotscale=1.6} \psdots[linecolor=red](-1.5,1)(-1.5,-1) \psset{unit=1.0cm} \psset{dotscale=1.2} \psdots[linecolor=red](5,1)(7,1) \rput(11.7,-.5){$\xrightarrow[45^{\circ}]{\mathrm{rotate}}$} \end{pspicture} \end{center} \begin{center} \begin{pspicture}(-1,-1)(14,5 \psset{unit=1.0cm} \psdots(-1,0)(-1,1)(-1,2)(-1,3)(-1,4)(0,3)(0,1)(1,1)(1,2) \psline(-1,0)(-1,1)(-1,2)(-1,3)(-1,4) \psline(-1,3)(0,3) \psline(-1,1)(0,1)(1,1)(1,2) \rput(3,2){$\xrightarrow[\textrm{east$\:\rightarrow\:$west edges}]{\textrm{swing down}}$} \psdots(5,0)(5,1)(5,2)(5,3)(5,4)(6,3)(6,1)(7,1)(7,2) \psline(6,3)(5,2)(5,3)(5,4) \psline(6,1)(5,0)(5,1)(5,2) \psline(5,0)(7,1)(7,2) \rput(8.2,2){$\xrightarrow{\textrm{``prettify''}}$} \psdots(9.5,4)(9.5,3)(10,2)(10,1)(10.5,3)(11,0)(11,1)(12,1)(12,2) \psline(9.5,4)(9.5,3)(10,2)(10.5,3) \psline(10,2)(10,1)(11,0)(11,1) \psline(11,0)(12,1)(12,2) \rput(11,-0.5){\textrm{{\footnotesize ordered tree}}} \rput(13,2){$\longrightarrow$} \psset{dotscale=1.8} \psdots(-1,4)(0,3)(5,4)(6,3)(9.5,4)(10.5,3) \end{pspicture} \end{center} \vspace*{-3mm} \Einheit=0.6cm \[ \Pfad(-8,0),3333443444343344\endPfad \SPfad(-8,0),1111111111111111\endSPfad \DuennPunkt(-8,0) \DuennPunkt(-7,1) \DuennPunkt(-6,2) \DuennPunkt(-5,3) \NormalPunkt(-4,4) \DuennPunkt(-3,3) \DuennPunkt(-2,2) \NormalPunkt(-1,3) \DuennPunkt(0,2) \DuennPunkt(1,1) \DuennPunkt(2,0) \DuennPunkt(3,1) \DuennPunkt(4,0) \DuennPunkt(5,1) \DuennPunkt(6,2) \DuennPunkt(7,1) \DuennPunkt(8,0) \] \begin{center} \begin{pspicture}(-1,-1)(0,0 \psset{unit=1.0cm} \rput(-.5,0.3){\textrm{{\footnotesize Dyck path}}} \rput(0,-.5){\textrm{ {\normalsize Bijection from triangle dissections to Dyck paths}}} \end{pspicture} \end{center} \vspace*{-5mm} \noindent The last step is the glove bijection: walk clockwise around the tree starting from the root and record an upstep (resp. downstep) each time an edge is traversed upward (resp. downward). 
Or, more picturesquely, burrow up the edges from the root to form a multi-fingered glove and fan out the fingers. Thus each edge in the tree corresponds to a matching upstep and downstep in the path. The illustrated dissection has 3 triangles containing two black sides of the polygon; all but the last are highlighted using enlarged dots, and they show up in the Dyck path as vertices initiating a descent of 2 or more downsteps followed by an upstep, that is, they correspond to $DDU$s in the Dyck path, as claimed. \vspace*{-2mm} \section{Conclusion} \vspace*{-5mm} The preceding section shows that $v_{n,k} = 2^{n+1-2k}\binom{n-1}{2k-2}C_{k-1}$. Substituting into (\ref{eq2}), we find \[ u_{n,k}=2^{n+1-2k}\left(\binom{n-2}{2k-3}C_{k-1}+\binom{n-2}{2k-4}4C_{k-2}\right), \] which simplifies to \[ u_{n,k+1}=2^{n-2k} \binom{n}{2k}C_{k} \frac{(n+2)k}{n(n-1)}. \] Now sum over $k$ to obtain (\ref{main}). The first few values of $u_{n,k}$ are given in the following table. \[ \begin{array}{c|ccccc} n^{\textstyle{\,\backslash \,k}} & 2 & 3 & 4 & 5 \\ \hline 2& 2 & & & \\ 3& 5 & & & \\ 4& 12 & 2 & & \\ 5& 28 & 14 & & \\ 6& 64 & 64 & 4 & \\ 7& 144 & 240 & 45 & \\ 8& 320 & 800 & 300 & 10 \\ 9& 704 & 2464 & 1540 & 154 \\ \end{array} \] \vspace*{2mm} \centerline{{\normalsize Table of values of $u_{n,k}$ }} \vspace*{4mm} \textbf{Added in proof} \ Tewodros Amdeberhan informs me that he has recently discovered an identity equivalent to (\ref{main}), namely \[ \frac{2n}{n+3} C_{n+1} =\sum_{0 \,\le\,k \,\le\, (n-1)/2} 2^{n-2k} {n \choose 2k+1} C_{k}\,\frac{2k+1}{k+2}, \] and has observed that subtracting the latter from Touchard's identity (\ref{touchard})(multiplied by 2) gives an alternating sum expression for the super ballot number $6/(n+3)C_{n+1}$, sequence \htmladdnormallink{A007054}{http://oeis.org/A007054} in OEIS. \textbf{Acknowledgement of priority}\ \ Alon Regev pointed out to me that the main results of this paper have previously been obtained by Hurtado and Noy \cite{ears}.
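The closed forms $v_{n,k}=2^{n+1-2k}\binom{n-1}{2k-2}C_{k-1}$ and $u_{n,k+1}=2^{n-2k}\binom{n}{2k}C_{k}\frac{(n+2)k}{n(n-1)}$ can be checked against the table above, together with the relation (\ref{eq2}). A short Python sketch (our own verification; the helper names are ours):

```python
from fractions import Fraction
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def v(n, k):
    # v_{n,k} = 2^{n+1-2k} binom(n-1, 2k-2) C_{k-1}, for k >= 1
    if k < 1:
        return Fraction(0)
    return Fraction(2) ** (n + 1 - 2 * k) * comb(n - 1, 2 * k - 2) * catalan(k - 1)

def u(n, k):
    # u_{n,k} with j = k - 1: 2^{n-2j} binom(n, 2j) C_j (n+2) j / (n(n-1))
    j = k - 1
    return Fraction(2 ** (n - 2 * j) * comb(n, 2 * j) * catalan(j) * (n + 2) * j,
                    n * (n - 1))

# a sample of entries from the table of u_{n,k}
table = {(4, 2): 12, (4, 3): 2, (6, 3): 64, (8, 5): 10, (9, 4): 1540}
assert all(u(n, k) == val for (n, k), val in table.items())
# relation (3): u_{n,k} = v_{n,k} + 2 v_{n-1,k-1} - 2 v_{n-1,k}
assert all(u(n, k) == v(n, k) + 2 * v(n - 1, k - 1) - 2 * v(n - 1, k)
           for (n, k) in table)
```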
https://arxiv.org/abs/1710.00442
Virtual Element Methods on Meshes with Small Edges or Faces
We consider a model Poisson problem in $\R^d$ ($d=2,3$) and establish error estimates for virtual element methods on polygonal or polyhedral meshes that can contain small edges ($d=2$) or small faces ($d=3$).
\section{Introduction}\label{sec:Introduction} Let ${\Omega}\subset \mathbb{R}^d$ ($d=2,3$) be a bounded polygonal/polyhedral domain and $f\in L_2({\Omega})$. The Poisson problem with the homogeneous Dirichlet boundary condition is to find $u\in H^1_0({\Omega})$ such that \begin{equation}\label{eq:Poisson} a(u,v)=(f,v) \qquad\forall\,v\in H^1_0({\Omega}), \end{equation} where \begin{equation}\label{eq:aDef} a(u,v)=\int_{\Omega} \nabla u\cdot\nabla v\,dx \end{equation} and $(\cdot,\cdot)$ is the inner product for $L_2({\Omega})$. Here and throughout the paper we follow standard notation for differential operators, function spaces and norms that can be found for example in \cite{Ciarlet:1978:FEM,ADAMS:2003:Sobolev,BScott:2008:FEM}. \par Problem \eqref{eq:Poisson} can be solved by virtual element methods \cite{BBCMMR:2013:VEM,AABMR:2013:Projector} on polygonal or polyhedral meshes. It has been observed in numerical experiments that the convergence rates for the virtual element methods do not deteriorate noticeably even in the presence of small edges or faces (cf. \cite{AABMR:2013:Projector,BLR:2016:VEM,BDR:2017:VEM}). Our goal is to establish error estimates that justify these numerical results for the virtual element methods introduced in \cite{AABMR:2013:Projector}. \par We will develop error estimates that are based on general shape regularity assumptions on the subdomains in the polygonal or polyhedral meshes. For the two dimensional problem, we assume that (i) each polygonal subdomain is star-shaped with respect to a disc whose diameter is comparable to the diameter of the subdomain and (ii) the number of edges of the subdomains is uniformly bounded. 
For the three dimensional problem, we assume that (i) each polyhedral subdomain is star-shaped with respect to a ball whose diameter is comparable to the diameter of the subdomain; (ii) the number of faces of the subdomains is uniformly bounded; and (iii) the faces of the subdomains satisfy the two dimensional shape regularity assumptions. Our error estimates are optimal up to at most a logarithmic factor that involves the ratio of the lengths of the longest edge and the shortest edge of each subdomain in two dimensions and a similar ratio over the edges of the faces on the subdomains in three dimensions. \par The rest of the paper is organized as follows. We begin with a star-shaped condition in Section~\ref{sec:SS}. Then we treat the two dimensional case in Section~\ref{sec:LocalVEM2D} and Section~\ref{sec:Poisson2D}, where the analysis benefits from the techniques developed in \cite{BLR:2016:VEM} and \cite{BGS:2017:VEM2}. The extension to the three dimensional Poisson problem is presented in Section~\ref{sec:Poisson3D}. We end with some concluding remarks in Section~\ref{sec:Conclusions}. \par In order to avoid the proliferation of constants, we will often use the notation $A\lesssim B$ to represent the statement that $A\leq (\text{constant}) B$, where the positive constant is independent of mesh sizes. The notation $A\approx B$ is equivalent to $A\lesssim B$ and $B\lesssim A$. The precise dependence of the hidden constants will be declared in the text. \par To minimize the technicalities, we also assume that ${\Omega}$ is convex so that the solution of \eqref{eq:Poisson} belongs to $H^2({\Omega})$ by elliptic regularity \cite{Grisvard:1985:EPN,Dauge:1988:EBV}. \section{A Star-Shaped Condition}\label{sec:SS} Let $D$ be a bounded open polygon ($d=2$) or a bounded open polyhedron $(d=3)$, and $\hspace{1pt}h_{{\scriptscriptstyle D}}$ be the diameter of $D$. 
\par We assume that \begin{equation}\label{eq:SA} \text{$D$ is star-shaped with respect to a disc/ball $\fB_{\!\ssD}\subset D$ with radius $\rho_{\!{_\ssD}}\hspace{1pt}h_{{\scriptscriptstyle D}}$}. \end{equation} \par We will denote by $\tilde\fB_{\!\ssD}$ the disc/ball concentric with $\fB_{\!\ssD}$ whose radius is $\hspace{1pt}h_{{\scriptscriptstyle D}}$. It is clear that \begin{equation}\label{eq:discs} \fB_{\!\ssD}\subset D\subset \tilde\fB_{\!\ssD}. \end{equation} \par Below are some consequences of the star-shaped condition \eqref{eq:SA}. The hidden constants in Section~\ref{subsec:Sobolev}--Section~\ref{subsec:PF} only depend on $\rho_{\!{_\ssD}}$, while those in Section~\ref{subsec:DE} also depend on $k$. \subsection{Sobolev Inequalities}\label{subsec:Sobolev} It follows from \eqref{eq:SA} that \begin{alignat}{3} \|\zeta\|_{{L_\infty(D)}}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-(d/2)}\|\zeta\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{1-(d/2)}|\zeta|_{{H^1(D)}} +\hspace{1pt}h_{{\scriptscriptstyle D}}^{2-(d/2)}|\zeta|_{H^2(D)}&\qquad&\forall\, \zeta\in H^2(D),\label{eq:Sobolev}\\ \intertext{and in the case where $d=2$,} \|\zeta\|_{{L_\infty(D)}}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\zeta\|_{L_2(D)}+|\zeta|_{{H^1(D)}} +\hspace{1pt}h_{{\scriptscriptstyle D}}^{1/2}|\zeta|_{H^{3/2}(D)}&\qquad&\forall\, \zeta\in H^{3/2}(D).\label{eq:Sobolev2} \end{alignat} Details can be found in \cite[Lemma~4.3.4]{BScott:2008:FEM} and \cite[Section~6]{DS:1980:BH}. 
\subsection{Bramble-Hilbert Estimates}\label{subsec:BH} Condition \eqref{eq:SA} also implies the following Bramble-Hilbert estimates \cite{BH:1970:Lemma}: \begin{alignat}{3} \inf_{q\in\mathbb{P}_\ell}|\zeta-q|_{H^m(D)}&\lesssim h_D^{\ell+1-m}|\zeta|_{H^{\ell+1}(D)} &\qquad&\forall\,\zeta\in H^{\ell+1}(D), \,\ell=0,\ldots,k,\;\text{and}\; m\leq \ell, \label{eq:BHEstimates}\\ \inf_{q\in\mathbb{P}_\ell}|\zeta-q|_{H^m(D)}& \lesssim h_D^{\ell+\frac12-m}|\zeta|_{H^{\ell+\frac12}(D)} &\qquad&\forall\,\zeta\in H^{\ell+\frac12}(D), \,\ell=0,\ldots,k,\;\text{and}\; m\leq \ell. \label{eq:BHEstimates2} \end{alignat} Details can be found in \cite[Lemma~4.3.8]{BScott:2008:FEM} and \cite[Section~6]{DS:1980:BH}. \subsection{A Lipschitz Isomorphism between $D$ and $\fB_{\!\ssD}$}\label{subsec:Lipschitz} \par In view of the star-shaped condition \eqref{eq:SA}, there exists a Lipschitz isomorphism $\Phi:\fB_{\!\ssD}\longrightarrow D$ such that both $|\Phi|_{W^{1,\infty}(\fB_{\!\ssD})}$ and $|\Phi^{-1}|_{W^{1,\infty}(D)}$ are bounded by a constant that only depends on $\rho_{\!{_\ssD}}$ (cf. \cite[Section~1.1.8]{Mazya:2011:Sobolev}). \par It follows that % \begin{align} |D|&\approx \hspace{1pt}h_{{\scriptscriptstyle D}}^d,\label{eq:G1}\\ |\partial D|&\approx \hspace{1pt}h_{{\scriptscriptstyle D}}^{d-1},\label{eq:G2} \end{align} where $|D|$ is the area of $D$ ($d=2$) or the volume of $D$ ($d=3$), and $|\partial D|$ is the arclength of $\partial D$ ($d=2$) or the surface area of $D$ ($d=3$). \par Moreover we have (cf. 
\cite[Theorem~4.1]{Wloka:1987:PDE}) \begin{alignat}{3} \|\zeta\|_{L_2(\partial D)}&\approx \|\zeta\circ\Phi\|_{L_2(\partial\fB_{\!\ssD})} &\qquad&\forall\,\zeta\in L_2(\partial D),\label{eq:L2Iso1}\\ \|\zeta\|_{L_2(D)}&\approx \|\zeta\circ\Phi\|_{L_2(\fB_{\!\ssD})} &\qquad&\forall\,\zeta\in L_2(D),\label{eq:L2Iso2}\\ |\zeta|_{H^1(\partial D)}&\approx|\zeta\circ\Phi|_{H^1(\partial\fB_{\!\ssD})} &\qquad&\forall\,\zeta\in H^1(\partial D),\label{eq:H1Iso1}\\ |\zeta|_{{H^1(D)}}&\approx|\zeta\circ\Phi|_{H^1(\fB_{\!\ssD})} &\qquad&\forall\,\zeta\in {H^1(D)},\label{eq:H1Iso2}\\ |\zeta|_{H^{1/2}(\partial D)}&\approx |\zeta\circ\Phi|_{H^{1/2}(\partial\fB_{\!\ssD})} &\qquad&\forall\,\zeta\in H^{1/2}(\partial D). \label{eq:HalfIso} \end{alignat} \subsection{Poincar\'e-Friedrichs Inequalities}\label{subsec:PF} The Bramble-Hilbert estimate \eqref{eq:BHEstimates} and the geometric estimates \eqref{eq:G1}--\eqref{eq:G2} imply the following Poincar\'e-Friedrichs inequalities: \begin{alignat}{3} h_D^{-(d/2)} \|\zeta\|_{L_2(D)}&\lesssim h_D^{-d}\Big|\int_{D} \zeta\,dx\Big| +\hspace{1pt}h_{{\scriptscriptstyle D}}^{1-(d/2)}|\zeta|_{{H^1(D)}} &\qquad&\forall\,\zeta\in {H^1(D)}, \label{eq:PF1}\\ h_D^{-(d/2)} \|\zeta\|_{L_2(D)}&\lesssim h_D^{-(d-1)}\Big|\int_{\partial D} \zeta\,ds\Big| +\hspace{1pt}h_{{\scriptscriptstyle D}}^{1-(d/2)}|\zeta|_{{H^1(D)}} &\qquad&\forall\,\zeta\in {H^1(D)}.\label{eq:PF2} \end{alignat} \subsection{Estimates for $|\cdot|_{H^{1/2}(\partial D)}$}\label{subsec:Half} On the circle $\partial \fB_{\!\ssD}$, we have a standard estimate \begin{equation*} |\zeta|_{H^{1/2}(\partial\fB_{\!\ssD})}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1/2}\|\zeta\|_{L_2(\partial\fB_{\!\ssD})} +\hspace{1pt}h_{{\scriptscriptstyle D}}^{1/2}|\zeta|_{H^1(\partial \fB_{\!\ssD})} \qquad\forall\,\zeta\in H^1(\partial \fB_{\!\ssD}). 
\end{equation*} It then follows from a Poincar\'e-Friedrichs inequality for a circle that \begin{equation*} |\zeta|_{H^{1/2}(\partial\fB_{\!\ssD})}=|\zeta-\bar\zeta|_{H^{1/2}(\partial \fB_{\!\ssD})} \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1/2}\|\zeta-\bar\zeta\|_{L_2(\partial\fB_{\!\ssD})} +\hspace{1pt}h_{{\scriptscriptstyle D}}^{1/2}|\zeta|_{H^1(\partial \fB_{\!\ssD})} \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{1/2}|\zeta|_{H^1(\partial\fB_{\!\ssD})}, \end{equation*} where $\bar\zeta$ is the mean of $\zeta$ over $\partial \fB_{\!\ssD}$. Therefore, in view of \eqref{eq:H1Iso1} and \eqref{eq:HalfIso}, we have % \begin{equation}\label{eq:HalfAndOne} |\zeta|_{H^{1/2}(\partial D)}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{1/2}|\zeta|_{H^1(\partial D)} \qquad\forall\,\zeta\in H^1(\partial D). \end{equation} \par Similarly, it follows from \eqref{eq:H1Iso2}, \eqref{eq:HalfIso} and the trace theorem for $H^1(\fB_{\!\ssD})$ that \begin{alignat}{3} |\zeta|_{H^{1/2}(\partial D)}&\lesssim |\zeta|_{{H^1(D)}} &\qquad&\forall\,\zeta\in {H^1(D)}.\label{eq:HalfTrace} \end{alignat} \subsection{Trace Inequalities}\label{subsec:Trace} It follows from \eqref{eq:L2Iso1}, \eqref{eq:L2Iso2}, \eqref{eq:H1Iso2} and standard (scaled) trace inequalities for $H^1(\fB_{\!\ssD})$ that \begin{alignat}{3} \|\zeta\|_{L_2(\partial D)}^2&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\zeta\|_{L_2(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{{H^1(D)}}^2 &\qquad&\forall\,\zeta\in {H^1(D)}.\label{eq:Trace} \end{alignat} \par We also have trace inequalities for the $H^1$ norm on $\partial D$ that require a different derivation. \begin{lemma}\label{lem:CZ1} Let $e$ be an edge of $D\subset \mathbb{R}^2$. We have \begin{equation* \hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^1(e)}^2\lesssim |\zeta|_{H^1(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^{3/2}(D)}^2 \qquad \forall\,\zeta\in H^{3/2}(D). 
\end{equation*} \end{lemma} \begin{proof} By scaling we can assume $\hspace{1pt}h_{{\scriptscriptstyle D}}=1$. Without loss of generality we may also assume that \begin{equation}\label{eq:CZ1Est0} \int_D \zeta\,dx=0. \end{equation} \par The existence of the Lipschitz isomorphism $\Phi:D\longrightarrow \fB_{\!\ssD}$ implies that the domain $D$ satisfies a uniform cone condition \cite[Section~4.8]{ADAMS:2003:Sobolev}, with one reference cone and a finite cover of $\bar D$ that contains a fixed number of congruent open discs. Furthermore, the angle and the height of the reference cone and the radius of the open discs only depend on $\rho_{\!{_\ssD}}$. It follows that there exists a Calderon-Zygmund extension operator $E:H^1(D)\longrightarrow H^1(\mathbb{R}^2)$ (cf. \cite[Theorem~5.28]{ADAMS:2003:Sobolev}) such that $E$ maps $H^2(D)$ into $H^2(\mathbb{R}^2)$ and the restriction of $E\zeta$ to $D$ equals $\zeta$. Moreover we have \begin{equation*} \|E\zeta\|_{H^1(\mathbb{R}^2)}\lesssim \|\zeta\|_{H^1(D)} \quad\forall\,\zeta\in{H^1(D)} \quad\text{and}\quad \|E\zeta\|_{H^2(\mathbb{R}^2)}\lesssim \|\zeta\|_{H^2(D)} \quad\forall\,\zeta\in{H^2(D)}. \end{equation*} It follows from the interpolation of Sobolev spaces \cite[Chapter~7]{ADAMS:2003:Sobolev} that \begin{align}\label{eq:CZ1Est1} \|E\zeta\|_{H^{3/2}(\mathbb{R}^2)}\lesssim \|\zeta\|_{H^{3/2}(D)} \lesssim |\zeta|_{H^1(D)} +|\zeta|_{H^{3/2}(D)} \qquad\forall\,\zeta\in H^{3/2}(D), \end{align} where we have used \eqref{eq:PF2} and \eqref{eq:CZ1Est0}. \par Let $e$ be an edge of $D$, $\tilde e$ be the infinite line that contains $e$ and $G$ be a half-plane that borders $\tilde e$. Then we have, by \eqref{eq:CZ1Est1} and the trace theorem \cite[Theorem~8.1]{Wloka:1987:PDE}, \begin{align* |\zeta|_{H^1(e)}&=|E\zeta|_{H^1(e)} \leq |E\zeta|_{H^1(\tilde e)} \lesssim \|E\zeta\|_{H^{3/2}(G)} \lesssim |\zeta|_{H^1(D)}+|\zeta|_{H^{3/2}(D)}. \end{align*} \end{proof} \par The proof of the following result is similar. 
\begin{lemma}\label{lem:CZ2} Let $F$ be a face of $D\subset \mathbb{R}^3$. We have \begin{equation* \hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^1(F)}^2\lesssim |\zeta|_{H^1(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}^{2}|\zeta|_{H^2(D)}^2 \qquad \forall\,\zeta\in H^{2}(D). \end{equation*} \end{lemma} \subsection{A Lifting Operator}\label{subsec:Lifting} It follows from \eqref{eq:L2Iso2}, \eqref{eq:H1Iso2}, \eqref{eq:HalfIso} and the inverse trace theorem for $H^1(\fB_{\!\ssD})$ (cf. \cite[Theorem~8.8]{Wloka:1987:PDE}) that there exists a linear operator $\mathrm {Tr}^\dag:H^{1/2}(\partial D)\longrightarrow {H^1(D)}$ such that \begin{equation* \mathrm {Tr}^\dag \zeta=\zeta\;\text{on}\;\partial D \quad\text{and}\quad \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\mathrm {Tr}^\dag \zeta\|_{L_2(D)}+|\mathrm {Tr}^\dag \zeta|_{{H^1(D)}}\leq C|\zeta|_{H^{1/2}(\partial D)}, \end{equation*} where the constant $C$ depends only on $\rho_{\!{_\ssD}}$. % \subsection{Some Estimates for Polynomials}\label{subsec:DE} Let $\mathbb{P}_k$ be the space of polynomials of total degree $\leq k$ in $d$ variables. We obtain the following estimates by using the equivalence of norms on finite dimensional vector spaces and scaling. 
\begin{lemma}\label{lem:DiscreteEstimates} We have \begin{alignat}{3} \|p\|_{L_2(\partial D)}^2&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|p\|_{L_2(D)}^2&\qquad&\forall\,p\in\mathbb{P}_k, \label{eq:DiscreteEstimate0}\\ |p|_{{H^1(D)}}&\lesssim h_D^{-1}\|p\|_{L_2(D)} &\qquad&\forall\,p\in \mathbb{P}_k, \label{eq:DiscreteEstimate1}\\ \|p\|_{L_\infty(D)}&\lesssim |\bar{p}_{_{\partial D}}|+\hspace{1pt}h_{{\scriptscriptstyle D}}^{1-(d/2)} |p|_{{H^1(D)}}&\qquad&\forall\,p\in\mathbb{P}_k, \label{eq:DiscreteEstimate2}\\ \|p\|_{L_\infty(D)}&\lesssim |\bar{p}_{_{D}}|+\hspace{1pt}h_{{\scriptscriptstyle D}}^{1-(d/2)} |p|_{{H^1(D)}}&\qquad&\forall\,p\in\mathbb{P}_k, \label{eq:DiscreteEstimate3} \end{alignat} where $\bar p_{_{\partial D}}$ is the mean of $p$ over $\partial D$ and $\bar p_{_D}$ is the mean of $p$ over $D$. \end{lemma} \begin{proof} In view of \eqref{eq:discs}, we have \begin{align}\label{eq:DS2} \|p\|_{L_\infty(D)}\leq \|p\|_{L_\infty(\tilde\mathfrak{B}_{{\scriptscriptstyle D}})} &\lesssim \|p\|_{L_\infty(\mathfrak{B}_{{\scriptscriptstyle D}})}\\ &\lesssim (\text{diam}\,\mathfrak{B}_{{\scriptscriptstyle D}})^{-d/2}\|p\|_{L_2(\mathfrak{B}_{{\scriptscriptstyle D}})} \leq (\text{diam}\,\mathfrak{B}_{{\scriptscriptstyle D}})^{-d/2}\|p\|_{L_2(D)}.\notag \end{align} The estimate \eqref{eq:DiscreteEstimate0} then follows from \eqref{eq:G2} and \eqref{eq:DS2}: \begin{align*} \|p\|_{L_2(\partial D)}^2 \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{d-1}\|p\|_{L_\infty(D)}^2 \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|p\|_{L_2(D)}^2. 
\end{align*} \par Similarly, we have \begin{equation*} |p|_{{H^1(D)}}\leq |p|_{H^1(\tilde\mathfrak{B}_{{\scriptscriptstyle D}})}\lesssim |p|_{H^1(\mathfrak{B}_{{\scriptscriptstyle D}})} \lesssim (\text{diam}\,\mathfrak{B}_{{\scriptscriptstyle D}})^{-1}\|p\|_{L_2(\mathfrak{B}_{{\scriptscriptstyle D}})}\leq (\text{diam}\,\mathfrak{B}_{{\scriptscriptstyle D}})^{-1}\|p\|_{L_2(D)}, \end{equation*} which together with \eqref{eq:SA} implies \eqref{eq:DiscreteEstimate1}. \par The estimates \eqref{eq:DiscreteEstimate2} and \eqref{eq:DiscreteEstimate3} follow immediately from \eqref{eq:G1}--\eqref{eq:G2}, \eqref{eq:PF1}--\eqref{eq:PF2} and \eqref{eq:DS2}. \end{proof} % \begin{lemma}\label{lem:Polynomial} Given any $p\in \mathbb{P}_{k-2}$ $(k\geq2)$, there exists $q\in\mathbb{P}_k$ such that $\Delta q=p$ and \begin{equation*} \|\nabla q\|_{L_2(B)}\leq C (\mathrm{diam}\, B)\|p\|_{L_2(B)} \end{equation*} where $B\subset \mathbb{R}^d$ is any ball and the positive constant $C$ depends only on $k$. \end{lemma} \begin{proof} By scaling it suffices to treat the case where $B$ is a unit ball. Since $\Delta$ maps $\mathbb{P}_k$ onto $\mathbb{P}_{k-2}$, there exists an operator $\Delta^\dag:\mathbb{P}_{k-2}\longrightarrow \mathbb{P}_k$ such that $\Delta \Delta^\dag$ is the identity operator on $\mathbb{P}_{k-2}$, and we can take $q=\Delta^\dag p$. The lemma follows from the observation that both $\|p\|_{L_2(B)}$ and $\|\Delta^\dag p\|_{L_2(B)}$ are norms on $\mathbb{P}_{k-2}$ together with a standard inverse estimate \cite{Ciarlet:1978:FEM,BScott:2008:FEM}. \end{proof} \par \section{Local Virtual Element Spaces in Two Dimensions}\label{sec:LocalVEM2D} In this section we obtain properties of the local virtual element spaces that will be used in the stability and error analyses in Section~\ref{sec:Poisson2D}. 
\par Let the space $\mathbb{P}_k(D)$ be the restriction of $\mathbb{P}_k$ to $D$ and the space $\mathbb{P}_k(\p D)$ be defined by
\begin{equation*}
\mathbb{P}_k(\p D)=\{v\in C(\partial D):\,v\big|_e\in \mathbb{P}_k(e) \;\text{for all}\;e\in\cE_{\ssD}\},
\end{equation*}
where $C(\partial D)$ is the space of continuous functions on $\partial D$, $\mathbb{P}_k(e)$ is the restriction of $\mathbb{P}_k$ to the edge $e$, and $\cE_{\ssD}$ is the set of the edges of $D$. The length of an edge $e$ is denoted by $h_e$.
\subsection{The Projection $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}$ and the Space $\cQ^k(D)$}\label{subsec:PODCQD}
The Sobolev space ${H^1(D)}$ is a Hilbert space under the inner product
\begin{equation}\label{eq:InnerProduct}
(\!(\zeta,\eta)\!)=(\nabla \zeta,\nabla \eta) +\Big(\int_{\partial D}\zeta\,ds\Big)\Big(\int_{\partial D} \eta\,ds\Big).
\end{equation}
The projection operator from ${H^1(D)}$ onto $\mathbb{P}_k(D)$ with respect to $(\!(\cdot,\cdot)\!)$ is denoted by $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}$, i.e., $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} \zeta\in \mathbb{P}_k(D)$ satisfies
\begin{equation}\label{eq:PODDef}
(\!(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta,q)\!)=(\!(\zeta,q)\!) \qquad\forall\,q\in\mathbb{P}_k(D).
\end{equation}
In particular we have
\begin{equation}\label{eq:Projection}
\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} p=p \qquad\forall\,p\in\mathbb{P}_k(D).
\end{equation} \par It is straight-forward to check that \eqref{eq:PODDef} is equivalent to \begin{alignat}{3} \int_D \nabla (\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta)\cdot\nabla q\,dx&=\int_D \nabla \zeta\cdot \nabla q\,dx\label{eq:POD1}\\ &=\int_{\partial D} \zeta(n\cdot\nabla q)\,ds-\int_D \zeta(\Delta q)\,dx &\qquad&\forall\,q\in \mathbb{P}_k(D),\notag \end{alignat} together with \begin{alignat}{3} \int_{\partial D} \Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\,ds&=\int_{\partial D}\zeta\,ds. \label{eq:POD2} \end{alignat} \par For $k\geq 1$, the virtual element space $\cQ^k(D)\subset {H^1(D)}$ is defined by the following conditions: $v\in {H^1(D)}$ belongs to $\cQ^k(D)$ if and only if (i) the trace of $v$ on $\partial D$ belongs to $\mathbb{P}_k(\p D)$, (ii) the distribution $-\Delta v$ belongs to $\mathbb{P}_k(D)$, and (iii) we have \begin{equation}\label{eq:Condition3} \Pi_{k,D}^0\hspace{1pt} v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v\in \mathbb{P}_{k-2}(D), \end{equation} where $\Pi_{k,D}^0\hspace{1pt}$ is the projection from $L_2(D)$ onto $\mathbb{P}_k(D)$ and $\mathbb{P}_{-1}(D)=\{0\}$. \begin{remark}\label{rem:Continuity}{\rm It follows from elliptic regularity for bounded Lipschitz domains \cite[Section~1.2]{Kenig:1994:CBMS} and conditions (i) and (ii) in the definition of $\cQ^k(D)$ that $\cQ^k(D)\subset C(\bar D)$.} \end{remark} \begin{remark}\label{rem:dof}{\rm The dimension of $\cQ^k(D)$ is the sum of the dimension of $\mathbb{P}_k(\p D)$ and the dimension of $\mathbb{P}_{k-2}(D)$ (cf. \cite{AABMR:2013:Projector}). The degrees of freedom consist of (i) the values of $v$ at the vertices of $D$ and at the points in the interior of the edges that together determine $\mathbb{P}_k(\p D)$, and (ii) the moments of $\Pi_{k-2,D}^0 v$. 
The set of the nodes in (i) will be denoted by $\mathcal{N}_{\p D}$.}
\end{remark}
\begin{remark}\label{rem:Computable}
{\rm It follows from \eqref{eq:POD1} and \eqref{eq:POD2} that the polynomial $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v$ can be computed in terms of the degrees of freedom of $v\in\cQ^k(D)$. Moreover, the polynomial $\Pi_{k,D}^0\hspace{1pt} v$ can also be computed through \eqref{eq:Condition3}.}
\end{remark}
\subsection{A Minimum Energy Principle}\label{subsec:MEP}
The following minimum energy principle is useful for bounding the $H^1$ norm of a virtual element function.
\begin{lemma}\label{lem:MEP}
The inequality
\begin{equation*}
|v|_{{H^1(D)}}\leq |\zeta|_{{H^1(D)}}
\end{equation*}
holds for any $v\in\cQ^k(D)$ and $\zeta\in {H^1(D)}$ such that $\zeta-v=0$ on $\partial D$ and $\Pi_{k,D}^0\hspace{1pt}(\zeta-v)=0$.
\end{lemma}
\begin{proof}
It follows from integration by parts, condition (ii) in the definition of $\cQ^k(D)$ and the assumptions on $\zeta-v$ that
\begin{equation*}
\int_D \nabla v\cdot\nabla (\zeta-v)\,dx=\int_D (-\Delta v) (\zeta-v)\,dx=0
\end{equation*}
and hence $ |\zeta|_{H^1(D)}^2=|\zeta-v|_{H^1(D)}^2+|v|_{H^1(D)}^2. $
\end{proof}
\subsection{A Maximum Principle}\label{subsec:MP}
We begin with a result from \cite[Lemma~3.3]{BLR:2016:VEM}.
\begin{lemma}\label{lem:BLR}
There exists a positive constant $C$, depending only on $\rho_{\!{_\ssD}}$ and $k$, such that
\begin{equation}\label{eq:BLR}
\|\Delta v\|_{L_2(D)}\leq C\hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\nabla v\|_{L_2(D)} \qquad\forall\,v\in\cQ^k(D).
\end{equation}
\end{lemma}
\begin{proof}
By scaling we may assume $\hspace{1pt}h_{{\scriptscriptstyle D}}=1$.
\par Let $\phi\geq0$ be a smooth (bump) function supported on the disc $\fB_{\!\ssD}$ with radius $\rho_{\!{_\ssD}}$ such that
\begin{equation}\label{eq:BumpFunction}
\int_D \phi\,dx=1, \quad |\phi(x)|\lesssim 1 \quad \text{and} \quad |\nabla\phi(x)|\lesssim 1.
\end{equation}
We have, by the equivalence of norms on finite dimensional vector spaces, scaling and \eqref{eq:discs},
\begin{equation}\label{eq:Fundamental1}
\|p\|_{L_2(D)}^2\leq \|p\|_{L_2(\tilde\fB_{\!\ssD})}^2\lesssim \|p\|_{L_2(\fB_{\!\ssD})}^2 \lesssim \int_{\fB_{\!\ssD}} p^2\phi\,dx \qquad\forall\,p\in\mathbb{P}_k.
\end{equation}
\par Since $\Delta v\in\mathbb{P}_k$, it follows from \eqref{eq:DiscreteEstimate1} (with $\hspace{1pt}h_{{\scriptscriptstyle D}}=1$), \eqref{eq:Fundamental1} and integration by parts that
\begin{align*}
\|\Delta v\|_{L_2(D)}^2&\lesssim \int_{\fB_{\!\ssD}} (\Delta v)^2\phi\,dx\\
&= -\int_{\fB_{\!\ssD}} \nabla v\cdot\big(\phi\nabla(\Delta v)+(\Delta v)\nabla\phi\big)dx\\
&\lesssim \|\nabla v\|_{L_2(D)} \big(\|\nabla(\Delta v)\|_{L_2(D)}+\|\Delta v\|_{L_2(D)}\big) \lesssim \|\nabla v\|_{L_2(D)}\|\Delta v\|_{L_2(D)},
\end{align*}
which implies \eqref{eq:BLR} (with $\hspace{1pt}h_{{\scriptscriptstyle D}}=1$).
\end{proof}
The following maximum principle will be used in the analysis of the interpolation operator in Section~\ref{subsec:Interpolation} (cf. Lemma~\ref{lem:1DInterpolationError}), and in the stability and error analyses for virtual element methods in three dimensions (cf. \eqref{eq:3DIDtbar} and Lemma~\ref{lem:3DSDBdd}).
\begin{lemma}\label{lem:MaximumPrinciple}
There exists a positive constant $C$, depending only on $\rho_{\!{_\ssD}}$ and $k$, such that
\begin{equation*}
\|v\|_{{L_\infty(D)}}\leq C\big[ \|v\|_{L_\infty(\partial D)}+|v|_{H^1(D)}\big] \qquad\forall\,v\in \cQ^k(D).
\end{equation*}
\end{lemma}
\begin{proof}
There exists $q\in \mathbb{P}_{k+2}$ such that
\begin{equation}\label{eq:MP2}
\Delta q=\Delta v \quad\text{and}\quad \|\nabla q\|_{L_2(\fB_{\!\ssD})} \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\|\Delta v\|_{L_2(\fB_{\!\ssD})}
\end{equation}
by Lemma~\ref{lem:Polynomial} (with $p=\Delta v\in\mathbb{P}_k$).
\par Without loss of generality we may assume that the mean of $q$ over $D$ is zero.
Therefore we have
\begin{equation*}
\|q\|_{{L_\infty(D)}}\lesssim |q|_{{H^1(D)}} \lesssim |q|_{H^1(\tilde\fB_{\!\ssD})} \lesssim |q|_{H^1(\fB_{\!\ssD})} \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\|\Delta v\|_{L_2(\fB_{\!\ssD})} \lesssim |v|_{{H^1(D)}}
\end{equation*}
by \eqref{eq:discs}, \eqref{eq:DiscreteEstimate3}, Lemma~\ref{lem:BLR}, \eqref{eq:MP2} and scaling.
\par It then follows from the maximum principle for the harmonic function $v-q$ (cf. \cite{Evans:2010:PDE}) that
\begin{align*}
\|v\|_{{L_\infty(D)}}&\leq \|v-q\|_{{L_\infty(D)}}+\|q\|_{{L_\infty(D)}}\\
&\leq \|v-q\|_{L_\infty(\partial D)}+\|q\|_{{L_\infty(D)}} \lesssim \|v\|_{L_\infty(\partial D)}+|v|_{H^1(D)}.
\end{align*}
\end{proof}
\subsection{The Semi-norm ${|\!|\!|\cdot|\!|\!|_{k,D}}$}\label{subsec:Norm}
The semi-norm $|\!|\!|\cdot|\!|\!|_{k,D}$ on ${H^1(D)}$ is defined by
\begin{equation}\label{eq:tbarNormDef}
|\!|\!| \zeta|\!|\!|_{k,D}^2=\|\Pi_{k-2,D}^0\zeta\|_{L_2(D)}^2+ \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{e\in\cE_{\ssD}}\|\Pi_{k-1,e}^0 \zeta\|_{L_2(e)}^2,
\end{equation}
where $\Pi_{k-1,e}^0$ is the orthogonal projection from $L_2(e)$ onto $\mathbb{P}_{k-1}(e)$.
\par It follows from \eqref{eq:Trace} and \eqref{eq:tbarNormDef} that
\begin{equation}\label{eq:tbarBdd}
|\!|\!|\zeta|\!|\!|_{k,D}\leq C\big(\|\zeta\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{{H^1(D)}}\big) \qquad\forall\, \zeta\in {H^1(D)},
\end{equation}
where the positive constant $C$ depends only on $\rho_{\!{_\ssD}}$ and $k$.
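As a concrete illustration of the definition \eqref{eq:tbarNormDef} (again an aside, not part of the analysis), the sketch below evaluates $|\!|\!|\zeta|\!|\!|_{2,D}$ for $\zeta(x,y)=x$ on the unit square. Here $\Pi_{0,D}^0\zeta$ is the mean of $\zeta$ over $D$, and since $\zeta$ is affine on every edge, $\Pi_{1,e}^0\zeta=\zeta|_e$, so each edge term reduces to $\int_e\zeta^2\,ds$.

```python
import numpy as np

# D = unit square [0,1]^2, k = 2, zeta(x,y) = x; h_D = diam(D) = sqrt(2)
zeta = lambda x, y: x
h_D = np.sqrt(2.0)

# Gauss-Legendre nodes/weights mapped from [-1,1] to [0,1]
gx, gw = np.polynomial.legendre.leggauss(5)
t, w = (gx + 1.0) / 2.0, gw / 2.0

# ||Pi^0_{k-2,D} zeta||^2_{L2(D)}: for k = 2 this is the squared mean times |D| = 1
mean = sum(wi * wj * zeta(xi, yj) for xi, wi in zip(t, w) for yj, wj in zip(t, w))
volume_term = mean ** 2

# zeta is affine on each edge, so Pi^0_{1,e} zeta = zeta|_e and the edge
# contribution is int_e zeta^2 ds (all four edges have unit length)
edges = [lambda s: (s, 0.0), lambda s: (s, 1.0),
         lambda s: (0.0, s), lambda s: (1.0, s)]
edge_term = sum(sum(wi * zeta(*e(si)) ** 2 for si, wi in zip(t, w)) for e in edges)

seminorm_sq = volume_term + h_D * edge_term
# exact value: (1/2)^2 + sqrt(2) * (1/3 + 1/3 + 0 + 1)
assert np.isclose(seminorm_sq, 0.25 + h_D * 5.0 / 3.0)
```

The same quadrature-based evaluation applies to any polygon once the edges are parameterized; for $k>2$ the projections $\Pi_{k-2,D}^0$ and $\Pi_{k-1,e}^0$ would be computed by small least-squares solves instead of plain means.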
\par We also have, by \eqref{eq:G2} and a standard estimate for polynomials in one variable,
\begin{align}\label{eq:tbarBdd2}
|\!|\!| v|\!|\!|_{k,D}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\|v\|_{L_\infty(\partial D)} +\|\Pi_{k-2,D}^0 v\|_{L_2(D)}\\
&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\Big(\sum_{p\in\mathcal{N}_{\p D}} v^2(p)\Big)^\frac12 +\|\Pi_{k-2,D}^0 v\|_{L_2(D)} \qquad \forall\,v\in\cQ^k(D),\notag
\end{align}
where the hidden constant depends only on $\rho_{\!{_\ssD}}$ and $k$.
\subsection{Estimates for $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}$}\label{subsec:PODEstimates}
\par All the hidden constants in this subsection depend only on $\rho_{\!{_\ssD}}$ and $k$. Besides the obvious stability estimate
\begin{equation}\label{eq:PODStability1}
|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta|_{{H^1(D)}}\leq |\zeta|_{{H^1(D)}}\qquad\forall\,\zeta\in {H^1(D)}
\end{equation}
that follows from \eqref{eq:POD1}, we also have a stability estimate for $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}$ in terms of $\|\cdot\|_{L_2(D)}$ and the semi-norm $|\!|\!|\cdot|\!|\!|_{k,D}$.
\begin{lemma}\label{lem:PODStability2}
We have
\begin{equation*}
\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(D)}\lesssim |\!|\!| \zeta|\!|\!|_{k,D} \qquad\forall\,\zeta\in {H^1(D)}.
\end{equation*} \end{lemma} \begin{proof} It follows from \eqref{eq:POD1} that \begin{align*} &\int_D \nabla(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} \zeta)\cdot\nabla(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta)\,dx =\int_{\partial D}\zeta n\cdot\nabla(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta)ds -\int_D \zeta \Delta (\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta)dx\\ &\hspace{40pt}\leq \Big(\sum_{e\in\cE_{\ssD}}\|\Pi_{k-1,e}^0\zeta\|_{L_2(e)}^2\Big)^\frac12 \Big(\sum_{e\in\cE_{\ssD}}\|\nabla\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(e)}^2\Big)^\frac12\\ &\hspace{80pt}+\|\Pi_{k-2,D}^0\zeta\|_{L_2(D)}\|\Delta(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta)\|_{L_2(D)}, \end{align*} and we have, by \eqref{eq:DiscreteEstimate0} and \eqref{eq:DiscreteEstimate1}, \begin{align*} \sum_{e\in\cE_{\ssD}}\|\nabla\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(e)}^2\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\nabla \Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(D)}^2 \quad\text{and}\quad \|\Delta(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta)\|_{L_2(D)}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\nabla \Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(D)}. \end{align*} It follows that \begin{equation*} \|\nabla\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(D)} \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\Big(\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{e\in\cE_{\ssD}}\|\Pi_{k-1,e}^0\zeta\|_{L_2(e)}^2 +\|\Pi_{k-2,D}^0\zeta\|_{L_2(D)}^2\Big)^\frac12 =\hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\!|\!|\zeta|\!|\!|_{k,D}. 
\end{equation*}
\par Moreover, \eqref{eq:G2}, \eqref{eq:POD2} and \eqref{eq:tbarNormDef} imply
\begin{align*}
\Big|\int_{\partial D} \Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} \zeta\,ds\Big| =\Big|\sum_{e\in\cE_{\ssD}}\int_e \Pi_{0,e}^0\zeta\,ds\Big| &\leq \sum_{e\in\cE_{\ssD}}\sqrt{h_e}\|\Pi_{k-1,e}^0\zeta\|_{L_2(e)}\\
&\lesssim \sqrt{\hspace{1pt}h_{{\scriptscriptstyle D}}} \Big(\sum_{e\in\cE_{\ssD}}\|\Pi_{k-1,e}^0\zeta\|_{L_2(e)}^2\Big)^\frac12\leq |\!|\!|\zeta|\!|\!|_{k,D}.
\end{align*}
\par Finally we have, by \eqref{eq:PF2},
\begin{align*}
\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(D)}&\leq \Big|\int_{\partial D} \Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} \zeta\,ds\Big| + \hspace{1pt}h_{{\scriptscriptstyle D}} \|\nabla\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(D)} \lesssim |\!|\!| \zeta|\!|\!|_{k,D}.
\end{align*}
\end{proof}
\par We can now establish error estimates for $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}$.
\par \begin{lemma}\label{lem:PODErrors} We have \begin{alignat}{3} \|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} \zeta\|_{L_2(D)}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell+1}|\zeta|_{H^{\ell+1}(D)} &\qquad&\forall\,\zeta\in H^{\ell+1}(D),\,0\leq\ell\leq k, \label{eq:LTwoPOD}\\ |\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta|_{{H^1(D)}}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell}|\zeta|_{H^{\ell+1}(D)} &\qquad&\forall\,\zeta\in H^{\ell+1}(D),\,1\leq\ell\leq k, \label{eq:HOnePOD}\\ |\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} \zeta|_{H^2(D)}&\lesssim h_D^{\ell-1}|\zeta|_{H^{\ell+1}(D)}&\qquad& \forall\,\zeta\in H^{\ell+1}(D),\,1\leq\ell\leq k.\label{eq:HTwoPOD} \end{alignat} \end{lemma} \begin{proof} The estimate \eqref{eq:HOnePOD} follows immediately from \eqref{eq:BHEstimates}, \eqref{eq:Projection} and \eqref{eq:PODStability1}. \par In view of \eqref{eq:DiscreteEstimate1} and \eqref{eq:PODStability1}, we have \begin{equation*} |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta|_{H^2(D)}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta|_{{H^1(D)}}\leq\hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\zeta|_{{H^1(D)}}, \end{equation*} which together with \eqref{eq:BHEstimates} and \eqref{eq:Projection} implies \eqref{eq:HTwoPOD}. \par Similarly we have, by \eqref{eq:tbarBdd} and Lemma~\ref{lem:PODStability2} \begin{equation*} \|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(D)}\lesssim |\!|\!| \zeta|\!|\!|_{k,D} \lesssim \|\zeta\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{{H^1(D)}}, \end{equation*} which together with \eqref{eq:BHEstimates} and \eqref{eq:Projection} implies \eqref{eq:LTwoPOD}. 
\end{proof} \subsection{Estimates for $\Pi_{k,D}^0\hspace{1pt}$}\label{subsec:PDZEstimates} All the hidden constants in this subsection only depend on $\rho_{\!{_\ssD}}$ and $k$. We have an obvious stability estimate \begin{equation}\label{eq:PDZLTwo} \|\Pi_{k,D}^0\hspace{1pt} \zeta\|_{L_2(D)}\leq \|\zeta\|_{L_2(D)} \qquad\forall\,\zeta\in L_2(D) \end{equation} % and an obvious relation \begin{equation}\label{eq:PDZInvariance} \Pi_{k,D}^0\hspace{1pt} q=q\qquad\forall\,q\in \mathbb{P}_k(D). \end{equation} % \par It follows from \eqref{eq:BHEstimates}, \eqref{eq:PDZLTwo} and \eqref{eq:PDZInvariance} that \begin{equation} \label{eq:PDZLTwoError} \|\zeta-\Pi_{k,D}^0\hspace{1pt}\zeta\|_{L_2(D)}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell+1}|\zeta|_{H^{\ell+1}(D)} \qquad\forall\,\zeta\in H^{\ell+1}(D),\, 0\leq\ell\leq k. \end{equation} \par We also have a stability estimate for $\Pi_{k,D}^0\hspace{1pt}$ in $|\cdot|_{{H^1(D)}}$. \begin{lemma}\label{lem:PDZHOne} We have % \begin{equation}\label{eq:PDZHOne} |\Pi_{k,D}^0\hspace{1pt} \zeta|_{{H^1(D)}}\lesssim |\zeta|_{{H^1(D)}} \qquad\forall\,\zeta\in {H^1(D)}. 
\end{equation}
\end{lemma}
\begin{proof}
This is a consequence of \eqref{eq:DiscreteEstimate1}, \eqref{eq:PODStability1}, \eqref{eq:LTwoPOD} and \eqref{eq:PDZLTwoError}:
\begin{align*}
|\Pi_{k,D}^0\hspace{1pt} \zeta|_{{H^1(D)}}&\leq |\Pi_{k,D}^0\hspace{1pt} \zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta|_{{H^1(D)}}+|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta|_{{H^1(D)}}\\
&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\Pi_{k,D}^0\hspace{1pt} \zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(D)}+|\zeta|_{{H^1(D)}}\\
&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\big(\|\Pi_{k,D}^0\hspace{1pt} \zeta-\zeta\|_{L_2(D)}+\|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta\|_{L_2(D)}\big) +|\zeta|_{{H^1(D)}} \lesssim |\zeta|_{{H^1(D)}}.
\end{align*}
\end{proof}
\par We can then derive error estimates for $\Pi_{k,D}^0\hspace{1pt}$ by combining the Bramble-Hilbert estimates and Lemma~\ref{lem:PDZHOne}.
\begin{lemma}\label{lem:PDZErrors}
We have
\begin{alignat}{3}
|\zeta-\Pi_{k,D}^0\hspace{1pt}\zeta|_{{H^1(D)}}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^\ell|\zeta|_{H^{\ell+1}(D)} &\qquad&\forall\,\zeta\in H^{\ell+1}(D), \, 1\leq\ell\leq k, \label{eq:PDZHOneError}\\
|\zeta-\Pi_{k,D}^0\hspace{1pt}\zeta|_{H^2(D)}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell-1}|\zeta|_{H^{\ell+1}(D)} &\qquad&\forall\,\zeta\in H^{\ell+1}(D), \, 1\leq\ell\leq k.\label{eq:PDZHTwoError}
\end{alignat}
\end{lemma}
\begin{proof}
In view of \eqref{eq:PDZInvariance}, the estimate \eqref{eq:PDZHOneError} follows from \eqref{eq:BHEstimates} and \eqref{eq:PDZHOne}.
\par Similarly the estimate \eqref{eq:PDZHTwoError} follows from \eqref{eq:BHEstimates}, \eqref{eq:PDZInvariance} and the inequality
\begin{equation*}
|\Pi_{k,D}^0\hspace{1pt}\zeta|_{H^2(D)}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\Pi_{k,D}^0\hspace{1pt}\zeta|_{H^1(D)} \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\zeta|_{H^1(D)}
\end{equation*}
obtained from \eqref{eq:DiscreteEstimate1} and \eqref{eq:PDZHOne}.
\end{proof}
The following is another useful estimate.
\begin{lemma}\label{lem:PDZcQD}
We have
\begin{equation*}
\|\Pi_{k,D}^0\hspace{1pt} v\|_{L_2(D)}\lesssim |\!|\!| v|\!|\!|_{k,D} \qquad\forall\,v\in \cQ^k(D).
\end{equation*}
\end{lemma}
\begin{proof}
Let $v\in\cQ^k(D)$ be arbitrary. It follows from \eqref{eq:Condition3} that
\begin{align*}
\|\Pi_{k,D}^0\hspace{1pt} v\|_{L_2(D)}^2&=\|\Pi_{k-2,D}^0 v\|_{L_2(D)}^2+\|(\Pi_{k,D}^0\hspace{1pt}-\Pi_{k-2,D}^0)v\|_{L_2(D)}^2\\
&=\|\Pi_{k-2,D}^0 v\|_{L_2(D)}^2+\|(\Pi_{k,D}^0\hspace{1pt}-\Pi_{k-2,D}^0)\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v\|_{L_2(D)}^2\\
&\leq \|\Pi_{k-2,D}^0 v\|_{L_2(D)}^2+\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v\|_{L_2(D)}^2,
\end{align*}
which together with \eqref{eq:tbarNormDef} and Lemma~\ref{lem:PODStability2} completes the proof.
\end{proof}
\subsection{Inverse Estimates}\label{subsec:InverseEstimate}
These are estimates that bound the semi-norm $|v|_{H^1(D)}$ of a virtual element function $v\in\cQ^k(D)$ in terms of $\|\Pi_{k-2,D}^0 v\|_{L_2(D)}$ and norms that only involve the boundary data of $v$. They are crucial for the stability analysis of virtual element methods in Section~\ref{subsec:DiscreteProblem}.
\par We begin with a key lemma.
\begin{lemma}\label{lem:Fundamental}
There exists a positive constant $C$, depending only on $\rho_{\!{_\ssD}}$ and $k$, such that
\begin{equation}\label{eq:Fundamental}
|v|_{{H^1(D)}}\leq C\big[ \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\!|\!| v|\!|\!|_{k,D}+|v|_{H^{1/2}(\partial D)}\big].
\end{equation}
\end{lemma}
\begin{proof}
By scaling we may assume $\hspace{1pt}h_{{\scriptscriptstyle D}}=1$.
\par Let $\mathrm {Tr}^\dag$ be the lifting operator from Section~\ref{subsec:Lifting}. The function $w=\mathrm {Tr}^\dag v\in H^1(D)$ satisfies $w=v$ on $\partial D$ and
\begin{equation}\label{eq:w}
\|w\|_{H^1(D)}\lesssim |v|_{H^{1/2}(\partial D)}.
\end{equation}
\par Let $\phi$ be the same (bump) function as in the proof of Lemma~\ref{lem:BLR} and $\zeta=w+p\phi$, where the polynomial $p\in \mathbb{P}_k(D)$ is determined by
\begin{equation*}
\int_D (\zeta-v)q\,dx=0 \qquad\forall\,q\in \mathbb{P}_k(D),
\end{equation*}
or equivalently
\begin{equation}\label{eq:Fundamental2}
\int_D pq\phi\,dx=\int_D (v-w)q\,dx=\int_D (\Pi_{k,D}^0\hspace{1pt} v-w)q\,dx \qquad\forall\,q\in \mathbb{P}_k(D).
\end{equation}
Then we have
\begin{equation}\label{eq:Fundamental3}
|v|_{{H^1(D)}}\leq |\zeta|_{{H^1(D)}}
\end{equation}
by Lemma~\ref{lem:MEP} and, in view of \eqref{eq:DiscreteEstimate1}, \eqref{eq:BumpFunction} and \eqref{eq:w},
\begin{equation}\label{eq:Fundamental4}
|\zeta|_{{H^1(D)}}\leq |w|_{{H^1(D)}}+|p\phi|_{{H^1(D)}} \lesssim |w|_{{H^1(D)}}+\|p\|_{L_2(D)}\lesssim |v|_{H^{1/2}(\partial D)}+\|p\|_{L_2(D)}.
\end{equation}
\par Note that \eqref{eq:Fundamental1} and \eqref{eq:Fundamental2} imply
\begin{equation*}
\|p\|_{L_2(D)}\lesssim \|\Pi_{k,D}^0\hspace{1pt} v-w\|_{L_2(D)}
\end{equation*}
and hence
\begin{equation}\label{eq:Fundamental5}
\|p\|_{L_2(D)} \lesssim \|\Pi_{k,D}^0\hspace{1pt} v\|_{L_2(D)}+\|w\|_{L_2(D)}\lesssim |\!|\!| v|\!|\!|_{k,D}+|v|_{H^{1/2}(\partial D)}
\end{equation}
by Lemma~\ref{lem:PDZcQD} and \eqref{eq:w}.
\par The estimate \eqref{eq:Fundamental} (with $\hspace{1pt}h_{{\scriptscriptstyle D}}=1$) follows from \eqref{eq:Fundamental3}--\eqref{eq:Fundamental5}.
\end{proof}
\begin{lemma}\label{lem:InverseEstimate1}
There exists a positive constant $C$, depending only on $\rho_{\!{_\ssD}}$ and $k$, such that
\begin{equation}\label{eq:InverseEstimate1}
|v|_{{H^1(D)}}\leq C \big[\hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\!|\!| v|\!|\!|_{k,D} +\hspace{1pt}h_{{\scriptscriptstyle D}}^{1/2}\|\partial v/\partial s\|_{L_2(\partial D)}\big] \qquad\forall\,v\in\cQ^k(D),
\end{equation}
where $\partial v/\partial s$ is a tangential derivative of $v$.
\end{lemma}
\begin{proof}
The estimate \eqref{eq:InverseEstimate1} follows immediately from \eqref{eq:HalfAndOne} and Lemma~\ref{lem:Fundamental}.
\end{proof}
\begin{lemma}\label{lem:InverseEstimate2}
There exists a positive constant $C$, depending only on $\rho_{\!{_\ssD}}$, $|\cE_{\ssD}|$ and $k$, such that
\begin{equation}\label{eq:InverseEstimate2}
|v|_{{H^1(D)}}\leq C \big[ \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\!|\!| v|\!|\!|_{k,D} +\sqrt{\ln(1+\tau_{\!\ssD})}\|v\|_{L_\infty(\partial D)} \big]\qquad\forall\,v\in \cQ^k(D),
\end{equation}
where
\begin{equation}\label{eq:TauD}
\tau_{\!\ssD}= \frac{\max_{e\in\cE_{\ssD}}h_e}{\min_{e\in\cE_{\ssD}}h_e}.
\end{equation}
\end{lemma}
\begin{proof}
According to \cite[Lemma~5.1]{BLR:2016:VEM}, we have
\begin{equation}\label{eq:HalfAndInfty}
|v|_{H^{1/2}(\partial D)}\leq C \sqrt{\ln(1+\tau_{\!\ssD})}\|v\|_{L_\infty(\partial D)} \qquad\forall\,v\in\cQ^k(D),
\end{equation}
where the positive constant $C$ only depends on $\rho_{\!{_\ssD}}$, $|\cE_{\ssD}|$ and $k$.
\par The estimate \eqref{eq:InverseEstimate2} follows from Lemma~\ref{lem:Fundamental} and \eqref{eq:HalfAndInfty}.
\end{proof}
\par Combining \eqref{eq:tbarBdd2} and \eqref{eq:InverseEstimate2}, we have the following corollary.
\begin{corollary}\label{cor:InverseEstimate3}
There exists a positive constant $C$, depending only on $\rho_{\!{_\ssD}}$, $|\cE_{\ssD}|$ and $k$, such that
\begin{equation*}
|v|_{H^1(D)}\leq C\big[ \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\Pi_{k-2,D}^0 v\|_{L_2(D)}+\sqrt{\ln(1+\tau_{\!\ssD})} \|v\|_{L_\infty(\partial D)}\big] \qquad\forall\,v\in\cQ^k(D).
\end{equation*}
\end{corollary}
\subsection{The Interpolation Operator}\label{subsec:Interpolation}
For $s>1$ the interpolation operator $I_{k,D}:H^s(D)\longrightarrow\cQ^k(D)$ is defined by the condition that $\zeta$ and $I_{k,D}\zeta$ share the same degrees of freedom, i.e., $I_{k,D}\zeta$ agrees with $\zeta$ at the nodes in $\mathcal{N}_{\p D}$ and
\begin{equation}\label{eq:IDDef}
\Pi_{k-2,D}^0I_{k,D}\zeta=\Pi_{k-2,D}^0\zeta.
\end{equation}
\par It is clear that
\begin{equation}\label{eq:IDInvariance}
I_{k,D} q=q\qquad\forall\,q\in\mathbb{P}_k(D),
\end{equation}
and by a standard estimate for polynomials in one variable,
\begin{equation}\label{eq:TrivialBdd}
\|I_{k,D} \zeta\|_{L_\infty(\partial D)}\leq C \max_{p\in\mathcal{N}_{\p D}}|\zeta(p)| \leq C\|\zeta\|_{L_\infty(\partial D)}\qquad \forall\,\zeta\in H^s(D) \;\text{and}\; s>1,
\end{equation}
where the positive constant $C$ only depends on $k$.
\par For the three dimensional Poisson problem, if the solution belongs to $H^{\ell+1}({\Omega})$, then its restriction to a face $F$ of a polyhedral subdomain belongs to $H^{\ell+\frac12}(F)$. Therefore below we also consider the interpolants of functions in $H^{\ell+\frac12}(D)$.
\par We begin with several stability estimates for the interpolation operator.
\begin{lemma}\label{lem:IDtbar}
There exists a positive constant $C$, depending only on $\rho_{\!{_\ssD}}$ and $k$, such that
\begin{alignat}{3}
|\!|\!| I_{k,D}\zeta|\!|\!|_{k,D}&\leq C\big[\|\zeta\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^1(D)} +\hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta|_{H^2(D)}\big] &\qquad&\forall\,\zeta\in H^2(D),\label{eq:IDtbar1}\\
|\!|\!| I_{k,D}\zeta|\!|\!|_{k,D}&\leq C\big[\|\zeta\|_{L_2(D)} +\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^1(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{3/2}|\zeta|_{H^{3/2}(D)}\big] &\qquad&\forall\,\zeta\in H^{3/2}(D).\label{eq:IDtbar2}
\end{alignat}
\end{lemma}
\begin{proof}
Let $\zeta\in {H^2(D)}$ (resp., $H^{3/2}(D)$) be arbitrary. From \eqref{eq:G2}, \eqref{eq:tbarBdd2}, \eqref{eq:IDDef} and \eqref{eq:TrivialBdd}, we have
\begin{align*}
|\!|\!| I_{k,D}\zeta|\!|\!|_{k,D}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\|I_{k,D}\zeta\|_{L_\infty(\partial D)} +\|\Pi_{k-2,D}^0I_{k,D}\zeta\|_{L_2(D)}\\
&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\|\zeta\|_{L_\infty(\partial D)}+\|\Pi_{k-2,D}^0 \zeta\|_{L_2(D)}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\|\zeta\|_{L_\infty(D)},
\end{align*}
which together with \eqref{eq:Sobolev} (resp., \eqref{eq:Sobolev2}) implies \eqref{eq:IDtbar1} (resp., \eqref{eq:IDtbar2}).
\end{proof}
\begin{lemma}\label{lem:IDFundamental1}
We have
\begin{align}
|I_{k,D}\zeta|_{{H^1(D)}}&\lesssim |\zeta|_{{H^1(D)}}+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^2(D)} \label{eq:IDHOne}\\
\intertext{for all $\zeta\in{H^2(D)}$, and}
|I_{k,D}\zeta|_{{H^1(D)}}&\lesssim |\zeta|_{{H^1(D)}}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{1/2}|\zeta|_{H^{3/2}(D)}\label{eq:IDHOneHalf}
\end{align}
for all $\zeta\in H^{3/2}(D)$, where the hidden constants only depend on $\rho_{\!{_\ssD}}$, $|\cE_{\ssD}|$ and $k$.
\end{lemma}
\begin{proof}
Let $\zeta\in H^2(D)$ be arbitrary and $\bar\zeta_D$ be the mean of $\zeta$ over $D$.
Since $I_{k,D}\bar\zeta_D=\bar\zeta_D$, it follows from \eqref{eq:InverseEstimate1} that
\begin{align*}
|I_{k,D}\zeta|_{{H^1(D)}}^2&=|I_{k,D}(\zeta-\bar\zeta_D)|_{H^1(D)}^2\\
&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}|\!|\!| I_{k,D}(\zeta-\bar\zeta_D)|\!|\!|_{k,D}^2 +\hspace{1pt}h_{{\scriptscriptstyle D}}\|\partial (I_{k,D}\zeta)/\partial s\|_{L_2(\partial D)}^2.
\end{align*}
We have, by a standard interpolation estimate in one dimension and \eqref{eq:Trace} (applied to the first order derivatives of $\zeta$),
\begin{align*}
\hspace{1pt}h_{{\scriptscriptstyle D}}\|\partial(I_{k,D} \zeta)/\partial s\|_{L_2(\partial D)}^2&\lesssim\sum_{e\in\cE_{\ssD}} \hspace{1pt}h_{{\scriptscriptstyle D}}\|\partial\zeta/\partial s\|_{L_2(e)}^2\\
&\lesssim \sum_{e\in\cE_{\ssD}}\big[|\zeta|_{H^1(D)}^2+ \hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta|_{H^2(D)}^2\big]\lesssim |\zeta|_{H^1(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta|_{H^2(D)}^2.
\end{align*}
\par These two estimates together with \eqref{eq:PF2} and \eqref{eq:IDtbar1} imply \eqref{eq:IDHOne}.
\par Similarly we obtain \eqref{eq:IDHOneHalf} by replacing \eqref{eq:Trace} with the estimate in Lemma~\ref{lem:CZ1}.
\end{proof}
\begin{lemma}\label{lem:IDFundamental2}
We have
\begin{align}
\|I_{k,D}\zeta\|_{L_2(D)}&\lesssim \|\zeta\|_{L_2(D)} +\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{{H^1(D)}}+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta|_{H^2(D)} \label{eq:IDLTwo}\\
\intertext{for all $\zeta\in {H^2(D)}$, and}
\|I_{k,D}\zeta\|_{L_2(D)}&\lesssim \|\zeta\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{{H^1(D)}}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{3/2}|\zeta|_{H^{3/2}(D)} \label{eq:IDLTwoHalf}
\end{align}
for all $\zeta\in H^{3/2}(D)$, where the hidden constants only depend on $\rho_{\!{_\ssD}}$, $|\cE_{\ssD}|$ and $k$.
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:PODStability2} and \eqref{eq:LTwoPOD} we have
\begin{align}\label{eq:LTwoID}
\|I_{k,D}\zeta\|_{L_2(D)}&\lesssim \|I_{k,D}\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta\|_{L_2(D)} +\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta\|_{L_2(D)}\\
&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}|I_{k,D}\zeta|_{H^1(D)}+|\!|\!| I_{k,D}\zeta|\!|\!|_{k,D}.\notag
\end{align}
The estimates \eqref{eq:IDLTwo} and \eqref{eq:IDLTwoHalf} follow from \eqref{eq:LTwoID}, Lemma~\ref{lem:IDtbar} and Lemma~\ref{lem:IDFundamental1}.
\end{proof}
\par We can now derive error estimates for the interpolation operator.
\begin{lemma}\label{lem:InterpolationError}
We have, for $1\leq \ell\leq k$,
\begin{alignat}{3}
\|\zeta-I_{k,D}\zeta\|_{L_2(D)}+\|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta\|_{L_2(D)} &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell+1}|\zeta|_{H^{\ell+1}(D)}&\qquad& \forall\,\zeta\in H^{\ell+1}(D), \label{eq:LTwoPODID}\\
|\zeta-I_{k,D}\zeta|_{{H^1(D)}}+|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{{H^1(D)}} &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell}|\zeta|_{H^{\ell+1}(D)}&\qquad& \forall\,\zeta\in H^{\ell+1}(D), \label{eq:HOnePODID}\\
|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{H^2(D)}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell-1}|\zeta|_{H^{\ell+1}(D)} &\qquad&\forall\,\zeta\in H^{\ell+1}(D), \label{eq:HTwoPODID}
\end{alignat}
and
\begin{alignat}{3}
\|\zeta-I_{k,D} \zeta\|_{L_2(D)}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell+\frac12}|\zeta|_{H^{\ell+\frac12}(D)} &\qquad&\forall\,\zeta\in H^{\ell+\frac12}(D), \label{eq:HalfIDLTwoError}\\
|\zeta-I_{k,D} \zeta|_{H^1(D)}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell-\frac12}|\zeta|_{H^{\ell+\frac12}(D)} &\qquad&\forall\,\zeta\in
H^{\ell+\frac12}(D), \label{eq:HalfIDHOneError}
\end{alignat}
where the hidden constants only depend on $\rho_{\!{_\ssD}}$, $|\cE_{\ssD}|$ and $k$.
\end{lemma}
\begin{proof}
In view of \eqref{eq:tbarBdd2}, Lemma~\ref{lem:PODStability2}, \eqref{eq:IDtbar1} and \eqref{eq:IDLTwo}, we have, for any $\zeta\in H^2(D)$,
\begin{align*}
\|I_{k,D}\zeta\|_{L_2(D)}+\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} I_{k,D}\zeta\|_{L_2(D)} &\lesssim \|I_{k,D}\zeta\|_{L_2(D)}+|\!|\!| I_{k,D}\zeta|\!|\!|_{k,D}\\
&\lesssim \|\zeta\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{{H^1(D)}}+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta|_{H^2(D)},
\end{align*}
which together with \eqref{eq:BHEstimates}, \eqref{eq:Projection} and \eqref{eq:IDInvariance} implies \eqref{eq:LTwoPODID}.
\par From \eqref{eq:PODStability1} and \eqref{eq:IDHOne} we also have, for any $\zeta\in H^2(D)$,
\begin{align*}
& |I_{k,D}\zeta|_{{H^1(D)}}+|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{{H^1(D)}}\leq 2|I_{k,D}\zeta|_{{H^1(D)}} \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\zeta\|_{L_2(D)}+|\zeta|_{{H^1(D)}}+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^2(D)},
\end{align*}
which together with \eqref{eq:BHEstimates}, \eqref{eq:Projection} and \eqref{eq:IDInvariance} implies \eqref{eq:HOnePODID}.
\par Similarly, by using the relation \begin{align*} |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{H^2(D)}&=|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta-\Pi_{1,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta|_{H^2(D)}\\ &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta-\Pi_{1,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta|_{H^1(D)}\\ &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|I_{k,D}\zeta|_{H^1(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\zeta|_{H^1(D)} \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}\|\zeta\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\zeta|_{H^1(D)}+|\zeta|_{H^2(D)} \end{align*} that follows from \eqref{eq:DiscreteEstimate1}, \eqref{eq:PODStability1} and \eqref{eq:IDHOne}, we can establish \eqref{eq:HTwoPODID} through \eqref{eq:BHEstimates}, \eqref{eq:Projection} and \eqref{eq:IDInvariance}. \par Finally we obtain \eqref{eq:HalfIDLTwoError} and \eqref{eq:HalfIDHOneError} by replacing \eqref{eq:BHEstimates} (resp., \eqref{eq:IDtbar1}, \eqref{eq:IDHOne} and \eqref{eq:IDLTwo}) with \eqref{eq:BHEstimates2} (resp., \eqref{eq:IDtbar2}, \eqref{eq:IDHOneHalf} and \eqref{eq:IDLTwoHalf}) in the arguments for \eqref{eq:LTwoPODID} and \eqref{eq:HOnePODID}. \end{proof} \par The proof for the following result is similar. \begin{lemma}\label{lem:LTwoIDSpecial} There exists a positive constant $C$, depending only on $\rho_{\!{_\ssD}}$, $|\cE_{\ssD}|$ and $k$, such that \begin{equation*} \|I_{k,D} \zeta-\Pi_{1,D}^0I_{k,D} \zeta\|_{L_2(D)}\leq C \hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta|_{H^2(D)} \qquad\forall\,\zeta\in H^2(D). \end{equation*} \end{lemma} \par We also have interpolation error estimates in the $L_\infty$ norm.
\begin{lemma}\label{lem:1DInterpolationError} There exists a positive constant $C$, depending only on $\rho_{\!{_\ssD}}$, $N$ and $k$, such that \begin{alignat}{3} \|\zeta-I_{k,D}\zeta\|_{L_\infty(D)}&\leq C \hspace{1pt}h_{{\scriptscriptstyle D}}^\ell|\zeta|_{H^{\ell+1}(D)}&\qquad& \forall\,\zeta\in H^{\ell+1}(D) \;\text{and}\; 1\leq\ell\leq k,\label{eq:1DInterpolationError}\\ \|\zeta-I_{k,D}\zeta\|_{L_\infty(D)}&\leq C\hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell-\frac12}|\zeta|_{H^{\ell+\frac12}(D)} &\qquad& \forall\,\zeta\in H^{\ell+\frac12}(D) \;\text{and}\;1\leq\ell\leq k. \label{eq:1DInterpolationErrorHalf} \end{alignat} \end{lemma} \begin{proof} It follows from \eqref{eq:Sobolev}, Lemma~\ref{lem:MaximumPrinciple}, \eqref{eq:TrivialBdd} and \eqref{eq:IDHOne} that, for any $\zeta\in {H^2(D)}$, \begin{equation*} \|I_{k,D}\zeta\|_{L_\infty(D)}\lesssim \|I_{k,D}\zeta\|_{L_\infty(\partial D)}+|I_{k,D}\zeta|_{H^1(D)} \lesssim\hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\zeta\|_{L_2(D)}+|\zeta|_{H^1(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^2(D)}, \end{equation*} which together with \eqref{eq:BHEstimates} and \eqref{eq:IDInvariance} implies \eqref{eq:1DInterpolationError}. \par The proof for \eqref{eq:1DInterpolationErrorHalf} is similar, but with \eqref{eq:Sobolev} (resp., \eqref{eq:BHEstimates} and \eqref{eq:IDHOne}) replaced by \eqref{eq:Sobolev2} (resp., \eqref{eq:BHEstimates2} and \eqref{eq:IDHOneHalf}). \end{proof} \subsection{The Null Space of $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}$}\label{subsec:Kernel} Let $\mathcal{N}(\POD)=\{v\in\cQ^k(D):\,\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v=0\}$ be the null space of the projection $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}$. The inverse estimates in Section~\ref{subsec:InverseEstimate} can be simplified for functions in $\mathcal{N}(\POD)$.
\begin{lemma}\label{lem:KerPOD} There exists a positive constant $C$, depending only on $\rho_{\!{_\ssD}}$ and $k$, such that \begin{equation}\label{eq:KerPOD} |\!|\!| v|\!|\!|_{k,D}^2\leq C \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{e\in\cE_{\ssD}}\|\Pi_{k-1,e}^0 v\|_{L_2(e)}^2 \qquad\forall\,v\in \mathcal{N}(\POD). \end{equation} \end{lemma} \begin{proof} We can assume $k\geq2$ since $\Pi_{k-2,D}^0\hspace{1pt}v=0$ for $k=1$. \par Let $v\in\mathcal{N}(\POD)$ be arbitrary. It follows from \eqref{eq:POD1} that \begin{equation}\label{eq:KerPOD1} \int_D v(\Delta q)\,dx=\int_{\partial D}v(n\cdot\nabla q)\,ds \qquad\forall \,q\in \mathbb{P}_k(D). \end{equation} \par According to \eqref{eq:discs} and Lemma~\ref{lem:Polynomial}, given any $p\in\mathbb{P}_{k-2}$, there exists $q\in\mathbb{P}_k$ such that $\Delta q=p$ and \begin{equation}\label{eq:KerPOD12} \|\nabla q\|_{L_2(D)}\leq\|\nabla q\|_{L_2(\tilde\fB_{\!\ssD})}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\|p\|_{L_2(\tilde\fB_{\!\ssD})} \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\|p\|_{L_2(\fB_{\!\ssD})}\leq \hspace{1pt}h_{{\scriptscriptstyle D}}\|p\|_{L_2(D)}. \end{equation} \par It follows from \eqref{eq:DiscreteEstimate0}, \eqref{eq:KerPOD1} and \eqref{eq:KerPOD12} that \begin{align*} \int_D vp\,dx &\leq \Big(\sum_{e\in\cE_{\ssD}}\|\Pi_{k-1,e}^0 v\|_{L_2(e)}^2\Big)^\frac12 \Big(\sum_{e\in\cE_{\ssD}}\|\nabla q\|_{L_2(e)}^2\Big)^\frac12\\ &\lesssim \Big(\sum_{e\in\cE_{\ssD}}\|\Pi_{k-1,e}^0 v\|_{L_2(e)}^2\Big)^\frac12 \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1/2} \|\nabla q\|_{L_2(D)}\\ &\lesssim \Big(\sum_{e\in\cE_{\ssD}}\|\Pi_{k-1,e}^0 v\|_{L_2(e)}^2\Big)^\frac12 \hspace{1pt}h_{{\scriptscriptstyle D}}^{1/2} \|p\|_{L_2(D)} \end{align*} and hence \begin{equation}\label{eq:KerPod13} \|\Pi_{k-2,D}^0v\|_{L_2(D)}=\max_{0\neq p\in\mathbb{P}_{k-2}}\frac{\int_D vp\,dx}{\|p\|_{L_2(D)}} \lesssim \Big(\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{e\in\cE_{\ssD}}\|\Pi_{k-1,e}^0 v\|_{L_2(e)}^2\Big)^\frac12.
\end{equation} \par The estimate \eqref{eq:KerPOD} follows from \eqref{eq:tbarNormDef} and \eqref{eq:KerPod13}. \end{proof} \begin{lemma}\label{lem:BdryEst} For any $v\in\mathbb{P}_k(\p D)$ that vanishes at some point on $\partial D$, we have \begin{equation*} \|v\|_{L_2(\partial D)} \leq C \hspace{1pt}h_{{\scriptscriptstyle D}} \|\partial v/\partial s\|_{L_2(\partial D)}, \end{equation*} where $\partial v /\partial s$ denotes a tangential derivative of $v$ along $\partial D$ and the positive constant $C$ only depends on $k$. \end{lemma} \begin{proof} It follows from a Poincar\'e-Friedrichs inequality on the circle $\partial\fB_{\!\ssD}$ that \begin{equation*} \|\zeta\|_{L_2(\partial\fB_{\!\ssD})}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^1(\partial\fB_{\!\ssD})} \end{equation*} for any $\zeta\in H^1(\partial\fB_{\!\ssD})$ that vanishes at some point on $\partial\fB_{\!\ssD}$. The lemma then follows from \eqref{eq:L2Iso1} and \eqref{eq:H1Iso1}. \end{proof} \par Note that every $v\in \mathcal{N}(\POD)$ must vanish at some point on $\partial D$ because $\int_{\partial D}v\,ds=0$ by \eqref{eq:POD2}. Therefore, in view of Lemma~\ref{lem:KerPOD} and Lemma~\ref{lem:BdryEst}, the inverse estimates \eqref{eq:InverseEstimate1} and \eqref{eq:InverseEstimate2} can be simplified to \begin{alignat*}{3} |v|_{{H^1(D)}}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{1/2}\|\partial v/\partial s\|_{L_2(\partial D)} &\qquad&\forall\,v\in \mathcal{N}(\POD)\\ \intertext{with a hidden constant depending only on $\rho_{\!{_\ssD}}$ and $k$, and} |v|_{{H^1(D)}}&\lesssim \sqrt{\ln(1+\tau_{\!\ssD})}\|v\|_{L_\infty(\partial D)} &\qquad&\forall\,v\in \mathcal{N}(\POD) \end{alignat*} with a hidden constant that also depends on $|\cE_{\ssD}|$.
\par Hence we have \begin{alignat}{3} |v|_{H^1(D)}^2&=|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v|_{H^1(D)}^2+|v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v|_{H^1(D)}^2\label{eq:EnergyBdd1}\\ &\lesssim |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v|_{H^1(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}\|\partial (v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)/\partial s\|_{L_2(\partial D)}^2 &\qquad&\forall\,v\in\cQ^k(D),\notag\\ \intertext{with a hidden constant depending only on $\rho_{\!{_\ssD}}$ and $k$, and also} |v|_{H^1(D)}^2&\lesssim |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v|_{H^1(D)}^2+\ln(1+\tau_{\!\ssD})\|v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v\|_{L_\infty(\partial D)}^2 &\qquad&\forall\,v\in\cQ^k(D),\label{eq:EnergyBdd2} \end{alignat} with a hidden constant that also depends on $|\cE_{\ssD}|$. \section{The Poisson Problem in Two Dimensions}\label{sec:Poisson2D} In this section we consider virtual element methods for the Poisson problem \eqref{eq:Poisson} in two dimensions and establish error estimates under two global shape regularity assumptions. \par Let $\mathcal{T}_h$ be a triangulation of the convex polygon ${\Omega}\subset\mathbb{R}^2$ by polygonal subdomains, where $h=\displaystyle\max_{D\in\mathcal{T}_h}\hspace{1pt}h_{{\scriptscriptstyle D}}$ is the mesh parameter. The global virtual finite element space $\cQ^k_h$ is given by $$\cQ^k_h=\{v\in H^1_0({\Omega}):\, v\big|_D\in\cQ^k(D) \quad \forall\,D\in\mathcal{T}_h\}.$$ The space of (discontinuous) piecewise polynomials of degree $\leq k$ with respect to $\mathcal{T}_h$ is denoted by $\cP^k_h$. 
\par The operators $\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} :H^1({\Omega})\longrightarrow \cP^k_h$, $\Pi_{k,h}^0:L_2({\Omega})\longrightarrow \cP^k_h$ and $I_{k,h}:H^2({\Omega})\cap H^1_0({\Omega})\longrightarrow \cQ^k_h$ are defined in terms of their local counterparts, i.e., $$(\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\zeta)\big|_D=\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(\zeta\big|_D), \quad (\Pi_{k,h}^0\zeta)\big|_D=\Pi_{k,D}^0\hspace{1pt}(\zeta\big|_D) \quad \text{and} \quad (I_{k,h}\zeta)\big|_D=I_{k,D}(\zeta\big|_D). $$ \par The piecewise $H^1$ norm with respect to $\mathcal{T}_h$ is given by \begin{equation}\label{eq:PiecewiseHOneNOrm} |v|_{h,1}=\Big(\sum_{D\in\cT_h} |v|_{{H^1(D)}}^2\Big)^\frac12. \end{equation} \subsection{Global Shape Regularity Assumptions} \label{subsec:GlobalShapeRegularity} We assume that the local shape regularity assumption \eqref{eq:SA} is satisfied by all $D\in\mathcal{T}_h$ and impose the following global regularity assumptions. \par\noindent {\em Assumption 1}\quad There exists a positive number $\rho\in (0,1)$, independent of $h$, such that \begin{equation}\label{eq:rho} \rho_{\!{_\ssD}}\geq\rho \qquad\forall\,D\in\mathcal{T}_h. \end{equation} \par\noindent {\em Assumption 2}\quad There exists a positive integer $N$, independent of $h$, such that \begin{equation}\label{eq:N} |\cE_{\ssD}|\leq N\qquad\forall\,D\in\mathcal{T}_h. \end{equation} \par The hidden constants in the rest of Section~\ref{sec:Poisson2D} will only depend on $\rho$, $N$ and $k$. 
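\begin{remark}{\rm Note that Assumptions 1 and 2 do not impose a lower bound on the lengths of the edges of $D$ in terms of $\hspace{1pt}h_{{\scriptscriptstyle D}}$, so that the subdomains in $\mathcal{T}_h$ may possess arbitrarily small edges. The possible presence of small edges enters the error estimates below only through the factor $\alpha_h$ defined in \eqref{eq:kappah}.} \end{remark}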
\subsection{The Discrete Problem}\label{subsec:DiscreteProblem} Let the local stabilizing bilinear forms $S^D_1(\cdot,\cdot)$ and $S^D_2(\cdot,\cdot)$ be defined by \begin{align} S^D_1(w,v)&=\sum_{p\in\mathcal{N}_{\p D}} w(p)v(p),\label{eq:SD1}\\ S^D_2(w,v)&=\hspace{1pt}h_{{\scriptscriptstyle D}}(\partial w/\partial s,\partial v/\partial s)_{L_2(\partial D)},\label{eq:SD2} \end{align} where $\partial v/\partial s$ denotes a tangential derivative of $v$ along $\partial D$. \begin{remark}\label{rem:SD}{\rm The local stability bilinear form $S^D_1(\cdot,\cdot)$ is the boundary part of the local stability bilinear form in \cite{BBCMMR:2013:VEM}. The bilinear form $S^D_2(\cdot,\cdot)$ was introduced in \cite{WRR:2016:VEM}.} \end{remark} \begin{remark}\label{rem:AlternativeSD2}{\rm We can also use the bilinear form $\tilde S^D_2(\cdot,\cdot)$ defined by \begin{equation*} \tilde S^D_2(w,v)=\sum_{e\in\cE_{\ssD}}h_e(\partial w/\partial s,\partial v/\partial s)_{L_2(e)} \end{equation*} in place of $S^D_2(\cdot,\cdot)$. } \end{remark} \begin{remark}\label{rem:NormEquivalence} {\rm By the equivalence of norms on finite dimensional vector spaces, we have $$\sum_{p\in\mathcal{N}_{\p D}}v^2(p)\approx \sum_{e\in\cE_{\ssD}}\|v\|_{L_\infty(e)}^2 \approx \|v\|_{L_\infty(\partial D)}^2 \qquad \forall\, v\in \mathbb{P}_k(\p D).$$ } \end{remark} \par The discrete problem for \eqref{eq:Poisson} is to find $u_h\in\cQ^k_h$ such that \begin{equation}\label{eq:DiscreteProblem} a_h(u_h,v)=(f,\Xi_h v) \qquad \forall\,v\in\cQ^k_h, \end{equation} where \begin{align} a_h(w,v)&=\sum_{D\in\cT_h} \big[a^D(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} w,\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v) +S^D(w-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} w,v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)\big],\label{eq:ah}\\ a^D(w,v)&=\int_D \nabla w\cdot\nabla v\,dx,\label{eq:aD} \end{align} $S^D(\cdot,\cdot)$ is either $S^D_1(\cdot,\cdot)$ or
$S^D_2(\cdot,\cdot)$, and $\Xi_h:\cQ^k_h\longrightarrow\cP^k_h$ is given by \begin{equation}\label{eq:Xih} \Xi_h=\begin{cases} \Pi_{1,h}^0&\qquad\text{if $k=1,2,$}\\[4pt] \Pi_{k-2,h}^0 &\qquad\text{if $k\geq 3$}.\\ \end{cases} \end{equation} \par It follows from \eqref{eq:EnergyBdd1} and \eqref{eq:EnergyBdd2} that \begin{equation}\label{eq:Stability} |v|_{H^1({\Omega})}^2 \lesssim \alpha_h a_h(v,v) \qquad\forall\,v\in \cQ^k_h, \end{equation} where \begin{equation}\label{eq:kappah} \alpha_h=\begin{cases}\displaystyle \ln(1+\max_{D\in\mathcal{T}_h}\tau_{\!\ssD})&\qquad \text{if $S^D(\cdot,\cdot)=S^D_1(\cdot,\cdot)$}\\ 1&\qquad\text{if $S^D(\cdot,\cdot)=S^D_2(\cdot,\cdot)$} \end{cases}\;. \end{equation} The well-posedness of the discrete problem follows from the stability estimate \eqref{eq:Stability}. \par We will use the following properties of $\Xi_h$ in the error analysis. \begin{lemma}\label{lem:Xih} We have, for $1\leq\ell\leq k$, \begin{alignat}{3} (f,w-\Xi_h w)&\lesssim h^\ell|f|_{H^{\ell-1}({\Omega})} |w|_{H^1({\Omega})} &\qquad&\forall\,f\in H^{\ell-1}({\Omega}),\;w\in\cQ^k_h, \label{eq:Xih1}\\ (f,I_{k,h}\zeta-\Xi_hI_{k,h}\zeta)&\lesssim h^{\ell+1}|f|_{H^{\ell-1}({\Omega})}|\zeta|_{H^2({\Omega})}&\qquad& \forall f\in H^{\ell-1}({\Omega}),\;\zeta\in H^2({\Omega}).\label{eq:Xih2} \end{alignat} \end{lemma} \begin{proof} In view of the relation \begin{align*} (f,w-\Xi_h w)&=(f-\Pi_{k-2,h}^0 f,w-\Xi_h w)\\ &\leq \|f-\Pi_{k-2,h}^0 f\|_{L_2({\Omega})}\| w-\Xi_h w\|_{L_2({\Omega})} \leq \|f-\Pi_{\ell-2,h}^0 f\|_{L_2({\Omega})} \|w-\Pi_{0,h}^0 w\|_{L_2({\Omega})}, \end{align*} the estimate \eqref{eq:Xih1} follows from \eqref{eq:PDZLTwoError}. 
\par Similarly we have \begin{align*} (f,I_{k,h}\zeta-\Xi_hI_{k,h}\zeta)&=(f-\Pi_{k-2,h}^0 f,I_{k,h}\zeta-\Xi_hI_{k,h}\zeta)\\ &\leq \|f-\Pi_{k-2,h}^0 f\|_{L_2({\Omega})}\|I_{k,h}\zeta-\Xi_hI_{k,h}\zeta\|_{L_2({\Omega})}\\ &\leq \|f-\Pi_{\ell-2,h}^0f\|_{L_2({\Omega})} \|I_{k,h}\zeta-\Pi_{1,h}^0I_{k,h}\zeta\|_{L_2({\Omega})}, \end{align*} and the estimate \eqref{eq:Xih2} follows from \eqref{eq:PDZLTwoError} and Lemma~\ref{lem:LTwoIDSpecial}. Note that this is the reason why $\Xi_h$ is chosen to be $\Pi_{1,h}^0$ for $k=2$ instead of $\Pi_{k-2,h}^0=\Pi_{0,h}^0$. \end{proof} \subsection{An Abstract Error Estimate in the Energy Norm} \label{subsec:AbstractEnergyError} Let $\|\cdot\|_h=\sqrt{a_h(\cdot,\cdot)}$ be the mesh-dependent energy norm. Note that \eqref{eq:Stability} implies \begin{equation}\label{eq:HOneVEM} |v|_{H^1({\Omega})}\lesssim \sqrt{\alpha_h}\|v\|_h \qquad\forall\,v\in\cQ^k_h. \end{equation} \par The discrete problem \eqref{eq:DiscreteProblem} is defined in terms of a non-inherited symmetric positive definite bilinear form. We have a standard error estimate (cf. \cite[Lemma~10.1.7]{BScott:2008:FEM} and \cite{BSS:1972:NC}) \begin{equation}\label{eq:Standard} \|u-u_h\|_h\leq \inf_{v\in\cQ^k_h}\|u-v\|_h+\sup_{v\in \cQ^k_h} \frac{a_h(u,v)-(f,\Xi_h v)}{\|v\|_h}. \end{equation} The key is to control the numerator on the right-hand side of \eqref{eq:Standard}. 
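\par For the reader's convenience, we sketch the argument behind \eqref{eq:Standard} (up to an immaterial constant factor in front of the infimum). Given any $v\in\cQ^k_h$, let $w=u_h-v\in\cQ^k_h$. It follows from \eqref{eq:DiscreteProblem} and the Cauchy-Schwarz inequality for $a_h(\cdot,\cdot)$ that \begin{equation*} \|w\|_h^2=a_h(u_h-v,w)=a_h(u-v,w)-\big[a_h(u,w)-(f,\Xi_h w)\big] \leq \Big(\|u-v\|_h+\sup_{z\in\cQ^k_h}\frac{a_h(u,z)-(f,\Xi_h z)}{\|z\|_h}\Big)\|w\|_h, \end{equation*} and the estimate then follows from the triangle inequality $\|u-u_h\|_h\leq\|u-v\|_h+\|w\|_h$.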
\par In view of \eqref{eq:Poisson}, \eqref{eq:aDef} and \eqref{eq:POD1} we can write, for any $v\in\cQ^k_h$, \begin{align*} a_h(u,v)&=\sum_{D\in\cT_h} \big[a^D(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u,\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)+S^D(u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u,v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)\big]\\ &=\sum_{D\in\cT_h} \big[a^D(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u,v)+S^D(u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u,v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)\big]\\ &=\sum_{D\in\cT_h} \big[a^D(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u-u,v)+S^D(u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u,v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)\big]+(f,v)\\ &=\sum_{D\in\cT_h} \big[a^D(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u-u,v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)+S^D(u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u,v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)\big]+(f,v), \end{align*} and hence, by \eqref{eq:PODStability1}, \eqref{eq:ah} and \eqref{eq:HOneVEM}, \begin{align}\label{eq:Numerator} a_h(u,v)-(f,\Xi_h v)&=\sum_{D\in\cT_h} \big[a^D(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u-u,v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v) +S^D(u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u,v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)\big]\notag\\ &\hspace{30pt}+(f,v-\Xi_h v),\notag\\ &\lesssim \Big(\sum_{D\in\cT_h}|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u-u|_{{H^1(D)}}|v-\Pi_{k,D}^{\raise 
1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v|_{{H^1(D)}}\Big) +\|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h\|v\|_h\\ &\hspace{30pt}+\Big(\sup_{w\in\cQ^k_h}\frac{(f,w-\Xi_h w)}{|w|_{H^1({\Omega})}}\Big) \sqrt{\alpha_h}\|v\|_h\notag\\ &\lesssim \|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h\|v\|_h+ \Big(|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u|_{h,1}+\sup_{w\in\cQ^k_h} \frac{(f,w-\Xi_h w)}{|w|_{H^1({\Omega})}}\Big)\sqrt{\alpha_h}\|v\|_h.\notag \end{align} \par Putting \eqref{eq:Standard} and \eqref{eq:Numerator} together we arrive at the estimate \begin{equation}\label{eq:EnergyError} \|u-u_h\|_h\lesssim \|u-I_{k,h} u\|_h+\|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h +\sqrt{\alpha_h}\Big(|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u|_{h,1}+\sup_{w\in\cQ^k_h} \frac{(f,w-\Xi_h w)}{|w|_{H^1({\Omega})}}\Big). \end{equation} \par Below we will derive concrete error estimates under the assumption that the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for $1\leq\ell\leq k$. \subsection{Concrete Error Estimates in the Energy Norm} \label{subsec:ConcreteEnergyErrors} The terms on the right-hand side of \eqref{eq:EnergyError} are estimated as follows. \par\medskip\noindent{\em Estimate for the Term Involving $f$} \par\smallskip Since $u\in H^{\ell+1}({\Omega})$ and $f=-\Delta u$, we have, by \eqref{eq:Xih1}, \begin{equation}\label{eq:RHSError} \sup_{w\in\cQ^k_h}\frac{(f,w-\Xi_h w)}{|w|_{H^1({\Omega})}} \lesssim h^{\ell}|u|_{H^{\ell+1}({\Omega})}. 
\end{equation} \par\medskip\noindent{\em Estimate for $|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u|_{h,1}$} \par\smallskip It follows directly from \eqref{eq:HOnePOD} that \begin{equation}\label{eq:PODError} |u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u|_{h,1}=\Big(\sum_{D\in\cT_h} |u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{H^1(D)}^2\Big)^\frac12 \lesssim h^\ell|u|_{H^{\ell+1}({\Omega})}. \end{equation} \par\medskip\noindent{\em Estimate for $\|u-I_{k,h} u\|_h$} \par\smallskip We will establish the estimate \begin{equation}\label{eq:InterpolationError} \|u-I_{k,h} u\|_h\lesssim h^{\ell}|u|_{H^{\ell+1}(\O)} \end{equation} for both choices of $S^D(\cdot,\cdot)$. \par In the case where $S^D(\cdot,\cdot)=S^D_1(\cdot,\cdot)$, it follows from \eqref{eq:PODStability1}, \eqref{eq:SD1} and \eqref{eq:ah} that \begin{align}\label{eq:InterError1} \|u-I_{k,h} u\|_h^2&\lesssim \sum_{D\in\cT_h} |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)|_{H^1(D)}^2+\sum_{D\in\cT_h} \|(u-I_{k,D} u)-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)\|_{L_\infty(\partial D)}^2\notag\\ &\lesssim \sum_{D\in\cT_h} \big(|u-I_{k,D} u|_{H^1(D)}^2+\|u-I_{k,D} u\|_{L_\infty(\partial D)}^2\big)\\ &\hspace{40pt}+\sum_{D\in\cT_h} |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)|_{L_\infty(D)}^2,\notag \end{align} and we have \begin{align}\label{eq:InterError2} \sum_{D\in\cT_h} |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)|_{L_\infty(D)}^2&\lesssim \sum_{D\in\cT_h}\big(\,(\,\overline{\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)}\,)_{\partial D}^2 +|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)|_{H^1(D)}^2\big)\notag\\ &\leq \sum_{D\in\cT_h}\big(\,(\,\overline{u-I_{k,D} u}\,)_{\partial D}^2+|u-I_{k,D} u|_{H^1(D)}^2\big)\\ &\lesssim 
\sum_{D\in\cT_h} \big(\|u-I_{k,D} u\|_{L_\infty(\partial D)}^2 +|u-I_{k,D} u|_{H^1(D)}^2\big),\notag \end{align} by \eqref{eq:DiscreteEstimate2}, \eqref{eq:Condition3} and \eqref{eq:PODStability1}. \par The estimate \eqref{eq:InterpolationError} now follows from \eqref{eq:HOnePODID}, \eqref{eq:1DInterpolationError}, \eqref{eq:InterError1} and \eqref{eq:InterError2}. \par In the case where $S^D(\cdot,\cdot)=S^D_2(\cdot,\cdot)$, it follows from \eqref{eq:SD2} and \eqref{eq:ah} that \begin{align*} \|u-I_{k,h} u\|_h^2&\lesssim \sum_{D\in\cT_h}|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)|_{H^1(D)}^2 + \sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}} \|\partial [\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)]/\partial s\|_{L_2(\partial D)}^2\notag\\ &\hspace{30pt}+{\sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}} \|\partial(u-I_{k,D} u)/\partial s\|_{L_2(\partial D)}^2}. \end{align*} We have \begin{equation*} \sum_{D\in\cT_h}|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)|_{H^1(D)}^2\leq \sum_{D\in\cT_h}|u-I_{k,D} u|_{H^1(D)}^2\lesssim h^{2\ell}|u|_{H^{\ell+1}(\O)}^2 \end{equation*} by \eqref{eq:PODStability1} and \eqref{eq:HOnePODID}, and hence, in view of \eqref{eq:DiscreteEstimate0} (applied to the first order derivatives of the polynomial $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)$), \begin{align*} \sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}} \|\partial [\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)]/\partial s\|_{L_2(\partial D)}^2&\lesssim \sum_{D\in\cT_h} |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)|_{H^1(D)}^2\lesssim h^{2\ell}|u|_{H^{\ell+1}(\O)}^2.
\end{align*} Finally it follows from a standard interpolation error estimate in one variable and \eqref{eq:HalfTrace} (applied to the $\ell$-th order derivatives of $u$) that \begin{align*} \sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}} \|\partial(u-I_{k,D} u)/\partial s\|_{L_2(\partial D)}^2&\lesssim \sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{e\in\cE_{\ssD}}h_e^{2\ell-1} \,|\partial^\ell u/\partial s^\ell|_{H^{1/2}(e)}^2 \lesssim h^{2\ell}|u|_{H^{\ell+1}(\O)}^2. \end{align*} \par Together these estimates imply \eqref{eq:InterpolationError}. \goodbreak \par\medskip\noindent{\em Estimate for $\|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h$} \par\smallskip We will show that the estimate \begin{equation}\label{eq:ProjectionError} \|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h\lesssim h^\ell|u|_{H^{\ell+1}(\O)} \end{equation} holds for both choices of $S^D(\cdot,\cdot)$. \par In the case where $S^D(\cdot,\cdot)=S^D_1(\cdot,\cdot)$, it follows from \eqref{eq:Sobolev}, Lemma~\ref{lem:PODErrors}, \eqref{eq:SD1} and \eqref{eq:ah} that \begin{align*}\label{eq:ProjectionEst2} \|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h^2 &\lesssim \sum_{D\in\cT_h} \|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_{L_\infty(\partial D)}^2\\ &\lesssim \sum_{D\in\cT_h}\big[\hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u\|_{L_2(D)}^2+|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{{H^1(D)}}^2 +\hspace{1pt}h_{{\scriptscriptstyle D}}^2|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{H^2(D)}^2\big]\notag\\ &\lesssim h^{2\ell}|u|_{H^{\ell+1}({\Omega})}^2.\notag \end{align*} \par In the case where $S^D(\cdot,\cdot)=S^D_2(\cdot,\cdot)$, we have \begin{align*} \|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h^2&=\sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}} 
\|\partial(u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u)/\partial s\|_{L_2(\partial D)}^2\\ &\lesssim \sum_{D\in\cT_h} \big[|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{{H^1(D)}}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{H^2(D)}^2\big] \lesssim h^{2\ell}|u|_{H^{\ell+1}({\Omega})}^2\notag \end{align*} by \eqref{eq:Trace} (applied to the first order derivatives of $u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u$) and Lemma~\ref{lem:PODErrors}. \par Putting \eqref{eq:EnergyError}--\eqref{eq:InterpolationError} and \eqref{eq:ProjectionError} together, we arrive at the following result. \begin{theorem}\label{thm:ConcreteEnergyError} Assuming the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, we have \begin{equation*} \|u-u_h\|_h\leq C\sqrt{\alpha_h} h^\ell|u|_{H^{\ell+1}({\Omega})}, \end{equation*} where $\alpha_h$ is defined in \eqref{eq:kappah} and the positive constant $C$ only depends on $\rho$, $k$ and $N$. \end{theorem} \par We have similar estimates for the computable approximate solutions $\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h$ and $\Pi_{k,h}^0 u_h$. \begin{theorem}\label{thm:ComputableEnergyError} Assuming the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, we have \begin{equation}\label{eq:ComputableEnergyError} |u-u_h|_{H^1({\Omega})}+\sqrt{\alpha_h}\big[|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h|_{h,1} +|u-\Pi_{k,h}^0 u_h|_{h,1}\big] \leq C \alpha_h h^\ell |u|_{H^{\ell+1}({\Omega})}, \end{equation} where $\alpha_h$ is defined in \eqref{eq:kappah} and the positive constant $C$ only depends on $\rho$, $N$ and $k$.
\end{theorem} \begin{proof} In view of \eqref{eq:ah} and Theorem~\ref{thm:ConcreteEnergyError}, we have \begin{equation*} |\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} (u-u_h)|_{h,1}\leq \|u-u_h\|_h\lesssim \sqrt{\alpha_h}h^{\ell}|u|_{H^{\ell+1}({\Omega})}, \end{equation*} which together with \eqref{eq:PODError} implies \begin{equation*} |u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h|_{h,1}\leq |u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u|_{h,1}+|\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}(u-u_h)|_{h,1} \lesssim \sqrt{\alpha_h}h^{\ell}|u|_{H^{\ell+1}({\Omega})}. \end{equation*} It follows from this estimate and \eqref{eq:PDZHOne} that \begin{equation*} |u-\Pi_{k,h}^0 u|_{h,1}\leq |u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u|_{h,1}+|\Pi_{k,h}^0(\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u-u)|_{h,1} \lesssim |u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u|_{h,1}\lesssim \sqrt{\alpha_h}h^{\ell}|u|_{H^{\ell+1}(\O)}. \end{equation*} \par Finally we have, by \eqref{eq:HOnePODID}, \eqref{eq:HOneVEM}, \eqref{eq:InterpolationError} and Theorem~\ref{thm:ConcreteEnergyError}, \begin{align*} |u-u_h|_{H^1({\Omega})}&\leq |u-I_{k,h} u|_{H^1({\Omega})}+|I_{k,h} u-u_h|_{H^1({\Omega})}\\ &\lesssim |u-I_{k,h} u|_{H^1({\Omega})}+\sqrt{\alpha_h}\|I_{k,h} u-u_h\|_h\\ &\leq |u-I_{k,h} u|_{H^1({\Omega})}+\sqrt{\alpha_h} \big(\|u-I_{k,h} u\|_h+\|u-u_h\|_h\big) \lesssim \alpha_h h^{\ell}|u|_{H^{\ell+1}(\O)}. \end{align*} \end{proof} \goodbreak \begin{remark}\label{rem:HOneErrors}{\rm In the case where $S^D(\cdot,\cdot)=S^D_1(\cdot,\cdot)$, the estimates for $|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h|_{h,1}$ and $|u-\Pi_{k,h}^0 u_h|_{h,1}$ are better than the estimate for $|u-u_h|_{H^1({\Omega})}$. } \end{remark} \subsection{Error Estimates in the $L_2$ Norm}\label{subsec:L2Error} We begin with two lemmas involving $S^D(\cdot,\cdot)$. 
\begin{lemma}\label{lem:SDEstimate} We have \begin{equation}\label{eq:SDEstimate} \sum_{D\in\cT_h} S^D(\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} \zeta,\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} \zeta)\lesssim h^2|\zeta|_{H^2({\Omega})}^2\qquad\forall\, \zeta\in H^2({\Omega})\cap H^1_0({\Omega}). \end{equation} \end{lemma} \begin{proof} It follows from \eqref{eq:Trace} (applied to the first order partial derivatives of $\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta$), Lemma~\ref{lem:InterpolationError} and \eqref{eq:SD2} that \begin{align*} &\sum_{D\in\cT_h} S^D_2(\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} \zeta,\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} \zeta) =\sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}}\|\partial(\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} \zeta)/\partial s\|_{L_2(\partial D)}^2\\ &\hspace{60pt}\lesssim \sum_{D\in\cT_h} \big[|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{{H^1(D)}}^2+ \hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{H^2(D)}^2\big] \lesssim h^2|\zeta|_{H^2({\Omega})}^2, \end{align*} and we have \begin{align*} &\sum_{D\in\cT_h} S^D_1(\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta,\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} \zeta)\lesssim \sum_{D\in\cT_h}\|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} \zeta\|_{L_\infty(\partial D)}^2\\ &\hspace{30pt}\lesssim \sum_{D\in\cT_h}\big[\hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}\|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} \zeta\|_{L_2(D)}^2
+|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} \zeta|_{{H^1(D)}}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} \zeta|_{H^2(D)}^2\big]\\ &\hspace{30pt}\lesssim h^2|\zeta|_{H^2({\Omega})}^2 \end{align*} by \eqref{eq:Sobolev}, Lemma~\ref{lem:InterpolationError} and \eqref{eq:SD1}. \end{proof} \begin{lemma}\label{lem:POSD} Assuming that $u\in H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, we have \begin{equation}\label{eq:POSD} \sum_{D\in\cT_h} S^D(u_h-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h,u_h-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h)\lesssim \alpha_h^2 h^{2\ell}|u|_{H^{\ell+1}(\O)}^2. \end{equation} \end{lemma} \begin{proof} This is a consequence of \eqref{eq:ah}, \eqref{eq:ProjectionError}, Theorem~\ref{thm:ConcreteEnergyError} and Theorem~\ref{thm:ComputableEnergyError}: \begin{align*} &\sum_{D\in\cT_h} S^D(u_h-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h,u_h-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h)= \|u_h-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h\|_h^2\\ &\hspace{100pt}\lesssim \|u_h-u\|_h^2+\|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h^2 +\|\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}(u-u_h)\|_h^2\\ &\hspace{100pt}\leq\|u_h-u\|_h^2+\|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h^2+|u-u_h|_{H^1({\Omega})}^2 \lesssim \alpha_h^2 h^{2\ell}|u|_{H^{\ell+1}(\O)}^2. \end{align*} \end{proof} \par We can now prove a consistency estimate.
\begin{lemma}\label{lem:ConsistencyError} Assuming that $u\in H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, we have
\begin{equation*}
a(u-u_h,I_{k,h}\zeta)\leq C{\alpha_h} h^{\ell+1}|u|_{H^{\ell+1}({\Omega})} |\zeta|_{H^2({\Omega})} \qquad \forall\, \zeta\in H^2({\Omega})\cap H^1_0({\Omega}),
\end{equation*}
where $\alpha_h$ is defined in \eqref{eq:kappah} and the positive constant $C$ only depends on $\rho$, $k$ and $N$.
\end{lemma}
\begin{proof}
We have, by \eqref{eq:Poisson}, \eqref{eq:aDef}, \eqref{eq:POD1} and \eqref{eq:DiscreteProblem}--\eqref{eq:aD},
\begin{align*}
&a(u-u_h,I_{k,h}\zeta)=a(u,I_{k,h}\zeta)-\sum_{D\in\cT_h} a^D(u_h,I_{k,D}\zeta)\\
&\hspace{40pt}=(f,I_{k,h}\zeta)-\sum_{D\in\cT_h} a^D(u_h,\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta)
-\sum_{D\in\cT_h} a^D(u_h,I_{k,D}\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta)\\
&\hspace{40pt}=(f,I_{k,h}\zeta)-a_h(u_h,I_{k,h}\zeta)
+\sum_{D\in\cT_h} S^D(u_h-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h,I_{k,D}\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta)\\
&\hspace{80pt}+\sum_{D\in\cT_h} a^D(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h-u_h,I_{k,D}\zeta
-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta)\\
&\hspace{40pt}=(f,I_{k,h}\zeta-\Xi_hI_{k,h}\zeta)
+\sum_{D\in\cT_h} S^D(u_h-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h,I_{k,D}\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta)\\
&\hspace{80pt}+\sum_{D\in\cT_h} a^D(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h-u_h,I_{k,D}\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta),
\end{align*}
and the three terms on the right-hand side can be estimated as follows.
\par We have
\begin{equation*}
|(f,I_{k,h}\zeta-\Xi_hI_{k,h}\zeta)|\lesssim h^{\ell+1}|u|_{H^{\ell+1}(\O)}|\zeta|_{H^2({\Omega})}
\end{equation*}
by \eqref{eq:Xih2},
\begin{equation*}
\sum_{D\in\mathcal{T}_h} {S^D(u_h-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h,I_{k,D}\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta)}
\lesssim {\alpha_h}h^{\ell+1}|u|_{H^{\ell+1}(\O)}|\zeta|_{H^2({\Omega})}
\end{equation*}
by Lemma~\ref{lem:SDEstimate} and Lemma~\ref{lem:POSD}, and
\begin{align*}
& \sum_{D\in\mathcal{T}_h} a^D(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h-u_h,I_{k,D}\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} I_{k,D}\zeta)\\
&\hspace{40pt}\lesssim \big[|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u-u|_{h,1}+|u-u_h|_{H^1({\Omega})}\big]
\big[|I_{k,D}\zeta-\zeta|_{H^1({\Omega})}+|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{h,1}\big]\\
&\hspace{40pt}\lesssim {\alpha_h} h^{\ell+1}|u|_{H^{\ell+1}(\O)}|\zeta|_{H^2({\Omega})}
\end{align*}
by Lemma~\ref{lem:InterpolationError}, \eqref{eq:PODError} and Theorem~\ref{thm:ComputableEnergyError}.
\end{proof}
\begin{theorem}\label{thm:uhLTwoError} Assuming $u\in H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, there exists a positive constant $C$, depending only on $\rho$, $N$ and $k$, such that
\begin{equation}\label{eq:uhLTwoError}
\|u-u_h\|_{L_2({\Omega})}\leq C{\alpha_h}h^{\ell+1}|u|_{H^{\ell+1}({\Omega})},
\end{equation}
where $\alpha_h$ is defined in \eqref{eq:kappah}.
\end{theorem}
\begin{proof}
Let $\zeta\in H^1_0({\Omega})$ be defined by
\begin{equation*}
a(v,\zeta)=(v,u-u_h) \qquad\forall\,v\in H^1_0({\Omega}).
\end{equation*}
We have
\begin{equation}\label{eq:Duality1}
\|u-u_h\|_{L_2({\Omega})}^2=a(u-u_h,\zeta) = a(u-u_h,\zeta-I_{k,h}\zeta)+a(u-u_h,I_{k,h}\zeta),
\end{equation}
and since ${\Omega}$ is convex,
\begin{equation}\label{eq:Duality2}
\|\zeta\|_{H^2({\Omega})}\leq C_{\Omega} \|u-u_h\|_{L_2({\Omega})}
\end{equation}
by elliptic regularity \cite{Grisvard:1985:EPN,Dauge:1988:EBV}.
\par The first term on the right-hand side of \eqref{eq:Duality1} satisfies
\begin{equation}\label{eq:Duality3}
a(u-u_h,\zeta-I_{k,h}\zeta)\leq |u-u_h|_{H^1({\Omega})} |\zeta-I_{k,h}\zeta|_{H^1({\Omega})}
\lesssim h|u-u_h|_{H^1({\Omega})}|\zeta|_{H^2({\Omega})}
\end{equation}
by \eqref{eq:HOnePODID}, and then \eqref{eq:uhLTwoError} follows from Theorem~\ref{thm:ComputableEnergyError}, Lemma~\ref{lem:ConsistencyError} and \eqref{eq:Duality1}--\eqref{eq:Duality3}.
\end{proof}
\par We have similar $L_2$ error estimates for the computable approximations $\Pi_{k,h}^0 u_h$ and $\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h$.
\begin{theorem}\label{thm:ComputableLTwoErrors} Assuming $u\in H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, there exists a positive constant $C$, depending only on $\rho$, $N$ and $k$, such that
\begin{equation*}
\|u-\Pi_{k,h}^0 u_h\|_{L_2({\Omega})}
+\|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h\|_{L_2({\Omega})}\leq C{\alpha_h}h^{\ell+1}|u|_{H^{\ell+1}({\Omega})},
\end{equation*}
where $\alpha_h$ is defined in \eqref{eq:kappah}.
\end{theorem}
\begin{proof}
The estimate for $\Pi_{k,h}^0 u_h$ follows from \eqref{eq:PDZLTwoError}, Theorem~\ref{thm:uhLTwoError} and the relation
\begin{align*}
\|u-\Pi_{k,h}^0 u_h\|_{L_2({\Omega})}&\leq \|u-\Pi_{k,h}^0 u\|_{L_2({\Omega})}
+\|\Pi_{k,h}^0(u-u_h)\|_{L_2({\Omega})}\\
&\leq \|u-\Pi_{k,h}^0 u\|_{L_2({\Omega})}+\|u-u_h\|_{L_2({\Omega})}.
\end{align*}
\par For the approximation $\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h$, we have
\begin{equation}\label{eq:LTwoPODError1}
\|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h\|_{L_2({\Omega})}\leq\|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}I_{k,h} u\|_{L_2({\Omega})}
+\|\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}(I_{k,h} u-u_h)\|_{L_2({\Omega})}
\end{equation}
and, in view of \eqref{eq:Trace}, \eqref{eq:tbarNormDef}, and Lemma~\ref{lem:PODStability2},
\begin{align}\label{eq:LTwoPODError2}
\|\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}(I_{k,h} u-u_h)\|_{L_2({\Omega})}^2&\lesssim \sum_{D\in\mathcal{T}_h}|\!|\!| I_{k,D} u-u_h|\!|\!|_{k,D}^2\notag\\
&\lesssim \sum_{D\in\mathcal{T}_h}\big(\hspace{1pt}h_{{\scriptscriptstyle D}}\|I_{k,D} u-u_h\|_{L_2(\partial D)}^2
+\|\Pi_{k-2,D}^0 (I_{k,D} u-u_h)\|_{L_2(D)}^2\big)\\
&\lesssim\sum_{D\in\mathcal{T}_h} \big(\|I_{k,D} u-u_h\|_{L_2(D)}^2
+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|I_{k,D} u-u_h|_{{H^1(D)}}^2\big)\notag\\
&\lesssim \|u-u_h\|_{L_2({\Omega})}^2+ \|u-I_{k,h} u\|_{L_2({\Omega})}^2
+h^2|u-I_{k,h} u|_{H^1({\Omega})}^2\notag\\
&\hspace{50pt}+h^2|u-u_h|_{H^1({\Omega})}^2.\notag
\end{align}
\par The estimate for $\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h$ follows from Lemma~\ref{lem:InterpolationError}, Theorem~\ref{thm:ComputableEnergyError}, Theorem~\ref{thm:uhLTwoError} and \eqref{eq:LTwoPODError1}--\eqref{eq:LTwoPODError2}.
\end{proof}
\subsection{Error Estimates for $u_h$ in the $L_\infty$ Norm}\label{subsec:uhLInftyError}
Here we consider an $L_\infty$ error estimate for $u_h$ over the edges of $\mathcal{T}_h$, where $u_h$ is computable. We will treat the two choices of $S^D(\cdot,\cdot)$ separately. The set of all the edges in $\mathcal{T}_h$ will be denoted by $\mathcal{E}_h$.
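The pointwise edge estimates in this subsection repeatedly use a one-dimensional Sobolev inequality; we sketch the elementary single-edge version here.
\begin{remark}{\rm If $e$ is an edge of length $h_e$ and $g\in H^1(e)$ vanishes at an endpoint of $e$, then the fundamental theorem of calculus and the Cauchy--Schwarz inequality imply
\begin{equation*}
\|g\|_{L_\infty(e)}^2\leq \Big(\int_e |\partial g/\partial s|\,ds\Big)^2\leq h_e\|\partial g/\partial s\|_{L_2(e)}^2.
\end{equation*}
Estimates of this type, applied along paths of edges connecting a point to $\partial{\Omega}$, underlie the direct calculations in the proofs below.}
\end{remark}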
\subsubsection{The Case where $S^D(\cdot,\cdot)=S^D_2(\cdot,\cdot)$}
\label{eq:subsubsec:uhLInftySD2}
We have the following result for this choice of $S^D(\cdot,\cdot)$.
\begin{theorem}\label{thm:MaxNormBdryEst2} Assuming that the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, we have
\begin{equation*}
\max_{e\in\mathcal{E}_h}\|u-u_h\|_{L_\infty(e)}\leq C h^{\ell}|u|_{H^{\ell+1}(\O)},
\end{equation*}
where the positive constant $C$ only depends on $\rho$, $N$ and $k$.
\end{theorem}
\begin{proof}
First we observe that, by \eqref{eq:SD2}, \eqref{eq:ah} and Theorem~\ref{thm:ConcreteEnergyError},
\begin{equation}\label{eq:EdgeEstimate}
\sum_{D\in\cT_h}\sum_{e\in\cE_{\ssD}}\hspace{1pt}h_{{\scriptscriptstyle D}}\|\partial[(u-u_h)-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} (u-u_h)]/\partial s\|_{L_2(e)}^2
\lesssim \|u-u_h\|_h^2\lesssim h^{2\ell}|u|_{H^{\ell+1}(\O)}^2.
\end{equation}
\par We can connect any point on an edge $e\in\mathcal{E}_h$ to $\partial{\Omega}$, where $u-u_h=0$, by a path along the edges in $\mathcal{E}_h$.
Therefore it follows from a direct calculation (or a Sobolev inequality in one variable) that
\begin{align}\label{eq:PathEstimate}
\|u-u_h\|_{L_\infty(e)}^2&\lesssim \sum_{D\in\cT_h}\sum_{e\in\cE_{\ssD}} h_e\|\partial(u-u_h)/\partial s\|_{L_2(e)}^2\notag\\
&\lesssim \sum_{D\in\cT_h}\sum_{e\in\cE_{\ssD}}\hspace{1pt}h_{{\scriptscriptstyle D}}\|\partial[(u-u_h)-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} (u-u_h)]/\partial s\|_{L_2(e)}^2\\
&\hspace{40pt}+\sum_{D\in\cT_h}\sum_{e\in\cE_{\ssD}} \hspace{1pt}h_{{\scriptscriptstyle D}}\|\partial[\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-u_h)]/\partial s\|_{L_2(e)}^2\qquad\qquad\forall\,e\in\mathcal{E}_h,\notag
\end{align}
and we have, by \eqref{eq:Trace}, \eqref{eq:DiscreteEstimate1}, \eqref{eq:PODStability1} and Theorem~\ref{thm:ComputableEnergyError},
\begin{align}\label{eq:TrickyEstimate}
&\sum_{D\in\cT_h}\sum_{e\in\cE_{\ssD}} \hspace{1pt}h_{{\scriptscriptstyle D}}\|\partial[\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-u_h)]/\partial s\|_{L_2(e)}^2\notag\\
&\hspace{80pt}\lesssim \sum_{D\in\cT_h} \big(|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-u_h)|_{H^1(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-u_h)|_{H^2(D)}^2\big)\\
&\hspace{80pt}\lesssim\sum_{D\in\cT_h} |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-u_h)|_{H^1(D)}^2\lesssim h^{2\ell}|u|_{H^{\ell+1}(\O)}^2. \notag
\end{align}
\par The estimates \eqref{eq:EdgeEstimate}--\eqref{eq:TrickyEstimate} together imply
\begin{equation*}
\|u-u_h\|_{L_\infty(e)}\lesssim h^\ell|u|_{H^{\ell+1}(\O)} \qquad\forall\,e\in\mathcal{E}_h.
\end{equation*}
\end{proof}
\subsubsection{The Case where $S^D(\cdot,\cdot)=S^D_1(\cdot,\cdot)$}
\label{eq:subsubsec:uhLInftySD1}
We will establish an analog of Theorem~\ref{thm:MaxNormBdryEst2} under the additional assumption that $\mathcal{T}_h$ is quasi-uniform, i.e., there exists a positive constant $\gamma$ independent of $h$ such that
\begin{equation}\label{eq:gamma}
\hspace{1pt}h_{{\scriptscriptstyle D}}\geq \gamma h \qquad\forall\,D\in\mathcal{T}_h.
\end{equation}
\begin{theorem}\label{thm:MaxNormBdryEst1} Assuming $\mathcal{T}_h$ is quasi-uniform and the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, we have
\begin{equation*}
\max_{e\in\mathcal{E}_h}\|u-u_h\|_{L_\infty(e)}\leq C\ln(1+\max_{D\in\mathcal{T}_h}\tau_{\!\ssD}) h^{\ell}|u|_{H^{\ell+1}(\O)},
\end{equation*}
where the positive constant $C$ only depends on $\rho$, $N$, $\gamma$ and $k$.
\end{theorem}
\begin{proof}
Let $D\in\mathcal{T}_h$ be arbitrary. First we observe that, by \eqref{eq:SD1}, Remark~\ref{rem:NormEquivalence}, \eqref{eq:ah}, \eqref{eq:InterpolationError}, Theorem~\ref{thm:ConcreteEnergyError} and Theorem~\ref{thm:ComputableEnergyError},
\begin{align}\label{eq:SD1Edge1}
&\|(I_{k,D} u-u_h)-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(I_{k,D} u-u_h)\|_{L_\infty(\partial D)}\notag\\
&\hspace{80pt}\lesssim \|I_{k,h} u-u_h\|_h\\
&\hspace{80pt} \lesssim \|I_{k,h} u-u\|_h+\|u-u_h\|_h
\lesssim [\ln(1+\max_{D\in\mathcal{T}_h}\tau_{\!\ssD})]^\frac12h^\ell|u|_{H^{\ell+1}(\O)}.\notag
\end{align}
\par Furthermore, it follows from \eqref{eq:G2}, \eqref{eq:DiscreteEstimate3}, \eqref{eq:PODStability1}, \eqref{eq:LTwoPODID}, Theorem~\ref{thm:ConcreteEnergyError}, Theorem~\ref{thm:ComputableLTwoErrors} and \eqref{eq:gamma} that
\begin{align}\label{eq:SD1Edge2}
&\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(I_{k,D} u-u_h)\|_{L_\infty(\partial D)}\lesssim \hspace{1pt}h_{{\scriptscriptstyle
D}}^{-1}\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(I_{k,D} u-u_h)\|_{L_2(D)}+ |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(I_{k,D} u-u_h)|_{H^1(D)}\notag\\ &\hspace{60pt}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\big(\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} u-u\|_{L_2(D)} +\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{L_2(D)}\big)+|I_{k,D} u-u_h|_{H^1(D)}\\ &\hspace{60pt}\lesssim \ln(1+\max_{D\in\mathcal{T}_h}\tau_{\!\ssD}) h^\ell|u|_{H^{\ell+1}(\O)}.\notag \end{align} \par The theorem follows from \eqref{eq:1DInterpolationError}, \eqref{eq:SD1Edge1}, \eqref{eq:SD1Edge2} and the triangle inequality. \end{proof} \subsection{Error Estimates for $\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h$ and $\Pi_{k,h}^0 u_h$ in the $L_\infty$ Norm} \label{subsec:ComputableLInftyError} \par Again we treat the two choices of $S^D(\cdot,\cdot)$ separately. \subsubsection{The Case where $S^D(\cdot,\cdot)=S^D_2(\cdot,\cdot)$} \label{eq:subsubsec:ComputableLInftySD2} For this choice of $S^D(\cdot,\cdot)$, we can establish the following result without assuming that $\mathcal{T}_h$ is quasi-uniform. \begin{theorem}\label{thm:LInftyErrorsSD2} Assuming the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, there exists a positive constant $C$, depending only on $\rho$, $N$ and $k$, such that \begin{equation*} \|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h\|_{L_\infty({\Omega})}+\|u-\Pi_{k,h}^0 u_h\|_{L_\infty({\Omega})} \leq C h^{\ell}|u|_{H^{\ell+1}(\O)}. 
\end{equation*} \end{theorem} \begin{proof} For any $D\in\mathcal{T}_h$, we have, by \eqref{eq:Sobolev}, \eqref{eq:DiscreteEstimate1} and \eqref{eq:PODStability1}, \begin{align*} \|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{{L_\infty(D)}} &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{L_2(D)}+|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h|_{H^1(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h|_{H^2(D)}\\ &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{L_2(D)}+|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h|_{H^1(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{H^2(D)}\\ &\hspace{50pt}+ \hspace{1pt}h_{{\scriptscriptstyle D}}|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} (u-u_h)|_{H^2(D)}\\ &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{L_2(D)}+|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h|_{H^1(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{H^2(D)}\\ &\hspace{40pt}+|u-u_h|_{H^1(D)}, \end{align*} and \begin{align*} \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{L_2(D)}&\leq \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|u-u_h\|_{L_2(D)}+ \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|u_h-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{L_2(D)}\\ &\lesssim \big(\|u-u_h\|_{L_\infty(\partial D)}+|u-u_h|_{H^1(D)}\big) +|u_h-\Pi_{k,D}^{\raise 
1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h|_{H^1(D)} \end{align*} by \eqref{eq:G2}, \eqref{eq:PF2} and \eqref{eq:POD2}. These two estimates together with Lemma~\ref{lem:PODErrors}, Theorem~\ref{thm:ComputableEnergyError} and Theorem~\ref{thm:MaxNormBdryEst2} imply the estimate for $u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h$. \par The estimate for $u-\Pi_{k,h}^0 u_h$ can be derived similarly. We have, by \eqref{eq:Sobolev}, \eqref{eq:DiscreteEstimate1} and \eqref{eq:PDZHOne}, \begin{align*} \|u-\Pi_{k,D}^0\hspace{1pt} u_h\|_{{L_\infty(D)}} &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|u-\Pi_{k,D}^0\hspace{1pt} u_h\|_{L_2(D)}+|u-\Pi_{k,D}^0\hspace{1pt} u_h|_{H^1(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|u-\Pi_{k,D}^0\hspace{1pt} u|_{H^2(D)}\\ &\hspace{40pt}+|u-u_h|_{H^1(D)}, \end{align*} and \begin{align*} \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|u-\Pi_{k,D}^0\hspace{1pt} u_h\|_{L_2(D)}&\leq \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|u-u_h\|_{L_2(D)}+ \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|u_h-\Pi_{k,D}^0\hspace{1pt} u_h\|_{L_2(D)}\\ &\lesssim \big(\|u-u_h\|_{L_\infty(\partial D)}+|u-u_h|_{H^1(D)}\big)+ |u_h-\Pi_{k,D}^0\hspace{1pt} u_h|_{H^1(D)} \end{align*} by \eqref{eq:G2}, \eqref{eq:PF1} and \eqref{eq:PF2}. The estimate for $u-\Pi_{k,h}^0 u_h$ now follows from Lemma~\ref{lem:PDZErrors}, Theorem~\ref{thm:ComputableEnergyError} and Theorem~\ref{thm:MaxNormBdryEst2}. \end{proof} \subsubsection{The Case where $S^D(\cdot,\cdot)=S^D_1(\cdot,\cdot)$} \label{eq:subsubsec:ComputableLInftySD1} The following analog of Theorem~\ref{thm:LInftyErrorsSD2} is proved by the same arguments but with Theorem~\ref{thm:MaxNormBdryEst2} replaced by Theorem~\ref{thm:MaxNormBdryEst1}. 
\begin{theorem}\label{thm:LInftyErrorsSD1} Assuming $\mathcal{T}_h$ is quasi-uniform and the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, there exists a positive constant $C$, depending only on $\rho$, $N$, $\gamma$ and $k$, such that \begin{equation*} \|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h\|_{L_\infty({\Omega})}+\|u-\Pi_{k,h}^0 u_h\|_{L_\infty({\Omega})} \leq C \ln(1+\max_{D\in\mathcal{T}_h}\tau_{\!\ssD}) h^{\ell}|u|_{H^{\ell+1}(\O)}. \end{equation*} \end{theorem} \section{Virtual Element Methods for the Poisson Problem in Three Dimensions}\label{sec:Poisson3D} The analysis of virtual element methods in three dimensions follows the same strategy as in two dimensions and many of the results in Section~\ref{sec:LocalVEM2D} and Section~\ref{sec:Poisson2D} carry over by identical arguments. We will only provide details for estimates that require different derivations. \par Let $\mathcal{T}_h$ be a polyhedral mesh on ${\Omega}$. The set of the faces of a subdomain $D\in\mathcal{T}_h$ is denoted by $\mathcal{F}_\ssD$ and the set of the edges of $F$ is denoted by $\cE_{\ssF}$. The set of all the faces of $\mathcal{T}_h$ is denoted by $\mathcal{F}_h$ and the set of all the edges of $\mathcal{T}_h$ is denoted by $\mathcal{E}_h$. \subsection{Shape Regularity Assumptions in Three Dimensions} \label{subsec:Shape3D} We impose the following shape regularity assumptions on $\mathcal{T}_h$, where $\hspace{1pt}h_{{\scriptscriptstyle D}}$ is the diameter of $D$. \par\smallskip\noindent {\em Assumption 1} \quad There exists $\rho\in (0,1)$, independent of $h$, such that every polyhedron $D\in\mathcal{T}_h$ is star-shaped with respect to a ball $\fB_{\!\ssD}$ with radius $\geq \rho \hspace{1pt}h_{{\scriptscriptstyle D}}$. \par\smallskip\noindent {\em Assumption 2} \quad There exists a positive integer $N$, independent of $h$, such that $|\mathcal{F}_\ssD|\leq N$ for all $D\in\mathcal{T}_h$. 
\par\smallskip\noindent {\em Assumption 3}\quad The shape regularity assumptions in Section~\ref{subsec:GlobalShapeRegularity} are satisfied by all the faces in $\mathcal{F}_h$, with the same $\rho$ from Assumption~1 and the same $N$ from Assumption~2. \par\smallskip All the hidden constants below will only depend on $\rho$, $N$ and $k$. \par Let $D$ be a polyhedron in $\mathcal{T}_h$. We can define the inner product $(\!(\cdot,\cdot)\!)$ by \eqref{eq:InnerProduct} where the infinitesimal arc-length $ds$ is replaced by the infinitesimal surface area $dS$. Then the projection operator $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}:H^1(D)\longrightarrow\mathbb{P}_k(D)$ is defined by \eqref{eq:PODDef} and \begin{equation}\label{eq:3DPODStability1} |\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta|_{H^1(D)}\leq |\zeta|_{H^1(D)}\qquad\forall\,\zeta\in H^1(D). \end{equation} The projection from ${L_2(D)}$ to $\mathbb{P}_k(D)$ is again denoted by $\Pi_{k,D}^0\hspace{1pt}$. \par The results in Section~\ref{sec:SS} are valid for $D\in\mathcal{T}_h$ under Assumption 1. Consequently the results in Section~\ref{subsec:PODEstimates} and Section~\ref{subsec:PDZEstimates} are also valid provided the semi-norm $|\!|\!|\cdot|\!|\!|_{k,D}$ is defined by the following analog of \eqref{eq:tbarNormDef}: \begin{equation}\label{eq:tbarNorm3D} |\!|\!| \zeta|\!|\!|_{k,D}^2=\|\Pi_{k-2,D}^0 \zeta\|_{L_2(D)}^2 +\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\mathcal{F}_\ssD}\|\Pi_{k-1,F}^0 \zeta\|_{L_2(F)}^2, \end{equation} where $\Pi_{k-1,F}^0$ is the projection from $L_2(F)$ onto $\mathbb{P}_{k-1}(F)$. 
\par We have the following estimates for $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}$ and $\Pi_{k,D}^0\hspace{1pt}$: \begin{equation}\label{eq:3DLTwoPODPDZ} \|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} \zeta\|_{L_2(D)}+ \|\zeta-\Pi_{k,D}^0\hspace{1pt}\zeta\|_{L_2(D)} \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell+1}|\zeta|_{H^{\ell+1}(D)} \quad\forall\,\zeta\in H^{\ell+1}(D),\,0\leq\ell\leq k, \end{equation} and for $1\leq\ell \leq k$, \begin{alignat}{3} |\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}\zeta|_{{H^1(D)}}+ |\zeta-\Pi_{k,D}^0\hspace{1pt}\zeta|_{{H^1(D)}}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell}|\zeta|_{H^{\ell+1}(D)} &\quad&\forall\,\zeta\in H^{\ell+1}(D), \label{eq:3DHOnePODPDZ}\\ |\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} \zeta|_{H^2(D)}+|\zeta-\Pi_{k,D}^0\hspace{1pt}\zeta|_{H^2(D)}& \lesssim h_D^{\ell-1}|\zeta|_{H^{\ell+1}(D)} &\quad& \forall\,\zeta\in H^{\ell+1}(D).\label{eq:3DHTwoPODPDZ} \end{alignat} \par The analogs of Lemma~\ref{lem:PODStability2} and \eqref{eq:tbarNorm3D} lead to the estimate \begin{equation}\label{eq:3DPODStability2} \|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} \zeta\|_{L_2(D)}^2\lesssim |\!|\!|\zeta|\!|\!|_{k,D}^2 \lesssim \|\zeta\|_{L_2(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}\|\zeta\|_{L_2(\partial D)}^2\qquad \forall\,\zeta\in H^1(D), \end{equation} and we also have the following analog of \eqref{eq:PDZHOne}: \begin{equation}\label{eq:3DPDZHOne} |\Pi_{k,D}^0\hspace{1pt} \zeta|_{H^1(D)}\lesssim |\zeta|_{H^1(D)} \qquad\forall\,\zeta\in{H^1(D)}. 
\end{equation} \subsection{The Local Virtual Element Space $\bm\cQ^k(D)$} \label{subsec:Local3D} The space $\cQ^k(\p D)$ of continuous piecewise (two dimensional) virtual element functions of order $\leq k$ on $\partial D$ is defined by \begin{equation}\label{eq:cQbDDef} \cQ^k(\p D)=\{v\in C(\partial D):\,v\big|_F\in \mathcal{Q}^k(F) \quad\forall\,F\in \mathcal{F}_\ssD\}. \end{equation} \par For $k\geq 1$, the virtual element space $\cQ^k(D)\subset {H^1(D)}$ is defined by the following conditions: $v\in {H^1(D)}$ belongs to $\cQ^k(D)$ if and only if (i) the trace of $v$ on $\partial D$ belongs to $\cQ^k(\p D)$, (ii) the distribution $-\Delta v$ belongs to $\mathbb{P}_k(D)$, and (iii) condition \eqref{eq:Condition3} is satisfied. \begin{remark}\label{rem:3DContinuity}{\rm Since the restriction of $v\in\cQ^k(D)$ to $\partial D$ belongs to $C(\partial D)$ and $-\Delta v\in \mathbb{P}_k(D)$, the virtual element function $v$ is also continuous on $\bar D$ (cf. \cite[Section~1.2]{Kenig:1994:CBMS}).} \end{remark} \begin{remark}\label{rem:3Ddofs}{\rm The degrees of freedom of $\cQ^k(D)$ (cf. \cite{AABMR:2013:Projector}) consist of (i) the values of $v$ at the vertices of $D$ and nodes on the interior of each edge of $D$ that determine a polynomial of degree $k$ on each edge of $D$, (ii) the moments of $\Pi_{k-2,F}^0 v$ on each face $F$ of $D$, and (iii) the moments of $\Pi_{k-2,D}^0 v$ on $D$.} \end{remark} \begin{remark}\label{rem:3DComputable}{\rm For $v\in\cQ^k(D)$ and $F\in\mathcal{F}_\ssD$, the polynomial $\Pi_{k,F}^0 v$ can be computed in terms of the degrees of freedom of $v\big|_F$ (cf. Remark~\ref{rem:Computable}). Therefore the polynomial $\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v$ can be computed in terms of the degrees of freedom of $v\in\cQ^k(D)$ through \eqref{eq:POD1} and \eqref{eq:POD2}. The polynomial $\Pi_{k,D}^0\hspace{1pt} v$ can then be computed through \eqref{eq:Condition3}. 
}
\end{remark}
\begin{remark}\label{rem:DAndF}{\rm Under Assumption~3 in Section~\ref{subsec:Shape3D}, the results in Section~\ref{sec:LocalVEM2D} (with $D$ replaced by $F$) are valid for the restriction of $v\in \cQ^k(D)$ to any face $F$ of $D$.
}
\end{remark}
\par The three dimensional analogs of Lemma~\ref{lem:MEP}, Lemma~\ref{lem:Fundamental} and Lemma~\ref{lem:InverseEstimate1} lead to the estimate
\begin{equation}\label{eq:3DInverse}
|v|_{H^1(D)}^2\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}|\!|\!|v|\!|\!|_{k,D}^2
+\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\mathcal{F}_\ssD}\|\nabla_{\!\!F} v\|_{L_2(F)}^2 \qquad\forall\,v\in\cQ^k(D),
\end{equation}
where $\nabla_{\!\!F}$ is the two dimensional gradient operator on the face $F$, and we also have an analog of \eqref{eq:KerPod13}:
\begin{equation}\label{eq:3DKerPOD}
\|\Pi_{k-2,D}^0 v\|_{L_2(D)}^2\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} \|\Pi_{k-1,F}^0 v\|_{L_2(F)}^2 \qquad\forall\,v\in \mathcal{N}(\POD).
\end{equation}
Hence we have, by \eqref{eq:tbarNorm3D}, \eqref{eq:3DInverse} and \eqref{eq:3DKerPOD},
\begin{equation}\label{eq:3DInverse2}
|v|_{H^1(D)}^2\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\sum_{F\in\mathcal{F}_\ssD}\|\Pi_{k-1,F}^0 v\|_{L_2(F)}^2
+\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\mathcal{F}_\ssD}\|\nabla_{\!\!F} v\|_{L_2(F)}^2\qquad\forall\,v\in\mathcal{N}(\POD).
\end{equation}
\par The interpolation operator $I_{k,D}:H^2(D)\longrightarrow \cQ^k(D)$ is defined by the condition that $I_{k,D}\zeta$ and $\zeta$ share the same degrees of freedom. In particular we have
\begin{equation}\label{eq:3DIDInvariance}
I_{k,D} q=q\qquad\forall\,q\in \mathbb{P}_k(D).
\end{equation}
\par Note that
\begin{equation}\label{eq:ConsistentID}
I_{k,F} (\zeta\big|_F)=(I_{k,D}\zeta)\big|_F \qquad\forall\,F\in\mathcal{F}_\ssD,
\end{equation}
and hence, in view of \eqref{eq:G2}, Lemma~\ref{lem:MaximumPrinciple} and \eqref{eq:tbarNorm3D},
\begin{align}\label{eq:3DIDtbar}
|\!|\!| I_{k,D}\zeta|\!|\!|_{k,D}^2&=\|\Pi_{k-2,D}^0 (I_{k,D}\zeta)\|_{L_2(D)}^2
+\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}\|\Pi_{k-1,F}^0 (I_{k,D}\zeta)\|_{L_2(F)}^2\notag\\
&=\|\Pi_{k-2,D}^0\zeta\|_{L_2(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}
\|\Pi_{k-1,F}^0(I_{k,F}\zeta)\|_{L_2(F)}^2\\
&\lesssim \|\zeta\|_{L_2(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}h_F^2\|I_{k,F}\zeta\|_{L_\infty(F)}^2\notag\\
&\lesssim \|\zeta\|_{L_2(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}h_F^2\big(\|I_{k,F}\zeta\|_{L_\infty(\partial F)}^2
+\|\nabla_{\!\!F}(I_{k,F}\zeta)\|_{L_2(F)}^2\big).\notag
\end{align}
\par The error estimates for $I_{k,D}$ rely on the following analog of \eqref{eq:IDHOne}, where $\tau_{\!\ssF}$ is defined by replacing $D$ by $F$ in \eqref{eq:TauD}.
\begin{lemma}\label{lem:3DIDHOne} We have
\begin{equation}\label{eq:3DIDHOne}
|I_{k,D}\zeta|_{H^1(D)}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}\|\zeta\|_{L_2(D)}
+|\zeta|_{H^1(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^2(D)}
\end{equation}
for all $\zeta\in{H^2(D)}$.
\end{lemma}
\begin{proof}
Let $\zeta \in H^2(D)$ be arbitrary. It follows from \eqref{eq:3DInverse} and \eqref{eq:3DIDtbar} that
\begin{align*}
|I_{k,D}\zeta|_{H^1(D)}^2 &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}\|\zeta\|_{L_2(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}} \sum_{F\in\FD} \|I_{k,F} \zeta\|_{L_\infty(\partial F)}^2
+\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} \|\nabla_{\!\!F}(I_{k,F}\zeta)\|_{L_2(F)}^2.
\end{align*}
We have, by \eqref{eq:Sobolev} and \eqref{eq:TrivialBdd},
\begin{align}\label{eq:IFLinfty}
\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}\|I_{k,F}\zeta\|_{L_\infty(\partial F)}^2&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} \|\zeta\|_{L_\infty(\partial F)}^2\\
&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\|\zeta\|_{L_\infty(D)}^2\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}\|\zeta\|_{L_2(D)}^2
+|\zeta|_{H^1(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta|_{H^2(D)}^2,\notag
\end{align}
and by \eqref{eq:HalfTrace}, Lemma~\ref{lem:CZ2} and \eqref{eq:IDHOneHalf},
\begin{align}\label{eq:IFHone}
\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} \|\nabla_{\!\!F}(I_{k,F}\zeta)\|_{L_2(F)}^2&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}\big(|\zeta|_{H^1(F)}^2+h_F |\zeta|_{H^{3/2}(F)}^2\big)\\
&\lesssim |\zeta|_{H^1(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}^{2}|\zeta|_{H^2(D)}^2. \notag
\end{align}
\end{proof}
\par Note that \eqref{eq:3DIDtbar}, \eqref{eq:IFLinfty} and \eqref{eq:IFHone} imply
\begin{equation}\label{eq:3DIDtbar2}
|\!|\!| I_{k,D}\zeta|\!|\!|_{k,D}\lesssim \|\zeta\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^1(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta|_{H^2(D)} \quad\forall\,\zeta\in{H^2(D)},
\end{equation}
and hence we have, in view of \eqref{eq:3DLTwoPODPDZ}, \eqref{eq:3DPODStability2}, \eqref{eq:3DIDtbar} and \eqref{eq:3DIDHOne},
\begin{align}\label{eq:3DIDLTwo}
\|I_{k,D}\zeta\|_{L_2(D)}&\leq \|I_{k,D}\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta\|_{L_2(D)}+\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta\|_{L_2(D)}\notag\\
&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}|I_{k,D}\zeta|_{H^1(D)}+|\!|\!| I_{k,D}\zeta|\!|\!|_{k,D}\\
&\lesssim \big(\|\zeta\|_{L_2(D)}
+\hspace{1pt}h_{{\scriptscriptstyle D}}|\zeta|_{H^1(D)}+\hspace{1pt}h_{{\scriptscriptstyle
D}}^2|\zeta|_{H^2(D)}\big)\notag
\end{align}
for all $\zeta\in H^2(D)$.
\par In view of \eqref{eq:3DIDInvariance}, the following analogs of \eqref{eq:LTwoPODID}--\eqref{eq:HTwoPODID}, where $\zeta\in H^{\ell+1}(D)$ and $1\leq\ell\leq k$, can be obtained by combining the Bramble-Hilbert estimates \eqref{eq:BHEstimates} with the stability estimates \eqref{eq:3DPODStability1}, \eqref{eq:3DPODStability2}, \eqref{eq:3DIDHOne}, \eqref{eq:3DIDtbar2} and \eqref{eq:3DIDLTwo}.
\begin{alignat}{3}
\|\zeta-I_{k,D}\zeta\|_{L_2(D)}+\|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta\|_{L_2(D)} &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell+1}|\zeta|_{H^{\ell+1}(D)}
\label{eq:3DLTwoPODID}\\
|\zeta-I_{k,D}\zeta|_{{H^1(D)}}+|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{{H^1(D)}} &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell}|\zeta|_{H^{\ell+1}(D)}
\label{eq:3DHOnePODID}\\
|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{H^2(D)}&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell-1}|\zeta|_{H^{\ell+1}(D)}
\label{eq:3DHTwoPODID}
\end{alignat}
\begin{remark}\label{rem:3DLInftyInterpolation}{\rm We also have the following analog of \eqref{eq:1DInterpolationError}:
\begin{equation*}
\|\zeta-I_{k,D}\zeta\|_{L_\infty(D)}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{\ell-\frac12}|\zeta|_{H^{\ell+1}(D)}
\end{equation*}
for all $\zeta\in H^{\ell+1}(D)$ and $1\leq \ell\leq k$. The proof uses Lemma~\ref{lem:MaximumPrinciple} (which is valid in three dimensions) and the arguments for \eqref{eq:1DInterpolationError}. But we do not need this estimate in the error analysis.
} \end{remark} \subsection{The Discrete Problem}\label{subsec:DP3D} Let the global virtual element space $\cQ^k_h$ be defined by $$\cQ^k_h=\{v\in H^1_0({\Omega}):\,v\big|_D\in\cQ^k(D)\quad\forall\,D\in\mathcal{T}_h\}.$$ The discrete problem for \eqref{eq:Poisson} is to find $u_h\in\cQ^k_h$ such that \begin{equation*} a_h(u_h,v)=(f,\Xi_h v) \qquad\forall\,v\in\cQ^k_h, \end{equation*} where $\Xi_h$ is defined as in \eqref{eq:Xih}, \begin{equation*} a_h(w,v)=\sum_{D\in\cT_h}\big[\big(\nabla(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} w),\nabla(\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)\big)_{L_2(D)}+S^D(w-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} w,v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)\big], \end{equation*} and the local stabilizing bilinear form $S^D(\cdot,\cdot)$ is given by \begin{align} S^D(w,v)&=\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}\Big(h_F^{-2}(\Pi_{k-2,F}^0 w,\Pi_{k-2,F}^0 v)_{L_2(F)}+ \sum_{p\in\mathcal{N}_{\p F}}w(p)v(p)\Big).\label{eq:3DSD} \end{align} Here $\mathcal{N}_{\p F}$ is the set of the nodes along $\partial F$ associated with the degrees of freedom of a virtual element function. \begin{lemma}\label{lem:3DSDBdd} There exists a positive constant $C$, depending only on $\rho$, $N$ and $k$, such that \begin{alignat}{3} |v|_{H^1(D)}^2&\leq C \big [\ln(1+\max_{F\in\mathcal{F}_\ssD}\tau_{\!\ssF})\big] S^D(v,v) &\qquad&\forall\,v\in\mathcal{N}(\POD). \label{eq:3DSDBdd} \end{alignat} \end{lemma} \begin{proof} Let $v\in\mathcal{N}(\POD)$ be arbitrary. 
We have, by \eqref{eq:G2}, \eqref{eq:PF2}, Lemma~\ref{lem:MaximumPrinciple}, Corollary~\ref{cor:InverseEstimate3} and \eqref{eq:3DInverse2}, \begin{align*} |v|_{H^1(D)}^2&\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} \big(\hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}\|v\|_{L_2(F)}^2 +\|\nabla_{\!\!F} v\|_{L_2(F)}^2\big)\notag\\ &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}\Big(\hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}h_F^2\big(\|v\|_{L_\infty(\partial F)}^2 +\|\nabla_{\!\!F} v\|_{L_2(F)}^2\big)+\|\nabla_{\!\!F} v\|_{L_2(F)}^2\Big)\\ &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}\big(\|v\|_{L_\infty(\partial F)}^2+\|\nabla_{\!\!F} v\|_{L_2(F)}^2\big)\\ &\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}\big(h_F^{-2}\|\Pi_{k-2,F}^0 v\|_{L_2(F)}^2 +\ln(1+\tau_{\!\ssF})\|v\|_{L_\infty(\partial F)}^2\big), \end{align*} which together with Remark~\ref{rem:NormEquivalence} and \eqref{eq:3DSD} implies \eqref{eq:3DSDBdd}. \end{proof} \par It follows from \eqref{eq:3DSDBdd} that we have an analog of \eqref{eq:Stability}: \begin{equation}\label{eq:3DStability} |v|_{H^1({\Omega})}^2\leq 2\sum_{D\in\cT_h}\big[|\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} v|_{H^1({\Omega})}^2 +|v-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v|_{{H^1(D)}}^2\big]\lesssim \beta_h a_h(v,v) \qquad\forall\,v\in \cQ^k_h, \end{equation} where \begin{equation}\label{eq:lambdah} \beta_h= \ln(1+\max_{F\in\mathcal{F}_h}\tau_{\!\ssF}). \end{equation} Hence the discrete problem is well-posed. \begin{remark}\label{rem:3DInfo}{\rm The constants in the error estimates for the virtual element methods will only depend on $\rho$, $N$, $k$ and $\beta_h$. Therefore the existence of small faces in $\mathcal{T}_h$ does not affect the performance of the method. It is only the relative sizes of the edges on each face that matter. 
} \end{remark} \par Note that the estimates in Lemma~\ref{lem:Xih} are also valid for ${\Omega}\subset \mathbb{R}^3$. \subsection{Error Estimates in the Energy Norm}\label{subsec:3DEnergyError} The abstract error estimate \begin{equation}\label{eq:3DAbstractEnergyError} \|u-u_h\|_h\lesssim \|u-I_{k,h} u\|_h+\|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h +\sqrt{\beta_h}\Big(|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u|_{h,1}+ \sup_{w\in\cQ^k_h}\frac{(f,w-\Xi_hw)}{|w|_{H^1({\Omega})}}\Big) \end{equation} is obtained by the same arguments as in Section~\ref{subsec:AbstractEnergyError}, where $|\cdot|_{h,1}$ is defined in \eqref{eq:PiecewiseHOneNOrm}. \par We will derive concrete error estimates under the assumption that $u$ belongs to $H^{\ell+1}({\Omega})$ for $1\leq\ell\leq k$. Since the estimate \begin{equation}\label{eq:3DEasy} |u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u|_{h,1}+ \sup_{w\in\cQ^k_h}\frac{(f,w-\Xi_hw)}{|w|_{H^1({\Omega})}} \lesssim h^\ell|u|_{H^{\ell+1}(\O)} \end{equation} remains the same, we only need to estimate $\|u-I_{k,h} u\|_h$ and $\|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h$. 
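The concrete estimates derived in this subsection predict convergence of order $h^\ell$ in the energy norm. When verifying such predicted rates in numerical experiments, a standard tool is the empirical order of convergence computed from errors on a sequence of refined meshes. The following is a minimal, implementation-independent sketch; the function name `eoc` and the synthetic error data are illustrative assumptions, not part of the paper:

```python
import math

def eoc(hs, errors):
    """Empirical orders of convergence: slopes of log(error) against
    log(h) between consecutive mesh refinements."""
    return [math.log(errors[i] / errors[i + 1]) / math.log(hs[i] / hs[i + 1])
            for i in range(len(hs) - 1)]

# Synthetic error data behaving like C * h^2, mimicking the predicted
# energy-norm rate for ell = 2 (up to the logarithmic factor beta_h):
hs = [1.0 / 2 ** j for j in range(1, 6)]
errors = [0.3 * h ** 2 for h in hs]
print(eoc(hs, errors))  # each entry close to 2.0
```

In an actual experiment the `errors` list would contain the computed energy-norm errors $\|u-u_h\|_h$ on successively refined meshes.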
\par It follows from \eqref{eq:G1}, \eqref{eq:3DPODStability1}, \eqref{eq:ConsistentID} and \eqref{eq:3DSD} that \begin{align}\label{eq:3DInterpolationError1} \|u-I_{k,h} u\|_h^2&\lesssim \sum_{D\in\cT_h}|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)|_{H^1(D)}^2 +\sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} h_F^{-2}\|u-I_{k,D} u\|_{L_2(F)}^2\notag\\ &\hspace{60pt}+\sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} h_F^{-2}\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-I_{k,D} u)\|_{L_2(F)}^2\notag\\ &\hspace{90pt}+\sum_{D\in\cT_h} \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} \|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} (u-I_{k,D} u)\|_{L_\infty(\partial F)}^2\\ &\lesssim \sum_{D\in\cT_h} |u-I_{k,D} u|_{H^1(D)}^2 +\sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} h_F^{-2}\|u-I_{k,F} u\|_{L_2(F)}^2\notag\\ &\hspace{60pt}+\sum_{D\in\cT_h} \hspace{1pt}h_{{\scriptscriptstyle D}}\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} (u-I_{k,D} u)\|_{L_\infty(D)}^2,\notag \end{align} and we have, by \eqref{eq:HalfTrace} and \eqref{eq:HalfIDLTwoError}, \begin{equation}\label{eq:3DInterpolationError2} \sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} h_F^{-2}\|u-I_{k,F} u\|_{L_2(F)}^2 \lesssim \sum_{D\in\cT_h} \hspace{1pt}h_{{\scriptscriptstyle D}} \sum_{F\in\FD} h_F^{2\ell-1}|u|_{H^{\ell+(1/2)}(F)}^2 \lesssim h^{2\ell}|u|_{H^{\ell+1}(\O)}^2. 
\end{equation} Moreover the estimates \eqref{eq:G1}, \eqref{eq:DiscreteEstimate3} and \eqref{eq:3DPODStability1} imply \begin{align}\label{eq:3DInterpolationError3} &\sum_{D\in\cT_h} \hspace{1pt}h_{{\scriptscriptstyle D}}\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} (u-I_{k,D} u)\|_{L_\infty(D)}^2 \notag\\ &\hspace{60pt} \lesssim \sum_{D\in\cT_h} \hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} (u-I_{k,D} u)\|_{L_2(D)}^2 +\sum_{D\in\cT_h}|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} (u-I_{k,D} u)|_{H^1(D)}^2\\ &\hspace{60pt}\lesssim \sum_{D\in\cT_h} \hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}\big(\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u-u\|_{L_2(D)}^2 +\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} u\|_{L_2(D)}^2\big)\notag\\ &\hspace{100pt}+\sum_{D\in\cT_h}|u-I_{k,D} u|_{H^1(D)}^2.\notag \end{align} \par Combining \eqref{eq:3DLTwoPODPDZ}, \eqref{eq:3DLTwoPODID}, \eqref{eq:3DHOnePODID} and \eqref{eq:3DInterpolationError1}--\eqref{eq:3DInterpolationError3}, we obtain \begin{equation}\label{eq:3DInterpolationError} \|u-I_{k,h} u\|_h^2\lesssim h^{2\ell}|u|_{H^{\ell+1}(\O)}^2, \end{equation} which is the analog of \eqref{eq:InterpolationError}. 
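The interpolation error bound \eqref{eq:3DInterpolationError} is an $O(h^{2\ell})$ estimate for the squared error, and \eqref{eq:3DLTwoPODID} gives order $\ell+1$ in $L_2$. As a hedged numerical illustration of this kind of interpolation rate, the sketch below measures the $L_2$ error of ordinary one-dimensional piecewise-linear nodal interpolation (a classical stand-in, not the virtual element interpolant $I_{k,h}$) and recovers the expected $L_2$ order $\ell+1=2$:

```python
import math

def l2_interp_error(n, f=lambda x: math.sin(math.pi * x)):
    """L2 error of piecewise-linear nodal interpolation of f on n uniform
    intervals of [0, 1], computed by composite midpoint quadrature."""
    h, err2, m = 1.0 / n, 0.0, 50  # m quadrature points per interval
    for i in range(n):
        a, fa, fb = i * h, f(i * h), f((i + 1) * h)
        for j in range(m):
            x = a + (j + 0.5) * h / m
            p = fa + (fb - fa) * (x - a) / h  # linear interpolant on [a, a+h]
            err2 += (f(x) - p) ** 2 * (h / m)
    return math.sqrt(err2)

e16, e32 = l2_interp_error(16), l2_interp_error(32)
rate = math.log(e16 / e32) / math.log(2.0)
print(rate)  # observed order, close to ell + 1 = 2
```

Halving the mesh size should divide the $L_2$ error by roughly four, which is exactly what the observed rate reflects.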
\par From \eqref{eq:Sobolev}, \eqref{eq:G1}, \eqref{eq:3DLTwoPODPDZ}--\eqref{eq:3DHTwoPODPDZ} and \eqref{eq:3DSD}, we find \begin{align}\label{eq:3DProjectionError} \|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u\|_h^2&\lesssim \sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} h_F^{-2}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u\|_{L_2(F)}^2 +\sum_{D\in\cT_h} \hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD} \|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u\|_{L_\infty(\partial F)}^2\notag\\ &\lesssim \sum_{D\in\cT_h} \hspace{1pt}h_{{\scriptscriptstyle D}}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u\|_{L_\infty(D)}^2\\ &\lesssim \sum_{D\in\cT_h}\big( \hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u\|_{L_2(D)}^2 +|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{H^1(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{H^2(D)}^2\big)\notag\\ &\lesssim h^{2\ell}|u|_{H^{\ell+1}(\O)}^2,\notag \end{align} which is the analog of \eqref{eq:ProjectionError}. \par The estimates \eqref{eq:3DAbstractEnergyError}, \eqref{eq:3DEasy}, \eqref{eq:3DInterpolationError} and \eqref{eq:3DProjectionError} lead to the following analog of Theorem~\ref{thm:ConcreteEnergyError}. \begin{theorem}\label{thm:3DConcreteEnergyError} Assuming the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for $\ell$ between $1$ and $k$, we have \begin{equation}\label{eq:3DConcreteEnergyError} \|u-u_h\|_h\leq C \sqrt{\beta_h}h^\ell|u|_{H^{\ell+1}(\O)}, \end{equation} where $\beta_h$ is defined in \eqref{eq:lambdah} and the positive constant $C$ depends only on $\rho$, $N$ and $k$. 
\end{theorem} \par The following analog of Theorem~\ref{thm:ComputableEnergyError} on the computable approximate solutions $\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h$ and $\Pi_{k,h}^0 u_h$ is obtained by the same arguments. \begin{theorem}\label{thm:3DComputableEnergyError} Assuming the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for $\ell$ between $1$ and $k$, there exists a positive constant $C$, depending only on $\rho$, $N$ and $k$, such that \begin{equation}\label{eq:3DComputableEnergyError} |u-u_h|_{H^1({\Omega})}+\sqrt{\beta_h}\big[|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h|_{h,1} +|u-\Pi_{k,h}^0 u_h|_{h,1}\big]\leq C \beta_h h^\ell |u|_{H^{\ell+1}({\Omega})}, \end{equation} where $\beta_h$ is defined in \eqref{eq:lambdah}. \end{theorem} \subsection{Error Estimates in the $L_2$ Norm}\label{subsec:3DL2Error} We begin with an analog of Lemma~\ref{lem:SDEstimate}. \begin{lemma}\label{lem:3DSDEstimate} We have \begin{equation*} \sum_{D\in\cT_h} S^D(\zeta-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}I_{k,h} \zeta,\zeta-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}I_{k,h} \zeta)\lesssim h^2|\zeta|_{H^2({\Omega})}^2\qquad\forall \zeta\in H^2({\Omega})\cap H^1_0({\Omega}). 
\end{equation*} \end{lemma} \begin{proof} It follows from \eqref{eq:Sobolev}, \eqref{eq:3DLTwoPODID}--\eqref{eq:3DHTwoPODID} and \eqref{eq:3DSD} that \begin{align*} &\sum_{D\in\cT_h} S^D(\zeta-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} I_{k,h}\zeta,\zeta-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} I_{k,h}\zeta)\\ &\hspace{30pt}\lesssim \sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}\big(h_F^{-2}\|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta\|_{L_2(F)}^2 +\|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta\|_{L_\infty(\partial F)}^2\big)\\ &\hspace{30pt}\lesssim \sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}}\|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta\|_{L_\infty(\partial D)}^2\\ &\hspace{30pt}\lesssim \sum_{D\in\cT_h} \big(\hspace{1pt}h_{{\scriptscriptstyle D}}^{-2}\|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta\|_{L_2(D)}^2+ |\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{H^1(D)}^2+\hspace{1pt}h_{{\scriptscriptstyle D}}^2|\zeta-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D}\zeta|_{H^2(D)}^2\big)\\ &\hspace{30pt}\lesssim h^2|\zeta|_{H^2({\Omega})}^2. \end{align*} \end{proof} \goodbreak \par The same arguments as in the proof of Lemma~\ref{lem:POSD} lead to the following result. \begin{lemma}\label{lem:3DPOSD} We have \begin{equation*} \sum_{D\in\cT_h} S^D(u_h-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h,u_h-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h)\lesssim \beta_h^2 h^{2\ell}|u|_{H^{\ell+1}(\O)}^2. 
\end{equation*} \end{lemma} \par With Lemma~\ref{lem:3DSDEstimate} and Lemma~\ref{lem:3DPOSD} in hand, we obtain the following analog of Theorem~\ref{thm:uhLTwoError} and Theorem~\ref{thm:ComputableLTwoErrors} by identical arguments. \begin{theorem}\label{thm:3DLTwoErrors} Assuming $u\in H^{\ell+1}({\Omega})$ for some $\ell$ between $1$ and $k$, there exists a positive constant $C$, depending only on $N$, $k$ and $\rho$, such that \begin{align*} \|u-u_h\|_{L_2({\Omega})}+\|u-\Pi_{k,h}^0 u_h\|_{L_2({\Omega})}+ \|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h\|_{L_2({\Omega})}\leq C\beta_h h^{\ell+1}|u|_{H^{\ell+1}({\Omega})}, \end{align*} where $\beta_h$ is defined in \eqref{eq:lambdah}. \end{theorem} \subsection{Error Estimate in the $L_\infty$ Norm}\label{subsec:3DInftyError} We will derive $L_\infty$ error estimates under the additional assumption that $\mathcal{T}_h$ is quasi-uniform (cf. \eqref{eq:gamma}). We begin with an analog of Theorem~\ref{thm:MaxNormBdryEst1}. \begin{theorem}\label{thm:uhInftyBdryEst} Assuming $\mathcal{T}_h$ is quasi-uniform and the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for $\ell$ between $1$ and $k$, we have \begin{equation}\label{eq:InftyBdryEst} \max_{e\in\mathcal{E}_h}\|u-u_h\|_{L_\infty(e)}\leq C\beta_h h^{\ell}|u|_{H^{\ell+1}(\O)}, \end{equation} where $\beta_h$ is defined in \eqref{eq:lambdah} and the positive constant $C$ only depends on $\rho$, $N$, $\gamma$ and $k$. 
\end{theorem} \begin{proof} It follows from Remark~\ref{rem:NormEquivalence} that \begin{equation*} \sum_{D\in\cT_h}\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}\|(I_{k,D} u-u_h)-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} (I_{k,D} u-u_h)\|_{L_\infty(\partial F)}^2 \lesssim \|I_{k,h} u-u_h\|_h^2 \end{equation*} and hence, for any $D\in\mathcal{T}_h$ and $F\in\mathcal{F}_\ssD$, \begin{equation}\label{eq:3DBdryEst1} \|(I_{k,D} u-u_h)-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} (I_{k,D} u-u_h)\|_{L_\infty(\partial F)}^2\lesssim \beta_h\, h^{2\ell-1}|u|_{H^{\ell+1}(\O)}^2 \end{equation} by \eqref{eq:3DInterpolationError}, Theorem~\ref{thm:3DConcreteEnergyError} and the quasi-uniformity of $\mathcal{T}_h$. \par For any $D\in\mathcal{T}_h$ and $F\in\mathcal{F}_\ssD$, we have, by \eqref{eq:G1}, \eqref{eq:DiscreteEstimate3}, \eqref{eq:3DPODStability1}, \eqref{eq:3DLTwoPODID}, \eqref{eq:3DHOnePODID}, Theorem~\ref{thm:3DLTwoErrors} and the quasi-uniformity of $\mathcal{T}_h$, \begin{align}\label{eq:3DBdryEst2} &\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(I_{k,D} u-u_h)\|_{L_\infty(F)}^2\notag\\ &\hspace{40pt}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-3}\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(I_{k,D} u-u_h)\|_{L_2(D)}^2+ \hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(I_{k,D} u-u_h)|_{H^1(D)}^2\notag\\ &\hspace{40pt}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-3}\big(\|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}I_{k,D} u-u\|_{L_2(D)}^2 +\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{L_2(D)}^2\big) +\hspace{1pt}h_{{\scriptscriptstyle D}}^{-1}|I_{k,D} u-u_h|_{H^1(D)}^2\\ &\hspace{40pt}\lesssim \beta_h^2 h^{2\ell-1}|u|_{H^{\ell+1}(\O)}^2.\notag \end{align} \par Finally we have, for any $D\in\mathcal{T}_h$ and $F\in\mathcal{F}_\ssD$, 
\begin{equation}\label{eq:3DBdryEst3} \|u-I_{k,D} u\|_{L_\infty(F)}^2=\|u-I_{k,F} u\|_{L_\infty(F)}^2\lesssim h_F^{2\ell-1}|u|_{H^{\ell+\frac12}(F)}^2 \lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{2\ell-1}|u|_{H^{\ell+1}(D)}^2 \end{equation} by \eqref{eq:HalfTrace}, \eqref{eq:1DInterpolationErrorHalf} and \eqref{eq:ConsistentID}. \par The estimate \eqref{eq:InftyBdryEst} then follows from \eqref{eq:3DBdryEst1}--\eqref{eq:3DBdryEst3} and the triangle inequality. \end{proof} \par We also have estimates for the computable approximate solutions $\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h$ and $\Pi_{k,h}^0 u_h$. \begin{theorem}\label{thm:3DMaxNormBdryEst} Assuming $\mathcal{T}_h$ is quasi-uniform and the solution $u$ of \eqref{eq:Poisson} belongs to $H^{\ell+1}({\Omega})$ for $\ell$ between $1$ and $k$, there exists a positive constant $C$, depending only on $\rho$, $N$, $\gamma$ and $k$, such that \begin{equation}\label{eq:3DLInftyError} \|u-\Pi_{k,h}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}} u_h\|_{L_\infty({\Omega})}+\|u-\Pi_{k,h}^0 u_h\|_{L_\infty({\Omega})}\leq C \beta_h h^{\ell-(1/2)}|u|_{H^{\ell+1}(\O)}, \end{equation} where $\beta_h$ is defined in \eqref{eq:lambdah}. 
\end{theorem} \begin{proof} For any $D\in\mathcal{T}_h$, we have, by \eqref{eq:Sobolev}, \eqref{eq:DiscreteEstimate1} and \eqref{eq:3DPODStability1}, \begin{align*} &\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{{L_\infty(D)}}\\ &\hspace{40pt}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-\frac32}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{-\frac12}|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h|_{H^1(D)} +\hspace{1pt}h_{{\scriptscriptstyle D}}^\frac12|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h|_{H^2(D)}\\ &\hspace{40pt}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-\frac32}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{-\frac12}|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h|_{H^1(D)} +\hspace{1pt}h_{{\scriptscriptstyle D}}^\frac12|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{H^2(D)}\\ &\hspace{70pt}+\hspace{1pt}h_{{\scriptscriptstyle D}}^\frac12|\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt}(u-u_h)|_{H^2(D)}\\ &\hspace{40pt}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-\frac32}\|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{-\frac12}|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u_h|_{H^1(D)} +\hspace{1pt}h_{{\scriptscriptstyle D}}^\frac12|u-\Pi_{k,D}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} u|_{H^2(D)}\\ &\hspace{60pt}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{-\frac12}|u-u_h|_{H^1(D)}, \end{align*} which together with \eqref{eq:3DHTwoPODPDZ}, Theorem~\ref{thm:3DComputableEnergyError}, Theorem~\ref{thm:3DLTwoErrors} and the quasi-uniformity of $\mathcal{T}_h$ implies the estimate for $u-\Pi_{k,h}^{\raise 
1pt\hbox{$\scriptscriptstyle\nabla$}} u_h$. \par Similarly we have, by \eqref{eq:Sobolev}, \eqref{eq:DiscreteEstimate1} and \eqref{eq:3DPDZHOne}, \begin{align*} &\|u-\Pi_{k,D}^0\hspace{1pt} u_h\|_{{L_\infty(D)}}\\ &\hspace{40pt}\lesssim \hspace{1pt}h_{{\scriptscriptstyle D}}^{-\frac32}\|u-\Pi_{k,D}^0\hspace{1pt} u_h\|_{L_2(D)}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{-\frac12}|u-\Pi_{k,D}^0\hspace{1pt} u_h|_{H^1(D)} +\hspace{1pt}h_{{\scriptscriptstyle D}}^\frac12|u-\Pi_{k,D}^0\hspace{1pt} u|_{H^2(D)}\\ &\hspace{70pt}+\hspace{1pt}h_{{\scriptscriptstyle D}}^{-\frac12}|u-u_h|_{H^1(D)}, \end{align*} which together with \eqref{eq:3DHTwoPODPDZ}, Theorem~\ref{thm:3DComputableEnergyError}, Theorem~\ref{thm:3DLTwoErrors} and the quasi-uniformity of $\mathcal{T}_h$ implies the estimate for $u-\Pi_{k,h}^0 u_h$. \end{proof} \section{Concluding Remarks}\label{sec:Conclusions} We have developed error estimates for virtual element methods for the model Poisson problem in two and three dimensions that provide justifications for existing numerical results for polygonal (or polyhedral) meshes with small edges (or faces). \par For the two dimensional problem, the convergence of the virtual element method based on the stabilizing bilinear form $S^D_2(\cdot,\cdot)$ is optimal under the shape regularity assumptions in Section~\ref{subsec:GlobalShapeRegularity}. Under the additional assumption that the edges of any subdomain in a polygonal mesh are comparable to one another, convergence of the virtual element method based on the stabilizing bilinear form $S^D_1(\cdot,\cdot)$ is also optimal. \par For the three dimensional problem, the convergence of the virtual element method is optimal if, in addition to the assumptions in Section~\ref{subsec:Shape3D}, we also assume that the edges of any face in the polyhedral mesh are comparable to one another. 
\par The results in this paper can be extended to virtual element methods with the stabilizing bilinear form \begin{equation*} S^D(w,v)=(\Pi_{k-2,D}^0 w,\Pi_{k-2,D}^0 v)+\sum_{p\in\mathcal{N}_{\p D}}w(p)v(p) \end{equation*} in two dimensions, and the stabilizing bilinear form \begin{align*} S^D(w,v)&=(\Pi_{k-2,D}^0 w,\Pi_{k-2,D}^0 v)+\hspace{1pt}h_{{\scriptscriptstyle D}}\sum_{F\in\FD}\Big(h_F^{-2}(\Pi_{k-2,F}^0 w,\Pi_{k-2,F}^0 v)_{L_2(F)}+ \sum_{p\in\mathcal{N}_{\p F}}w(p)v(p)\Big) \end{align*} in three dimensions. The stability of these virtual element methods is automatic, and the error analysis does not pose any new difficulties. \par The results in this paper can also be extended to virtual element methods ($k\geq2$) where the inner product \eqref{eq:InnerProduct} is replaced by the inner product \begin{equation*} (\!(\zeta,\eta)\!)=(\nabla \zeta,\nabla \eta) +\Big(\int_{D}\zeta\,dx\Big)\Big(\int_{D} \eta\,dx\Big). \end{equation*} \par We note that error estimates for the Poisson problem on general polygonal or polyhedral domains can also be obtained by the techniques developed in this paper. \par Finally, it would be interesting to construct a three dimensional analog of the stabilizing bilinear form $S^D_2(\cdot,\cdot)$ defined in Section~\ref{subsec:DiscreteProblem} so that the convergence of the virtual element methods is optimal for polyhedral meshes with arbitrarily small faces and edges, and $L_\infty$ error estimates can be established without assuming the meshes are quasi-uniform. 
We conjecture that such a bilinear form can be defined by \begin{align*} S^D(v,w)&=\sum_{F\in\FD}h_F(\nabla_{\!\!F}\Pi_{k,F}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v,\nabla_{\!\!F}\Pi_{k,F}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} w)_{L_2(F)}\\ &\hspace{40pt}+\sum_{F\in\FD}h_F\sum_{e\in\cE_{\ssF}} h_e\big(\partial (v-\Pi_{k,F}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} v)/\partial s,\partial (w-\Pi_{k,F}^{\raise 1pt\hbox{$\scriptscriptstyle\nabla$}}\hspace{1pt} w)/\partial s\big)_{L_2(e)}. \end{align*}
https://arxiv.org/abs/1710.00442
Virtual Element Methods on Meshes with Small Edges or Faces
We consider a model Poisson problem in $\mathbb{R}^d$ ($d=2,3$) and establish error estimates for virtual element methods on polygonal or polyhedral meshes that can contain small edges ($d=2$) or small faces ($d=3$).
https://arxiv.org/abs/2112.10171
Newtonian mechanics in a Riemannian manifold
The work done by Isaac Newton more than three hundred years ago continues to be a path to increase our knowledge of Nature. One of the finest ways to better understand all the ideas behind it is to generalize them to wider situations. In this report we review one of these enlargements: the one that carries mechanical systems from the elementary homogeneous three-dimensional Euclidean space to the more abstract geometry of a Riemannian manifold.
\section{Introduction} Mechanics is an ancient wisdom of humankind. It is not possible to build the Egyptian Pyramids, for example, without some more or less organised intuitive ideas of mechanics, and they were made nearly five thousand years ago. Archimedes, 287--212 BC, was one of those who best exploited mechanical ideas as an interesting instrument, not only to conceive several devices but also to prove geometric results. As a science, mechanics began to emerge in modern times, in the 16th and 17th centuries, when Galileo, Kepler and Newton tried to explain the motion of the planets they knew in the solar system, using the old intuitive mechanical and geometric ideas together with the carefully collected observations of the positions of the planets recorded over time. Isaac Newton, 1643--1727, was the real founder of what we know as classical mechanics or analytical mechanics, as Lagrange, 1736--1813, called it. Newton's {\sl Principia}, \cite{Newton}, has been developed over more than three hundred years, and we continue to increase its understanding and applications. Nearly every scientific generation has had a novel approach to Newtonian mechanics and has increased its range of applications. In fact, analytical mechanics has become a basic tool to found the description of our Universe in a systematic form, and it can be said that all our modern knowledge of Nature has been developed following ideas coming, ultimately, from Newtonian mechanics. Geometry has always been related to mechanics. They have influenced each other in a very enriching way over time, each discipline serving as a source of problems and methods to state and solve questions in the other. More and more, new geometric techniques have been introduced in mechanics to obtain new results, or to better understand older ones, and applications. 
In particular, differential geometry is a powerful tool to clarify the deep relations between different expressions of the same problem, or solution, and the relations among magnitudes, thanks to the intrinsic formulation of the theory, that is, the independence of the equations under consideration from the coordinate system used to express them. Indeed, it has proven to be a very adequate way to express mechanical concepts in a short and more comprehensible form. As far as we know, the oldest reference using the name ``Mechanics in a Riemannian manifold'' is by R. Hermann in \cite{HE1968}, where one chapter has precisely this title. But clearly there are previous authors using Riemannian techniques to tackle mechanical problems. We necessarily need to cite Eisenhart, \cite{EISEN-1928}, and Synge, \cite{SYN-1926, SYN-1928}, most of these works inspired by Levi--Civita, the real pioneer. Other modern references are \cite{AM-78, Ar-89,BLOCH2015, BULE2005,CC-2005,Ol-02}, where this name appears together with geometric mechanics. We will use all of them without specific citation. It is interesting to note the special effort made by some authors, especially A.D. Lewis in \cite{LEWIS, Lewis2018}, to justify the passage from classical to geometric mechanics and the usefulness of this latter approach. At this point, it is important to say that this survey is by no means a historical review of classical or geometric mechanics. A very nice survey on historical aspects of geometric mechanics with an extensive bibliography is \cite{Leon-2017}. Some other historical remarks are contained in \cite{Ar-89}, in the introductions of its different chapters. The aim of this review is to develop some aspects of Newtonian mechanics in a Riemannian setting, hence without reference to the usual vector, or affine, space structure for the configuration manifold. 
With respect to present developments and applications, we will give only some of them, with the names of the people involved and general references in the corresponding sections, references which in turn contain a large number of more specific ones. With these ideas in mind, the organization of the paper is as follows: \begin{description} \item[Section 2]: Notations, definitions and the dynamical equation of the trajectories, including the Lagrangian formulation for conservative and more general systems. \item[Sections 3 and 4]: We study constrained systems, first with holonomic constraints and then with nonholonomic ones. The corresponding d'Alembert principles and the subsequent dynamical equations are stated, both in the Riemannian form and in the Euler--Lagrange setting. \item[Section 5]: A short section on non-autonomous systems. \item[Section 6]: Some classical subjects in Newtonian mechanics are included: Hamilton--Jacobi vector fields, geodesic fields, and the Hamilton--Jacobi equation in a Lagrangian setting. We finish the section with an approach to the stationary Euler equation for fluids as a Hamilton--Jacobi equation for a Newtonian system, and with comments and references on the relation between solutions to the Hamilton--Jacobi equation and those of the associated Schr\"odinger equation for this kind of system. \item[Section 7]: Comments and references on other topics in this approach and its applications: symmetries and Noether's theorem, and the control of mechanical systems. \item[Section 8]: Conclusions. \end{description} Observe that Sections 2, 3 and 4 contain the general theory, while Sections 6 and 7 are devoted to some specific developments and applications where the Riemannian approach is significantly enhancing. As general references on differential and Riemannian geometry we recommend \cite{Con2001,Lee2013,Lee2018}. 
With respect to analytical mechanics and geometric mechanics, see \cite{AM-78, Ar-89,Arovas2014, Ga-70,GPS-01,JS-98,LL-76, LM-sgam,MR-99,Ol-02, SC-71, Sch-2005, So-ssd}. A recent reference on the deep relations between geometry and physics is \cite{CIMM-2015}, covering not only classical mechanics but also quantum physics. As is usual in this approach, we assume that our manifolds and mappings are of $\mathcal{C}^\infty$--class. The Einstein index summation convention is also assumed. \section{Newtonian dynamical systems} \subsection{Definitions and dynamical equation} From a mathematical point of view, a \textbf{Newtonian mechanical system} is a triple $(Q,{\bf g},\omega)$, where \begin{enumerate} \item $Q$ is a differentiable manifold ($\dim\, Q=n$). \item ${\bf g}$ is a {\sl Riemannian metric} on $Q$. Then $(Q,{\bf g})$ is a Riemannian manifold. \item $\omega$ is a differential 1-form on $Q$, called the \textbf{work form}. \end{enumerate} Since ${\bf g}$ is a Riemannian metric, the work form $\omega\in{\mit\Omega}^1(Q)$ is associated to a unique vector field ${\rm F}\in\mathfrak{X} (Q)$ such that $\mathop{i}\nolimits({\rm F}) {\bf g}=\omega$. We call ${\rm F}$ the \textbf{field of forces} of the system. Clearly we can determine the system by the work form or by its force field; in the latter case we denote the system by $(Q,{\bf g},{\rm F})$. With these elements we have a differential equation: we look for curves, $\gamma\colon[a,b]\subset\mathbb{R}\to Q$, solutions to the equation \begin{equation}\label{Neweq} \nabla_{\dot{\gamma}}\dot\gamma ={\rm F}\circ\gamma \end{equation} where $\nabla$ is the {\sl Levi-Civita connection} associated to the Riemannian metric ${\bf g}$. 
Recall that, given a Riemannian metric ${\bf g}$ on the manifold $Q$, the Levi-Civita connection $\nabla$ is the unique linear connection which is symmetric, that is, with null torsion tensor field, and Riemannian, that is, $\nabla_Z({\bf g}(X,Y))= {\bf g}(\nabla_ZX,Y) + {\bf g}(X,\nabla_ZY)$, for every $X,Y,Z\in\mathfrak{X}(Q)$, or, what is the same, $\nabla_Z{\bf g}=0$ for every $Z\in\mathfrak{X}(Q)$. Equation (\ref{Neweq}) is called the \textbf{Newton equation}, or dynamical equation, of the system. A {\bf trajectory} of the system is a curve solution to the Newton equation. The manifold $Q$ is the \textbf{configuration space} of the system. If $\dim Q=n$, we say that the system has $n$ \textbf{degrees of freedom}. Its tangent bundle ${\rm T} Q$ is the \textbf{phase space of coordinates--velocities} and the cotangent bundle ${\rm T}^*Q$ is the \textbf{phase space of coordinates--momenta}. Both phase spaces are also called {\bf state spaces}. As we are describing physical systems, there must exist \textbf{observables} in order to make measurements and obtain results from the state of the system. The observables of the system with configuration space $(Q,{\bf g})$ are the real algebra ${\rm C}^\infty ({\rm T} Q)$ of smooth functions defined on the phase space; equivalently, the algebra ${\rm C}^\infty ({\rm T}^*Q)$. The value of an observable $f\in{\rm C}^\infty ({\rm T} Q)$ on a state $(q,v)\in{\rm T} Q$ is the real number $f(q,v)$. 
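In a local chart the Newton equation \eqref{Neweq} takes the coordinate form $\ddot\gamma^k+\Gamma^k_{ij}\dot\gamma^i\dot\gamma^j={\rm F}^k\circ\gamma$. As a minimal numerical sketch (a toy integrator written for this review, not part of the general theory), consider the flat plane with the metric written in polar coordinates, ${\bf g}={\rm d} r^2+r^2\,{\rm d}\theta^2$, and ${\rm F}=0$; the trajectories must then be the geodesics, that is, the straight lines of the plane:

```python
import math

def newton_rhs(state):
    """First-order form of Newton's equation with F = 0 for the flat
    metric g = dr^2 + r^2 dtheta^2 in polar coordinates; the nonzero
    Christoffel symbols are Gamma^r_{th th} = -r, Gamma^th_{r th} = 1/r."""
    r, th, dr, dth = state
    return (dr, dth, r * dth ** 2, -2.0 * dr * dth / r)

def rk4(state, t_end, n=2000):
    """Classical fourth-order Runge-Kutta time stepping."""
    h = t_end / n
    for _ in range(n):
        k1 = newton_rhs(state)
        k2 = newton_rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
        k3 = newton_rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
        k4 = newton_rhs(tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + h / 6.0 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

# Initial data of the straight line x = 1, y = t in polar coordinates:
# r = 1, theta = 0, dr/dt = 0, dtheta/dt = 1.
r, th, _, _ = rk4((1.0, 0.0, 0.0, 1.0), t_end=1.0)
x, y = r * math.cos(th), r * math.sin(th)
print(x, y)  # close to (1.0, 1.0), the point of the line at t = 1
```

Since ${\rm F}=0$, the computed trajectory stays on the straight line, consistent with the Inertia Law.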
\bigskip \noindent{\bf Comment}: If the above definitions and equation are the Riemannian image of Newtonian mechanics, they must contain in some sense the three Newton laws stated in his ``{\sl Principia Mathematica}", see \cite{Newton}, and so they do: \begin{enumerate} \item Equation (\ref{Neweq}) is no more than the classical $ma=F$, that is, ``mass times acceleration equals force", written as $$ m\frac{{\rm d}}{{\rm d} t}v=F\, , $$ where the constant $m$ is contained in the covariant derivative, that is, in the metric, which is usually called, as we will see in the sequel, the {\bf kinetic energy metric}. \item Note that if ${\rm F}=0$, then the dynamical trajectories are the geodesic curves of the metric ${\bf g}$. This corresponds to the first Newton law ({\sl Inertia Law}\/). This is what, at present, is stated as the existence of \textbf{inertial systems} of coordinates: those coordinate systems where the Newton laws are fulfilled. In our differential-geometric formulation no preferred coordinates are used, and the Inertia Law is a consequence of the dynamical equation. \item With respect to the \textsl{third law}, or action and reaction law, there are long and complicated discussions about its range of application. For detailed comments see \cite{SPIVAK-2004,Spivak2010} and references therein. It seems that its appropriate domain is the search of models for different types of forces between bodies in contact, or to state the relation between one system and the universe surrounding it. As our systems are studied as isolated ones, this law has no significant meaning in this geometric approach, where every system is described and studied by its own structure given by the three defining elements $(Q,{\bf g},\omega)$ without relation to other external elements.
\end{enumerate} \bigskip If $(U,\varphi=(x^i))$ is a local chart in $Q$, and $\{\Gamma^k_{ij}\}$ are the corresponding Christoffel symbols of the Levi--Civita connection $\nabla$, then the dynamical equation is locally given by $$ \ddot\gamma^k+\Gamma_{ij}^k\dot\gamma^i\dot\gamma^j={\rm F}^k\circ\gamma \ . $$ The relation in coordinates between $\omega$ and ${\rm F}$ is as follows: if $\omega=\omega_i{\rm d} x^i$ and \(\displaystyle {\rm F}={\rm F}^i\derpar{}{x^i}\), then we have that $$ \omega_i=g_{ij}{\rm F}^j \quad , \quad {\rm F}^i=g^{ij}\omega_j \ , $$ where $g^{ij}$ are the components of the inverse matrix of ${\bf g}$ in this local chart (the ``inverse metric''). It is well known that this relation between $\omega$ and ${\rm F}$ is a particular case of the so-called musical diffeomorphisms associated to the Riemannian metric ${\bf g}$, defined by: \begin{eqnarray*} \flat:{\rm T} Q\to{\rm T}^*Q & &\qquad (q,v)\mapsto (q,\mathop{i}\nolimits(v){\bf g}) \\ \vspace{2mm}\sharp:{\rm T}^* Q\to{\rm T} Q& &\qquad (q,\omega)\mapsto (q,\mathop{i}\nolimits(\omega){\bf g}^{-1})\, , \end{eqnarray*} and using these mappings we can write the dynamical equation in dual form: if $\gamma$ is a trajectory of the system, then $\mathop{i}\nolimits(\dot\gamma){\bf g}$ is called the \textbf{linear momentum} and, as $\nabla$ is the Levi--Civita connection, we have that $$ \nabla_{\dot{\gamma}}(\mathop{i}\nolimits(\dot\gamma){\bf g}) =\mathop{i}\nolimits(\nabla_{\dot{\gamma}}\dot\gamma){\bf g}=\mathop{i}\nolimits({\rm F}\circ\gamma){\bf g}=\mathop{i}\nolimits({\rm F}){\bf g} \circ\gamma=\omega\circ\gamma\, , $$ that is: the dynamical equation (\ref{Neweq}) is equivalent to the dual form: \begin{equation}\label{newtondual} \nabla_{\dot{\gamma}}(\mathop{i}\nolimits(\dot\gamma){\bf g}) =\omega\circ\gamma\,.
\end{equation} Then, since ${\rm F}=0$ implies $\omega=0$, the linear momentum $\mathop{i}\nolimits(\dot\gamma){\bf g}$ is conserved along the motion, which is the dual statement of the Inertia Law. \subsection{Euler-Lagrange equations} Given a Riemannian manifold $(Q,{\bf g})$, we can associate to it a natural function defined on the tangent bundle, the so-called \textbf{kinetic energy}, defined by: $$ \begin{array}{ccccc} K & \colon & {\rm T} Q & \longrightarrow & \mathbb{R} \\ & & (q,v) &\mapsto & \frac{1}{2}{\bf g}(v,v)\, , \end{array} $$ whose local expression is $$ K(q^i,v^j)=\frac{1}{2}g_{ij}(q)v^iv^j\, . $$ For a Newtonian system $(Q,{\bf g}, {\rm F})$, with $\omega=\mathop{i}\nolimits({\rm F}){\bf g}$, and using the kinetic energy $K$, we can transform the Newton equation into a new form which is easier to state when we know the elements defining the system. \begin{teor}: Let $(Q,{\bf g},\omega)$ be a Newtonian mechanical system, and $\gamma\colon [a,b]\subset\mathbb{R}\to Q$ a smooth curve contained in the domain $U\subset Q$ of a chart $(U,\varphi=(q^i))$ of $Q$. Then $\gamma$ is a solution to the dynamical equation (\ref{Neweq}) if, and only if, it satisfies the equations \begin{equation} \frac{d}{d t}\left(\derpar{K}{v^j}\circ\dot\gamma\right)- \derpar{K}{q^j}\circ\dot\gamma= (\omega\circ\gamma)\left(\derpar{}{q^j}\right)=\omega_j \circ\gamma =g_{ij}F^i \circ\gamma\ , \label{eel} \end{equation} for every $j$, which are called the \textbf{Euler-Lagrange equations of the second kind} of the system. \end{teor} These equations are usually written as: $$ \frac{d}{d t}\left(\derpar{K}{v^j}\right)- \derpar{K}{q^j}=\omega_j =g_{ij}F^i \, . $$ The proof is a direct calculation on the function $K$ in local coordinates, using the local expression of the Christoffel symbols of the Levi--Civita connection: $$ [kl,j]=g_{ij}\Gamma^i_{kl}= \frac{1}{2}\left(\derpar{g_{jk}}{q^l}+\derpar{g_{jl}}{q^k}-\derpar{g_{lk}}{q^j}\right) \ .
$$ Why have we changed the intrinsic dynamical equation (\ref{Neweq}) into this expression in local coordinates? These equations were obtained by Lagrange in 1788, published in \cite{Lagrange}, and are related to the calculus of variations. They are easier to obtain for a particular system than the dynamical equation, because one does not need to know the Christoffel symbols of the connection. In fact, they have the same ``formal" theoretical expression in every coordinate system; that is, one applies the same rule to obtain them independently of the coordinates used, called ``generalised coordinates" by Lagrange. \bigskip To finish this section, a comment on other kinds of forces. In this initial section we have preferred to restrict ourselves to the case of simple forces, depending only on the position coordinates, but it is usual that the mechanical forces, or the work forms, depend not only on the position coordinates but also on the time and the velocities. For the case of time-dependent forces, see Section 5, where a short introduction with references is given. Dependence on the velocities is the case of dissipative systems or of electromagnetic (Lorentz) forces. Geometrically this means that $\omega\in{\mit\Omega}^1(Q,\tau_Q)$ and ${\rm F}\in\mbox{\fr X} (Q,\tau_Q)$, that is, they are forms, or vector fields, along the natural projection of the corresponding phase space onto the configuration manifold. In this case the only change we need to make in the dynamical equations is to write $\omega\circ\dot{\gamma}$, or ${\rm F}\circ\dot{\gamma}$, instead of $\omega\circ\gamma$ or ${\rm F}\circ\gamma$, respectively. For a detailed study of general forces in mechanics from a geometric standpoint, see \cite{God-69, Leon-2021}. \subsection{Conservative systems} There is a special kind of Newtonian systems: those of conservative, or mechanical Lagrangian, type.
A Newtonian system $(Q,{\bf g},\omega)$ is \textbf{conservative} if the work form is exact, that is, there exists $V\in{\rm C}^\infty (Q)$ such that $\omega =-{\rm d} V$. The negative sign is a customary tradition in Physics. In this case, the function $V$ is called the \textbf{potential energy} of the system. Observe that in this case the force vector field is ${\rm F}=-{\rm grad}\ V$. For these systems $(Q,{\bf g},\omega=-{\rm d} V)$, the \textbf{total energy} or \textbf{mechanical energy} of the system is the function $E\in{\rm C}^\infty({\rm T} Q)$ defined as $$ \begin{array}{ccccc} E & \colon & {\rm T} Q & \longrightarrow & \mathbb{R} \\ & & (q,v) &\mapsto & K(q,v)+(\tau^*_QV)(q,v)\,. \end{array} $$ We usually write $E=K+V$. As a direct consequence of the definition we have: \begin{teor} \textbf{(Mechanical energy conservation)}: Let $(Q,{\bf g},\omega=-{\rm d} V)$ be a conservative Newtonian mechanical system; then the mechanical energy $E$ is invariant, that is, constant along the trajectories of the system. \end{teor} ({\sl Proof\/})\quad If $\gamma\colon [a,b]\subset\mathbb{R}\to Q$ is a solution to the Newton equation, then \begin{equation} \nabla_{\dot\gamma}\dot\gamma={\rm F}\circ\gamma \quad , \quad \mathop{i}\nolimits({\rm F}){\bf g}= \omega=-{\rm d} V \ , \label{eqdinsis} \end{equation} and hence \begin{eqnarray*} \frac{d (E\circ\dot\gamma)}{d t} &=& \nabla_{\dot\gamma}(E\circ\dot\gamma)= \nabla_{\dot\gamma}\left(\frac{1}{2}{\bf g}(\dot\gamma,\dot\gamma)+V\circ\gamma\right) \\ \vspace{2mm}&=&{\bf g}(\nabla_{\dot\gamma}\dot\gamma,\dot\gamma)+{\rm d} V(\dot\gamma)= {\bf g}({\rm F}\circ\gamma,\dot\gamma)+{\rm d} V(\dot\gamma)= \omega(\dot\gamma)+{\rm d} V(\dot\gamma)=0 \ . \end{eqnarray*} \qed In this case, the Lagrange equations have a simpler expression. Consider the function ${\cal L}=K-\tau_Q^*V\in\mathcal{C}^\infty({\rm T} Q)$, called the {\bf Lagrangian} of the system.
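Energy conservation (and, for a central force, conservation of the momentum $r^2\dot\theta$ dual to the cyclic coordinate $\theta$) can be checked numerically. A minimal sketch for a conservative system in polar coordinates, with $K=\frac{1}{2}(\dot r^2+r^2\dot\theta^2)$ and the harmonic potential $V(r)=\frac{1}{2}kr^2$; the potential, the integrator, and all numerical values are our own illustrative choices:

```python
# Euler-Lagrange equations: rddot = r*thdot^2 - V'(r),  d/dt(r^2 thdot) = 0.
k = 1.0

def rhs(y):
    r, th, vr, vth = y
    return [vr, vth, r * vth**2 - k * r, -2.0 * vr * vth / r]

def rk4_step(y, h):
    def add(a, b, c): return [ai + c * bi for ai, bi in zip(a, b)]
    k1 = rhs(y); k2 = rhs(add(y, k1, h/2))
    k3 = rhs(add(y, k2, h/2)); k4 = rhs(add(y, k3, h))
    return [yi + h/6 * (a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def energy(y):                     # E = K + V
    r, th, vr, vth = y
    return 0.5 * (vr**2 + r**2 * vth**2) + 0.5 * k * r**2

y = [1.0, 0.0, 0.2, 0.8]           # initial state (r, theta, rdot, thdot)
E0, L0 = energy(y), y[0]**2 * y[3]
for _ in range(2000):
    y = rk4_step(y, 0.005)
E1, L1 = energy(y), y[0]**2 * y[3]
# Both E and the momentum r^2*thdot stay constant along the trajectory.
```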
The Euler--Lagrange equations for $(Q,{\bf g},\omega=-{\rm d} V)$ are $$ \frac{d}{d t}\left(\derpar{K}{v^j}\circ\dot\gamma\right)- \derpar{K}{q^j}\circ\dot\gamma=\omega_j\circ\gamma= (-{\rm d} V\circ\gamma)\left(\derpar{}{q^j}\right)= -\derpar{(\tau_Q^*V)}{q^j}\circ\gamma \ , $$ which we can write as \begin{equation} \frac{d}{d t}\left(\derpar{{\cal L}}{v^j}\circ\dot\gamma\right)- \derpar{{\cal L}}{q^j}\circ\dot\gamma=0 \ , \label{eel1e} \end{equation} recalling that \(\displaystyle\derpar{(\tau_Q^*V)}{v^j}=0\). From now on we write ${\cal L}=K-V$ for simplicity. The local expression of ${\cal L}$ is $$ {\cal L} (q,v)=(K-V)(q,v)= \frac{1}{2}g_{ij}(q)v^iv^j-V(q)\, . $$ These conservative systems $(Q,{\bf g},\omega=-{\rm d} V)$ are called {\bf simple mechanical systems} and the associated Lagrangians {\bf natural Lagrangians}. \section{Holonomic constrained systems. D'Alembert principle} \protect\label{sdnlh} A relevant topic in classical mechanics is the study of systems with holonomic or nonholonomic constraints. A {\bf constraint} is a restriction on the motion of the system. This restriction may act on the configuration space: the system is obliged to move in a particular subset of this manifold. Or it may restrict the possible velocities with which the system can move. From our geometric approach, the first kind, called {\bf holonomic constraints}, are defined by a submanifold of the configuration space where the system must remain. The other kind, the {\bf nonholonomic constraints}, are defined by a submanifold of the phase space of coordinates--velocities which still allows the system to reach all the positions in the configuration manifold. To write the equations of motion we need to add to our postulates some new idea related to the submanifold of constraints. There are several approaches to tackle this problem. The oldest and best established is {\bf d'Alembert principle}, which in geometric terms is especially clarifying regarding the motion of these kinds of systems.
For a historical approach to constrained systems, in particular nonholonomic ones, see \cite{Leon-2012} and the extended bibliography contained there. As d'Alembert principle is not the only one used to obtain the equations of motion, see the previous reference for other known principles and the relations among them. For a nice discussion of d'Alembert principle in different situations from a geometric viewpoint, see \cite{MARLE-98}. \subsection{Holonomic constraints. Holonomic d'Alembert principle} Let $(Q,{\bf g},\omega)$ be a Newtonian mechanical system and ${\rm F}\in\mbox{\fr X} (Q)$ its force field. Let ${\bf S}$ be a submanifold of $Q$, usually called the {\bf submanifold of holonomic constraints}, and $j\colon {\bf S}\hookrightarrow Q$ the natural embedding. We intend to describe the dynamics of the given system when it is obliged to evolve in the submanifold ${\bf S}$ of the configuration space, hence with some restriction on the positions the system can reach. To force this behaviour, it is necessary to apply a new force field ${\rm R}$, called the {\bf constraint force}, which obliges the system to remain in ${\bf S}$. In general, such a force depends not only on the position but also on the velocity; then ${\rm R}\in\mbox{\fr X} (Q,\tau_Q)$ and, moreover, we do not know it; in fact, it is a new unknown to be found. Then we have a new dynamical equation for curves $\gamma\colon [a,b]\subset\mathbb{R}\to {\bf S}$, which is \begin{equation} \nabla_{\dot\gamma}\dot\gamma ={\rm F}\circ\gamma+{\rm R}\circ\dot\gamma \ . \label{eqdinS} \end{equation} To solve this problem, we introduce the so-called \textbf{d'Alembert principle}: The constraint force ${\rm R}$ is orthogonal to the submanifold ${\bf S}$; that is, for every $q\in {\bf S}$ and for every $ u,v\in{\rm T}_q{\bf S}$, we have ${\bf g}(u,{\rm R}(q,v))=0$, supposing that ${\rm R}$ depends on the velocities. That is, we impose that the constraint force is orthogonal to the constraint submanifold.
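The principle can be tested numerically on the simplest holonomic example, a pendulum: the unit circle ${\bf S}$ in flat $\mathbb{R}^2$ with constant force ${\rm F}=(0,-g_0)$. The parametrization $q(\theta)=(\sin\theta,-\cos\theta)$, the value of $g_0$, and the integrator are our own illustrative choices. Along a simulated trajectory, the constraint force recovered from the ambient equation, ${\rm R}=\ddot q-{\rm F}$, is orthogonal to the circle:

```python
import math

g0 = 9.8                           # an assumed gravity constant

def step(th, om, h):
    # Constrained Newton equation on the circle S (unit radius, unit mass):
    # theta_ddot = F . T(theta) = -g0 sin(theta), integrated with RK4.
    def f(s): return (s[1], -g0 * math.sin(s[0]))
    k1 = f((th, om)); k2 = f((th + h/2*k1[0], om + h/2*k1[1]))
    k3 = f((th + h/2*k2[0], om + h/2*k2[1])); k4 = f((th + h*k3[0], om + h*k3[1]))
    return (th + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            om + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

th, om = 0.6, 0.0
for _ in range(1000):
    th, om = step(th, om, 0.002)

# Ambient data on the trajectory: q(th) = (sin th, -cos th),
# tangent T = (cos th, sin th), acceleration qddot = thddot*T - om^2*q.
T = (math.cos(th), math.sin(th))
q = (math.sin(th), -math.cos(th))
thdd = -g0 * math.sin(th)
qdd = (thdd * T[0] - om**2 * q[0], thdd * T[1] - om**2 * q[1])
# Constraint force along the trajectory: R = qddot - F, with F = (0, -g0).
R = (qdd[0], qdd[1] + g0)
tangential = R[0] * T[0] + R[1] * T[1]   # d'Alembert: R is orthogonal to S
```

Here `tangential` vanishes up to rounding, while the normal component of ${\rm R}$ is the familiar string tension.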
How can we obtain the equation of motion and the expression of the constraint force? If $g_{\bf S}=j^*g$, let $\nabla^{\bf S}$ be the Levi-Civita connection of the Riemannian manifold $({\bf S},g_{\bf S})$. Then, we have the following natural geometric elements and consequences: \begin{description} \item[a)] For every $q\in {\bf S}$, the orthogonal decomposition ${\rm T}_qQ={\rm T}_q{\bf S}\oplus ({\rm T}_q{\bf S})^\perp $ and the orthogonal projections, \vspace{-2mm} $$ \pi_{\bf S}(q)\colon {\rm T}_qQ\rightarrow{\rm T}_q{\bf S} \quad , \quad \pi^\perp_{\bf S}(q)\colon {\rm T}_qQ\rightarrow ({\rm T}_q{\bf S})^\perp \ , $$ \item[b)] The global orthogonal projections \vspace{-2mm} $$ \pi_{\bf S}\colon {\rm T} Q\vert_ {\bf S}\rightarrow{\rm T} {\bf S} \quad , \quad \pi^\perp_{\bf S}\colon {\rm T} Q\vert_ {\bf S}\rightarrow {\rm T} {\bf S}^\perp \ . $$ Thus d'Alembert principle reduces to $\pi_{\bf S}\circ{\rm R}=0$. \item[c)] The Levi--Civita connection on the submanifold ${\bf S}$ satisfies $\nabla^ {\bf S}=\pi_ {\bf S}\circ\nabla$. This can be directly proved because $\pi_ {\bf S}\circ\nabla$ is a $g_{\bf S}$--Riemannian symmetric connection. \end{description} Now, taking the dynamical equation (\ref{eqdinS}) and splitting it up into the tangent and orthogonal components with respect to ${\bf S}$, we obtain respectively \begin{eqnarray} \pi_ {\bf S}(\nabla_{\dot\gamma}\dot\gamma) &=& \pi_ {\bf S}\circ{\rm F}\circ\gamma+\pi_ {\bf S}\circ{\rm R}\circ\dot\gamma = \pi_ {\bf S}\circ{\rm F}\circ\gamma \ , \label{eqdinsplit1} \\ \pi_ {\bf S}^\perp(\nabla_{\dot\gamma}\dot\gamma) &=& \pi_ {\bf S}^\perp\circ{\rm F}\circ\gamma+\pi_ {\bf S}^\perp\circ{\rm R}\circ\dot\gamma = \pi_ {\bf S}^\perp\circ{\rm F}\circ\gamma +{\rm R}\circ\dot\gamma \ .
\label{eqdinsplit2} \end{eqnarray} and, denoting by ${\rm F}^{\bf S}=\pi_ {\bf S}\circ{\rm F}\in\mbox{\fr X} ({\bf S})$ the projection of ${\rm F}$ on ${\bf S}$, equation \eqref{eqdinsplit1} is simply \begin{equation} \nabla^{\bf S}_{\dot\gamma}\dot\gamma = {\rm F}^{\bf S}\circ\gamma \ ; \label{eqdinresS} \end{equation} that is: the dynamical equation of the Newtonian mechanical system $({\bf S},g_{\bf S},\omega_{\bf S})$, where $\omega_{\bf S}=\mathop{i}\nolimits({\rm F}^{\bf S})g_{\bf S}$. Observe that solutions to equation (\ref{eqdinresS}) are curves $\gamma\colon [a,b]\subset\mathbb{R}\to {\bf S}$ such that, introducing each of them into equation (\ref{eqdinsplit2}), we can calculate the constraint force ${\rm R}$ for that trajectory $\gamma$, obtaining \begin{equation}\label{constraintR} \nabla_{\dot\gamma}\dot\gamma -\nabla^{\bf S}_{\dot\gamma}\dot\gamma= {\rm F}\circ\gamma-{\rm F}^{\bf S}\circ\gamma +{\rm R}\circ\dot\gamma \ . \end{equation} Then we know ${\rm R}\circ\dot\gamma\in\mbox{\fr X} (Q,\dot\gamma)$. Notice that we can calculate the constraint force only along every trajectory of the system, but {\bf not} as a vector field depending on the velocities: we need to calculate a trajectory using equation (\ref{eqdinresS}), and then we can obtain the constraint force on this trajectory by means of equation (\ref{constraintR}), where the only unknown is ${\rm R}\circ\dot\gamma$. \bigskip As in the case of equation (\ref{newtondual}), we can dualise equation (\ref{eqdinsplit1}) and obtain $$ \nabla^{\bf S}_{\dot\gamma}(\mathop{i}\nolimits(\dot\gamma){\bf g}_{\bf S})=\omega_{\bf S}\circ\gamma \ , $$ where $\omega_{\bf S}=j^*\omega$. In fact, we can state the {\bf dual d'Alembert principle} as follows: If $\rho=\mathop{i}\nolimits({\rm R})g$, then $j^*\rho =0$. And by means of the dual of the above orthogonal projections in the cotangent bundle, we can obtain the value of $\rho$ on every trajectory of the system, and hence of ${\rm R}$.
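The orthogonal splitting of the force into $\pi_{\bf S}({\rm F})$ and $\pi^\perp_{\bf S}({\rm F})$ can be sketched numerically for a hypersurface ${\bf S}=\varphi^{-1}(0)$ in flat $\mathbb{R}^3$; the sphere $\varphi(q)=|q|^2-1$ and the sample force vector are our own illustrative choices:

```python
# Orthogonal splitting of the force with respect to the constraint
# hypersurface S = {phi = 0} in flat R^3, phi(q) = |q|^2 - 1.
def grad_phi(q):
    return [2.0 * qi for qi in q]          # X with i(X)g = d(phi), g flat

def split_force(F, q):
    X = grad_phi(q)
    dphiF = sum(x * f for x, f in zip(X, F))      # d(phi)(F)
    norm2 = sum(x * x for x in X)                 # |d(phi)|^2
    Fperp = [dphiF / norm2 * x for x in X]        # normal part of F
    FS = [f - fp for f, fp in zip(F, Fperp)]      # F^S = pi_S(F), tangent part
    return FS, Fperp

q = [0.0, 0.0, 1.0]                # a point of S
F = [1.0, 2.0, -3.0]
FS, Fperp = split_force(F, q)
tangency = sum(x * f for x, f in zip(grad_phi(q), FS))   # should vanish
print(FS, Fperp)                   # [1.0, 2.0, 0.0] and [0.0, 0.0, -3.0]
```

Only `FS` enters the constrained dynamical equation; `Fperp` contributes to the constraint force along each trajectory.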
This dual form of d'Alembert principle is also called the ``Principle of virtual work": the integral of the work form $\rho$ along any piece of a possible trajectory $\gamma$ of the system is zero. \bigskip {\bf Examples:} \begin{enumerate} \item {\bf Systems with one constraint}: Let ${\bf S}=\{ q\in Q \ ;\ \varphi (q)=0\}$, with $\varphi\in{\rm C}^\infty (Q)$, and suppose that ${\rm d}\varphi(q)\neq 0$, for every $q\in {\bf S}$; hence ${\bf S}$ is a hypersurface of $Q$. Let $X\in\mbox{\fr X} (Q)$ be such that $\mathop{i}\nolimits (X){\bf g}={\rm d}\varphi$; then $X$ is orthogonal to $ {\bf S}$. In this case we have $$ \pi^\perp_ {\bf S}({\rm F})=\frac{{\bf g}({\rm F},X)}{{\bf g}(X,X)}X= \frac{{\rm d}\varphi ({\rm F})}{\| {\rm d}\varphi\|^2}X \ ,\qquad {\rm F}^ {\bf S}=\pi_ {\bf S}({\rm F})={\rm F}-\frac{{\rm d}\varphi ({\rm F})}{\| {\rm d}\varphi\|^2}X \ . $$ This allows us to find the trajectories of the system as solutions to the differential equation $$ \nabla^ {\bf S}_{\dot\gamma}\dot\gamma= \left({\rm F}-\frac{{\rm d}\varphi ({\rm F})}{\| {\rm d}\varphi\|^2}X\right)\circ\gamma \ . $$ Once we have a solution trajectory $\gamma$, the constraint force is given by the equation $$ \nabla_{\dot\gamma}\dot\gamma-\nabla^ {\bf S}_{\dot\gamma}\dot\gamma= \left(\frac{{\rm d}\varphi ({\rm F})}{\| {\rm d}\varphi\|^2}X\right)\circ\gamma+{\rm R}\circ\dot\gamma \ , $$ where the only unknown is ${\rm R}\circ\dot\gamma$. These expressions are related to the second fundamental form of the hypersurface ${\bf S}$. \item {\bf Systems with several constraints}: Consider now $ {\bf S}=\{ q\in Q \ ;\ \varphi_1(q)=0,\ldots ,\varphi_h(q)=0\}$, with $\varphi_1\ldots ,\varphi_h\in{\rm C}^\infty (Q)$, such that ${\rm d}\varphi_1(q),\ldots{\rm d}\varphi_h(q)$ are linearly independent at every point $q\in {\bf S}$ (we assume that $ {\bf S}$ is not empty).
Let $\moment{Z}{1}{n-h}\in\mbox{\fr X} (Q)$ be such that: \begin{description} \item[i)] $\mathop{i}\nolimits(Z_a){\rm d}\varphi_{\beta}=0$, for $1\leq \beta\leq h,\,\,1\leq a\leq n-h$, \item[ii)]$ {\bf g}(Z_a, Z_b)=0$, for $a\not= b$, $1\leq a,b\leq n-h$. \end{description} To obtain these vector fields $Z_a$, it is enough to take vector fields $\moment{X}{1}{n-h}\in\mbox{\fr X} (Q)$ satisfying the first condition (a linear equation) and apply the well-known {\sl Gram--Schmidt method}. In this situation, we have that $$ \pi_ {\bf S}({\rm F})=\sum_{a=1}^{n-h}\frac{{\bf g}({\rm F},Z_a)}{{\bf g}(Z_a,Z_a)}Z_a \ , $$ hence, as in the previous case, we obtain the dynamical equation and the expression of the constraint force along every solution trajectory. \end{enumerate} The above examples can be taken as local coordinate expressions for a general submanifold of the configuration manifold. \subsection{Euler-Lagrange equations for holonomic constraints} We have shown that for a Newtonian mechanical system $(Q,{\bf g},\omega)$ constrained to move on the submanifold $j\colon {\bf S}\hookrightarrow Q$, the dynamics is given by the Newtonian mechanical system $( {\bf S},g_ {\bf S},\omega_ {\bf S})$. To write the corresponding Euler-Lagrange equations of this last system, take a local chart $(U,q^i)$ in $ {\bf S}$ and the corresponding natural lifting $(\tau_Q^{-1}(U),q^i,v^i)$ to ${\rm T} {\bf S}$. Then we have \begin{equation} \frac{d}{d t}\left(\derpar{K_ {\bf S}}{v^k}\circ\dot\gamma\right)- \derpar{K_ {\bf S}}{q^k}\circ\dot\gamma= (g_ {\bf S})_{ik}({\rm F}^{\bf S})^i\circ\gamma \ , \label{equno} \end{equation} where $K_ {\bf S}\in{\rm C}^\infty({\rm T} {\bf S})$ is the {\sl kinetic energy} of the system $( {\bf S},g_ {\bf S},\omega_ {\bf S})$. It is easy to show that $K_ {\bf S}=({\rm T} j)^*K$.
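The identity $K_{\bf S}=({\rm T} j)^*K$ can be verified numerically for the circle $j(\theta)=(\sin\theta,-\cos\theta)$ in flat $\mathbb{R}^2$, whose induced metric is $g_{\bf S}={\rm d}\theta^2$; this chart and the sample points are our own illustrative choices:

```python
import math

# Pullback of the ambient kinetic energy by the tangent map of the inclusion:
# T j (theta, v) = (sin th, -cos th, v*cos th, v*sin th).
def K_ambient(vx, vy):
    return 0.5 * (vx**2 + vy**2)

def K_S(th, v):
    return 0.5 * v**2              # induced metric g_S = d theta^2 (unit circle)

checks = []
for th, v in [(0.3, 1.7), (2.1, -0.4), (5.0, 2.2)]:
    vx, vy = v * math.cos(th), v * math.sin(th)    # velocity part of T j
    checks.append(abs(K_ambient(vx, vy) - K_S(th, v)))
print(max(checks))                 # the pullback identity holds pointwise
```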
If the dynamical system is conservative, that is, $\omega=-{\rm d} V$, then $$ \omega_ {\bf S}=j^*\omega=-j^*{\rm d} V=-{\rm d} j^*V \ , $$ and the above equation takes the expression $$ \frac{d}{d t}\left(\derpar{{\cal L}_ {\bf S}}{v^j}\circ\dot\gamma\right)- \derpar{{\cal L}_ {\bf S}}{q^j}\circ\dot\gamma=0 \ , $$ where ${\cal L}_ {\bf S}:=({\rm T} j)^*{\cal L}=({\rm T} j)^*(K-V)$. Notice that the constraint force does {\bf not} appear in these equations. This was one of the innovations developed by Lagrange: to obtain the dynamical equations without mentioning the constraint force. In fact, if $(W,x^i)$ is a local chart in $Q$, and $(U,q^j)$ is another in ${\bf S}$, both adapted to the inclusion map $j\colon U\hookrightarrow W$, hence $U={\bf S}\cap W$, then the inclusion is given by the local expression $x^i=f^i(q)$, and hence \(\displaystyle \dot x^i=\derpar{f^i}{q^j}\dot q^j\). This shows that it is enough to know the Lagrangian function ${\cal L}$ of the unconstrained system: introducing these last expressions of $x^i,\dot x^i$ into the Euler-Lagrange equations of the unconstrained system and deriving directly, we obtain the Euler-Lagrange equations of the constrained system in the local coordinates $(q^i,\dot{q}^i)$ of ${\rm T}{\bf S}$, the real phase space of the system. See \cite{Ar-89} for interesting comments on this topic. \subsection{Product systems} Suppose we have a family of Newtonian systems, $(Q_\mu,{\bf g}_\mu, {\rm F}_\mu)$, $\mu=1,\ldots, N$. If the force fields ${\rm F}_\mu$ depend only on the corresponding configuration manifold, that is, ${\rm F}_\mu\in \mbox{\fr X}(Q_\mu)$, then we have a family of systems of differential equations for a curve $\gamma=(\gamma_1,\ldots,\gamma_N)$, decomposed into $N$ uncoupled equations, one for every $\gamma_\mu$. But if the force fields depend on the product manifold, ${\rm F}_\mu(q_1,\ldots,q_N)$ instead of ${\rm F}_\mu(q_\mu)$, then we have a coupled family of differential equations.
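A minimal numerical sketch of such a coupled product system: two particles on the line, $Q=\mathbb{R}\times\mathbb{R}$ with block metric $m_1\,{\rm d} q_1^2\oplus m_2\,{\rm d} q_2^2$ and mutual forces ${\rm F}_1=-k(q_1-q_2)$, ${\rm F}_2=-k(q_2-q_1)$ depending on both positions. The masses, coupling constant, and integrator are our own illustrative choices; since the internal forces cancel, the total momentum is conserved:

```python
# Two particles in interaction on the product configuration space R x R.
m1, m2, kc = 1.0, 3.0, 2.0

def accel(q1, q2):
    # Coupled force fields: each F_mu depends on BOTH positions.
    return (-kc * (q1 - q2) / m1, -kc * (q2 - q1) / m2)

q1, q2, v1, v2 = 0.0, 1.0, 0.5, -0.2
h = 0.001
P0 = m1 * v1 + m2 * v2
for _ in range(5000):
    a1, a2 = accel(q1, q2)
    v1 += h * a1; v2 += h * a2          # symplectic Euler step
    q1 += h * v1; q2 += h * v2
P1 = m1 * v1 + m2 * v2
# Internal coupling forces cancel, so m1 v1 + m2 v2 stays constant.
```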
In some places these are called systems in interaction. We can represent the situation as another Newtonian system $(Q,{\bf g},\omega)$ with the following elements as configuration manifold, Riemannian metric, work form and force field, respectively: $$ Q=\prod_{\mu=1}^NQ_\mu , \qquad {\bf g}=\oplus_{\mu=1}^N{\bf g}_\mu\, ,\qquad\omega=(\omega_1,\ldots ,\omega_N)\,,\qquad {\rm F}=({\rm F}_1,\ldots ,{\rm F}_N) , $$ where in fact $\omega_\mu\in\Omega^1(Q_\mu, \pi_\mu)$, ${\rm F}_\mu\in\mbox{\fr X} (Q_\mu,\pi_\mu)$, $\pi_\mu\colon \prod_{\nu=1}^N Q_\nu\to Q_\mu$ being the natural projections. As we have a new Newtonian system, we can consider the case of a constrained one, that is, a holonomically constrained system: the system is obliged to move in a submanifold ${\bf S}\subset Q$. The constraint force is also decomposed as ${\rm R}=({\rm R}_1,\ldots,{\rm R}_N)$, where ${\rm R}_\mu\in \mbox{\fr X} (Q_\mu,\pi_\mu)$, $\mu=1,\ldots,N$. The dynamical equation has the same form: $$ \nabla_{\dot\gamma}\dot\gamma={\rm F}\circ\gamma +{\rm R}\circ\dot\gamma \ , $$ for curves $\gamma\colon [a,b]\subset\mathbb{R}\to {\bf S}$, $\gamma=(\gamma_1,\ldots,\gamma_N)$, $\gamma_\mu\colon [a,b]\subset\mathbb{R}\to Q_\mu$, taking the corresponding constraint submanifold in the case of holonomic constraints. \section{Nonholonomic constraints. Nonholonomic d'Alembert principle} \protect\label{slnh} As far as we know, the description of these kinds of systems with this Riemannian approach is not contained in the literature; hence we develop them in detail. For a classical approach see, for example, \cite{GPS-01,Som-1952}. Other geometric approaches can be seen in \cite{CLMM-2002, GMM-2003,LEWIS1998,Lewis2020}. For an extended bibliography on this topic, for both classical and geometric approaches, see \cite{Leon-2017}. In \cite{Koiller-2019}, there is a dual standpoint using Cartan equivalence that can be directly written in our Riemannian approach.
\subsection{Nonholonomic constrained systems} Let $(Q,{\bf g},\omega)$ be a Newtonian mechanical system and ${\rm F}\in\mbox{\fr X} (Q)$ its force field. Let $C$ be a submanifold of ${\rm T} Q$, $j_C\colon C\hookrightarrow {\rm T} Q$ the natural embedding, and suppose that $\tau_Q(C)=Q$. In this situation $C$ is called a {\bf submanifold of nonholonomic constraints}. We want to describe the dynamics of the system when it is constrained to evolve in the submanifold $C$ of the phase space. The constrained system is given by $(Q,{\bf g},F,C)$. Notice that the system is not restricted in the configuration manifold, that is, in the positions, but in the possible velocities with which it can move. As in the holonomic case, to solve this problem we suppose that there exists a {\bf constraint force} ${\rm R}$, usually depending on the velocities, that is, ${\rm R}\in\mbox{\fr X} (Q,\tau_Q)$, which forces the system to move in $C$. This constraint force is unknown. Then the Newton dynamical equation is given for curves $\gamma\colon [a,b]\subset\mathbb{R}\to Q$ satisfying \begin{enumerate} \item $\dot\gamma (t)\in C$, $ t\in[a,b]$. \item $\nabla_{\dot\gamma}\dot\gamma ={\rm F}\circ\gamma+{\rm R}\circ\dot\gamma$. \end{enumerate} And we need to state conditions allowing us to find the trajectories of the system and calculate ${\rm R}$\footnote{Arnold Sommerfeld says that this force $\mathrm{R}$ is a ``geometric force'' versus ${\rm F}$, which is an ``applied force''. See \cite{Som-1952}.}. In order to state the nonholonomic d'Alembert principle, we need some geometric preliminaries. Let $(q,v)\in C$. We assume, in addition, that the dimension of the subspace of ${\rm V}_{(q,v)}({\rm T} Q)$ which is tangent to $C$ does not depend on the point $(q,v)$. Let $$ {\rm T}_{(q,v)}^VC={\rm V}_{(q,v)}({\rm T} Q)\cap{\rm T}_{(q,v)}C=\{ w\in{\rm V}_{(q,v)}({\rm T} Q)\ ;\ w\in{\rm T}_{(q,v)}C\} $$ be the vertical subspace tangent to $C$.
This is a vector subbundle of ${\rm T}({\rm T} Q)$ and we can write ${\rm T}^VC={\rm V}({\rm T} Q)|_{C}\cap{\rm T} C$ as vector bundles over the manifold $C$. For $(q,v)\in {\rm T} Q$, consider the vertical lifting from the point $q\in Q$ to $(q,v)$ given, as usual, by \begin{align*} \lambda_q^{(q,v)}\colon&{\rm T}_qQ\to{\rm V}_{(q,v)}({\rm T} Q)\\ &\quad u_q\,\mapsto\quad\lambda_q^{(q,v)}(u_{q}):\phi\mapsto\lim_{t\rightarrow 0}\frac{\phi(q,v+tu)-\phi(q,v)}{t}, \end{align*} that is, the directional derivative of $\phi\in{\rm C}^\infty ({\rm T} Q)$ along $u_{q}$ at the point $(q,v)\in{\rm T} Q$. As $\lambda_q^{(q,v)}$ is an isomorphism from ${\rm T}_{q}Q$ to $V_{(q,v)}({\rm T} Q)$, let $({\rm T}_{(q,v)}^VC)_q$ be the inverse image of ${\rm T}_{(q,v)}^VC\subset V_{(q,v)}({\rm T} Q)$ by $\lambda_q^{(q,v)}$. Then ${\rm T}_{q}Q=({\rm T}_{(q,v)}^VC)_q\oplus({\rm T}_{(q,v)}^VC)^\perp_q$, this being an orthogonal decomposition with respect to ${\bf g}$. Now we introduce the \textbf{nonholonomic d'Alembert principle}: The constraint force ${\rm R}\in\mbox{\fr X} (Q,\tau_Q)$ satisfies $$ {\rm R}(q,v)\in({\rm T}^V_{(q,v)}C)^\perp_q \ , $$ that is, ${\bf g}({\rm R}(q,v),w)=0$, for every $w\in ({\rm T}^V_{(q,v)}C)_q$. The constraint force at $(q,v)\in C$ is orthogonal to the subspace $({\rm T}^V_{(q,v)}C)_q\subset{\rm T}_q Q$ of tangent vectors at $q$ whose vertical lifting is tangent to $C$. In the classical physics literature, the elements of $({\rm T}^V_{(q,v)}C)_q$ are called {\bf virtual velocities}. \bigskip \noindent{\bf Comment}: If there are no constraints, that is, $C={\rm T} Q$, then for every $(q,v)\in C$ we have that ${\rm T}_{(q,v)}^VC=V_{(q,v)}({\rm T} Q)$, hence $({\rm T}^V_{(q,v)}C)_q={\rm T}_{q}Q$ and $({\rm T}^V_{(q,v)}C)^\perp_q=\{0\}$, that is, ${\rm R}(q,v)=0$ and there is no constraint force.
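The principle can be sketched numerically on a standard textbook example (our own illustrative choice, not taken from the text): the ``nonholonomic particle'' on flat $\mathbb{R}^3$ with no applied force and constraint $\phi(q,v)=v_z-y\,v_x=0$. The virtual velocities are the vectors $w$ with $-y\,w_x+w_z=0$, so the principle forces ${\rm R}=\lambda(-y,0,1)$, with $\lambda$ obtained by differentiating the constraint along the motion:

```python
# Nonholonomic particle: flat R^3, F = 0, constraint v_z = y * v_x.
# R = lam * (-y, 0, 1), orthogonal (flat metric) to the virtual velocities.
def lam(y, vx, vy):
    # From d/dt(v_z - y v_x) = 0:  lam = vy*vx - y*(-lam*y), hence
    # lam * (1 + y^2) = vy * vx.
    return vy * vx / (1.0 + y * y)

x, y, z = 0.0, 0.2, 0.0
vx, vy, vz = 1.0, 0.3, 0.2 * 1.0       # initial velocity satisfies vz = y*vx
h = 0.001
K0 = 0.5 * (vx**2 + vy**2 + vz**2)
for _ in range(4000):
    l = lam(y, vx, vy)
    vx += h * (-l * y); vz += h * l      # acceleration = R (no applied force)
    x += h * vx; y += h * vy; z += h * vz
K1 = 0.5 * (vx**2 + vy**2 + vz**2)
residual = abs(vz - y * vx)              # the constraint is preserved
# R is g-orthogonal to the virtual velocities, and on C also to the actual
# velocity (R.v = lam * phi = 0), so the kinetic energy is conserved too.
```

The small `residual` and the near-constant kinetic energy are exactly what the principle predicts; the first-order integrator accounts for the small drift.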
\bigskip This principle allows us to obtain the expression of the constraint force and the dynamical equations of the trajectories of the system, as we will see in the next paragraphs. \subsection{Relation between the constraint force and the constraints} In order to do this, we need to characterize the subspace $({\rm T}^V_{(q,v)}C)_q$ in relation to the \textbf{constraints}, defined as the functions vanishing on the submanifold $C$, that is, functions $\phi:{\rm T} Q\to\mathbb{R}$ such that $j_C^*\phi =0$. \medskip First, let $\phi\in{\rm C}^\infty ({\rm T} Q)$, and consider the 1-form ${\rm d}^V\phi\in{\mit\Omega}^1(Q,\tau_Q)$ defined by $$ ({\rm d}^V\phi(q,v))(u)= ({\rm d}\phi\circ\lambda_q^{(q,v)})(u)= {\rm d}\phi (\lambda_q^{(q,v)}(u)) , \quad (q,v)\in{\rm T} Q, \quad u\in{\rm T}_qQ \ ; $$ whose expression in a local natural chart $(q^i,v^i)$ of ${\rm T} Q$ is \(\displaystyle{\rm d}^V\phi=\derpar{\phi}{v^i}{\rm d} q^i\). We have the following result: \begin{prop}\label{verticalvectors} Let $(q,v)\in C$. \begin{enumerate} \item If $w\in{\rm T}_qQ$, then \(\displaystyle w\in({\rm T}_{(q,v)}^VC)_q\) if, and only if, $({\rm d}^V\phi(q,v))(w)=0$ for every $\phi\in{\rm C}^\infty ({\rm T} Q)$ such that $j_C^*\phi =0$, that is, for every constraint. \item Let $(({\rm T}_{(q,v)}^VC)_q)^o=\{\alpha\in{\rm T}^{*}_{q}Q; \alpha(w)=0, \forall w\in({\rm T}_{(q,v)}^VC)_q\}\subset{\rm T}^{*}_{q}Q$ be the annihilator of $({\rm T}_{(q,v)}^VC)_q$; then $ (({\rm T}_{(q,v)}^VC)_q)^o=\{{\rm d}^V\phi(q,v);\forall \phi\in{\rm C}^\infty({\rm T} Q), j_C^*\phi=0\}. $ \item If $w\in{\rm T}_qQ$, then \(\displaystyle w\in({\rm T}_{(q,v)}^VC)^\perp_q\) if, and only if, $\mathop{i}\nolimits(w){\bf g} \in(({\rm T}_{(q,v)}^VC)_q)^o$.
\end{enumerate} \end{prop} ({\sl Proof\/})\quad To prove the first item, let $w\in{\rm T}_qQ$; then we have \begin{eqnarray*} &w\in({\rm T}_{(q,v)}^VC)_q& \Leftrightarrow \\ &\lambda_q^{(q,v)}(w)\in{\rm T}_{(q,v)}^VC &\Leftrightarrow\\ \lambda_q^{(q,v)}(w)(\phi)=0, &\forall \phi\in{\rm C}^\infty({\rm T} Q), \mathrm{with} \,\, j_C^*\phi=0 &\Leftrightarrow\\ {\rm d}\phi(\lambda_q^{(q,v)}(w))=0, &\forall \phi\in{\rm C}^\infty({\rm T} Q), \mathrm{with} \,\, j_C^*\phi=0 &\Leftrightarrow \\ ({\rm d}^V\phi(q,v))(w)=0,&\forall \phi\in{\rm C}^\infty({\rm T} Q), \mathrm{with} \,\, j_C^*\phi=0.& \end{eqnarray*} The second item is a direct consequence of the first, and the third can be obtained from the definitions. \qed \begin{corol} Let ${\rm R}\in\mbox{\fr X} (Q,\tau_Q)$ and $(q,v)\in C$; then ${\rm R}(q,v)\in({\rm T}^V_{(q,v)}C)^\perp_q$ if, and only if, $\mathop{i}\nolimits({\rm R}(q,v)){\bf g}\in(({\rm T}_{(q,v)}^VC)_q)^o$. \end{corol} Usually the submanifold $C$ is given by the annihilation of a finite family of constraints, functions defined on ${\rm T} Q$. We are going to characterize $(({\rm T}_{(q,v)}^VC)_q)^o$ using these constraints. \bigskip From here to the end of this section, we suppose that the submanifold $C$ is defined by the vanishing of $r$ functions $\{\phi^i\}$, with $r<n=\dim\, Q$, satisfying the condition $$ \mathrm{rank}\,\left(\derpar{\coor{\phi}{1}{r}}{\coor{v}{1}{n}}\right)=r\, . $$ Then $\dim\, C=2n-r$ and we have: \begin{prop} Let $(q,v)\in C$; then \begin{enumerate} \item $\dim {\rm T}^V_{(q,v)}C=n-r$. \item $({\rm T}^V_{(q,v)}C)_{q}=\{w\in {\rm T}_{q}Q; ({\rm d}^V\phi^i(q,v))(w)=0, i=1,\ldots,r \}$. \item The subspace $(({\rm T}_{(q,v)}^VC)_q)^o$ is generated by $\{{\rm d}^V\phi^1(q,v),\ldots,{\rm d}^V\phi^r(q,v)\}$. Or, what is the same, if $\alpha\in{\rm T}_{q}^{*}Q$ satisfies $\alpha |_{({\rm T}^V_{(q,v)}C)_{q}}=0$, then $\alpha$ is a linear combination of ${\rm d}^V\phi^1(q,v),\ldots,{\rm d}^V\phi^r(q,v)$.
\end{enumerate} \end{prop} ({\sl Proof\/})\quad Let $(q^{i}, v^{i})$ be a natural coordinate system on ${\rm T} Q$. \begin{enumerate} \item The assumed condition \(\displaystyle {\rm rank}\,\left(\derpar{\coor{\phi}{1}{r}}{\coor{v}{1}{n}}\right)=r\) implies that, up to a change of order in the coordinates $\coor{q}{1}{n}$, we can suppose that $$ \det\, \left(\derpar{\coor{\phi}{1}{r}}{\coor{v}{1}{r}}\right)\not= 0. $$ Then $(q^{1},\ldots,q^{n},\phi^1,\ldots,\phi^r,v^{r+1},\ldots,v^{n})$ is a local coordinate system of ${\rm T} Q$, by the Inverse Function Theorem. The vector space $V_{(q,v)}({\rm T} Q)$ is generated by $$ \left\{\derpar{}{\phi^1},\ldots,\derpar{}{\phi^r},\derpar{}{v^{r+1}},\ldots,\derpar{}{v^n}\right\}_{(q,v)}\, , $$ and the subspace ${\rm T}^V_{(q,v)}C\subset V_{(q,v)}({\rm T} Q)$ is generated by $$ \left\{\derpar{}{v^{r+1}},\ldots,\derpar{}{v^n}\right\}_{(q,v)}\, . $$ Hence the first item is proved. \item The inclusion part is proved in the first item of Proposition \ref{verticalvectors}, and the equality is a matter of dimensions. \item The previous items imply that $\{{\rm d}^V\phi^1(q,v),\ldots,{\rm d}^V\phi^r(q,v)\}$ is a basis of $ (({\rm T}_{(q,v)}^VC)_q)^o$. \end{enumerate} \qed Then, as a corollary, we obtain: \begin{prop} For $(q,v)\in C$, the form $\eta\in{\mit\Omega}^1(Q,\tau_Q)$ satisfies \(\displaystyle j^*_C\eta\vert_{({\rm T}^V_{(q,v)}C)_q}=0\) if, and only if, there exist $\moment{\lambda}{1}{r}\in{\rm C}^\infty ({\rm T} Q)$ such that $\eta=\lambda_\alpha{\rm d}^V\phi^\alpha=\lambda_1{\rm d}^V\phi^1+\ldots+\lambda_r{\rm d}^V\phi^r$. \end{prop} ({\sl Proof\/})\quad Because for every $(q,v)\in C$, we have that $\eta(q,v)\in(({\rm T}_{(q,v)}^VC)_q)^o$. \qed \bigskip In the case that the constraints define the submanifold $C$ only locally, the above results are valid only in the corresponding open set.
The last Proposition allows us to state the so-called {\bf dual d'Alembert nonholonomic principle}: {\it The work form $\mathop{i}\nolimits({\rm R}){\bf g}$ corresponding to the constraint force ${\rm R}$ annihilates the virtual velocities of the system}. And, as an immediate result, we have: \begin{corol} If ${\rm R}$ is the nonholonomic constraint force, then there exist $\lambda_1,\ldots,\lambda_r\in{\rm C}^\infty ({\rm T} Q)$ such that $$ \mathop{i}\nolimits({\rm R}){\bf g}=\lambda_\alpha{\rm d}^V\phi^\alpha=\lambda_\alpha\derpar{\phi^\alpha}{v^j}{\rm d} q^j \ , $$ and, as a consequence, $$ {\rm R}=\lambda_\alpha\derpar{\phi^\alpha}{v^j}g^{jk}\derpar{}{q^k} \ . $$ \end{corol} \begin{definition} The functions $\lambda_1,\ldots,\lambda_r$ are called \textbf{Lagrange multipliers} of the nonholonomic system. \end{definition} \noindent{\bf Comment}: Observe that in the case of holonomic constraints we could not obtain a global expression for the constraint force: we only obtained it along each particular trajectory of the system. In the present situation, however, we obtain an expression for ${\rm R}$ as a vector field depending on the velocities and on the Lagrange multipliers. This is because holonomic constraints are not a particular case of the nonholonomic ones. See also the comment following equation (\ref{constraintR}). \bigskip \noindent{\bf Important particular case}: The submanifold $C\subset{\rm T} Q$ is a linear subbundle of ${\rm T} Q$. \begin{enumerate} \item In this case $C$ is defined by the annihilation of a family of differential forms, that is, we have $\omega^{\alpha}\in\Omega^{1}(Q)$, $\alpha=1,\ldots,r$, linearly independent at every point of $Q$, and $$ C=\{(q,v)\in{\rm T} Q;\, \omega^{\alpha}_{q}(v)=0,\alpha=1,\ldots,r\}\, . 
$$ In local coordinates, if $\omega^{\alpha}=a^{\alpha}_{j}(q){\rm d} q^{j}$, then $\phi^\alpha=a^{\alpha}_{j}(q)v^{j}$, that is, the constraints are linear in the velocities, and the expression of the constraint force is $$ {\rm R}=\lambda_\alpha a^\alpha_{j}g^{jk}\derpar{}{q^k} \ . $$ \item Alternatively, we can suppose that the subbundle $C$ is given as a regular distribution $\mathcal{D}$, the distribution annihilated by $\{\omega^{\alpha}, \alpha=1,\ldots,r\}$. If $(q,v)\in\mathcal{D}$, by linearity we have that ${\rm T}_{(q,v)}^VC=\lambda_{q}^{(q,v)} (\mathcal{D}_{q})$, hence $({\rm T}_{(q,v)}^VC)_{q}=\mathcal{D}_{q}$ and $({\rm T}_{(q,v)}^VC)^\perp_q=\mathcal{D}^\perp_q$; then the constraint force ${\rm R}$ is orthogonal to $\mathcal{D}$. This is the situation usually considered in the classical books on mechanics. \item If the distribution $\mathcal{D}$ is integrable and $(q,v)\in\mathcal{D}$ is the initial condition of the dynamical equation for the solution $\gamma$, then the image of $\gamma$ is contained in the integral submanifold of $\mathcal{D}$ passing through the point $q\in Q$, because $\dot\gamma(t)\in\mathcal{D}_{\gamma(t)}$ for every $t$. The constraint force ${\rm R}$, orthogonal to $\mathcal{D}$, obliges the system to move on the integral submanifolds of the constraint distribution $\mathcal{D}$. \end{enumerate} \bigskip \noindent{\bf Comments}: \begin{enumerate} \item We can understand the solution as follows: if we have a single constraint $\phi:{\rm T} Q\to\mathbb{R}$, there is an associated 1-form, ${\rm d}^V\phi$, which gives a ``constraint force'' $R^{\phi}$ such that $\mathop{i}\nolimits(R^{\phi}){\bf g}$ is proportional to ${\rm d}^V\phi$. If we have $r$ independent constraints $\{\phi^\alpha\}$, then we have the corresponding constraint forces $R^{\phi^\alpha}$ and the subbundle they generate, and the resultant constraint force ${\rm R}$ is contained in this subbundle. 
\item For these systems, d'Alembert's principle says that if the system moves ``along the vertical fibres'' of $C$, then the work done by the constraint force (the integral along the trajectory) is null. This is called the {\bf virtual work principle}, as an alternative to d'Alembert's principle. \end{enumerate} \subsection{Dynamical equations for nonholonomic systems} We finish this section by giving the expressions of the dynamical equations of nonholonomic constrained systems $(Q,{\bf g},\omega, C)$ in different significant cases of $C\subset {\rm T} Q$, the submanifold of nonholonomic constraints. \begin{enumerate} \item If the submanifold of constraints $C$ is locally defined by the annihilation of $r$ independent constraint functions $\{\phi^\alpha\}$, then the dynamical equation is $$ \nabla_{\dot\gamma}\dot\gamma ={\rm F}\circ\gamma+\lambda_\alpha\mathop{i}\nolimits({\rm d}^V\phi^\alpha)g^{-1}\circ\dot\gamma \ , $$ or, in dual form, $$ \nabla_{\dot\gamma}(\mathop{i}\nolimits(\dot\gamma){\bf g})= \omega\circ\gamma+\lambda_\alpha{\rm d}^V\phi^\alpha\circ\dot\gamma \ . $$ These equations, together with the constraints defining $C$, $\phi^1=0,\ldots,\phi^r=0$, are a system of $n+r$ equations with $n+r$ unknowns: the components of the trajectory $\gamma$ and the Lagrange multipliers $\lambda_\alpha$. Observe that some of them are the dynamical equations and the remaining ones are the constraint functions. The corresponding Euler-Lagrange equations are $$ \frac{d}{d t}\left(\derpar{K}{v^j}\circ\dot\gamma\right)- \derpar{K}{q^j}\circ\dot\gamma= \omega_j\circ\gamma +\lambda_\alpha\derpar{\phi^\alpha}{v^j}\circ\dot\gamma \quad , \quad (j=1,\ldots ,n) $$ because $\omega=\omega_k{\rm d} q^k$, with $\omega_k=g_{kj}{\rm F}^j$. \item If the system is conservative, then $\omega=-{\rm d} V$, where $V\in{\rm C}^\infty (Q)$ is the potential function. 
In this case we can introduce the Lagrangian function ${\cal L}=K-\tau_Q^*V$ and we have $$ \frac{d}{d t}\left(\derpar{{\cal L} }{v^j}\circ\dot\gamma\right)- \derpar{{\cal L}}{q^j}\circ\dot\gamma= \lambda_\alpha\derpar{\phi^\alpha}{v^j}\circ\dot\gamma \ . $$ As above, these equations, together with the constraints defining $C$, are also a system of $n+r$ equations with $n+r$ unknowns. \item If $C$ is a vector subbundle, then $\phi^\alpha=a^{\alpha}_{j}(q)v^{j}$ and the Euler-Lagrange equations are $$ \frac{d}{d t}\left(\derpar{K}{v^j}\circ\dot\gamma\right)- \derpar{K}{q^j}\circ\dot\gamma= \omega_j\circ\gamma +\lambda_\alpha a^\alpha_j\circ\dot\gamma \quad , \quad (j=1,\ldots ,n)\, , $$ or, in the case of conservative systems, $$ \frac{d}{d t}\left(\derpar{{\cal L} }{v^j}\circ\dot\gamma\right)- \derpar{{\cal L}}{q^j}\circ\dot\gamma= \lambda_\alpha a^\alpha_j\circ\dot\gamma \ . $$ \end{enumerate} If $C$ is an affine subbundle, then $\phi^\alpha=a^{\alpha}_{j}(q)v^{j}+b^\alpha(q)$ and the expression of the Euler-Lagrange dynamical equations is the same as above. \section{Non-autonomous Newtonian systems} In some interesting cases the force field acting on a Newtonian system depends not only on the positions and the velocities but also on time. These are called {\bf non-autonomous} or {\bf time-dependent} systems. In the following paragraphs we will try to extend the above geometric formulation to this situation. 
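As a numerical aside (an illustrative sketch, not part of the text), the multiplier equations of the previous section can be checked on the flat nonholonomic free particle: $Q=\mathbb{R}^3$ with the Euclidean metric, ${\rm F}=0$ and the single linear constraint $\phi(q,v)=v^3-q^2v^1$ (the form $\omega={\rm d} q^3-q^2\,{\rm d} q^1$). The constraint force is ${\rm R}=\lambda(-q^2,0,1)$, and imposing ${\rm d}\phi/{\rm d} t=0$ along the motion fixes $\lambda=v^1v^2/(1+(q^2)^2)$.

```python
def phi(s):
    # constraint phi(q, v) = v3 - q2 * v1 (illustrative choice)
    q1, q2, q3, v1, v2, v3 = s
    return v3 - q2 * v1

def rhs(s):
    # Newton equations with constraint force R = lam * (-q2, 0, 1);
    # lam is fixed by requiring d(phi)/dt = 0 along the motion
    q1, q2, q3, v1, v2, v3 = s
    lam = v1 * v2 / (1.0 + q2 * q2)
    return [v1, v2, v3, -lam * q2, 0.0, lam]

def rk4(s, h, n):
    # classical fourth-order Runge-Kutta on the first-order system (q, v)
    for _ in range(n):
        k1 = rhs(s)
        k2 = rhs([s[i] + 0.5 * h * k1[i] for i in range(6)])
        k3 = rhs([s[i] + 0.5 * h * k2[i] for i in range(6)])
        k4 = rhs([s[i] + h / 6.0 * 0 + h * k3[i] for i in range(6)])
        s = [s[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(6)]
    return s

# initial condition on C: q = (0, 1, 0), v = (1, 1, 1), so that phi = 0
s0 = [0.0, 1.0, 0.0, 1.0, 1.0, 1.0]
s1 = rk4(s0, 1e-3, 2000)
print(phi(s1))  # stays ~ 0: the flow remains on C
```

Since ${\bf g}({\rm R},\dot\gamma)=\lambda\,\phi=0$ on $C$, the kinetic energy $K=\frac{3}{2}$ of this initial condition is also preserved along the run.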
The geometric model appropriate to this case is the following: A \textbf{non-autonomous Newtonian mechanical system} is a triple $(\mathbb{R}\times Q,{\bf g},{\rm F})$, where $(Q,{\bf g})$ is a Riemannian manifold and the force field is ${\rm F}\in\mbox{\fr X} (Q,\pi_2)$, with $\pi_2\colon\mathbb{R}\times Q\to Q$; that is, $$ \xymatrix{&{\rm T} Q\ar[d]^{\tau_Q}\\{\mathbb{R}\times Q}\ar[ur]^{{\rm F}}\ar[r]^{\quad\pi_2}&\, Q\,.} $$ Moreover, if the force field depends on the velocities, then ${\rm F}\in\mbox{\fr X} (Q,\tau_Q\circ\rho_2)$, with $\rho_2\colon\mathbb{R}\times{\rm T} Q\to {\rm T} Q$; that is, $$ \xymatrix{& &{\rm T} Q\ar[d]^{\tau_Q}\\{\mathbb{R}\times {\rm T} Q}\ar[urr]^{{\rm F}}\ar[r]^{\quad\rho_2}&{\rm T} Q\ar[r]^{\tau_Q}&\, Q\,.} $$ The Newton equations are written in the usual way: \begin{itemize} \item In the case that the force does not depend on the velocities, $$ \nabla_{\dot\gamma}\dot\gamma ={\rm F}\circ\bar\gamma \ , $$ where $\bar\gamma=(t,\gamma)\colon I\subset\mathbb{R}\to I\times Q$. We can also use the dual form by means of the corresponding work form $\omega\in{\mit\Omega}^1(Q,\pi_2)$. \item If the force field depends on the velocities, $$ \nabla_{\dot\gamma}\dot\gamma ={\rm F}\circ\bar{\dot\gamma} \ , $$ where $\bar{\dot\gamma}=(t,\dot\gamma)\colon I\subset\mathbb{R}\to I\times{\rm T} Q$. As above, we can use the corresponding work form $\omega\in{\mit\Omega}^1(Q,\tau_Q\circ\rho_2)$ and obtain the equations in the dual form. \end{itemize} The Euler-Lagrange equations are the same as usual, but the second term depends on time $t\in\mathbb{R}$. In particular, if the work form depends on time, $\omega\in{\mit\Omega}^1(Q,\pi_2)$, we say that the system is \textsl{conservative} if there exists $V\colon\mathbb{R}\times Q\to \mathbb{R}$ such that $\omega=-{\rm d} V_t$, where $V_t\colon Q\to \mathbb{R}$ is defined by $V_t(p):=V(t,p)$, for every $p\in Q$ and $t\in\mathbb{R}$. 
In this situation we can define the Lagrangian function ${\cal L}=K-V$, depending on time, and the Euler-Lagrange equations are as usual $$ \frac{d}{d t}\left(\derpar{{\cal L}}{v^i}\circ\bar{\dot\gamma}\right)- \derpar{{\cal L}}{q^i}\circ\bar{\dot\gamma}= 0 \ . $$ The case of time-dependent constrained systems, both holonomic and nonholonomic, can be directly formulated with the appropriate changes. The constraint force also depends on time. It is interesting to note that if the Lagrangian function is time-dependent, then the system is not conservative, that is, the energy function is not conserved along the trajectories of the system. In fact, using the definition of the total energy from the Lagrangian, $E=K+V$, it is easy to show that $$ \frac{d(E\circ \dot\gamma)}{dt}=-\frac{\partial {\cal L}}{\partial t}\circ \dot\gamma\, . $$ Observe that in the above comments we have supposed that the time dependence is in the forces, but there are systems where it is in the kinetic energy, that is, in the Riemannian metric. This is the case of variable mass systems, such as a rocket whose mass changes as the combustion proceeds; see \cite{Fox-67} and \cite{Cve-16} for other interesting examples. There are also constrained systems whose constraints depend on time; see for example \cite{Ar-89}. Other applications in mechanics of time-dependent Riemannian metrics can be seen in \cite{SarPrin-2010,MesSarCram-2011,SarPrinMes-kup-2012} and references therein. Some problems in geometry, mechanics and relativity consider the case of Riemannian metrics depending on parameters (the time, for example, when we have an evolution problem), but they are outside the scope of this survey. \section{Geodesic fields, Hamilton-Jacobi equation and applications} The Hamilton--Jacobi equation is a fundamental tool for obtaining significant results in the study of Lagrangian and Hamiltonian systems, but for a Newtonian system we do not, in general, have this tool. 
Following ideas contained in \cite{CGMMR-06}, we develop in this section the associated notion, which we call Hamilton--Jacobi vector fields. They can be associated with non-conservative forces, that is, when we do not have a Lagrangian or Hamiltonian function. We compare both situations, the classical and the Newtonian, and give some particular applications in different fields. \subsection{Hamilton--Jacobi vector fields} Consider a Newtonian system $(Q,{\bf g},{\rm F})$ and the associated dynamical equation for curves $\gamma:I\subseteq \mathbb{R}\longrightarrow Q$ \begin{equation}\label{DIN} \nabla_{\dot{\gamma}(t)}\dot{\gamma}(t)={\rm F}(\gamma(t))\,. \end{equation} This is a second order differential equation for the curves $\gamma$, hence it is associated to a second order vector field in the phase space ${\rm T} Q$. Following ideas of Jacobi, we try to reduce the order of this differential equation, and we will obtain the Hamilton--Jacobi equation for Newtonian systems in this Riemannian approach. For a similar approach in the Lagrangian setting, see \cite{CGMMR-06}. In \cite{BLMDMM-2012} you can see a more general approach in the setting of skew-symmetric algebroids. First we begin with a geometric result on vector fields on the configuration space $Q$. \begin{teor} The vector field $X\in\mathfrak{X}(Q)$ satisfies the equation \begin{equation}\label{HJL} \nabla_X X={\rm F} \end{equation} if and only if its integral curves $\gamma:I\rightarrow Q$ are trajectories of the above Newtonian system, that is, they satisfy equation (\ref{DIN}). We say that $X$ is a \textbf{Hamilton-Jacobi vector field} associated to the Newtonian system $(Q,{\bf g},{\rm F})$. \end{teor} \begin{proof} For any $p\in Q$, let $\gamma$ be the integral curve of $X$, $\dot{\gamma}=X\circ\gamma$, with initial condition $\gamma(0)=p$. Suppose that $\gamma$ satisfies (\ref{DIN}). 
Then at $t=0$ we have: \begin{equation} \nabla_{\dot{\gamma}(0)}\dot{\gamma}={\rm F}(\gamma(0))\,, \end{equation} that is, \begin{equation} \nabla_{X(p)}X={\rm F}(p)\,. \end{equation} But the point $p\in Q$ is arbitrary, hence $\nabla_{X}X={\rm F}$ as we wanted. Conversely, if $X$ satisfies $\nabla_{X}X={\rm F}$ and $\gamma$ is an integral curve of $X$, then $$ \nabla_{\dot{\gamma}(t)}\dot{\gamma}(t)= (\nabla_{X}X)(\gamma(t))={\rm F}(\gamma(t))\,, $$ and the curve $\gamma$ satisfies equation (\ref{DIN}). \end{proof} \bigskip \noindent{\bf Comments}: \begin{enumerate} \item Notice that, by this result, we obtain solutions of a second order differential equation (integral curves of a vector field in ${\rm T} Q$) as integral curves of vector fields on the manifold $Q$, namely of any vector field solution to the equation $\nabla_X X={\rm F}$; that is, as solutions of first order differential equations. But, given a solution $X\in\mathfrak{X}(Q)$, we do not obtain from $X$ all the integral curves of the second order equation; we only obtain those with initial conditions $(q,u)\in{\rm T} Q$ of the form $(q,X(q))$. In fact, for every solution $X$, we obtain a family of solution curves of the dynamical equation (\ref{DIN}): those contained in the submanifold of ${\rm T} Q$ defined by the graph of $X$ as a section of the natural projection $\tau_Q:{\rm T} Q\rightarrow Q$. It can be proved that this submanifold is invariant under the second order differential system associated to the dynamical equation. See \cite{CGMMR-06} for more details. \item If ${\rm F}=0$, we have the family of vector fields satisfying the equation $\nabla_X X=0$, the so-called {\bf geodesic vector fields}. Their integral curves are geodesic curves for the Levi--Civita connection $\nabla$; they are solutions to the equation $\nabla_{\dot{\gamma}(t)}\dot{\gamma}(t)=0$. Some interesting properties of these vector fields can be found in \cite{CMM--2021} and references therein. 
\end{enumerate} \subsection{Classical Hamilton-Jacobi equation in Lagrangian form} Let $(Q,{\bf g},\omega=-{\rm d} V)$ be a conservative Newtonian system with force field ${\rm F}=-\mathrm{grad}\,V$. The Lagrangian function is $L=K-V$ and the total energy function is $E=K+V$. Suppose the vector field $X$ is a Hamilton--Jacobi vector field of the system, $\nabla_X X={\rm F}$; then $$ \mathcal{L}_X (E\circ X)=\mathcal{L}_X\left(\frac{1}{2}\textbf{g}(X,X)+V\right) =\textbf{g}(\nabla_X X,X)+\mathrm{d}V(X)= \textbf{g}({\rm F},X)+\mathrm{d}V(X)=0\, , $$ which is a conservation of energy theorem for the Hamilton--Jacobi vector fields. Following this last result we have: \bigskip \begin{teor} (Hamilton--Jacobi equation) Suppose that the vector field $X\in\mathfrak{X}(Q)$ satisfies the condition $\mathrm{d}(\mathop{i}\nolimits(X) \textbf{g})=0$, for example, if $X$ has a potential function (see the comments below). Then the following conditions for $X$ are equivalent: \begin{enumerate} \item $\nabla_X X={\rm F}=-\mathrm{grad}\,V$. \item $\mathrm{d}(E\circ X)=0$. \end{enumerate} \end{teor} \begin{proof} Let $Y$ be a vector field on $Q$; then: \vspace{-2mm} \begin{eqnarray*} \mathrm{d}(E\circ X)(Y)&=&\mathcal{L}_Y(E\circ X)=\mathcal{L}_Y\left(\frac{1}{2}\textbf{g}(X,X)+V\right)=\textbf{g}(\nabla_Y X,X)+\mathrm{d}V(Y)\\ &=&\textbf{g}(\nabla_X X,Y)+\mathrm{d}V(Y)=\textbf{g}(\nabla_X X-{\rm F},Y)\,, \end{eqnarray*} where the fourth identity is a consequence of the hypothesis $\mathrm{d}(i_X \textbf{g})=0$ together with $$ \mathrm{d}(i_X \textbf{g})(Y,Z)=\textbf{g}(\nabla_Y X,Z)-\textbf{g}(\nabla_Z X,Y)\, , $$ directly obtained from the definition of the exterior differential and the symmetry of the connection $\nabla$. But $Y$ is arbitrary; hence we have the equivalence. 
\end{proof} \bigskip \noindent{\bf Comments}: \begin{enumerate} \item Notice that the condition $\mathrm{d}(E\circ X)=0$ is equivalent to saying that $E\circ X$ is constant on every connected component of $Q$, that is, $$E(q^1,\ldots,q^n, X^1,\ldots,X^n)=\mathrm{constant}\,,$$ which is nothing but the Hamilton--Jacobi equation in Lagrangian form. Its solutions are the vector fields $X\in\mathfrak{X}(Q)$ satisfying $\nabla_X X={\rm F}$ and $\mathrm{d}(\mathop{i}\nolimits(X) \textbf{g})=0$. \item If we want to obtain the classical form of the Hamilton--Jacobi equation, that is, the Hamiltonian form, we need to construct the Hamiltonian function, $H:{\rm T}^*Q\to\mathbb{R}$, and replace the vector field $X$ with the closed differential form $\alpha=\mathop{i}\nolimits(X) \textbf{g}$. Then, locally, $\alpha={\rm d} S$ and we obtain the classical equation. The closedness of the form is related to the special condition we have imposed on the vector field $X$, and it is equivalent to saying that the image of the form $\alpha$ in ${\rm T}^*Q$ is a Lagrangian submanifold of ${\rm T}^*Q$ with its natural symplectic structure (see \cite{CGMMR-06}). \item Condition $\mathrm{d}(i_X \textbf{g})=0$ for the vector field $X$ is equivalent to saying that the image of the map $X:Q\to\mathrm{T}Q$ is a Lagrangian submanifold of the symplectic manifold $\mathrm{T}Q$ with the 2-Lagrangian form associated to $L=K-V$, which is a regular Lagrangian. It is related to the previous item, via $\alpha=i_X \textbf{g}$ and the Legendre transformation associated with the kinetic energy. Once again, see \cite{CGMMR-06} for more details. \end{enumerate} \subsection{Jacobi metric} Consider a conservative mechanical system defined on the Riemannian manifold $(Q,{\bf g})$ with potential energy function $V \colon Q \to \mathbb{R}$. Recall that the energy is given by $E(v_q) = \frac12 {\bf g}(v_q,v_q) + V(q)$ and it is a constant of the motion. Suppose $E_0 > V(q)$ for all $q \in Q$. 
We define the {\bf Jacobi metric} as $$ {\bf g}_0 = (E_0-V)\, {\bf g} \ . $$ It is well known that the solutions $\gamma$ of the Newton equation $\nabla_{\dot\gamma} \dot\gamma = -\mathrm{grad}\, V \circ \gamma$ with fixed energy $E_0$ are, after a convenient reparametrization, the geodesic lines of~${\bf g}_0$. In \cite{God-69} there are interesting comments on this topic, and you can see a nice new approach to the same question in \cite{CMM--2021}. \subsection{Applications} \subsubsection{Euler equation for fluids as a Hamilton--Jacobi equation of a Newtonian system} In this paragraph we interpret the Euler equation for stationary fluids as a Hamilton--Jacobi equation for a Newtonian system. Let $U\subset\mathbb{R}^3$ be an open set. The classical Euler equation for a time-dependent vector field $X\in\mathfrak{X}(U)$, in Cartesian coordinates, is given as $$ \frac{\partial X}{\partial t}+\langle X,\nabla\rangle X=-\nabla p\, , $$ where $p:U\times\mathbb{R}\to\mathbb{R}$ is a function, the pressure on the fluid. If we consider the case where the fluid is incompressible, then we include the condition $\mathrm{div}\,X=0$, that is, the vector field is conservative (its flow preserves the volume). For a point $(x,t)$, the tangent vector $X(x,t)\in{\rm T}_x U$ is the velocity of the fluid particle at the point $x$ at the instant $t$. In a Riemannian manifold $(M,\textbf{g})$, the corresponding Euler equation for a time-dependent vector field $X\in\mathfrak{X}(M)$ is written as: $$ \frac{\partial X}{\partial t}+\nabla_X X=-\mathrm{grad}\, p\, . $$ The condition of being conservative for $X$ is $L_X \mathrm{d}V=0$, where $\mathrm{d}V$ is the volume element associated to $\textbf{g}$. The equivalence of both equations is direct in local coordinates. For a stationary motion, that is, constant in time, the last equation is: $$ \nabla_X X=-\mathrm{grad}\, p\, , $$ that is, the Lagrangian Hamilton--Jacobi equation (\ref{HJL}) with ${\rm F}=-\mathrm{grad}\, p$, corresponding to the conservative Newtonian system $(M,{\bf g},\omega=-{\rm d} p)$. 
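A one-dimensional numerical sketch of a Hamilton--Jacobi vector field may be helpful here (illustrative data, not from the text): on $Q=\mathbb{R}$ with $g={\rm d} x^2$ and $V(x)=\frac{1}{2}x^2$, the equation $\nabla_X X=-\mathrm{grad}\,V$ reads $XX'=-V'$, so $\frac{1}{2}X^2+V=E_0$ and $X(x)=\sqrt{2(E_0-V(x))}$ wherever $E_0>V(x)$. The sketch checks both the equation and the fact that integral curves of $X$ solve Newton's equation $\ddot x=-x$.

```python
import math

E0 = 1.0
V = lambda x: 0.5 * x * x                     # harmonic potential (illustrative)
X = lambda x: math.sqrt(2.0 * (E0 - V(x)))    # candidate Hamilton-Jacobi vector field

# (1) X solves nabla_X X = -grad V, i.e. X(x) X'(x) + V'(x) = 0:
h = 1e-6
x0 = 0.3
Xp = (X(x0 + h) - X(x0 - h)) / (2 * h)        # numerical X'(x0)
res1 = X(x0) * Xp + x0                        # should vanish

# (2) integral curves of X (first-order ODE x' = X(x)) solve x'' = -x;
#     with x(0) = 0 the exact Newtonian trajectory is x(t) = sqrt(2) sin t
h, x = 1e-4, 0.0
for _ in range(2000):
    x += h * X(x + 0.5 * h * X(x))            # explicit midpoint step
t = 2000 * h
res2 = x - math.sqrt(2.0) * math.sin(t)       # should be ~ 0

print(res1, res2)
```

Note that a single solution $X$ only produces the trajectories with initial velocity $X(x(0))$, in agreement with the first Comment of the previous subsection.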
\subsubsection{Hamilton--Jacobi equation and Schr\"odinger equation} Given a Newtonian conservative system $(M,{\bf g},\omega=-{\rm d} V)$ and a vector field $X\in\mathfrak{X}(M)$ which is a gradient, $X=-\mathrm{grad}\,S$, consider the following three conditions: \begin{enumerate} \item $X$ is a Hamilton--Jacobi vector field for the given system, that is, $\nabla_X X=-\mathrm{grad}\, V$ or, equivalently, $E\circ \mathrm{grad}\,S=E_0=\mathrm{constant}$. \item $\mathrm{div}\, X=0$, hence $\mathrm{div}\, \mathrm{grad}\,S=\Delta S=0$, where $\Delta$ is the Laplacian operator for the Riemannian metric ${\bf g}$. \item The function $\Psi= \exp(iS)$ satisfies the Schr\"odinger equation $$ \left(-\frac{1}{2}\Delta+V\right)\Psi=E_0\Psi\, . $$ \end{enumerate} Then any two of them imply the third. This is stated in \cite{ALOMU-2014} and is a nice relation between the classical and quantum worlds, especially highlighted in this Riemannian approach to mechanics, because all the elements included in the statement have a clear geometric meaning. For more explanations of these relations and interesting ideas see \cite{CMMGM-2016,MARMO-2009}. See also \cite{CHERO2001} for a Riemannian approach to quantum mechanics. We only include the first paper of a series of four. \section{Other interesting topics} In this section we give some references about topics where this Riemannian approach has given a modern revival of results and applications. We limit ourselves to giving some information on significant authors and recommend some of their publications and web pages as a source of information and updates on these topics in the Riemannian approach. In any case, it is necessary to say that this is not a full list of groups or researchers who have contributed to the study of these problems with this approach. 
\subsection{Symmetry and conserved quantities} Symmetries of the system, associated conserved quantities and reduction by the action of symmetry groups are deep ideas in the study of mechanical systems from the very beginning. In our approach, the Riemannian structure gives a natural set of symmetries: the isometries of the metric and, infinitesimally, the Killing vector fields. For example, in the case that the configuration manifold is $\mathbb{R}^3$, invariance by translations and rotations gives rise to linear and angular momenta in the classic terminology. We do not enter into historical references on such an extended subject, which from a geometric viewpoint goes back to S. Lie in the nineteenth century, with well recognized groups working on different topics and with a large amount of significant deep results. As an example we cite the work of G. Marmo and collaborators, making geometry and symmetry fundamental tools to understand physical problems. See \cite{CIMM-2015} and the references therein as a book collecting their work over the years. Other well known books on the topic have been written by J. Marsden and coauthors, or by A. Bloch and other authors. See, e.g., \cite{BLOCH2015} and the personal web pages of the authors; in particular the web page of J. Marsden, maintained since he passed away in 2010, where most of his books and research are openly accessible and are a source of information and results on this topic. These references contain extended lists of names and topics related to the use of symmetry to understand problems in mechanics and other problems. Hence we only give an example of a general result, a kind of {\bf Noether theorem} for the situation under study, that is, mechanics in the Riemannian approach. \medskip Suppose we have a Newtonian system $(M,{\bf g},{\rm F})$. 
For a vector field $X\in\mbox{\fr X}(M)$, consider the function ${\bf I}_X=\widehat{\mathop{i}\nolimits(X){\bf g}}:{\rm T} M\to\mathbb{R}$ defined by ${\bf I}_X(v_q)={\bf g}(X_q,v_q)$. Then we have \begin{enumerate} \item If $X$ is a Killing vector field, that is, $\mathcal{L}_X{\bf g}=0$, and $\sigma:I\subset\mathbb{R}\to M$ is a geodesic line, that is, $\nabla_{\dot\sigma}{\dot\sigma}=0$, then ${\bf I}_X$ is constant along $\dot\sigma$, that is, \begin{equation} \frac{{\rm d}}{{\rm d} t}({\bf I}_X\circ\dot\sigma)(t)=0\,. \end{equation} In other words, if $X$ is a Killing vector field, then ${\bf I}_X$ is a conserved quantity along the geodesic lines. \item If $X$ is a Killing vector field satisfying ${\bf g}(X,{\rm F})=0$, and $\gamma:I\subset\mathbb{R}\to M$ is a trajectory of the system $(M,{\bf g},{\rm F})$, that is, $\nabla_{\dot\gamma}{\dot\gamma}={\rm F}\circ\gamma$, then ${\bf I}_X$ is constant along $\dot\gamma$, that is, \begin{equation} \frac{{\rm d}}{{\rm d} t}({\bf I}_X\circ\dot\gamma)(t)=0\,. \end{equation} In other words, if $X$ is a Killing vector field orthogonal to the force field ${\rm F}$, then ${\bf I}_X$ is a conserved quantity along the trajectories of the system. 
\end{enumerate} Both results are a consequence of the following equation \begin{equation} \frac{{\rm d}}{{\rm d} t}\left({\bf g}(X(\gamma(t)),\dot\gamma(t))\right)=\nabla_{\dot\gamma}({\bf g}(X,\dot\gamma))={\bf g}(\nabla_{\dot\gamma}X,\dot\gamma)(t)+{\bf g}(X,\nabla_{\dot\gamma}\dot\gamma)(t)\, , \end{equation} for every curve $\gamma$, and of the well-known property that $X$ is a Killing vector field if and only if ${\bf g}(\nabla_YX,Y)=0$ for every $Y\in\mbox{\fr X}(M)$, as can be seen from the following chain of identities: \begin{eqnarray*} {\bf g}(\nabla_YX,Y)&=&{\bf g}(\nabla_XY+\mathcal{L}_YX,Y)={\bf g}(\nabla_XY,Y) -{\bf g}(\mathcal{L}_XY,Y)\\ &=&\frac{1}{2}\mathcal{L}_X({\bf g}(Y,Y))-\left(\frac{1}{2}\mathcal{L}_X({\bf g}(Y,Y))-\frac{1}{2}(\mathcal{L}_X{\bf g})(Y,Y)\right)\\ &=&\frac{1}{2}(\mathcal{L}_X{\bf g})(Y,Y)\, , \end{eqnarray*} obtained using the properties of the Levi--Civita connection. \medskip \noindent{\bf Comment}: What is the origin of the condition ${\bf g}(X,{\rm F})=0$ in item 2? Consider the case where the Newtonian system is conservative; then ${\rm F}=-{\rm grad}\, V$, hence the above orthogonality condition is equivalent to saying that $\mathcal{L}_X V=0$, and this last condition, together with $X$ being Killing, implies that the Lagrangian $L=K-V$ is invariant under $X$; hence the above result is a kind of generalization of the classical Noether theorem for Lagrangian systems. \subsection{Ideas on control of mechanical systems} Control of mechanical systems and robotics is a broad field of study, both from the theoretical and the applied viewpoint. The geometric approach has been widely developed at least since the eighties of the last century. Lagrangian, Hamiltonian and Riemannian approaches are used, depending on the taste of the different authors and the closeness to the applications. In particular, the Riemannian treatment is especially used in robotics. 
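As a numerical illustration of the Noether-type result above (an illustrative sketch, not from the text): on $M=\mathbb{R}^2$ with the Euclidean metric, the rotation field $X=-q^2\partial_{q^1}+q^1\partial_{q^2}$ is a Killing vector field, and the central force ${\rm F}(q)=-q$ satisfies ${\bf g}(X,{\rm F})=q^1q^2-q^1q^2=0$. Hence ${\bf I}_X(v_q)=-q^2v^1+q^1v^2$, the angular momentum, should be conserved along trajectories.

```python
def F(q):
    # central force F = -q (harmonic, illustrative); orthogonal to the
    # rotation Killing field X = (-q2, q1)
    return [-q[0], -q[1]]

def I_X(q, v):
    # I_X(v_q) = g(X_q, v_q) = -q2 v1 + q1 v2  (angular momentum)
    return -q[1] * v[0] + q[0] * v[1]

def rhs(s):
    # first-order form of Newton's equation on R^2
    return [s[2], s[3]] + F([s[0], s[1]])

def rk4(s, h, n):
    # classical fourth-order Runge-Kutta integrator
    for _ in range(n):
        k1 = rhs(s)
        k2 = rhs([s[i] + 0.5 * h * k1[i] for i in range(4)])
        k3 = rhs([s[i] + 0.5 * h * k2[i] for i in range(4)])
        k4 = rhs([s[i] + h * k3[i] for i in range(4)])
        s = [s[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(4)]
    return s

s0 = [1.0, 0.0, 0.3, 1.0]
I0 = I_X(s0[:2], s0[2:])
s1 = rk4(s0, 1e-3, 5000)
I1 = I_X(s1[:2], s1[2:])
print(I0, I1)  # the two values agree: I_X is conserved
```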
The control systems are modelled as a Newtonian system $(M,{\bf g},{\rm F})$ together with a set of control vector fields, or input forces, $F^1,\ldots,F^k$, which we can modulate with some coefficients. The control equation is given as: $$ \nabla_{\dot\gamma}{\dot\gamma}={\rm F}\circ\gamma+\sum_{i=1}^k u_i F^i\,. $$ The coefficients $u_i$ are the input controls and, usually, are functions of time belonging to a specific set of functions and taking values in a specific domain $(u_1,\ldots,u_k)\in U\subset\mathbb{R}^k$. The full development of these ideas applied to control of mechanical systems can be seen in \cite{BULE2005}. It contains not only the state of the art and the work developed by the authors, but also a detailed bibliography of other authors' work. Applications in robotics and other fields, and recent developments, can be seen in the included references and in the web pages of the authors. Of particular importance is the so-called {\bf symmetric product}, associated with the Levi--Civita connection, in the study of controllability of Newtonian control systems: the controllability of the above control mechanical system depends on properties of the symmetric algebra generated by the products $\ll F^i,F^j\gg=\nabla_{F^i}F^j+\nabla_{F^j}F^i$ of the control vector fields. Furthermore, the book contains a complete description of the motion of a rigid body in this Riemannian approach. In reference \cite{CORTES2002}, the Ph.D. dissertation of its author, and in the subsequent research, there are several studies and applications in different specific fields of control of mechanical systems, in particular from the Riemannian point of view. On his personal web page there is a complete account of his developments and collaborators. Optimal control of Newtonian systems is another topic where this Riemannian approach gives specific insight. Once again, the web pages of the above authors, in particular A. D. Lewis, are a good source of information on this topic. 
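For a flat metric on $\mathbb{R}^2$ the Christoffel symbols vanish, so $\nabla_X Y=(DY)X$ and the symmetric product reduces to $\ll X,Y\gg=(DY)X+(DX)Y$. A minimal sketch (the input fields below are illustrative choices, not from the cited works) shows how the symmetric product can generate a direction missing from the span of the inputs:

```python
def sym_product(X, Y, q, h=1e-5):
    """Symmetric product <X : Y> = (DY)X + (DX)Y on flat R^2, with the
    Jacobians approximated by central differences (illustrative sketch)."""
    def jac_times(Z, q, w):
        # directional derivative of Z at q in direction w: (DZ)(q) w
        qp = [q[i] + h * w[i] for i in range(2)]
        qm = [q[i] - h * w[i] for i in range(2)]
        Zp, Zm = Z(qp), Z(qm)
        return [(Zp[i] - Zm[i]) / (2 * h) for i in range(2)]
    return [a + b for a, b in zip(jac_times(Y, q, X(q)), jac_times(X, q, Y(q)))]

F1 = lambda q: [1.0, 0.0]      # constant input field (illustrative)
F2 = lambda q: [0.0, q[0]]     # input field vanishing at the origin

p = sym_product(F1, F2, [0.0, 0.0])
print(p)  # a new direction at the origin, where F2 itself vanishes
```

At the origin $F^2=0$, yet $\ll F^1,F^2\gg=\partial_{q^2}$, which is the kind of bracket-like mechanism behind the controllability results mentioned above.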
See also \cite{mbl-mcml-2009, CMZ-10}, and references therein, for a study of problems of optimal control in mechanical systems and the existence of special kinds of solutions. Another significant author working on this topic is Arjan van der Schaft, whose web page gives an account of his developments with collaborators on the subject of control of mechanical and electromechanical systems. The use of different geometric tools, including the Riemannian approach, is constant in this work. \subsection{Other developments} There are other interesting topics not included in this manuscript, for example forces depending on higher order derivatives, such as elastic forces, used in mechanical models of continuous media. Sometimes these models include constraints depending also on second or third order derivatives. In \cite{Neimark-Fufaev}, a classical reference on nonholonomic systems, there are several examples of this type of system. The approach in this book is classical, and it would be interesting to develop some of its chapters in more geometric terms, in particular in a Riemannian setting. See \cite{CILM-2004} for an interesting and pioneering geometrical approach. Moreover, no reference is made in this manuscript to numerical methods of integration of dynamical equations, including the so-called ``geometric integrators'', whose origin is in some geometric formulations of classical mechanics. \section{Conclusions} We have developed analytical mechanics from a Riemannian geometry perspective, including systems with constraints, both holonomic and nonholonomic, and non-autonomous systems. As specific topics where our viewpoint gives clear insight, apart from the general theory, we include Hamilton--Jacobi vector fields and the Hamilton--Jacobi equation, the Jacobi metric and a specific Noether theorem for infinitesimal symmetries of the system. 
As applications we give the Euler equation for fluids as a Newtonian conservative system and a relation between specific solutions to the Hamilton--Jacobi equation and the Schr\"odinger equation. To finish, we include some comments on control and optimal control of mechanical systems from a Riemannian perspective. There are more topics, and authors working on them, where the Riemannian approach gives special insight, and surely they will continue to give new results and applications in different fields. \section*{Acknowledgements} \hspace{6 mm}Some ideas of this work come from a course given over the years in our Faculty of Mathematics and Statistics at the UPC for the students of the Mathematics degree. Both the course and this document would not have been possible without the permanent collaboration and friendship of my colleagues Narciso Rom\'an--Roy and Xavier Gr\`acia over the years. My deep thanks to them. My former Ph.\,D. students Javier Yaniz--Fern\'andez and Mar\'ia Barbero--Li\~n\'an worked on control and optimal control of mechanical systems with this Riemannian approach. To both of them, my gratitude for their dedication. The author also acknowledges the financial support from the Spanish Ministerio de Ciencia, Innovaci\'on y Universidades project PGC2018-098265-B-C33 and from the Secretary of University and Research of the Ministry of Business and Knowledge of the Catalan Government project 2017--SGR--932. I also thank the referee for his careful reading of the manuscript and his comments, which have allowed the author to improve some parts of the work.
https://arxiv.org/abs/1203.0690
Asymptotic normality of integer compositions inside a rectangle
Among all restricted integer compositions with at most $m$ parts, each of which has size at most $l$, choose one uniformly at random. Which integer does this composition represent? In the current note, we show that the underlying distribution is, for large $m$ and $l$, approximately normal with mean value $\frac{ml}{2}$.
\section{Introduction} An integer composition of a nonnegative integer $n$ is, informally, a way of writing $n$ as a sum of nonnegative integers $\pi_1,\dotsc,\pi_k$, for some $k\ge 0$. Let $h_{l,m}(n)$ denote the number of integer compositions of the nonnegative integer $n$ with at most $m$ parts, each of which has size at most $l$ (`compositions inside a rectangle'). Recently, Sagan (2009) \cite{sagan} has shown that the sequence \begin{align*} h_{l,m}:= \bigl(\,h_{l,m}(0),\dotsc,h_{l,m}(lm)\,\bigr), \end{align*} is unimodal. In Figure \ref{fig:hlm}, we plot this sequence for $l=2$, $m=5$; $l=6$, $m=5$; and $l=6$, $m=20$. \begin{figure*}[!ht] \centering \includegraphics[scale=0.12]{hlm2_5.jpg} \includegraphics[scale=0.12]{hlm6_5.jpg} \includegraphics[scale=0.12]{hlm6_20.jpg} \caption { The sequences $h_{l,m}(0),\dotsc,h_{l,m}(lm)$ for $l=2$, $m=5$ (left), $l=6$, $m=5$ (middle) and $l=6$, $m=20$ (right). } \label{fig:hlm} \end{figure*} Apparently, as $l$ and $m$ increase, $h_{l,m}$ looks more and more `Gaussian'. This suggests a probabilistic interpretation of $h_{l,m}(n)$, according to which the normalized values $\frac{h_{l,m}(n)}{\sum_{i=0}^{lm}h_{l,m}(i)}$, $n=0,\dotsc,lm$, denote the probabilities that an integer composition with at most $m$ parts, each of which has size at most $l$, chosen uniformly at random, represents the integer $n$. In the current note, we show that these probabilities follow, for large $l$ and $m$, approximately a normal distribution with mean value $\frac{lm}{2}$ and variance $m\frac{(l+1)^2-1}{12}$. To this end, we first define \emph{multinomial triangles} as a generalization of Pascal's triangle and characterize their entries, \emph{polynomial coefficients}, as generalizations of the well-studied binomial coefficients (Section \ref{sec:triangles}), whereupon we outline a recently found relationship between polynomial coefficients and specifically restricted integer compositions (Section \ref{sec:integer}). 
The latter, with various types of restrictions, have attracted much attention in recent years (cf. \cite{chinn}, \cite{eger}, \cite{heubach}, \cite{hitczenko}, \cite{malandro}, \cite{schmutz}, \cite{shapcott}). For example, Malandro \cite{malandro} determines asymptotic formulas for $L$-restricted integer compositions --- $L$ being an arbitrary finite set --- and Shapcott \cite{shapcott} and Schmutz and Shapcott \cite{schmutz} find a lognormal distribution for part products of restricted integer compositions. Hitczenko and Stengle \cite{hitcz2} derive the expected number of distinct part sizes of unrestricted random compositions. Restricted and unrestricted integer compositions have a variety of applications, ranging from the theory of patterns \cite{heubach2} to monotone paths in two-dimensional lattices (\cite{kimberling}), alignments between strings (\cite{eger4}), and the distribution of the sum of discrete integer-valued random variables (\cite{eger2}). Then, in Section \ref{sec:main}, we state our main theorem, asymptotic normality of compositions inside a rectangle, which we prove in Section \ref{sec:proof}. In the conclusion, we discuss generalizations of the analyzed setting where part sizes are restricted to lie within arbitrary finite sets. While our main result, properly understood, might be considered not very surprising, the steps that lead to it (Lemmas \ref{lemma:exact} to \ref{lemma:approx}) may be judged interesting in their own right (and are certainly novel) because they specify the exact distribution of the random variable $X_{l,m}$ that sums the parts of a randomly chosen integer composition from a rectangle of size $l\times m$, and give an elegant characterization of it in terms of the distribution of the sum of independent uniform random variables and an ``error term'' that tends toward zero quadratically. 
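As a quick computational illustration (ours, not part of the paper's argument; the helper names are ours as well), $h_{l,m}(n)$ can be obtained by summing the coefficients of $x^n$ in the powers $(1+x+\dotsb+x^l)^j$ for $j=1,\dotsc,m$; the case $l=2$, $m=5$ reproduces the left panel of Figure \ref{fig:hlm}:

```python
def power_coeffs(l, j):
    """Coefficients of (1 + x + ... + x^l)^j, by repeated convolution."""
    cur = [1]
    for _ in range(j):
        nxt = [0] * (len(cur) + l)
        for i, c in enumerate(cur):
            for k in range(l + 1):
                nxt[i + k] += c
        cur = nxt
    return cur

def h(l, m):
    """h_{l,m}(n), n = 0..l*m: compositions of n with at most m parts in {0,...,l}."""
    out = [0] * (l * m + 1)
    for j in range(1, m + 1):
        for n, c in enumerate(power_coeffs(l, j)):
            out[n] += c
    return out

seq = h(2, 5)
print(seq)  # entries sum to (l+1)/l * ((l+1)^m - 1) = 363
```

The printed sequence rises to a single peak and then falls, in line with Sagan's unimodality result.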
\section{Multinomial triangles and polynomial coefficients}\label{sec:triangles} In generalization to binomial triangles, $(l+1)$-nomial triangles, $l\ge 0$, are defined in the following way. Starting with a $1$ in row zero, construct an entry in row $k$, $k\ge 1$, by adding the overlying $(l+1)$ entries in row $(k-1)$ (some of these entries are taken as zero if not defined); thereby, row $k$ has $(kl+1)$ entries. For example, the monomial ($l=0$), binomial ($l=1$), trinomial ($l=2$) and quadrinomial triangles ($l=3$) start as follows, \begin{table}[!h] \begin{tabular}{r} 1\\ 1\\ 1\\ 1 \end{tabular}\hspace{0.5cm} \begin{tabular}{rrrr} 1\\ 1 & 1\\ 1 & 2 & 1\\ 1 & 3 & 3 & 1\\ \end{tabular}\hspace{0.5cm} \begin{tabular}{rrrrrrr} 1\\ 1 & 1 & 1\\ 1 & 2 & 3 & 2 & 1\\ 1 & 3 & 6 & 7 & 6 & 3 & 1\\ \end{tabular}\hspace{0.5cm} \begin{tabular}{rrrrrrrrrr} 1\\ 1 & 1 & 1 & 1\\ 1 & 2 & 3 & 4 & 3 & 2 & 1\\ 1 & 3 & 6 & 10 & 12 & 12 & 10 & 6 & 3 & 1\\ \end{tabular} \end{table} In the $(l+1)$-nomial triangle, entry $n$, $0\le n\le kl$, in row $k$, which we denote by $\binom{k}{n}_{l+1}$ and refer to as \emph{polynomial coefficient} (cf. Caiado (2007) \cite{caiado}, Comtet (1974) \cite{comtet}), has the following interpretation. It is the coefficient of $x^n$ in the expansion of \begin{align}\label{eq:multinomial_coeff1} (1+x+x^2+\dotsc+x^l)^k = \sum_{n=0}^{kl} \binom{k}{n}_{l+1}x^n. \end{align} Also note that, by its definition, $\binom{k}{n}_{l+1}$ satisfies the following recursion \begin{align} \binom{k}{n}_{l+1} = \sum_{j=0}^l \binom{k-1}{n-j}_{l+1}. 
\end{align} \section{Integer compositions and polynomial coefficients}\label{sec:integer} An {\bf integer composition} of a nonnegative integer $n$ is a tuple $\pi=(\pi_1,\dotsc,\pi_k)$, $k\ge 0$, of nonnegative integers such that \begin{align*} n = \pi_1+\dotsc+\pi_k \end{align*} where the $\pi_i$'s are called \emph{parts}, and $k$ is the \emph{number of parts}.\footnote{Compositions where some parts are allowed to be zero are sometimes called \emph{weak compositions}.} Let $\struct{C}(n,k,a,b)$ denote the set of restricted compositions of $n$ into $k$ parts $\pi_i$ with $a\le \pi_i\le b$, where $a,b\in\mathbb{N}\cup\set{\infty}$ such that $0\le a\le b$, and let $c(n,k,a,b)$ denote its size, $c(n,k,a,b)=\length{\struct{C}(n,k,a,b)}$. For example, for $n=5$, $k=2$, $a=0$, $b=\infty$, we have \begin{align*} 5 = 5+0=0+5=4+1=1+4=3+2=2+3, \end{align*} and thus $c(5,2,0,\infty)=6$. The following results are well-known. \begin{align} c(n,k,0,\infty) &= \binom{n+k-1}{k-1}\label{eq:1}\\ c(n,k,1,\infty) &= \binom{n-1}{k-1}\label{eq:2}\\ c(n,k,a,\infty) &= c(n-ka,k,0,\infty) = \binom{n-ka+k-1}{k-1}\label{eq:3}. \end{align} Moreover, in recent work, Eger (2012) \cite{eger} has shown, more generally, a simple relationship between the number of restricted integer compositions and polynomial coefficients, namely, \begin{align}\label{eq:eger} c(n,k,a,b) &= \binom{k}{n-ka}_{b-a+1}. \end{align} \section{Main theorem}\label{sec:main} Let $m$ be a positive integer and let $l$ be a nonnegative integer. Denote by $h_{l,m}(n)$ the number of integer compositions of the integer $n$ with at most $m$ parts $p$, each of which has size at most $l$, i.e. $0\le p\le l$. Let $X_{l,m}$ be the random variable that takes on the integer $n$, for $0\le n\le lm$, with probability \begin{align*} \frac{h_{l,m}(n)}{\sum_{i=0}^{lm} h_{l,m}(i)}. \end{align*} \begin{theorem}\label{theorem:main} Let $\mu_{l,m}=\frac{ml}{2}$ and let $\sigma_{l,m}^2 = \frac{(l+1)^2-1}{12}$. 
Then \begin{align*} \frac{X_{l,m}-\mu_{l,m}}{\sigma_{l,m} \sqrt{m}}\goesto \struct{N}(0,1) \quad\text{as } l,m\goesto\infty. \end{align*} \end{theorem} Our strategy for proving Theorem \ref{theorem:main} is as follows. First, we determine the exact distribution of $X_{l,m}$ in Lemma \ref{lemma:exact}. Then we derive the exact distribution of the sum of $m$ independently and uniformly distributed random variables in Lemma \ref{lemma:iid}, which is, by the Central Limit Theorem, asymptotically a normal distribution. Next, Lemmas \ref{lemma:central} and \ref{lemma:asymptotic} provide inequalities and upper bounds that we require in Lemma \ref{lemma:approx}, where we show that the distribution of $X_{l,m}$ can be represented, roughly, as the sum of two parts: the distribution of the sum $S_1+\dotsc+S_m$ of $m$ independently distributed uniform random variables (derived in Lemma \ref{lemma:iid}) and an ``error term'' that converges quadratically toward zero in $l$. \section{Proof of the main theorem}\label{sec:proof} \begin{lemma}\label{lemma:exact} Let $n$, $0\le n\le lm$, be given, and let $i$, $1\le i\le m$, be the smallest index such that $n\le il$. Then, \begin{align*} P[X_{l,m}=n] = \frac{1}{(l+1)^m-1}\frac{l}{l+1}\sum_{j=i}^m\binom{j}{n}_{l+1}. \end{align*} \end{lemma} \begin{proof} By definition, $h_{l,m}(n)=\sum_{j=1}^m c(n,j,0,l)=\sum_{j=1}^m\binom{j}{n}_{l+1}$, where the last equality follows from \eqref{eq:eger}. Moreover, $c(n,j,0,l)$ is obviously zero when $j<i$ since $n>(i-1)l$. Finally, the number of compositions with exactly $j$ parts, each between $0$ and $l$, is obviously $(l+1)^j$. Therefore, \begin{align*} \sum_{i=0}^{lm}h_{l,m}(i) =\sum_{i=0}^{lm} \sum_{j=1}^m c(i,j,0,l) = \sum_{j=1}^m\sum_{i=0}^{lm}c(i,j,0,l) = \sum_{j=1}^m(l+1)^j = \frac{l+1}{l}{\bigl((l+1)^m-1\bigr)}. \end{align*} Hence, \begin{align*} P[X_{l,m}=n]=\frac{h_{l,m}(n)}{\sum_{i=0}^{lm} h_{l,m}(i)}= \frac{1}{(l+1)^m-1}\frac{l}{l+1}\sum_{j=i}^m\binom{j}{n}_{l+1}. 
\end{align*} \end{proof} \begin{lemma}\label{lemma:iid} Denote by $S^{(m)}_l$ the sum $S_1+\dotsc+S_m$ of independent uniform random variables $S_j$, $j=1,\dotsc,m$, each taking values from the set $\set{0,\dotsc,l}$. The distribution of $S^{(m)}_l$ is given by \begin{align*} P[S^{(m)}_l=n] = \Bigl(\frac{1}{l+1}\Bigr)^m\binom{m}{n}_{l+1}. \end{align*} \end{lemma} \begin{proof} See Caiado \cite{caiado}, Eger \cite{eger2}. \end{proof} \begin{remark} Note that the expected value and the variance of $S_l^{(m)}$ in Lemma \ref{lemma:iid} are given by \begin{align*} \Exp[S^{(m)}_l] = m\Exp[S_j] = \frac{ml}{2},\quad\quad\Var[S^{(m)}_l] = m\Var[S_j] = m\frac{(l+1)^2-1}{12}. \end{align*} Also note that, by the Central Limit Theorem, the distribution of $S_l^{(m)}$ is asymptotically normal. \end{remark} Now, we prove a fact well-known for binomial coefficients, namely, that the `central' coefficient majorizes the remaining coefficients in a given row in the (multinomial) triangle. \begin{lemma}\label{lemma:central} Let $k\ge 0$ and $l\ge 0$ be integers. For all integers $n$ such that $0\le n\le kl$, \begin{align*} \binom{k}{n}_{l+1} \le \binom{k}{\lfloor\frac{kl}{2}\rfloor}_{l+1}. \end{align*} \end{lemma} \begin{proof} By the representation of $\binom{k}{n}_{l+1}$ as $\binom{k}{n}_{l+1}=\sum_{j=0}^{l}\binom{k-1}{n-j}_{l+1}$ we find for $n\ge 1$ \begin{align}\label{eq:repr} \binom{k}{n}_{l+1} = \binom{k}{n-1}_{l+1}+\Bigl[\binom{k-1}{n}_{l+1}-\binom{k-1}{n-l-1}_{l+1}\Bigr]. \end{align} Moreover, it is easy to show that polynomial coefficients are symmetric in the following sense, \begin{align*} \binom{k}{n}_{l+1} = \binom{k}{kl-n}_{l+1}. \end{align*} Therefore it suffices to show that the sequence $\binom{k}{0}_{l+1},\binom{k}{1}_{l+1},\dotsc,\binom{k}{\lfloor\frac{kl}{2}\rfloor}_{l+1}$ is non-decreasing. But by \eqref{eq:repr} this easily follows inductively, using the row number $k$ as induction variable. 
Importantly, note that, in \eqref{eq:repr}, if $n\le \lfloor\frac{kl}{2}\rfloor$, then $\binom{k-1}{n}_{l+1}$ is defined and greater than zero for all $k\ge 2$ since then $n\le \lfloor\frac{kl}{2}\rfloor\le (k-1)l$. \end{proof} In the following lemma, we write $a_k\sim b_k$ as a short-hand for $\lim_{k\goesto\infty} \frac{a_k}{b_k}=1$. Also note that the following lemma is a generalization of Stirling's approximation to the central binomial coefficient. \begin{lemma}\label{lemma:asymptotic} For all fixed $l$, \begin{align*} \binom{k}{\lfloor\frac{kl}{2}\rfloor}_{l+1}\sim \frac{(l+1)^{k}}{\sqrt{2\pi k\frac{(l+1)^2-1}{12}}}. \end{align*} \end{lemma} \begin{proof} See Eger \cite{eger3}. \end{proof} \begin{lemma}\label{lemma:approx} For all $l$ and $m$ and for all $n$ such that $0\le n\le ml$, \begin{align*} P[X_{l,m}=n] = \gamma_{l,m}P[S^{(m)}_l=n]+e_{l,m}, \end{align*} where $e_{l,m}$ is an ``error term'' that satisfies \begin{align*} 0\le e_{l,m} \le O(l^{-2}) \end{align*} and $\gamma_{l,m}$ satisfies \begin{align*} \gamma_{l,m}=\bigl(1+O(\inv{l})\bigr)^{-1}. \end{align*} \end{lemma} \begin{proof} Let $i$, $1\le i\le m$, be the smallest index such that $n\le il$. Moreover, define $\alpha_{l,m}$ as $\alpha_{l,m}=\frac{1}{(l+1)^m-1}\frac{l}{l+1}$ and note that $\alpha_{l,m}=\gamma_{l,m}\frac{1}{(l+1)^m}$, where $\gamma_{l,m}=\inv{(1+1/l)}$ (ignoring the $(-1)$ in the denominator of $\alpha_{l,m}$). Then \begin{align*} P[X_{l,m}=n] &= \alpha_{l,m}\sum_{j=i}^m \binom{j}{n}_{l+1} = \alpha_{l,m}\binom{m}{n}_{l+1}+\alpha_{l,m}\sum_{j=i}^{m-1}\binom{j}{n}_{l+1} = \gamma_{l,m}P[S^{(m)}_l=n]+e_{l,m}, \end{align*} where we define $e_{l,m}=\alpha_{l,m}\sum_{j=i}^{m-1}\binom{j}{n}_{l+1}$. Obviously, $e_{l,m}\ge 0$. Moreover, by Lemmas \ref{lemma:central} and \ref{lemma:asymptotic} \begin{align}\label{eq:start} e_{l,m}&\le \alpha_{l,m}\sum_{j=i}^{m-1}\binom{j}{\lfloor\frac{jl}{2}\rfloor}_{l+1} \le \alpha_{l,m}O(1)\sum_{j=i}^{m-1}\frac{(l+1)^j}{\sqrt{2\pi j\frac{(l+1)^2-1}{12}}}. 
\end{align} Now, \begin{align*} \frac{(l+1)^j}{\sqrt{2\pi j\frac{(l+1)^2-1}{12}}} = O(1)\cdot\frac{(l+1)^{j}}{\sqrt{j\bigl((l+1)^2-1\bigr)}}, \end{align*} so that \begin{align*} \sum_{j=i}^{m-1}\frac{(l+1)^j}{\sqrt{2\pi j\frac{(l+1)^2-1}{12}}} = \sum_{j=i}^{m-1}O(1)\frac{(l+1)^{j}}{\sqrt{j}\sqrt{(l+1)^2-1}} \le O(1)\sum_{j=i}^{m-1}(l+1)^{j-1} = O(1)\frac{(l+1)^{i-1}\bigl[(l+1)^{m-i}-1\bigr]}{l}, \end{align*} whence, continuing from \eqref{eq:start}, \begin{equation}\label{eq:shape} \begin{split} e_{l,m}&\le \alpha_{l,m}O(1)\sum_{j=i}^{m-1}\frac{(l+1)^j}{\sqrt{2\pi j\frac{(l+1)^2-1}{12}}} \le O(1)\frac{(l+1)^{i-2}}{(l+1)^m-1}\bigl[(l+1)^{m-i}-1\bigr] \\&\le O(1)\Bigl((l+1)^{-2}-(l+1)^{i-m-2}\Bigr) \le O(1)(l+1)^{-2}. \end{split} \end{equation} \end{proof} In Table \ref{table:approx}, we show the decrease of $e_{l,m}$ in Lemma \ref{lemma:approx} as $l$ increases. Our bound appears to be quite tight: $e_{l,m}$ indeed seems to decay approximately quadratically in $l$. In Figure \ref{fig:xlm_slm}, the distributions of $X_{l,m}$ and $S_{l}^{(m)}$ are plotted for different values of $l$ and $m$. The variable $X_{l,m}$ has a particular distributional shape that can be inferred from the proof of Lemma \ref{lemma:approx}. For small values of $n$, the probability mass of $X_{l,m}$ tends to exceed that of $S_l^{(m)}$ --- $e_{l,m}$ is relatively larger, as can be seen from Equation \eqref{eq:shape} --- while this relation is reversed for large $n$. 
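The behavior reported in Table \ref{table:approx} can be checked with exact rational arithmetic; the sketch below (our illustration with hypothetical helper names, not the authors' code) computes $\max_n \abs{P[X_{l,m}=n]-P[S^{(m)}_l=n]}$ for $m=10$ and confirms that it drops by a factor greater than two each time $l$ doubles:

```python
from fractions import Fraction

def power_coeffs(l, j):
    """Coefficients of (1 + x + ... + x^l)^j, i.e. the polynomial coefficients."""
    cur = [1]
    for _ in range(j):
        nxt = [0] * (len(cur) + l)
        for i, c in enumerate(cur):
            for k in range(l + 1):
                nxt[i + k] += c
        cur = nxt
    return cur

def max_diff(l, m):
    """max over n of |P[X_{l,m}=n] - P[S_l^(m)=n]|, both distributions exact."""
    rows = [power_coeffs(l, j) for j in range(1, m + 1)]
    total = Fraction(l + 1, l) * ((l + 1) ** m - 1)   # sum_i h_{l,m}(i)
    best = Fraction(0)
    for n in range(l * m + 1):
        hx = sum(row[n] for row in rows if n < len(row))   # h_{l,m}(n)
        p_x = Fraction(hx) / total
        p_s = Fraction(rows[-1][n], (l + 1) ** m)
        best = max(best, abs(p_x - p_s))
    return best

diffs = [max_diff(l, 10) for l in (1, 2, 4)]
print([float(d) for d in diffs])  # decreasing by a factor > 2 at each doubling of l
```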
\begin{table}[!h] \centering \begin{tabular}{l||rr|rr} & \multicolumn{2}{|c|}{$m=10$} & \multicolumn{2}{|c}{$m=20$} \\\hline\hline $l=1$ & $0.0471$ & & $0.0240$ &\\ $l=2$ & $0.0191$ & $2.46$ & $0.0093$ & $2.57$\\ $l=4$ & $0.0064$ & $2.94$ & $0.0031$ & $2.96$\\ $l=8$ & $0.0019$ & $3.25$ & $9.5016\times 10^{-4}$& $3.30$\\ $l=16$ & $5.5909\times 10^{-4}$ & $3.56$ & $2.6494\times 10^{-4}$& $3.58$\\ $l=32$ & $1.4871\times 10^{-4}$ & $3.75$ & $7.0291\times 10^{-5}$& $3.76$\\ $l=64$ & $3.8399\times 10^{-5}$ & $3.82$ & $1.8126\times 10^{-5}$& $3.87$\\ \end{tabular}\hspace{0.5cm} \caption{ Maximum over absolute differences $\abs{P[X_{l,m}=n]-P[S^{(m)}_l=n]}$, $n=0,\dotsc,lm$, for $m=10$ and $m=20$ and varying $l$. We also specify the factor of decrease in these differences between successive $l$ values. } \label{table:approx} \end{table} \begin{figure*}[!ht] \centering \includegraphics[scale=0.12]{distM10L2.jpg} \includegraphics[scale=0.12]{distM10L4.jpg} \includegraphics[scale=0.12]{distM10L8.jpg} \caption { The distributions of $X_{l,m}$ and $S^{(m)}_l$ for $m=10$ and $l=2$ (left), $l=4$ (middle), and $l=8$ (right). } \label{fig:xlm_slm} \end{figure*} \section{Conclusion} The choice of the restrictions $0\le p\le l$ for parts $p$ of integer compositions, although illustrating a model case, has largely been arbitrary. In fact, similar results to Theorem \ref{theorem:main} would hold for any finite set $L=\set{a_1,\dotsc,a_k}$ as the range of part sizes. For $L=\set{a,a+1,\dotsc,b}$, $0\le a\le b$, we find simple closed form solutions of the asymptotic distribution of $X_{L,m}$, where we define $X_{L,m}$ (and other variables such as $S_{L}^{(m)}$) as a generalization of $X_{l,m}$ above with $X_{l,m}=X_{\set{0,\dotsc,l},m}$. For example, in this case, $S^{(m)}_L$ has exact distribution \begin{align*} \Bigl(\frac{1}{b-a+1}\Bigr)^m\binom{m}{n-ma}_{b-a+1}, \end{align*} (cf. 
Eger (2012) \cite{eger2}) with expected value $\frac{m(a+b)}{2}$ and is, by the Central Limit Theorem, asymptotically normally distributed. Likewise, the distribution of $X_{L,m}$ admits a representation similar to that in Lemma \ref{lemma:exact}, as a sum of quantities $\binom{j}{n-ja}_{b-a+1}$ and a normalizing term, from which we can straightforwardly derive a decomposition of $X_{L,m}$ as in Lemma \ref{lemma:approx}, with bounds obtained from Lemmas \ref{lemma:central} and \ref{lemma:asymptotic}. As a final remark, note that our results entail a `Stirling'-like formula for $h_{l,m}(n)$. By definition $P[X_{l,m}=n]=\frac{h_{l,m}(n)}{\sum_{i=0}^{lm} h_{l,m}(i)}$, and evaluating this quantity at the asymptotic mean value $\frac{ml}{2}$ and equating it with the corresponding normal density leads to \begin{align*} h_{l,m}(\frac{ml}{2}) \sim \frac{\bigl((l+1)^m-1\bigr)\frac{l+1}{l}}{\sqrt{2\pi m \frac{(l+1)^2-1}{12}}}. \end{align*}
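This closing formula can be sanity-checked numerically; the sketch below (ours, under the definitions above) compares the exact value $h_{l,m}(\frac{ml}{2})$ with the right-hand side for moderate $l$ and $m$:

```python
import math

def power_coeffs(l, j):
    """Coefficients of (1 + x + ... + x^l)^j, by repeated convolution."""
    cur = [1]
    for _ in range(j):
        nxt = [0] * (len(cur) + l)
        for i, c in enumerate(cur):
            for k in range(l + 1):
                nxt[i + k] += c
        cur = nxt
    return cur

def h_at(l, m, n):
    """Exact h_{l,m}(n) = sum_{j=1}^m [x^n] (1 + x + ... + x^l)^j."""
    total = 0
    for j in range(1, m + 1):
        row = power_coeffs(l, j)
        if n < len(row):
            total += row[n]
    return total

def stirling_estimate(l, m):
    """Right-hand side of the closing `Stirling'-like formula."""
    return ((l + 1) ** m - 1) * (l + 1) / l / math.sqrt(
        2 * math.pi * m * ((l + 1) ** 2 - 1) / 12)

l, m = 8, 20
ratio = h_at(l, m, l * m // 2) / stirling_estimate(l, m)
print(ratio)  # close to 1, approaching 1 as l and m grow
```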
https://arxiv.org/abs/1203.4200
Residues and Telescopers for Rational Functions
We give necessary and sufficient conditions for the existence of telescopers for rational functions of two variables in the continuous, discrete and q-discrete settings and characterize which operators can occur as telescopers. Using this latter characterization, we reprove results of Furstenberg and Zeilberger concerning diagonals of power series representing rational functions. The key concept behind these considerations is a generalization of the notion of residue in the continuous case to an analogous concept in the discrete and q-discrete cases.
\section{Introduction}\label{SECT:intro} Residues have played a ubiquitous and important role in mathematics and their use in combinatorics has had a lasting impact~(e.g., \cite{Flajolet_Sedgewick}). In this paper we will show how the notion of residue and its generalizations lead to new results and a recasting of known results concerning telescopers in the continuous, discrete and~$q$-discrete cases. As an introduction to our point of view and our results, let us consider the problem of finding a differential telescoper for a rational function of two variables. Let~$k$ be a field of characteristic zero,~$k(t,x)$ the field of rational functions of two variables and~$D_t = \partial/\partial_t$ and~$D_x=\partial/\partial_x$ the usual derivations with respect to~$t$ and~$x$, respectively. Given~$f \in k(t,x)$, we wish to find a nonzero operator $L \in k(t)\langle D_t\rangle$, the ring of linear differential operators in~$D_t$ with coefficients in $k(t)$, and an element~$g \in k(t,x)$ such that~$L(f) = D_x(g)$. We may consider~$f$ as an element of~$\overline{K}(x)$ where~$\overline{K}$ is the algebraic closure of~$K = k(t)$. As such, we may write \begin{equation} \label{eqn0} f = p + \sum_{i=1}^m \sum_{j=1}^{n_i}\frac{\alpha_{i,j}}{(x-\beta_i)^j}, \end{equation} where $p\in {K}[x]$, the $\beta_i$ are the roots in $\overline{K}$ of the denominator of $f$ and the $\alpha_{i,j}$ are in~$\overline{K}$. Note that the element $\alpha_{i,1}$ is the usual \emph{residue} of~$f$ at~$\beta_i$. Using Hermite reduction (\cite[p.~39]{BronsteinBook} {or Section~\ref{SUBSECT:cres} below}), one sees that a rational function $h \in K(x)$ is of the form $h = D_x(g)$ for some $g \in K(x)$ if and only if all residues of $h$ are zero. Therefore to find a telescoper for $f$ it is enough to find a nonzero operator $L \in K\langle D_t \rangle$ such that $L(f)$ has only zero residues. 
For example, assume that $f$ has only simple poles, i.e., $f = \frac{a}{b}, a,b \in K[x]$, $\deg_x a < \deg_x b$ and $b$ squarefree. We then know that the Rothstein-Trager resultant \cite{Trager1976, Rothstein1977} \[ R := {\rm resultant}_x(a-zD_x(b), b)\in K[z] \] is a polynomial whose roots are the residues at the poles of~$f$. Given a squarefree polynomial in $K[z]=k(t)[z]$, differentiation with respect to $t$ and elimination allow one to construct a nonzero linear differential operator $L \in k(t)\langle D_t \rangle$ such that $L$ annihilates the roots of this polynomial. Applying $L$ to each term of \eqref{eqn0} one sees that $L(f)$ has zero residues at each of its poles. Applying Hermite reduction to $L(f)$ allows us to find a $g$ such that $L(f) = D_x(g)$. The main idea in the method described above is that nonzero residues are the obstruction to being the derivative of a rational function, and one constructs a linear operator to remove this obstruction. This idea is the basis of results in \cite{CKS2012} where it is shown that the problem of finding differential telescopers for rational functions in~$m$ variables is equivalent to the problem of finding telescopers for algebraic functions in~$m-1$ variables and where a new algorithm for finding telescopers for algebraic functions in two variables is given. For a precise problem description, let~$k(t,x)$ be as above and~$D_t$ and~$D_x$ be the derivations defined above. We define shift operators~$S_t$ and~$S_x$ as \[S_t(f(t,x)) = f(t+1,x) \quad \text{and} \quad S_x(f(t,x)) = f(t, x+1)\] and~$q$-shift operators (for~$q\in k$ not a root of unity)~$Q_t$ and~$Q_x$ as \[Q_t(f(t,x)) = f(qt,x)\quad \text{and} \quad Q_x(f(t,x)) = f(t,qx).\] Let~$\Delta_x$ and~$\Delta_{q, x}$ denote the difference and~$q$-difference operators~$S_x-1$ and~$Q_x-1$, respectively. 
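As a toy check of the operators just defined (our illustration, not one of the paper's algorithms), the telescoping identity $1/(x(x+1)) = \Delta_x(-1/x)$ can be verified pointwise with exact rational arithmetic, with $g = -1/x$ playing the role of the certificate:

```python
from fractions import Fraction

def f(x):
    """f(x) = 1/(x(x+1)), to be exhibited as a difference Delta_x(g)."""
    return Fraction(1, x * (x + 1))

def g(x):
    """Candidate certificate g(x) = -1/x."""
    return Fraction(-1, x)

def delta_x(h, x):
    """(Delta_x h)(x) = h(x + 1) - h(x)."""
    return h(x + 1) - h(x)

# f = Delta_x(g) holds identically, so partial sums of f telescope:
assert all(delta_x(g, x) == f(x) for x in range(1, 100))
partial = sum(f(x) for x in range(1, 11))
print(partial, g(11) - g(1))  # both equal 10/11
```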
In this paper, we give a solution to the following problem \begin{center} \begin{minipage}[t]{10cm} \underline{Existence Problem for Telescopers.}\,\, {\it For any~$\partial_t\in \{D_t, S_t, Q_t\}$ and~$\partial_x\in \{D_x, \Delta_x, \Delta_{q,x}\}$ find necessary and sufficient conditions on elements~$f \in k(t,x)$ that guarantee the existence of a nonzero linear operator~$L(t,\partial_t)$ in~$\partial_t$ with coefficients in~$k(t)$ (a {\emph{telescoper}}) and an element~$g \in k(t,x)$ (a {\emph{certificate}}) such that \[L(t,\partial_t)(f) = \partial_x(g).\]} \end{minipage} \end{center} \vspace{-0.3cm} As we have shown above, when~$\partial_t = D_t$ and~$\partial_x = D_x$, a telescoper and certificate exist for any~$f\in k(t,x)$. This is not necessarily true in the other cases. In the case when~$\partial_t = S_t$ and~$\partial_x = \Delta_x$, Abramov and Le~\cite{AbramovLe2002} showed that there is no telescoper for the rational function~$1/(t^2+x^2)$ and presented a necessary and sufficient condition for the existence of telescopers. Later, Abramov gave a general criterion for the existence of telescopers for hypergeometric terms~\cite{Abramov2003}. The~$q$-analogs were achieved in the works by Le~\cite{Le2001} and by Chen et al. \cite{Chen2005}. Our approach in this paper represents a unified way of solving the Existence Problem for Telescopers (for rational functions) in these and the remaining cases. In particular, we will first identify in each case the appropriate notion of ``residues'' which will be elements of $\overline{k(t)}$, the algebraic closure of~$k(t)$. We will show that for any~$f \in k(t,x)$ and~$\partial_x \in \{D_x, \Delta_x, \Delta_{q,x}\}$, there exists a~$g \in k(t,x)$ such that~$f = \partial_x(g)$ if and only if all the ``residues'' vanish. We will then show that to find a telescoper, it is necessary and sufficient to find an operator~$L(t,\partial_t)$ that annihilates all of the residues. 
This necessary and sufficient condition has several applications. For example, our results reduce the Existence Problem for Telescopers to the problem of finding necessary and sufficient conditions that guarantee the existence of operators that annihilate algebraic functions and we present a solution to this latter problem. Our approach also gives termination criteria for the Zeilberger method~\cite{Almkvist1990, Zeilberger1990, Zeilberger1991} and also a strategy for finding telescopers and certificates, which has been successfully used in the continuous case in~\cite{CKS2012}. In addition, these criteria together with the results in~\cite{Hardouin2008, Schneider2010} can be used to determine if indefinite sums and integrals satisfy (possibly nonlinear) differential equations (see Example~\ref{EX:transcendental}). The rest of the paper is organized as follows. In Section~\ref{SECT:residues} we define the notions of residues relevant to the discrete and~$q$-discrete cases and show that for any~$f \in k(t,x)$ and~$\partial_x \in \{D_x, \Delta_x, \Delta_{q,x}\}$, there exists a~$g \in k(t,x)$ such that~$f = \partial_x g$ if and only if all the residues vanish. In Section~\ref{SECT:algfuns} we characterize those algebraic functions in~$\overline{k(t)}$ for which there exist annihilating linear operators~$L(t,S_t)$ or $L(t,Q_t)$ as well as prove some ancillary results useful in succeeding sections. In Section~\ref{SECT:telescoper}, we solve the Existence Problem for Telescopers as well as characterize when a linear operator is a telescoper. Using this latter characterization, we can give a proof, using our approach, of the theorem of Furstenberg~\cite{Furstenberg1967} stating that the diagonal of a rational power series in two variables is an algebraic function. We also discuss a recent example of Ekhad and Zeilberger~\cite{EZ2011} in the context of the results of this paper. 
The final Appendix contains proofs of the characterizations stated in Section~\ref{SECT:algfuns}. \section{Residues}\label{SECT:residues} Let~$K$ be a field of characteristic zero and~$K(x)$ be the field of rational functions in~$x$ over~$K$. Let~$\overline{K}$ denote the algebraic closure of~$K$. Let~$q\in K$ be such that~$q^i\neq 1$ for any nonzero~$i\in \bZ$, i.e., $q$ is not a root of unity. As in the Introduction, we define the derivation~$D_x$, shift operator~$S_x$, and~$q$-shift operator~$Q_x$ on~$K(x)$, respectively, as \[D_x(f(x))=\frac{d(f(x))}{dx}, \quad S_x(f(x))= f(x+1), \quad \text{and}\quad Q_x(f(x))=f(qx) \] for all~$f\in K(x)$. Let~$\Delta_x$ and~$\Delta_{q, x}$ denote the difference and~$q$-difference operators~$S_x-1$ and~$Q_x-1$, respectively. A rational function~$f\in K(x)$ is said to be \emph{rational integrable} (resp.\ \emph{summable, $q$-summable}) in~$K(x)$ if there exists~$g\in K(x)$ such that~$f=D_x(g)$ (resp.\ $f=\Delta_x(g)$, $f = \Delta_{q, x}(g)$). This section is motivated by the well-known result (Proposition~\ref{PROP:ratint} below) that characterizes rational integrability in terms of vanishing residues. In the remainder of this section we describe other types of ``residues'' and how they can be used to give necessary and sufficient conditions for summability and~$q$-summability. \subsection{Continuous residues}\label{SUBSECT:cres} Let~$f=a/b\in K(x)$ with~$a, b\in K[x]$ and~$\gcd(a, b)=1$. Then~$f$ can be uniquely written in its partial fraction decomposition \begin{equation}\label{EQ:cparfrac} f = p + \sum_{i=1}^m \sum_{j=1}^{n_i} \frac{\alpha_{i,j}}{(x-\beta_i)^j}, \end{equation} where~$p\in K[x]$, { $m, n_i\in \bN$, $\alpha_{i,j}, \beta_i\in \overline{K}$, and the~$\beta_i$'s are roots of~$b$. From any of the usual proofs of partial fraction decompositions, one sees that all the~$\alpha_{i, j}$'s are in~$K(\beta_1, \ldots, \beta_m)$. 
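When $\beta_i$ is a simple root of $b$, the coefficient $\alpha_{i,1}$ in \eqref{EQ:cparfrac} equals $a(\beta_i)/b'(\beta_i)$. The sketch below (our illustration, restricted to rational poles rather than the general algebraic setting, with helper names of our choosing) evaluates this formula for $f = 1/(x^2-1)$, whose residues at $\pm 1$ are $\pm\frac{1}{2}$:

```python
from fractions import Fraction

def polyval(coeffs, x):
    """Evaluate sum_i coeffs[i] * x**i."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def polyder(coeffs):
    """Coefficient list of the derivative polynomial."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def cres_simple(a, b, beta):
    """Residue of a/b at a *simple* root beta of b, namely a(beta)/b'(beta)."""
    db = polyder(b)
    assert polyval(b, beta) == 0 and polyval(db, beta) != 0
    return Fraction(polyval(a, beta), polyval(db, beta))

# f = 1/(x^2 - 1): simple poles at +1 and -1 with nonzero residues,
# so f is not the derivative of a rational function.
a, b = [1], [-1, 0, 1]
print(cres_simple(a, b, 1), cres_simple(a, b, -1))  # 1/2 -1/2
```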
\begin{define}[Continuous residue]\label{DEF:cres} Let~$f\in K(x)$ be of the form~\eqref{EQ:cparfrac}. The value~$\alpha_{i,1}\in \ok$ is called the \emph{continuous residue} of~$f$ at~$\beta_i$ (with respect to~$x$), denoted by~$\operatorname{cres}_x(f, \beta_i)$. \end{define} \noindent Note that the continuous residue is just the usual residue in complex analysis. We will define other kinds of residues below but when we refer to a residue without further modification, we shall mean the continuous residue. Although the following is well known (see~\cite[Proposition 2.1]{VanDerPut2001}) we include it since this result is the motivation and model for the considerations that follow. \begin{prop}\label{PROP:ratint} Let~$f=a/b\in K(x)$ be such that~$a, b\in K[x]$ and~$\gcd(a, b)=1$. Then~$f$ is rational integrable in~$K(x)$ if and only if the residue~$\operatorname{cres}_x(f, \beta)$ is zero for any root~$\beta\in \ok$ of~$b$. \end{prop} \begin{proof} Suppose that~$f$ is rational integrable in~$K(x)$, i.e.,\ $f= D_x(g)$ for some~$g$ in~$K(x)$. Writing~$g$ in its partial fraction decomposition and differentiating each term, one sees that all the residues of~$D_x(g)$ are~$0$. Conversely, if all residues of~$f$ at its poles are zero, then~$f$ can be written as \[f = p + \sum_{i=1}^m \sum_{j=2}^{n_i} \frac{\alpha_{i,j}}{(x-\beta_i)^j},\] where~$p\in K[x]$, $\alpha_{i,j}, \beta_i\in \overline{K}$, and the~$\beta_i$'s are roots of~$b$. Note that any polynomial is rational integrable in~$K(x)$, and for all~$i, j$ with~$1\leq i\leq m$ and~$2\leq j\leq n_i$, \[\frac{\alpha_{i,j}}{(x-\beta_i)^j} = D_x\left(\frac{(1-j)^{-1}\alpha_{i, j}}{(x-\beta_i)^{j-1}}\right).\] Then~$f=D_x(g)$, where~$g$ is of the form \[g = \tilde{p} + \sum_{i=1}^m \sum_{j=2}^{n_i} \frac{(1-j)^{-1}\alpha_{i, j}}{(x-\beta_i)^{j-1}} \quad \text{for some~$\tilde{p}\in K[x]$.}\] For each irreducible factor $p$ of~$b$, the sum in~$g$ is a symmetric function of those~$\beta_i$'s that are roots of~$p$. 
From this one concludes that $g$ lies in~$K(x)$. Thus,~$f$ is rational integrable in~$K(x)$. \end{proof} \subsection{Discrete residues}\label{SUBSECT:dres} Given a rational function, Matusevich~\cite{Matusevich2000} found a necessary and sufficient condition for its rational summability. Moreover, one can algorithmically decide whether a rational function is rational summable or not using methods in~\cite{Abramov1975, Abramov1989, AbramovYu1991, Abramov1995, Abramov1995b, Paule1995b, Pirastu1995a, Pirastu1995b}. Here, we present a rational summability criterion via a discrete analogue of residues. To this end, we first recall some terminology from~\cite{Abramov1975, Paule1995b} and~\cite[Chapter 2]{vdPutSinger1997}. For an element~$\alpha \in \ok$, we call the subset~$\alpha + \bZ$ the~\emph{$\bZ$-orbit} of~$\alpha$ in~$\ok$, denoted by~$[\alpha]$. For a polynomial~$b\in K[x]\setminus K$, the value \[\max\{i\in \bZ \mid \text{$\exists \, \alpha, \beta \in \ok$ such that~$i=\alpha-\beta$ and~$b(\alpha)=b(\beta)=0$}\}\] is called the~\emph{dispersion} of~$b$ with respect to~$x$, denoted by~$\operatorname{disp}_x(b)$. The polynomial~$b$ is said to be~\emph{shift-free} with respect to~$x$ if~$\operatorname{disp}_x(b)=0$. Let~$f=a/b\in K(x)$ be such that~$a, b\in K[x]$ and~$\gcd(a, b)=1$. Over the field~$\ok$, $f$ can be decomposed into the form \begin{equation}\label{EQ:dparfrac} f = p + \sum_{i=1}^m \sum_{j=1}^{n_i} \sum_{\ell=0}^{d_{i,j}} \frac{\alpha_{i,j,\ell}}{(x-(\beta_i+\ell))^{j}}, \end{equation} where~$p\in K[x]$, $m, n_i, d_{i,j}\in \bN$, $\alpha_{i, j, \ell}, \beta_i\in \ok$, and~$\beta_i$'s are in distinct $\bZ$-orbits. \begin{define}[Discrete residue]\label{DEF:dres} Let~$f\in K(x)$ be of the form~\eqref{EQ:dparfrac}. The sum~$\sum_{\ell=0}^{d_{i,j}}\alpha_{i,j, \ell}$ is called the \emph{discrete residue} of~$f$ at the $\bZ$-orbit $[\beta_i]$ of multiplicity~$j$ (with respect to~$x$), denoted by~$\operatorname{dres}_x(f, [\beta_i], j)$. 
\end{define} \begin{lemma}\label{LM:dres} Let~$f=\sum_{\ell=0}^d \alpha_{\ell}/(x-(\beta+\ell))^s$ be such that~$d, s\in \bN$ and~$\alpha_{\ell}, \beta\in \ok$. Then~$f$ is rational summable in~$\ok(x)$ if and only if the sum~$\sum_{\ell=0}^d \alpha_{\ell}$ is zero, that is, if and only if~$\operatorname{dres}_x(f, [\beta], s)=0$. \end{lemma} \begin{proof} Suppose that the sum~$\sum_{\ell=0}^d \alpha_{\ell}$ is zero. We show that~$f$ is rational summable in~$\ok(x)$. To this end, we proceed by induction on~$d$. In the base case when~$d=0$, $f$ is clearly rational summable in~$\ok(x)$ since~$f=0$. Suppose that the assertion holds for~$d=m$ with~$m\geq 0$. Note that \begin{align*} \frac{\alpha_{m+1}}{(x-(\beta+m+1))^s} = \Delta_x \left(-\frac{\alpha_{m+1}}{(x-(\beta+m+1))^s}\right) + \frac{\alpha_{m+1}}{(x-(\beta+m))^s}. \end{align*} This implies that \[\sum_{\ell=0}^{m+1} \frac{\alpha_{\ell}}{(x-(\beta+\ell))^s} = \Delta_x \left(-\frac{\alpha_{m+1}}{(x-(\beta+m+1))^s}\right) + \sum_{\ell=0}^{m} \frac{\tilde{\alpha}_{\ell}}{(x-(\beta+\ell))^s},\] where~$\tilde{\alpha}_{\ell} = \alpha_{\ell}$ if~$0\leq \ell \leq m-1$ and~$\tilde{\alpha}_{m} = \alpha_{m+1} + \alpha_{m}$. By definition, the sum~$\sum_{\ell=0}^m \tilde{\alpha}_{\ell}$ is still zero. The induction hypothesis then implies that there exists~$\tilde{g}\in \ok(x)$ such that \[\sum_{\ell=0}^{m} \frac{\tilde{\alpha}_{\ell}}{(x-(\beta+\ell))^s} = \Delta_x(\tilde{g}).\] So~$f=\Delta_x(g)$ with~$g=\tilde{g} - \alpha_{m+1}/(x-(\beta+m+1))^s\in \ok(x)$. For the opposite implication, we assume to the contrary that the sum~$\sum_{\ell=0}^d \alpha_\ell$ is nonzero. Without loss of generality, we can assume that~$\alpha_0 \neq 0$. Write~$\alpha_0 = \bar{\alpha}_0 + \tilde{\alpha}_0$ such that~$\tilde{\alpha}_0 + \sum_{\ell=1}^{d} \alpha_\ell =0$. Since~$\sum_{\ell=0}^d \alpha_\ell\neq 0$, $\bar{\alpha}_0\neq 0$.
By the assertion shown above, there exists~$\tilde{g}\in \ok(x)$ such that \[f = \frac{\bar{\alpha}_0}{(x-\beta)^s} + \Delta_x(\tilde{g}).\] Since~$\operatorname{disp}_x((x-\beta)^s)=0$ and~$\bar{\alpha}_0\neq 0$, ${\bar{\alpha}_0}/{(x-\beta)^s}$ is not rational summable by~\cite[Lemma 3]{Matusevich2000} or~\cite[Lemma 6.3]{Hardouin2008}. Then~$f$ is not rational summable in~$\ok(x)$. This completes the proof. \end{proof} \begin{prop}\label{PROP:ratsum} Let~$f=a/b\in K(x)$ be such that~$a, b\in K[x]$ and~$\gcd(a, b)=1$. Then~$f$ is rational summable in~$K(x)$ if and only if the discrete residue $\operatorname{dres}_x(f, [\beta], j)$ is zero for any $\bZ$-orbit~$[\beta]$ with~$b(\beta)=0$ of any multiplicity~$j\in \bN$. \end{prop} \begin{proof} Let~$f\in K(x)$ be decomposed into the form~\eqref{EQ:dparfrac}. If the discrete residue of~$f$ at any~$\bZ$-orbit of any multiplicity is zero, then Lemma~\ref{LM:dres} implies that for all~$i, j$ with~$1\leq i \leq m$ and~$1\leq j \leq n_i$, the sum \[\sum_{\ell=0}^{d_{i,j}} \frac{\alpha_{i,j,\ell}}{(x-(\beta_i+\ell))^{j}} = \Delta_x(g_{i,j})\quad \text{for some~$g_{i, j}\in \ok(x)$.}\] Since any polynomial is rational summable, there exists~$\tilde{p}\in K[x]$ such that~$p = \Delta_x(\tilde{p})$. So~$f = \Delta_x(\tilde{p} + g)$, where~$g=\sum_{i=1}^m\sum_{j=1}^{n_i} g_{i,j}$. Arguing as in Proposition~\ref{PROP:ratint}, one sees that for each irreducible factor $p$ of~$b$, the sum defining~$g$ is a symmetric function of those~$\beta_i$'s that are roots of~$p$. From this one concludes that~$g$ lies in~$K(x)$ and that~$f$ is rational summable in~$K(x)$. Suppose that~$f$ is rational summable in~$K(x)$, i.e., $f=\Delta_x(g)$ for some~$g\in K(x)$. Over the field~$\ok$, we decompose~$g$ into the form~\eqref{EQ:dparfrac}.
For all~$i, j$ with~$1\leq i \leq m$ and~$1\leq j \leq n_i$, the linearity of~$\Delta_x$ implies that \[\Delta_x\left(\sum_{\ell=0}^{d_{i,j}} \frac{\alpha_{i,j,\ell}}{(x-(\beta_i+\ell))^{j}} \right) = \sum_{\ell=0}^{d_{i,j}+1} \frac{\tilde{\alpha}_{i,j,\ell}}{(x-(\tilde{\beta}_i +\ell))^{j}}, \] where~$\tilde{\beta}_i = \beta_i -1$, $\tilde{\alpha}_{i,j, 0}={\alpha}_{i, j, 0}$, $\tilde{\alpha}_{i, j, d_{i,j}+1} = -\alpha_{i, j, d_{i, j}}$, and~$\tilde{\alpha}_{i, j, \ell} = \alpha_{i,j, \ell}-\alpha_{i, j, \ell-1}$ for~$1\leq \ell \leq d_{i, j}$. Then the residue~$\operatorname{dres}_x(f, [\tilde{\beta}_i], j)=\sum_{\ell=0}^{d_{i, j}+1}\tilde{\alpha}_{i,j,\ell}=0$ for all~$i, j$. This completes the proof. \end{proof} \begin{remark} Proposition~\ref{PROP:ratsum} is also known in the literature (see~\cite[Theorem 10]{Matusevich2000} or~\cite[Corollary 1]{Marshall2005}). We have recast the known proofs in our terms to show the relevance of discrete residues. \end{remark} \subsection{$q$-discrete residues}\label{SUBSECT:qres} Given a rational function, the $q$-analogue of Abramov's algorithm in~\cite{Abramov1995b} can decide whether it is rational $q$-summable or not. Here, we present a $q$-analogue of Proposition~\ref{PROP:ratsum} in terms of a $q$-discrete analogue of residues. To this end, we first recall some terminology from~\cite{Abramov1975, Abramov1989, Abramov1995b}. For an element~$\alpha \in \ok$, we call the subset~$\{\alpha \cdot q^i \mid i \in \bZ\}$ of~$\ok$ the~\emph{$q^\bZ$-orbit} of~$\alpha$ in~$\ok$, denoted by~$[\alpha]_q$. For a polynomial~$b\in K[x]\setminus K$, the value \[\max\{i\in \bZ \mid \text{$\exists$ nonzero~$\alpha, \beta \in \ok$ such that~$\alpha=q^i\cdot \beta$ and~$b(\alpha)=b(\beta)=0$}\}\] is called the~\emph{$q$-dispersion} of~$b$ with respect to~$x$, denoted by~$\operatorname{qdisp}_x(b)$. For~$b=\lambda x^n$ with~$\lambda \in K$ and~$n\in \bN\setminus\{0\}$, we define~$\operatorname{qdisp}_x(b)=+\infty$.
The polynomial~$b$ is said to be~\emph{$q$-shift-free} with respect to~$x$ if~$\operatorname{qdisp}_x(b)=0$. Let~$f=a/b\in K(x)$ be such that~$a, b\in K[x]$ and~$\gcd(a, b)=1$. Over the field~$\ok$, $f$ can be uniquely decomposed into the form \begin{equation}\label{EQ:qparfrac} f = c + xp_1 + \frac{p_2}{x^s} + \sum_{i=1}^m \sum_{j=1}^{n_i} \sum_{\ell=0}^{d_{i,j}} \frac{\alpha_{i,j,\ell}}{(x-q^\ell \cdot \beta_i)^{j}}, \end{equation} where~$c\in K$, $p_1, p_2 \in K[x]$, $m, n_i\in \bN$ are nonzero, $s, d_{i,j}\in \bN$, $\alpha_{i, j, \ell}, \beta_i\in \ok$, and~$\beta_i$'s are nonzero and in distinct $q^\bZ$-orbits. \begin{define}[$q$-discrete residue]\label{DEF:qres} Let~$f\in K(x)$ be of the form~\eqref{EQ:qparfrac}. The sum $\sum_{\ell=0}^{d_{i,j}}q^{-\ell \cdot j} \alpha_{i,j, \ell}$ is called the \emph{$q$-discrete residue} of~$f$ at the~$q^\bZ$-orbit $[\beta_i]_q$ of multiplicity~$j$ (with respect to~$x$), denoted by~$\operatorname{qres}_x(f, [\beta_i]_q, j)$. In addition, we call the constant~$c$ the \emph{$q$-discrete residue} of~$f$ at infinity, denoted by~$\operatorname{qres}_x(f, \infty)$. \end{define} We summarize some basic facts concerning rational $q$-summability in the next lemma. For a detailed proof, one can see~\cite[\S 3]{Abramov1995b}. \begin{lemma}\label{LM:ratqsum} Let~$p, p_1, p_2\in K[x]$, $c\in K$, and~$s\in \bN\setminus \{0\}$ be as in \eqref{EQ:qparfrac}. Then \begin{enumerate} \item $\deg_x(\Delta_{q, x}(p)) = \deg_x(p)$. \item If~$c$ is nonzero, then~$c$ is not rational $q$-summable in~$K(x)$. \item $f = xp_1 + p_2/x^s$ is rational $q$-summable in~$K(x)$. \end{enumerate} \end{lemma} The following lemma is a $q$-analogue of Lemma~\ref{LM:dres} and its proof proceeds in a similar way. \begin{lemma}\label{LM:qres} Let~$f=\sum_{\ell=0}^d \alpha_{\ell}/(x-q^\ell\cdot \beta )^s$ be such that~$d, s\in \bN$,~$\alpha_{\ell}, \beta\in \ok$, and~$\beta$ is nonzero.
Then~$f$ is rational $q$-summable in~$\ok(x)$ if and only if the sum~$\sum_{\ell=0}^d q^{-\ell\cdot s}\alpha_{\ell}$ is zero, that is, if and only if~$\operatorname{qres}_x(f, [\beta]_q, s)=0$. \end{lemma} \begin{proof} Suppose that the sum~$\sum_{\ell=0}^d q^{-\ell\cdot s}\alpha_{\ell}$ is zero. We show that~$f$ is rational $q$-summable in~$\ok(x)$. To this end, we proceed by induction on~$d$. In the base case when~$d=0$, $f$ is clearly rational $q$-summable since~$f=0$. Suppose that the assertion holds for~$d=m$ with~$m\geq 0$. Note that \begin{align*} \frac{\alpha_{m+1}}{(x-q^{m+1}\beta)^s} = \Delta_{q, x} \left(-\frac{\alpha_{m+1}}{(x-q^{m+1}\beta)^s}\right) + \frac{q^{-s}\alpha_{m+1}}{(x-q^m\beta)^s}. \end{align*} This implies that \[\sum_{\ell=0}^{m+1} \frac{\alpha_{\ell}}{(x-q^{\ell}\beta)^s} = \Delta_{q, x} \left(-\frac{\alpha_{m+1}}{(x-q^{m+1}\beta)^s}\right) + \sum_{\ell=0}^{m} \frac{\tilde{\alpha}_{\ell}}{(x-q^{\ell}\beta)^s},\] where~$\tilde{\alpha}_{\ell} = \alpha_{\ell}$ if~$0\leq \ell \leq m-1$ and~$\tilde{\alpha}_{m} = q^{-s}\alpha_{m+1} + \alpha_{m}$. From the definition and assumption on the~$\alpha_\ell$'s, the sum~$\sum_{\ell=0}^m q^{-\ell\cdot s}\tilde{\alpha}_{\ell}$ is zero. The induction hypothesis then implies that there exists~$\tilde{g}\in \ok(x)$ such that \[\sum_{\ell=0}^{m} \frac{\tilde{\alpha}_{\ell}}{(x-q^{\ell}\beta)^s} = \Delta_{q, x}(\tilde{g}).\] So~$f=\Delta_{q, x}(g)$ with~$g=\tilde{g} - \alpha_{m+1}/(x-q^{m+1}\beta)^s\in \ok(x)$. For the opposite implication, we assume to the contrary that the sum~$\sum_{\ell=0}^d q^{-\ell \cdot s}\alpha_\ell$ is nonzero. Without loss of generality, we can assume that~$\alpha_0 \neq 0$. Write~$\alpha_0 = \bar{\alpha}_0 + \tilde{\alpha}_0$ such that~$\tilde{\alpha}_0 + \sum_{\ell=1}^{d} q^{-\ell\cdot s}\alpha_\ell =0$. Since~$\sum_{\ell=0}^d q^{-\ell\cdot s}\alpha_\ell\neq 0$, $\bar{\alpha}_0 \neq 0$. 
By the assertion shown above, there exists~$\tilde{g}\in \ok(x)$ such that \[f = \frac{\bar{\alpha}_0}{(x-\beta)^s} + \Delta_{q, x}(\tilde{g}).\] Since~$\operatorname{qdisp}_x((x-\beta)^s)=0$ and~$\bar{\alpha}_0\neq 0$, ${\bar{\alpha}_0}/{(x-\beta)^s}$ is not rational $q$-summable by~\cite[Lemma 6.3]{Hardouin2008}. Then~$f$ is not rational $q$-summable in~$\ok(x)$. This completes the proof. \end{proof} \begin{prop}\label{PROP:ratqsum} Let~$f=a/b\in K(x)$ be such that~$a, b\in K[x]$ and~$\gcd(a, b)=1$. Then~$f$ is rational $q$-summable in~$K(x)$ if and only if the~$q$-discrete residues $\operatorname{qres}_x(f, \infty)$ and~$\operatorname{qres}_x(f, [\beta]_q, j)$ are all zero for any $q^\bZ$-orbit~$[\beta]_q$ with~$\beta \neq 0$ and~$b(\beta)=0$ of any multiplicity~$j\in \bN$. \end{prop} \begin{proof} Let~$f\in K(x)$ be decomposed into the form~\eqref{EQ:qparfrac}. If the residues of~$f$ at infinity and at any $q^\bZ$-orbit~$[\beta]_q$, $\beta \neq 0$, of any multiplicity are zero, then~$c=0$ and Lemma~\ref{LM:qres} implies that for all~$i, j$ with~$1\leq i \leq m$ and~$1\leq j \leq n_i$, the sum \[\sum_{\ell=0}^{d_{i,j}} \frac{\alpha_{i,j,\ell}}{(x-q^{\ell}\beta_i)^{j}} = \Delta_{q, x}(g_{i,j})\quad \text{for some~$g_{i, j}\in \ok(x)$.}\] Since the rational function~$xp_1 + \frac{p_2}{x^s}$ in~\eqref{EQ:qparfrac} is rational $q$-summable by Lemma~\ref{LM:ratqsum}, there exists~$u\in K(x)$ such that~$xp_1 + p_2/x^s = \Delta_{q, x}(u)$. So~$f = \Delta_{q, x}(u + g)$, where~$g=\sum_{i=1}^m\sum_{j=1}^{n_i} g_{i,j}$. As in Proposition~\ref{PROP:ratsum}, we see that~$g \in K(x)$ and therefore that $f$ is rational~$q$-summable in~$K(x)$. Suppose that~$f$ is rational $q$-summable in~$K(x)$, i.e., $f=\Delta_{q, x}(g)$ for some~$g\in K(x)$. Over the field~$\ok$, we decompose~$g$ into the form~\eqref{EQ:qparfrac}.
For all~$i, j$ with~$1\leq i \leq m$ and~$1\leq j \leq n_i$, the linearity of~$\Delta_{q, x}$ implies that \[\Delta_{q, x}\left(\sum_{\ell=0}^{d_{i,j}} \frac{\alpha_{i,j,\ell}}{(x-q^{\ell}\beta_i)^{j}} \right) = \sum_{\ell=0}^{d_{i,j}+1} \frac{\tilde{\alpha}_{i,j,\ell}}{(x-q^{\ell}\tilde{\beta}_i)^{j}}, \] where~$\tilde{\beta}_i = q^{-1}\beta_i$, $\tilde{\alpha}_{i,j, 0}=q^{-j}{\alpha}_{i, j, 0}$, $\tilde{\alpha}_{i, j, d_{i,j}+1} = -\alpha_{i, j, d_{i, j}}$, and~$\tilde{\alpha}_{i, j, \ell} = q^{-j}\alpha_{i,j, \ell}-\alpha_{i, j, \ell-1}$ for~$1\leq \ell \leq d_{i, j}$. Then the residue~$\operatorname{qres}_x(f, [\tilde{\beta}_i]_q, j)=\sum_{\ell=0}^{d_{i, j}+1}q^{-\ell\cdot j} \tilde{\alpha}_{i,j,\ell}=0$ for all~$i, j$. Since~$\Delta_{q, x}(c)=0$ for any constant~$c\in K$, the residue of~$f$ at infinity is zero. This completes the proof. \end{proof} \subsection{Residual forms}\label{SUBSECT:resform} In terms of residues, we will present a normal form of a rational function in the quotient space~$K(x)/\partial_x (K(x))$ with~$\partial_x\in \{D_x, \Delta_x, \Delta_{q, x}\}$. Let~$f\in K(x)$. If~$f$ is of the form~\eqref{EQ:cparfrac}, then we can reduce it to \[f = D_x(g) + r, \quad \text{where~$r=\sum_{i=1}^m \frac{\operatorname{cres}_x(f, \beta_i)}{x-\beta_i}$}.\] Note that~$r$ actually lies in~$K(x)$. We call such an~$r$ the \emph{residual form} of~$f$ with respect to~$D_x$. Similarly, residual forms with respect to~$\Delta_x$ and~$\Delta_{q, x}$ are respectively \[r=\sum_{i=1}^m\sum_{j=1}^{n_i} \frac{\operatorname{dres}_x(f, [\beta_i], j)}{(x-\beta_i)^j}, \quad \text{where the~$\beta_i$'s are in distinct $\bZ$-orbits},\] and \[r=c + \sum_{i=1}^m\sum_{j=1}^{n_i} \frac{\operatorname{qres}_x(f, [\beta_i]_q, j)}{(x-\beta_i)^j}, \quad \text{where~$c\in K$ and the~$\beta_i$'s are in distinct $q^\bZ$-orbits}.\] Such a residual form for a rational function is unique up to the choice of representatives of the orbits.
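As a small illustration of these criteria (our own sketch in Python/sympy; it is not one of the cited algorithms), consider $f = 1/(x(x+1))$: its poles $0$ and $-1$ lie in the single $\bZ$-orbit $[0]$, the discrete residue there of multiplicity $1$ is $1 + (-1) = 0$, so the residual form of $f$ with respect to $\Delta_x$ is $0$ and indeed $f = \Delta_x(-1/x)$:

```python
from sympy import symbols, residue, simplify

x = symbols('x')
f = 1 / (x * (x + 1))

# The poles 0 and -1 lie in the single Z-orbit [0]; for simple poles the
# discrete residue of multiplicity 1 is the sum of the ordinary residues
# taken along the orbit.
dres = residue(f, x, 0) + residue(f, x, -1)
assert dres == 0

# Vanishing discrete residue means f is rational summable: f = Delta_x(g).
g = -1 / x
assert simplify((g.subs(x, x + 1) - g) - f) == 0
```

The same computation with $f = 1/x$ alone gives a nonzero discrete residue, matching the obstruction described above.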
One can compute residual forms without introducing algebraic extensions of~$K$ by algorithms in~\cite{Hermite1872, Ostrogradsky1845, Horowitz1971, Paule1995b, Pirastu1995a, Pirastu1995b, Abramov1995b}. \section{Algebraic functions}\label{SECT:algfuns} As early as 1827, Abel observed that an algebraic function satisfies a linear differential equation with polynomial coefficients~\cite[p.\ 287]{Abel1881}. The annihilating differential equations are important in the study of algebraic functions and their series expansions~\cite{Comtet1964, Chudnovsky1986, Chudnovsky1987}. Algorithms for constructing differential annihilators for algebraic functions have been developed in~\cite{Cockle1861, Harley1862, CormierSingerTragerUlmer2002, Nahay2003, BCLSS2007}. However, not every algebraic function satisfies a linear or a~$q$-linear recurrence. In this section we characterize those algebraic functions that satisfy such equations and prove a few lemmas concerning algebraic solutions of first order linear and~$q$-linear recurrences. In the next section, we will see how this restriction on algebraic solutions of such recurrences is responsible for the essential difference between the continuous problems and the~($q$-)discrete ones. Let~$k$ be an algebraically closed field of characteristic zero. Let~$q\in k$ be such that~$q^i\neq 1$ for any~$i\in \bZ\setminus\{0\}$. Let~$k(t)$ be the field of all rational functions in~$t$ over~$k$. On the field~$k(t)$, we let~$D_t$, $S_t$, and~$Q_t$ denote the derivation, shift operator, and $q$-shift operator with respect to~$t$, respectively. Let~$k(t)\langle D_t \rangle$ (resp.\ $k(t)\langle S_t \rangle$, $k(t)\langle Q_t \rangle$) denote the ring of linear differential (resp.\ recurrence, $q$-recurrence) operators over~$k(t)$. We recall the following fact for reference later. One can find its proof in~\cite[p.\ 339]{Harley1862} or~\cite[p.\ 267]{Comtet1964}.
\begin{prop}\label{PROP:aflde} Let~$\alpha(t)$ be an element of the algebraic closure of~$k(t)$. Then there exists a nonzero operator~$L(t, D_t)\in k(t)\langle D_t\rangle$ such that~$L(\alpha)=0$. \end{prop} {As mentioned above, the situation is different if we consider the linear \linebreak ($q$-)recurrence equations for algebraic functions and the following results show that requiring an algebraic function~$f$ to satisfy such a recurrence equation severely restricts~$f$. \begin{prop}\label{PROP:afrde} Let~$\alpha(t)$ be an element in the algebraic closure of~$k(t)$. If there exists a nonzero operator~$L(t, S_t)\in k(t)\langle S_t \rangle$ such that~$L(\alpha)=0$, then~$\alpha\in k(t)$. \end{prop} \begin{prop}\label{PROP:aflqe} Let~$\alpha(t)$ be an element in the algebraic closure of~$k(t)$. If there exists a nonzero operator~$L(t, Q_t)\in k(t)\langle Q_t \rangle$ such that~$L(\alpha)=0$, then~$\alpha\in k(t^{1/n})$ for some positive integer~$n$. \end{prop} We have included complete proofs (and references to other proofs) of these results in the Appendix. In the next section, algebraic functions will appear as residues of bivariate rational functions and these functions will satisfy certain first order linear ($q$-)recurrence relations. The following lemmas characterize the form of these functions. Although these characterizations can be derived from Propositions~\ref{PROP:afrde} and~\ref{PROP:aflqe}, we will give more elementary proofs. Abusing notation, we let~$S_t$ and~$Q_t$ denote arbitrary extensions of~$S_t$ and~$Q_t$ to automorphisms of~$\overline{k(t)}$, the algebraic closure of~$k(t)$. \begin{lemma}\label{LM:const} Let~$n$ be a positive integer. \begin{enumerate} \item[(i)] If~$f\in \overline{k(t)}$ and~$S_t^n(f) = f$, then~$f \in k$. \item[(ii)] If~$f\in \overline{k(t)}$ and~$Q_t^n(f) = f$, then~$f \in k$. \item[(iii)] If~$f\in \overline{k(t)}$ and~$D_t(f)=0$, then~$f \in k$. \end{enumerate} \end{lemma} \begin{proof}~$(i)$. 
We begin by showing that if~$f\in {k(t)}$ and~$S_t^n(f) = f$, then~$f \in k$. If~$f\notin k$, then there exists an element~$a\in k$ such that~$a$ is a pole or zero of~$f$. In this case the infinite set~$\{a + in \ | \ i \in \bZ \}$ will also consist of poles or zeroes, an impossibility since~$f$ is a rational function. Now assume that~$f\in \overline{k(t)}$ and~$S_t^n(f) = f$. Let~$Y^\lambda + a_{\lambda-1} Y^{\lambda-1} + \ldots + a_0$ be the minimal polynomial of~$f$ over~$k(t)$. We then have that~$Y^\lambda + S_t^n(a_{\lambda-1}) Y^{\lambda-1} + \ldots + S_t^n(a_0)$ is also the minimal polynomial of~$f(t) = S_t^n(f(t))$. Therefore~$S_t^n(a_i) = a_i$ for all~$i = \lambda -1, \ldots , 0$. This implies that the~$a_i \in k$. Since $k$ is algebraically closed,~$f \in k$. $(ii)$. We again begin by showing that if~$f\in {k(t)}$ and~$Q_t^n(f) = f$, then~$f \in k$. Assume~$f \notin k$ and let~$a \in k$ be a nonzero pole or zero of~$f$. We then have that the set~$\{aq^{in} \ | \ i \in \bZ \}$ consists of poles or zeroes. Since~$q$ is not a root of unity, this set is infinite and we get a contradiction as before. Therefore,~$f = ct^m$ for some~$c\in k$ and~$m \in \bZ$. Since~$f(q^nt) = f(t)$, we have~$q^{nm} = 1$, which forces~$m=0$ because~$q$ is not a root of unity. Therefore~$f = c\in k$. Now assume that~$f\in \overline{k(t)}$ and~$Q_t^n(f) = f$. An argument similar to that in~$(i)$ shows that the assertion also holds in this case. $(iii)$. This assertion follows from Lemma~3.3.2~(i) of~\cite[Chapter 3]{BronsteinBook} and the assumption that~$k$ is algebraically closed. \end{proof} \begin{lemma}\label{LM:commuting} Let~$E \subset F$ be fields of characteristic zero with~$F$ algebraic over~$E$. Let~$\sigma$ be an automorphism of~$F$ such that~$\sigma(E) \subset E$ and let~$\delta$ be a derivation of~$F$ such that~$\delta(E) \subset E$.
If~$\delta\sigma(f) = \sigma\delta(f)$ for all~$f \in E$, then~$\delta\sigma(f) = \sigma\delta(f)$ for all~$f \in F$.\end{lemma} \begin{proof} One can verify that~$\sigma^{-1}\delta\sigma$ is a derivation on~$F$ such that~$\sigma^{-1}\delta\sigma(E) \subset E$. Therefore~$\sigma^{-1}\delta\sigma - \delta$ is a derivation on~$F$ that is zero on~$E$. From the uniqueness of extensions of derivations to algebraic extensions, we have that~$\sigma^{-1}\delta\sigma - \delta$ is zero on~$F$, which yields the result.\end{proof} } \begin{lemma}\label{LM:cstdq} Let~$\alpha(t)$ be an element in the algebraic closure of~$k(t)$. If there exists a nonzero~$n\in \bN$ such that~$S_t^n(\alpha)= q^m\alpha$ for some~$m\in \bZ$, then~$m=0$ and~$\alpha(t)\in k$. \end{lemma} \begin{proof} Let~$\delta = D_t$. Lemma~\ref{LM:commuting} implies that~$S_t^n\delta = \delta S_t^n$ on~$\overline{k(t)}$. Therefore,~$S_t^n(\delta \alpha) = q^m \delta\alpha$. One sees that this implies that~$S_t^n(\delta\alpha/\alpha) = \delta\alpha/\alpha$, so by Lemma~\ref{LM:const}, $\delta\alpha = c \alpha$ for some~$c \in k$. Assume that~$\alpha \notin k$ and therefore that~$\delta\alpha \neq 0$ and~$c \neq 0$. Let~$P(Y) = Y^\lambda + a_{\lambda-1} Y^{\lambda-1} + \ldots + a_0$ be the minimal polynomial of~$\alpha$ over~$k(t)$. Applying~$\delta$ to~$P(\alpha)$, one sees that \[Y^\lambda + \frac{\delta a_{\lambda-1} + (\lambda-1)c\, a_{\lambda-1}}{\lambda c} Y^{\lambda -1} + \ldots + \frac{\delta a_0}{\lambda c}\] is also the minimal polynomial of~$\alpha$ over~$k(t)$. Therefore \[ \frac{\delta a_0}{a_0} = \lambda c .\] Since~$a_0 \in k(t)$, we may write~$a_0 = d\prod(t-e_i)^{\mu_i}$, where~$d, e_i \in k, \mu_i \in \bZ$. Therefore \[\sum \frac{\mu_i}{t-e_i} = \lambda c,\] contradicting the uniqueness of partial fraction decomposition. This contradiction implies that $\alpha \in k$. From the equation~$S_t^n(\alpha)=q^m \alpha$ we get~$q^m=1$. Therefore~$m=0$ since~$q$ is not a root of unity.
\end{proof} \begin{lemma}\label{LM:intlin} Let~$\alpha(t)$ be an element in the algebraic closure of~$k(t)$. If there exists a nonzero~$n\in \bZ$ such that~$S_t^n(\alpha)-\alpha = m$ for some~$m\in \bZ$, then~$\alpha(t)=\frac{m}{n} t + c$ for some~$c\in k$. \end{lemma} \begin{proof} Let~$\beta(t) = \frac{m}{n} t$. Since~$S_t^n(\beta) - \beta = m$, we have that $S_t^n(\alpha - \beta) - (\alpha - \beta) = 0$. Therefore Lemma~\ref{LM:const} implies that~$\alpha = \beta + c = \frac{m}{n}t + c$ for some~$c \in k$. \end{proof} \begin{lemma}\label{LM:cstqd} Let~$\alpha(t)$ be an element in the algebraic closure of~$k(t)$. If there exists a nonzero~$n\in \bZ$ such that~$Q_t^n(\alpha)-\alpha=m$ for some~$m\in \bZ$, then~$m=0$ and~$\alpha(t)\in k$. \end{lemma} \begin{proof} Let~$\delta = tD_t$. One has that~$\delta Q_t = Q_t \delta$ on~$k(t)$, so Lemma~\ref{LM:commuting} implies that~$\delta Q_t = Q_t \delta$ on~$\overline{k(t)}$. We then also have~$\delta Q_t^n = Q_t^n \delta$ on~$\overline{k(t)}$, so~$Q_t^n(\delta \alpha) - \delta \alpha = 0$. Lemma~\ref{LM:const} implies~$\delta\alpha \in k$. Suppose that~$\delta \alpha = c$ for some~$c\in k$. Then~$D_t(\alpha) = c/t$. If~$\text{Tr}: k(t)(\alpha) \rightarrow k(t)$ is the trace mapping, then~$D_t(\text{Tr}(\alpha)) = \lambda c/t$ for some nonzero~$\lambda \in \bN$. By Proposition~\ref{PROP:ratint}, we have~$\lambda c =0$ and then~$c=0$. Now~$\alpha\in k$ follows from the third assertion of Lemma~\ref{LM:const}. \end{proof} \begin{lemma}\label{LM:qintlin} Let~$\alpha(t)$ be an element in the algebraic closure of~$k(t)$. If there exists a nonzero~$n\in \bZ$ such that~$Q_t^n(\alpha)= q^m\alpha$ for some~$m\in \bZ$, then~$\alpha(t)=c t^{\frac{m}{n}}$ for some~$c\in k$. \end{lemma} \begin{proof} Let~$\beta(t) = t^\frac{m}{n}$.
We then have that \[Q_t^n\left(\frac{\alpha}{\beta}\right) = \frac{\alpha}{\beta},\] so $\alpha/\beta = c \in k$ by Lemma~\ref{LM:const}, that is,~$\alpha = c t^\frac{m}{n}$.\end{proof} \section{Telescopers}\label{SECT:telescoper} In Section~\ref{SECT:residues}, we saw that nonzero residues are the obstruction to a rational function being rational integrable (resp.\ summable, $q$-summable). In this section, we consider whether we can use a linear operator, a so-called~\emph{telescoper}, to remove this obstruction if an extra parameter is available. The importance of telescopers in the study of special functions and combinatorial identities has been shown in the work by Zeilberger and his collaborators~\cite{Zeilberger1990, Almkvist1990, WilfZeilberger1990a, WilfZeilberger1990b, Wilf1992}. Let~$k(t, x)$ be the field of rational functions in~$t$ and~$x$ over~$k$. On the field~$k(t, x)$, we have derivations~$D_t, D_x$, shift operators~$S_t, S_x$, and $q$-shift operators~$Q_t, Q_x$. The linear operators used below will be in the ring~$k(t)\langle D_t \rangle$, $k(t)\langle S_t \rangle$, or~$k(t)\langle Q_t \rangle$. For a rational function~$f\in k(t, x)$, we wish to solve the Existence Problem for Telescopers stated in the Introduction, that is, we want to decide the existence of linear operators~$L(t, \partial_t)$ with~$\partial_t \in \{D_t, S_t, Q_t\}$ such that \begin{equation}\label{EQ:tele} L(t, \partial_t)(f)=\partial_x(g) \end{equation} for some~$g\in k(t, x)$ and~$\partial_x \in \{D_x, \Delta_x, \Delta_{q, x}\}$. According to the different choices of~$L$ and~$\partial_x$, we have nine types of telescopers in general, see Table~\ref{tab:ninetelepb}.
\begin{center} \renewcommand{\arraystretch}{1.2} \tabcolsep4.5pt \begin{table}[ht] \begin{tabular}{|c|c|c|c|} \hline $(L, \partial_x) $ & $D_x$ & $\Delta_x$ & $\Delta_{q, x}$ \\ \hline $k(t)\langle D_t \rangle $ & $L(t, D_t)(f) = D_x(g)$ & \underline{$L(t, D_t)(f) = \Delta_x(g)$} & \underline{$L(t, D_t)(f) = \Delta_{q, x}(g)$} \\ $k(t)\langle S_t \rangle$ & \underline{$L(t, S_t)(f) = D_x(g)$} & $L(t, S_t)(f) = \Delta_x(g)$ & \underline{$L(t, S_t)(f) = \Delta_{q, x}(g)$}\\ $k(t)\langle Q_t \rangle$ & \underline{$L(t, Q_t)(f) = D_x(g)$} & \underline{$L(t, Q_t)(f) = \Delta_x(g)$} & $L(t, Q_t)(f) = \Delta_{q, x}(g)$\\[0.05in] \hline \end{tabular} \caption{Nine different types of telescoping equations}\label{tab:ninetelepb} \end{table} \end{center} \vspace{-0.5cm} The existence problem of telescopers is related to the termination of Zeilberger-style algorithms and has been studied in~\cite{AbramovLe2002, Abramov2003, Chen2005, CCFL2010} but, to our knowledge, our results concerning telescopers of the six types underlined in the above table are new. In this section, we will present a unified way to solve this problem for rational functions by using the knowledge in the previous sections. Before the investigation of the existence of telescopers, we first present some preparatory lemmas for later use. \begin{define} Let~$\sim$ be an equivalence relation on a set~$R$ and~$\sigma: R\rightarrow R$ be a bijection. The relation~$\sim$ is said to be~\emph{$\sigma$-compatible} if \[\sigma(r_1) \sim \sigma(r_2) \, \, \Leftrightarrow \, \, r_1 \sim r_2 \quad \text{for all~$r_1, r_2\in R$.}\] \end{define} If the equivalence relation~$\sim$ is compatible with a bijection~$\sigma$ on~$R$, then a bijection on the quotient set~$R/\sim$ can be naturally induced by~$\sigma$, for which we still use the name~$\sigma$. We denote by~$[t]$ the equivalence class of~$t$ in~$R/\sim$. 
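The proof of the next proposition rests on the elementary fact that every permutation of a finite set has finite order, so some power of it is the identity. The following sympy sketch (our own illustration, not part of the paper's algorithms) checks this on a small example:

```python
from sympy.combinatorics import Permutation

# sigma is a product of a 3-cycle and a 2-cycle on {0, ..., 4}.
sigma = Permutation([2, 0, 1, 4, 3])

# Its order is the lcm of its cycle lengths, here lcm(3, 2) = 6,
# and sigma raised to that power is the identity.
s = sigma.order()
assert s == 6
assert (sigma**s).is_Identity
```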
\begin{prop}\label{PROP:periodic} Let~$\sigma: R\rightarrow R$ be a bijection and~$\sim$ be a~$\sigma$-compatible equivalence relation on the set~$R$. Let~$T=\{[t_1], \ldots, [t_n]\}\subset R/\sim$. If for any~$i\in \{1, \ldots, n\}$, there exists nonzero~$m_i\in \bN$ such that~$\sigma^{m_i}([t_i])\in T$, then there exists nonzero~$m\in \bN$ such that~$\sigma^m([t_i])=[t_i]$ for all~$i\in \{1, \ldots, n\}$. \end{prop} \begin{proof} Let~$\tilde{m}$ be the least common multiple of the~$m_i$'s. Then~$\sigma^{\tilde{m}}$ is a permutation on the finite set~$T$. Since any permutation of a finite set has finite order, there exists a nonzero~$s\in \bN$ such that~$\sigma^{\tilde{m}s}$ is the identity on~$T$. Taking~$m=\tilde{m}s$ completes the proof. \end{proof} We will specialize Proposition~\ref{PROP:periodic} to different bijections and equivalence relations. The following examples show how to perform specializations. \begin{example}\label{EX:intlin} Let~$R$ be the algebraic closure of~$k(t)$. The equivalence relation~$\sim$ on~$R$ is defined by $\alpha_1 \sim \alpha_2$ if and only if~$\alpha_1-\alpha_2\in \bZ$. We take the shift mapping~$\sigma(\alpha(t))=\alpha(t+1)$ as the bijection. Let~$T=\{[\alpha_1], \ldots, [\alpha_n]\}$ be such that for any~$i\in \{1, \ldots, n\}$, $\sigma^{m_i}([\alpha_i])\in T$ for some nonzero~$m_i\in \bN$. By Proposition~\ref{PROP:periodic}, there exists nonzero~$m\in \bN$ such that~$\sigma^m(\alpha_i)-\alpha_i \in \bZ$ for all~$i\in \{1, \ldots, n\}$. Applying Lemma~\ref{LM:intlin} to~$\alpha_i$ yields~$\alpha_i = \frac{n_i}{m} t +c_i$ for some~$n_i\in \bZ$ and~$c_i\in k$. \end{example} \begin{example}\label{EX:intqlin} Let~$R$ be the algebraic closure of~$k(t)$. The equivalence relation~$\sim$ on~$R$ is defined by $\alpha_1 \sim \alpha_2$ if and only if~$\alpha_1/\alpha_2\in q^{\bZ}$. We take the $q$-shift mapping~$\sigma(\alpha(t))=\alpha(qt)$ as the bijection.
Let~$T=\{[\alpha_1]_q, \ldots, [\alpha_n]_q\}$ be such that for any~$i\in \{1, \ldots, n\}$, $\sigma^{m_i}([\alpha_i]_q)\in T$ for some nonzero~$m_i\in \bN$. By Proposition~\ref{PROP:periodic}, there exists nonzero~$m\in \bN$ such that~$\sigma^m(\alpha_i)/\alpha_i \in q^{\bZ}$ for all~$i\in \{1, \ldots, n\}$. Applying Lemma~\ref{LM:qintlin} to~$\alpha_i$ yields~$\alpha_i = c_i t^{{n_i}/{m}}$ for some~$n_i\in \bZ$ and~$c_i\in k$. \end{example} \subsection{Existence of telescopers}\label{SUBSECT:existence} The first result about the existence of telescopers was shown by Zeilberger in~\cite{Zeilberger1990} based on the theory of holonomic D-modules. In the following, we will study the existence problems from the residual point of view. For rational functions, the existence of telescopers is related to the properties of residues and the commutativity between the residue mappings and linear operators. Starting from the simplest, we consider the telescoping relation~$L(t, D_t)(f)=D_x(g)$ for a given rational function~$f\in k(t, x)$. Given~$\beta\in \overline{k(t)}$, view the residue mapping~$\operatorname{cres}_x(\underline{ \ \ }, \beta)$ as a $\overline{k(t)}$-linear transformation from~$\overline{k(t)}(x)$ to~$\overline{k(t)}$. For any~$\alpha, \beta\in \overline{k(t)}$, we have \[D_t\left(\frac{\alpha}{x-\beta}\right) = \frac{D_t(\alpha)}{x-\beta} + \frac{\alpha D_t(\beta)}{(x-\beta)^2}.\] Then~$\operatorname{cres}_x(D_t(f), \beta) = D_t(\operatorname{cres}_x(f, \beta))$ for any~$f\in \overline{k(t)}(x)$ and~$\beta\in \overline{k(t)}$. Assume that~$f=a/b$ with~$a, b \in k[t, x]$ and~$\gcd(a, b)=1$. Let~$\beta_1, \ldots, \beta_m$ be the roots of~$b$ in~$\overline{k(t)}$. For each root~$\beta_i$, the continuous residue $\operatorname{cres}_x(f, \beta_i) \in \overline{k(t)}$ is annihilated by a linear differential operator~$L_{i}\in k(t)\langle D_t \rangle$ by Proposition~\ref{PROP:aflde}. Let~$L(t, D_t)$ be the least common left multiple (LCLM) of the~$L_i$'s.
Then we have~$L(\operatorname{cres}_x(f, \beta_i))= \operatorname{cres}_x(L(f), \beta_i)=0$ for all~$i$ with~$1\leq i \leq m$. So~$L(f)$ is rational integrable with respect to~$x$ by Proposition~\ref{PROP:ratint}. In summary, we have the following theorem. \begin{theorem}\label{THM:cc} For any~$f\in k(t, x)$, there exists a nonzero operator~$L\in k(t)\langle D_t \rangle$ such that~$L(f)= D_x(g)$ for some~$g\in k(t, x)$. \end{theorem} However, the situation in other cases turns out to be more involved. For the rational function~$f=1/(t^2+x^2)$, Abramov and Le~\cite{Le2001, AbramovLe2002} showed that there is no telescoper~$L$ in~$k(t)\langle S_t \rangle$ such that~$L(f)=\Delta_x(g)$ for any~$g\in k(t, x)$. In other cases, there are two main reasons for non-existence: one is the non-commutativity between the linear operators~$\partial_t\in \{D_t, S_t, Q_t\}$ and the residue mappings; the other is that not all algebraic functions satisfy linear ($q$-)recurrence relations. So it is natural that a rational function must have a special form if a telescoper exists. Let~$f\in k(t, x)$ and~$\partial_x\in \{D_x, \Delta_x, \Delta_{q, x}\}$. Then~$f = \partial_x(g) + r$ with~$g, r\in k(t, x)$ and~$r$ being the residual form of~$f$ with respect to~$\partial_x$ {(see Section~\ref{SUBSECT:resform})}. Since linear operators~$L(t, \partial_t)$ with~$\partial_t\in \{D_t, S_t, Q_t\}$ commute with the linear operator~$\partial_x\in \{D_x, \Delta_x, \Delta_{q, x}\}$, a rational function has a telescoper if and only if its residual form does. From now on, we always assume that the given rational function is in its residual form. We will also use the fact~\cite[Lemma 1]{AbramovLe2002} that the sum~$f_1+f_2$ has a telescoper if both~$f_1$ and~$f_2$ do. To be more precise, if~$L_1, L_2$ are telescopers for~$f_1, f_2$, respectively, then the LCLM of~$L_1, L_2$ is a telescoper for~$f_1+f_2$.
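To make the construction behind Theorem~\ref{THM:cc} concrete, take $f = 1/(x^2-t)$: its continuous residues at $x=\pm\sqrt{t}$ are $\pm 1/(2\sqrt{t})$, both annihilated by $L = 2tD_t + 1$, and one checks that $L(f) = D_x(-x/(x^2-t))$. The following sympy sketch (our own worked instance, with the operator found by hand from the residues) verifies this:

```python
from sympy import symbols, sqrt, diff, residue, simplify

x = symbols('x')
t = symbols('t', positive=True)
f = 1 / (x**2 - t)

# Continuous residues of f at its poles x = sqrt(t) and x = -sqrt(t).
r_plus = residue(f, x, sqrt(t))
r_minus = residue(f, x, -sqrt(t))
assert simplify(r_plus - 1 / (2 * sqrt(t))) == 0
assert simplify(r_minus + 1 / (2 * sqrt(t))) == 0

# Both residues satisfy 2*t*y' + y = 0, so L = 2*t*D_t + 1 kills them ...
for r in (r_plus, r_minus):
    assert simplify(2 * t * diff(r, t) + r) == 0

# ... and L is then a telescoper: L(f) = D_x(g) with g = -x/(x**2 - t).
Lf = 2 * t * diff(f, t) + f
g = -x / (x**2 - t)
assert simplify(Lf - diff(g, x)) == 0
```

Here the certificate $g$ was also found by hand; in general one obtains it from Hermite reduction of $L(f)$.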
\subsubsection{Telescopers with respect to~$D_x$}\label{SUBSECT:existencediff} Let~$f\in k(t, x)$ be a residual form, that is, \begin{equation}\label{EQ:cresform} f = \sum_{i=1}^m \frac{\alpha_i}{x-\beta_i}, \quad \text{where~$\alpha_i, \beta_i\in \overline{k(t)}$ and the~$\beta_i$ are pairwise distinct.} \end{equation} \begin{theorem}\label{THM:dc} Let~$f\in k(t, x)$ be as in~\eqref{EQ:cresform}. Then~$f$ has a telescoper~$L$ in~$k(t)\langle S_t\rangle$ such that~$L(t, S_t)(f)= D_x(g)$ for some~$g\in k(t, x)$ if and only if all the~$\beta_i$ are in~$k$. \end{theorem} \begin{proof} Suppose that there exists a nonzero~$L\in k(t)\langle S_t\rangle$ such that~$L(t, S_t)(f)=D_x(g)$ for some~$g\in k(t, x)$. Write~$L=\sum_{\ell=0}^{\rho} e_{\ell}S_t^{\ell}$ with~$e_{\ell}\in k(t)$ and~$e_\rho = 1$. Then \[L(f) = \sum_{\ell=0}^{\rho} \sum_{i=1}^m \frac{e_{\ell}S_t^{\ell}(\alpha_i)}{x-S_t^{\ell}(\beta_i)}.\] Let~$\ell_0$ be the smallest index in~$\{0, 1, \ldots, \rho\}$ such that~$e_{\ell_0}\neq 0$. Since~$L(f)$ is rational integrable in~$k(t, x)$ with respect to~$D_x$, all residues of~$L(f)$ are zero by Proposition~\ref{PROP:ratint}. In particular, the set~$T=\{S_t^{\ell_0}(\beta_1), \ldots, S_t^{\ell_0}(\beta_m)\}$ satisfies the property that for any~$i\in \{1, \ldots, m\}$, there exists nonzero~$m_i\in \bN$ such that~$S_t^{\ell_0+m_i}(\beta_i)\in T$. By taking equality as the equivalence relation and the shift mapping as the bijection in Proposition~\ref{PROP:periodic}, there exists nonzero~$m\in \bN$ such that~$S_t^{\ell_0+m}(\beta_i)= \beta_i$ for all~$i\in \{1, \ldots, m\}$. By Lemma~\ref{LM:const}~(i) and the assumption that~$k$ is algebraically closed, all the~$\beta_i$ are in~$k$. For the opposite implication, it suffices to show that each fraction~$\alpha_i/(x-\beta_i)$ with~$\beta_i\in k$ has a telescoper in~$k(t)\langle S_t\rangle$. According to the process of partial fraction decomposition,~$\alpha_i\in k(t)(\beta_i)$ for any~$i$ with~$1\leq i\leq m$.
Then~$\alpha_i\in k(t)$, which is annihilated by the operator~$L_i = S_t - {\alpha_i(t+1)}/{\alpha_i(t)}$. Moreover, $L_i(\alpha_i/(x-\beta_i))=L_i(\alpha_i)/(x-\beta_i)=0$. So the LCLM of the~$L_i$'s is a telescoper for~$f$. This completes the proof. \end{proof} \begin{theorem}\label{THM:qc} Let~$f\in k(t, x)$ be as in~\eqref{EQ:cresform}. Then~$f$ has a telescoper~$L$ in~$k(t)\langle Q_t\rangle$ such that~$L(t, Q_t)(f)= D_x(g)$ for some~$g\in k(t, x)$ if and only if all the~$\beta_i$ are in~$k$. \end{theorem} \begin{proof} The proof proceeds in a similar way as above, replacing~$S_t$ by~$Q_t$ and Lemma~\ref{LM:const}~(i) by~Lemma~\ref{LM:const}~(ii). \end{proof} \begin{example} Let~$f=1/(x+t)$. Since the root of~$x+t$ in~$\overline{k(t)}$ is~$-t$, which is not in~$k$, $f$ has no telescoper in either~$k(t)\langle S_t\rangle$ or~$k(t)\langle Q_t\rangle$ with respect to~$D_x$ by Theorems~\ref{THM:dc} and~\ref{THM:qc}. \end{example} \subsubsection{Telescopers with respect to~$\Delta_x$}\label{SUBSECT:existenceshift} Let~$f\in k(t, x)$ be of the form \begin{equation}\label{EQ:dresform} f = \sum_{i=1}^m\sum_{j=1}^{n_i} \frac{\alpha_{i, j}}{(x-\beta_i)^j}, \end{equation} where~$\alpha_{i,j}, \beta_i\in \overline{k(t)}$, $\alpha_{i, n_i}\neq 0$, and the~$\beta_i$ are in distinct~$\bZ$-orbits. \begin{theorem}\label{THM:cd} Let~$f\in k(t, x)$ be as in~\eqref{EQ:dresform}. Then~$f$ has a telescoper~$L$ in~$k(t)\langle D_t\rangle$ such that~$L(t, D_t)(f)= \Delta_x(g)$ for some~$g\in k(t, x)$ if and only if all the~$\beta_i$ are in~$k$. \end{theorem} \begin{proof} Suppose that there exists a nonzero~$L\in k(t)\langle D_t\rangle$ such that~$L(t, D_t)(f)=\Delta_x(g)$ for some~$g\in k(t, x)$. Write~$L=\sum_{\ell=0}^{\rho} e_{\ell}D_t^{\ell}$ with~$e_{\ell}\in k(t)$ and~$e_\rho = 1$.
By induction on~$\ell$, we get \[D_t^{\ell}\left(\frac{\alpha_{i, n_i}}{(x-\beta_i)^{n_i}}\right) = \frac{(n_i)_{\ell}\alpha_{i, n_i}(D_t(\beta_i))^{\ell}}{(x-\beta_i)^{n_i+\ell}} + \, \, \text{lower terms,}\] where~$(n_i)_{\ell} = n_i (n_i+1) \cdots (n_i+\ell-1)$. Then we have \[L(f) = \sum_{i=1}^m \frac{(n_i)_{\rho}\alpha_{i, n_i}(D_t(\beta_i))^{\rho}}{(x-\beta_i)^{n_i+\rho}} +\,\, \text{lower terms.}\] Since~$L(f)$ is rational summable with respect to~$\Delta_x$ and the~$\beta_i$ are in distinct~$\bZ$-orbits, we get~$(n_i)_{\rho}\alpha_{i, n_i}(D_t(\beta_i))^{\rho}=0$ for all~$i\in \{1, \ldots, m\}$ by Proposition~\ref{PROP:ratsum}. Since~$\alpha_{i, n_i}\neq 0$ and~$(n_i)_{\rho}>0$, $D_t(\beta_i)=0$, which implies that~$\beta_i\in k$ by Lemma~\ref{LM:const}~(iii). For the opposite implication, the proof is similar to that of Theorem~\ref{THM:dc}. Let~$L_{i,j}$ be the operator~$D_t - D_t(\alpha_{i, j})/\alpha_{i,j}\in k(t)\langle D_t \rangle$. Then the LCLM of the~$L_{i, j}$ is a telescoper for~$f$ with respect to~$\Delta_x$. \end{proof} \begin{example}\label{EX:transcendental} Let \[f = \frac{1}{x^2 - t} =\frac{1}{2\sqrt{t}}\left(\frac{1}{x - \sqrt{t}} - \frac{1}{x +\sqrt{t}}\right).\] Note that $f$ is already in residual form with respect to~$\Delta_x$. By Theorem~\ref{THM:cd}, there is no linear differential operator~$L(t,D_t) \in k(t)\langle D_t\rangle$ and $g \in k(t,x)$ such that $L(t,D_t)f = \Delta_x(g)$. Furthermore, Proposition~3.1 in~\cite{Hardouin2008} and a descent argument similar to that given in the proof of Corollary~3.2 of~\cite{Hardouin2008} (or Section 1.2.1 of~\cite{DH2011}) imply that the sum \[F(t,x) = \sum_{i=1}^{x -1}\frac{1}{i^2-t} \ \ \mbox { (satisfying~$S_x(F) - F = f$) }\] satisfies no polynomial differential equation $P(t,x,F,D_tF,D_t^2F, \ldots ) = 0$. \end{example} The following theorem is the same as~\cite[Theorem 1]{AbramovLe2002}. We give an alternative proof using the knowledge developed in previous sections.
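Before stating it, the following small worked instance of the shift case may be helpful (this example is not part of the original text). In contrast with Example~\ref{EX:transcendental}, the function
\[f = \frac{1}{x-t}\]
does admit a telescoper in~$k(t)\langle S_t\rangle$ with respect to~$\Delta_x$: here~$\beta_1 = t = r_1 t + c_1$ with~$r_1 = 1\in\bQ$ and~$c_1 = 0\in k$, and indeed
\[(S_t - 1)(f) = \frac{1}{x-t-1} - \frac{1}{x-t} = \Delta_x(g) \quad \text{with} \quad g = -\frac{1}{x-t-1},\]
as one checks directly from~$\Delta_x(g) = g(x+1)-g(x)$.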
\begin{theorem}\label{THM:dd} Let~$f\in k(t, x)$ be as in~\eqref{EQ:dresform}. Then~$f$ has a telescoper~$L$ in~$k(t)\langle S_t\rangle$ such that~$L(t, S_t)(f)= \Delta_x(g)$ for some~$g\in k(t, x)$ if and only if, for each~$i$, $\beta_i=r_i t +c_i$ with~$r_i\in \bQ$ and~$c_i\in k$. \end{theorem} \begin{proof} Suppose that there exists a nonzero~$L\in k(t)\langle S_t\rangle$ such that~$L(t, S_t)(f)=\Delta_x(g)$ for some~$g\in k(t, x)$. Write~$L=\sum_{\ell=0}^{\rho} e_{\ell}S_t^{\ell}$ with~$e_{\ell}\in k(t)$ and~$e_0\neq 0$. For any~$\lambda \in \{1, \ldots, m\}$, we consider the rational function \[f_{\lambda} = \sum_{i=1}^m \frac{\alpha_{i, n_{\lambda}}}{(x-\beta_i)^{n_{\lambda}}},\quad \text{where~$\alpha_{\lambda, n_{\lambda}}\neq 0$ by assumption}.\] Without loss of generality, we may assume that the other~$\alpha_{i, n_{\lambda}}$ with~$i\neq \lambda$ are also nonzero. Since the shift operators~$S_t, S_x$ preserve the multiplicity, we have~$L(f_{\lambda})=\Delta_x(g_{\lambda})$ for some~$g_{\lambda}\in k(t, x)$. By Proposition~\ref{PROP:ratsum}, all the residues of~$L(f_{\lambda})$ are zero. We now use the notation and analysis of Example~\ref{EX:intlin}. We see that the set~$T=\{[\beta_1], \ldots, [\beta_m]\}$ satisfies the property that for any~$i \in \{1, \ldots, m\}$, there exists a nonzero~$m_i$ such that~$S_t^{m_i}([\beta_i])\in T$. As in Example~\ref{EX:intlin}, we conclude that~$\beta_i=\frac{p_i}{N} t +c_i$ with~$p_i, N\in \bZ$ and~$c_i\in k$. The opposite implication follows from the fact that the linear operator \[L_{i, j}=\alpha_{i, j}(t)S_t^{N}-\alpha_{i,j}(t+N)\] is a telescoper for the fraction~$f_{i, j} = \alpha_{i, j}/(x-(\frac{p_i}{N} t +c_i))^j$ with respect to~$\Delta_x$ since~$\operatorname{dres}(L_{i, j}(f_{i, j}), [\frac{p_i}{N} t +c_i], j)=0$. Then the LCLM of the~$L_{i, j}$ is a telescoper for~$f$ with respect to~$\Delta_x$. \end{proof} \begin{theorem}\label{THM:qd} Let~$f\in k(t, x)$ be as in~\eqref{EQ:dresform}.
Then~$f$ has a telescoper~$L$ in~$k(t)\langle Q_t\rangle$ such that~$L(t, Q_t)(f)= \Delta_x(g)$ for some~$g\in k(t, x)$ if and only if all the~$\beta_i$ are in~$k$. \end{theorem} \begin{proof} Suppose that there exists a nonzero~$L\in k(t)\langle Q_t\rangle$ such that~$L(t, Q_t)(f)=\Delta_x(g)$ for some~$g\in k(t, x)$. Write~$L=\sum_{\ell=0}^{\rho} e_{\ell}Q_t^{\ell}$ with~$e_{\ell}\in k(t)$ and~$e_0\neq 0$. For any~$\lambda \in \{1, \ldots, m\}$, we consider the rational function \[f_{\lambda} = \sum_{i=1}^m \frac{\alpha_{i, n_{\lambda}}}{(x-\beta_i)^{n_{\lambda}}},\quad \text{where~$\alpha_{\lambda, n_{\lambda}}\neq 0$ by assumption}.\] Without loss of generality, we may assume that the other~$\alpha_{i, n_{\lambda}}$ with~$i\neq \lambda$ are also nonzero. Since the operators~$Q_t, S_x$ preserve the multiplicity, we have~$L(f_{\lambda})=\Delta_x(g_{\lambda})$ for some~$g_{\lambda}\in k(t, x)$. By Proposition~\ref{PROP:ratsum}, all the residues of~$L(f_{\lambda})$ are zero. We shall again use the reasoning and notation in Example~\ref{EX:intlin}, where~$[\,\cdot\,]$ denotes the equivalence class under the equivalence relation on~$\overline{k(t)}$ given by~$\alpha_1\sim \alpha_2$ if~$\alpha_1-\alpha_2\in \bZ$. In particular, the set~$T=\{[\beta_1], \ldots, [\beta_m]\}$ satisfies the property that for any~$i \in \{1, \ldots, m\}$, there exists a nonzero~$m_i$ such that~$Q_t^{m_i}([\beta_i])\in T$. Taking the $q$-shift mapping~$Q_t$ as the bijection, Proposition~\ref{PROP:periodic} and Lemma~\ref{LM:cstqd} imply that~$\beta_i\in k$ for all~$i$ with~$1\leq i \leq m$. The opposite implication follows from the fact that the linear operator \[L_{i, j}=\alpha_{i, j}(t)Q_t-\alpha_{i,j}(qt)\] is a telescoper for the fraction~$f_{i, j} = \alpha_{i, j}/(x-\beta_i)^j$ with respect to~$\Delta_x$ since $\operatorname{dres}(L_{i, j}(f_{i, j}), [\beta_i], j)=0$. Then the LCLM of the~$L_{i, j}$ is a telescoper for~$f$ with respect to~$\Delta_x$.
\end{proof} \subsubsection{Telescopers with respect to~$\Delta_{q, x}$}\label{SUBSECT:existenceqshift} Let~$f\in k(t, x)$ be of the form \begin{equation}\label{EQ:qresform} f = c + \sum_{i=1}^m\sum_{j=1}^{n_i} \frac{\alpha_{i, j}}{(x-\beta_i)^j}, \end{equation} where~$c\in k(t)$, $\alpha_{i,j}, \beta_i\in \overline{k(t)}$, $\alpha_{i, n_i}\neq 0$, and the~$\beta_i$ are in distinct~$q^\bZ$-orbits. \begin{theorem}\label{THM:cq} Let~$f\in k(t, x)$ be as in~\eqref{EQ:qresform}. Then~$f$ has a telescoper~$L$ in~$k(t)\langle D_t\rangle$ such that~$L(t, D_t)(f)= \Delta_{q, x}(g)$ for some~$g\in k(t, x)$ if and only if all the~$\beta_i$ are in~$k$. \end{theorem} \begin{proof} The proof proceeds in the same way as that in Theorem~\ref{THM:cd}. \end{proof} \begin{theorem}\label{THM:dq} Let~$f\in k(t, x)$ be as in~\eqref{EQ:qresform}. Then~$f$ has a telescoper~$L$ in~$k(t)\langle S_t\rangle$ such that~$L(t, S_t)(f)= \Delta_{q, x}(g)$ for some~$g\in k(t, x)$ if and only if all the~$\beta_i$ are in~$k$. \end{theorem} \begin{proof} Suppose that there exists a nonzero~$L\in k(t)\langle S_t\rangle$ such that~$L(t, S_t)(f)=\Delta_{q, x}(g)$ for some~$g\in k(t, x)$. Write~$L=\sum_{\ell=0}^{\rho} e_{\ell}S_t^{\ell}$ with~$e_{\ell}\in k(t)$ and~$e_0\neq 0$. For any~$\lambda \in \{1, \ldots, m\}$, we consider the rational function \[f_{\lambda} = \sum_{i=1}^m \frac{\alpha_{i, n_{\lambda}}}{(x-\beta_i)^{n_{\lambda}}},\quad \text{where~$\alpha_{\lambda, n_{\lambda}}\neq 0$ by assumption} .\] Without loss of generality, we may assume that the other~$\alpha_{i, n_{\lambda}}$ with~$i\neq \lambda$ are also nonzero. Since the operators~$S_t, Q_x$ preserve the multiplicity, we have~$L(f_{\lambda})=\Delta_{q, x}(g_{\lambda})$ for some~$g_{\lambda}\in k(t, x)$. By Proposition~\ref{PROP:ratqsum}, all the residues of~$L(f_{\lambda})$ are zero. We now use the reasoning and notation in Example~\ref{EX:intqlin}. 
In particular, the set~$T=\{[\beta_1]_q, \ldots, [\beta_m]_q\}$ satisfies that for any~$i \in \{1, \ldots, m\}$, there exists a nonzero~$m_i$ such that~$S_t^{m_i}([\beta_i]_q)\in T$. Taking the shift mapping~$S_t$ as bijection, Proposition~\ref{PROP:periodic} and Lemma~\ref{LM:cstdq} imply that~$\beta_i\in k$ for all~$i$ with~$1\leq i \leq m$. The opposite implication follows from the fact that~$c(t)$ is annihilated by the operator~$L_0 =c(t)S_t - c(t+1)$ and the linear operator \[L_{i, j}=\alpha_{i, j}(t)S_t-\alpha_{i,j}(t+1)\] is a telescoper for the fraction~$f_{i, j} = \alpha_{i, j}/(x-\beta_i)^j$ with respect to~$\Delta_{q, x}$ since $\operatorname{dres}(L_{i, j}(f_{i, j}), [\beta_i]_q, j)=0$. Then the LCLM of the~$L_0$ and~$L_{i, j}$ is a telescoper for~$f$ with respect to~$\Delta_{q, x}$. \end{proof} The following theorem is a $q$-analogue of Theorem~\ref{THM:dd}, which has also been shown in~\cite[Theorem 1]{Le2001}. \begin{theorem}\label{THM:qq} Let~$f\in k(t, x)$ be as in~\eqref{EQ:qresform}. Then~$f$ has a telescoper~$L$ in~$k(t)\langle Q_t\rangle$ such that~$L(t, Q_t)(f)= \Delta_{q, x}(g)$ for some~$g\in k(t, x)$ if and only if all the~$\beta_i=c_i t^{r_i}$ with~$r_i\in \bQ$ and~$c_i\in k$. \end{theorem} \begin{proof} Suppose that there exists a nonzero~$L\in k(t)\langle Q_t\rangle$ such that~$L(t, Q_t)(f)=\Delta_{q, x}(g)$ for some~$g\in k(t, x)$. Write~$L=\sum_{\ell=0}^{\rho} e_{\ell}Q_t^{\ell}$ with~$e_{\ell}\in k(t)$ and~$e_0\neq 0$. For any~$\lambda \in \{1, \ldots, m\}$, we consider the rational function \[f_{\lambda} = \sum_{i=1}^m \frac{\alpha_{i, n_{\lambda}}}{(x-\beta_i)^{n_{\lambda}}},\quad \text{where~$\alpha_{\lambda, n_{\lambda}}\neq 0$ by assumption} .\] Without loss of generality, we may assume that the other~$\alpha_{i, n_{\lambda}}$ with~$i\neq \lambda$ are also nonzero. Since the $q$-shift operators~$Q_t, Q_x$ preserve the multiplicity, we have~$L(f_{\lambda})=\Delta_{q, x}(g_{\lambda})$ for some~$g_{\lambda}\in k(t, x)$. 
By Proposition~\ref{PROP:ratqsum}, all the residues of~$L(f_{\lambda})$ are zero. In particular, the set~$T=\{[\beta_1]_q, \ldots, [\beta_m]_q\}$ satisfies the property that for any~$i \in \{1, \ldots, m\}$, there exists a nonzero~$m_i$ such that~$Q_t^{m_i}([\beta_i]_q)\in T$. By the analysis in Example~\ref{EX:intqlin}, we conclude that~$\beta_i=c_it^{{p_i}/{N}}$ with~$p_i, N\in \bZ$ and~$c_i\in k$. The opposite implication follows from the fact that~$c(t)$ is annihilated by the operator~$L_0 =c(t)Q_t - c(qt)$ and the linear operator \[L_{i, j}=\alpha_{i, j}(t)Q_t^{N}-q^{-jp_i}\alpha_{i,j}(q^Nt)\] is a telescoper for the fraction~$f_{i, j} = \alpha_{i, j}/(x-(c_it^{{p_i}/{N}}))^j$ with respect to~$\Delta_{q, x}$ since~$\operatorname{qres}(L_{i, j}(f_{i, j}), [c_it^{{p_i}/{N}}]_q, j)=0$. Then the LCLM of~$L_0$ and the~$L_{i, j}$ is a telescoper for~$f$ with respect to~$\Delta_{q, x}$. \end{proof} The necessary and sufficient conditions for the existence of telescopers enable us to decide the termination of the Zeilberger algorithm for rational-function inputs. After reducing the given rational function into a residual form, one can detect the existence by investigating the denominator. For instance, we could check whether the denominator factors into a product of two univariate polynomials, one in~$t$ and one in~$x$, in the case when~$\partial_t=D_t$ and~$\partial_x=\Delta_x$. Combining the existence criteria with the Zeilberger algorithm yields a complete algorithm for creative telescoping with rational-function inputs. \subsection{Characterization of telescopers}\label{SUBSECT:chartele} We have shown that telescopers exist for a special class of rational functions. Now, we will characterize the linear differential and ($q$-)recurrence operators that could be telescopers for rational functions.
Using such a characterization, we will give a direct algebraic proof of a theorem of Furstenberg stating that the diagonal of a rational power series in two variables is algebraic~\cite{Furstenberg1967}. In all of these considerations, residues are still the key. For a rational function~$f\in k(t, x)$, all of the telescopers for~$f$ in~$k(t)\langle D_t\rangle$ form a left ideal in~$k(t)\langle D_t\rangle$, denoted by~$\mathcal{T}_f$. Since the ring~$k(t)\langle D_t\rangle$ is a left Euclidean domain, the monic telescoper of minimal order generates the left ideal~$\mathcal{T}_f$, and we call this generator~\emph{the minimal telescoper} for~$f$. \begin{theorem}\label{THM:telecc} Let~$L(t, D_t)$ be a linear differential operator in~$k(t)\langle D_t\rangle$. Then~$L$ is a telescoper for some~$f\in k(t, x)\setminus D_x(k(t, x))$ such that~$L(f)= D_x(g)$ with~$g\in k(t, x)$ if and only if $L(y(t))=0$ has a nonzero solution algebraic over~$k(t)$. Moreover, if~$L$ is the minimal telescoper for~$f$, then all solutions of~$L(y(t))=0$ are algebraic over~$k(t)$. \end{theorem} \begin{proof} Suppose that there exists~$f\in k(t, x)\setminus D_x(k(t, x))$ such that~$L(f)= D_x(g)$ for some~$g\in k(t, x)$. Since~$f$ is not rational integrable with respect to~$x$, $f$ has a nonzero residue by Proposition~\ref{PROP:ratint}. Since~$L$ is a telescoper for~$f$ with respect to~$D_x$, $L$ vanishes at all residues of~$f$. So~$L(y(t))=0$ has a nonzero algebraic solution in~$\overline{k(t)}$ because any residue of a rational function in~$k(t, x)$ is algebraic over~$k(t)$. Conversely, if~$\alpha\in \overline{k(t)}$ is a nonzero algebraic solution of~$L(y(t))=0$ with minimal polynomial~$P\in k[t, x]$, then~$L$ is a telescoper for the rational function~$f=xD_x(P)/P$ with respect to~$D_x$. Let~$a/b\in k(t, x)$ be the residual form of~$f$ with respect to~$D_x$. All of the residues of~$a/b$ are roots of the polynomial~$R(t, z) = \mbox{resultant}_x(b, a-zD_x(b))\in k(t)[z]$. 
By the method in~\cite[\S 2]{CormierSingerTragerUlmer2002}, one can construct the minimal operator~$L_R$ in~$k(t)\langle D_t\rangle$ such that~$L_R(\alpha(t))=0$ for every root~$\alpha$ of~$R$ in~$\overline{k(t)}$. Moreover, the solution space of~$L_R$ is spanned by the roots of~$R$. Since~$L_R$ vanishes at all residues of~$f$, $L_R$ is a telescoper for~$f$. If~$L$ is the minimal telescoper for~$f$, then~$L$ divides~$L_R$ on the right. Thus, all solutions of~$L(y(t))=0$ are solutions of~$L_R(y(t))=0$, and therefore algebraic over~$k(t)$. \end{proof} The diagonal~$\operatorname{diag}(f)$ of a formal power series~$f=\sum_{i, j\geq 0} f_{i, j}t^ix^j\in k[[t, x]]$ is defined by \[\operatorname{diag}(f) = \sum_{i\geq 0} f_{i, i} t^i\in k[[t]].\] Using the characterization of telescopers in Theorem~\ref{THM:telecc}, we now give a proof of a theorem of Furstenberg that the diagonal of a rational power series in two variables is algebraic~\cite{Furstenberg1967}. For other proofs, see the papers~\cite{Fliess1974, Gessel1980, Haiman1993} and Stanley's book~\cite[Theorem 6.3.3]{Stanley1999}. Let~$\mathcal {F}=k((x))$ be the quotient field of~$k[[x]]$ and~$\mathcal {F}[[t]]$ be the ring of formal power series over~$\mathcal {F}$. We use the notation~$[x^{-1}](a)$ to denote the coefficient of~$x^{-1}$ in~$a\in \mathcal {F}$. For a formal power series~$g=\sum_{i\geq 0} a_i(x)t^i \in \mathcal {F}[[t]]$, we define \[[x^{-1}](g)=\sum_{i\geq 0} ([x^{-1}](a_i))t^i\in k[[t]],\] and two derivations \[D_t(g) = \sum_{i\geq 0} i a_i(x) t^{i-1}, \quad D_x(g) = \sum_{i\geq 0} D_x(a_i)t^{i}.\] The ring~$\mathcal {F}[[t]]$ then becomes a $k[t, x]\langle D_t, D_x\rangle$-module. By definition, we have \[[x^{-1}](D_t(g)) = D_t([x^{-1}](g)) \quad \text{and} \quad [x^{-1}](t^i(g)) = t^i([x^{-1}](g)) \] for all~$i\in \bN$. By induction, we have~$L([x^{-1}](g))=[x^{-1}](L(g))$ for all~$L\in k[t]\langle D_t\rangle$.
Since~$[x^{-1}](D_x(a))=0$ for any~$a\in \mathcal {F}$, we get~$[x^{-1}](D_x(g))=0$ for any~$g\in \mathcal {F}[[t]]$. Let~$f=\sum_{i, j\geq 0} f_{i, j}t^ix^j$ be a formal power series in~$k[[t, x]]$. Then~$F=f(x, t/x)/x$ is in~$\mathcal {F}[[t]]$. Applying~$[x^{-1}]$ to~$F$ yields \[[x^{-1}](F) = [x^{-1}](\sum_{i, j\geq 0} f_{i, j} x^{i-j-1}t^j) = \sum_{j\geq 0} f_{j, j} t^j = \operatorname{diag}(f).\] If~$L\in k[t]\langle D_t\rangle$ is such that~$L(F)=D_x(G)$ for some~$G\in \mathcal {F}[[t]]$, then applying~$[x^{-1}]$ to both sides of~$L(F)=D_x(G)$ yields~$L(\operatorname{diag}(f))=0$. In summary, we have the following lemma. \begin{lemma}\label{LM:diag} Let~$f\in k[[t, x]]$ and~$F=f(x, t/x)/x\in \mathcal {F}[[t]]$. If~$L\in k[t]\langle D_t\rangle$ is a telescoper for~$F$ such that~$L(F)=D_x(G)$ with~$G\in \mathcal {F}[[t]]$, then~$L(\operatorname{diag}(f))=0$. \end{lemma} In the following, we prove Furstenberg's diagonal theorem. \begin{theorem}[Furstenberg, 1967]\label{THM:diag} Let~$f\in k[[t, x]]\cap k(t, x)$. Then the diagonal of~$f$ is a power series algebraic over~$k(t)$. \end{theorem} \begin{proof} Let~$F=f(x, t/x)/x$. Since~$f$ is a rational function in~$k(t, x)$, so is~$F$. Let~$L\in k(t)\langle D_t \rangle$ be the minimal telescoper for~$F$. Since multiplying by an element of~$k[t]$ commutes with the derivation~$D_x$, we can always assume that the coefficients of~$L$ are polynomials in~$k[t]$. By Theorem~\ref{THM:telecc}, all of the solutions of~$L(y(t))=0$ are algebraic over~$k(t)$. So the diagonal of~$f$ is algebraic over~$k(t)$ since~$L(\operatorname{diag}(f))=0$ by Lemma~\ref{LM:diag}. \end{proof} The following example is borrowed from the recent paper by Ekhad and Zeilberger~\cite{EZ2011}, from which one can see how Zeilberger's method of creative telescoping plays a role in solving concrete problems in combinatorics.
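First, however, here is a compact illustration of Theorem~\ref{THM:diag} and Lemma~\ref{LM:diag} (this worked instance is not in the original text). Take~$f = 1/(1-t-x)$, so that~$f_{i, j}=\binom{i+j}{i}$ and
\[\operatorname{diag}(f) = \sum_{n\geq 0}\binom{2n}{n}t^n = \frac{1}{\sqrt{1-4t}},\]
which is algebraic over~$k(t)$. The function~$F = f(x, t/x)/x = -1/(x^2-x+t)$ has residues~$\mp(1-4t)^{-1/2}$ at the two roots of~$x^2-x+t$, and both residues are annihilated by the operator~$L = (1-4t)D_t - 2$, which is therefore a telescoper for~$F$ by the argument in the proof of Theorem~\ref{THM:telecc}; accordingly~$L(\operatorname{diag}(f))=0$, as one can also verify directly from the closed form above.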
\begin{example} Let~$s(n)$ be the number of binary words of length~$n$ for which the number of occurrences of~$00$ is the same as that of~$01$ as subwords. Stanley~\cite{Stanley2011} asked for a proof of the following formula \begin{equation}\label{EQ:sf} S(t) \triangleq \sum_{n=0}^{\infty} s(n) t^n = \frac{1}{2}\left(\frac{1}{1-t} + \frac{1+2t}{\sqrt{(1-t)(1-2t)(1+t+2t^2)}} \right). \end{equation} We first show that the generating function~$S(t)$ is an algebraic function over~$k(t)$. The key ingredient is the Goulden-Jackson cluster method~\cite{GJ1979}. Noonan and Zeilberger~\cite{NoonanZeilberger1999} gave an elegant survey of this method together with an efficient implementation. Let~$\mathcal {W}$ be the set of all binary words and let~$\tau_{00}(w), \tau_{01}(w)$ be the numbers of occurrences of~$00$ and~$01$ in~$w\in \mathcal {W}$, respectively. Ekhad and Zeilberger~\cite{EZ2011} define the generating function \[f(t, y, z) = \sum_{w\in \mathcal {W}} t^{\text{length(w)}} y^{\tau_{00}(w)}z^{\tau_{01}(w)}.\] Loading the package~{\sf DAVID{\_}IAN} created by Noonan and Zeilberger into {\sf Maple}, typing {\sf GJstDetail([0, 1], \{[0, 0], [0, 1]\}, t, s)}, and replacing~$s[0, 0], s[0, 1]$ by~$y, z$, respectively, we get an explicit form of~$f(t, y, z)$, \[f(t, y, z) = \frac{(1-y)t +1}{(y-z)t^2-(1+y)t+1},\] which is a rational function of three variables. By definition, the desired generating function~$S(t)$ is the coefficient of~$x^{-1}$ in~$F(t, x) := x^{-1}f(t, x, x^{-1})$. Since~$\tau_{00}(w)$ and~$\tau_{01}(w)$ are bounded by~${\text{length(w)}}$, the function~$F(t, x)$ is an element in the ring~$k((x))[[t]]$. Therefore, the coefficient~$[x^{-1}](F)$ is annihilated by any telescoper for~$F$ in~$k[t]\langle D_t\rangle$. By Theorem~\ref{THM:telecc}, the function~$S(t)$ must be an algebraic function over~$k(t)$.
By typing~{\sf DETools[Zeilberger](F, t, x, Dt)} in {\sf Maple}, we get the minimal telescoper~$L$ for~$F$, which is \begin{align*} L =& \left( -1+5\,t-13\,{t}^{2}-30\,{t}^{4}+23\,{t}^{3}+40\,{t}^{5}-40\,{t}^{6}+16\,{t}^{7} \right) {{\it Dt}}^{2} \\ &\, \, + \left( 80\,{t}^{6}-168\,{t}^{5}+152\,{t}^{4}-88\,{t}^{3}+24\,{t}^{2}-2\,t+2 \right) {\it Dt}\\ &\, \, +48\,{t}^{5}-72\,{t}^{4}+48\,{t}^{3}-12\,{t}^{2}-6\,t. \end{align*} To show Stanley's formula~\eqref{EQ:sf}, it suffices to verify that~$S(t)$ satisfies the equation~$L(y(t)) = 0$, and check the two initial conditions:~$y(0) =1$ and~$D_t(y)(0) = 2$. Moreover, we could also rediscover Stanley's formula by solving the differential equation. Thanks to Zeilberger's method, many classical combinatorial identities can now be proved, and even rediscovered, entirely by computer. \end{example} Apart from the case when~$\partial_t=D_t$ and~$\partial_x=D_x$ treated above, we will show that a telescoper for a non-integrable or non-summable rational function in~$k(t, x)$ must have at least one nonzero rational solution in~$k(t)$. This leaves eight cases. Six of them follow easily from an examination of some of the proofs above and are considered in Theorem~\ref{THM:telemixed}. The remaining two cases require a slightly more detailed proof and are considered in Theorem~\ref{THM:teleddqq}. \begin{theorem}\label{THM:telemixed} Let~$L\in k(t)\langle \partial_t\rangle$ and~$f\in k(t, x)$ satisfy one of the following conditions: \begin{enumerate} \item $\partial_t = D_t$ and~$f\notin \Delta_x(k(t, x))$; \item $\partial_t = D_t$ and~$f\notin \Delta_{q, x}(k(t, x))$; \item $\partial_t = S_t$ and~$f\notin D_x(k(t, x))$; \item $\partial_t = S_t$ and~$f\notin \Delta_{q, x}(k(t, x))$; \item $\partial_t = Q_t$ and~$f\notin D_x(k(t, x))$; \item $\partial_t = Q_t$ and~$f\notin \Delta_x(k(t, x))$. \end{enumerate} Then~$L(t, \partial_t)$ is a telescoper for some such~$f\in k(t, x)$ if and only if~$L(y(t))=0$ has a nonzero rational solution in~$k(t)$.
\end{theorem} \begin{proof} Suppose that~$L(y(t))=0$ has a nonzero rational solution~$r(t)$ in~$k(t)$. Then~$L$ is a telescoper for~$f=r(t)/x$ and~$f$ satisfies the assumption above. For the opposite implication, Theorems~\ref{THM:cd}, \ref{THM:cq}, \ref{THM:dc}, \ref{THM:dq}, \ref{THM:qc} and~\ref{THM:qd} imply that the residual form of~$f$ is of the form~$a/b$ such that~$b=b_1(t)b_2(x)$ with~$b_1\in k[t]$ and~$b_2\in k[x]$. Then \[\frac{a}{b} = \sum_{i=1}^m \sum_{j=1}^{n_i} \frac{\alpha_{i, j}}{(x-\beta_i)^j}, \] where~$\alpha_{i,j}\in k(t)$ and~$\beta_i\in k$ are in distinct ($q$-)orbits. If~$L$ is a telescoper for~$f$, then~$L$ is also a telescoper for~$a/b$. Since all the~$\beta_i$ are free of~$t$, we have \[L(a/b) = \sum_{i=1}^m \sum_{j=1}^{n_i} \frac{L(\alpha_{i, j})}{(x-\beta_i)^j}=\partial_x(g), \quad \text{where~$\partial_x\in \{D_x, \Delta_x, \Delta_{q, x}\}$}.\] By Propositions~\ref{PROP:ratint}, \ref{PROP:ratsum}, and~\ref{PROP:ratqsum}, we have~$L(\alpha_{i,j})=0$. Since~$a/b$ is not zero, at least one of the~$\alpha_{i, j}$ is nonzero. Thus~$L(y(t))=0$ has at least one nonzero rational solution in~$k(t)$. \end{proof} \begin{theorem}\label{THM:teleddqq} Let~$L\in k(t)\langle \partial_t\rangle$ and~$f\in k(t, x)$ satisfy one of the following conditions: $(1)$~$\partial_t = S_t$ and~$f\notin \Delta_{x}(k(t, x))$; $(2)$~$\partial_t = Q_t$ and~$f\notin \Delta_{q, x}(k(t, x))$. Then~$L(t, \partial_t)$ is a telescoper for some~$f\in k(t, x)$ if and only if~$L(y(t))=0$ has a nonzero rational solution in~$k(t)$. \end{theorem} \begin{proof} Suppose that~$L(y(t))=0$ has a nonzero rational solution~$r(t)$ in~$k(t)$. Then~$L$ is a telescoper for~$f=r(t)/x$ and~$f$ satisfies the assumption above. For the opposite implication, we only prove the assertion for the first case, that is, when~$L$ and~$f$ satisfies the condition~$(1)$. The remaining assertion follows in a similar manner. 
Theorem~\ref{THM:dd} implies that the residual form~$a/b$ of~$f$ can be decomposed into \[\frac{a}{b} = \sum_{i=1}^m \sum_{j=1}^{n_i} \frac{\alpha_{i, j}}{(x-\beta_i)^j},\] where~$\alpha_{i, j}\in k(t)$ and~$\beta_i =\frac{\lambda_i}{\mu_i}t+c_i$ with~$c_i\in k$, $\lambda_i\in \bZ$ and~$\mu_i\in \bN$ such that~$\gcd(\lambda_i, \mu_i)=1$ and the~$\beta_i$ are in distinct~$\bZ$-orbits. If~$L\in k(t)\langle S_t\rangle$ is a telescoper for~$f$, then~$L$ is a telescoper for~$a/b$. Moreover, $L$ is a telescoper for each fraction~$f_{i, j} = {\alpha_{i, j}}/{(x-\beta_i)^j}$. We claim that the operator~$L_{i, j} := \alpha_{i,j}(t)S_t^{\mu_i} - \alpha_{i, j}(t+\mu_i)\in k(t)\langle S_t \rangle$ is the minimal telescoper for~$f_{i, j}$ with respect to~$\Delta_x$. In fact, $L_{i, j}$ is a telescoper for~$f_{i,j}$ as shown in the proof of~Theorem~\ref{THM:dd}. It remains to show the minimality. Assume that there exists a telescoper~$\tilde{L}_{i, j}$ of order less than~$\mu_i$ for~$f_{i, j}$. Write~$\tilde{L}_{i, j}=\sum_{\ell=0}^{\mu_i-1} e_{\ell}S_t^{\ell}$. Then \[\tilde{L}_{i, j}(f_{i, j}) = \sum_{\ell=0}^{\mu_i-1} \frac{e_{\ell} \alpha_{i, j}(t+\ell)}{(x-(\frac{\lambda_i}{\mu_i}t+\frac{\lambda_i}{\mu_i}\ell + c_i))^j}.\] Since~$\gcd(\lambda_i, \mu_i)=1$ and~$\ell\in \{0, \ldots, \mu_i-1\}$, the values~$\frac{\lambda_i}{\mu_i}t+\frac{\lambda_i}{\mu_i}\ell + c_i$ are in distinct~$\bZ$-orbits. Since~$\tilde{L}_{i, j}(f_{i, j})$ is rational summable, all the residues~$e_{\ell} \alpha_{i, j}(t+\ell)$ are zero by Proposition~\ref{PROP:ratsum}. Since~$\alpha_{i,j} \neq 0$, all the~$e_{\ell}$ vanish, so that~$\tilde{L}_{i,j}$ is the zero operator, a contradiction. This proves the claim. Since~$L$ is a telescoper for~$f_{i, j}$, $L_{i, j}$ divides~$L$ on the right. Note that the rational function~$\alpha_{i, j}\in k(t)$ is a nonzero solution of~$L_{i, j}(y(t))=0$. Thus, $L(y(t))=0$ has at least one nonzero rational solution in~$k(t)$.
\end{proof} \section*{Appendix}\label{SECT:appendix} In this appendix, we present proofs of Propositions~\ref{PROP:afrde} and~\ref{PROP:aflqe}. Let~$K \subset E$ be difference fields of characteristic zero with automorphism~$\sigma$ and assume that the constants $E^\sigma$ of~$E$ are in~$K$. Furthermore, assume that~$E$ is algebraically closed. \begin{lemma}\label{LM:shiftclose} Let~$u\in E$ be algebraic over~$K$ and assume that~$u$ satisfies a homogeneous linear difference equation over~$K$. Then there exists a field~$F \subset E$ with~$\sigma(F) = F$, $K \subset F$, $[F : K]< \infty$, and~$u \in F$. \end{lemma} \begin{proof} Let $u$ satisfy \begin{equation}\label{eqn1} \sigma^n(u) + b_{n-1} \sigma^{n-1}(u) + \cdots + b_0u = 0 \end{equation} with~$b_i \in K, b_0 \neq 0$ and let~$F = K(u, \sigma(u), \ldots , \sigma^{n-1}(u))$. We have that~$[F:K] < \infty$ since for any~$i$, $\sigma^i(u)$ is algebraic over~$K$. To see that~$\sigma(F) \subset F$ it is enough to show that~$\sigma^i(u) \in F$ for all~$i$. This is certainly true for~$i = 0, \ldots, n$. If~$i > n$, apply~$\sigma^{i-n}$ to equation~\eqref{eqn1} and proceed by induction to conclude~$\sigma^{i}(u) \in F$. If~$i <0$, apply~$\sigma^{i}$ and proceed by induction to conclude~$\sigma^i(u) \in F$. \end{proof} \begin{lemma}\label{LM:aflre1} Let~$K = k(t)$, where~$k$ is algebraically closed. Let~$(E, \sigma)$ be a difference field such that~$K \subset E$, $\sigma(t) = t+1$ and~$[E : K] < \infty$. Then~$E = K$. \end{lemma} \begin{proof} Let~$n = [E:K]$ and~$g$ be the genus of~$E$. The Riemann-Hurwitz formula (see~\cite[p.\ 106]{Chevalley1951} or~\cite[p.\ 125]{Fulton1989}) yields \begin{equation}\label{RHfmla} 2g-2 = -2n + \sum_P(e(P) - 1), \end{equation} where the sum is over all places~$P$ of~$E$ and~$e(P)$ is the ramification index of~$P$ with respect to~$K$. There are only a finite number of places~$Q$ of~$K$ over which places of~$E$ ramify and the automorphism~$\sigma$ leaves the set of such places invariant.
On the other hand, the only nonempty finite set of places of~$K$ that is left invariant by~$\sigma$ is the set consisting of the place at infinity. Therefore, if~$P$ is a place of~$E$ with~$e(P) > 1$, then~$P$ lies above the place at infinity. Note that for any place~$Q$ of~$K$, Theorem~1 of~\cite[p.\ 52]{Chevalley1951} implies (under our assumptions) that \begin{equation}\label{ramsum} \sum_{\mbox{$P$ lies above~$Q$}} e(P) = n. \end{equation} Therefore we have \begin{eqnarray*} 2g-2 & = & -2n + \sum_{\mbox{$P$ lies above~$\infty$}}(e(P) - 1)\\& =& -2n +n-t_\infty \\ & =& -n-t_\infty, \end{eqnarray*} where~$t_\infty$ is the number of places above infinity. Since~$n$ and~$t_\infty$ are both positive integers and~$g$ is nonnegative, we must have~$g=0$ and~$n = t_\infty = 1$. In particular, since~$n = 1$, we have~$E= K$. \end{proof} \noindent\underline{\emph{Proof of Proposition~\ref{PROP:afrde}.}} Suppose that~$\alpha(t)$ satisfies the linear recurrence relation \[S_t^n(\alpha) + a_{n-1} S_t^{n-1} (\alpha) + \cdots + a_0 \alpha = 0,\] where~$a_i\in k(t)$. By Lemma~\ref{LM:shiftclose}, the field~$E = k(t)(\alpha, S_t(\alpha), \ldots, S_t^{n-1}(\alpha)) \subset \overline{k(t)}$ is a difference field extension of~$k(t)$. Since~$[E:k(t)]<\infty$, $E=k(t)$ by Lemma~\ref{LM:aflre1}. Thus~$\alpha\in k(t)$. \hfill $\Box$ \begin{remark} Proposition~\ref{PROP:afrde} has been shown in~\cite[Theorem 1]{Benzaghou1992},~\cite[Prop.\ 4.4]{vdPutSinger1997} and~\cite[Theorem 5.2]{Bell2008}. The proof in~\cite[Theorem 5.2]{Bell2008} is based on analytic properties of algebraic functions.\\[0.1in] In this proposition, we assume that~$\alpha(t)$ satisfies a polynomial equation over~$k(t)$ and \underline{lies in a field}. This latter condition cannot be weakened without weakening the conclusion. For example, the sequence~$y=(-1)^n$ satisfies $y^2-1 = 0$ but~$k(t)[y]$ is a ring with zero divisors. The above references give a complete characterization of sequences satisfying both linear recurrences and polynomial equations.
\end{remark} The following result is a~$q$-analogue of Lemma~\ref{LM:aflre1}. \begin{lemma}\label{LM:aflqre1} Let~$K = k(t)$, where~$k$ is algebraically closed. Let~$(E, \sigma)$ be a difference field such that~$K \subset E$, $\sigma(t) = qt$ with~$q \in k\setminus\{0\}$ and not a root of unity, and~$[E:K] < \infty$. Then~$E = k(t^{{1}/{n}})$ for some positive integer~$n$. \end{lemma} \begin{proof} Let~$[E:K] = n$ and~$g$ be the genus of~$E$. We again consider the set of places of~$K$ over which places of~$E$ ramify. This set is left invariant by~$\sigma$ and so must be a subset of the set containing the place at~$0$ and the place at~$\infty$. Therefore, ramification can occur only at~$0$ and~$\infty$. Equations~\eqref{RHfmla} and~\eqref{ramsum} imply \begin{eqnarray*} 2g-2 & = & -2n + \sum_{\mbox{~$P$ lies above~$0$}}(e(P) - 1) + \sum_{\mbox{$P$ lies above~$\infty$}}(e(P) - 1)\\ &=&-2n +2n-t_0-t_\infty \\ &=& -t_0-t_\infty \end{eqnarray*} where~$t_0, t_\infty$ are the numbers of places above~$0$ and~$\infty$, respectively. Since~$t_0$ and~$t_\infty$ are positive and~$g$ is nonnegative, we must have that~$g=0$ and~$t_0 = t_\infty = 1$. Therefore, $E$ has one place~$P_0$ over~$0$ with~$e(P_0) = n$ and one place~$P_\infty$ over~$\infty$ with~$e(P_\infty) = n$. Writing divisors multiplicatively, Riemann's Theorem (\cite[p.\ 22]{Chevalley1951}) implies that \begin{equation*} l(P_0P^{-1}_\infty) \geq d(P_0P^{-1}_\infty) -g+1 = 0-0+1 = 1 \end{equation*} where~$l(P_0P^{-1}_\infty)$ is the dimension of the space of elements of~$E$ which are~$\equiv 0 \mod {P_0P^{-1}_\infty}$. Note that since the degree of~$P_0P^{-1}_\infty$ is~$0$, this latter condition implies that any such nonzero element has~$P_0P^{-1}_\infty$ as its divisor. Therefore, there exists an element~$y \in E$ whose divisor is~$P_0P^{-1}_\infty$. Note that the element~$t$ has divisor~$P_0^nP^{-n}_\infty$ and therefore~$y^nt^{-1}$ must be in~$k$. Therefore~$y = c\, t^{{1}/{n}}$ for some~$c \in k$.
Finally, Theorem 4 of~\cite[p.\ 18]{Chevalley1951} states that~$[E:k(y)]$ equals the degree of the divisor of zeros of~$y$, that is, $[E:k(y)] = 1$. Therefore~$ E = k(y) = k(t^{{1}/{n}})$. \end{proof} \noindent{\underline{\emph{Proof of Proposition~\ref{PROP:aflqe}.}} Suppose that~$\alpha(t)$ satisfies the linear $q$-recurrence relation \[Q_t^n(\alpha) + a_{n-1} Q_t^{n-1} (\alpha) + \cdots + a_0 \alpha = 0,\] where~$a_i\in k(t)$. By Lemma~\ref{LM:shiftclose}, the field~$E = k(t)(\alpha, Q_t(\alpha), \ldots, Q_t^{n-1}(\alpha)) \subset \overline{k(t)}$ is a difference field extension of~$k(t)$. Since~$[E:k(t)]<\infty$, $E=k(t^{1/n})$ by Lemma~\ref{LM:aflqre1}. Thus~$\alpha\in k(t^{1/n})$.} \hfill $\Box$ \bibliographystyle{plain}
https://arxiv.org/abs/1406.1984
Discrete Hardy-type Inequalities
This paper studies Hardy-type inequalities on discrete intervals. The first result is a set of variational formulas for the optimal constants; using these formulas, one may obtain an approximating procedure and the known basic estimates of the optimal constants. The second result, which is the main innovation of this paper, concerns the factor in the basic upper estimates: an improved factor is presented, which is smaller than the known one and is best possible. Some comparison results for the optimal constants on different intervals are also included.
\section{Introduction} Given two constants $p$ and $q$ with $1< p\leqslant q< \infty$ and two positive sequences $\mathbf{u}$ and $\mathbf{v}$ on a discrete interval $[1, N]:= \{1, 2, \dots, N \}$ with $N\leqslant +\infty$, the discrete Hardy-type inequality reads: \begin{equation}\label{Hardy} \left[\sum_{n= 1}^N u_n \left(\sum_{i=1}^n x_i \right)^q \right]^{1/q} \leqslant A \left(\sum_{n=1}^N v_n x_n^p \right)^{1/p}, \end{equation} where $\mathbf{x}$ is an arbitrary non-negative sequence on $[1, N]$. To save notation, the constant $A$ is always assumed to be optimal. The purpose of this paper is two-fold. First, we give some variational formulas for the optimal constants. The primary applications of the variational formulas are the approximating procedure and the basic estimates. We briefly review recent advances on the basic estimates, cf. \cite{Bliss, Chen4, Chen3, Maz'ya, Opic}. In the continuous case, the following result, due to B. Opic \rf{Opic}{Theorem 1.14} and V. G. Maz'ya \rf{Maz'ya}{Theorem 1, pp. 42-43}, is well known: \bg{equation}\label{basic} B \leqslant A \leqslant \tilde{k}_{q, p} B, \end{equation} where $B$ is a quantity determined by $N$, $p$, $q$, $\mathbf{u}$ and $\mathbf{v}$, and the factor $\tilde{k}_{q, p}$ is a constant depending only on $p$ and $q$: \bg{equation} \label{tilde k_qp} \tilde{k}_{q,p} = \left(1+ \frac{q}{p^*} \right)^{1/q} \left(1+ \frac{p^*}{q} \right)^{1/p^*}, \end{equation} where $p^*$ is the conjugate exponent of $p$, i.e. $1/p + 1/{p^*}= 1$. In particular, $\tilde{k}_{p,p}= p^{1/p}(p^*)^{1/{p^*}}$. Afterwards, Chen \rf{Chen3}{Theorem 2.1} obtained the same conclusions through variational formulas for the optimal constants. Furthermore, there is an approximating procedure \rf{Chen3}{Theorem 2.2} based on the variational formulas, which can improve the estimates of the optimal constants step by step.
In the discrete context, when $p=q$, Chen, Wang and Zhang \cite{Chen2} arrive at the corresponding variational formulas and basic estimates similar to (\ref{basic}); of course, $B$ must be adjusted appropriately in the discrete case. When $p\ne q$, Mao \rf{Mao}{Proposition A.1} obtains a similar result, but the factor in the basic upper estimates is $p^{1/q} (p^*)^{1/p^*}$, which is a little coarser than $\tilde{k}_{q,p}$. Our first goal is to establish the corresponding variational formulas in the discrete context under the condition $p\neq q$. Then, as applications of these formulas, we obtain the basic estimates and the approximating procedure. Overall, these results can be regarded as an extension of the studies in the continuous context \cite{Chen3}. Second, we study the upper bounds in the basic estimates of the optimal constants in the discrete case. Our result is that the factor $\tilde{k}_{q, p}$ in (\ref{basic}) can be improved to $k_{q, p}$: \bg{equation} \label{k_qp} k_{q,p} = \left(\frac{r}{B(\frac{1}{r}, \frac{q- 1}{r})} \right)^{1/p- 1/q}, \end{equation} where $B(a, b)= \int_0^1 x^{a-1} (1- x)^{b- 1} \text{\rm d} x$ is the Beta function and $r= q/p -1$. Moreover, we show that this factor is best possible and is consistent with the result of the continuous case. In the continuous case, the improvement has been worked out, cf. \rf{Bennett3}{Theorem 8}, \rf{Manakov}{Theorem 2}, and \rf{Kufner}{pp. 45-47}. The key is the result of Bliss \cite{Bliss}, which gives an integral inequality whose optimal constant is attained. However, the analogue of this conclusion in the discrete context is nontrivial; as mentioned in \rf{Bennett3}{page 170, two lines above (61)}, \textquotedblleft I have been unable to prove the discrete analogue of Theorem 8\textquotedblright\ (the latter being the continuous result). We are able to prove this discrete analogue, which constitutes the second part of this paper.
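The definition (\ref{k_qp}) is easy to evaluate numerically. The following is a quick sanity check that $k_{q,p}< \tilde{k}_{q,p}$ for $p< q$ (a sketch, not part of the paper's argument; the helper names are ours). Computing the Beta function in log scale via \texttt{lgamma} avoids underflow, since $B(1/r, (q-1)/r)$ tends to zero very fast as $q \to p$.

```python
import math

def k_tilde(p, q):
    """Classical factor (tilde k_qp): (1 + q/p*)^(1/q) * (1 + p*/q)^(1/p*)."""
    ps = p / (p - 1)                       # conjugate exponent p^*
    return (1 + q / ps) ** (1 / q) * (1 + ps / q) ** (1 / ps)

def k_improved(p, q):
    """Improved factor (k_qp): (r / B(1/r, (q-1)/r))^(1/p - 1/q), r = q/p - 1."""
    r = q / p - 1
    # log B(1/r, (q-1)/r) via log-gamma; note 1/r + (q-1)/r = q/r
    log_beta = math.lgamma(1 / r) + math.lgamma((q - 1) / r) - math.lgamma(q / r)
    return math.exp((math.log(r) - log_beta) * (1 / p - 1 / q))

# p = 2, q = 4: r = 1 and B(1, 3) = 1/3, so k_{4,2} = 3^{1/4}
assert abs(k_improved(2, 4) - 3 ** 0.25) < 1e-12
# the improved factor is strictly smaller for p < q
assert k_improved(2, 4) < k_tilde(2, 4)
# as q -> p the factors merge: k_{p,p} = tilde k_{p,p} = p^{1/p} (p*)^{1/p*}, = 2 for p = 2
assert abs(k_improved(2, 2 + 1e-6) - 2.0) < 1e-3
```

The last assertion illustrates numerically the consistency $k_{p,p}= \tilde{k}_{p,p}$ discussed below.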
When $p=q$, it is well known that the factor $\tilde{k}_{q,p}$ is sharp (see for instance \rf{Hardy_book}{Theorems 326 and 327}). Note that if we let $q\rightarrow p$, then by the identity $$ \lim_{r\rightarrow 0^{+}} B^r \left(\frac{a}{r}, \frac{b}{r} \right) = \frac{a^a b^b}{(a+ b)^{a+ b}}, \qquad (a,b >0), $$ we have \bg{equation}\label{k_pp} k_{p,p}= \tilde{k}_{p,p}= p^{1/p}(p^*)^{1/{p^*}}. \end{equation} This means that our improved factor is consistent with the original one when $p= q$. Thus, our main results are devoted to the case $p< q$. Hardy-type inequalities have been studied for a long time; they are one of the major themes of harmonic analysis and provide useful tools, e.g., in the theory and practice of differential equations, in approximation theory, etc. From a probabilistic point of view, these inequalities are important tools for studying the convergence rates of the corresponding processes. These are the origin and motivation of this study. This paper is organized as follows. In the rest of this section, we introduce some notation and definitions and then state the main results. In Sections 2 and 3, we prove the conclusions on the discrete half line (i.e. $N= \infty$). The case of a finite interval (i.e. $N< \infty$) is handled uniformly in the final section, which gives some comparison results for the optimal constants and their basic estimates on different intervals. For simplicity, we need some notation. Let $\hat{v}_i = v_i^{1- p^*}$, $1\leqslant i\leqslant N$. For any sequence $\mathbf{x}$ on $[1, N]$, define an operator $H$: \bg{equation}\label{H} H\mathbf{x}(n)= \begin{cases} 0, & n= 0, \\ \sum_{i=1}^n x_i, & n=1, 2, \cdots, N, \end{cases} \end{equation} the partial sums of $\mathbf{x}$. The following notations are used frequently: $$ \alpha \wedge \beta = \min \{ \alpha, \beta \}, \qquad \alpha \vee \beta = \max \{ \alpha, \beta \}.
$$ Set $$ {\scr A}[1, N]= \{\mathbf{x}: \text{$x_1>0$, and $x_i\geqslant 0$ for $i=2, \dots, N$} \}. $$ In this article, we use the convention $1/0 = \infty$. Then the optimal constant $A$ is given by the following variational formula: \begin{equation}\label{A} A = \sup_{\mathbf{x} \in {\scr A}[1, N]} \frac{\left[\sum_{n= 1}^N u_n \left(\sum_{i=1}^n x_i \right)^q \right]^{1/q}}{\left(\sum_{n= 1}^N v_n x_n^p \right)^{1/p}} = \sup_{\mathbf{x} \in {\scr A}[1, N]} \frac{\| H\mathbf{x} \|_{l^q(u)}}{\| \mathbf{x} \|_{l^p(v)}}, \end{equation} where $\| \mathbf{x} \|_{l^q(u)}= \left[ \sum_{n=1}^{N} u_n x_n^q \right]^{1/q}$ and similarly for $\| \mathbf{x} \|_{l^p(v)}$. For the upper estimates, define the single summation operator $I^*$ and the double summation operator $I\!I^*$ as: \begin{align} I_n^*(\mathbf{x}) &= \frac{\hat{v}_n}{x_n} \left(\sum_{i=n}^N u_i (H\mathbf{x}(i))^{q/p^*} \right)^{p^*/q}, \label{I*} \\ I\!I_n^*(\mathbf{x}) &= \frac{1}{H\mathbf{x}(n)} \sum_{i=1}^n \hat{v}_i \left(\sum_{j=i}^N u_j (H\mathbf{x}(j))^{q/p^*} \right)^{p^*/q}, \label{II*} \end{align} with domain ${\scr A}[1, N]$. For the lower estimates, there are some differences: \begin{align} I_n (\mathbf{x})&= \frac{\hat{v}_n}{x_n} \left(\sum_{i=n}^N u_i (H\mathbf{x}(i))^{q- 1} \right)^{p^*- 1}, \label{I} \\ I\!I_n (\mathbf{x})&= \frac{1}{H \mathbf{x} (n)} \sum_{i=1}^n \hat{v}_i \left(\sum_{j=i}^N u_j (H\mathbf{x}(j))^{q- 1} \right)^{p^*- 1}. \label{II} \end{align} It is easy to see that $I\!I^*= I\!I$ and $I^*= I$ when $p= q$. To avoid the non-summability problem, the domains of $I$ and $I\!I$ have to be restricted to: $$ {\scr A}_0 [1, N]= \left\{\mathbf{x} \in {\scr A}[1, N]: \sum_{i=1}^N v_i x_i^p < \infty \right\}. $$ With these notations, we can state the main variational formulas for the optimal constants.
\nnd\begin{thm1}\label{Var} {\cms The optimal constant $A$ in the Hardy-type inequality {\rm (\ref{Hardy})} satisfies (i) upper estimates: \begin{equation} A\leqslant \inf_{\mathbf{x}\in {\scr A}[1, N]} \left(\sup_{n\in [1, N]} I\!I_n^*(\mathbf{x}) \right)^{1/p^*} = \inf_{\mathbf{x}\in {\scr A}[1, N]} \left(\sup_{n\in [1, N]} I_n^*(\mathbf{x}) \right) ^{1/p^*}. \end{equation} (ii) lower estimates: \begin{equation} \aligned A&\geqslant \sup_{\mathbf{x}\in {\scr A}_0 [1, N]} \|\mathbf{x}\|_{l^p(v)}^{p/q -1} \left(\inf_{n\in [1,N]} I\!I_n (\mathbf{x})\right)^{{(p-1)}/q} \\ &\geqslant \sup_{\mathbf{x}\in {\scr A}_0 [1, N]} \|\mathbf{x}\|_{l^p(v)}^{p/q -1} \left(\inf_{n\in [1,N]} I_n (\mathbf{x})\right)^{{(p-1)}/q}. \endaligned \end{equation} } \end{thm1} Applying Theorem \ref{Var} to an appropriate test function, we can obtain the basic estimates (\ref{basic}). To be specific: \nnd\begin{crl1}\label{basic estimates} {\cms The inequality {\rm (\ref{Hardy})} holds for every $\mathbf{x}\in {\scr A}[1, N]$ if and only if $B< \infty$, where \bg{equation}\label{B} B= \sup_{n\in [1, N]} \left( \sum_{i=1}^n \hat{v}_i \right)^{1/p^*} \left(\sum_{j=n}^N u_j \right)^{1/q}. \end{equation} Moreover, we have $$ B\leqslant A\leqslant \tilde{k}_{q,p}B, $$ where $\tilde{k}_{q,p}$ is defined in (\ref{tilde k_qp}) and is independent of $\mathbf{u}$, $\mathbf{v}$ and $N$. } \end{crl1} Roughly speaking, the conclusion of Corollary \ref{basic estimates} follows from the first iteration with an appropriate test function. Moreover, we can improve the estimates step by step through repeated iterations on this test function. The following corollary, which is of great significance for numerical computation, is based on this idea.
\nnd\begin{crl1}\label{approximating} {\cms (i) Define $$ \aligned x^{(1)}_n &= (H \mathbf{\hat{v}} (n))^\alpha - (H \mathbf{\hat{v}} (n- 1))^\alpha, \\ x^{(m+ 1)}_n &= \hat{v}_n \left(\sum_{i= n}^N u_i (H \mathbf{x}^{(m)} (i))^{q/{p^*}} \right)^{{p^*}/q}, \endaligned $$ where $\alpha= q/(p^*+ q)$. If $\delta_1:= \sup_{n\in [1, N]} \left( I\!I_n (\mathbf{x}^{(1)}) \right)^{1/p^*}< \infty$, define a sequence as: \bg{equation} \delta_m= \sup_{n\in [1, N]} \left( I\!I^*_n(\mathbf{x}^{(m)})\right)^{1/p^*}, \quad m=2, 3, \dots. \end{equation} Otherwise, define $\delta_m \equiv \infty$. Then $\{\delta_m\}$ is a non-increasing sequence (denote by $\delta_\infty$ its limit) and we have $$ A\leqslant \delta_\infty \leqslant \cdots \leqslant \delta_1 \leqslant \tilde{k}_{q,p} B. $$ (ii) Fix $k\in [1, N]$, define $$ \aligned y^{(k, 1)}_n &= \begin{cases} \hat{v}_n, & 1\leqslant n\leqslant k, \\ 0, & n> k, \end{cases} \\ y^{(k, m+1)}_n &= \hat{v}_n \left(\sum_{i=n}^N u_i \left(H \mathbf{y}^{(k, m)} (i) \right)^{q- 1} \right)^{p^*- 1}, \endaligned $$ and define sequences as $$ \aligned \widetilde{\delta}_m&= \sup_{k\in [1, N]} \big\| \mathbf{y}^{(k, m)} \big\|_{l^p (v)}^{p/q -1} \left(\inf_{n\in [1, N]} I\!I_n \left( \mathbf{y}^{(k, m)} \right) \right)^{{(p- 1)}/q}, \\ \overline{\delta}_m&= \sup_{k\in [1, N]} \frac{\left[\sum_{n= 1}^N u_n \left( H \mathbf{y}^{(k, m)} (n) \right)^q \right]^{1/q}}{\left[\sum_{n=1}^N v_n \left(y_n^{(k, m)} \right)^p \right]^{1/p}}. \endaligned $$ Then we have $A\geqslant \widetilde{\delta}_m \vee \overline{\delta}_m$ for all $m\geqslant 1$. } \end{crl1} Another main result of this paper concerns the factor in (\ref{basic}). Just as in the continuous case, the factor in the basic upper estimates can be improved; furthermore, we can prove that the improved factor is best possible. This result is described in detail below.
\nnd\begin{thm1}\label{kqpB} {\cms The basic upper estimates can be improved to: \begin{equation} A\leqslant k_{q, p}B, \end{equation} where $k_{q,p}$ is defined in (\ref{k_qp}). In particular, when $N= \infty$ and $\sum_{i=1}^\infty \hat{v}_i = \infty$, the factor $k_{q,p}$ is sharp. } \end{thm1} \section{Proof of Theorem \ref{Var} } In this section, we always assume $N= \infty$; the case of a finite interval will be discussed in Section \ref{Interval}. The first proposition concerns the sequences that attain equality in (\ref{Hardy}). This result is not used in the proof of Theorem \ref{Var}, but it is needed when studying the properties of the optimal constants. \nnd\begin{prp1}\label{decreasing} {\cms Suppose the optimal constant $A$ appearing in (\ref{Hardy}) is attained at some non-negative sequence $\mathbf{x}$, and define $w_n = \hat{v}_n^{-1} x_n$. Then the sequence $\mathbf{w}$ is decreasing. } \end{prp1} \medskip \noindent {\bf Proof}. With the definition of $\mathbf{w}$, we can rewrite the Hardy-type inequality (\ref{Hardy}) as: \bg{equation}\label{Hardy*} \left[ \sum_{n=1}^\infty u_n \left(\sum_{i=1}^n \hat{v}_i w_i \right)^q \right]^{1/q}\leqslant A \left(\sum_{n=1}^\infty \hat{v}_n w_n^p \right)^{1/p}, \end{equation} and $A$ is attained at $\mathbf{w}$. The idea in the remainder of this proof is from Bennett \rf{Bennett1}{Section 3}. Suppose, for contradiction, that there exist integers $i$ and $j$ with $1\leqslant i< j< \infty$ such that $w_i< w_j$. We can construct a new sequence $\mathbf{w'}$ from $\mathbf{w}$ as \begin{equation}\label{decreasing1} w'_i= w'_j= w_0, \end{equation} where $w_0$ satisfies \begin{equation}\label{decreasing2} (\hat{v}_i+ \hat{v}_j) w_0^p= \hat{v}_i w_i^p+ \hat{v}_j w_j^p. \end{equation} On the one hand, by (\ref{decreasing2}), the right side of (\ref{Hardy*}) is unchanged when $\mathbf{w}$ is replaced by $\mathbf{w'}$.
On the other hand, since $p> 1$, (\ref{decreasing2}) implies \bg{equation}\label{decreasing3} w_i< w_0< w_j \end{equation} and \bg{equation}\label{decreasing4} \hat{v}_i w_i + \hat{v}_j w_j < (\hat{v}_i + \hat{v}_j) w_0. \end{equation} Combining (\ref{decreasing3}) and (\ref{decreasing4}), we obtain $$ \sum_{k=1}^n \hat{v}_k w_k \leqslant \sum_{k=1}^n \hat{v}_k w'_k, \qquad \forall n \geqslant 1, $$ with strict inequality for $n\geqslant i$. This means that the left side of (\ref{Hardy*}) increases strictly when $\mathbf{w}$ is replaced by $\mathbf{w'}$. Hence $A$ is not attained at $\mathbf{w}$, which is a contradiction. \quad $\square$ \medskip \noindent {\bf Proof of Theorem \ref{Var}.} The outline of the proof of Theorem \ref{Var} is as follows. (a) First, we verify the relation between the single summation operator $I^*$ and the double summation operator $I\!I^*$: $$ \inf_{\mathbf{x} \in {\scr A}[1, \infty)} \left[\sup_{n\in [1, \infty)} I\!I_n^*(\mathbf{x}) \right]^{1/p^*} = \inf_{\mathbf{x} \in {\scr A}[1, \infty)} \left[\sup_{n\in [1, \infty)} I_n^*(\mathbf{x}) \right]^{1/p^*}. $$ For any $\mathbf{x} \in {\scr A}[1, \infty)$, as an application of the proportional property, we get $$ \aligned \sup_{n\in [1, \infty)} I\!I_n^*(\mathbf{x}) &= \sup_{n\in [1, \infty)} \frac{1}{H \mathbf{x} (n)} \left[ \sum_{i= 1}^{n} \hat{v}_i \left(\sum_{j=i}^{\infty} u_j (H \mathbf{x} (j))^{q/{p^*}} \right)^{{p^*}/q}\right] \\ &\leqslant \sup_{n\in [1, \infty)} \frac{1}{x_n} \left[ \hat{v}_n \left(\sum_{i=n}^{\infty} u_i (H \mathbf{x} (i))^{q/{p^*}} \right)^{{p^*}/q}\right] \\ &= \sup_{n\in [1, \infty)} I_n^*(\mathbf{x}). \endaligned $$ Hence, we have $$ \inf_{\mathbf{x} \in {\scr A}[1, \infty)} \left[\sup_{n\in [1, \infty)} I\!I_n^*(\mathbf{x}) \right]^{1/p^*} \leqslant\inf_{\mathbf{x} \in {\scr A}[1, \infty)} \left[\sup_{n\in [1, \infty)} I_n^*(\mathbf{x}) \right]^{1/p^*}.
$$ On the other hand, for any $\mathbf{x}\in {\scr A}[1, \infty)$, define $$ y_n= \hat{v}_n \left(\sum_{i= n}^{\infty} u_i (H \mathbf{x} (i))^{q/{p^*}} \right)^{{p^*}/q}. $$ Obviously, $y_n> 0$ on $[1, \infty)$, so $\mathbf{y} \in {\scr A}[1, \infty)$. Again, using the proportional property, we have $$ \aligned \sup_{n\in [1, \infty)} I_n^*(\mathbf{y}) &= \sup_{n\in [1, \infty)} \left[ \frac{\sum_{i=n}^\infty u_i (H \mathbf{y} (i))^{q/{p^*}}}{\sum_{i=n}^\infty u_i (H \mathbf{x} (i))^{q/{p^*}}} \right]^{{p^*}/q} \\ &\leqslant \sup_{n\in [1, \infty)} \frac{1}{H \mathbf{x} (n)} \sum_{i= 1}^n \hat{v}_i \left(\sum_{j= i}^\infty u_j (H \mathbf{x} (j))^{q/{p^*}} \right)^{{p^*}/q} \\ &= \sup_{n\in [1, \infty)} I\!I_n^*(\mathbf{x}). \endaligned $$ Since $\mathbf{x}$ is arbitrary, we obtain the desired conclusion. (b) The next step is to show the upper estimates of the optimal constants. Assume $A$ is attained at a non-negative sequence $\mathbf{a}$. For each positive sequence $\mathbf{h}$, as an application of the H\"older inequality and the H\"older-Minkowski inequality, we have $$ \aligned \sum_{n= 1}^{\infty} u_n (H \mathbf{a} (n))^q &= \sum_{n= 1}^{\infty} u_n \left(\sum_{i= 1}^n a_i v_i^{1/p} h_i^{-1} v_i^{-1/p} h_i \right)^q \\ &\leqslant \sum_{n= 1}^{\infty} u_n \left(\sum_{i= 1}^n a_i^p v_i h_i^{-p} \right)^{q/p} \left(\sum_{k= 1}^n v_k^{-{p^*}/{p}} h_k^{p^*} \right)^{q/{p^*}} \\ &\leqslant \left\{ \sum_{n= 1}^{\infty} a_n^p v_n h_n^{-p} \left[\sum_{i= n}^{\infty} u_i \left(\sum_{k= 1}^i \hat{v}_k h_k^{p^*} \right)^{q/{p^*}} \right]^{p/q} \right\}^{q/p}. \endaligned $$ In the last step, we use the H\"older-Minkowski inequality, which needs the condition $p\leqslant q$; in particular, when $p= q$, it reduces to the Fubini theorem.
Taking the power $1/q$, we get \begin{align}\label{==1} \left[\sum_{n= 1}^{\infty} u_n (H \mathbf{a} (n))^q \right]^{1/q} &\leqslant \left\{ \sum_{n= 1}^{\infty} a_n^p v_n h_n^{-p} \left[\sum_{i= n}^{\infty} u_i \left(\sum_{k= 1}^i \hat{v}_k h_k^{p^*} \right)^{q/{p^*}} \right]^{p/q} \right\}^{1/p} \notag \\ &\leqslant \sup_{n\in [1, \infty)} \left[ \frac{1}{h_n^q} \sum_{i=n}^{\infty} u_i \left(\sum_{k= 1}^i \hat{v}_k h_k^{p^*} \right)^{q/{p^*}} \right]^{1/q} \left(\sum_{j= 1}^{\infty} v_j a_j^p \right)^{1/p}. \end{align} For any $\mathbf{x} \in {\scr A}[1, \infty)$, let $$ h_n= \left( \sum_{i=n}^{\infty} u_i (H \mathbf{x} (i))^{q/{p^*}} \right)^{1/q}; $$ by the proportional property, we have $$ \aligned \sup_{n\in [1, \infty)} & \left[\frac{1}{h_n^q} \sum_{i=n}^{\infty} u_i \left(\sum_{j= 1}^i \hat{v}_j h_j^{p^*} \right)^{q/{p^*}} \right]^{1/q} \\ &\leqslant \sup_{n\in [1, \infty)} \left[\frac{1}{H \mathbf{x} (n)} \cdot \sum_{i= 1}^n \hat{v}_i \left(\sum_{j= i}^{\infty} u_j (H \mathbf{x} (j))^{q/p^*} \right)^{p^*/q} \right]^{1/p^*} \\ &= \sup_{n\in [1, \infty)} {I\!I_n^*(\mathbf{x})}^{1/p^*}. \endaligned $$ Inserting this formula into (\ref{==1}), we obtain $$ A = \frac{\left( \sum_{n= 1}^\infty u_n (H \mathbf{a} (n))^q \right)^{1/q}}{\left( \sum_{n= 1}^\infty v_n a_n^p \right)^{1/p}}\leqslant \sup_{n\in [1, \infty)} {I\!I_n^*(\mathbf{x})}^{1/p^*}. $$ Since $\mathbf{x}$ is arbitrary, it follows that $$ A \leqslant \inf_{\mathbf{x} \in {\scr A}[1, \infty)} \sup_{n \in [1, \infty)} {I\!I_n^*(\mathbf{x})}^{1/p^*}. $$ (c) For the lower estimates, we again first consider the relation between $I$ and $I\!I$. For any $\mathbf{x} \in {\scr A}_0 [1, \infty)$, we need to show: $$ \inf_{n\in [1, \infty)} I_n (\mathbf{x}) \leqslant \inf_{n\in [1, \infty)} I\!I_n (\mathbf{x}). $$ In fact, with the help of the proportional property, an argument similar to the one used in (a) shows this result.
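Both comparisons rest on the proportional (mediant) property $\min_i a_i/b_i \leqslant \sum_i a_i \big/ \sum_i b_i \leqslant \max_i a_i/b_i$ for positive $a_i, b_i$. As a numeric spot-check on a finite interval (the weights, exponents and test sequence below are arbitrary random choices, not data from the paper), one can verify $\sup_n I\!I_n^*(\mathbf{x}) \leqslant \sup_n I_n^*(\mathbf{x})$ and $\inf_n I_n(\mathbf{x}) \leqslant \inf_n I\!I_n(\mathbf{x})$ directly from the definitions:

```python
import random

random.seed(7)
N, p, q = 30, 1.5, 2.5
ps = p / (p - 1)                       # conjugate exponent p^*
u = [random.uniform(0.1, 2.0) for _ in range(N)]
v = [random.uniform(0.1, 2.0) for _ in range(N)]
vh = [vi ** (1 - ps) for vi in v]      # hat v_i = v_i^{1 - p^*}
x = [random.uniform(0.1, 2.0) for _ in range(N)]

Hx, s = [], 0.0
for xi in x:
    s += xi
    Hx.append(s)                       # partial sums H x(n)

def tail(expo):
    """t[n] = sum_{i >= n} u_i (H x(i))^expo, 0-based index."""
    t = [0.0] * (N + 1)
    for i in range(N - 1, -1, -1):
        t[i] = t[i + 1] + u[i] * Hx[i] ** expo
    return t

t_star, t_low = tail(q / ps), tail(q - 1)
I_star = [vh[n] / x[n] * t_star[n] ** (ps / q) for n in range(N)]
I_low = [vh[n] / x[n] * t_low[n] ** (ps - 1) for n in range(N)]
II_star = [sum(vh[i] * t_star[i] ** (ps / q) for i in range(n + 1)) / Hx[n]
           for n in range(N)]
II_low = [sum(vh[i] * t_low[i] ** (ps - 1) for i in range(n + 1)) / Hx[n]
          for n in range(N)]

# mediant property: sup II* <= sup I* (step (a)) and inf I <= inf II (step (c))
assert max(II_star) <= max(I_star) * (1 + 1e-9)
assert min(I_low) <= min(II_low) * (1 + 1e-9)
```

The two assertions hold for every positive choice of weights and test sequence, which is exactly the content of steps (a) and (c).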
Next, we show the variational formulas for $A$. By the summability of the sequence $\mathbf{x}$, we may assume without loss of generality that $\sum_{i= 1}^\infty v_i x_i^p =1$. Hence our next step is to prove $$ \sup_{\mathbf{x} \in \tilde{{\scr A}}[1, \infty) } \inf_{n\in [1, \infty)} I\!I_n(\mathbf{x})^{(p- 1)/q} \leqslant A, $$ where $\tilde{{\scr A}}[1, \infty)= \{ \mathbf{x} \in {\scr A}_0 [1, \infty): \sum_{i=1}^\infty v_i x_i^p = 1 \}$. We begin with the classical variational formula (\ref{A}) for the optimal constants. For any $\mathbf{x} \in \tilde{{\scr A}}[1, \infty)$, define $$ y_n= \hat{v}_n \left(\sum_{i= n}^\infty u_i (H \mathbf{x} (i))^{q-1} \right)^{p^*- 1}; $$ then we have \bg{equation}\label{==2} A \geqslant \frac{\| H \mathbf{y} \|_{l^q (u)}}{\| \mathbf{y} \|_{l^p(v)}} = \frac{\left[\sum_{n= 1}^\infty u_n (H \mathbf{y} (n))^{q} \right]^{1/q}}{\left[\sum_{n= 1}^\infty \hat{v}_n \left(\sum_{i=n}^\infty u_i (H \mathbf{x} (i))^{q-1} \right)^{p^*} \right]^{1/p}}. \end{equation} Consider the denominator of (\ref{==2}); according to the Fubini theorem and the definition of $\mathbf{y}$, we obtain \begin{align}\label{==3} \sum_{i=1}^\infty & y_i \left(\sum_{j= i}^\infty u_j (H \mathbf{x} (j))^{q-1} \right) = \sum_{j=1}^\infty u_j (H \mathbf{y} (j)) (H \mathbf{x} (j))^{q-1}\notag \\ &\leqslant \left[\sum_{j= 1}^\infty u_j (H \mathbf{y} (j))^{q/p} (H \mathbf{x} (j))^{q/{p^*}} \right]^{p/q} \left[\sum_{j= 1}^\infty u_j (H \mathbf{x} (j))^q \right]^{(q-p)/q}. \end{align} The last step is based on the H\"older inequality, which needs the condition $p< q$. Moreover, since $\sum_{i= 1}^\infty v_i x_i^p = 1$, we have \bg{equation}\label{==4} \left[\sum_{j= 1}^\infty u_j (H \mathbf{x} (j))^q \right]^{(q-p)/q} \leqslant A^{q- p}.
\end{equation} Combining (\ref{==2}), (\ref{==3}) and (\ref{==4}) and using the proportional property, we obtain $$ A \geqslant \left[\frac{\sum_{i=1}^\infty u_i (H \mathbf{y} (i))^q}{\sum_{i=1}^\infty u_i (H \mathbf{y} (i))^{q/p} (H \mathbf{x} (i))^{q/{p^*}}} \right]^{p/{q^2}} \geqslant \inf_{n \in [1, \infty)} \left[ \frac{H \mathbf{y} (n)}{H \mathbf{x} (n)} \right]^{(p-1)/q}. $$ By the definition of $\mathbf{y}$, we get $$ \inf_{n\in [1, \infty)} I\!I_n(\mathbf{x})^{(p-1)/q} \leqslant A. $$ Since $\mathbf{x}$ is arbitrary, we obtain the variational formulas for $A$. The proof is completed in the case $N= \infty$. \quad $\square$ \medskip \noindent {\bf Proof of Corollary \ref{basic estimates}.} With the help of Theorem \ref{Var}, we can obtain the basic estimates by choosing an appropriate test function. We consider the upper estimates first. Before the proof, we need some preparations. Given an increasing positive sequence $\mathbf{\Phi}$ on $[1, N]$, for any $n\in [1, N- 1]$ and $0< \alpha< 1$, we assert that \begin{align}\label{==5} \sum_{i=n+ 1}^{N} \left[\left(\frac{\Phi_i}{\Phi_n}\right)^\alpha- \left(\frac{\Phi_{i- 1}}{\Phi_n}\right)^\alpha \right] \left(\frac{\Phi_i}{\Phi_n}\right)^{-1} \leqslant \frac{\alpha}{1- \alpha}. \end{align} This can be proved by induction, starting from $n=N- 1$. Let $y= \Phi_N/ \Phi_{N- 1}$; then $y\geqslant 1$ (since $\mathbf{\Phi}$ is increasing). A simple calculation shows that the function $$ f(x)= (x^\alpha- 1) x^{-1}, \qquad x\geqslant 1, $$ reaches its maximum at $x= \left(\frac{1}{1- \alpha}\right)^{1/\alpha}$.
Hence $$ \aligned \left[\left(\frac{\Phi_N}{\Phi_{N- 1}}\right)^\alpha- 1 \right] \left(\frac{\Phi_N}{\Phi_{N- 1}}\right)^{-1} &= (y^\alpha- 1) y^{-1} \\ &\leqslant (1- \alpha)^{1/ \alpha}\left(\frac{\alpha}{1- \alpha}\right) \\ &\leqslant \frac{\alpha}{1- \alpha}. \endaligned $$ For any $2\leqslant m\leqslant N- 1$, assume that inequality (\ref{==5}) holds for $n= m$, and consider $n=m- 1$. Let $y= \Phi_{m}/ \Phi_{m- 1}$; then $y\geqslant 1$. By the induction hypothesis, we have $$ \aligned \sum_{i= m}^N & \left[\left(\frac{\Phi_i}{\Phi_{m- 1}}\right)^\alpha - \left(\frac{\Phi_{i- 1}}{\Phi_{m- 1}}\right)^\alpha \right] \left(\frac{\Phi_i}{\Phi_{m- 1}}\right)^{-1} \\ &= \left(\frac{\Phi_{m}}{\Phi_{m- 1}}\right)^{\alpha- 1} \sum_{i=m}^{N} \left[\left(\frac{\Phi_i}{\Phi_{m}}\right)^\alpha - \left(\frac{\Phi_{i- 1}}{\Phi_{m}}\right)^\alpha \right] \left(\frac{\Phi_i}{\Phi_{m}}\right)^{-1} \\ &\leqslant \left(\frac{\Phi_{m}}{\Phi_{m- 1}}\right)^{\alpha- 1} \left[\frac{\alpha}{1- \alpha}+ 1- \left(\frac{\Phi_{m- 1}}{\Phi_{m}}\right)^\alpha \right] \\ &= \frac{1}{1- \alpha} y^{\alpha- 1}- y^{-1}.
\endaligned $$ Again, a simple calculation shows that the function $$ f(x)= \frac{1}{1- \alpha} x^{\alpha- 1}- x^{-1}, \qquad x\geqslant 1, $$ reaches its maximum $\dfrac{\alpha}{1- \alpha}$ at $x= 1$. Hence $$ \sum_{i= m}^N \left[\left(\frac{\Phi_i}{\Phi_{m- 1}}\right)^\alpha - \left(\frac{\Phi_{i- 1}}{\Phi_{m- 1}}\right)^\alpha \right] \left(\frac{\Phi_i}{\Phi_{m- 1}}\right)^{-1} \leqslant \frac{\alpha}{1- \alpha}. $$ This proves the assertion by induction. Notice that the right-hand side of (\ref{==5}) is independent of $N$, so (\ref{==5}) remains true as $N \rightarrow \infty$. Let $\alpha\in (0, 1)$ be an undetermined parameter. With inequality (\ref{==5}) in hand, we can prove: \bg{equation}\label{==6} \left(\sum_{i= n}^\infty u_i (H \mathbf{\hat{v}} (i))^{\alpha q/p^*} \right)^{1/q}\leqslant B (H \mathbf{\hat{v}} (n))^{(\alpha- 1)/p^*} \left(\frac{1}{1- \alpha}\right)^{1/q}, \quad 1\leqslant n< \infty. \end{equation} By the definition of $B$, for any $n$ we have $(\sum_{i=n}^\infty u_i)^{1/q}\leqslant B (H \mathbf{\hat{v}} (n))^{-1/{p^*}}$.
Let $\Phi_n= (H \mathbf{\hat{v}} (n))^{q/{p^*}}$; by summation by parts, we have $$\aligned \sum_{i= n}^\infty u_i \Phi_i^\alpha &= \Phi_n^\alpha \left(\sum_{i=n}^\infty u_i \right) + \sum_{i=n+ 1}^\infty \left(\Phi_i^\alpha- \Phi_{i- 1}^\alpha \right) \left(\sum_{j=i}^\infty u_j \right) \\ &\leqslant B^q \Phi_n^{\alpha- 1} + B^q \sum_{i=n+ 1}^\infty \left(\Phi_i^\alpha- \Phi_{i- 1}^\alpha \right) \Phi_i^{-1} \\ &= B^q \Phi_n^{\alpha- 1} \left\{ 1+ \sum_{i=n+ 1}^\infty \left[\left(\frac{\Phi_i}{\Phi_n}\right)^\alpha- \left(\frac{\Phi_{i- 1}}{\Phi_n}\right)^\alpha \right] \left(\frac{\Phi_i}{\Phi_n}\right)^{-1} \right\} \\ &\leqslant B^q \Phi_n^{\alpha- 1} \left(\frac{1}{1- \alpha} \right). \endaligned$$ Taking the power $1/q$, we obtain inequality (\ref{==6}). Now, we are ready to prove the upper bounds in the basic estimates.
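Inequality (\ref{==5}) is elementary but central to this step; it can also be spot-checked numerically (a sketch with an arbitrary random increasing sequence and an arbitrary $\alpha$, not data from the paper):

```python
import random

random.seed(1)
alpha = 0.4                                          # any alpha in (0, 1)
Phi = [1.0]
for _ in range(199):
    Phi.append(Phi[-1] + random.uniform(0.01, 3.0))  # increasing positive sequence

# inequality (==5): for every n, the telescoping-type sum is at most alpha/(1-alpha)
bound = alpha / (1 - alpha)
for n in range(len(Phi) - 1):                        # 0-based n plays the role of Phi_n
    s = sum(((Phi[i] / Phi[n]) ** alpha - (Phi[i - 1] / Phi[n]) ** alpha)
            * (Phi[n] / Phi[i]) for i in range(n + 1, len(Phi)))
    assert s <= bound + 1e-12
```

For $\Phi_i = i$ and $\alpha = 1/2$ the sum approaches the bound $1$ only slowly, which reflects that $\alpha/(1-\alpha)$ cannot be replaced by a smaller constant independent of $\mathbf{\Phi}$.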
For any $n\in[1, \infty)$, let $x_n = (H \mathbf{\hat{v}} (n))^{\alpha} - (H \mathbf{\hat{v}} (n- 1))^{\alpha}$; by the concavity of $t^{\alpha}$, $x_n\geqslant \alpha \hat{v}_n (H \mathbf{\hat{v}} (n))^{\alpha- 1}$, and we have $$ \aligned {I_n^*(\mathbf{x})}^{1/p^*} &= \left[\frac{\hat{v}_n}{(H \mathbf{\hat{v}} (n))^{\alpha} - (H \mathbf{\hat{v}} (n- 1))^{\alpha}} \left(\sum_{i= n}^\infty u_i (H \mathbf{\hat{v}} (i))^{\alpha q /p^*} \right)^{{p^*}/q} \right]^{1/{p^*}} \\ &\leqslant \left[\frac{1}{\alpha (H \mathbf{\hat{v}} (n))^{\alpha- 1} } \left(\sum_{i= n}^\infty u_i (H \mathbf{\hat{v}} (i))^{\alpha q/ p^*} \right)^{{p^*}/q} \right]^{1/{p^*}} \\ &= \alpha^{-1/{p^*}} \left(H \mathbf{\hat{v}} (n) \right)^{(1- \alpha)/{p^*}} \left(\sum_{i= n}^\infty u_i (H \mathbf{\hat{v}} (i))^{\alpha q/{p^*}} \right)^{1/q} \\ &\leqslant B \alpha^{-1/{p^*}} (1- \alpha)^{-1/q}. \endaligned $$ An easy calculation shows that the function $$ f(x)= x^{-1/{p^*}} (1- x)^{-1/q}, \qquad 0< x<1, $$ reaches its minimum $$ \tilde{k}_{q, p}= \left(1+ \frac{q}{p^*}\right)^{1/q} \left(1+ \frac{p^*}{q}\right)^{1/p^*} $$ at $x= \dfrac{q}{p^*+ q}$. Hence, taking $\alpha= \dfrac{q}{p^*+ q}$, we have $$ \sup_{n\in [1, \infty)} \left( I_n^* (\mathbf{x}) \right)^{1/p^*}\leqslant \tilde{k}_{q, p} B. $$ By Theorem \ref{Var}, we get the basic upper estimates. The basic lower estimates are more straightforward. For any $n\in [1, \infty)$, we can choose a test sequence as $$ x_i^{(n)}= \begin{cases} \hat{v}_i, & 1\leqslant i\leqslant n, \\ 0, & n< i< \infty.
\end{cases} $$ It is obvious that $\mathbf{x}^{(n)} \in {\scr A}_0 [1, \infty)$, so by (\ref{A}) we have $$ \aligned A&\geqslant \sup_{n\in [1, \infty)} \frac{\left[\sum_{i= 1}^\infty u_i (H \mathbf{x}^{(n)} (i))^q \right]^{1/q}}{\left[ \sum_{i= 1}^\infty v_i \left(x_i^{(n)} \right)^p \right]^{1/p}} \\ &= \sup_{n\in [1, \infty)} \left(\sum_{i= 1}^n \hat{v}_i \right)^{1/p^*} \left[ \left(\sum_{i= 1}^n \hat{v}_i \right)^{-q} \left( \sum_{i= 1}^{n- 1} u_i \left(H \mathbf{\hat{v}} (i) \right)^q \right) + \sum_{i= n}^\infty u_i \right]^{1/q} \\ &\geqslant B. \endaligned $$ This completes the proof of Corollary \ref{basic estimates}. \quad $\square$ \medskip \medskip \noindent {\bf Proof of Corollary \ref{approximating}.} By the proportional property, we obtain the monotonicity of $\{\delta_m\}$. The approximating sequence $\{ \delta_m \}$ comes from the upper estimates in the variational formula, and $\{\widetilde{\delta}_m\}$ comes from the lower ones; these results are simple applications of Theorem \ref{Var}. The sequence $\{\overline{\delta}_m\}$ is a straightforward application of the classical variational formula (\ref{A}). \quad $\square$ \medskip \section{Proof of Theorem \ref{kqpB}} Again, we assume $N= \infty$; the case of a finite interval will be discussed in Section \ref{Interval}. We begin with the following well-known lemma. \nnd\begin{lmm1}\label{compare} {\cms Let $\mathbf{a}$, $\mathbf{b}$ be sequences with non-negative entries. If $$ \sum_{k=i}^\infty a_k \leqslant \sum_{k=i}^\infty b_k, \qquad (\forall ~ i=1,2, \cdots) $$ then for any increasing non-negative sequence $\mathbf{c}$, we have $$ \sum_{k=1}^\infty a_k c_k \leqslant \sum_{k=1}^\infty b_k c_k. $$ } \end{lmm1} \medskip \noindent {\bf Proof}. Set $c_0= 0$.
Summing by parts, we have $$ \aligned \sum_{k=1}^\infty a_k c_k &= \sum_{k=1}^\infty \left(\sum_{i=1}^k (c_i- c_{i-1}) \right) a_k = \sum_{i=1}^\infty \left(\sum_{k=i}^\infty a_k \right) (c_i- c_{i-1}) \\ &\leqslant \sum_{i=1}^\infty \left(\sum_{k=i}^\infty b_k \right) (c_i- c_{i-1})= \sum_{k=1}^\infty b_k c_k. \endaligned $$ This completes the proof of Lemma \ref{compare}. \quad $\square$ \medskip Lemma \ref{compare} concerns increasing sequences; analogously, there is a corresponding conclusion for decreasing sequences, cf. \rf{Bennett2}{Lemma 1}. The following lemma, due to Bliss \cite{Bliss}, gives a special Hardy-type inequality in the continuous case. \nnd\begin{lmm1}\label{Bliss_lemma} {\cms For any non-negative real function $f(x)$, we have \bg{equation} \left(\int_0^\infty \frac{1}{x^{q- r}} \left(\int_0^x f(t) \text{\rm d} t \right)^q \text{\rm d} x \right)^{1/q}\leqslant k_{q,p} \left(\frac{p^*}{q} \right)^{1/q} \left(\int_0^\infty f^p(x) \text{\rm d} x \right)^{1/p}, \end{equation} where $r= q/p -1$ and $k_{q,p}$ is the optimal constant, defined by (\ref{k_qp}). Moreover, the optimal constant is attained when $$ f(x)= \frac{c}{(d \cdot x^r + 1)^{(r+ 1)/r}}, $$ where $c$ and $d$ are non-negative constants. } \end{lmm1} \medskip \noindent { {\bf Proof of Theorem \ref{kqpB}}. By Corollary \ref{basic estimates}, it is obvious that $A= \infty$ if $B= \infty$. To avoid this trivial case, we assume $B< \infty$. (a) First we consider the case that $H \mathbf{\hat{v}} (\infty)= \lim_{n \rightarrow \infty} H \mathbf{\hat{v}} (n) =\infty$. As in Proposition \ref{decreasing}, we can rewrite the Hardy-type inequality (\ref{Hardy}) as: $$ \sum_{n=1}^\infty u_n \left(\sum_{i=1}^n \hat{v}_i x_i \right)^q \leqslant A^q \left(\sum_{n=1}^\infty \hat{v}_n x_n^p \right)^{q/p}.
$$ Define the sequence $\mathbf{\tilde{u}}$ by \bg{equation}\label{tilde_u} \tilde{u}_n = B^q \left( (H \mathbf{\hat{v}} (n))^{-q/p^*}- (H \mathbf{\hat{v}} (n+1))^{-q/p^*} \right), \qquad n\geqslant 1. \end{equation} By direct summation and $H \mathbf{\hat{v}} (\infty)= \infty$, we have $$ \sum_{i= n}^\infty \tilde{u}_i = B^q \left(H \mathbf{\hat{v}} (n)^{-q/p^*} - H \mathbf{\hat{v}} (\infty)^{-q/p^*} \right)= B^q H \mathbf{\hat{v}} (n)^{-q/p^*} \geqslant \sum_{i= n}^\infty u_i. $$ Applying Lemma \ref{compare}, for any non-negative sequence $\mathbf{x}$, we obtain \bg{equation}\label{kqp1} \sum_{n= 1}^\infty u_n \left(\sum_{i=1}^n \hat{v}_i x_i \right)^q \leqslant \sum_{n= 1}^\infty \tilde{u}_n \left(\sum_{i=1}^n \hat{v}_i x_i \right)^q. \end{equation} The next step is to show $$ \sum_{n= 1}^\infty \tilde{u}_n \left(\sum_{i=1}^n \hat{v}_i x_i \right)^q \leqslant A^q \left(\sum_{n=1}^\infty \hat{v}_n x_n^p \right)^{q/p}. $$ In order to use Lemma \ref{Bliss_lemma}, we construct a function that connects summation with integration. Define the function $f: [0, \infty) \rightarrow [0, \infty)$ by \bg{equation}\label{prp1} f(x)= \begin{cases} x_n, & H \mathbf{\hat{v}} (n-1) \leqslant x< H \mathbf{\hat{v}} (n), \\ 0, & x\geqslant \sup_n H \mathbf{\hat{v}} (n). \end{cases} \end{equation} It is clear that \bg{equation}\label{prp2} \sum_{i=1}^\infty \hat{v}_i x_i^p = \int_{0}^\infty f^p(x) \text{\rm d} x, \end{equation} and \bg{equation}\label{prp3} \sum_{i=1}^n \hat{v}_i x_i\leqslant \int_0^\alpha f(t) \text{\rm d} t, \end{equation} where $H \mathbf{\hat{v}} (n) \leqslant \alpha< H \mathbf{\hat{v}} (n+ 1)$. For convenience, write $\tilde{u}_0= 0, \hat{v}_0= 0$.
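For clarity, we note why (\ref{prp2}) and (\ref{prp3}) hold: $f$ is constant, equal to $x_n$, on an interval of length $H \mathbf{\hat{v}} (n)- H \mathbf{\hat{v}} (n- 1)= \hat{v}_n$, so $$ \int_0^\infty f^p(x) \text{\rm d} x = \sum_{n=1}^\infty x_n^p \left(H \mathbf{\hat{v}} (n)- H \mathbf{\hat{v}} (n- 1)\right) = \sum_{n=1}^\infty \hat{v}_n x_n^p, $$ while $\int_0^{H \mathbf{\hat{v}} (n)} f(t) \text{\rm d} t= \sum_{i=1}^n \hat{v}_i x_i$ together with $f\geqslant 0$ gives (\ref{prp3}).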
Applying (\ref{prp3}), Lemma \ref{Bliss_lemma} and (\ref{prp2}), we see that \begin{align} \allowdisplaybreaks \sum_{n=0}^\infty \tilde{u}_n \left(\sum_{k=0}^n \hat{v}_k x_k \right)^q &= \sum_{n=0}^\infty B^q \left( (H \mathbf{\hat{v}} (n))^{-q/p^*}- (H \mathbf{\hat{v}} (n+ 1))^{-q/p^*} \right) \left(\sum_{k= 0}^n \hat{v}_k x_k \right)^q \notag \\ &= \sum_{n=0}^\infty B^q \left(\frac{q}{p^*} \int_{H \mathbf{\hat{v}} (n)}^{H \mathbf{\hat{v}} (n+ 1)} x^{-q/p^* - 1} \text{\rm d} x\right) \left(\sum_{k= 0}^n \hat{v}_k x_k \right)^q \notag \\ &\leqslant \frac{q}{p^*} B^q \left(\sum_{n=0}^\infty \int_{H \mathbf{\hat{v}} (n)}^{H \mathbf{\hat{v}} (n+ 1)} x^{-q/p^* - 1} \left(\int_0^x f(t) \text{\rm d} t\right)^q \text{\rm d} x \right) \notag \\ &= \frac{q}{p^*} B^q \int_0^\infty x^{r- q} \left(\int_0^x f(t) \text{\rm d} t \right)^q \text{\rm d} x \quad (r=q/p - 1)\notag \\ &\leqslant B^q k_{q, p}^q \left(\int_0^\infty f^p(x) \text{\rm d} x \right)^{q/p} \notag \\ &= B^q k_{q, p}^q \left(\sum_{i=0}^\infty \hat{v}_i x_i^p \right)^{q/p}. \notag \end{align} By the definition of the optimal constant, we have $A\leqslant k_{q, p} B$. (b) To show that the factor in the basic upper estimate is best possible, we attempt to mimic the extremal function in Lemma \ref{Bliss_lemma}: $$ f(x)= \frac{c}{\left(dx^r+ 1\right)^{(r+1)/r}} \quad \text{and} \quad \int_0^x f(t) \text{\rm d} t = \frac{cx}{\left(dx^r +1 \right)^{1/r}}, $$ where $c$ and $d$ are arbitrary positive constants. Starting from this form, we set $$ u_n= n^{-q/p^*}- (n+1)^{-q/p^*}, \qquad v_n \equiv 1, $$ and $$ x_n = \frac{c n}{(n^r+ d)^{1/r}}- \frac{c (n-1)}{((n-1)^r+ d)^{1/r}}. $$ Obviously, the form of $\mathbf{x}$ comes from the differences of $\int_0^x f(t) \text{\rm d} t$. In this case, we have $B= 1$. Here we are free to choose $c$ and $d$; however, whatever the choice, there is some loss of precision in passing between integrals and series.
By direct calculation, one finds that this loss becomes negligible as $c/d \rightarrow 0$. Without loss of generality, we choose $c= 1$ and let $d$ be a large enough positive real number. Next, we calculate the left and right sides of (\ref{Hardy}). The calculation of the right side of (\ref{Hardy}) is direct. By the definition of $\mathbf{x}$, we have \begin{align}\label{right} \sum_{n= 1}^\infty x_n^p &= \sum_{n=1}^\infty \left[\frac{n}{(n^r + d)^{1/r}} -\frac{n- 1}{((n- 1)^r + d)^{1/r}} \right]^p \notag \\ &= \sum_{n=1}^\infty \left[\int_{n- 1}^n \frac{d}{(x^r+ d)^{1/r +1}} \text{\rm d} x \right]^p \notag \\ &\leqslant \sum_{n= 1}^\infty \int_{n- 1}^n \left(\frac{d}{(x^r+ d)^{1/r +1}} \right)^p \text{\rm d} x \notag \\ &= r^{-1} d^{(1- p)/r} B\left(\frac{1}{r}, \frac{q- 1}{r}\right). \end{align} The left side of (\ref{Hardy}) is more delicate. First, we assert that there is a large enough integer $N$ such that \bg{equation}\label{left1} \int_N^{\infty} x^{-q/{p^*}- 1} \left[ \frac{x^q}{(x^r+ d)^{q/r}} \right] \text{\rm d} x \leqslant \int_1^{\infty} (x+ 1)^{-q/{p^*}- 1} \left[ \frac{x^q}{(x^r+ d)^{q/r}} \right] \text{\rm d} x. \end{equation} In fact, we have $$ \int_N^{\infty} x^{-q/{p^*}- 1} \left[ \frac{x^q}{(x^r+ d)^{q/r}} \right] \text{\rm d} x \leqslant \int_N^{\infty} x^{-q/{p^*}- 1} \text{\rm d} x = \frac{p^*}{q} N^{-q/{p^*}}. $$ The existence of $N$ is obvious, since the left side of (\ref{left1}) decreases to $0$ as $N \uparrow \infty$. Fix such a sufficiently large integer $N$; then the left side of (\ref{left1}) can be computed explicitly. Using the substitution $s^{-1}= d^{-1} x^r +1$, we have \bg{equation}\label{left2} \int_N^{\infty} x^{-q/{p^*}- 1} \left[ \frac{x^q}{(x^r+ d)^{q/r}} \right] \text{\rm d} x = r^{-1} d^{- \frac{q}{rp^*}} B\left(\frac{1+r}{r}, \frac{q- r- 1}{r}, \frac{d}{N^r+ d}\right), \end{equation} where $B(a, b, x)$ is the incomplete Beta function: $$ B(a, b, x)= \int_{0}^{x} s^{a- 1} (1- s)^{b- 1} \text{\rm d} s.
$$ Applying the mean value theorem, (\ref{left1}) and (\ref{left2}), we have \begin{align}\label{left3} \int_1^\infty & \left[x^{-q/{p^*}}- (x+ 1)^{-q/{p^*}} \right] \frac{x^q}{(x^r+ d)^{q/r}} \text{\rm d} x \notag \\ & \geqslant \int_1^\infty \frac{q}{p^*} (x+ 1)^{-q/{p^*}- 1} \left[\frac{x^q}{(x^r+ d)^{q/r}} \right] \text{\rm d} x \notag \\ & \geqslant d^{- \frac{q}{rp^*}} \left(\frac{q}{p^*} \right) r^{-1} B\left(\frac{1+r}{r}, \frac{q- r- 1}{r}, \frac{d}{N^r+ d}\right). \end{align} Now the optimal constant can easily be estimated. Using the relation $$ B(a+ 1, b- 1)= \frac{a}{b- 1} B(a, b), $$ it follows from (\ref{A}), (\ref{right}) and (\ref{left3}) that $$ \aligned A^q &\geqslant \left[\sum_{n=1}^{\infty} \left(n^{-q/p^*}- (n+ 1)^{-q/p^*} \right) (H \mathbf{x} (n))^q \right] \left( \sum_{n=1}^\infty x_n^p \right)^{- q/p} \\ &\geqslant \left[\int_{1}^{\infty} \left[x^{-q/p^*} - (x+ 1)^{-q/p^*} \right] \frac{x^q}{(x^r+ d)^{\frac{q}{r}}} \text{\rm d} x \right] \left( \sum_{n=1}^\infty x_n^p \right)^{- q/p} \\ &\geqslant \left(\frac{q}{p^*}\right) r^{q/p - 1} \cdot B\left(\frac{1+ r}{r}, \frac{q- 1- r}{r}, \frac{d}{N^r+ d} \right) \cdot B\left(\frac{1}{r}, \frac{q- 1}{r}\right)^{- q/p} \\ &\rightarrow k_{q,p}^q \qquad \text{(as $d \rightarrow \infty$)}. \endaligned $$ Hence, the factor in the basic upper estimate is best possible. (c) The final step is to remove the condition $H \mathbf{\hat{v}} (\infty)= \infty$. We use Proposition \ref{C. estimates} of Section \ref{Interval}. Fix $N_0 < \infty$. For given $\mathbf{u}$ and $\mathbf{v}$ on $[1, \infty)$, we define $\mathbf{u}^{N_0}$ and $\mathbf{v}^{N_0}$ to be the restrictions of $\mathbf{u}$ and $\mathbf{v}$ to $[1, N_0]$. Then define $$ \overline{u}_n = \begin{cases} u_n^{N_0}, & 1\leqslant n \leqslant N_0, \\ 0, & n> N_0, \end{cases} $$ and $$ \overline{v}_n = \begin{cases} v_n^{N_0}, & 1\leqslant n\leqslant N_0, \\ 1, & n> N_0. \end{cases} $$ Obviously, we have $H \mathbf{\overline{v}} (\infty) = \infty$.
Applying the result of (a), we have $$ A(\mathbf{\overline{u}}, \mathbf{\overline{v}}) \leqslant k_{q, p} B(\mathbf{\overline{u}}, \mathbf{\overline{v}}). $$ By Proposition \ref{C. estimates}, we get $$ A(\mathbf{u}^{N_0}, \mathbf{v}^{N_0}) \leqslant k_{q, p} B(\mathbf{u}^{N_0}, \mathbf{v}^{N_0}). $$ The assertion follows by letting $N_0\rightarrow \infty$. This completes the proof of Theorem \ref{kqpB} in the case $N= \infty$. \quad $\square$ \medskip Reviewing part (a) of the proof of Theorem \ref{kqpB}: when \bg{equation}\label{v} N= \infty, \qquad H \mathbf{\hat{v}} (\infty) =\infty, \end{equation} it provides a method to construct $\mathbf{u}$ from $\mathbf{v}$ such that the Hardy-type inequality (\ref{Hardy}) holds with these $\mathbf{u}$ and $\mathbf{v}$. Part (b) shows that the optimal constant attains the upper bound of the basic estimate; hence the basic upper estimate with the improved factor $k_{q,p}$ holds for a large class of $(\mathbf{u}, \mathbf{v})$. The original idea of this construction comes from Chen \rf{Chen3}{Proposition 4.5}. To distinguish it from Theorem \ref{kqpB}, we state the following proposition. \nnd\begin{prp1}\label{Bliss_prp} {\cms For any positive sequence $\mathbf{v}$ and any constant $0< C< \infty$, the discrete Hardy-type inequality (\ref{Hardy}) holds on $[1, \infty)$ with \bg{equation} \tilde{u}_n = C^q \left( (H \mathbf{\hat{v}} (n))^{-q/p^*}- (H \mathbf{\hat{v}} (n+1))^{-q/p^*} \right), \qquad n\geqslant 1, \end{equation} and its optimal constant $A$ satisfies \bg{equation} A \leqslant k_{q, p} C, \end{equation} where $k_{q,p}$ is defined by (\ref{k_qp}). Moreover, when $N$ and $\mathbf{\hat{v}}$ satisfy (\ref{v}), the upper bound is sharp with $C= B$. } \end{prp1} \section{Hardy-type Inequalities on Intervals}\label{Interval} In this section, we establish comparison results for the optimal constants and the basic estimates on different intervals.
In the continuous case, the corresponding comparison results were obtained by Chen \rf{Chen3}{Appendix}. With these results, we can complete the proofs of Theorems \ref{Var} and \ref{kqpB}. Before the specific discussion, we need some notation. Fix two natural numbers $N$ and $N'$ with $N< N'$. Given two positive sequences $\mathbf{u}$ and $\mathbf{v}$ on $[1, N]$, we can extend them to $[1, N']$ as follows: \bg{equation}\label{u'} u'_i = \begin{cases} u_i, & 1\leqslant i\leqslant N, \\ 0, & N< i\leqslant N'; \end{cases} \end{equation} \bg{equation}\label{v'} v'_i = \begin{cases} v_i, & 1\leqslant i\leqslant N, \\ \#, & N< i\leqslant N', \end{cases} \end{equation} where $\#$ denotes arbitrary positive numbers. Denote by $A_N(\mathbf{u}, \mathbf{v})$ the optimal constant of the Hardy-type inequality (\ref{Hardy}) on the interval $[1, N]$ with sequences $\mathbf{u}$ and $\mathbf{v}$, and similarly for $B_N(\mathbf{u}, \mathbf{v})$. The first result is a comparison of the optimal constants on different intervals. \nnd\begin{prp1}\label{C. constant} {\cms Let $\mathbf{u}'$ and $\mathbf{v}'$ be two positive sequences on $[1, N']$, and let $\mathbf{u}$ and $\mathbf{v}$ denote their restrictions to $[1, N]$. Then we have $A_N(\mathbf{u}, \mathbf{v}) \uparrow A_{N'}(\mathbf{u}', \mathbf{v}')$ as $N \uparrow {N'} \leqslant \infty$. In particular, if the inequality (\ref{Hardy}) holds on $[1, N']$, then it also holds with the same constant $A_{N'}(\mathbf{u}', \mathbf{v}')$ on $[1, N]$. } \end{prp1} \medskip \noindent {\bf Proof}. (a) Given a non-negative sequence $\mathbf{x}$ on $[1, N]$, we can extend it to $[1, N']$ by setting \bg{equation}\label{extension of x} x'_i = \begin{cases} x_i, & 1\leqslant i\leqslant N, \\ 0, & N< i\leqslant N'.
\end{cases} \end{equation} Then we have $$ \aligned \left[ \sum_{n=1}^N u_n \left(H \mathbf{x} (n) \right)^q \right]^{1/q} &= \left[ \sum_{n=1}^{N'} u'_n \left(H \mathbf{x'} (n) \right)^q \right]^{1/q} \\ &\leqslant A_{N'}(\mathbf{u}', \mathbf{v}') \left[ \sum_{n=1}^{N'} v'_n {x'}_n^p \right]^{1/p} \\ &= A_{N'}(\mathbf{u}', \mathbf{v}') \left[ \sum_{n=1}^{N} v_n {x}_n^p \right]^{1/p}. \endaligned $$ This shows that $A_N(\mathbf{u}, \mathbf{v}) \leqslant A_{N'}(\mathbf{u}', \mathbf{v}')$. (b) Our next goal is to show the convergence. First we consider the case that $\sum_{n=1}^{N'} u'_n = \infty$. Clearly, in this case we have $N'= \infty$ and $A_{N'}(\mathbf{u}', \mathbf{v}')= \infty$. Besides, restricting to $[1, n]$ and choosing $\mathbf{x}= (1, 0, \dots, 0)$, we obtain $$ A_n(\mathbf{u}, \mathbf{v}) \geqslant \left(\sum_{i=1}^n u_i \right)^{1/q} v_1^{- 1/p} \rightarrow \infty, \qquad \text{as $n\rightarrow \infty$}. $$ Hence the convergence holds in this case. (c) Let $\sum_{n=1}^{N'} u'_n < \infty$. For every non-negative sequence $\mathbf{x}$ on $[1, N']$ with $\sum_{n = 1}^{N'} v'_n x_n^p < \infty$, we get $$ \frac{\left[\sum_{n= 1}^N u_n (H \mathbf{x} (n))^q \right]^{1/q}}{\left(\sum_{n= 1}^N v_n x_n^p \right)^{1/p}} \rightarrow \frac{\left[\sum_{n= 1}^{N'} u'_n (H \mathbf{x} (n))^q \right]^{1/q}}{ \left( \sum_{n= 1}^{N'} v'_n x_n^p \right)^{1/p}} \leqslant A_{N'}(\mathbf{u}', \mathbf{v}'), $$ as $N \uparrow N'$. By (\ref{A}), for every $\varepsilon> 0$, we can choose a sequence $\mathbf{x}$ such that $$ A_{N'}(\mathbf{u}', \mathbf{v}') \leqslant \frac{\left[\sum_{n= 1}^{N'} u'_n (H \mathbf{x} (n))^q \right]^{1/q}}{\left(\sum_{n= 1}^{N'} v'_n x_n^p \right)^{1/p}} + \varepsilon.
$$ Then we can choose $N$ close enough to $N'$ such that $$ \frac{\left[\sum_{n= 1}^{N'} u'_n (H \mathbf{x} (n))^q \right]^{1/q}}{\left(\sum_{n= 1}^{N'} v'_n x_n^p \right)^{1/p}} \leqslant \frac{\left[\sum_{n= 1}^N u_n (H \mathbf{x} (n))^q \right]^{1/q}}{\left(\sum_{n= 1}^N v_n x_n^p \right)^{1/p}} + \varepsilon. $$ Hence, we get $$ A_N(\mathbf{u}, \mathbf{v}) \leqslant A_{N'}(\mathbf{u}', \mathbf{v}') \leqslant \frac{\left[\sum_{n= 1}^N u_n (H \mathbf{x} (n))^q \right]^{1/q}}{\left(\sum_{n= 1}^N v_n x_n^p \right)^{1/p}} + 2\varepsilon \leqslant A_N(\mathbf{u}, \mathbf{v}) + 2\varepsilon. $$ This proves the convergence. \quad $\square$ \medskip The following result concerns the factor in the basic estimates. \nnd\begin{prp1}\label{C. estimates} {\cms Given two positive sequences $\mathbf{u}$ and $\mathbf{v}$ on $[1, N]$, let $\mathbf{u}'$ and $\mathbf{v}'$, defined by (\ref{u'}) and (\ref{v'}), be their extensions to $[1, N']$. Suppose that $A_{N'}(\mathbf{u}', \mathbf{v}')\leqslant k B_{N'}(\mathbf{u}', \mathbf{v}')$ for a universal constant $k$; then we have $A_N(\mathbf{u}, \mathbf{v})\leqslant k B_N(\mathbf{u}, \mathbf{v})$. } \end{prp1} \medskip \noindent {\bf Proof}. Given a sequence $\mathbf{x}$ on $[1, N]$, extend it from $[1, N]$ to $[1, N']$ by (\ref{extension of x}). Then we have $$ \aligned \left( \sum_{n=1}^N u_n (H \mathbf{x} (n))^q \right)^{1/q} &= \left( \sum_{n=1}^{N'} u'_n (H \mathbf{x}' (n))^q \right)^{1/q} \\ &\leqslant A_{N'}(\mathbf{u}', \mathbf{v}') \left( \sum_{n= 1}^{N'} v'_n {x'}_n^p \right)^{1/p} \\ &\leqslant k B_{N'}(\mathbf{u}', \mathbf{v}') \left( \sum_{n= 1}^{N'} v'_n {x'}_n^p \right)^{1/p} \\ &= k B_{N'}(\mathbf{u}', \mathbf{v}') \left( \sum_{n= 1}^{N} v_n x_n^p \right)^{1/p}. \endaligned $$ With the definition of the extensions (\ref{u'}) and (\ref{v'}), one easily checks that $$ B_{N'}(\mathbf{u}', \mathbf{v}')= B_{N}(\mathbf{u}, \mathbf{v}).
$$ It follows that $$ \left( \sum_{n=1}^N u_n (H \mathbf{x} (n))^q \right)^{1/q} \leqslant k B_{N}(\mathbf{u}, \mathbf{v}) \left( \sum_{n= 1}^{N} v_n x_n^p \right)^{1/p}. $$ Hence $A_N(\mathbf{u}, \mathbf{v})\leqslant k B_N(\mathbf{u}, \mathbf{v})$ as required. \quad $\square$ \medskip With the help of Propositions \ref{C. constant} and \ref{C. estimates}, we see that the variational formulas for the optimal constants, the basic estimates, and the improved factor of the basic upper estimate remain valid when $N< \infty$. This completes the proofs of our main results. The following result gives the opposite direction of Proposition \ref{C. constant}: from local sub-intervals to the whole interval; it provides an approximating procedure for the unbounded interval. \nnd\begin{prp1}\label{C. interval} {\cms Given two positive sequences $\mathbf{u}$ and $\mathbf{v}$ on $[1, N]$, extend them to $[1, N']$ by (\ref{u'}) and (\ref{v'}). Then we have $A_N (\mathbf{u}, \mathbf{v}) = A_{N'} (\mathbf{u}', \mathbf{v}')$. } \end{prp1} \medskip \noindent {\bf Proof}. For any sequence $\mathbf{x} \in {\scr A}[1, N]$, let $\mathbf{x}'$ be the extension of $\mathbf{x}$ from $[1, N]$ to $[1, N']$ by (\ref{extension of x}). Obviously, we have $\mathbf{x}' \in {\scr A}[1, N']$. The inequality on $[1, N']$ reads $$ \| H \mathbf{x}' \|_{l^q(u')} \leqslant A_{N'} (\mathbf{u}', \mathbf{v}') \| \mathbf{x}' \|_{l^p(v')}. $$ With (\ref{u'}), (\ref{v'}) and (\ref{extension of x}), it follows that $$ \| H \mathbf{x} \|_{l^q(u)} \leqslant A_{N'} (\mathbf{u}', \mathbf{v}') \| \mathbf{x} \|_{l^p(v)}. $$ Since $\mathbf{x}$ is arbitrary, this implies that $A_N (\mathbf{u}, \mathbf{v}) \leqslant A_{N'} (\mathbf{u}', \mathbf{v}')$.
Conversely, for any $\mathbf{x} \in {\scr A}[1, N']$, we have $$ \aligned \left(\sum_{n=1}^{N'} {u'}_n (H \mathbf{x} (n))^q \right)^{1/q} &= \left(\sum_{n=1}^{N} u_n (H \mathbf{x} (n))^q \right)^{1/q} \\ &\leqslant A_N (\mathbf{u}, \mathbf{v}) \left( \sum_{n=1}^N v_n x_n^p \right)^{1/p} \\ &\leqslant A_N (\mathbf{u}, \mathbf{v}) \left( \sum_{n=1}^{N'} v'_n x_n^p \right)^{1/p}. \endaligned $$ This implies that $A_{N'} (\mathbf{u}', \mathbf{v}') \leqslant A_N (\mathbf{u}, \mathbf{v})$, and hence equality holds. \quad $\square$ \medskip \section{Examples}\label{examples} As mentioned in the introduction, Hardy-type inequalities play an important role in probability theory. The first example comes from a standard birth--death process with constant birth and death rates, cf. \rf{Chen5}{Example 5.3}. We present this example to illustrate the power of our results. \nnd\begin{xmp1}\label{example1} {\cms Let $p=q=2$ and $N= \infty$. For $n \geqslant 1$, let $u_n = \gamma^n$, $v_n= b \gamma^n$, where $\gamma$ and $b$ are constants with $\gamma< 1$ and $b> 0$. Then $$ B < \widetilde{\dz}_1= \overline{\dz}_1 < A = \dz_1 < 2B, $$ where $B= \dfrac{1}{\sqrt{b} (1- \gamma)}$, $\widetilde{\dz}_1= \overline{\dz}_1= \dfrac{\sqrt{1+ \gamma}}{\sqrt{b} (1- \gamma)}$, $A = \dz_1= \dfrac{1}{\sqrt{b}(1- \sqrt{\gamma})}$. Moreover, the optimal constant is attained at the sequence $$ a_n= \gamma^{(-n +1)/2} \left[n - (n- 1) \gamma^{1/2} \right], \qquad n\geqslant 1. $$ } \end{xmp1} \medskip \noindent {\bf Proof}. (a) First, $B$ is easy to calculate. By the definition, we have $$ B= \sup_{n\in [1, \infty)} \left(\sum_{i=1}^n b^{-1} \gamma^{- i} \right)^{1/2} \left(\sum_{j= n}^{\infty} \gamma^{j}\right)^{1/2} = \frac{1}{\sqrt{b} (1- \gamma)}.
$$ Next, by (\ref{k_pp}), we have $k_{2, 2}= 2$. By Corollary \ref{basic estimates}, we obtain the basic estimates of the optimal constant: \bg{equation}\label{EX13} \frac{1}{\sqrt{b} (1- \gamma)} \leqslant A \leqslant \frac{2}{\sqrt{b} (1- \gamma)}. \end{equation} (b) To compute $\dz_1$, we use Corollary \ref{approximating}. Let $$ x_n^{(1)}= (H \mathbf{\hat{v}} (n))^{1/2}- (H \mathbf{\hat{v}} (n- 1))^{1/2}, \qquad n\geqslant 1, $$ and then $$ H \mathbf{x}^{(1)} (n) = (H \mathbf{\hat{v}} (n))^{1/2} = \left[\frac{\gamma^{-n}- 1}{b (1- \gamma)}\right]^{1/2}. $$ For convenience, we write $\fz_n= \gamma^{-n} -1$ in what follows. By direct computation, we have \begin{align}\label{EX11} I\!I_n^* \left(\mathbf{x}^{(1)}\right) &= \frac{1}{H \mathbf{x}^{(1)} (n)} \sum_{i=1}^n \hat{v}_i \left(\sum_{j=i}^\infty u_j \left(H \mathbf{x}^{(1)} (j) \right) \right) \notag \\ &= \frac{b^{-3/2}}{(1- \gamma)^{1/2}} \frac{1}{H \mathbf{x}^{(1)} (n)} \sum_{i=1}^n \gamma^{-i} \left(\sum_{j=i}^\infty \gamma^j \fz_j^{1/2} \right) \notag \\ &= \frac{\fz_n^{-1/2}}{b (1- \gamma)} \left[\sum_{j=1}^n \gamma^{j} \fz_j^{3/2} + \fz_n \sum_{j=n+ 1}^\infty \gamma^{j} \fz_j^{1/2}\right]. \end{align} In the last step, we exchanged the order of summation. From (\ref{EX11}), it is easy to check that $I\!I_n^* \left(\mathbf{x}^{(1)}\right)$ attains its supremum as $n \rightarrow \infty$.
Hence, by L'Hospital's rule, we obtain $$ \aligned \dz_{1}^2 &= \sup_{n\in [1, \infty)} I\!I_n^* \left(\mathbf{x}^{(1)} \right) \\ &= \frac{1}{b (1- \gamma)} \left[ \lim_{n\rightarrow \infty} \fz_n^{-1/2} \sum_{j=1}^n \gamma^{j} \fz_j^{3/2} + \lim_{n\rightarrow \infty} \fz_n^{1/2} \sum_{j=n+ 1}^\infty \gamma^{j} \fz_j^{1/2} \right] \\ &= \frac{1}{b (1- \gamma)} \left[ \frac{1}{1- \sqrt{\gamma} } + \frac{\sqrt{\gamma}}{1- \sqrt{\gamma}} \right] \\ &= \frac{1}{b (1- \sqrt{\gamma})^2}. \endaligned $$ (c) Similarly, we use Corollary \ref{approximating} to compute $\overline{\dz}_1$ and $\widetilde{\dz}_1$. Fix $k> 0$ and let $$ y_n^{(k, 1)}= \begin{cases} b^{-1} \gamma^{-n}, & n\leqslant k, \\ 0, & n>k, \end{cases} $$ and then $$ H \mathbf{y}^{(k, 1)} (n) = \frac{\gamma^{-(n \wedge k)} - 1}{b (1- \gamma)}= \frac{\fz_{n \wedge k}}{b (1- \gamma)}. $$ After some tedious calculations, we have $$ \aligned I\!I_n \left( \mathbf{y}^{(k, 1)} \right) &= \frac{1}{H \mathbf{y}^{(k, 1)} (n)} \sum_{i=1}^n \hat{v}_i \left(\sum_{j= i}^\infty u_j \left(H \mathbf{y}^{(k, 1)} (j) \right) \right) \\ &= \frac{1}{b \fz_{n \wedge k}} \sum_{i=1}^n \gamma^{-i} \left(\sum_{j= i}^\infty \gamma^{j} \fz_{j \wedge k} \right) \\ &= \frac{1}{b (1- \gamma)} \left[\frac{1+ \gamma}{1- \gamma} - \frac{2 (n \wedge k)}{\fz_{n \wedge k}} + (k- n)\vee 0 - \frac{\gamma^{k+ 1} \fz_{(n- k) \vee 0}}{1- \gamma} \right].
\endaligned $$ Next, note that $I\!I_n \left( \mathbf{y}^{(k, 1)} \right)$ attains its minimum at $n= k$, and then $$ \aligned \widetilde{\dz}_1^2 &= \sup_{k\in [1, \infty)} \inf_{n\in [1, \infty)} I\!I_n \left(\mathbf{y}^{(k, 1)} \right) \\ &= \sup_{k\in [1, \infty)} \frac{1}{b (1- \gamma)} \left(\frac{1+ \gamma}{1- \gamma} - \frac{2 k}{\fz_{k}} \right) \\ &= \frac{1}{b (1- \gamma)} \lim_{k\rightarrow \infty} \left(\frac{1+ \gamma}{1- \gamma} - \frac{2 k}{\fz_{k}} \right) \\ &= \frac{1+ \gamma}{b (1- \gamma)^2}. \endaligned $$ Now, we consider $\overline{\dz}_1$. Since $$ \sum_{n=1}^\infty v_n \left(y_n^{(k, 1)} \right)^2 = \frac{\fz_k}{b(1- \gamma)}, $$ and $$ \sum_{n=1}^\infty u_n \left( H \mathbf{y}^{(k, 1)} (n) \right)^2 = \frac{1}{b^2 (1- \gamma)^2} \left[\sum_{n= 1}^k \gamma^n \fz_n^2 + \frac{\gamma^{k+ 1} \fz_k^2}{1- \gamma} \right], $$ we have $$ \aligned \overline{\dz}_1^2&= \sup_{k\in [1, \infty)} \frac{1}{b(1- \gamma)} \left[\fz_k^{-1} \sum_{n=1}^k \gamma^n \fz_n^2 + \frac{\gamma- \gamma^{k+ 1}}{1- \gamma} \right] \\ &= \frac{1}{b(1- \gamma)} \left[ \lim_{k\rightarrow \infty} \fz_k^{-1} \sum_{n=1}^k \gamma^n \fz_n^2 + \frac{\gamma }{1- \gamma} \right] = \frac{1+ \gamma}{b (1- \gamma)^2}. \endaligned $$ In the last step, L'Hospital's rule is used to compute the limit as $k\rightarrow \infty$.
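For the reader's convenience, here is a sketch of that limit via the Stolz--Cesàro theorem, the discrete analogue of L'Hospital's rule. Since $\gamma^n \fz_n^2= \gamma^n (\gamma^{-n}- 1)^2= \gamma^{-n}- 2+ \gamma^n$ and $\fz_k- \fz_{k- 1}= \gamma^{-k} (1- \gamma)$, we have $$ \lim_{k\rightarrow \infty} \fz_k^{-1} \sum_{n=1}^k \gamma^n \fz_n^2 = \lim_{k\rightarrow \infty} \frac{\gamma^{-k}- 2+ \gamma^k}{\gamma^{-k} (1- \gamma)} = \lim_{k\rightarrow \infty} \frac{1- 2 \gamma^k+ \gamma^{2k}}{1- \gamma} = \frac{1}{1- \gamma}, $$ which is the value used in the computation of $\overline{\dz}_1^2$.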
(d) So far, by Corollary \ref{approximating}, we have obtained estimates of the optimal constant which are more precise than the basic estimates (\ref{EX13}): \bg{equation}\label{EX12} \frac{\sqrt{1+ \gamma}}{\sqrt{b} (1- \gamma)} \leqslant A \leqslant \frac{1}{\sqrt{b} (1- \sqrt{\gamma})}. \end{equation} In fact, the optimal constant can be calculated exactly. Let $a_n= \gamma^{(-n +1)/2} \left[n - (n- 1) \gamma^{1/2} \right] (n\geqslant 1)$; then $$ H \mathbf{a} (n)= n \gamma^{(-n+ 1)/2}. $$ Here we want to use $\mathbf{a}$ instead of $\mathbf{y}^{(k, 1)}$ to obtain the lower estimates. However, it is easy to check that $\mathbf{a}$ is not summable, so Theorem \ref{Var} does not apply. By the classical variational formula (\ref{A}) and L'Hospital's rule, we have $$ \aligned A^2 &\geqslant \frac{\sum_{n= 1}^\infty \gamma^n a_n^2 }{\sum_{n=1}^\infty b \gamma^n \left(a_n - a_{n-1} \right)^2} \\ &= b^{-1} \lim_{n\rightarrow \infty} \frac{n^2}{\left[n- (n- 1) \gamma^{1/2} \right]^2} \\ &= \frac{1}{b (1- \sqrt{\gamma})^2}. \endaligned $$ As a consequence, we obtain $A = \dfrac{1}{\sqrt{b} (1- \sqrt{\gamma})}$. \quad $\square$ \medskip In contrast to the first example, the second one concerns the nonlinear situation $p \neq q$; it comes from the proof of Theorem \ref{kqpB}. The optimal constant is explicit in this example. \nnd\begin{xmp1}\label{example2} {\cms Let $p \neq q$ and $N= \infty$. For $n\geqslant 1$, let $u_n= n^{-q/{p^*}}- (n+ 1)^{-q/{p^*}}$, $v_n \equiv 1$. Then (1) The optimal constant is $A= k_{q, p}$, which is attained at the sequence $\mathbf{x}$: $$ x_n= \frac{cn}{(n^r+ d)^{1/r}} - \frac{c(n- 1)}{((n-1)^r+ d)^{1/r}}, \qquad n \geqslant 1, $$ where $r=q/p - 1$, $k_{q, p}$ is defined by (\ref{k_qp}), and $c$ and $d$ are arbitrary positive constants.
(2) The basic estimates and the approximating procedure give $$ B \leqslant \overline{\dz}_1 \vee \widetilde{\dz}_1 \leqslant A = k_{q, p} B \leqslant \dz_1, $$ where $B=1$, $\overline{\dz}_1 \geqslant 1$, $\widetilde{\dz}_1 \geqslant 1$ and $\dz_1 \leqslant \left(1+ \dfrac{q}{p^*} \right)^{1/q+ 1/p^*}$. } \end{xmp1} \medskip \noindent {\bf Proof}. The first part was proved in Theorem \ref{kqpB}. It remains to compute $\dz_1$, $\overline{\dz}_1$ and $\widetilde{\dz}_1$. To compute $\dz_1$, let $$ x_n^{(1)}= n^{q/(p^*+ q)} - (n- 1)^{q/(p^*+ q)}, \qquad n\geqslant 1; $$ then we have $$ \aligned I\!I_n^* \left(\mathbf{x}^{(1)}\right) &= n^{- \frac{q}{p^*+q}} \sum_{i= 1}^n \left\{\sum_{j=i}^{\infty} \left[j^{-\frac{q}{p^*}}- (j+ 1)^{- \frac{q}{p^*}} \right] j^{\frac{q^2}{p^*(p^*+ q)}} \right\}^{p^*/q} \\ &\leqslant n^{- \frac{q}{p^*+q}} \sum_{i= 1}^n \left\{ \left(\frac{q}{p^*} \right) \sum_{j=i}^{\infty} \int_j^{j+1} x^{\frac{q^2}{p^*(p^*+ q)}- \frac{q}{p^*}- 1} \text{\rm d} x \right\}^{p^*/q} \\ &= n^{- \frac{q}{p^*+q}} \sum_{i= 1}^n \left[ \left(\frac{p^*+ q}{p^*} \right) i^{- \frac{q}{p^*+ q}} \right]^{p^*/q} \\ &\leqslant n^{- \frac{q}{p^*+q}} \left(\frac{p^*+ q}{p^*} \right)^{p^*/q} \left(1+ \int_1^n x^{- \frac{p^*}{p^*+ q}} \text{\rm d} x\right) \\ &= \left(1+ \frac{q}{p^*} \right)^{p^*/q + 1}. \\ \endaligned $$ Therefore, we obtain $$ \dz_1 = \sup_{n\in [1, \infty)} \left[ I\!I_n^* \left(\mathbf{x}^{(1)} \right)\right]^{1/p^*} \leqslant \left(1+ \frac{q}{p^*} \right)^{1/q + 1/p^*}. $$ To compute $\overline{\dz}_1$ and $\widetilde{\dz}_1$, let $$ y_n^{(k, 1)} = \begin{cases} 1, & n\leqslant k, \\ 0, & n> k.
\end{cases} $$ Obviously, we have $H \mathbf{y}^{(k, 1)} (n) = n \wedge k$, $\|\mathbf{y}^{(k, 1)} \|_{l^p(v)} = k^{1/p}$ and $$ \aligned \|H \mathbf{y}^{(k, 1)} \|_{l^q(u)} &= \left[ \sum_{n=1}^\infty u_n (n \wedge k)^q \right]^{1/q} \\ &= \left[ \sum_{n=1}^{k- 1} \left(n^{-\frac{q}{p^*}}- (n+1)^{-\frac{q}{p^*}} \right) n^q + k^{q/p} \right]^{1/q}. \endaligned $$ Hence, we obtain $$ \aligned \overline{\dz}_1 &= \sup_{k\in [1, \infty)} \frac{\|H \mathbf{y}^{(k, 1)} \|_{l^q(u)}}{\|\mathbf{y}^{(k, 1)} \|_{l^p(v)}} \\ &= \sup_{k\in [1, \infty)} \left[k^{-q/p} \sum_{n=1}^{k- 1} \left(n^{-q/p^*}- (n+1)^{-q/p^*} \right) n^q + 1 \right]^{1/q} \\ &\geqslant 1. \endaligned $$ Now, we consider $\widetilde{\dz}_1$. By direct calculation, we have $$ \aligned I\!I_n & \left(\mathbf{y}^{(k, 1)}\right) = \frac{1}{n\wedge k} \sum_{i=1}^n \left[\sum_{j=i}^{\infty} u_j \left(j \wedge k\right)^{q- 1} \right]^{p^*- 1}\\ &= \frac{1}{n \wedge k} \sum_{i=1}^{k \wedge n} \left[k^{q/p - 1} + \sum_{j=i}^{k-1} u_j j^{q-1} \right]^{p^*- 1} + \mathbbm{1}_{\{n> k\}} k^{\frac{q- p}{p- 1}} \sum_{i=k +1}^n i^{-q/p}. \endaligned $$ Obviously, $I\!I_n \left(\mathbf{y}^{(k, 1)}\right)$ is increasing for $n\geqslant k$, so it attains its minimum at some $n\in [1, k]$. Thus, we obtain $$ \aligned \inf_{n\in [1, \infty)} I\!I_n \left(\mathbf{y}^{(k, 1)}\right) &= \inf_{n\leqslant k} \frac{1}{n} \sum_{i=1}^{n} \left[k^{q/p - 1} + \sum_{j=i}^{k-1} u_j j^{q-1} \right]^{p^*- 1} \\ &\geqslant \inf_{n\leqslant k} \left[k^{q/p - 1} + \sum_{j=n}^{k-1} u_j j^{q-1} \right]^{p^*- 1} \\ &= k^{(q/p - 1)(p^*- 1)}. \endaligned $$ Therefore, we obtain $$ \aligned \widetilde{\dz}_1 &= \sup_{k\in [1, \infty)} k^{1/q- 1/p} \left( \inf_{n\in [1, \infty)} I\!I_n \left(\mathbf{y}^{(k, 1)}\right) \right)^{(p- 1)/q} \\ &\geqslant \sup_{k\in [1, \infty)} k^{1/q- 1/p} \left[k^{(q/p - 1)(p^*- 1)} \right]^{(p- 1)/q}= 1.
\qquad \square \endaligned $$ \noindent {\bf Acknowledgements} $\quad$ This paper is based on a series of studies by my supervisor, Prof. M. F. Chen. Heartfelt thanks are given to my supervisor for his careful guidance and helpful suggestions. Thanks are also given to Prof. Y. H. Mao, Prof. F. Y. Wang and Prof. Y. H. Zhang for their comments and suggestions, which led to many improvements of this paper. The research is supported by NSFC (Grant No. 11131003) and by the ``985'' project from the Ministry of Education in China.
https://arxiv.org/abs/math/0701940
Monochromatic triangles in two-colored plane
We prove that for any partition of the plane into a closed set $C$ and an open set $O$ and for any configuration $T$ of three points, there is a translated and rotated copy of $T$ contained in $C$ or in $O$. Apart from that, we consider partitions of the plane into two sets whose common boundary is a union of piecewise linear curves. We show that for any such partition and any configuration $T$ which is a vertex set of a non-equilateral triangle there is a copy of $T$ contained in the interior of one of the two partition classes. Furthermore, we give the characterization of these "polygonal" partitions that avoid copies of a given equilateral triple. These results support a conjecture of Erdos, Graham, Montgomery, Rothschild, Spencer and Straus, which states that every two-coloring of the plane contains a monochromatic copy of any nonequilateral triple of points; on the other hand, we disprove a stronger conjecture by the same authors, by providing non-trivial examples of two-colorings that avoid a given equilateral triple.
\section{Introduction} Euclidean Ramsey theory addresses the problems of the following kind: assume that a finite configuration $X$ of points is given; for what values of $c$ and $d$ is it true that every coloring of the $d$-dimensional Euclidean space by $c$ colors contains a monochromatic congruent copy of $X$? The first systematic treatise on this theory appears in 1973 in a series of papers \cite{erdi,erdii,erdiii} by Erd\H os, Graham, Montgomery, Rothschild, Spencer and Straus. Since that time, many strong results have been obtained in this field, often related to high-dimensional configurations (see, e.g., \cite{fra,kri1,kri2,mat} or the survey \cite{gra}); however, there are basic `low-dimensional' problems that remain open. In this paper, we consider the special case when $d=2$, $c=2$ and $|X|=3$; in other words, we study the configurations of three points in the Euclidean plane colored by two colors. We use the term \emph{triangle} to refer to any set of three points, including collinear triples of points, which we call \emph{degenerate} triangles. An $(a,b,c)$-triangle is a triangle whose edges, in anti-clockwise order, have respective lengths $a$, $b$ and~$c$. A $(1,1,1)$-triangle is also called a \emph{unit triangle}. We say that a set of points $X\subseteq \mathbb{R}^2$ is a \emph{copy} of a set of points $Y\subseteq\mathbb{R}^2$, if $X$ can be obtained from $Y$ by translations and rotations in the plane. A \emph{coloring} is a partition of $\mathbb{R}^2$ into two sets $\B$ and $\W$. The elements of $\B$ and $\W$ are called \emph{black points} and \emph{white points}, respectively. We use the term \emph{boundary of $\chi$} to refer to the common boundary of the sets $\B$ and $\W$. Given a coloring $\chi=(\B,\W)$, we say that a set of points $X$ is \emph{monochromatic}, if $X\subseteq\B$ or $X\subseteq\W$. 
We say that a coloring $\chi$ \emph{contains} a triangle $T$, if there exists a monochromatic set $T'$ which is a copy of $T$; otherwise, we say that $\chi$ \emph{avoids} $T$. A coloring that avoids the unit triangle is easy to obtain: consider a coloring $\chi^*$ that partitions the plane into alternating half-open strips of width $\frac{\sqrt{3}}{2}$; formally, a point $(x,y)$ is black if and only if $n\sqrt{3}<y\le\left(n+\frac{1}{2}\right)\sqrt{3}$ for some integer $n$. It can be easily checked that $\chi^*$ avoids the unit triangle. We can even change the color of some of the points on the boundaries of the strips without creating any monochromatic unit triangle. Erd\H os et al.~\cite[Conjecture~1]{erdiii} have conjectured that this is essentially the only example of colorings avoiding a given triangle: \begin{con}[Erd\H os et al.\ \cite{erdiii}]\label{con-silna} For every triangle $T$ and every coloring $\chi$, if $\chi$ avoids $T$, then $T$ is an equilateral $(l,l,l)$-triangle and $\chi$ is equal to an $l$-times scaled copy of the coloring $\chi^*$ defined above, up to possible modifications of the colors of the points on the boundary of the strips. \end{con} In Section~\ref{sec-poly} of this paper, we present a counterexample to this conjecture, and define a general class of colorings (which includes $\chi^*$ as a special case) that avoid the unit triangle. On the other hand, the following conjecture by Erd\H os et al. \cite[Conjecture~3]{erdiii} remains open: \begin{con}[Erd\H os et al.\ \cite{erdiii}]\label{con-slaba} Every coloring $\chi$ contains every nonequilateral triangle $T$. \end{con} In the past, it has been shown that Conjecture~\ref{con-slaba} holds for special types of triangles $T$ (see, e.g., \cite{erdiii,gra,sha}). Our approach is different: we prove that the conjecture is valid for a restricted class of colorings $\chi$ and arbitrary $T$.
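The claim that $\chi^*$ avoids the unit triangle is also easy to check empirically. The following Python sketch (an illustration, not part of the paper) implements the strip coloring defined above and verifies, for many random translated and rotated copies of the unit equilateral triangle, that no copy is monochromatic.

```python
import math
import random

SQRT3 = math.sqrt(3)

def is_black(x, y):
    """chi*: (x, y) is black iff n*sqrt(3) < y <= (n + 1/2)*sqrt(3), n integer."""
    n = math.floor(y / SQRT3)  # the only candidate strip index for this y
    return n * SQRT3 < y <= (n + 0.5) * SQRT3

random.seed(0)
r = 1 / SQRT3  # circumradius of a unit-side equilateral triangle
for _ in range(10000):
    # a random translated and rotated copy of the unit triangle
    cx, cy = random.uniform(-10, 10), random.uniform(-10, 10)
    phi = random.uniform(0, 2 * math.pi)
    colors = {is_black(cx + r * math.cos(phi + 2 * math.pi * k / 3),
                       cy + r * math.sin(phi + 2 * math.pi * k / 3))
              for k in range(3)}
    assert len(colors) == 2  # both colors occur: the copy is not monochromatic
```

The boundary values $y=n\sqrt{3}$ and $y=\left(n+\frac{1}{2}\right)\sqrt{3}$ are hit with probability zero by the continuous random samples, so the half-open convention does not affect the test.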
In Section~\ref{sec-uzav}, we show that every coloring that partitions $\mathbb{R}^2$ into a closed set and an open set contains every triangle $T$. Then, in Section~\ref{sec-poly}, we consider \emph{polygonal} colorings, whose boundary is a union of piecewise linear curves (see page~\pageref{def-poly} for the precise definition). We show that Conjecture~\ref{con-slaba} holds for the polygonal colorings, but there are polygonal counterexamples to the stronger Conjecture~\ref{con-silna}. In fact, we are able to characterize all these polygonal counterexamples. The following lemma from \cite{erdiii} offers a useful insight into the topic of monochromatic triangles in the two-colored plane: \begin{lem} \label{lem-osm} Let $\chi$ be a coloring of the plane. The following holds: \begin{enumerate}[(i)] \item If $\chi$ contains an $(a,a,a)$-triangle for some $a>0$, then $\chi$ contains an $(a,b,c)$-triangle, for every $b,c>0$ such that $a,b,c$ satisfy the (possibly degenerate) triangle inequality. \item If $\chi$ contains an $(a,b,c)$-triangle, then $\chi$ contains an $(x,x,x)$-triangle for some $x\in\{a,b,c\}$. \end{enumerate} \end{lem} \begin{figure} \begin{center} \includegraphics[scale=0.9]{lem1_obr.eps} \end{center} \caption[Proof of Lemma~\ref{lem-osm}]{The illustration of the proof of Lemma~\ref{lem-osm}}\label{fig-osm} \end{figure} \begin{proof} The essence of the proof is the configuration in Figure~\ref{fig-osm}. The configuration consists of two $(a,a,a)$-triangles $ABC$ and $A'B'C'$, two $(b,b,b)$-triangles $ADB'$ and $A'D'B$ and two $(c,c,c)$-triangles $BDC'$ and $B'D'C$. To prove the first part of the lemma, assume, for a given $\chi$, that there is a monochromatic $(a,a,a)$-triangle $ABC$, and choose arbitrary $b$ and $c$ satisfying the triangle inequality with $a$. Assume that $A$, $B$ and $C$ are all black. Furthermore, assume for contradiction that no $(a,b,c)$-triangle is monochromatic.
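Since the forthcoming argument is a chain of forced color deductions on finitely many points, it can also be verified exhaustively by machine. A minimal Python sketch (an illustration, not part of the proof; primed names such as $B'$ are written `Bp`) checks that once $A$, $B$, $C$ are black, every coloring of the remaining five points makes one of the six $(a,b,c)$-triangles of the configuration monochromatic.

```python
from itertools import product

# The six (a,b,c)-triangles of the configuration in Fig. 1 that the
# proof uses: BAD, CAB', CBD', B'A'D', C'A'B and C'B'D.
ABC_TRIANGLES = [("B", "A", "D"), ("C", "A", "Bp"), ("C", "B", "Dp"),
                 ("Bp", "Ap", "Dp"), ("Cp", "Ap", "B"), ("Cp", "Bp", "D")]

def some_monochromatic(col):
    """True if some listed triangle has all three vertices of one color."""
    return any(col[x] == col[y] == col[z] for x, y, z in ABC_TRIANGLES)

# Fix A, B, C black (True) and enumerate all 2^5 colorings of the rest.
for bits in product([False, True], repeat=5):
    col = dict(zip(["Ap", "Bp", "Cp", "D", "Dp"], bits))
    col.update(A=True, B=True, C=True)
    assert some_monochromatic(col)  # no coloring escapes the contradiction
```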
Considering the configuration in Fig.~\ref{fig-osm}, we deduce that the points $B'$, $D$ and $D'$ are all white, otherwise one of the $(a,b,c)$-triangles $BAD$, $CAB'$ and $CBD'$ would be monochromatic. Then, $A'$ is black, due to $B'A'D'$, and $C'$ is white, due to $C'A'B$. It follows that $C'B'D$ is monochromatic, a contradiction. The second part is proved by an analogous argument: assume that $BAD$ is an all-white monochromatic triangle and that the statement does not hold. Then $B'$, $C$ and $C'$ are all black, due to $ADB'$, $ABC$ and $BDC'$. $A'$ is white, due to $A'B'C'$; $D'$ is black, due to $A'D'B$, and $B'D'C$ is monochromatic. This concludes the proof. \end{proof} From Lemma~\ref{lem-osm}, we obtain directly the following facts: \begin{cor}\label{cor-osm} For every coloring $\chi$ the following holds: \begin{enumerate}[(i)] \item $\chi$ contains every triangle if and only if $\chi$ contains every equilateral triangle. \item $\chi$ contains every non-equilateral triangle if and only if there is an $a_0>0$ such that $\chi$ contains the equilateral $(a,a,a)$-triangle for all values of $a>0$ different from $a_0$. \item $\chi$ contains an $(a,b,c)$-triangle if and only if $\chi$ contains a $(b,a,c)$-triangle. \end{enumerate} \end{cor} \section{Coloring by closed and open sets}\label{sec-uzav} The aim of this section is to prove the following result: \begin{thm}\label{thm-uzav} Let $\chi=(\B,\W)$ be a coloring such that $\B$ is closed and $\W$ is open. Then $\chi$ contains every triangle $T$. \end{thm} By Corollary~\ref{cor-osm}, it suffices to prove Theorem~\ref{thm-uzav} for the case when $T$ is an arbitrary equilateral triangle. Moreover, since scaling does not affect the topological properties of $\B$ and $\W$, we only need to consider the case when $T$ is the unit triangle. Before stating the proof, we introduce a definition and prove an auxiliary result. \begin{defi} Let $\varepsilon>0$. 
An $(a,b,c)$-triangle whose edge-lengths satisfy $1-\varepsilon\le a,b,c \le 1+\varepsilon$ is called \emph{an $\varepsilon$-almost unit triangle}. \end{defi} Suppose that an orthogonal coordinate system is given in the plane. For $a>0$, let $Q(a)$ be the closed square with vertices ${(a,a),(-a,a),(-a,-a),(a,-a)}$. \begin{prop}\label{prop-almost} Let $Q(3)=\B \cup \W$ be a decomposition of the square $Q(3)$ into two disjoint sets such that there is no monochromatic unit triangle in $Q(3)$. Then for every $\varepsilon > 0$ both $\B$ and $\W$ contain an $\varepsilon$-almost unit triangle. \end{prop} \begin{proof} Let $\varepsilon$ be a given positive number. Assume that we are given a partition $\B\cup\W=Q(3)$ such that $Q(3)$ does not contain any monochromatic unit triangle. For contradiction, assume that one of the classes, wlog the class $\B$, does not contain any $\varepsilon$-almost unit triangle. There is a white point $S$ and a black point $R$ in $Q(1)$ such that $|R-S| < \varepsilon$ (otherwise the whole square $Q(1)$ would be monochromatic). Let $\cC$ be the unit circle centered at $S$. For every $\alpha \in \mathbb{R}$, let $K(\alpha)$ denote the point of $\cC$ with coordinates $(x_S+\cos(\alpha), y_S+\sin(\alpha))$, where $(x_S,y_S)$ are the coordinates of $S$. Note that the distance between $R$ and any point on $\cC$ is always in the interval $(1-\varepsilon, 1+\varepsilon)$; thus, for every $\alpha$, the points $K(\alpha)$ and $K(\alpha + \frac{\pi}{3})$ must have different colors, otherwise they would form a monochromatic white unit triangle with $S$ or a monochromatic black $\varepsilon$-almost unit triangle with $R$. Let $K(\alpha_0)$ be a white point, then $K(\alpha_0 + \frac{\pi}{3})$ is black (see Fig.~\ref{fig-uzav}). 
Note that for every $\alpha \in (\alpha_0-\varepsilon, \alpha_0+\varepsilon)$ the distance between $K(\alpha)$ and $K(\alpha_0 + \frac{\pi}{3})$ is in the interval $(1-\varepsilon,1+\varepsilon)$, so the whole arc $\{K(\alpha); \alpha \in (\alpha_0-\varepsilon, \alpha_0+\varepsilon)\}$ is white. Let $A=\{K(\alpha); \alpha \in (\beta_1, \beta_2)\}$ be the maximal open white arc of $\cC$ containing the point $K(\alpha_0)$. Then the whole arc $\{K(\alpha); \alpha \in (\beta_1+\frac{\pi}{3},\beta_2+\frac{\pi}{3})\}$ is black. By definition of $A$, there exists $\beta \in (\beta_2, \beta_2 + \frac{\varepsilon}{2})$ such that $K(\beta)$ is black. There also exists $\gamma \in (\beta_2+\frac{\pi}{3} - \frac{\varepsilon}{2}, \beta_2+\frac{\pi}{3})$ such that $K(\gamma)$ is black. But then $(\gamma-\beta) \in (\frac{\pi}{3}-\varepsilon,\frac{\pi}{3})$, so the distance between the black points $K(\beta)$ and $K(\gamma)$ is in the interval $(1-\varepsilon, 1)$, hence the three points $R, K(\beta),K(\gamma)$ form a black $\varepsilon$-almost unit triangle\thinspace ---\thinspace a contradiction. \end{proof} \begin{figure} \begin{center}\includegraphics{uzav_obr}\end{center} \caption{Illustration of the proof of Proposition~\ref{prop-almost}}\label{fig-uzav} \end{figure} We are now ready to prove the main result of this section. \begin{proof}[Proof of Theorem~\ref{thm-uzav}] Let $\chi=(\B,\W)$ be a coloring, with $\B$ closed. By Corollary~\ref{cor-osm}, it is sufficient to show that $\chi$ contains the unit triangle. Assume, for contradiction, that this is not the case. Let $\B_0 = Q(3) \cap \B$ and let $\W_0=Q(3)\cap\W$. Clearly, neither $\B_0$ nor $\W_0$ contain the unit triangle, so by Proposition~\ref{prop-almost}, both these sets contain $\varepsilon$-almost unit triangles for every $\varepsilon>0$. In particular, the set $\B_0$ contains, for every $n\in\mathbb{N}$, a $\frac{1}{n}$-almost unit triangle $X_nY_nZ_n$. 
Since $\B_0$ is a compact set, the set $\B_0^3 = \B_0\times\B_0\times\B_0$ is compact as well. The sequence $\{(X_n,Y_n,Z_n); n \in \mathbb{N}\}$ is an infinite sequence of points in $\B_0^3$, so there exists a convergent subsequence $\{(X_{n_k},Y_{n_k},Z_{n_k}); k \in \mathbb{N}\}$. Let $(X,Y,Z) \in \B_0^3$ be its limit. Then $X,Y,Z\in \B$ are limits of the sequences $\{X_{n_k};k \in \mathbb{N}\}$, $\{Y_{n_k};k \in \mathbb{N}\}$, and $\{Z_{n_k};k \in \mathbb{N}\}$, respectively. The Euclidean distance is a continuous function of two variables, so $|X-Y| = \lim_{k \to \infty} |X_{n_k}-Y_{n_k}|=1$, similarly $|Y-Z|=|Z-X|=1$. Thus, $\{X,Y,Z\}$ is a black unit triangle in $Q(3)$, which is a contradiction. \end{proof} \section{Polygonal colorings}\label{sec-poly} Throughout this section, $\cC(A)$ denotes the unit circle with center $A$, and $\cD(A)$ denotes the closed unit disc with center $A$. In this section, we consider \emph{polygonal} colorings of the plane, defined as follows: \begin{defi}\label{def-poly} A coloring $\chi=(\B,\W)$ is said to be \emph{polygonal}, if it satisfies the following conditions (see an example in Fig.~\ref{fig-poly}): \begin{figure}[ht] \begin{center}\includegraphics{ob1_pol1}\end{center} \caption{Example of a polygonal coloring}\label{fig-poly} \end{figure} \begin{itemize} \item Each of the two sets $\B$ and $\W$ is contained in the closure of its interior. \item The boundary of $\chi$ (denoted by $\cB$) is a union of straight line segments (called \emph{boundary segments}). Two boundary segments may only intersect at their endpoints. We allow these segments to be unbounded, i.e., a boundary segment may in fact be a half-line or a line. An endpoint of a boundary segment is called a \emph{boundary vertex}. We may assume that if exactly two boundary segments meet at a boundary vertex, then the two segments do not form a straight angle, because otherwise they could be replaced with a single boundary segment. 
Note that with this condition, the boundary segments and boundary vertices of $\chi$ are determined uniquely. \item Every bounded region of the plane is intersected by only finitely many boundary segments (which implies that every bounded region contains only finitely many boundary vertices). \end{itemize} \end{defi} Note that these conditions imply that a sufficiently small disc around an interior point of a boundary segment is separated by the boundary segment into two halves, one of which is colored black and the other white. Note also that we make no assumptions about the colors of the points on the boundary~$\cB$. We say that a coloring $\chi'$ is a \emph{twin} of a coloring $\chi$ if the two colorings have the same boundary and they assign the same colors to the points outside this boundary. The main aim of this section is to prove that every polygonal coloring contains every nonequilateral triangle, and to characterize the polygonal colorings that avoid an equilateral triangle. To achieve this, we need the following definition: \begin{defi}\label{def-strip} A coloring $\chi=(\B,\W)$ is called \emph{zebra-like} if it has the following form: the boundary of $\chi$ is a disjoint union of infinitely many continuous curves $\cL_i; i\in\mathbb{Z}$ with the following properties (see Fig.~\ref{fig:except}): \begin{enumerate}[(a)] \item There is a unit vector $\vec x$ such that for every $i\in\mathbb{Z}$, $\cL_i+\vec x=\cL_i$. In other words, the $\cL_i$ are invariant upon a translation of length 1. \item For every $i\in\mathbb{Z}$, the curve $\cL_{i+1}$ is a translated copy of $\cL_i$. Moreover, there is a unit vector $\vec y$ orthogonal to $\vec x$, so that \[ \cL_{i+1}=\cL_i+\frac{1}{2}\vec x+\frac{\sqrt{3}}{2}\vec y. \] In other words, for an arbitrary boundary point $X\in\cL_i$, the points $Y=X+\vec x$ and $Z=X+\frac{1}{2}\vec x+\frac{\sqrt{3}}{2}\vec y$ belong to the boundary as well. Note that $XYZ$ is a unit triangle, and that $Y\in\cL_i$ and $Z\in\cL_{i+1}$. 
\item For every $i\in\mathbb{Z}$, the interior of the region delimited by $\cL_i\cup\cL_{i+1}$ is colored with a different color than the interior of the region delimited by $\cL_{i-1}\cup\cL_{i}$. \item For two points $A$ and $B$, let $\theta_{AB}$ denote the size of the acute angle formed by the segment $AB$ and the vector $\vec x$. For every $i\in\mathbb{Z}$ and every two points $A\in\cL_i$ and $B\in\cL_{i+1}$, the following holds: $\|AB\|>1$ if and only if $\theta_{AB}<\frac{\pi}{3}$. This last condition can also be stated in the following equivalent form: Let $A\in\cL_i$ be an arbitrary point on the boundary. Let $B_1=A-\frac{1}{2}\vec x+\frac{\sqrt{3}}{2}\vec y$ and $B_2=A+\frac{1}{2}\vec x+\frac{\sqrt{3}}{2}\vec y$ (the two points $B_1, B_2$ belong to $\cL_{i+1}$ by the previous conditions), and let $A'=A+\sqrt{3}\vec y$ (so that $A'\in\cL_{i+2}$). Under these assumptions, the portion of $\cL_{i+1}$ between $B_1$ and $B_2$ is contained inside of the closed lens-shaped region $\cD(A)\cap\cD(A')$ and no other point of $\cL_{i+1}$ is inside this region. \end{enumerate} \end{defi} We stress that a zebra-like coloring is not necessarily polygonal. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.7]{except_coloring} \end{center} \caption{The boundary of a zebra-like coloring} \label{fig:except} \end{figure} \subsection{The result} The following theorem is the main result of this section: \begin{thm}\label{thm-poly} For a polygonal coloring $\chi$, the following conditions are equivalent: \begin{enumerate} \item[(C1)] The coloring $\chi$ is a zebra-like polygonal coloring. \item[(C2)] The coloring $\chi$ has a twin $\chi'$ which avoids the unit triangle. \item[(C3)] For every monochromatic unit triangle $ABC$, at least one of the three points $A,B$ and $C$ belongs to the boundary of $\chi$. 
\end{enumerate} \end{thm} Clearly, the condition (C2) of Theorem~\ref{thm-poly} implies the condition (C3), so we only need to prove that (C1) implies (C2) and that (C3) implies (C1). The proof is organized as follows: we first prove that (C3)$\Rightarrow$(C1). This part of the proof proceeds in several steps: first of all, we use the condition (C3) to describe the set $\cB(\chi)\cap\cC(A)$, where $A$ is a boundary point. Then we apply a continuity argument to extend this information into a global description of~$\chi$. Next, in Theorem~\ref{thm-obarv}, we prove that every (not necessarily polygonal) zebra-like coloring has a twin that avoids the unit triangle, which shows that (C1)$\Rightarrow$(C2), completing the proof of Theorem~\ref{thm-poly}. In the last part of this section, we show that Theorem~\ref{thm-poly} implies that every polygonal coloring contains a monochromatic copy $T$ of a given non-equilateral triangle, with the vertices of $T$ avoiding the boundary. \subsection{The proof} We begin with an auxiliary lemma: \begin{lem}\label{lem-triprimky} Let $q_1, q_2, q_3$ be (not necessarily distinct) lines in the plane, not all three parallel. Then exactly one of the following possibilities holds: \begin{enumerate} \item The lines $q_1, q_2, q_3$ intersect at a common point and every two of them form an angle $\frac{\pi}{3}$. \item There exist only finitely many unit triangles $ABC$ such that $A \in q_1$, $B \in q_2$ and $C \in q_3$. \end{enumerate} \end{lem} \begin{proof} It can be easily checked that the two conditions cannot hold simultaneously: in fact, if the three lines satisfy the first condition, then for every point $A\in q_1$ whose distance from the other two lines is at most 1 there are points $B\in q_2$ and $C\in q_3$ such that $ABC$ is a unit triangle. We now show that at least one of the two conditions holds. Since the three lines are not all parallel, we may assume that neither $q_1$ nor $q_2$ is parallel to $q_3$. 
Consider a Cartesian coordinate system whose $y$\nobreakdash-axis is $q_3$. There exist real numbers $a_1, a_2, b_1, b_2$ such that for $i \in \{1,2\}$ we have $q_i=\{(x,y)\in \mathbb{R}^2; y=a_ix+b_i\}$. Let $ABC$ be a unit triangle with $A=(x_1,y_1) \in q_1$, $B=(x_2,y_2) \in q_2$ and $C\in q_3$, and assume that $A,B,C$ are in the counter-clockwise order (the other case is symmetric). Then $C=(\frac{x_1+x_2}{2},\frac{y_1+y_2} {2})+\frac{\sqrt{3}}{2}(y_1-y_2, x_2-x_1)$. The point $C$ lies on $q_3$, which implies the following equality: \begin{equation} \frac{x_1+x_2}{2}+\frac{\sqrt{3}(y_1-y_2)}{2}=0 \label{equa1} \end{equation} Points $A$ and $B$ are at the distance $1$, from which we get \begin{equation} (x_1-x_2)^2+(y_1-y_2)^2=1 \label{equa2} \end{equation} By combining \eqref{equa1} and \eqref{equa2} and eliminating $y_1, y_2$ we get \[ \left(\frac{x_1+x_2}{2}\right)^2={\frac{3}{4}}\left(1-(x_1-x_2)^2\right), \] which yields \begin{equation} x_1^2+x_2^2-x_1x_2={\frac{3}{4}}. \label{equa3} \end{equation} Substituting $y_1=a_1x_1+b_1$ and $y_2=a_2x_2+b_2$ into \eqref{equa1} gives \begin{equation} {\frac{1+\sqrt{3}a_1}{2}}x_1 + {\frac{1-\sqrt{3}a_2}{2}}x_2 + {\frac{\sqrt{3}}{2}}(b_1-b_2)=0 \label{equa4} \end{equation} If both $\frac{1+\sqrt{3}a_1}{2}$ and $\frac{1-\sqrt{3}a_2}{2}$ are equal to zero, then the equality \eqref{equa4} degenerates and we get that $a_1=-\frac{1}{\sqrt{3}}$, $a_2=\frac{1}{\sqrt{3}}$ and $b_1=b_2$, so the first case of the statement holds. In the other case, suppose (wlog) that $\frac{1+\sqrt{3}a_1}{2} \neq 0$. From \eqref{equa4} we can obtain that $x_1=cx_2+d$ for some reals $c,d$. By substituting it into \eqref{equa3} we get a quadratic equation for the variable $x_2$, where the leading coefficient is equal to $c^2-c+1=(c-\frac{1}{2})^2+\frac{3}{4} > 0$, so there exist at most two possible values for $x_2$, thus at most two possible locations of $B$ and at most four possible unit triangles $ABC$. 
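The identity \eqref{equa3} is easy to test numerically. The following Python sketch (an illustration only, not part of the proof) generates random unit equilateral triangles with the vertex $C$ on the $y$-axis and checks that the $x$-coordinates of the remaining two vertices satisfy $x_1^2+x_2^2-x_1x_2=\frac{3}{4}$; since the expression is symmetric in $x_1$ and $x_2$, the orientation of the triangle does not matter.

```python
import math
import random

def residual(phi, cy):
    """Build a unit equilateral triangle with vertex C on the y-axis and
    return x1^2 + x2^2 - x1*x2 - 3/4 for the other two vertices A, B."""
    r = 1 / math.sqrt(3)  # circumradius of a unit-side equilateral triangle
    pts = [(r * math.cos(phi + 2 * math.pi * k / 3),
            cy + r * math.sin(phi + 2 * math.pi * k / 3)) for k in range(3)]
    shift = -pts[2][0]            # translate so that C = pts[2] has x = 0
    x1, x2 = pts[0][0] + shift, pts[1][0] + shift
    return x1 ** 2 + x2 ** 2 - x1 * x2 - 0.75

random.seed(1)
for _ in range(1000):
    assert abs(residual(random.uniform(0, 2 * math.pi),
                        random.uniform(-5, 5))) < 1e-9
```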
\end{proof} Throughout the rest of this section, we assume that $\chi$ is a fixed polygonal coloring satisfying the condition (C3) of Theorem~\ref{thm-poly}. Every boundary segment can be regarded as a common edge of two (possibly unbounded) polygonal regions, one of which is white and the other black. We choose an orientation of the boundary segments in the following way: a boundary segment with endpoints $A$ and $B$ is directed from $A$ to $B$ if the white region adjacent to this segment is on the left-hand side from the point of view of an observer walking from $A$ to~$B$. \begin{defi}\label{def-vhodny} A boundary point $A\in\cB$ is called \emph{feasible}, if $A$ is not a boundary vertex, and the unit circle $\cC(A)$ does not contain any boundary vertex. An \emph{infeasible} point is a point on the boundary that is not feasible. \end{defi} We may easily see that every bounded subset of the plane contains only finitely many infeasible points. The first step in the proof of the main result is the description of the set of all the boundary points at the unit distance from a given feasible point $A$. Let $A$ be a fixed feasible point, let $s$ be the boundary segment containing $A$. The set $\cB\cap\cC(A)$ is finite, by the definition of polygonal coloring; on the other hand, this set is nonempty, otherwise we could find two points $B,C$ of $\cC(A)$ such that $ABC$ is a unit triangle, with $B$ and $C$ in the interior of the same color class. By shifting the triangle $ABC$ slightly in a suitable direction, we would obtain a monochromatic unit triangle avoiding the boundary, which is forbidden by the condition (C3). In the following arguments, we will use a Cartesian coordinate system whose origin is the point $A$, and whose $x$-axis is parallel to $s$ and has the same orientation. We shall assume that the $x$-axis and the segment $s$ are directed left-to-right and the $y$-axis is directed bottom-to-top.
Assuming this coordinate system, we let $P(\alpha,A)$ denote the point of $\cC(A)$ with coordinates $\left(\cos(\alpha),\sin(\alpha)\right)$. If no ambiguity arises, we write $P(\alpha)$ instead of $P(\alpha, A)$. \begin{lem} Let $B=P(\alpha)$ be an arbitrary element of $\cB\cap\cC(A)$, let $t$ be the boundary segment containing $B$ (the segment $t$ is determined uniquely, because $A$ is a feasible point). Then the segments $s$ and $t$ are parallel. \end{lem} \begin{proof} For contradiction, assume that $s$ and $t$ are not parallel, let $\sigma\in(0,\pi)$ be the angular slope of $t$ with respect to the coordinate system established above, i.e., $\sigma$ is the angle formed by the lines containing $s$ and $t$. First of all, note that the point $C=P(\alpha+\frac{\pi}{3})$ lies on the boundary $\cB$; otherwise, a sufficiently small translation of the unit triangle $ABC$ in a suitable direction would yield a counterexample to condition (C3) (here we use the assumption that $s$ and $t$ are not parallel). Let $u$ be the boundary segment containing $C$, and let $\tau$ be the angular slope of $u$. Secondly, we may deduce that $\{\sigma,\tau\}=\{\frac{\pi}{3},\frac{2\pi}{3}\}$, and the three lines containing $s$, $t$ and $u$ all meet at one point. If this were not the case, then by Lemma~\ref{lem-triprimky} there would be only finitely many unit triangles with vertices belonging to the three segments $s$, $t$ and $u$. Thus, we could find a unit triangle $A'B'C'$ with $A'\in s$, $B'\in t$ and $C'\not\in\cB$, which is impossible, by the argument presented in the previous paragraph. By repeating this argument with $\{\alpha+\frac{i\pi}{3};\ i=1,\dots,5\}$ in place of $\alpha$, we obtain the following conclusions: \begin{itemize} \item The six points $\{P(\alpha+\frac{i\pi}{3});\ i=1,\dotsc,6\}$ all belong to the boundary $\cB$. \item The lines passing through the boundary segments containing these six points all meet at one point. 
\item The boundary segments containing $P(\alpha)$, $P(\alpha+\frac{2\pi}{3})$ and $P(\alpha+\frac{4\pi}{3})$ all have the same slope. \end{itemize} This is a contradiction, because three parallel segments intersecting a circle in three distinct points cannot belong to a single line, and two parallel lines do not intersect. \end{proof} \begin{lem}\label{lem-pipul} $P(\frac{\pi}{2})\not\in\cB$, $P(-\frac{\pi}{2})\not\in\cB$. \end{lem} \begin{proof} For contradiction, assume that $B=P(\frac{\pi}{2})\in\cB$ (the case of $P(-\frac{\pi}{2})$ is symmetric), let $t$ denote the boundary segment containing $B$. Let $C=P(\frac{\pi}{6})$. We distinguish the following cases: \begin{itemize} \item The segment $t$ has the same orientation as the segment $s$. In this case, by applying a rotation around the center $C$ and then, if $C\in\cB$, a suitable translation, we may transform the triple $ABC$ into a monochromatic triple with vertices avoiding the boundary, contradicting (C3). \item The segments $s$ and $t$ have opposite orientations (i.e., $t$ is oriented right-to-left, which means that there is a white region touching $t$ from below); furthermore, either $C\in\cB$ or $C$ is in the interior of the white color. In such case, we may rotate the configuration $ABC$ around the center of the segment $AB$ to obtain a unit triangle in the interior of the white color. \item The segments $s$ and $t$ have opposite orientations and the point $C$ is in the interior of the black color. Let $\theta$ be the maximal angle with the properties that for every $\alpha\in (\frac{\pi}{2},\frac{\pi}{2}+\theta)$ the point $P(\alpha)$ lies in the interior of the white color and for every $\alpha\in(\frac{\pi}{6},\frac{\pi}{6}+\theta)$ the point $P(\alpha)$ lies in the interior of the black color. The value of $\theta$ is well defined, and by the previous assumptions, $0<\theta<\frac{\pi}{3}$. Let $B'=P(\frac{\pi}{2}+\theta)$ and $C'=P(\frac{\pi}{6}+\theta)$. 
By the maximality of $\theta$, at least one of the two points lies on the boundary, and the boundary segment passing through this point is directed left-to-right (see Fig.~\ref{fig-pipul}). As in the first case of this proof, we may rotate and translate the configuration $AB'C'$ to obtain a monochromatic unit triangle. \end{itemize} In all cases we get a contradiction. \end{proof} \begin{figure}[ht!] \begin{center} \includegraphics{lemma3} \end{center} \caption{Illustration of the proof of Lemma~\ref{lem-pipul}} \label{fig-pipul} \end{figure} The previous two lemmas imply that if $A$ is a feasible point, then no boundary segment is tangent to $\cC(A)$. \begin{lem}\label{lem-bilapod} Let $B=P(\alpha)\in\cB$ be a point on the boundary, let $t$ be the boundary segment containing this point. If $\alpha\in (\frac{\pi}{6}, \frac{5\pi}{6})$ or $\alpha\in(\frac{7\pi}{6},\frac{11\pi}{6})$, then $s$ and $t$ have opposite orientation. If $|\alpha|<\frac{\pi}{6}$ or $|\alpha-\pi|<\frac{\pi}{6}$, then $s$ and $t$ have the same orientation. \end{lem} \begin{proof} We first consider the case $\alpha\in(\frac{\pi}{6}, \frac{5\pi}{6})$ or $\alpha\in(\frac{7\pi}{6},\frac{11\pi}{6})$. The proof is analogous to the proof of the first part of Lemma~\ref{lem-pipul}: if $t$ had the same orientation as $s$, we could take $C=P(\frac{\pi}{3}+\alpha)$ and then by rotating and translating the unit triangle $ABC$ we would get a contradiction. Note that the condition $\alpha\in(\frac{\pi}{6}, \frac{5\pi}{6})\cup(\frac{7\pi}{6},\frac{11\pi}{6})$ guarantees that $C$ is either the leftmost or the rightmost point of the triangle $ABC$, so whenever we start rotating the triangle $ABC$ around $C$, the two points $A,B$ move into the interior of the same color. The case $|\alpha|<\frac{\pi}{6}$ or $|\alpha-\pi|<\frac{\pi}{6}$ can be proven analogously. \end{proof} \begin{lem}\label{lem-pitre} $P(\alpha)\in\cB$ if and only if $P(\alpha+\frac{\pi}{3})\in\cB$. 
\end{lem} \begin{proof} It suffices to prove one implication; the other case is symmetric. Assume that for some $\alpha$ we have $P(\alpha)\in\cB$ and $P(\frac{\pi}{3}+\alpha)\not\in\cB$. Let $B=P(\alpha)$, $C=P(\frac{\pi}{3}+\alpha)$, and let $t$ be the boundary segment containing $B$. We consider the following cases: \begin{itemize} \item If $s$ and $t$ have opposite orientation, we may rotate $ABC$ around the center of $AB$ to obtain a monochromatic unit triangle in the interior of one color (see Fig.~\ref{fig-pitre}). Here we use the fact that $\alpha\neq\frac{\pi}{2}$, which follows from Lemma~\ref{lem-pipul}. \item If $s$ and $t$ have the same orientation, a small translation in a suitable direction transforms $ABC$ into a monochromatic unit triangle. \end{itemize} In both cases we get a contradiction. \end{proof} \begin{figure}[ht!] \begin{center} \includegraphics{lemma5.eps} \end{center} \caption{Illustration of the proof of Lemma~\ref{lem-pitre}} \label{fig-pitre} \end{figure} \begin{lem} For every $\theta$ there is exactly one value of $\alpha\in[\theta,\theta+\frac{\pi}{3})$ such that $P(\alpha)\in\cB$. \end{lem} \begin{proof} By Lemma~\ref{lem-pitre}, if the statement holds for some value of $\theta$, it holds for all other values of $\theta$ as well. Thus, it is enough to prove the lemma for $\theta=\frac{\pi}{2}$. Clearly, there is at least one $\alpha\in[\frac{\pi}{2},\frac{5\pi}{6})$ such that $P(\alpha)\in\cB$; otherwise, the set $\cC(A)\cap\cB$ would be empty, which is impossible. Assume that there are $\alpha$ and $\alpha'$ such that $\frac{\pi}{2}\le\alpha<\alpha'<\frac{5\pi}{6}$ with $P(\alpha)\in\cB$ and $P(\alpha')\in\cB$. Let us fix $\alpha$ and $\alpha'$ as small as possible. Let $t$ and $t'$ be the boundary segments containing $P(\alpha)$ and $P(\alpha')$. The circle $\cC(A)$ consists of alternating black and white arcs and one of these arcs has $P(\alpha)$ and $P(\alpha')$ as endpoints.
It follows that one of the segments $t$, $t'$ has the same orientation as the segment~$s$, contradicting Lemma~\ref{lem-bilapod}. \end{proof} Before we proceed with the proof of the main result, we summarize the lemmas proved so far (and introduce some related notation) in the following claim (see Fig.~\ref{fig-sum}): \begin{cla}\label{tvrz-sum} Let $A\in\cB$ be an arbitrary feasible point. The circle $\cC(A)$ intersects the boundary $\cB$ at exactly six points, which form the vertex set of a regular hexagon. These six points will be denoted by $P_0(A),\dotsc,P_5(A)$, where $P_i(A)= P(\alpha+\frac{i\pi}{3},A)$ with $\alpha\in\left( -\frac{\pi}{6},\frac{\pi}{6} \right)$ (this determines $P_i(A)$ uniquely). The boundary segments containing the six points $P_i(A)$ are all parallel to the boundary segment $s$ containing the point $A$. The boundary segments containing the points $P_0(A)$ and $P_3(A)$ have the same orientation as $s$, whereas the boundary segments containing $P_1(A)$, $P_2(A)$, $P_4(A)$ and $P_5(A)$ have opposite orientation. \end{cla} \begin{figure}[ht!] \begin{center} \includegraphics[scale=.9]{tvrz7.eps} \end{center} \caption{Illustration of Claim~\ref{tvrz-sum}} \label{fig-sum} \end{figure} Now we use Claim~\ref{tvrz-sum} to get more global information about the boundary. \begin{lem} \label{lem-uhel} Let $u_1$ and $u_2$ be two boundary segments that share a common endpoint $X$. The size of the convex angle formed by these two segments is greater than $\frac{2\pi}{3}$. \end{lem} \begin{figure}[ht] \begin{center}\includegraphics{ob2_le11_mod}\end{center} \caption{Illustration of the proof of Lemma~\ref{lem-uhel}}\label{fig-uhel} \end{figure} \begin{proof} For contradiction, assume that for some $u_1$, $u_2$ and $X$, the statement of the lemma does not hold (see Fig.~\ref{fig-uhel}). We may assume that the convex angle determined by $u_1$ and $u_2$ does not contain any other boundary segment with endpoint $X$.
Furthermore, we may assume that the segment $u_1$ is directed from $X$ to the other endpoint. For $0<t<|u_1|$, let $A(t) \in u_1$ denote the point with $|A(t)-X|=t$ and let $A'(t)=P_4(A(t))$. There exists $\varepsilon >0$ such that for all $0<t<\varepsilon$ the points $A(t)$ are feasible, and the points $A'(t)$ are feasible as well and lie on a common boundary segment. By our assumption, the convex angles between the ray $A(t)A'(t)$ and the segments $u_1, u_2$ directed from $X$ are both greater than $\frac{\pi}{2}$. It follows that if $t$ is sufficiently small, the tangent to the circle $\cC(A(t))$ at $A(t)$ intersects both segments $u_1, u_2$ and so does the circle $\cC(A(t))$, contradicting Claim~\ref{tvrz-sum}. \end{proof} An important consequence of Lemma~\ref{lem-uhel} is that no three boundary segments share a common endpoint. Hence, every connected component of the boundary is either an infinite piecewise linear curve, or a simple closed piecewise linear curve (i.e. the boundary of a simple polygon). We will call these curves \emph{boundary components} or simply \emph{components}. \begin{defi} Let $A$ be a point on the boundary. For $t\in\mathbb{R}$, let $A(t)$ denote the point of the same boundary component as $A$, such that the directed length of the part of the boundary starting at $A$ and ending at $A(t)$ is equal to~$t$. $A(t)$ is clearly a continuous function of $t$. If $A(t)$ is a feasible point, we let $p_i(t)=P_i(A(t))$, for $i=0,\dotsc,5$. \end{defi} It is easy to see that the functions $p_i$ are continuous on a sufficiently small neighborhood of every value of $t$ for which $A(t)$ is a feasible point. Our next aim is to show that these functions can be extended to continuous functions by suitably defining the values of $p_i(t)$ when $A(t)$ is not feasible.
It is not obvious that the functions $p_i$ can be extended in this way: the definition of $P_i(A(t))$ uses the Cartesian system whose $x$-axis is parallel with the boundary segment containing $A(t)$. Hence, if $A_1$ and $A_2$ are two feasible points belonging to two distinct boundary segments of the same boundary component, it might not be immediately clear that $P_i(A_1)$ belongs to the same boundary component as $P_i(A_2)$. The next lemma shows that these technical difficulties can be overcome. \begin{lem}\label{lem-limita} Let $A(t_0)$ be an infeasible point. For every $i=0,\dotsc,5$, there is a point $P_i\in\cB$ such that \[ \lim_{t\to t_0-}p_i(t)=P_i=\lim_{t\to t_0+} p_i(t) \] This means that if we define $p_i(t_0)=P_i$, then $p_i$ is continuous at $t_0$. \end{lem} \begin{proof} It is sufficient to prove the lemma for $i=0$, because $p_i(t)$ is clearly a continuous function of $A(t)$ and $p_0(t)$. Since every boundary segment contains only finitely many infeasible points, we may choose a sufficiently small $\varepsilon>0$, such that for every $t$ from the open interval $(t_0-\varepsilon,t_0)$ the points $A(t)$ are feasible and they all belong to a single boundary segment $u_1$, and similarly, for every $t'\in (t_0,t_0+\varepsilon)$ the points $A(t')$ are feasible, and they belong to a single boundary segment $u_2$. If the segments $u_1$ and $u_2$ are distinct, then $A(t_0)$ is their common endpoint. Note that for $t \in(t_0-\varepsilon,t_0)$, the points $p_0(t)$ all belong to a single boundary segment $v_1$, otherwise some of the $A(t)$ would not be feasible. By Claim~\ref{tvrz-sum}, the segment $v_1$ is parallel and consistently oriented with $u_1$. Similarly, for $t'\in(t_0,t_0+\varepsilon)$ the points $p_0(t')$ belong to a single boundary segment $v_2$, parallel and consistently oriented with $u_2$. We do not know yet whether $v_1$ and $v_2$ appear consecutively on the same component of the boundary. 
Let $B=\lim_{t\to t_0-} p_0(t)$ (clearly, the limit exists, because the points $\{p_0(t);\, t\!\in\!(t_0-\varepsilon,t_0)\}$ form an open segment whose endpoint is $B$). See Fig.~\ref{fig-limita}. \begin{figure}[ht] \begin{center}\includegraphics{ob3_le12_mod}\end{center} \caption{Illustration of the proof of Lemma~\ref{lem-limita}}\label{fig-limita} \end{figure} For $t\in (t_0-\varepsilon,t_0)$, let us fix $\alpha\in(-\frac{\pi}{6},\frac{\pi}{6})$ such that $p_0(t)=P(\alpha,A(t))$, i.e., $\alpha$ is the (signed) measure of the angle between the segment $u_1$ and the segment $A(t)p_0(t)$. Note that $\alpha$ does not depend on the choice of $t$. The circle $\cC(A(t_0))$ intersects the boundary at $B$. Let $w$ be the boundary segment starting at $B$ and directed away from $B$. By Lemma~\ref{lem-uhel}, the convex angles determined by $v_1$ and $w$ and by $u_1$ and $u_2$ have size at least $\frac{2\pi}{3}$, which implies that the convex angle $\alpha'$ between $u_2$ and $BA(t_0)$ is acute and the convex angle between $w$ and $BA(t_0)$ is obtuse. Thus, for $t'\in(t_0,t_0+\varepsilon)$ the circle $\cC(A')$ (where $A'=A(t')$) intersects the segment $w$ at a point $B'=p_i(t')$. From Claim~\ref{tvrz-sum} it follows that $w$ is parallel to $u_2$. Also, the segment $A'B'$ is parallel to the segment $A(t_0)B$, which is in turn parallel to any of the segments $A(t)p_0(t)$, for $t\in(t_0-\varepsilon,t_0)$. To finish the proof of this lemma, we need to show that $B'=p_0(t')$ (as opposed to $B'=p_i(t')$ for some $i\neq 0$), i.e., we need to prove that the angle $\alpha'$ determined by the segment $u_2$ and the segment $A'B'$ falls into the range $(-\frac{\pi}{6},\frac{\pi}{6})$. We have observed that $\alpha'\in(-\frac{\pi}{2},\frac{\pi}{2})$. This leaves us with the following three possibilities: either $B'=p_5(t')$, or $B'=p_1(t')$, or $B'=p_0(t')$. 
However, the former two possibilities are ruled out by the fact that the segment $w$ is oriented consistently with the segment $u_2$. This concludes the proof. \end{proof} \begin{lem}\label{lem-posun} Let $i\in\{0,\dotsc,5\}$, let $A\in\cB$ be an arbitrary boundary point. All the unit segments of the form $A(t)p_i(t)$ have the same slope, independently of the choice of $t$. \end{lem} \begin{proof} The slope of $A(t)p_i(t)$ (as a function of $t$) is constant in a neighborhood of every $t$ for which $A(t)$ is feasible. Moreover, this slope is a continuous function of $t$, which follows from Lemma~\ref{lem-limita}. Hence the function is constant on the whole range. \end{proof} Lemma~\ref{lem-posun} shows that every translation that maps a feasible point $A$ to the point $P_i(A)$ also maps the boundary component containing $A$ onto the boundary component containing $P_i(A)$ (which may be the same component). Composing such translations (or their inverses) we conclude that the translations that send $P_i(A)$ to $P_j(A)$ have the same component-preserving property. For the proof of Lemma~\ref{lem-usek}, we will need a slight extension of Claim~\ref{tvrz-sum} to infeasible points: \begin{cla}\label{cla-infeasible_body} Let $A\in\cB$ be an arbitrary infeasible point. \begin{enumerate}[(i)] \item At each of the six points $P_0(A), P_1(A), \dots, P_5(A)$ the circle $\cC(A)$ properly crosses the corresponding boundary component, i.e., in a sufficiently small neighborhood of such point, the circle $\cC(A)$ separates the boundary component into two portions, one lying inside $\cC(A)$ and the other one lying outside $\cC(A)$. \item There are no more proper crossings of $\cC(A)$ with boundary components. (However, $\cC(A)$ may touch the boundary at some other points.) 
\item The boundary components containing the points $P_0(A)$ and $P_3(A)$ have the same orientation as the component containing $A$, whereas the boundary components containing $P_1(A)$, $P_2(A)$, $P_4(A)$ and $P_5(A)$ have opposite orientation. \end{enumerate} \end{cla} \begin{proof} The first two statements follow from the fact that $\cC(A)$ has the same number of proper crossings with the boundary as the circle $\cC(A(t))$, where $A(t)$ is a feasible point sufficiently close to $A$. The third statement follows from Claim~\ref{tvrz-sum} applied to the point $A(t)$. \end{proof} \begin{lem}\label{lem-usek} Let $A\in\cB$ be an arbitrary boundary point. For the sake of brevity, let us write $P_i$ instead of $P_i(A)$, $\cC$ instead of $\cC(A)$ and $\cD$ instead of $\cD(A)$ in the statement and proof of this lemma. The point $P_1$ belongs to the same boundary component as $P_2$, the point $P_0$ belongs to the same boundary component as $A$ and $P_3$, and the point $P_4$ belongs to the same boundary component as $P_5$. The four portions of the boundary that connect $P_1$ with $P_2$, $P_0$ with $A$, $A$ with $P_3$, and $P_4$ with $P_5$ are all translated copies of a single piecewise linear curve. These four portions of the boundary are all contained in the closed unit disc with center~$A$. \end{lem} \begin{proof} It suffices to show that the boundary component that enters inside $\cD$ at $P_1$ leaves $\cD$ at $P_2$. The rest of the statement follows from Lemma~\ref{lem-posun}. Let $\cL$ be the boundary component that contains $P_1$. Let us follow $\cL$ from $P_1$ in the direction of its orientation, i.e., into the interior of the unit disc $\cD$, and let $X$ be the first point where $\cL$ leaves $\cC$. We observe the following: \begin{itemize} \item $X$ is neither $P_3$ nor $P_5$, because in these points, the boundary is oriented into the interior of the disc $\cD$. 
\item $X$ is not the point $P_0$: if $X=P_0$, then the translation $P_0\mapsto A$ would map the fragment of the boundary between $P_1$ and $P_0$ onto a fragment directed from $P_2$ to $A$. Similarly, the translation $P_1\mapsto A$ would map the fragment $P_1P_0$ onto a fragment directed from $P_5$ to $A$. This is impossible, because two different boundary fragments of equal length cannot both end at $A$. \item $X$ is not $P_4$: if $X$ were equal to $P_4$, we would consider the boundary component that enters into the interior of $\cC$ at the point $P_3$. Since this boundary component cannot intersect the boundary fragment between $P_1$ and $P_4$, it must leave the interior of $\cC$ at the point $P_2$. However, this is symmetric to the previous case and leads to contradiction in the same way. \item Having excluded all other possibilities, we know that $X=P_2$. \end{itemize} Let $U$ denote the fragment of $\cL$ between $P_1$ and $P_2$. By definition, this fragment properly crosses $\cC$ only at its endpoints. Applying a symmetric argument, we find that the boundary fragment from $P_5$ to $P_4$ (which is a translated copy of $U$) properly crosses $\cC$ only in its endpoints. Translating $U$ appropriately, we obtain the boundary fragments connecting $P_3$ with $A$ and $A$ with $P_0$. This concludes the proof. \end{proof} From the previous lemmas, we readily obtain the following claim. \begin{cla}\label{cla-nutne} The condition (C3) of Theorem~\ref{thm-poly} implies the condition (C1). \end{cla} \begin{proof} We check that the coloring $\chi$ satisfies the conditions of Definition~\ref{def-strip}. Let $\vec x$ denote the unit vector $\overrightarrow{AP_0}$ and let $\vec y$ be a unit vector orthogonal to $\vec x$. 
By Lemma~\ref{lem-usek}, every component of the boundary is a piecewise linear $\vec x$-periodic curve and if $\cL$ is a boundary component, then any other component is a translate of $\cL$ by an integral multiple of the vector $\overrightarrow{AP_1}=\frac{1}{2}\vec x+\frac{\sqrt{3}}{2}\vec y$. Let $\vec z$ denote this last vector and let $\cL_i=\cL_0+i\vec z$, $i\in\mathbb{Z}$, where $\cL_0$ is a boundary component chosen arbitrarily. We have $\cB=\bigcup_{i\in\mathbb{Z}}\cL_i$. The condition $(d)$ of Definition~\ref{def-strip} follows from Lemma~\ref{lem-usek}. \end{proof} It remains to show that the condition (C1) implies (C2). This is the easier part of the proof. In fact, we prove a more general claim: \begin{thm}\label{thm-obarv} Every zebra-like coloring has a twin that avoids the unit triangle. \end{thm} \begin{proof} Let $\chi$ be a zebra-like coloring, let $\cL_i$, $\vec x$ and $\vec y$ be as in Definition~\ref{def-strip}. Let $\vec z=\frac{1}{2}\vec x+\frac{\sqrt{3}}{2}\vec y$. Let $\chi'$ be the twin coloring of $\chi$ such that the points of $\cL_i$ are black in $\chi'$ if $i$ is even and white if $i$ is odd. Observe that by the definition of the coloring, the color of a point $P$ is equal to the color of $P+\vec x$ and different from the color of $P+\vec z$. Now assume that $ABC$ is a monochromatic unit triangle, wlog the three points are black. By the previous observation, no edge of the triangle forms an angle of size $\frac{\pi}{3}$ (or $\frac{2\pi}{3}$) with the vector $\vec x$. It follows that exactly one of the three edges (wlog the edge $AB$) forms with $\vec x$ an angle whose size falls into the range $(\frac{\pi}{3},\frac{2\pi}{3})$. We claim that the three points $A,B,C$ all belong to a single connected component of the black color: otherwise one of the two edges $AC$ and $BC$ would have to intersect (at least) two curves $\cL_i$ and $\cL_{i+1}$. 
By the definition of the coloring, the distance between the two points of intersection is greater than~1, contradicting the fact that $ABC$ is a unit triangle. We now deduce that $\|AB\|<1$: let $\ell$ be the line containing the segment $AB$. Note that the line $\ell$, as well as any other line not parallel with $\vec x$, must intersect all the curves $\cL_i$. Let $A'B'$ be the segment obtained as the convex hull of the intersection of $\ell$ with the closure of the black component containing $A$ and $B$. By the definition of the coloring, $\|A'B'\|\le 1$. Moreover, since the two points $A'$ and $B'$ belong to two adjacent boundary curves $\cL_i$ and $\cL_{i+1}$, they have different colors. Hence, the segment $AB$ is a proper subset of the segment $A'B'$, and $\|AB\|<1$. This shows that $ABC$ is not a unit triangle\thinspace ---\thinspace a~contradiction. \end{proof} This concludes the proof of Theorem~\ref{thm-poly}. Next, we present a simple corollary, which shows that every polygonal coloring of the plane contains any nonequilateral triangle. \subsection{Nonequilateral triangles} The following result is a direct consequence of Theorem~\ref{thm-poly}, by an easy modification of the proof of Lemma~\ref{lem-osm}. \begin{thm}\label{thm-osm} Let $XYZ$ be a nonequilateral triangle, let $\chi$ be a polygonal coloring. There is a monochromatic copy $X'Y'Z'$ of the configuration $XYZ$, such that none of the three points $X',Y'$ and $Z'$ belongs to the boundary of $\chi$. \end{thm} \begin{proof} Let $a, b$ and $c$ be the lengths of the three edges of $XYZ$. Wlog, assume that $a\neq b$. From Theorem~\ref{thm-poly} it follows that no polygonal coloring can simultaneously avoid copies of equilateral triangles of two different sizes. Hence, we may assume that $\chi$ contains a monochromatic equilateral triangle $ABC$ with edges of length $a$ whose vertices avoid the boundary of $\chi$. Assume that the three points $A$, $B$ and $C$ are all black. 
Consider the configuration of eight points in Fig.~\ref{fig-osm}. As discussed in the proof of the first part of Lemma~\ref{lem-osm}, every coloring of the five points $D$, $A'$, $B'$, $C'$ and $D'$ yields a monochromatic $(a,b,c)$-triangle. Furthermore, we may assume that the eight points all avoid the boundary of $\chi$, otherwise we might shift the configuration slightly to move the points away from the boundary, without changing the color of $ABC$ (recall that $A, B$ and $C$ already belong to the interior of the black color). This concludes the proof. \end{proof} \section{Concluding remarks} Conjecture~\ref{con-slaba} remains wide open, despite the indirect support from the results of this paper, as well as from earlier research. It is even conceivable that the validity of this conjecture depends on the particular choice of set-theoretic axioms. Such issues do not arise in this paper, since our proof techniques are very elementary. Unfortunately, these elementary techniques do not offer much hope for broad generalizations. It might nevertheless be possible to extend our results about polygonal colorings to some broader class of colorings, e.g., the colorings by monochromatic regions bounded by continuous curves. Colorings of this kind have already been studied in the context of the related problem of the chromatic number of the plane (see \cite{woo}). The zebra-like colorings provide a hitherto unknown example of colorings that avoid an equilateral triangle. We are not aware of any other examples of colorings avoiding a given triangle, but we do not dare to make any conjectures about the uniqueness of our construction, because our understanding of non-polygonal colorings is rather limited. \section*{Acknowledgments} We appreciate the useful discussions with Zden\v ek Dvo\v r\'ak, Jan Kratochv\'\i l, Martin Tancer, Pavel Valtr and Tom\'a\v s Vysko\v cil.
https://arxiv.org/abs/math/0701940
Monochromatic triangles in two-colored plane
We prove that for any partition of the plane into a closed set $C$ and an open set $O$ and for any configuration $T$ of three points, there is a translated and rotated copy of $T$ contained in $C$ or in $O$. Apart from that, we consider partitions of the plane into two sets whose common boundary is a union of piecewise linear curves. We show that for any such partition and any configuration $T$ which is a vertex set of a non-equilateral triangle there is a copy of $T$ contained in the interior of one of the two partition classes. Furthermore, we give the characterization of these ``polygonal'' partitions that avoid copies of a given equilateral triple. These results support a conjecture of Erd\H{o}s, Graham, Montgomery, Rothschild, Spencer and Straus, which states that every two-coloring of the plane contains a monochromatic copy of any nonequilateral triple of points; on the other hand, we disprove a stronger conjecture by the same authors, by providing non-trivial examples of two-colorings that avoid a given equilateral triple.
https://arxiv.org/abs/1301.5020
Generalized cover ideals and the persistence property
Let $I$ be a square-free monomial ideal in $R = k[x_1,\ldots,x_n]$, and consider the sets of associated primes ${\rm Ass}(I^s)$ for all integers $s \geq 1$. Although it is known that the sets of associated primes of powers of $I$ eventually stabilize, there are few results about the power at which this stabilization occurs (known as the index of stability). We introduce a family of square-free monomial ideals that can be associated to a finite simple graph $G$ that generalizes the cover ideal construction. When $G$ is a tree, we explicitly determine ${\rm Ass}(I^s)$ for all $s \geq 1$. As consequences, not only can we compute the index of stability, we can also show that this family of ideals has the persistence property.
\section{Introduction} Let $I$ be an ideal of the polynomial ring $R = k[x_1,\ldots,x_n]$ with $k$ a field. A prime ideal $P \subseteq R$ is an {\it associated prime} of $I$ if there exists an element $T \in R$ such that $I:\langle T \rangle = P$. The {\it set of associated primes} of $I$, denoted ${\rm Ass}(I)$, is the set of all prime ideals associated to $I$. We shall be interested in the sets ${\rm Ass}(I^s)$ as $s$ varies. Brodmann \cite{B} proved that there exists an integer $s_0$ such that ${\rm Ass}(I^s) = {\rm Ass}(I^{s_0})$ for all integers $s \geq s_0$. The least such integer $s_0$ is called the {\it index of stability}, and following \cite{HQ}, we denote it by {\rm astab}$(I)$. We are interested in the following problem which arises from Brodmann's result: determine {\rm astab}$(I)$ in terms of the invariants of $R$ and $I$. Little is known about this problem, and in particular, there are few results providing exact calculations of {\rm astab}$(I)$. An upper bound on {\rm astab}$(I)$ for any monomial ideal $I$ was given by Hoa \cite{H}. This bound is quite large and is in terms of the number of variables in the ring, the number of minimal generators of the ideal and the maximal degree of a minimal generator. Even when $I$ is a square-free monomial ideal, determining ${\rm astab}{(I)}$ remains a challenging problem. A lower bound for ${\rm astab}{(I)}$ was given in \cite{FHVT} in terms of the chromatic number of a hypergraph constructed from the primary decomposition of $I$. When $I$ is the edge ideal of a graph (a quadratic square-free monomial ideal), Chen, Morey and Sung \cite{CMS} provide an upper bound on {\rm astab}$(I)$. However, the recent work of \cite{BHR,FHVT,FHVTconj,HQ,HRV,MMV} has suggested a possible answer. In particular, Herzog and Qureshi \cite{HQ} posit that the bound ${\rm astab}(I) \leq \dim R-1 = n-1$ should hold for square-free monomial ideals $I$ (this bound is significantly smaller than that given in \cite{H}). 
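The colon-ideal condition in the definition of an associated prime is directly computable for monomial ideals, using the standard fact that for a monomial ideal $I = \langle m_1,\ldots,m_k \rangle$ and a monomial $t$, the colon ideal $I : \langle t \rangle$ is generated by the monomials $m_i/\gcd(m_i,t)$. The following Python sketch (an illustration added here; the computer experiments for this paper were done in {\em Macaulay2}) applies this fact to the ideal $\langle z, x_1x_2x_3 \rangle$, which is the cover ideal $J_1(K_{1,3})$ of the star graph studied in Section 3, and exhibits $\langle z, x_1 \rangle$ as an associated prime.

```python
# Monomials are dicts mapping variable names to exponents.

def divides(m, n):
    """Return True if the monomial m divides the monomial n."""
    return all(n.get(v, 0) >= e for v, e in m.items())

def in_ideal(gens, m):
    """A monomial lies in a monomial ideal iff some generator divides it."""
    return any(divides(g, m) for g in gens)

def colon(gens, t):
    """Generators of I : <t> for the monomial ideal I = <gens>.

    Standard fact for monomial ideals: I : <t> is generated by the
    monomials m_i / gcd(m_i, t), taken over the generators m_i of I.
    """
    out = []
    for m in gens:
        # exponent-wise: subtract min(e, exponent of v in t)
        q = {v: e - t.get(v, 0) for v, e in m.items() if e > t.get(v, 0)}
        out.append(q)
    return out

# I = <z, x1*x2*x3> is the cover ideal J_1(K_{1,3}) of the star graph.
I = [{"z": 1}, {"x1": 1, "x2": 1, "x3": 1}]
T = {"x2": 1, "x3": 1}                         # witness monomial; T is not in I
assert not in_ideal(I, T)
assert colon(I, T) == [{"z": 1}, {"x1": 1}]    # I : <T> = <z, x1>, a prime
```

Since $T \notin I$ and $I : \langle T \rangle = \langle z, x_1 \rangle$ is prime, this exhibits $\langle z, x_1 \rangle \in {\rm Ass}(I)$, consistent with Example~\ref{caset=1} below.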
Brodmann's results also suggest the following secondary question: which ideals satisfy the {\it persistence property}, that is, for which ideals does the containment ${\rm Ass}(I^s)\subseteq {\rm Ass}(I^{s+1})$ hold for all $s \geq 1$? Recently, Kaiser, Stehl\'ik, and \v Skrekovski \cite{KSS} have shown that not all square-free monomial ideals have this property. In light of this result, it is an interesting question to determine which square-free monomial ideals have the persistence property. Results in this direction have shown that the persistence property holds for many classes of square-free monomial ideals, including square-free principal Borel ideals \cite{A}, edge ideals \cite{MMV}, the cover ideals of perfect graphs \cite{FHVT}, and polymatroidal ideals \cite{HRV}. In this paper, we introduce a family of square-free monomial ideals (generalizing the notion of a cover ideal) that can be associated to a finite simple graph $G$, and study the associated primes of their powers. More formally, suppose that $G$ is a finite simple graph on the vertex set $V_G = \{x_1,x_2,\ldots,x_n\}$ with edge set $E_G$. For any $x \in V_G$, we let $N(x) = \{y ~|~ \{x,y\} \in E_G\}$ denote the set of {\it neighbours of $x$}. By identifying the vertex $x_i$ with the variable $x_i$ in $R$, we define the following ideals. \begin{definition}\label{partial} Fix an integer $t \geq 1$. The {\it partial $t$-cover ideal} of $G$ is the monomial ideal \[J_t(G) = \bigcap_{x \in V_G} \left(\bigcap_{\{x_{i_1},\ldots,x_{i_t}\} \subseteq N(x)} \langle x,x_{i_1},\ldots,x_{i_t} \rangle \right).\] \end{definition} \noindent When $t=1$, our construction is simply the cover ideal of a finite simple graph $G$ (see Section 2 for more details). Recall that a graph is a {\it tree} if it is connected and has no cycles. Our main result is to show that when $G$ is a tree, we can compute the index of stability of $J_t(G)$, and show that this family has the persistence property.
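To illustrate Definition~\ref{partial}, we record a small worked example, computed directly from the definition (with the convention that an intersection over an empty family is all of $R$).

\begin{example}
Let $t = 2$ and let $G = K_{1,3}$ be the star graph with $V_G = \{z,x_1,x_2,x_3\}$ and edges $\{z,x_i\}$ for $i=1,2,3$. Each vertex $x_i$ has the single neighbour $z$, so $N(x_i)$ contains no $2$-element subsets and these vertices impose no conditions; only the vertex $z$, with $N(z) = \{x_1,x_2,x_3\}$, contributes. Thus
\[J_2(K_{1,3}) = \langle z,x_1,x_2 \rangle \cap \langle z,x_1,x_3 \rangle \cap \langle z,x_2,x_3 \rangle = \langle z, x_1x_2, x_1x_3, x_2x_3 \rangle,\]
in agreement with Lemma~\ref{generators} below.
\end{example}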
\begin{theorem}\label{maintheorem} Let $G = (V_G,E_G)$ be a tree on $n$ vertices and fix any integer $t \geq 1$. Then the partial $t$-cover ideal $J_t(G)$ satisfies the persistence property. Furthermore \[{\rm astab}(J_t(G)) = \left\{ \begin{array}{ll} 1 & \mbox{if $t=1$} \\ \min\{s ~|~ s(t-1) \geq \Delta(G) -1 \} & \mbox{if $t > 1$} \end{array} \right. \] where $\Delta(G)$ is the maximal degree of $G$, i.e., the largest degree of a vertex of $G$. \end{theorem} \noindent In fact, we prove a stronger result (Theorem \ref{maintheoremtrees}) by determining the elements of ${\rm Ass}(J_t(G)^s)$ for all $s \geq 1$. Note that $\Delta(G) \leq n-1$, so the upper bound suggested by Herzog and Qureshi also holds for this family. Our paper is structured as follows. In Section 2, we review the required ingredients of associated primes and describe some of the properties of $J_t(G)$. In Section 3, we specialize to the case that $G = K_{1,n}$ is the star graph. These graphs will play an important role in our proof of Theorem \ref{maintheorem}; we also use these graphs to answer a question raised in \cite{FHVT}. Section 4 is devoted to the proof of our main result. \noindent {\bf Acknowledgements.} Some of the results of Section 3 first appeared in \cite{Bhat}. {\em Macaulay2} \cite{Mt} was used for computer experiments. We thank T. H\`a and C. Francisco for their feedback. The third author acknowledges the support of an NSERC Discovery Grant. \section{Preliminaries} We continue to use the terminology and definitions introduced in the previous section. Throughout this paper, $\mathcal{G}(I)$ denotes the unique set of minimal generators of a monomial ideal $I$. For any $W = \{x_{i_1},\ldots,x_{i_s}\} \subseteq V_G$, we let $x_W = x_{i_1}\cdots x_{i_s} \in R$. We first explain the significance of the name partial $t$-cover ideal in Definition \ref{partial}. 
A {\it vertex cover} of a graph $G$ is a subset $W \subseteq V_G$ which satisfies the following property: for any $x \in V_G$, either $x \in W$ or $N(x) \subseteq W$. In other words, all the edges containing $x$ are covered. We generalize this definition: a {\it partial $t$-cover} is a subset $W \subseteq V_G$ which satisfies the following property: for any $x \in V_G$, either $x \in W$ or there exists some subset $S \subseteq N(x)$ with $|S| = |N(x)|-t+1$ and $S \subseteq W$. That is, for each $x \in V_G$, all but at most $t-1$ of the edges containing $x$ are covered by $W$. When $t =1$, this is simply the definition of a vertex cover. The following lemma justifies our choice of name for $J_t(G)$. \begin{lemma} \label{genspartial} Let $G = (V_G,E_G)$ be a finite simple graph and $t \geq 1$ an integer. Then \[J_t(G) = \langle x_W ~|~ \mbox{$W \subseteq V_G$ is a partial $t$-cover} \rangle.\] \end{lemma} \begin{proof} Let $m \in \mathcal{G}(J_t(G))$, and so $m = x_W$ for some $W \subseteq V_G$. Suppose $W$ is not a partial $t$-cover. Then there exists a vertex $x$ such that $x \not\in W$, and for all $S \subseteq N(x)$ with $|S| = |N(x)|-t+1$, there is some $x_j \in S \setminus W$. We claim that there are $t$ neighbours of $x$ not in $W$. Relabel so that $N(x) = \{x_1,\ldots,x_{|N(x)|}\}$, and let $S_1 = \{x_1,\ldots,x_{|N(x)|-t+1}\}$. Because $W$ is not a partial $t$-cover, there exists $x_{i_1} \in S_1 \setminus W$. Set $S_2 = (S_1 \setminus \{x_{i_1}\}) \cup \{x_{|N(x)|-t+2}\}$. Again, since $W$ is not a partial $t$-cover, there exists $x_{i_2} \in S_2 \setminus W$. Repeating this $t$ times, we find $t$ neighbours of $x$, say $\{x_{i_1},\ldots,x_{i_t}\}$, that do not appear in $W$. It then follows that $m = x_W \not\in \langle x,x_{i_1},\ldots,x_{i_t} \rangle$ since none of these variables appear in $x_W$. But this contradicts the fact that $m \in J_t(G) \subseteq \langle x,x_{i_1},\ldots,x_{i_t} \rangle$. Therefore $W$ is a partial $t$-cover.
For the converse, let $x_W$ be any square-free monomial which corresponds to a partial $t$-cover. Rewrite $J_t(G)$ as \footnotesize \[J_t(G) = \left(\bigcap_{x \in W} \left(\bigcap_{\{x_{i_1},\ldots,x_{i_t}\} \subseteq N(x)} \langle x,x_{i_1},\ldots,x_{i_t} \rangle \right)\right) \cap \left( \bigcap_{x \in V_G \setminus W} \left(\bigcap_{\{x_{i_1},\ldots,x_{i_t}\} \subseteq N(x)} \langle x,x_{i_1},\ldots,x_{i_t} \rangle \right)\right) .\] \normalsize If $x \in W$, then $x_W \in \langle x,x_{i_1},\ldots,x_{i_t}\rangle$, so $x_W$ is in the first intersection. If $x \not\in W$, then there exists a subset $S \subseteq N(x)$ with $|N(x)|-t+1$ elements such that $S \subseteq W$. But then for any subset $T \subseteq N(x)$ with $|T| = t$, $S \cap T \neq \emptyset$. This implies that $x_W \in \langle x,x_{i_1},\ldots,x_{i_t} \rangle$ for each subset $\{x_{i_1},\ldots,x_{i_t}\}$ of $N(x)$ of size $t$. So $x_W$ is in the second intersection, thus completing the proof. \end{proof} \begin{remark} The Alexander dual (see \cite{MS} for the definition) of $J_t(G)$ is also of interest: \[ I_t(G):= J_t(G)^\vee = \sum_{x \in V_G} \langle xx_{i_1}\cdots x_{i_t} ~|~ \{x_{i_1},\dots, x_{i_t}\}\subseteq N(x)\rangle. \] If $t = 1$, then $I_1(G)$ is the edge ideal of $G$, and if $t=2$, then $I_2(G)$ is the 2-path ideal of $G$ (see \cite{CD} for the definition). The ideals $I_t(G)$ can be viewed as generalized edge ideals. In a future paper, we will investigate some of the properties of $I_t(G)$. \end{remark} We turn to the relevant results on associated primes of square-free monomial ideals. Via the technique of localization, and using the fact that localization and taking powers commute, we simply need to determine when the maximal ideal is an associated prime of a monomial ideal. The following lemma justifies this reduction. The proof is similar to the proof of \cite[Lemma 2.11]{FHVT}, so is omitted. 
Given a graph $G = (V_G,E_G)$ and subset $P \subseteq V_G$, we write $G_P$ for the {\it induced graph} on $P$, i.e., the graph with vertex set $P$, and edge set $E_{G_{P}} = \{e \in E_G ~|~ e \subseteq P\}$. \begin{lemma}\label{localization} Let $G$ be a graph on the vertex set $\{x_1, \dots, x_n\}$, and let $J_t(G)$ be the partial $t$-cover ideal of $G$. The following are equivalent: \begin{enumerate} \item[$(i)$] $P= \langle x_{i_1}, \dots, x_{i_r}\rangle \in {\rm Ass}(J_t(G)^s)$ in $R = k[x_1,\ldots,x_n]$ \item[$(ii)$] $P=\langle x_{i_1}, \dots, x_{i_r}\rangle \in {\rm Ass} (J_t(G_P)^s)$ in $R_P = k[x_{i_1},\ldots,x_{i_r}]$. \end{enumerate} \end{lemma} The next lemma shows $P \in {\rm Ass}(J_t(G)^s)$ gives a necessary condition on the graph $G_P$. \begin{lemma}\label{connected} Let $G$ be a graph on the vertex set $\{x_1, \dots, x_n\}$, and let $J_t(G)$ be the partial $t$-cover ideal of $G$. If $P =\langle x_{i_1}, \dots, x_{i_r}\rangle \in {\rm Ass}(J_t(G)^s)$, then $G_P$ is connected. \end{lemma} \begin{proof} By Lemma \ref{localization}, it is enough to show that if $\langle x_1,\ldots,x_n \rangle \in {\rm Ass}(J_t(G)^s)$ for some $s$, then $G$ is connected. Suppose $G$ is not connected, i.e., $G = G_1 \cup G_2$ with $G_1 \cap G_2 = \emptyset$. After relabeling the vertices, we can assume the vertices of $G_1$ are $\{y_1,\ldots,y_a\}$ and the vertices of $G_2$ are $\{z_1,\ldots,z_b\}$. If $m \in \mathcal{G}(J_t(G))$, then $m = m_ym_z$ where $m_y$ is a square-free monomial in the $y$ variables, and $m_z$ is a square-free monomial in the $z$ variables, and furthermore, we must have $m_y \in \mathcal{G}(J_t(G_1))$, and $m_z \in \mathcal{G}(J_t(G_2))$. 
Because $\langle x_1,\ldots,x_n \rangle = \langle y_1,\ldots,y_a,z_1, \ldots,z_b \rangle$, and $\langle x_1,\ldots,x_n \rangle \in {\rm Ass}(J_t(G)^s)$, there exists a monomial $T \not\in J_t(G)^s$ such that \begin{eqnarray*} Ty_1 &= &m_1\cdots m_sM ~~\mbox{with $m_i \in \mathcal{G}(J_t(G))$} \\ & = & m_{y,1}m_{z,1} \cdots m_{y,s}m_{z,s}M_yM_z ~~ \mbox{with $m_i = m_{y,i}m_{z,i}$} \end{eqnarray*} where $m_{y,i} \in \mathcal{G}(J_t(G_1))$ and $m_{z,i} \in \mathcal{G}(J_t(G_2))$, and $M_y$ (respectively $M_z$) is a monomial in the $y$ variables (respectively the $z$ variables). So, $T = (m_{z,1}\cdots m_{z,s}M_z)T'$ where $T'$ is a monomial in the $y$ variables. But we also know that $Tz_1 \in J_t(G)^s$, so a similar argument allows us to write $T = (u_{y,1}\cdots u_{y,s}U_y)T''$ where $T''$ is a monomial in the $z$ variables, $U_y$ is a monomial in the $y$ variables, and each $u_{y,j} \in \mathcal{G}(J_t(G_1))$. But this means \begin{eqnarray*} T &= &(m_{z,1}\cdots m_{z,s}M_z)(u_{y,1}\cdots u_{y,s}U_y) = (u_{y,1}m_{z,1})\cdots (u_{y,s}m_{z,s})U_yM_z. \end{eqnarray*} Now each $u_{y,i}m_{z,i} \in \mathcal{G}(J_t(G))$, so $T \in J_t(G)^s$, a contradiction. Thus $G$ is connected. \end{proof} Section 3 focuses on {\it star graphs} $G = K_{1,n}$. These are the graphs with vertex set $V_G = \{z,x_1,\ldots,x_n\}$ and edge set $E_G = \{\{z,x_i\} ~|~ 1 \leq i \leq n\}$. The generators of $J_t(K_{1,n})$, as described by the next lemma, follow directly from the definitions: \begin{lemma} \label{generators} Let $G = K_{1,n}$ with $V = \{z,x_1,\ldots,x_n\}$, and let $n \geq t \geq 1$. Then \[J_t(G) = \langle z \rangle + \langle x_{j_1}\cdots x_{j_{n-t+1}} ~|~ \{j_1,\ldots,j_{n-t+1}\} \subseteq \{1,\ldots,n\} \rangle.\] \end{lemma} The next example explains what we know about ${\rm Ass}(J_t(K_{1,n})^s)$ when $t=1$; the situation for $t \geq 2$ is explored in the next section. \begin{example}\label{caset=1} Let $G = K_{1,n}$ and $t=1$. 
By Lemma \ref{generators}, $J_1(G) = \langle z,x_1x_2\cdots x_n \rangle$. But this is a complete intersection, so for all $s \geq 1$, \[{\rm Ass}(J_1(G)^s) = {\rm Ass}(J_1(G)) = \{ \langle z,x_i \rangle ~|~ 1 \leq i \leq n\}. \] There are at least two ways to prove this result. For any complete intersection $J$, $J^s = J^{(s)}$, the $s$-th symbolic power of $J$ (see \cite{ZS}) and thus ${\rm Ass}(J^s) = {\rm Ass}(J)$ for all $s \geq 1$. Alternatively, Gitler, Reyes, and Villarreal have shown \cite[Corollary 2.6]{GRV} that $J_1(G)$ is normal, i.e., $J_1(G)^s = \overline{J_1(G)^s}$, whenever $G$ is a bipartite graph, whence the conclusion again follows. Because ${\rm astab}(J_1(G)) = 1$, $J_1(G)$ has the persistence property. \end{example} \section{Star graphs} Fix integers $n \geq t \geq 1$. In this section we will completely describe the sets ${\rm Ass}(J_t(G)^s)$ when $G = K_{1,n}$. We use our results to give a new answer to a question raised by Francisco, H\`a, and the third author in \cite{FHVT}. Our main result is a corollary of the following theorem: \begin{theorem}\label{maintheoremstar} Fix integers $n \geq t \geq 1$ and let $G = K_{1,n}$ be the star graph on $V_G = \{z,x_1,\ldots,x_n\}$. Set $J_t = J_t(G)$. The following are equivalent: \begin{enumerate} \item[$(i)$] $\langle z,x_1,\ldots,x_n \rangle \in{\rm Ass}(J_t^s)$ \item[$(ii)$] $s(t-1) \geq n-1$. \end{enumerate} \end{theorem} We postpone the proof, but record its consequences: \begin{corollary}\label{corstar} Fix integers $n \geq t \geq 1$ and let $G = K_{1,n}$ be the star graph on $V_G = \{z,x_1,\ldots,x_n\}$. For any $s \geq 1$, \[{\rm Ass}(J_t(G)^s) = \left. 
\left\{\langle z,x_{i_1},\ldots,x_{i_r} \rangle ~\right|~ t \leq r \leq \min\{n,s(t-1)+1\}\right \}.\] Moreover, \[{\rm astab}(J_t(G)) = \left\{ \begin{array}{ll} 1 & \mbox{if $t=1$} \\ \min\{s ~|~ s(t-1) \geq n -1 \} & \mbox{if $t > 1$.} \end{array} \right.\] \end{corollary} \begin{proof} The result on ${\rm astab}(J_t(G))$ follows from the first statement. Let $\mathcal{P}$ denote the set on the right hand side of the first statement. Let $P \in {\rm Ass}(J_t(G)^s)$. Because $G_P$ is connected by Lemma \ref{connected}, $P = \langle z,x_{i_1},\ldots,x_{i_r} \rangle$, i.e., $P$ cannot be generated by a subset of the $x$ variables. Note that this means that $G_P = K_{1,r}$ for some $r$. Either $P$ is a minimal prime of $J_t(G)$, or $P$ contains a minimal prime of $J_t(G)$, thus showing that $t \leq r$. By Lemma \ref{localization}, $\langle z,x_{i_1},\ldots,x_{i_r} \rangle \in {\rm Ass}(J_t(G_P)^s)$, and so by Theorem \ref{maintheoremstar}, $s(t-1) \geq r-1$, i.e., $r \leq s(t-1) +1$. Also, it is clear that $r \leq n$, so $P \in \mathcal{P}$. Conversely, suppose that $P = \langle z,x_{i_1},\ldots,x_{i_r} \rangle \in \mathcal{P}$. Abusing notation, let $P \subseteq V_G$ denote the corresponding vertices. After localizing at $P$, $P \in {\rm Ass}(J_t(G_P)^s)$ by Theorem \ref{maintheoremstar} since $s(t-1) \geq r-1$. Lemma \ref{localization} then gives $P \in {\rm Ass}(J_t(G)^s)$. \end{proof} To prove Theorem \ref{maintheoremstar} we require some information about our annihilator. \begin{lemma}\label{annlemma} Fix integers $n \geq t \geq 1$ and let $G = K_{1,n}$ be the star graph on $V_G = \{z,x_1,\ldots,x_n\}$. Set $J_t = J_t(G)$. Suppose that there exists a monomial $T \in k[z,x_1,\ldots,x_n]$, $T \notin J_t^s$, such that $J_t^s:\langle T \rangle = \langle z,x_1,\ldots,x_n \rangle$. If $T = z^eT'$ where $z \nmid T'$, then $T \mid z^e(x_1 \cdots x_n)^{s-e-1}.$ \end{lemma} \begin{proof} It suffices to prove that $T' | (x_1\cdots x_n)^{s-e-1}$.
Suppose that there exists some $x_i$ such that $x_i^{s-e} | T'$. Now $x_iT = z^ex_iT' \in J_t^s$, so \[z^ex_iT' = m_1m_2 \cdots m_sM ~~\mbox{with $M \in k[z,x_1,\ldots,x_n]$ and $m_i \in \mathcal{G}(J_t)$}.\] We cannot have $x_i|M$. If it did, then we could cancel $x_i$ from both sides and have $T =z^eT' = m_1\cdots m_s(M/x_i) \in J_t^s$, which contradicts the fact that $T \not\in J_t^s$. So, the variable $x_i$ appears at least $s-e+1$ times in $z^ex_iT'$, and thus, must appear in at least $s-e+1$ of $m_1,\ldots,m_s$, because each $m_j$ is square-free. In particular, we can assume $m_1 = x_im'_1$. This means at most $e-1$ of $m_1,\ldots,m_s$ can be equal to $z$ (no minimal generator of $J_t$ is divisible by both $z$ and $x_i$ by Lemma \ref{generators}). So, $z$ must divide $M$, i.e., $M = zM'$. So, to summarize, \[z^ex_iT' = m_1m_2\cdots m_sM = (x_im'_1)m_2\cdots m_s(zM').\] If we cancel $x_i$ from both sides, we get \[T = z^eT' = (m'_1)m_2\cdots m_s(zM').\] But $m_2,\ldots,m_s,z \in \mathcal{G}(J_t)$, which means $T \in J_t^s$. This is our desired contradiction. \end{proof} We are now ready to prove Theorem \ref{maintheoremstar}. \begin{proof} (of Theorem \ref{maintheoremstar}) Note that if $t=1$, then Example \ref{caset=1} implies $\langle z, x_1,\ldots,x_n \rangle \in {\rm Ass}(J_1(G)^s)$ if and only if $n=1$ if and only if $0 = s(t-1) \geq n-1$. So, we assume $t > 1$. $(i) \Rightarrow (ii)$. If $\langle z,x_1,\ldots,x_n \rangle \in {\rm Ass}(J_t^s)$, then there exists a monomial $T \notin J_t^s$ such that $J_t^s: \langle T \rangle = \langle z,x_1,\ldots,x_n\rangle$. Rewrite $T$ as $T = z^eT'$ where $z \nmid T'$. We now claim that \begin{equation}\label{specialmonomial} z^e(x_1\cdots x_{n-t+2})^{s-e}(x_{n-t+3}\cdots x_n)^{s-e-1} \in J_t^{s+1}. \end{equation} Indeed, by Lemma \ref{annlemma}, $z^eT' | z^e(x_1\cdots x_n)^{s-e-1}$. 
Now $x_1z^eT' \in J_t^s$, which means $$z^ex_1^{s-e}(x_2\cdots x_n)^{s-e-1} \in J_t^s.$$ But $x_2\cdots x_{n-t+2} \in J_t$, so multiplying these two elements together gives us the desired element in $J_t^{s+1}$. We proceed by a degree argument. By \eqref{specialmonomial} there exist generators $m_1, \ldots, m_{s+1}$ of $J_t$ such that \[z^e(x_1\cdots x_{n-t+2})^{s-e}(x_{n-t+3}\cdots x_n)^{s-e-1} =m_1 \cdots m_{s+1}M.\] By Lemma \ref{generators}, $f$ of these generators are of the form $z$, and the remaining $s+1-f$ generators are of degree $n-t+1$ and have the form $x_{j_1}\cdots x_{j_{n-t+1}}$ for some $\{j_1,\ldots,j_{n-t+1}\} \subseteq \{1,\ldots,n\}$. Note that we must have $f \leq e$, and thus, looking at the degree of the generators in the $x$ variables, we must have \[ (s+1-f)(n-t+1) \leq (n-t+2)(s-e) + (t-2)(s-e-1) = (s-e)n - (t-2).\] Expanding out the left hand side gives \[sn-st+s+n-t+1-fn+ft-f \leq sn -en -t + 2.\] Removing $sn$ and $-t$ from both sides and using the fact that $-en \leq -fn$ and $0 \leq f(t-1)$ gives $-st+s+n \leq 1$, which implies $s(t-1) \geq n-1$, as desired. $(ii) \Rightarrow (i)$ Let $s_0 = \min\{s ~|~ s(t-1) \geq n-1 \}$. We first show that $\langle z,x_1,\ldots,x_n \rangle \in {\rm Ass}(J_t^{s_0})$. We construct our annihilator as follows. Write out the variables $x_1,\ldots,x_n$ as a repeating sequence, i.e., \begin{equation}\label{word} x_1,x_2,\ldots,x_n,x_1,x_2,\ldots,x_n,x_1,x_2,\ldots,x_n,x_1,\ldots. \end{equation} Let $T$ be the product of the first $s_0(n-t+1)-1$ variables in this sequence, that is, \[T = \underbrace{x_1x_2\cdots x_nx_1x_2 \cdots x_nx_1 \cdots x_j}_{s_0(n-t+1)-1}.\] The monomial $T \not\in J_t^{s_0}$. We can see this by a degree argument because $J_t$ is generated by monomials in the $x$ variables of degree $n-t+1$. We make the crucial observation that the index $j$ of the last variable in $T$ has the property that $n-t+1 \leq j \leq n$. 
To see this, note that after $n-t+1$ steps in the sequence \eqref{word} we are at vertex $x_{n-t+1}$, after $2(n-t+1)$ steps in the sequence \eqref{word}, we are at the vertex $x_{n-2(t-1)} = x_{n-2t+2}$, after $3(n-t+1)$ steps, we are at $x_{n-3t+3}$, ..., and finally, after $(s_0-1)(n-t+1)$ steps, we are at vertex $x_{n-(s_0-1)(t-1)} = x_{n-s_0t+s_0+t-1}$. By our choice of $s_0$, $-s_0t+s_0 \leq -n+1$, so $n-s_0t+s_0+t-1 \leq t$. In fact, after $(s_0-1)$ steps of size $(n-t+1)$ in our sequence \eqref{word}, this is the first time we arrive at an index $\leq t$. At the same time, by our choice of $s_0$, we have $(s_0-1)(t-1) < n-1$, so we are at an index $\geq 1$. When constructing $T$, we go an additional $n-t$ steps in the sequence. This means that we arrive at an index between $n-t+1$ and $n$. We next show $J_t^{s_0}:\langle T \rangle = \langle z,x_1,\ldots,x_n \rangle$. Now $zT \in J_t^{s_0}$. To see this, note that $z$ is a minimal generator of $J_t$, and every $n-t+1$ consecutive variables in \eqref{word} is also a generator of $J_t$. Thus, the product of the first $(s_0-1)(n-t+1)$ elements of \eqref{word} is in $J_t^{s_0-1}$, and so $z \in J_t^{s_0}:\langle T \rangle $. Now take $x_i$ with $i \in \{1,\ldots, n\}$. To show $x_iT \in J_t^{s_0}$, take the first $s_0(n-t+1)-1$ variables in \eqref{word}, and insert $x_i$ after its first appearance, i.e., \[ x_1,x_2,\ldots,x_i,x_i,x_{i+1},\ldots,x_n,x_1,x_2,\ldots,x_n,x_1,x_2,\ldots,x_n,x_1,\ldots,x_j. \] Think of these variables as being placed around a circle. Starting at the second $x_i$, move around the circle, grouping $n-t+1$ variables together. Because we have $s_0(n-t+1)$ variables, we end up with $s_0$ groups. Because the index of $j$ is between $n-t+1$ and $n$, each group will consist of $n-t+1$ distinct variables, and thus, by Lemma \ref{generators}, when we multiply each group of $n-t+1$ distinct variables together, we have a generator of $J_t$. 
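The location of the last index $j$ in this construction is easy to check mechanically. The following Python sketch (our own illustration, not part of the argument; the helper name \texttt{last\_index} is ours) computes, for pairs $(n,t)$ with $2 \leq t < n$, the value $s_0 = \min\{s \mid s(t-1) \geq n-1\}$, the length $s_0(n-t+1)-1$ of $T$, and the index $j$ of the last variable of $T$, and verifies both the claim $n-t+1 \leq j \leq n$ and the degree count behind $T \notin J_t^{s_0}$.

```python
import math

def last_index(n, t):
    """For the star K_{1,n} and 2 <= t < n, return (s0, len(T), j), where
    T is the product of the first s0*(n-t+1)-1 entries of the repeating
    sequence x_1, ..., x_n, x_1, ... and j is the index of its last variable."""
    s0 = math.ceil((n - 1) / (t - 1))      # min{s : s(t-1) >= n-1}
    length = s0 * (n - t + 1) - 1          # number of variables in T
    j = (length - 1) % n + 1               # 1-based position in the cycle
    return s0, length, j

for n in range(3, 50):
    for t in range(2, n):
        s0, length, j = last_index(n, t)
        # the claim from the proof: the last index lands in [n-t+1, n]
        assert n - t + 1 <= j <= n, (n, t, j)
        # degree obstruction: deg_x(T) < s0*(n-t+1), while any monomial of
        # J_t^{s0} not divisible by z has x-degree at least s0*(n-t+1)
        assert length < s0 * (n - t + 1)
print("index claim verified for all tested (n, t)")
```

For instance, with $n=5$ and $t=2$ one gets $s_0=4$, so $T$ is the product of $15$ variables and ends at $x_5$, inside the predicted window $[4,5]$.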
But this means that $x_iT \in J_t^{s_0}$ since $x_iT$ is expressed as a product of $s_0$ generators. Thus, $\langle z,x_1,\ldots,x_n \rangle \subseteq J_t^{s_0}:\langle T \rangle \subsetneq \langle 1 \rangle$, which completes the proof for the case $s_0$. Now suppose that $s > s_0$. Let $e = s-s_0$ and let $T$ be as above. We will show that $J_t^{s}:\langle z^eT \rangle = \langle z,x_1,\ldots,x_n \rangle$. By a degree argument $z^eT \not\in J_t^s$, but $z(z^eT) \in J_t^s$ because, as noted above, $T \in J_t^{s_0-1}$ and $z^{e+1} \in J_t^{e+1}$. Similarly, $x_iz^eT \in J_t^{s}$ because $z^e \in J_t^e$, and as above, $x_iT \in J_t^{s_0}$. Hence $J_t^{s}:\langle z^eT \rangle = \langle z,x_1,\ldots,x_n \rangle$. \end{proof} \subsection{An application} Corollary \ref{corstar} allows us to answer a question raised by Francisco, H\`a, and the third author \cite{FHVT}. We first recall some terminology. A {\it hypergraph} $\mathcal{H}$ is a pair of sets $\mathcal{H} = (\mathcal{X},\mathcal{E})$ where $\mathcal{X} = \{x_1,\ldots,x_n\}$ and $\mathcal{E}$ is a collection of subsets $\{E_1,\ldots,E_t\}$ with each $E_i \subseteq \mathcal{X}$. We call $\mathcal{H}$ a {\it simple} hypergraph if $|E_i| \geq 2$ for all $i$, and if $E_i \subseteq E_j$, then $i=j$. (When each $|E_i| = 2$, then $\mathcal{H}$ is a finite simple graph.) As in the case of graphs, we say a subset $W \subseteq \mathcal{X}$ is a {\it vertex cover} if $W \cap E \neq \emptyset$ for all $E \in \mathcal{E}$. In a manner analogous to the cover ideal, we can define the cover ideal of $\mathcal{H}$: \[J(\mathcal{H}) = \langle x_W ~|~ W = \{x_{i_1},\ldots,x_{i_t}\} \subseteq \mathcal{X} ~~\mbox{is a vertex cover} \rangle.\] A {\it colouring} of $\mathcal{H}$ is an assignment of a colour to each vertex of $\mathcal{X}$ so that no edge $E$ is mono-coloured, i.e., each edge must contain at least two vertices of different colours.
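To make the cover ideal construction concrete, the following Python sketch (a toy illustration of ours, not taken from the original text) enumerates the minimal vertex covers of the simple hypergraph with $\mathcal{X} = \{z,x_1,x_2,x_3\}$ and $\mathcal{E} = \{\{z,x_i,x_j\} \mid 1 \leq i < j \leq 3\}$; the resulting monomials $z$ and $x_ix_j$ are exactly the minimal generators of $J_2(K_{1,3})$ described by Lemma \ref{generators}.

```python
from itertools import combinations

# A small simple hypergraph: X = {z, x1, x2, x3},
# edges {z, xi, xj} for 1 <= i < j <= 3 (our illustrative example).
X = ["z", "x1", "x2", "x3"]
edges = [{"z", "x1", "x2"}, {"z", "x1", "x3"}, {"z", "x2", "x3"}]

def is_cover(W):
    """A set of vertices W is a vertex cover if it meets every edge."""
    return all(W & E for E in edges)

# enumerate all covers, then keep the minimal ones
# (those with no proper subset that is also a cover)
covers = [set(W) for r in range(1, len(X) + 1)
          for W in combinations(X, r) if is_cover(set(W))]
minimal = [W for W in covers if not any(V < W for V in covers)]

print(sorted(sorted(W) for W in minimal))
# -> [['x1', 'x2'], ['x1', 'x3'], ['x2', 'x3'], ['z']]
```

The four minimal covers correspond to the generators $x_1x_2$, $x_1x_3$, $x_2x_3$, and $z$, i.e., to $\langle z \rangle + \langle x_ix_j \rangle = J_2(K_{1,3})$.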
The {\it chromatic number} of $\mathcal{H}$, denoted $\chi(\mathcal{H})$, is the least number of colours required to colour $\mathcal{H}$. The chromatic number provides a lower bound on the index of stability of $J(\mathcal{H})$. \begin{theorem}[{\cite[Corollary 4.9]{FHVT}}] For any finite simple hypergraph $\mathcal{H}$, \[\chi(\mathcal{H}) -1 \leq {\rm astab}(J(\mathcal{H})).\] \end{theorem} It was asked in \cite[Question 4.10]{FHVT} if for each $m \geq 1$, there exists a hypergraph $\mathcal{H}_m$ with $\chi(\mathcal{H}_m)-1+m \leq {\rm astab}(J(\mathcal{H}_m))$, that is, whether the index of stability can be arbitrarily larger than the chromatic number. Wolff \cite{W} showed that this is the case, even if $\mathcal{H}$ is a finite simple graph. Wolff's family of graphs requires $5m-1$ vertices. We can use Corollary \ref{corstar} to give another answer to this question which only requires $m+3$ vertices. \begin{theorem} Fix an $m \geq 1$, and let $\mathcal{H}_m = (\mathcal{X}_{m},\mathcal{E}_{m})$ where $\mathcal{X}_m = \{z,x_1,\ldots,x_{m+2}\}$ and $\mathcal{E}_m = \{\{z,x_i,x_j\} ~|~ 1 \leq i < j \leq {m+2}\}$. Then \[\chi(\mathcal{H}_m)-1+m \leq {\rm astab}(J(\mathcal{H}_m)).\] \end{theorem} \begin{proof} First, $\chi(\mathcal{H}_m) = 2$ because each $x_i$ can be assigned the same colour, and $z$ can be given a different colour. Note that $J(\mathcal{H}_m) = J_2(K_{1,m+2})$. By Corollary \ref{corstar}, ${\rm astab}(J(\mathcal{H}_m)) = {\rm astab}(J_2(K_{1,m+2})) \geq m+1 = (2-1)+m = \chi(\mathcal{H}_m)-1+m$. \end{proof} \section{Associated primes of generalized cover ideals of trees} In this section we completely determine the associated primes of the ideals $J_t(\Gamma)^s$ when $\Gamma$ is a {\it tree}, that is, a connected graph with no cycles. Theorem \ref{maintheorem} will follow directly from this result.
We begin by stating the main theorem of this section: \begin{theorem} \label{maintheoremtrees} Fix an integer $t \geq 1$ and let $\Gamma$ be a tree on $n$ vertices. Then for all $s \geq 1$, \[{\rm Ass}(J_t(\Gamma)^s) = \left. \left\{ P = \langle x_{i_0},x_{i_1},\ldots,x_{i_r} \rangle ~\right|~ \Gamma_P = K_{1,r} ~\mbox{with $ t \leq r \leq \min\{n,s(t-1)+1\}$}\right\}.\] \end{theorem} \noindent In other words, a prime is associated to $J_t(\Gamma)^s$ if and only if the corresponding induced subgraph in $\Gamma$ is a star of a particular size. We require the following lemma, which can be found in \cite[Proposition 4.1]{JK}. This lemma gives us some insight into the generators of $J_t(\Gamma)$. \begin{lemma}\label{specialvertex} For any tree $\Gamma$, there exists a vertex $x$ such that all, but possibly one, of its neighbours have degree $1$. \end{lemma} We fix some notation to be used throughout the remainder of this paper. Let $\Gamma$ be a tree, and let $x$ be the vertex of Lemma \ref{specialvertex} with neighbours $y_1,\ldots,y_d$. We can assume that $\deg y_1 = \cdots = \deg y_{d-1} = 1$ and $\deg y_d \geq 1$. Using this notation, we have: \begin{lemma} \label{structure} Let $\Gamma$ be a tree with partial $t$-cover ideal $J_t(\Gamma)$. If $m \in \mathcal{G}(J_t(\Gamma))$, then $m$ has one of the following forms: \begin{enumerate} \item[$(i)$] $m = y_{i_1}\cdots y_{i_{d-t+1}}m'$ \item[$(ii)$] $m = xm'$ \item[$(iii)$] $m = xy_dm'$ \end{enumerate} where in each case, $m'$ is not divisible by any of the variables $y_1, \dots, y_d$, $x$. \end{lemma} \begin{proof} By Lemma \ref{genspartial}, the minimal generators of $J_t(\Gamma)$ correspond to the minimal partial $t$-covers of $\Gamma$. The result will follow if we look at the corresponding statement for minimal partial $t$-covers of $\Gamma$. Let $W$ be a minimal partial $t$-cover of $\Gamma$. First, suppose that $x \not\in W$.
By definition, $W$ must contain a subset $S \subseteq N(x)$ of size $|N(x)|-t+1 = d-t+1$. Because $N(x) = \{y_1,\ldots,y_d\}$, let us say that $S = \{y_{i_1},\ldots,y_{i_{d-t+1}}\}$. It now suffices to show that $W \setminus S$ does not contain any other neighbours of $x$. If $t=1$, then $S = N(x)$, so this is clear. So, suppose that $t \geq 2$, and suppose that there is some $y_j \in N(x)\cap (W \setminus S)$. There are two cases to consider: $j \neq d$ and $j = d$. If $j \neq d$, then Lemma \ref{specialvertex} gives $\deg y_j = 1$. We claim that $(W \setminus \{y_j\})$ is also a partial $t$-cover of $\Gamma$, thus contradicting the minimality of $W$. Indeed, because $y_j$ is only adjacent to $x$, for any vertex $z \not\in \{y_j,x\}$, either $z$ is in $(W \setminus \{y_j\}) \subseteq W$ or all but perhaps $t-1$ of the neighbours of $z$ are in $(W \setminus \{y_j\}) \subseteq W$. We know that $x \not\in W$, but because $S \subseteq (W\setminus \{y_j\}) \subseteq W$, we know that all but perhaps $t-1$ of the neighbours of $x$ are in $(W \setminus \{y_j\})$. Finally, although $y_j \not\in (W \setminus \{y_j\})$, all but perhaps $t-1 \geq 2-1 =1$ of its neighbours must belong to $(W \setminus \{y_j\})$; since $y_j$ has only the one neighbour $x$, this condition is vacuous, and so $(W\setminus \{y_j\})$ is also a partial $t$-cover. If $j = d$, then we can simply repeat the above argument to show that $(W \setminus \{y_{i_1}\})$ (remove one of the vertices of $S$, but keep $y_d$) creates a smaller partial $t$-cover. Now consider the case that $x \in W$. It suffices to show that $\{y_1,\ldots,y_{d-1}\} \cap W = \emptyset$. Then we will have the form $(ii)$ if $y_d \not\in W$, and the form $(iii)$ if $y_d \in W$. Suppose that $y_j \in \{y_1,\ldots,y_{d-1}\} \cap W$. We claim that $(W \setminus \{y_j\})$ would also be a partial $t$-cover. By Lemma \ref{specialvertex}, $\deg y_j = 1$, and $y_j$ is only adjacent to $x$.
As argued above, for any vertex $z \not\in \{y_j,x\}$, either $z$ or all but perhaps $t-1$ of its neighbours will belong to $(W \setminus \{y_j\})$. The vertex $x$ is in $(W \setminus \{y_j\})$, and as for $y_j$, although $y_j \not\in (W \setminus \{y_j\})$, the unique edge containing $y_j$ is covered by $x$. So $(W \setminus \{y_j\})$ is a partial $t$-cover, contradicting the minimality of $W$. \end{proof} \begin{proof} (of Theorem \ref{maintheoremtrees}) Let $\mathcal{P}$ denote the set on the right. Lemma \ref{localization} and Corollary \ref{corstar} imply that every induced star graph of $\Gamma$ of the appropriate size will contribute an associated prime; more precisely, we already have $\mathcal{P} \subseteq {\rm Ass}(J_t(\Gamma)^s)$. It therefore suffices to show that if $P \in {\rm Ass}(J_t(\Gamma)^s)$, then $\Gamma_P$ is a star graph. Corollary \ref{corstar} and Lemma \ref{localization} then imply the condition on the size of the star graph, thus showing $P \in \mathcal{P}$. We let $J = J_t(\Gamma)$. If $P \in \operatorname{Ass}(J^s)$, by Lemma \ref{localization} we can assume that $\Gamma_P = \Gamma$ and by Lemma \ref{connected}, we can assume that $\Gamma$ is connected. Because $\Gamma$ is a tree, so is $\Gamma_P$. So, we can apply Lemma \ref{specialvertex}. That is, we can assume that there is a vertex $x$ with neighbours $y_1,\ldots, y_d$ such that $\deg y_1 = \cdots =\deg y_{d-1} = 1$, and $\deg y_d \geq 1$ in $\Gamma_P$. It suffices to show that $\deg y_d= 1$. Since $\Gamma_P$ is connected, this would mean $\Gamma_P = K_{1,d}$. So, suppose $y_d$ has a neighbour, say $w \neq x$. We thus have $P = \langle y_1,\ldots,y_d,x,w,\ldots \rangle$. We now want to build a contradiction from this information. Since $P \in \operatorname{Ass}(J^s)$, there exists a monomial $T\notin J^s$ such that $J^s:\langle T\rangle = P$. 
Because $w \in P$, \[Tw = m_1\cdots m_sM ~~\mbox{with $m_i \in \mathcal{G}(J)$}.\] By Lemma \ref{structure}, a generator of $J$ has one of three forms. Let's say that $a$ of $m_1,\ldots,m_s$ are of type $(i)$, $b$ of $m_1,\ldots,m_s$ are of type $(ii)$, and $c$ are of type $(iii)$. We then have \[T = T'y_1^{e_1}y_2^{e_2}\cdots y_{d-1}^{e_{d-1}}y_d^{e_d+c}x^{b+c}\] where $e_1+\dots+e_d = (d-t+1)a$ and $a+b+c=s$. Without loss of generality we may assume that $e_1= \max\{e_1, \dots, e_{d-1}\}$. We now consider $Ty_1$. Since $y_1 \in P$, $Ty_1 \in J^s$, that is, \[Ty_1 = u_1\cdots u_sU ~~\mbox{with $u_j \in \mathcal{G}(J)$}.\] First, note that $y_1$ does not divide $U$, since if it did we would then have $T = u_1\cdots u_s(U/y_1) \in J^s$, a contradiction. Since $Ty_1 = T'y_1^{e_1+1}y_2^{e_2}\cdots y_{d-1}^{e_{d-1}}y_d^{e_d+c}x^{b+c}$, this means that (at least) $e_1+1$ of the generators $u_1,\ldots,u_s$ are divisible by $y_1$. We may assume, after reordering, that these generators are $u_1,\ldots,u_{e_1+1}$. We next observe that $x$ also does not divide $U$. To see why, suppose that $U = xU'$. As noted above, $u_1 = y_1y_{i_2} \cdots y_{i_{d-t+1}}m$ for some monomial $m$ not divisible by $x$. Note that $(u_1x)/y_1 = xy_{i_2} \cdots y_{i_{d-t+1}}m$ is also an element of $J$ (a possibly non-minimal generator). This means that \begin{eqnarray*} Ty_1 &= &u_1\cdots u_sU = (y_1y_{i_2} \cdots y_{i_{d-t+1}}m)u_2\cdots u_s(xU') \\ & = & (xy_{i_2} \cdots y_{i_{d-t+1}}m)u_2\cdots u_s(y_1U'). \end{eqnarray*} If we now cancel $y_1$ from both sides, this implies that $T \in J^s$, a contradiction. So $x$ cannot divide $U$, and thus at least $b+c$ of $u_1,\ldots,u_s$ are divisible by $x$. By Lemma \ref{structure}, they cannot be among $u_1,\ldots,u_{e_1+1}$ since these are all divisible by $y_1$. Let us say that they are $u_{e_1+2},\ldots,u_{e_1+b+c+1}$.
To summarize, we now have \begin{eqnarray*} Ty_1 &=& \underbrace{u_1\cdots u_{e_1+1}}_{\mbox{all divisible by $y_1$}}\cdot \underbrace{u_{e_1+2}\cdots u_{e_1+1+b+c}}_{\mbox{all divisible by $x$}}\cdots u_s U. \end{eqnarray*} We finish the proof by counting the degrees of the variables $y_2, \ldots, y_d$ in $Ty_1$. There are two cases to consider: ({\it Case 1}) there is a generator among $u_1, \ldots, u_s$ of type $(iii)$; and ({\it Case 2}) there is no generator among $u_1, \ldots, u_s$ of type $(iii)$. {\it Case 1:} Suppose there is some $u_j = xy_dm$. Then $y_d$ must divide every generator among $u_1, \dots, u_s$ of type $(i)$. To see why, suppose that there is some generator $u_r = y_1y_{i_2}\cdots y_{i_{d-t+1}}m'$ with $y_{i_\ell} \neq y_d$ for all $2\leq \ell \leq d-t+1$. \begin{eqnarray*} Ty_1 = u_1\cdots u_r \cdots u_j \cdots u_sU &=& u_1 \cdots (y_1y_{i_2}\cdots y_{i_{d-t+1}}m')\cdots (xy_dm) \cdots u_sU\\ &=& u_1 \cdots (xm') \cdots (y_{i_2}\cdots y_{i_{d-t+1}}y_dm)\cdots u_s (y_1U). \end{eqnarray*} Note that $xm', y_{i_2}\cdots y_{i_{d-t+1}}y_dm \in\mathcal{G}(J)$. If we cancel $y_1$ from both sides, we get $T \in J^s$, which is a contradiction. Similarly, suppose that there is some generator $u_r = y_{i_1}\dots y_{i_{d-t+1}}m'$ with $y_{i_\ell} \neq y_1, y_d$ for all $1\leq \ell \leq d-t+1$, and let $u_1 = y_1y_{k_2}\cdots y_{k_{d-t}}y_dm''$ (since $u_1$ is divisible by $y_1$, it must also be divisible by $y_d$ by above). Since $y_{i_\ell} \neq y_1$ for all $1\leq \ell \leq d-t+1$, there is some variable among $y_{i_1}, \ldots, y_{i_{d-t+1}}$ which does not divide $u_1$. Without loss of generality, assume that $y_{i_1}$ does not divide $u_1$. Then \begin{eqnarray*} Ty_1 &=& u_1\cdots u_r \cdots u_j \cdots u_sU\\ &=& (y_1y_{k_2}\cdots y_{k_{d-t}}y_dm'') \cdots (xy_dm) \cdots (y_{i_1}y_{i_2}\cdots y_{i_{d-t+1}}m')\cdots u_sU\\ &=& (y_{i_1}y_{k_2}\cdots y_{k_{d-t}}y_dm'') \cdots (y_{i_2}\cdots y_{i_{d-t+1}}y_dm) \cdots (xm')\cdots u_s (y_1U).
\end{eqnarray*} The monomials $xm'$, $y_{i_2}\cdots y_{i_{d-t+1}}y_dm$, and $y_{i_1}y_{k_2}\cdots y_{k_{d-t}}y_dm''$ are generators of $J$, so if we cancel $y_1$ from both sides this leads to the contradiction $T \in J^s$. So if there is some $u_j = xy_dm$, then every generator of type $(i)$ among $u_1, \dots, u_s$ is divisible by $y_d$. Now consider the monomials $u_1, \ldots, u_{e_1+1}$. After relabeling, we may assume that $y_1, \ldots, y_j$ and $y_d$ divide all of $u_1, \ldots, u_{e_1+1}$, and that each of the remaining variables $y_{j+1}, \ldots, y_{d-1}$ do not divide at least one of the generators $u_1, \dots, u_{e_1+1}$. We now count the number of times that the variables $y_{j+1}, \ldots, y_{d-1}$ occur in the generators $u_1, \ldots, u_s$. Each of $u_1, \ldots u_{e_1+1}$ are divisible by exactly $d-t+1$ of the variables $y_1, \ldots, y_d$ including $y_1, \ldots, y_j$ and $y_d$. Therefore exactly $d-t+1-(j+1) = d-t-j$ of the variables $y_{j+1}, \ldots, y_{d-1}$ divide each of $u_1, \ldots, u_{e_1+1}$. In addition, the variables $y_{j+1}, \ldots, y_{d-1}$ may divide each of the monomials $u_{e_1+b+c+2}, \ldots, u_s$ (there are $s-(e_1+b+c+1) = a-e_1-1$ such monomials). Since $y_d$ divides every generator of type $(i)$ in the list $u_1, \ldots, u_s$, at most $d-t$ of the variables $y_{j+1}, \ldots, y_{d-1}$ divide each of the generators $u_{e_1+b+c+2}, \ldots, u_s$. In total, the number of times that the variables $y_{j+1}, \ldots, y_{d-1}$ divide the monomials $u_1, \ldots, u_s$ is at most \[ (d-t-j)(e_1+1) + (d-t)(a-e_1-1) = (d-t)a - j e_1 -j. \] On the other hand, since $T = T'y_1^{e_1}\cdots y_{d-1}^{e_{d-1}}y_d^{e_d+c}x^{b+c}$, the number of times that the variables $y_{j+1}, \ldots, y_{d-1}$ divide $T$ is at least \begin{eqnarray*} e_{j+1}+ \cdots + e_{d-1} &=& e_1+\cdots+e_d - (e_1+\cdots +e_j+e_d)\\ &=& (d-t+1)a - (e_1+\cdots + e_j + e_d). \end{eqnarray*} Since $e_1 = \max\{e_1, e_2, \ldots, e_{d-1}\}$ we have $e_1+e_2+\cdots+e_j\leq je_1$. 
So \begin{eqnarray*} (d-t+1)a - (e_1+\cdots + e_j + e_d) &\geq& (d-t+1)a - (je_1 + e_d)\\ &=& (d-t+1)a -je_1-e_d. \end{eqnarray*} And since $e_d$ is the number of times that the variable $y_d$ appears among the (square-free) monomials $m_1, \dots, m_a$ we have $a\geq e_d$. So \begin{eqnarray*} (d-t+1)a -je_1-e_d &\geq&(d-t+1)a - je_1 -a = (d-t)a -je_1. \end{eqnarray*} Since $j\geq 1$, this number is strictly larger than the number of times that the variables $y_{j+1}, \ldots, y_{d-1}$ divide $u_1, \ldots , u_s$. Therefore, there must be some $y_k$ with $j+1\leq k\leq d-1$ which divides $U$. Let $U = y_kU'$. By assumption, there is some monomial among $u_1, \ldots, u_{e_1+1}$ which is not divisible by $y_k$. Without loss of generality, say $y_k \nmid u_1$. Then $u_1 = y_1y_{i_2}\cdots y_{i_{d-t+1}}m'$ for some monomial $m'$ with $y_{i_\ell}\neq y_k$ for all $2\leq \ell \leq d-t+1$. Then \begin{eqnarray*} Ty_1 = u_1\dots u_s U &=& (y_1y_{i_2}\cdots y_{i_{d-t+1}}m') u_2\cdots u_s(y_kU')\\ &=& (y_ky_{i_2}\cdots y_{i_{d-t+1}}m') u_2 \cdots u_s(y_1U'). \end{eqnarray*} Since $y_ky_{i_2}\cdots y_{i_{d-t+1}}m' \in \mathcal{G}(J)$ this implies that $T \in J^s$, which is a contradiction. So $w \not\in P$, and thus $\Gamma_P = K_{1,d}$, as desired. {\it Case 2:} Suppose that no generator among $u_1, \dots, u_s$ is of the form $xy_dm'$ (which implies $c=0$). Assume again that each of the variables $y_1, \dots, y_j$ with $1 \leq j < d$ divides each of the monomials $u_1, \dots, u_{e_1+1}$ and that the variables $y_{j+1}, \ldots, y_{d-1}$ do not. Note that $y_d$ may or may not divide every monomial in $u_1,\ldots,u_{e_1+1}$. We will count the variables $y_{j+1}, \dots, y_d$. We saw in the previous case that we arrive at a contradiction if we assume that the variable $y_d$ divides every minimal generator of type $(i)$ in the list $u_1, \dots, u_s$.
Therefore we may assume that there is some monomial of type $(i)$ among $u_1, \ldots , u_{e_1+1}, u_{e_1+b+2}, \dots, u_s$ which is not divisible by $y_d$. Now $(d-t+1-j)$ of the variables $y_{j+1}, \dots, y_d$ divide each of the monomials $u_1, \dots, u_{e_1+1}$. In addition, at most $(d-t+1)$ of the variables $y_{j+1}, \dots, y_d$ divide each of the monomials $u_{e_1+b+2}, \dots, u_s$. In total the number of times that the variables $y_{j+1}, \dots, y_d$ divide the monomials $u_1, \dots, u_s$ is at most \[ (d-t+1-j)(e_1+1)+(d-t+1)(s-(b+e_1+1)) = (d-t+1)a -je_1 -j.\] On the other hand, since $T = T'y_1^{e_1}\cdots y_{d-1}^{e_{d-1}}y_d^{e_d}x^{b}$ (because $c=0$ in this case), the number of times the variables $y_{j+1}, \dots, y_d$ divide $Ty_1$ is at least \begin{eqnarray*} e_{j+1}+\cdots+ e_d &=& e_1+\cdots+e_d - (e_1+\cdots +e_j)\\ &=& (d-t+1)a -(e_1+\cdots+e_j)\\ &\geq& (d-t+1)a -je_1 \end{eqnarray*} because $e_1 = \max\{e_1, \dots, e_{d-1}\}$. Since $j\geq1$, this number is strictly greater than the number of times that $y_{j+1}, \ldots, y_d$ divide the monomials $u_1, \ldots, u_s$. Therefore there is some $y_k$ with $j+1\leq k\leq d$ which divides $U$. Let $U = y_kU'$. If $k \neq d$, then we know that there is some monomial among $u_1, \ldots, u_{e_1+1}$ which is not divisible by $y_k$. Without loss of generality we may assume that $y_k$ does not divide $u_1 = y_1y_{i_2}\cdots y_{i_{d-t+1}}m'$. Then \begin{eqnarray*} Ty_1 = u_1\cdots u_s U &=& (y_1y_{i_2}\cdots y_{i_{d-t+1}}m') u_2\cdots u_s(y_kU')\\ &=& (y_ky_{i_2}\cdots y_{i_{d-t+1}}m') u_2 \cdots u_s(y_1U'). \end{eqnarray*} Since $y_ky_{i_2}\cdots y_{i_{d-t+1}}m' \in \mathcal{G}(J)$, this implies that $T \in J^s$ which is a contradiction. Finally, assume that none of $y_{j+1}, \dots, y_{d-1}$ divide $U$. Then $y_d$ must divide $U$. Let $U = y_dU'$. If there is some monomial among $u_1, \dots, u_{e_1+1}$ which is not divisible by $y_d$ then we arrive at a contradiction as above.
If $y_d$ divides each of $u_1, \dots, u_{e_1+1}$, then there is some monomial in the list $u_{e_1+b+2}, \dots, u_s$ which is not divisible by $y_d$. Without loss of generality, assume $u_s$ is not divisible by $y_d$. So $u_s = y_{k_1}\cdots y_{k_{d-t+1}}m$, where $y_{k_\ell} \neq y_d$ for all $1 \leq \ell \leq d-t+1$ and $u_1 = y_1 y_{i_2}\cdots y_{i_{d-t}}y_dm'$. Since $y_d$ divides $u_1$ and does not divide $u_s$ there is at least one of the variables $y_{k_1}, \ldots, y_{k_{d-t+1}}$ which does not divide $u_1$. Assume that $y_{k_1}$ does not divide $u_1$. Then \begin{eqnarray*} Ty_1 &=& u_1\cdots u_sU\\ &=& (y_1 y_{i_2}\cdots y_{i_{d-t}}y_dm')u_2\cdots u_{s-1}(y_{k_1}y_{k_2}\cdots y_{k_{d-t+1}}m)(y_dU')\\ &=&(y_{k_1} y_{i_2}\cdots y_{i_{d-t}}y_dm')u_2\cdots u_{s-1}(y_{k_2}\cdots y_{k_{d-t+1}}y_dm)(y_1U'). \end{eqnarray*} Since $y_{k_1} y_{i_2}\cdots y_{i_{d-t}}y_dm'$ and $y_{k_2}\cdots y_{k_{d-t+1}}y_dm$ are also minimal generators of $J$, this implies that $T$ is an element of $J^s$ which is a contradiction. Therefore the associated prime $P$ cannot be of the form $P = \langle y_1, \dots, y_d, x, w, \ldots \rangle$. In other words, $\deg (y_d) = 1$, so $\Gamma_P = K_{1, d}$ is a star graph as desired. \end{proof} We can now prove Theorem \ref{maintheorem}. \begin{proof}(of Theorem \ref{maintheorem}) The persistence property is immediate from our description of the sets ${\rm Ass}(J_t(\Gamma)^s)$ in Theorem \ref{maintheoremtrees}. When $t=1$, ${\rm astab}(J_1(\Gamma))=1$ since $\Gamma$ is bipartite. So the result follows from \cite{GRV}. When $t \geq 2$, let $x$ be a vertex with $\deg x = \Delta(\Gamma)$, i.e., a vertex of maximal degree. Let $P = \{x\} \cup N(x)$. Then $\Gamma_P = K_{1,\Delta(\Gamma)}$. If we abuse notation, and let $P$ also denote the ideal generated by the variables corresponding to the vertices in $P$, then $P \in {\rm Ass}(J_t(\Gamma)^s)$ if and only if $s(t-1) \geq \Delta(\Gamma)-1$. 
So ${\rm astab}(J_t(\Gamma)) \geq \min\{s ~|~ s(t-1) \geq \Delta(\Gamma)-1\}$. Let $s_0 = \min\{s ~|~ s(t-1) \geq \Delta(\Gamma)-1\}$ and suppose that ${\rm astab}(J_t(\Gamma)) > s_0$. Because $J_t(\Gamma)$ has the persistence property, that means that there is a $P \in {\rm Ass}(J_t(\Gamma)^s) \setminus {\rm Ass}(J_t(\Gamma)^{s_0})$ with $s > s_0$. We can assume $s$ is the smallest such integer with this property. By Theorem \ref{maintheoremtrees}, $\Gamma_P = K_{1,r}$, and by Theorem \ref{maintheoremstar}, we must have $s(t-1) \geq r-1$. Since $P \not\in {\rm Ass}(J_t(\Gamma)^{s_0})$, we must have $s_0(t-1) \not\geq r-1$. But this means that $r > \Delta(\Gamma)$, which implies that $\Gamma$ has a vertex of degree greater than $\Delta(\Gamma)$, a contradiction. \end{proof}
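As a closing sanity check (our own numerical illustration, not part of the paper), the explicit description in Corollary \ref{corstar} can be encoded directly: the sizes $r$ occurring in ${\rm Ass}(J_t(K_{1,n})^s)$ form the range $t \leq r \leq \min\{n, s(t-1)+1\}$, the index of stability equals $\lceil (n-1)/(t-1) \rceil$ for $t \geq 2$, and the sets grow monotonically in $s$, reflecting the persistence property.

```python
import math

def ass_sizes(n, t, s):
    """Sizes r of the primes <z, x_{i_1}, ..., x_{i_r}> in Ass(J_t(K_{1,n})^s),
    per the explicit description: t <= r <= min(n, s(t-1)+1)."""
    return set(range(t, min(n, s * (t - 1) + 1) + 1))

def astab(n, t):
    """Index of stability of J_t(K_{1,n})."""
    if t == 1:
        return 1
    return math.ceil((n - 1) / (t - 1))   # = min{s : s(t-1) >= n-1}

for n in range(2, 30):
    for t in range(2, n + 1):
        s0 = astab(n, t)
        # stabilization: Ass(J^s) is the full range {t, ..., n} from s0 on ...
        assert ass_sizes(n, t, s0) == ass_sizes(n, t, s0 + 1) == set(range(t, n + 1))
        # ... and not before
        assert s0 == 1 or ass_sizes(n, t, s0 - 1) != ass_sizes(n, t, s0)
        # persistence: the sets of sizes are nested increasing in s
        for s in range(1, s0 + 2):
            assert ass_sizes(n, t, s) <= ass_sizes(n, t, s + 1)
print("corollary verified numerically for all tested (n, t)")
```

For example, for $n=5$ and $t=2$ the sizes are $\{2,3\}$ at $s=2$ and stabilize to $\{2,3,4,5\}$ at $s=4$, matching ${\rm astab}(J_2(K_{1,5})) = 4$.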
{ "timestamp": "2013-12-18T02:12:50", "yymm": "1301", "arxiv_id": "1301.5020", "language": "en", "url": "https://arxiv.org/abs/1301.5020", "abstract": "Let $I$ be a square-free monomial ideal in $R = k[x_1,\\ldots,x_n]$, and consider the sets of associated primes ${\\rm Ass}(I^s)$ for all integers $s \\geq 1$. Although it is known that the sets of associated primes of powers of $I$ eventually stabilize, there are few results about the power at which this stabilization occurs (known as the index of stability). We introduce a family of square-free monomial ideals that can be associated to a finite simple graph $G$ that generalizes the cover ideal construction. When $G$ is a tree, we explicitly determine ${\\rm Ass}(I^s)$ for all $s \\geq 1$. As consequences, not only can we compute the index of stability, we can also show that this family of ideals has the persistence property.", "subjects": "Commutative Algebra (math.AC)", "title": "Generalized cover ideals and the persistence property", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9850429107723175, "lm_q2_score": 0.8267117898012104, "lm_q1q2_score": 0.8143465877955767 }
https://arxiv.org/abs/2207.04701
Spectral radius and edge-disjoint spanning trees
The spanning tree packing number of a graph $G$, denoted by $\tau(G)$, is the maximum number of edge-disjoint spanning trees contained in $G$. The study of $\tau(G)$ is one of the classic problems in graph theory. Cioabă and Wong initiated the investigation of $\tau(G)$ from spectral perspectives in 2012, and since then $\tau(G)$ has been well studied via the second largest eigenvalue of the adjacency matrix. In this paper, we further extend the results in terms of the number of edges and the spectral radius, respectively, and prove tight sufficient conditions guaranteeing $\tau(G)\geq k$, with the extremal graphs characterized. Moreover, we confirm a conjecture of Ning, Lu and Wang on characterizing the graphs with the maximum spectral radius among all graphs of a given order with fixed minimum degree and fixed edge connectivity. Our results have important applications in rigidity and nowhere-zero flows. We conclude with some open problems.
\section{Introduction} In this paper, we only consider simple graphs unless otherwise stated, and $k$ always denotes a positive integer. The study of edge-disjoint spanning trees is important in graph theory and has many applications to fault-tolerant networks and network reliability~\cite{Cunningham,Hobbs91}. It is therefore natural to ask how many edge-disjoint spanning trees a given graph contains. The \textit{spanning tree packing number} (or simply \textit{STP number}) of a graph $G$, denoted by $\tau(G)$, is the maximum number of edge-disjoint spanning trees contained in $G$. Nash-Williams~\cite{Nash-Williams} and Tutte~\cite{Tutte} independently discovered a fundamental theorem that characterizes graphs $G$ with $\tau(G)\ge k$ (see Theorem~\ref{lem::2.9} in the next section). The \textit{edge connectivity} of $G$, denoted by $\kappa'(G)$, is the minimum cardinality of an edge cut of $G$. It is known that $\kappa'(G)$ and $\tau(G)$ are closely related. In fact, the fundamental theorem of Nash-Williams~\cite{Nash-Williams} and Tutte~\cite{Tutte} implies that if $\kappa'(G)\ge 2k$, then $\tau(G)\ge k$. For more about the spanning tree packing number, we refer readers to the survey~\cite{Palmer01} by Palmer. \medskip The number of spanning trees has also been well studied from spectral perspectives. For a graph $G$, let $A(G)$ denote the adjacency matrix of $G$ and let $\lambda_i(G)$ denote the $i$th largest eigenvalue of $A(G)$. In particular, the largest eigenvalue of $A(G)$ is called the \textit{spectral radius} of $G$ and is denoted by $\rho(G)$. Denote by $D(G)$ the diagonal matrix of vertex degrees of $G$. The \textit{Laplacian matrix} of $G$ is defined as $L(G)=D(G)-A(G)$. The Laplacian matrix is positive semidefinite, and we order its eigenvalues as $\mu_{1}\geq\mu_{2}\geq\ldots\geq\mu_{n-1}\geq\mu_{n}=0$.
The well-known Matrix-Tree Theorem of Kirchhoff \cite{Kirchhoff} indicates that the number of spanning trees (not necessarily edge-disjoint) of a graph $G$ with $n$ labelled vertices is $\frac{\prod_{i=1}^{n-1}\mu_{i}}{n}$. For edge-disjoint spanning trees, Seymour proposed the following problem in private communication to Cioab\u{a}, as mentioned in~\cite{Wong}. \begin{prob}\label{prob1} Let $G$ be a connected graph. Determine the relationship between $\tau(G)$ and the eigenvalues of $G$. \end{prob} Inspired by Kirchhoff's Matrix-Tree Theorem and Problem~\ref{prob1}, Cioab\u{a} and Wong \cite{Wong} started to study the spanning tree packing number via the second largest eigenvalue of the adjacency matrix. They proved that for a $d$-regular connected graph $G$, $\tau(G)\geq k$ if $\lambda_{2}(G)<d-\frac{2(2k-1)}{d+1}$ for $d\geq 2k\geq 4$, and further conjectured that the sufficient condition can be improved to $\lambda_{2}(G)<d-\frac{2k-1}{d+1}$. In the same paper, they verified this conjecture for $k=2,3$ and gave examples to show the bound is best possible. Later, Gu et al.~\cite{Gu-Lai} extended the conjecture to graphs that are not necessarily regular, that is, $\tau(G)\geq k$ if $\lambda_2(G)< \delta-\frac{2k-1}{\delta+1}$ for $\delta\geq 2k\geq 4$. They confirmed the conjecture for $k=2,3$ and obtained a partial result that $\lambda_2(G)< \delta-\frac{3k-1}{\delta+1}$ suffices. This conjecture was completely settled in 2014 by Liu et al.~\cite{Liu-Hong}, who proved a stronger result that also implies the conjecture of Cioab\u{a} and Wong \cite{Wong}. Most recently, the result in~\cite{Liu-Hong} has been shown to be essentially best possible in \cite{coppww22} by constructing extremal graphs. On the other hand, the result was extended to a fractional version by Hong et al.~\cite{HGLL16}, and improved by Liu, Lai and Tian \cite{Liu-Lai} with a Moore function.
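Kirchhoff's Matrix-Tree Theorem mentioned above is easy to check numerically. The following sketch (Python with numpy; the test graph $K_4$ is our choice, not from the paper) computes the number of labelled spanning trees as the product of the $n-1$ nonzero Laplacian eigenvalues divided by $n$.

```python
import numpy as np

def spanning_tree_count(A):
    """Number of labelled spanning trees via Kirchhoff's Matrix-Tree
    Theorem: the product of the n-1 largest Laplacian eigenvalues
    divided by n (A is the adjacency matrix of a connected graph)."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A          # Laplacian L = D - A
    mu = np.sort(np.linalg.eigvalsh(L))     # ascending; mu[0] is (numerically) 0
    return round(np.prod(mu[1:]) / n)

# K_4: Cayley's formula gives n^(n-2) = 4^2 = 16 spanning trees
A = np.ones((4, 4)) - np.eye(4)
print(spanning_tree_count(A))  # -> 16
```

The Laplacian eigenvalues of $K_4$ are $0,4,4,4$, so the count is $4^3/4=16$, matching Cayley's formula.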
Motivated by Problem~\ref{prob1} and the above results, we study the spanning tree packing number by means of the spectral radius of graphs. We first establish an extremal result for $\tau(G)\ge k$. Let $e(G)$ denote the number of edges in $G$. \begin{thm}\label{thm::edgenumber} Let $G$ be a connected graph with minimum degree $\delta\geq 2k$ and order $n\geq 2\delta+2$. If $e(G)\geq {\delta+1\choose2}+{n-\delta-1\choose 2} +k$, then $\tau(G)\geq k$. \end{thm} The condition in Theorem~\ref{thm::edgenumber} is tight. Denote by $K_n$ the complete graph on $n$ vertices, and by $\mathcal{G}_{n,n_1}^{i}$ the set of graphs obtained from $K_{n_1}\cup K_{n-n_{1}}$ by adding $i$ edges between $K_{n_1}$ and $K_{n-n_{1}}$. Notice that any graph $G$ in $\mathcal{G}_{n,\delta+1}^{k-1}$ has exactly ${\delta+1\choose2} + {n-\delta-1\choose 2} +k-1$ edges but $\tau(G)<k$. \medskip We then focus on a spectral analogue. The corresponding spectral problem is much harder. Let $B_{n,\delta+1}^{i}$ be the graph obtained from $K_{\delta+1}\cup K_{n-\delta-1}$ by adding $i$ edges joining a vertex in $K_{\delta+1}$ and $i$ vertices in $K_{n-\delta-1}$. We obtain a sufficient condition for $\tau(G)\geq k$ via the spectral radius, and characterize the unique spectral extremal graph $B_{n,\delta+1}^{k-1}$ among the structural extremal graph family $\mathcal{G}_{n,\delta+1}^{k-1}$. \begin{thm}\label{thm::1.2} Let $k\geq 2$, and let $G$ be a connected graph with minimum degree $\delta\geq 2k$ and order $n\geq 2\delta+3$. If $\rho(G)\geq \rho(B_{n,\delta+1}^{k-1})$, then $\tau(G)\geq k$ unless $G\cong B_{n,\delta+1}^{k-1}$. \end{thm} Our proofs are novel, and as an application we use similar proof techniques to settle a conjecture of Ning, Lu and Wang \cite{Ning}. Let $\mathcal{A}_{n}^{\kappa',\delta}$ be the set of graphs of order $n$ with minimum degree $\delta$ and edge connectivity $\kappa'$.
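The tightness example above can be verified mechanically. The sketch below (pure Python; the parameters $n=16$, $\delta=6$, $k=3$ are an arbitrary choice satisfying $\delta\geq 2k$) builds a graph in $\mathcal{G}_{n,\delta+1}^{k-1}$, checks that it has exactly ${\delta+1\choose2}+{n-\delta-1\choose 2}+k-1$ edges, and notes that the two-clique partition is crossed by only $k-1<k$ edges, so the Nash-Williams--Tutte criterion rules out $k$ edge-disjoint spanning trees.

```python
from itertools import combinations
from math import comb

def extremal_graph_edges(n, delta, k):
    """Edge set of a graph in G_{n,delta+1}^{k-1}: a clique on
    {0,...,delta}, a clique on {delta+1,...,n-1}, plus k-1 cross edges."""
    A = range(delta + 1)
    B = range(delta + 1, n)
    edges = set(combinations(A, 2)) | set(combinations(B, 2))
    edges |= {(i, delta + 1 + i) for i in range(k - 1)}   # k-1 cross edges
    return edges

n, delta, k = 16, 6, 3        # hypothetical sample with delta >= 2k
E = extremal_graph_edges(n, delta, k)

# Edge count is exactly one less than the hypothesis of the theorem ...
assert len(E) == comb(delta + 1, 2) + comb(n - delta - 1, 2) + k - 1
# ... yet the 2-part clique partition is crossed by only k-1 < k*(2-1)
# edges, so tau(G) < k by the Nash-Williams--Tutte criterion.
crossing = sum(1 for (u, v) in E if (u <= delta) != (v <= delta))
assert crossing == k - 1 < k
print(len(E), crossing)  # -> 59 2
```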
For $0\leq \kappa'\leq 3$, Ning, Lu and Wang \cite{Ning} determined that the unique extremal graph with the maximum spectral radius is $B_{n,\delta+1}^{\kappa'}$, and they proposed the following conjecture. \begin{conj}[Ning, Lu and Wang~\cite{Ning}]\label{conj1} For $4\leq \kappa'<\delta$, $B_{n,\delta+1}^{\kappa'}$ is the graph with the maximum spectral radius in $\mathcal{A}_{n}^{\kappa',\delta}$. \end{conj} We confirm this conjecture for $n\ge 2\delta +4$ as described below. \begin{thm}\label{thm::1.3} Let $G\in \mathcal{A}_{n}^{\kappa',\delta}$ where $4\leq\kappa'<\delta$ and $n\geq 2\delta+4$. Then $\rho(G)\leq \rho(B_{n,\delta+1}^{\kappa'})$, with equality if and only if $G\cong B_{n,\delta+1}^{\kappa'}$. \end{thm} The proofs of Theorems~\ref{thm::edgenumber} and \ref{thm::1.2}, and that of Theorem~\ref{thm::1.3}, will be presented in the next two sections, respectively. Since edge-disjoint spanning trees have many applications, we will list some of them in Section~\ref{sec::app}, including rigidity and nowhere-zero flows. Some concluding remarks will be made in Section~\ref{sec::remarks}. \section{Proofs of Theorems~\ref{thm::edgenumber} and \ref{thm::1.2}} In this section, we present the proofs of Theorems~\ref{thm::edgenumber} and \ref{thm::1.2}. We first list several lemmas that will be used in the sequel. The following sharp upper bound on the spectral radius was obtained by Hong, Shu and Fang~\cite{HSF} and Nikiforov~\cite{V.N}, independently. \begin{lem}[\cite{HSF,V.N}]\label{lem::2.1} Let $G$ be a graph on $n$ vertices and $m$ edges with minimum degree $\delta\geq 1$. Then $$\rho(G) \leq \frac{\delta-1}{2}+\sqrt{2 m-n \delta+\frac{(\delta+1)^{2}}{4}},$$ with equality if and only if $G$ is either a $\delta$-regular graph or a bidegreed graph in which each vertex is of degree either $\delta$ or $n-1$.
\end{lem} \begin{lem}[\cite{HSF,V.N}]\label{lem::2.2} For nonnegative integers $p$ and $q$ with $2q \leq p(p-1)$ and $0 \leq x \leq p-1$, the function $f(x)=(x-1) / 2+\sqrt{2 q-p x+(1+x)^{2} / 4}$ is decreasing with respect to $x$. \end{lem} Recall that $\mathcal{G}_{n,n_1}^{i}$ is the set of graphs obtained from $K_{n_1}\cup K_{n-n_{1}}$ by adding $i$ edges between $K_{n_1}$ and $K_{n-n_{1}}$. \begin{lem}\label{lem::2.3} Let $k\geq 2$ and let $G\in \mathcal{G}_{n,\delta+1}^{k-1}$ where $n\geq 2\delta+3$ and $\delta\geq 2k$. Then $$n-\delta-2< \rho(G)< n-\delta-1.$$ \end{lem} \begin{proof} Note that $G$ contains $K_{\delta+1}\cup K_{n-\delta-1}$ as a proper spanning subgraph and $n\geq 2\delta+3$. Then $\rho(G)>\rho(K_{\delta+1}\cup K_{n-\delta-1})= n-\delta-2$. Since $\delta\geq 2k\geq 4$, it follows that \begin{equation*} \begin{aligned} e(G)&={\delta+1\choose 2}+{n-\delta-1\choose 2}+k-1\\ &\leq {\delta+1\choose 2}+{n-\delta-1\choose 2}+\frac{\delta}{2}-1\\ &=\frac{n^2}{2}-\frac{(2\delta+3)n}{2}+\delta^2+\frac{5\delta}{2}. \end{aligned} \end{equation*} Combining this with Lemmas \ref{lem::2.1} and \ref{lem::2.2}, we have \begin{equation*} \begin{aligned} \rho(G)&\leq \frac{\delta-1}{2}+\sqrt{n^2 + (-3\delta - 3)n + \frac{9\delta^2}{4}+ \frac{11\delta}{2}+\frac{1}{4}}\\ &= \frac{\delta-1}{2}+\sqrt{\left(n-\frac{3\delta}{2}-\frac{1}{2}\right)^2-2(n-2\delta)}\\ &< \frac{\delta-1}{2}+ \left(n-\frac{3\delta}{2}-\frac{1}{2}\right)~~(\mbox{since $n\geq 2\delta\!+\!3$})\\ &=n-\delta-1. \end{aligned} \end{equation*} This completes the proof. \end{proof} It is known that if $G$ is connected, then $A(G)$ is irreducible. By the Perron-Frobenius theorem (cf. \cite[Section 8.8]{C.G-2}), the Perron vector $x$ is a positive eigenvector of $A(G)$ with respect to $\rho(G)$. For any $v\in V(G)$, let $N_{G}(v)$ and $d_G(v)$ be the neighborhood and degree of $v$ in $G$, respectively. Let $N_{G}[v]=N_{G}(v)\cup\{v\}$.
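The interval in Lemma \ref{lem::2.3} can be checked numerically on a concrete member of $\mathcal{G}_{n,\delta+1}^{k-1}$. The sketch below (Python with numpy; the parameters $n=13$, $\delta=5$, $k=2$ are our choice, satisfying $n\geq 2\delta+3$ and $\delta\geq 2k$) builds $B_{n,\delta+1}^{k-1}$ and verifies $n-\delta-2<\rho<n-\delta-1$.

```python
import numpy as np

def B_adjacency(n, delta, i):
    """Adjacency matrix of B_{n,delta+1}^{i}: K_{delta+1} on vertices
    0..delta and K_{n-delta-1} on the rest, plus i edges from vertex 0
    to vertices delta+1, ..., delta+i."""
    A = np.zeros((n, n))
    A[:delta + 1, :delta + 1] = 1
    A[delta + 1:, delta + 1:] = 1
    np.fill_diagonal(A, 0)                  # no loops
    for j in range(delta + 1, delta + 1 + i):
        A[0, j] = A[j, 0] = 1
    return A

n, delta, k = 13, 5, 2                      # sample: n >= 2*delta+3, delta >= 2k
A = B_adjacency(n, delta, k - 1)
rho = np.linalg.eigvalsh(A)[-1]             # spectral radius (largest eigenvalue)
assert n - delta - 2 < rho < n - delta - 1  # the interval of the lemma
print(round(rho, 4))
```

Here the graph contains $K_{n-\delta-1}=K_7$, so $\rho>6$, while the lemma caps $\rho$ below $7$.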
\begin{lem}[\cite{H.L-1}]\label{lem::2.4} Let $G$ be a connected graph, and let $u,v$ be two vertices of $G$. Suppose that $v_{1},v_{2},\ldots,v_{s}\in N_{G}(v)\backslash N_{G}(u)$ with $s\geq 1$, and $G^*$ is the graph obtained from $G$ by deleting the edges $vv_{i}$ and adding the edges $uv_{i}$ for $1\leq i\leq s$. Let $x$ be the Perron vector of $A(G)$. If $x_{u}\geq x_{v}$, then $\rho(G)<\rho(G^*)$. \end{lem} For any vertex $v\in V(G)$ and any subset $S\subseteq V(G)$, let $N_{S}(v)=N_{G}(v)\cap S$ and $d_{S}(v)=|N_{S}(v)|$. \begin{lem}\label{lem::2.5} Let $k\geq 2$ and let $G\in \mathcal{G}_{n,\delta+1}^{k-1}$ where $n\geq 2\delta+3$ and $\delta\geq 2k$. Then $\rho(G)\leq \rho(B_{n,\delta+1}^{k-1})$, with equality if and only if $G\cong B_{n,\delta+1}^{k-1}$. \end{lem} \begin{proof} Suppose that $G'$ is the graph that attains the maximum spectral radius in $\mathcal{G}_{n,\delta+1}^{k-1}$. For every $G\in \mathcal{G}_{n,\delta+1}^{k-1}$, we have \begin{equation}\label{equ::1} \begin{aligned} \rho(G)\leq \rho(G'). \end{aligned} \end{equation} We partition $V(G')$ into $V_1\cup V_2$ with $V_1=V(K_{\delta+1})=\{u_1,u_2,\ldots,u_{\delta+1}\}$ and $V_2=V(K_{n-\delta-1})=\{v_{1},v_2,\ldots,v_{n-\delta-1}\}$. Let $x$ be the Perron vector of $A(G')$, and let $\rho'=\rho(G')$. Without loss of generality, we assume that $x_{u_{i+1}}\leq x_{u_{i}}$ and $x_{v_{j+1}}\leq x_{v_{j}}$ for $1\leq i\leq \delta$ and $1\leq j\leq n-\delta-2$. We assert that $N_{G'}(u_{i+1})\subseteq N_{G'}[u_i]$ and $N_{G'}(v_{j+1})\subseteq N_{G'}[v_j]$ for $1\leq i\leq \delta$ and $1\leq j\leq n-\delta-2$. If not, suppose that there exist $i,j$ with $i<j$ such that $N_{G'}(u_j)\nsubseteq N_{G'}[u_i]$. Let $w\in N_{G'}(u_j)\backslash N_{G'}[u_i]$ and $G^{*}=G'-wu_{j}+wu_{i}$. Note that $x_{u_{j}}\leq x_{u_{i}}$. Then $\rho(G^{*})>\rho(G')$ by Lemma \ref{lem::2.4}, which contradicts the maximality of $\rho'$. This implies that $N_{G'}(u_{i+1})\subseteq N_{G'}[u_i]$ for $1\leq i\leq \delta$. 
Similarly, we can deduce that $N_{G'}(v_{j+1})\subseteq N_{G'}[v_j]$ for $1\leq j\leq n-\delta-2$. Furthermore, we have $N_{V_2}(u_{i+1})\subseteq N_{V_2}(u_i)$ and $N_{V_1}(v_{j+1})\subseteq N_{V_1}(v_j)$ for $1\leq i\leq \delta$ and $1\leq j\leq n-\delta-2$. Let $d_{V_2}(u_1) =r$, $d_{V_2}(u_2)=t$, and $d_{V_1}(v_1)=s$. Again by the maximality of $\rho'$ and Lemma \ref{lem::2.4}, we have $N_{V_2}(u_1)=\{v_1,v_2,\dots,v_r\}$, $N_{V_2}(u_2)=\{v_1,v_2,\dots,v_t\}$ and $N_{V_1}(v_1)=\{u_1,u_2,\dots,u_s\}$. If $r=k-1$ or $s=1$, then $G'\cong B_{n,\delta+1}^{k-1}$. Combining this with (\ref{equ::1}), we can deduce that $\rho(G)\leq\rho(B_{n,\delta+1}^{k-1})$, with equality if and only if $G\cong B_{n,\delta+1}^{k-1}$, as required. Next we consider $r\leq k-2$ and $s\geq 2$ in the following. Note that $x_{v_i}=x_{v_{r+1}}$ for $r+1\leq i\leq n-\delta-1$, $x_{v_j}\geq x_{v_{r+1}}$ for $2\leq j\leq r$. Then, by $A(G')x=\rho' x$, we have \begin{eqnarray*} \rho' x_{v_{r+1}} =x_{v_1}+\sum_{2\leq i\leq r}x_{v_i}+(n-\delta-r-2)x_{v_{r+1}}\geq x_{v_1}+(n-\delta-3)x_{v_{r+1}}. \end{eqnarray*} Note that $G'\in\mathcal{G}_{n,\delta+1}^{k-1}$. Then $\rho'> n-\delta-2$ by Lemma \ref{lem::2.3}, and hence \begin{eqnarray} x_{v_{r+1}}\geq\frac{x_{v_1}}{\rho'-(n-\delta-3)}. \label{equ::2} \end{eqnarray} Assume that $E_1=\{u_1v_{i}|~r+1\leq i\leq k-1\}$ and $E_2=\{u_{i}v_j\in G' |~ 2\leq i\leq s, 1\leq j\leq t\}$. Let $G''=G'-E_2+E_1$. Clearly, $G''\cong B_{n,\delta+1}^{k-1}$. Let $y$ be the Perron vector of $A(G'')$, and let $\rho''=\rho(G'')$. By $A(G'')y=\rho'' y$, we have \begin{eqnarray*} &\rho''y_{u_2}=y_{u_1}+(\delta-1)y_{u_2}. \end{eqnarray*} Note that $G''\in\mathcal{G}_{n,\delta+1}^{k-1}$ and $n\geq2\delta+3$. Again by Lemma \ref{lem::2.3}, $\rho''> n-\delta-2> \delta-1$, and hence \begin{eqnarray} y_{u_2}=\frac{y_{u_1}}{\rho''-\delta+1}. \label{equ::3} \end{eqnarray} Since $G',G''\in \mathcal{G}_{n,\delta+1}^{k-1}$, by Lemma \ref{lem::2.3}, $\rho''\!-\!\rho'>-1$. 
Note that $x_{u_1}\geq x_{u_i}$ for $2\leq i\leq s$, $x_{v_1}\geq x_{v_j}$ for $2\leq j\leq t$. Combining this with (\ref{equ::2}), (\ref{equ::3}) and $\rho''\!-\!\rho'>-1$, we have \begin{equation*} \begin{aligned} y^{T}(\rho''-\rho')x &=y^{T}(A(G'')-A(G'))x\\ &=\sum_{u_1v_i\in E_1}(x_{u_{1}}y_{v_{i}}\!+\!x_{v_{i}}y_{u_{1}})\! -\sum_{u_iv_j\in E_2}\!(x_{u_{i}}y_{v_{j}}\!+\!x_{v_{j}}y_{u_{i}})\\ &\geq(k\!-\!1-r)(x_{u_1}y_{v_1}+x_{v_{r+1}}y_{u_1}-x_{u_1}y_{v_1}-x_{v_1}y_{u_2}) ~~(\mbox{since $r\leq k-2$})\\ &=(k\!-\!1-r)(x_{v_{r+1}}y_{u_1}-x_{v_1}y_{u_2})\\ &\geq(k\!-\!1-r)x_{v_{1}}y_{u_1}\left(\frac{1}{\rho'-(n-\delta-3)}-\frac{1}{\rho''-\delta+1}\right) ~~(\mbox{by (\ref{equ::2}) and (\ref{equ::3})})\\ &=\frac{(k\!-\!1-r)x_{v_{1}}y_{u_1}}{(\rho'-(n-\delta-3))(\rho''-\delta+1)}(\rho''-\rho'+n-2\delta-2)\\ &>0~~(\mbox{since $k\geq r\!+\!2$, $\rho'>n\!-\!\delta\!-\!2$, $\rho''>\delta\!-\!1$ and $n\geq 2\delta\!+\!3$}), \end{aligned} \end{equation*} and hence $\rho''>\rho'$, which contradicts the maximality of $\rho'$. This completes the proof. \end{proof} For $X,Y\subseteq V(G)$, we denote by $E_{G}(X,Y)$ the set of edges with one endpoint in $X$ and one endpoint in $Y$, and $e_{G}(X,Y)=|E_{G}(X,Y)|$. \begin{lem}\label{lem::2.6} Let $k\geq 2$, and let $G\in \mathcal{G}_{n,a}^{k-1}$ where $n\geq 2a$, $a\geq \delta+2$ and $\delta\geq 2k$. Then $\rho(G)<\rho(B_{n,\delta+1}^{k-1}).$ \end{lem} \begin{proof} Let $G\in \mathcal{G}_{n,a}^{k-1}$. Then $V(G)=V(K_a)\cup V(K_{n-a})$ and $e_{G}(V(K_a),V(K_{n-a}))=k-1$. We partition $V(K_a)$ into $A_1\cup A_{2}$ and $V(K_{n-a})$ into $B_1\cup B_{2}$ such that $|A_1|=|B_1|=\delta+1$, $e_{G}(A_2, V(K_{n-a}))=0$ and $e_{G}(B_2, V(K_{a}))=0$. Since $n\geq 2a$, $a\geq \delta+2$ and $\delta\geq 2k$, it follows that $|A_2|=|V(K_a)|-|A_1|\geq 1$ and $|B_2|=|V(K_{n-a})|-|B_1|\geq 1$. Let $x$ be the Perron vector of $A(G)$. 
If $\sum_{v\in A_1}x_{v}<\sum_{v\in V(K_{n-a})}x_{v}$, then let $G'$ be a graph obtained from $G$ by deleting all edges between $A_1$ and $A_2$, and adding all possible edges between $A_2$ and $V(K_{n-a})$. Clearly, $G'\in \mathcal{G}_{n,\delta+1}^{k-1}$ and \begin{equation*} \begin{aligned} \rho(G')-\rho(G)&\geq x^{T}(A(G')-A(G))x\\ &=2\sum_{v\in A_2}x_{v}\left(\sum_{v\in V(K_{n-a})}x_{v}-\sum_{v\in A_1}x_{v}\right)\\ &>0, \end{aligned} \end{equation*} and hence $\rho(G')>\rho(G)$. Combining this with Lemma \ref{lem::2.5}, we have $\rho(B_{n,\delta+1}^{k-1})\geq\rho(G')>\rho(G)$, as required. If $\sum_{v\in A_1}x_{v}\geq\sum_{v\in V(K_{n-a})}x_{v}$, since $\sum_{v\in A_2}x_{v}>0$ and $\sum_{v\in B_2}x_{v}>0$, we have $$\sum_{v\in V(K_{a})}x_{v}>\sum_{v\in A_1}x_{v}\geq\sum_{v\in V(K_{n-a})}x_{v}>\sum_{v\in B_1}x_{v}.$$ Let $G''$ be a graph obtained from $G$ by deleting all edges between $B_1$ and $B_2$, and adding all possible edges between $V(K_{a})$ and $B_2$. One can verify that $G''\in \mathcal{G}_{n,\delta+1}^{k-1}$ and \begin{equation*} \begin{aligned} \rho(G'')-\rho(G)&\geq x^{T}(A(G'')-A(G))x\\ &=2\sum_{v\in B_2}x_{v}\left(\sum_{v\in V(K_a)}x_{v}-\sum_{v\in B_1}x_{v}\right)\\ &>0, \end{aligned} \end{equation*} and hence $\rho(G'')>\rho(G)$. Similarly, $\rho(B_{n,\delta+1}^{k-1}) \geq\rho(G'') >\rho(G)$. This completes the proof. \end{proof} \begin{lem}\label{lem::2.7} Let $a$ and $b$ be two positive integers. If $a\geq b$, then $${a\choose 2}+{b\choose 2}< {a+1\choose 2}+{b-1\choose 2}.$$ \end{lem} \begin{proof} Note that $a\geq b$. Then \begin{equation*} \begin{aligned} &{a+1\choose 2}+{b-1\choose 2}\!-\!{a\choose 2}\!-\!{b\choose 2}=a-b+1>0. \end{aligned} \end{equation*} Thus the result follows.\end{proof} Denote by $\partial_{G}(X)=E_{G}(X,V(G)-X)$. \begin{lem}[Lemma~2.8 of \cite{Gu-Lai}]\label{lem::2.8} Let $G$ be a graph with minimum degree $\delta$ and $U$ be a non-empty proper subset of $V(G)$. 
If $|\partial_{G}(U)|\leq \delta-1$, then $|U|\geq \delta+1$. \end{lem} For any partition $\pi$ of $V(G)$, let $E_{G}(\pi)$ denote the set of edges in $G$ whose endpoints lie in different parts of $\pi$, and let $e_{G}(\pi)=|E_{G}(\pi)|$. A part is \textit{trivial} if it contains a single vertex. The following fundamental theorem on the spanning tree packing number of a graph was established by Nash-Williams \cite{Nash-Williams} and Tutte \cite{Tutte}, independently. \begin{thm}[Nash-Williams~\cite{Nash-Williams} and Tutte~\cite{Tutte}] \label{lem::2.9} Let $G$ be a connected graph. Then $\tau(G)\geq k$ if and only if for any partition $\pi$ of $V(G)$, $$e_{G}(\pi)\geq k(t-1),$$ where $t$ is the number of parts in the partition $\pi$. \end{thm} For $X\subseteq V(G)$, let $G[X]$ be the subgraph of $G$ induced by $X$, and let $e_{G}(X)$ be the number of edges in $G[X]$. Now, we shall give the proofs of Theorems \ref{thm::edgenumber} and \ref{thm::1.2}. \begin{proof}[\bf Proof of Theorem \ref{thm::edgenumber}] Assume to the contrary that $\tau(G)\leq k-1$. By Theorem \ref{lem::2.9}, there exists a partition $\pi$ of $V(G)$ with $t_1$ trivial parts $v_1,v_2,\ldots, v_{t_1}$ and $t_2$ nontrivial parts $V_1,V_2,\ldots,V_{t_2}$ such that \begin{equation}\label{equ::4} \begin{aligned} e_{G}(\pi)\leq k(t-1)-1, \end{aligned} \end{equation} where $t=t_1+t_2$. If the partition $\pi$ contains only one part, then $e_{G}(\pi)\leq -1$ by (\ref{equ::4}), which is impossible because $e_{G}(\pi)=0$. This implies that $t\geq 2$. We assert that $t_2\geq 2$. If not, suppose that $t_2\leq 1$. Then $t_1\geq t-1$. Note that $d_{G}(v_i)\geq \delta$ for $1\leq i\leq t_1$. Combining this with $\delta\geq 2k$, we have \begin{equation*} \begin{aligned} e_{G}(\pi)\geq \frac{1}{2}\sum_{1\leq i\leq t_1}d_{G}(v_i)\geq \frac{1}{2}\delta t_1\geq k(t-1), \end{aligned} \end{equation*} which contradicts (\ref{equ::4}). This implies that $t_2\geq 2$.
Suppose that the partition $\pi$ contains at most one nontrivial part, say $V_j$ ($1\leq j\leq t_2$), with $|\partial_{G}(V_j)|\leq \delta-1$. Then $|\partial_{G}(V_i)|\geq \delta$ for all $i\in\{1,\ldots,t_2\}\backslash\{j\}$. Since $G$ is connected, it follows that $|\partial_{G}(V_j)|\geq 1$, and hence \begin{equation*} \begin{aligned} 2e_{G}(\pi)&= \sum_{1\leq i\leq t_2}|\partial_{G}(V_i)|+\sum_{1\leq j\leq t_1}d_{G}(v_j)\\ &\geq (t_2-1)\delta+1+\delta t_1\\ &= \delta(t-1)+1~~(\mbox{since $t=t_1+t_2$})\\ &\geq 2k(t-1)+1 ~~(\mbox{since $\delta\geq 2k$}), \end{aligned} \end{equation*} which also contradicts (\ref{equ::4}). Therefore, the partition $\pi$ contains at least two nontrivial parts, say $V_1,V_2$, such that $|\partial_{G}(V_i)|\leq \delta-1$ for $i=1,2$. Furthermore, by Lemma~\ref{lem::2.8}, we obtain $|V_i|\geq \delta+1$ for $i=1,2$. If $|V_1|=\max\{|V_1|,|V_2|,\ldots,|V_{t_2}|\}$ or $|V_2|=\max\{|V_1|,|V_{2}|,\ldots,|V_{t_2}|\}$, then since $|V_{i}|\geq\delta+1$ and $|V_{j}|\geq2$ for $i=1,2$ and $3\leq j\leq t_2$, by Lemma \ref{lem::2.7}, \begin{equation*} \begin{aligned} \sum_{1\leq i\leq t_2}e_{G}(V_i)&\leq {\delta+1\choose 2}+(t_2-2){2\choose 2}+{n-(t+\delta+t_2-3)\choose 2}\\ &\leq{\delta+1\choose 2} +{n-(t+\delta-1)\choose 2} ~~(\mbox{since $t_2\geq 2$}). \end{aligned} \end{equation*} If instead there exists a nontrivial part $V_j$ with $3\leq j\leq t_2$ such that $|V_j|=\max\{|V_1|,|V_2|,\ldots,|V_{t_2}|\}$, then similarly $$\sum_{1\leq i\leq t_2}e_{G}(V_i)\leq 2{\delta+1\choose 2} +{n-(2\delta+t-1)\choose 2}.$$ Note that $|V_1|\geq \delta+1$ and $|V_2|\geq \delta+1$. Then $2\leq t\leq n-2\delta$.
Combining this with (\ref{equ::4}) and $\sum_{1\leq i\leq t_1}e_{G}(v_i)=0$, we have \begin{equation*} \begin{aligned} e(G)&=\sum_{1\leq i\leq t_2}e_{G}(V_i)+\sum_{1\leq i\leq t_1}e_{G}(v_i)+e_{G}(\pi)\\ &\leq \max\left\{{\delta\!+\!1\choose 2}\!+\!{n\!-\!(t\!+\!\delta\!-\!1)\choose 2},2{\delta\!+\!1\choose 2}\!+\!{n\!-\!(2\delta\!+\!t\!-\!1)\choose 2}\right\}\!+\!k(t\!-\!1)\!-\!1\\ &\leq{\delta\!+\!1\choose 2}\!+\!{n\!-\!(t\!+\!\delta\!-\!1)\choose 2}+k(t\!-\!1)\!-\!1~~(\mbox{since $t\leq n-2\delta$})\\ &=\frac{t^2}{2}+(-n+\delta-\frac{1}{2}+k) t+\delta^2-n\delta+ \frac{n^2}{2}+\frac{n}{2}-k-1. \end{aligned} \end{equation*} Let $g(t)=\frac{t^2}{2}+(-n+\delta-\frac{1}{2}+k) t+\delta^2-n\delta+ \frac{n^2}{2}+\frac{n}{2}-k-1$. We take the derivative of $g(t)$. Thus $$g'(t)=t+\delta+k-n-\frac{1}{2}\leq k-\delta-\frac{1}{2}<0$$ by the facts that $\delta\geq 2k$, $k\geq 1$ and $t\leq n-2\delta$. This implies that $g(t)$ is decreasing with respect to $2\leq t\leq n-2\delta$, and hence $$e(G)\leq {\delta\!+\!1\choose2}\!+\!{n\!-\!\delta\!-\!1\choose 2}+k-1,$$ a contradiction. This completes the proof.\end{proof} \begin{proof}[\bf Proof of Theorem \ref{thm::1.2}] Assume to the contrary that $\tau(G)\leq k-1$. By Theorem~\ref{lem::2.9}, there exists a partition $\pi$ of $V(G)$ with $t_1$ trivial parts $v_1,v_2,\ldots, v_{t_1}$ and $t_2$ nontrivial parts $V_1,V_2,\ldots,V_{t_2}$ such that \begin{equation}\label{equ::5} \begin{aligned} e_{G}(\pi)\leq k(t-1)-1, \end{aligned} \end{equation} where $t=t_1+t_2$. By using the same analysis as Theorem \ref{thm::edgenumber}, we can deduce that the partition $\pi$ contains at least two nontrivial parts, say $V_1,V_2$, such that $|V_i|\geq\delta+1$ for $i=1,2$. First suppose that $t=2$. This implies that the partition $\pi$ consists of two nontrivial parts $V_1,V_2$. Then $e_{G}(\pi)=e_{G}(V_1,V_2)\leq k-1$ by (\ref{equ::5}). Clearly, $G$ is a spanning subgraph of some graph $H$ in $\mathcal{G}_{n,|V_1|}^{k-1}$. 
Then \begin{equation}\label{equ::6} \rho(G)\leq \rho(H), \end{equation} with equality if and only if $G\cong H$. Note that $\min\{|V_1|,|V_2|\}\geq\delta+1$. Combining this with Lemmas~\ref{lem::2.5} and \ref{lem::2.6} as well as (\ref{equ::6}), we conclude that $$\rho(G)\leq\rho(B_{n,\delta+1}^{k-1}),$$ with equality if and only if $G\cong B_{n,\delta+1}^{k-1}$. However, this is impossible because $\rho(G)\geq\rho(B_{n,\delta+1}^{k-1})$ and $G\ncong B_{n,\delta+1}^{k-1}$. Now suppose that $t\geq 3$. Note that $\rho(G)\geq\rho(B_{n,\delta+1}^{k-1})>\rho(K_{n-\delta-1})=n-\delta-2$. Then by Lemmas \ref{lem::2.1} and \ref{lem::2.2}, \begin{equation}\label{equ::7} e(G)> \frac{n^2}{2}-\frac{(2\delta+3)n}{2}+(\delta+1)^{2}. \end{equation} Since $\min\{|V_1|,|V_2|\}\geq\delta+1$, by using a similar analysis as Theorem \ref{thm::edgenumber}, it follows that \begin{equation*} \begin{aligned} e(G)&=\sum_{1\leq i\leq t_2}e_{G}(V_i)+\sum_{1\leq i\leq t_1}e_{G}(v_i)+e_{G}(\pi)\\ &\leq \max\left\{{\delta\!+\!1\choose 2}\!+\!{n\!-\!(t\!+\!\delta\!-\!1)\choose 2},2{\delta\!+\!1\choose 2}\!+\!{n\!-\!(2\delta\!+\!t\!-\!1)\choose 2}\right\}\!+\!k(t\!-\!1)\!-\!1\\ &\leq{\delta\!+\!1\choose 2}\!+\!{n\!-\!(t\!+\!\delta\!-\!1)\choose 2}+k(t\!-\!1)\!-\!1~~(\mbox{since $t\leq n-2\delta$})\\ &= \frac{n^2}{2}-\frac{(2t+2\delta-1)n}{2}+\frac{(t+2\delta+2k-1)t}{2}+\delta^2-k-1. \end{aligned} \end{equation*} Combining this with (\ref{equ::7}) and $t\geq 3$, we have \begin{equation}\label{equ::8} n< \delta+k+\frac{t+1}{2}+\frac{k-1}{t-2}. \end{equation} Suppose that $f(t)=\delta+k+\frac{t+1}{2}+\frac{k-1}{t-2}$. One can verify that $f(t)$ is convex for $t>0$ and its maximum in any closed interval is attained at one of the ends of this interval. Note that $3\leq t\leq n-2\delta$. Then \begin{equation*} \begin{aligned} f(t)&\leq \max\left\{\delta+2k+1, \frac{n+1}{2}+k+\frac{k-1}{n-2\delta-2}\right\} \leq \frac{n-1}{2}+2k~(\mbox{since $n\geq 2\delta +3$}). 
\end{aligned} \end{equation*} Combining this with (\ref{equ::8}) and $\delta\geq 2k$, we can deduce that $n<4k-1\leq 2\delta -1$, a contradiction. This completes the proof. \end{proof} \section{Proof of Theorem \ref{thm::1.3}}\label{sec::econ} The proof idea of Theorem~\ref{thm::1.3} is quite similar to that of Theorem~\ref{thm::1.2}. To present the proof, we need several lemmas below. \begin{lem}[Theorem~3.1 of \cite{Ning}]\label{lem::3.1} If $G_0$ is a graph with the maximum spectral radius in $\mathcal{A}_{n}^{\kappa',\delta}$, where $1\leq\kappa'<\delta$, then $G_0\in \mathcal{G}_{n,\delta+1}^{\kappa'}$. \end{lem} Recall that $\mathcal{G}_{n,n_1}^{i}$ is the set of graphs obtained from $K_{n_1}\cup K_{n-n_{1}}$ by adding $i$ edges between $K_{n_1}$ and $K_{n-n_{1}}$, and $B_{n,\delta+1}^{i}$ is the graph obtained from $K_{\delta+1}\cup K_{n-\delta-1}$ by adding $i$ edges joining a vertex in $K_{\delta+1}$ and $i$ vertices in $K_{n-\delta-1}$. We can extend the results of Lemmas \ref{lem::2.3} and \ref{lem::2.5}. \begin{lem}\label{lem::3.2} Let $G\in\mathcal{G}_{n,\delta+1}^{\kappa'}$, where $n\geq 2\delta+4$ and $4\leq \kappa'< \delta$. Then $$n-\delta-2< \rho(G)< n-\delta.$$ \end{lem} \begin{proof} Note that $G$ contains $K_{\delta+1}\cup K_{n-\delta-1}$ as a proper spanning subgraph and $n\geq 2\delta+4$. Then $\rho(G)>\rho(K_{\delta+1}\cup K_{n-\delta-1})= n-\delta-2$. Since $G\in\mathcal{G}_{n,\delta+1}^{\kappa'}$ and $\delta> \kappa'$, we have \begin{equation*} \begin{aligned} e(G)&={\delta+1\choose 2}+{n-\delta-1\choose 2}+\kappa'\\ &\leq {\delta+1\choose 2}+{n-\delta-1\choose 2}+\delta-1\\ &=\frac{n^2}{2}-\frac{(2\delta+3)n}{2}+\delta^2+3\delta.
\end{aligned} \end{equation*} Combining this with Lemmas \ref{lem::2.1} and \ref{lem::2.2}, we have \begin{equation*} \begin{aligned} \rho(G)&\leq \frac{\delta-1}{2}+\sqrt{n^2 + (-3\delta - 3)n + \frac{9\delta^2}{4}+ \frac{13\delta}{2}+\frac{1}{4}}\\ &= \frac{\delta-1}{2}+\sqrt{\left(n-\frac{3\delta}{2}+\frac{1}{2}\right)^2-4(n-2\delta)}\\ &< \frac{\delta-1}{2}+ \left(n-\frac{3\delta}{2}+\frac{1}{2}\right)~~(\mbox{since $n\geq 2\delta\!+\!4$})\\ &=n-\delta. \end{aligned} \end{equation*} Thus $n-\delta-2< \rho(G)< n-\delta$, as required.\end{proof} By Lemma \ref{lem::3.2} and using the same analysis as the proof of Lemma \ref{lem::2.5}, we easily obtain the following result. \begin{lem}\label{lem::3.3} Let $G\in \mathcal{G}_{n,\delta+1}^{\kappa'}$ where $4\leq \kappa'< \delta$ and $n\geq 2\delta+4$. Then $\rho(G)\leq \rho(B_{n,\delta+1}^{\kappa'})$, with equality if and only if $G\cong B_{n,\delta+1}^{\kappa'}$. \end{lem} \begin{proof}[\bf Proof of Theorem \ref{thm::1.3}] The result follows from Lemmas~\ref{lem::3.1} and \ref{lem::3.3}. \end{proof} \section{Some applications}\label{sec::app} Edge-disjoint spanning trees are closely related to many graph properties, such as collapsibility, supereulerianity, spanning connectivity, nowhere-zero flows, group connectivity, rigidity, and others \cite{CDGG22,LL19,Palmer01}. We introduce several applications of our main result in spectral graph theory. Spectral conditions for classical rigidity have been studied in \cite{CDGG22,CDG21,FHL22}. We present spectral conditions for two other variations of rigidity in the next two subsections. To conclude, we also present applications in nowhere-zero flows at the end of this section. \subsection{Body-and-bar rigidity} A \textit{body-and-bar framework} in $\mathbb{R}^d$ is a framework of $d$-dimensional rigid bodies that are connected by fixed-length bars attached at points of their surfaces (see \cite{Tay84} for more details).
Informally, we say a graph $G$ is \textit{body-bar rigid in $\mathbb{R}^d$} if there exists a generic body-bar framework of $G$ in $\mathbb{R}^d$ that is rigid. Instead of a formal definition, we present the following characterization. \begin{thm}[Tay~\cite{Tay84}] A graph $G$ is body-bar rigid in $\mathbb{R}^d$ if and only if it contains $\frac{d(d+1)}{2}$ edge-disjoint spanning trees. \end{thm} By the above theorem and Theorem~\ref{thm::1.2}, we have the following spectral condition for body-bar rigidity. \begin{pro} Let $k=\frac{d(d+1)}{2}$, and let $G$ be a connected graph with minimum degree $\delta\geq 2k$ and order $n\geq 2\delta+3$. If $\rho(G)\geq \rho(B_{n,\delta+1}^{k-1})$, then $G$ is body-bar rigid in $\mathbb{R}^d$ unless $G\cong B_{n,\delta+1}^{k-1}$. \end{pro} \subsection{Rigidity on surfaces of revolution} Here we assume that the joints of our framework are restricted to lie on a smooth surface $\mathcal{M} \subset \mathbb{R}^3$, and $\mathcal{M}$ is an \textit{irreducible surface}; i.e.,~$\mathcal{M}$ is the zero set of an irreducible rational polynomial $h(x,y,z) \in \mathbb{Q}[x,y,z]$. The framework $(G,p)$ with $p(v) \in \mathcal{M}$ for every $v \in V(G)$ is \textit{rigid on $\mathcal{M}$} if there exists $\varepsilon >0$ such that if $(G,p)$ is equivalent to $(G,q)$ and $\|p(v)-q(v)\|<\varepsilon$ and $q(v) \in \mathcal{M}$ for every $v \in V(G)$, then $(G,p)$ is congruent to $(G,q)$. An irreducible surface is called an \textit{irreducible surface of revolution} if it can be generated by rotating a continuous curve about a fixed axis. See \cite{NixonOwenPower12} for more information. \begin{thm}[Nixon, Owen and Power \cite{NixonOwenPower12,NixonOwenPower14}] \label{thm::nop} Let $\mathcal{M}$ be an irreducible surface of revolution.
Then a graph $G$ is rigid on $\mathcal{M}$ if and only if one of the following holds: \\$(i)$ $G$ is a complete graph, \\$(ii)$ $\mathcal{M}$ is a sphere and $G$ contains a spanning Laman graph, \\$(iii)$ $\mathcal{M}$ is a cylinder and $G$ contains two edge-disjoint spanning trees, or \\$(iv)$ $\mathcal{M}$ is not a cylinder or a sphere and $G$ contains two edge-disjoint spanning subgraphs $G_1,G_2$, where $G_1$ is a tree and every connected component of $G_2$ contains exactly one cycle. \end{thm} By the above theorem, the results in \cite{FHL22} actually imply spectral conditions for rigidity on the sphere. By Theorem~\ref{thm::1.2}, we have the following spectral condition for rigidity on irreducible surfaces of revolution that are not spheres. \begin{pro} Let $G$ be a connected graph with minimum degree $\delta\geq 4$ and order $n\geq 2\delta+3$. If $\rho(G)\geq \rho(B_{n,\delta+1}^{1})$, then $G$ is rigid on any irreducible surface of revolution that is not a sphere unless $G\cong B_{n,\delta+1}^{1}$. \end{pro} \begin{proof} By Theorem~\ref{thm::1.2}, $G$ contains two edge-disjoint spanning trees. Since $\delta\ge 4$, we have $|E(G)|\ge n\delta/2\ge 2n$. Thus $G$ has extra edges that are not in the two edge-disjoint spanning trees; adding one such edge to the second spanning tree yields a connected spanning subgraph with exactly one cycle. It follows that $G$ contains a spanning tree and a spanning subgraph with exactly one cycle that are edge-disjoint. By Theorem~\ref{thm::nop}, the result follows. \end{proof} \subsection{Nowhere-zero flows} The theory of integer flows was initiated by Tutte. For an orientation $D$ of a graph $G$ and for each vertex $v$, let $E_D^+(v)$ and $E_D^-(v)$ be the set of edges oriented away from $v$ and the set of edges oriented into $v$, respectively. A \textit{nowhere-zero $k$-flow} of a graph $G$ is an orientation $D$ together with a function $f: E(G)\rightarrow \{\pm 1, \pm 2,\ldots, \pm(k-1)\}$ such that $\sum_{e\in E_D^+(v)} f(e) = \sum_{e\in E_D^-(v)} f(e)$ for each vertex $v$.
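The flow definition above can be made concrete by brute force on a toy graph. The sketch below (pure Python; the choice of $K_4$ is ours) searches all assignments $f\colon E\to\{\pm1,\ldots,\pm(k-1)\}$ under a fixed orientation; reversing an edge corresponds to negating $f$, so one orientation suffices. It finds a nowhere-zero $4$-flow, while no nowhere-zero $3$-flow exists, consistent with Tutte's characterization that a cubic graph has a nowhere-zero $3$-flow if and only if it is bipartite.

```python
from itertools import product

# K_4 with a fixed orientation: each edge (u, v) oriented u -> v.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def has_nowhere_zero_flow(k):
    """Brute-force: does some f: E -> {+-1, ..., +-(k-1)} conserve
    flow (out minus in equals zero) at every vertex?"""
    values = [v for v in range(-(k - 1), k) if v != 0]
    for f in product(values, repeat=len(edges)):
        net = [0, 0, 0, 0]
        for (u, v), fv in zip(edges, f):
            net[u] += fv      # flow leaving u
            net[v] -= fv      # flow entering v
        if all(x == 0 for x in net):
            return True
    return False

print(has_nowhere_zero_flow(3), has_nowhere_zero_flow(4))  # -> False True
```

For instance, $f=(1,1,-2,2,-1,3)$ on the edge list above is one nowhere-zero $4$-flow the search can find.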
As a generalization of nowhere-zero $k$-flow, a \textit{nowhere-zero circular $k/d$-flow}, introduced by Goddyn, Tarsi and Zhang~\cite{GTZ98}, is a nowhere-zero $k$-flow such that the range of $f$ is contained in $\{\pm d, \pm (d+1),\ldots, \pm(k-d)\}$. The \textit{flow index $\phi(G)$} of a graph $G$ is the least rational number $r$ such that $G$ admits a nowhere-zero circular $r$-flow. The central problems in this research area are the three well-known flow conjectures proposed by Tutte. For applications of the spanning tree packing number in nowhere-zero flow theory, we refer readers to~\cite{LL19}. In particular, it is proved in~\cite{LXZ07} that if $\tau(G)\ge 3$, then $\phi(G)<4$; and in~\cite{HLL18} that if $\tau(G)\ge 4$, then $\phi(G)\le 3$. Thus, by Theorem~\ref{thm::1.2}, we can easily obtain sufficient conditions on the flow index via the spectral radius; however, these spectral conditions might not be best possible. It would be worthwhile to study the flow index directly from spectral perspectives in the future. \section{Concluding remarks}\label{sec::remarks} Theorem~\ref{thm::1.2} actually implies that $B_{n,\delta+1}^{\tau}$ is the unique graph that has the maximum spectral radius among all graphs of fixed order $n$ with minimum degree $\delta$ and spanning tree packing number $\tau$. Let $G$ be a minimum graph with $\tau(G)\ge k$ and of order $n$, that is, $G$ consists of exactly $k$ edge-disjoint spanning trees with no extra edges. This implies that $\tau(G)=k$ and $e(G)=k(n-1)$. We are interested in the maximum possible spectral radius of $G$. \begin{prob}\label{prob::minkst} Let $G$ be a minimum graph with $\tau(G)\ge k$ and of order $n$. For each $n\ge 4$ and each $k\ge 2$, determine the maximum possible spectral radius of $G$ and characterize extremal graphs. \end{prob} To attack Problem~\ref{prob::minkst}, noticing that $\delta(G)\ge k$ and $e(G)=k(n-1)$, one can easily obtain an upper bound on $\rho(G)$ by Lemma~\ref{lem::2.1}.
However, this upper bound may not be tight, since for most values of $n$, $G$ cannot be $k$-regular or a bidegreed graph in which each vertex is of degree either $k$ or $n-1$. To attain the maximum spectral radius, it seems that $G$ should be obtained from a $K_{2k}$ by repeatedly adding a new vertex together with $k$ incident edges until $G$ has $n$ vertices; in particular, we conjecture that the extremal graph is $K_{k}\nabla (K_{k}\cup (n-2k)K_1)$, where $\nabla$ and $\cup$ denote the join and the union of two graphs, respectively. We leave this as an open question. Nash-Williams~\cite{Nash64} studied the forest covering problem, seeking the minimum number of forests that cover the entire graph. This can be viewed as a dual of the spanning tree packing problem. The {\it arboricity} $a(G)$ is the minimum number of edge-disjoint forests whose union equals $E(G)$. \begin{thm}[Nash-Williams~\cite{Nash64}]\label{thm::5.1} Let $G$ be a connected graph. Then $a(G)\le k$ if and only if for any subgraph $H$ of $G$, $|E(H)|\le k(|V(H)|-1)$. \end{thm} Naturally, we have the following problem. \begin{prob}\label{prob::arbo} Find a tight spectral radius condition for a graph $G$ of order $n$ with $a(G)\le k$ and characterize extremal graphs. \end{prob} When $n\leq 2k$, Problem~\ref{prob::arbo} is trivial: any graph $G$ of order $n\leq 2k$ satisfies $a(G)\le k$, and so the extremal graph is $K_n$. To see this, note that for any subgraph $H$ of $G$ we have $|E(H)|\leq {|V(H)|\choose 2} \leq k(|V(H)|-1)$ when $n\leq 2k$, and the conclusion follows from Theorem~\ref{thm::5.1}. The situation becomes more involved for $n\geq 2k+1$. In fact, this case is even stronger than Problem~\ref{prob::minkst}, since if $G$ consists of exactly $k$ edge-disjoint spanning trees with no extra edges, we have $a(G) =\tau(G)=k$. Notice that $a(K_{k}\nabla (K_{k}\cup (n-2k)K_1))=k$ and $e(K_{k}\nabla (K_{k}\cup (n-2k)K_1))=k(n-1)$; thus we conjecture that the extremal graph with respect to
the spectral radius is also $K_{k}\nabla (K_{k}\cup (n-2k)K_1)$. This is left for possible future work. \iffalse \section{Some possible problems} \red{This is a temporary section for discussing and listing related problems.} \begin{prob} If a simple graph $G$ consists of exactly $k$ edge-disjoint spanning trees with maximum possible spectral radius, what is the structure of $G$? \end{prob} \red{For example, if $k=2$, it seems $G$ can be obtained from a $K_4$ by continuously adding a vertex and two edges in each step? In general, $G$ can be obtained from a $K_{2k}$ by continuously adding a vertex and $k$ edges step by step?} \medskip A {\bf $c$-forest} is a forest with exactly $c$ components. \begin{prob}[Cioab\u{a} and Wong~\cite{Wong}] Find a (tight) spectral condition for a graph $G$ that contains $k$ edge-disjoint spanning $c$-forests. \end{prob} \begin{prob} Find a (tight) spectral condition for a graph $G$ that contains two edge-disjoint spanning subgraphs $G_1,G_2$, where $G_1$ is a tree and every connected component of $G_2$ contains exactly one cycle. \red{It has an application in rigidity theory.} \end{prob} \begin{prob} Find a (tight) spectral condition for a graph $G$ so that $\tau(G-e)\ge k$ for every edge $e$ of $G$. (\red{Is this feasible?}) \end{prob} \begin{prob} Denote by $tG$ the multigraph obtained from $G$ by replacing every edge with $t$ parallel edges. Find a (tight) spectral condition for a graph $G$ so that $\tau(tG)\ge k$ (or $\tau(tG-e)\ge k$). \red{It has an application in rigidity theory.} \end{prob} \begin{prob} Any generalized edge connectivity version of Theorem~\ref{thm::3.1}? \end{prob} Motivated by Conjecture~\ref{conj1}, we have the following problem. To attack this problem, we need a result similar to Lemma~\ref{lem::3.1}. \begin{prob} Let $\mathcal{H}_{n}^{\tau,\delta}$ be the set of graphs of order $n$ with minimum degree $\delta$ and STP number $\tau$. What is the graph with maximum spectral radius in $\mathcal{H}_{n}^{\tau,\delta}$? \red{This can be solved by Theorem~\ref{thm::1.2}.} \end{prob} \fi \section*{Acknowledgements} Xiaofeng Gu was supported by a grant from the Simons Foundation (522728), and Huiqiu Lin was supported by the National Natural Science Foundation of China (No. 12011530064) and Natural Science Foundation of Shanghai (No. 22ZR1416300).
https://arxiv.org/abs/2207.04701
Spectral radius and edge-disjoint spanning trees
The spanning tree packing number of a graph $G$, denoted by $\tau(G)$, is the maximum number of edge-disjoint spanning trees contained in $G$. The study of $\tau(G)$ is one of the classic problems in graph theory. Cioabă and Wong initiated the investigation of $\tau(G)$ from spectral perspectives in 2012 and since then, $\tau(G)$ has been well studied using the second largest eigenvalue of the adjacency matrix in the past decade. In this paper, we further extend the results in terms of the number of edges and the spectral radius, respectively, and prove tight sufficient conditions to guarantee $\tau(G)\geq k$ with extremal graphs characterized. Moreover, we confirm a conjecture of Ning, Lu and Wang on characterizing graphs with the maximum spectral radius among all graphs with a given order as well as fixed minimum degree and fixed edge connectivity. Our results have important applications in rigidity and nowhere-zero flows. We conclude with some open problems.
https://arxiv.org/abs/1302.2076
On measures of symmetry and floating bodies
We consider the following measure of symmetry of a convex n-dimensional body K: $\rho(K)$ is the smallest constant for which there is a point x in K such that for partitions of K by an n-1-dimensional hyperplane passing through x the ratio of the volumes of the two parts is at most $\rho(K)$. It is well known that $\rho(K)=1$ iff K is symmetric. We establish a precise upper bound on $\rho(K)$; this recovers a 1960 result of Grunbaum. We also provide a characterization of equality cases (relevant to recent results of Nill and Paffenholz about toric varieties) and relate these questions to the concept of convex floating bodies.
\def\section{\@startsection {section}{1}{\z@}{-3.5ex plus-1ex minus -.2ex}{2.3ex plus.2ex}{\reset@font\large\bf}} \def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus-1ex minus-.2ex}{-.1em}{\reset@font\large\bf}} \def\subsubsection{\@startsection{subsubsection}{3}{\z@}{-3.25ex plus -1ex minus-.2ex}{-.1em}{\reset@font\normalsize\bf}} \makeatother \date{} \title{On measures of symmetry and floating bodies} \author{Stanislaw J. Szarek (Cleveland \& Paris) } \defS^{n-1}{S^{n-1}} \def\rightarrow{\rightarrow} \newcommand{{\mathbb{R}}}{{\mathbb{R}}} \newcommand{\R ^{+}}{{\mathbb{R}} ^{+}} \newcommand{\R ^n}{{\mathbb{R}} ^n} \newcommand{{\mathbb{N}}}{{\mathbb{N}}} \newtheorem{fact}{Fact} \newtheorem{thm}[fact]{Theorem} \newtheorem{prop}[fact]{Proposition} \newtheorem{lemma}[fact]{Lemma} \newtheorem{cor}[fact]{Corollary} \newtheorem{ntn}[fact]{Notation} \newtheorem{dfn}[fact]{Definition} \newtheorem{example}[fact]{Example} \newtheorem{claim}[fact]{Claim} \begin{document} \maketitle \begin{center} {\small{\sl Dedicated to Olek Pe{\l}czy{\'n}ski, a teacher and a friend}} \end{center} \begin{abstract} We consider the following measure of symmetry of a convex $n$-dimensional body $K$: $\rho(K)$ is the smallest constant for which there is a point $x \in K$ such that for partitions of $K$ by an $n-1$-dimensional hyperplane passing through $x$ the ratio of the volumes of the two parts is $\le \rho(K)$. It is well known that $\rho(K)=1$ iff $K$ is symmetric. We establish a precise upper bound on $\rho(K)$; this recovers a 1960 result of Gr\"unbaum. We also provide a characterization of equality cases (relevant to recent results of Nill and Paffenholz about toric varieties) and relate these questions to the concept of convex floating bodies. \end{abstract} \noindent {\bf 1. Introduction.} The structure of general $n$-dimensional (bounded) convex bodies is understood much less than that of the symmetric ones.
For example, if we endow each of these classes with the appropriate version of the Banach-Mazur distance, then the asymptotic order (as $n \rightarrow \infty$) of the diameter of the resulting compactum has been known for the symmetric case since Gluskin's seminal 1981 paper \cite{Gl}, while the corresponding problem in the general case is wide open, with the first non-trivial results having been obtained only in the last several years \cite{BLPS, R}. Various invariants have been proposed to explain the difference between these two classes; see, e.g., \cite{Gr1} for an early survey of related work. One such measure of symmetry (or rather of asymmetry) was mentioned to the author by Olek Pe{\l}czy{\'n}ski around the turn of the millennium. Given a convex body $K \subset \R ^n$ and $x \in K$, consider all partitions of $K$ into two parts by an $n-1$-dimensional hyperplane $H$ passing through $x$, and let $\rho(K,x)$ be the largest ratio of volumes of the two parts. Next, let $\rho(K) :=\min_{x\in K}{\rho(K,x)}$. Clearly $\rho(K) = 1$ if $K$ is centrally symmetric. (The reverse implication is also true but nontrivial, even for the larger class of star-shaped bodies; see \cite{S1} for details and \cite{Gro} for additional references.) Olek's question was to establish a dimension-free upper bound on $\rho(K)$. I came up with an argument (which also established a precise upper bound on $\rho(K)$ for $K \subset \R ^n$ and characterized the bodies for which that upper bound -- call it $\rho_n$ -- is attained) and wrote it up some time afterwards, but then realized that the question had been considered and solved by Gr\"unbaum in 1960 \cite{Gr0} and so I abandoned the project. However, it transpired very recently \cite{BB, NP} that Gr\"unbaum's result and a characterization of $K \subset \R ^n$ for which $\rho(K)=\rho_n$ were relevant to problems in toric geometry.
Moreover, since the question in \cite{Gr0} was stated slightly differently than above, it led to an apparently non-equivalent analysis of equality cases (see \cite{Gr0}, Remark 4(i), p. 1260), less suitable -- at least without additional work -- for the applications considered in \cite{NP}. Accordingly, I am posting the manuscript (with added references and minor editorial changes). \medskip \noindent {\bf 2. More background and the results.} The parameter $\rho(K)$ is related to another geometrical concept, the convex floating bodies of $K$. [To give meaning to the formulae and to avoid artificial anomalies, it should be understood that $K$ -- and all bodies above and in what follows -- is convex, compact and has a nonempty interior.] Let $\delta \in (0, 1/2]$; slightly modifying the original definition from \cite{SW1}, let us denote by $K^\delta$ the intersection of all half-spaces whose complements contain at most the proportion $\delta$ of the volume of $K$. As is easy to see, if we call $\phi(K)$ the largest number $\delta$ for which the convex floating body $K^\delta$ is nonempty, then $\phi(K)=(\rho(K)+1)^{-1}$. The existence of a universal (i.e., independent of $n$ and $K$) strictly positive lower bound for $\phi(K)$ (and, analogously, universal upper bound for $\rho(K)$) has been a part of the folklore for some time \cite{Gi, SW2}. Here we prove the following ``isometric'' result. \begin{thm} Let $n \in {\mathbb{N}} $ and let $K \subset \R ^n$ be a convex, compact body with nonempty interior. Let $c$ be the centroid of $K$ and let $H$ be an $n-1$-dimensional hyperplane which passes through $c$, and thus divides $K$ into two parts. Then the ratio of volumes of the two parts is $ \le (1+1/n)^n-1=: \rho_n$ (which is $< e -1 < 1.7183$). Moreover, we have an equality iff $K$ is a ``pyramid,'' i.e., $K = {\rm conv}(\{v\} \cup B)$, where $B$ is an $n-1$-dimensional convex body (the ``base'') and $v$ the vertex, and $H$ is parallel to $B$.
\end{thm} Similar statements about $\rho(K)$ and $\phi(K)$ are then simple consequences. \begin{cor} In the notation and under the hypotheses of Theorem 1 we have \smallskip \noindent {\rm (i) } $\rho(K) \le \rho_n$; moreover, $\rho(K,c) \le \rho_n$, with equality iff $K$ is a pyramid and the maximizing hyperplane $H$ is parallel to a base of the pyramid. \smallskip \noindent {\rm (ii) } $\phi(K) \ge (1+1/n)^{-n} =:\delta_n$; moreover, $c \in K^{\delta_n}$ with $c \in \partial K^{\delta_n}$ iff $K$ is a pyramid. \end{cor} The estimates $\rho(K) \le \rho_n$ and $\phi(K) \ge \delta_n$ are, in general, best possible as seen from the example of a simplex. \medskip \noindent {\bf 3. The proofs.} {\em Proof of Theorem 1.} Without loss of generality we may assume that the centroid $c$ of $K$ is at the origin. Let $\theta \in S^{n-1}$, $H = \theta^\perp$ and consider the function $f = f_\theta : {\mathbb{R}} \rightarrow \R ^{+}$ defined by \begin{equation} \label{sections} f(t) := {\rm vol_{n-1}} (K \cap (H+t\theta)). \end{equation} It then follows that $f$ is upper semi-continuous (since $K$ is closed) and supported on some bounded interval $[-a, b]$ with $a, b>0$. (In fact it will follow from our arguments -- and is likely well known -- that the ratio $|a|/|b| \in [1/n,n]$.) Clearly, ${\rm vol_{n}} (K) = \int_{-a}^b {f(t) \, dt }$ and \begin{equation} \label{rhoeq} \rho(K,c) = \max_{\theta \in S^{n-1}} {\frac{\int_0^b {f(t) \, dt}}{\int_{-a}^0 {f(t) \, dt}}}. \end{equation} Additionally, the centroid being at the origin is equivalent to \begin{equation} \label{centroid} \int_{-a}^b {tf(t) \, dt } = 0 . \end{equation} The only other property of $f$ we shall need is that $h := f^{1/{(n-1)}}$ is concave on $[-a,b]$, which is a consequence of the Brunn-Minkowski inequality. We note that the concavity implies continuity on $(-a,b)$ and lower semi-continuity, hence continuity on $[-a,b]$. The Theorem will follow easily from the following two claims.
\begin{claim} Let $n \in {\mathbb{N}}$ and $a,b > 0$. For any $\theta \in S^{n-1}$ and for any continuous function $f : [-a,b] \rightarrow \R ^{+}$ such that $h :=f^{1/{(n-1)}}$ is concave, there exists a closed convex body $K \subset \R ^n$ such that $f$ is obtained from $K, \theta$ via {\rm (\ref{sections})}. If, additionally, {\rm (\ref{centroid})} holds, then $K$ may be chosen so that its centroid is at the origin. Finally, $h$ is affine with $h(-a)=0$ (or $h(b)=0$) iff $K$ is a pyramid and $\theta $ is perpendicular to a base $B$; in that case, if {\rm (\ref{centroid})} also holds, the ratio from {\rm (\ref{rhoeq})} equals $(1+1/n)^n-1$. \end{claim} \begin{claim} Let $a, b >0$. Among continuous functions on $[-a,b]$, strictly positive on $(-a,b)$, verifying {\rm (\ref{centroid})} and such that $h :=f^{1/{(n-1)}}$ is concave on $[-a,b]$, the largest value of the ratio appearing in {\rm (\ref{rhoeq})} is achieved iff $h$ is affine on $[-a,b]$ with $h(-a) = 0$. \end{claim} Claim 3 shows that investigating $\rho(K)$ is fully equivalent to investigating the ratio from {\rm (\ref{rhoeq})} for functions verifying our assumptions. Its proof is based on elementary geometric considerations. To construct $K$ starting from $f$, we choose any $n-1$-dimensional convex body $B_0$ in $H = \theta^\perp$ with ${\rm vol_{n-1}} (B_0)=1$ and $0 \in B_0$, and set $K:= \bigcup_{t \in [-a,b]} t\theta + h(t)B_0$. The ``only if'' part in the last assertion follows from the analysis of equality cases in the Brunn-Minkowski inequality (it occurs ``essentially iff'' the two sets are homothetic, see \cite{S1}, Theorem 6.1.1). The details are left to the reader. \medskip The proof of Claim 4 is also elementary, but less obvious. We will use the following lemma, variants of which exist in the literature. Similar arguments were employed (independently of this note) in \cite{F}, see also \cite{FG} for a more conceptualized application of closely related phenomena.
\begin{lemma} Let $M >0$, $m \in {\mathbb{R}}$ and $n \in {\mathbb{N}}$. We consider the set $\mathcal{H}$ of functions $h : {\mathbb{R}} \rightarrow \R ^{+}$ which verify \newline {\rm (i) } the support of $h$ is the interval $[0,b]$ (for some $b>0$) \newline {\rm (ii) } $h$ is continuous and concave on $[0,b]$ \newline {\rm (iii) } $h(0)=1$ and the right derivative of $h$ at $0$ is $\leq m$ \newline {\rm (iv) } if $f := h^{n-1}$, then $\int_0^b {tf(t) \, dt } = M$. \newline The set $\mathcal{H}$ is nonempty iff $m \ge -1/\sqrt{Mn(n+1)}$. In that case, set $\mu(h):= \int_0^b {f(t) \, dt}$ for $h \in \mathcal{H}$. The minimal value of $\mu(h)$ is attained iff $h$ is affine on its support $[0,b]$ with $h(b)=0$. The maximal value of $\mu(h)$ is attained iff $h(t) = 1+mt $ on the support of $h$. \end{lemma} \noindent {\em Proof of Claim 4.} Since the ratio in (\ref{rhoeq}) doesn't change if $f(\cdot)$ is replaced by $\alpha f(\beta\,\cdot)$, it is enough to consider $f$'s whose support verifies $[-1/n,1/n] \subset [-a,b] \subset [-1,1]$ and such that $f(1)=1$ (for the first inclusion, see the comments in the paragraph following (\ref{sections}) and at the very end of this note). Concavity of $f^{1/(n-1)}$ then gives a lower bound on $\int_{-a}^0 {f(t) \, dt}$ and an upper bound on $\|f\|_{\infty}$ (both dependent on $n$). It follows that the set of the functions $f$ in question is compact (say, in the $L_1$ metric, which is relevant here) and hence that the supremum of the ratio in (\ref{rhoeq}) is attained. (This can also be proved in a variety of ways, including from John's theorem.) Let $f_0$ be such an extremal function, with support $[-a_0,b_0]$; we shall show that it must be of the form indicated in Claim 4.
Indeed, if $f_0^{1/{(n-1)}}$ were not affine on $[-a_0,0]$ with $f_0(-a_0)=0$, we could apply the Lemma with $h(\cdot) = f_0(-\,\cdot)^{1/{(n-1)}}: [0,a_0] \rightarrow \R ^{+}$, $M=-\int_{-a_0}^0 {tf_0(t) \, dt}$ and $m$ equal to the right derivative of $h$ at $0$ to obtain an extremal $h_1$ (supported on a possibly different interval $[0,a_1]$) for which $\mu(h_1) < \mu(h)$. Defining $f_1$ to coincide with $f_0$ on $[0,b_0]$ and with $h_1(-\, \cdot)^{n-1}$ on $[-a_1,0]$ we would get a function for which the ratio from (\ref{rhoeq}) was strictly larger than for $f_0$. At the same time, the conditions (iii) and (iv) from the Lemma (together with our choices of $m, M$) would assure that, respectively, $f_1^{1/{(n-1)}}$ was concave on $[-a_1,b_0]$ and that (\ref{centroid}) was satisfied. This shows that $f_0(t) = (1+t/a_0)^{n-1}$ for $t \in [-a_0,0]$. A similar argument applied to $h= f_0^{1/{(n-1)}}: [0,b_0] \rightarrow \R ^{+}$, $m=1/a_0$ and the same $M$ shows that $f_0^{1/{(n-1)}}$ is affine on the entire interval $[-a_0,b_0]$, which concludes the proof of the Claim. (In fact we showed directly that affine $h$'s give the extremal value of the ratio from (\ref{rhoeq}), so we didn't really need to know that the supremum was attained.) \hfill $\Box$ \bigskip \noindent {\em Sketch of the proof of Lemma 5.} First, let us point out that the condition $m \geq -1/\sqrt{Mn(n+1)}$ is equivalent to $\mathcal{H} \neq \emptyset$. Indeed, this is easily deduced from the observation that if $h$ is supported on $[0,b]$ and defined there by $h(t) = 1-t/b$, then we have the relationship $b=\sqrt{Mn(n+1)}$. The assertion of the Lemma is then ``essentially obvious from physical considerations.'' To obtain maximal ``mass'' $\mu(h)$ for a given ``moment'' $M$ (the constraint (iv)), we need to place the mass as close to the axis $t=0$ as allowed by the concavity condition (ii) and by (iii). To minimize the mass for a fixed moment, we need to place the mass as far from $t=0$ as possible subject to (ii) and (iii).
This is easily formalized. A very similar argument also allows one to determine the largest possible value of $b$ for a given $M$, and the smallest possible value of $b$ (if at all possible) given $m$ and $M$, and to subsequently deduce that the ratio of the two is at most $n$, thus implying the bounds on the ratio $|a|/|b|$ stated in the paragraph following (\ref{sections}) and mildly used in the proof of Claim 4. \hfill $\Box$ \medskip \noindent{\small {\em Acknowledgement} \ Supported in part by grants from NSF~(U.S.A.) and by an International Cooperation Grant from Min. des Aff. Etrang. (France) \& KBN (Poland).} \small
https://arxiv.org/abs/0912.5000
Classification of Q-trivial Bott manifolds
A Bott manifold is a closed smooth manifold obtained as the total space of an iterated $\mathbb{C}P^1$-bundle starting with a point, where each $\mathbb{C}P^1$-bundle is the projectivization of a Whitney sum of two complex line bundles. A \emph{$\mathbb{Q}$-trivial Bott manifold} of dimension $2n$ is a Bott manifold whose cohomology ring is isomorphic to that of $(\mathbb{C}P^1)^n$ with $\mathbb{Q}$-coefficients. We find all diffeomorphism types of $\mathbb{Q}$-trivial Bott manifolds and show that they are distinguished by their cohomology rings with $\mathbb{Z}$-coefficients. As a consequence, we see that the number of diffeomorphism classes in $\mathbb{Q}$-trivial Bott manifolds of dimension $2n$ is equal to the number of partitions of $n$. We even show that any cohomology ring isomorphism between two $\mathbb{Q}$-trivial Bott manifolds is induced by a diffeomorphism.
\section{Introduction} A Bott tower of height $n$ is a sequence of $\mathbb{C}P^1$-bundles \begin{equation} \label{eqn:Bott tower} B_n\stackrel{\pi_n}\longrightarrow B_{n-1} \stackrel{\pi_{n-1}}\longrightarrow \dots \stackrel{\pi_2}\longrightarrow B_1 \stackrel{\pi_1}\longrightarrow B_0=\{\text{a point}\}, \end{equation} where each $\pi_i\colon B_i\to B_{i-1}$ for $i=1,\dots,n$ is the projectivization of a Whitney sum of two complex line bundles over $B_{i-1}$. We call $B_i$ an \emph{$i$-stage Bott manifold} and are concerned with the diffeomorphism type of the $n$-stage Bott manifold $B_n$. Note that even if two Bott towers of height $n$ are different, their $n$-stage Bott manifolds can be diffeomorphic. If the fiber bundles in \eqref{eqn:Bott tower} are all trivial, then $B_n$ is diffeomorphic to $(\mathbb{C}P^1)^n$. It is shown in \cite{Ma-Pa-2008} that if the cohomology ring of $B_n$ is isomorphic to that of $(\mathbb{C}P^1)^n$ with $\mathbb{Z}$-coefficients as graded rings, then $B_n$ is diffeomorphic to $(\mathbb{C} P^1)^n$ and moreover the fiber bundles in \eqref{eqn:Bott tower} are all trivial. We say that $B_n$ is \emph{$\mathbb{Q}$-trivial} if its cohomology ring is isomorphic to that of $(\mathbb{C}P^1)^n$ with $\mathbb{Q}$-coefficients as graded rings. In this paper, we shall find all diffeomorphism types of $\mathbb{Q}$-trivial Bott manifolds and show that they are diffeomorphic if and only if their cohomology rings with $\mathbb{Z}$-coefficients are isomorphic as graded rings (Theorem~\ref{theorem:Q-trivial Bott manifold}). As a consequence, we see that the number of diffeomorphism classes in $\mathbb{Q}$-trivial Bott manifolds of dimension $2n$ is equal to the number of partitions of $n$. We also prove that any automorphism of the cohomology ring of a $\mathbb{Q}$-trivial Bott manifold is induced by a diffeomorphism. 
This implies that any cohomology ring isomorphism between two $\mathbb{Q}$-trivial Bott manifolds is induced by a diffeomorphism since we have already established that the diffeomorphism types of $\mathbb{Q}$-trivial Bott manifolds are distinguished by their cohomology rings. Our study is motivated by the so-called \emph{cohomological rigidity problem} for toric manifolds. A toric manifold is a non-singular compact complex algebraic variety with an algebraic torus action having a dense orbit. The cohomological rigidity problem for toric manifolds asks whether the topological types of toric manifolds are distinguished by their cohomology rings or not (see \cite{Ma-Su-2008}). This problem is open, but we have some affirmative partial solutions to the problem for (generalized) Bott manifolds in \cite{Ma-Pa-2008}, \cite{Ch-Ma-Su-2010}, \cite{ch-su-pre} and \cite{Ch-Par-Su-pre}. The result of this paper provides further affirmative evidence for the problem in the case of Bott manifolds. One can consider the real analogue of Bott towers and Bott manifolds, but the cohomological rigidity for \emph{real} Bott manifolds is established with $\mathbb{Z}/2$-coefficients, see \cite{Ka-Ma-2009} and \cite{Masuda-2008}. This paper is organized as follows. In Section \ref{section:Bott manifolds}, we review Bott manifolds and prepare several lemmas to prove our main theorems. We find all diffeomorphism types of $\mathbb{Q}$-trivial Bott manifolds in Section \ref{section : Q-trivial Bott manifold} and prove the cohomological rigidity for $\mathbb{Q}$-trivial Bott manifolds in Section \ref{section: cohomological rigidity of Q-trivial}. Section~\ref{sect:auto} is devoted to proving that any automorphism of the cohomology ring of a $\mathbb{Q}$-trivial Bott manifold is induced by a diffeomorphism. Throughout this paper, cohomology is taken with $\mathbb{Z}$-coefficients unless otherwise stated.
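As a quick numerical aside (an illustration, not part of the paper), the count of diffeomorphism classes of $\mathbb{Q}$-trivial Bott manifolds of dimension $2n$ stated above is the partition number $p(n)$, which can be tabulated with a short dynamic program:

```python
def partition_count(n):
    """Number p(n) of partitions of n, computed by the standard
    coin-style dynamic program over allowed part sizes 1..n."""
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]
```

So there are, for instance, $p(4)=5$ diffeomorphism classes of $\mathbb{Q}$-trivial Bott manifolds of dimension $8$.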
\section{Cohomology of Bott manifolds} \label{section:Bott manifolds} We begin by recalling some general facts on projective bundles. Let $\pi\colon E\to B$ be a complex vector bundle over a smooth manifold $B$ and let $P(E)$ be the projectivization of $E$. \begin{lemma}\cite[Lemma 2.1]{Ch-Ma-Su-2010} \label{lemm:line} Let $B$ and $E$ be as above and let $L$ be a complex line bundle over $B$. We denote by $E^*$ the complex vector bundle dual to $E$. Then both $P(E^*)$ and $P(E\otimes L)$ are isomorphic to $P(E)$ as fiber bundles over $B$; in particular, they are diffeomorphic. \end{lemma} \begin{proof} We shall reproduce the proof given in \cite{Ch-Ma-Su-2010} for the reader's convenience. Choose a Hermitian metric $\langle\ ,\ \rangle$ on $E$, which is anti-$\mathbb{C}$-linear on the first entry and $\mathbb{C}$-linear on the second entry, and define a map $\tilde b\colon E\to E^*$ by $\tilde b(u):=\langle u,\ \rangle$. This map is not $\mathbb{C}$-linear but anti-$\mathbb{C}$-linear, so it induces a map $b\colon P(E)\to P(E^*)$, which gives an isomorphism as fiber bundles. For each $x\in B$, we choose a non-zero vector $v_x$ from the fiber of $L$ over $x$ and define a map $\tilde c\colon E\to E\otimes L$ by $\tilde c(u_x):=u_x\otimes v_x$ where $u_x$ is an element of the fiber of $E$ over $x$. The map $\tilde c$ depends on the choice of $v_x$'s but the induced map $c\colon P(E)\to P(E\otimes L)$ does not because $L$ is a line bundle. It is easy to check that $c$ gives an isomorphism of $P(E)$ and $P(E\otimes L)$ as fiber bundles over $B$. \end{proof} \begin{remark} \label{rema:line} The bundle map $b\colon P(E)\to P(E^*)$ does not preserve the canonical complex structures on the fibers and the pullback of the tautological line bundle over $P(E^*)$ by $b$ is complex conjugate to the tautological line bundle over $P(E)$ since $\tilde b$ is anti-$\mathbb{C}$-linear.
On the other hand, the bundle map $c\colon P(E)\to P(E\otimes L)$ above preserves the canonical complex structure on the fibers and pulls back the tautological line bundle over $P(E\otimes L)$ to that over $P(E)$. \end{remark} If $H^{odd}(B)=0$ (and this is the case for Bott manifolds), then $H^*(P(E))$ is a free module over $H^*(B)$ via $\pi^*\colon H^*(B)\to H^*(P(E))$ and the Borel-Hirzebruch formula \cite[(2) on p.515]{bo-hi58} tells us that \begin{equation} \label{eqn:BH} H^\ast(P(E)) = H^\ast(B)[x]/\big(\sum_{i=0}^m (-1)^ic_{i}(E)x^{m-i}\big), \end{equation} where $m$ is the fiber dimension of $E$, $c_i(E)$ denotes the $i$-th Chern class of $E$, and $x$ denotes the first Chern class of the tautological line bundle over $P(E)$. Moreover, the tangent bundle $T_fP(E)$ along the fibers of $P(E)\to B$ admits a canonical complex structure since each fiber is a complex projective space, and with this complex structure its total Chern class is given by \begin{equation} \label{eqn:tf} c(T_fP(E))=\sum_{i=0}^m(1-x)^{m-i}c_i(E). \end{equation} Now we consider the Bott tower \eqref{eqn:Bott tower}. Each fiber bundle $\pi_j\colon B_j\to B_{j-1}$ for $j=1,\dots,n$ is the projectivization of a Whitney sum of two complex line bundles by definition and we may assume that one of the two line bundles is trivial by Lemma~\ref{lemm:line}. Therefore, one can express \[ \text{$B_j=P(\underline{\mathbb{C}}\oplus\gamma^{\alpha_j})$ with $\alpha_j\in H^2(B_{j-1})$,} \] where $\underline{\mathbb{C}}$ denotes the trivial complex line bundle and $\gamma^{\alpha_j}$ denotes the complex line bundle over $B_{j-1}$ with $\alpha_j$ as the first Chern class. Note that $\alpha_1=0$ since $B_0$ is a point. Let $x_j$ be the first Chern class of the tautological line bundle over $B_j$. Then it follows from \eqref{eqn:BH} that $$ H^\ast(B_j) = H^\ast(B_{j-1})[x_j]/ \big(x_j^2 = \alpha_j x_j\big). 
$$ Using this formula inductively on $j$ and regarding $H^*(B_j)$ as a graded subring of $H^*(B_n)$ through the projections in \eqref{eqn:Bott tower}, we see that \begin{equation} \label{eqn:HBn} H^*(B_n)=\mathbb{Z}[x_1, \ldots, x_n]/\big(x_j^2 = \alpha_jx_j\mid j=1,\dots,n\big). \end{equation} Sometimes it is convenient and helpful to express \[ \alpha_j=\sum_{i=1}^{j-1} A^i_j x_i \quad\text{with $A^i_j\in \mathbb{Z}$} \] and form an upper triangular matrix of size $n$ with zero diagonals: $$ A=\left( \begin{array}{ccccc} 0 & A^1_2 & A^1_3 &\cdots & A^1_n \\ & 0 & A^2_3 & \cdots & A^2_n \\ & & \ddots & \ddots& \vdots \\ & & &0 &A^{n-1}_n\\ & & & &0 \end{array} \right). $$ Let $S^1$ and $S^3$ denote the unit sphere of $\mathbb{C}$ and $\mathbb{C}^2$ respectively. Using the matrix $A$, one can describe $B_n$ as the quotient of $(S^3)^n$ by a free action of $(S^1)^n$ defined by \begin{equation} \label{eqn:quotient} \begin{split} &(t_1,\dots,t_n)\cdot \big((z_1,w_1),\dots,(z_j,w_j),\dots,(z_n,w_n)\big)\\ =&\big((t_1z_1,t_1w_1),\dots,(t_jz_j,(\prod_{i=1}^{j-1}t_i^{-A^i_j})t_jw_j),\dots,(t_nz_n,(\prod_{i=1}^{n-1}t_i^{-A^i_n})t_nw_n)\big) \end{split} \end{equation} where $(t_1,\dots,t_n)\in (S^1)^n$ and $(z_j,w_j)$ denotes the coordinate of the $j$th component of $(S^3)^n$. In fact, the projections $$(S^3)^n\to (S^3)^{n-1}\to\dots \to S^3\to \text{\{a point\}}$$ defined by dropping the last factor at each stage induce the Bott tower \eqref{eqn:Bott tower}. The next lemma and corollary are tricks to simplify algebraic computations. An ordered pair $(z, \bar{z})$ of elements in $H^2(B_n)$ is said to be \emph{vanishing} if $z\bar{z} = 0$ and \emph{primitive} if both $z$ and $\bar{z}$ are primitive. Note that $(x_j,x_j-\alpha_j)$ is a primitive vanishing pair for each $j$ since $x_j^2=\alpha_jx_j$.
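The relations in \eqref{eqn:HBn} are easy to manipulate mechanically. As a small sanity check (an illustration, not part of the paper), the following snippet multiplies degree-two classes in $H^*(B_2)$ for a Hirzebruch surface, i.e., in the ring $\mathbb{Z}[x_1,x_2]/(x_1^2,\ x_2^2 - ax_1x_2)$; the value $a=3$ is an arbitrary illustrative choice.

```python
A12 = 3  # illustrative value of A^1_2, so alpha_2 = A12 * x1 and alpha_1 = 0

def mult_h2(z, w, a=A12):
    """Product of z = z1*x1 + z2*x2 and w = w1*x1 + w2*x2 in
    H^*(B_2) = Z[x1, x2]/(x1^2 = 0, x2^2 = a*x1*x2); the product lies
    in the top group H^4, spanned by x1*x2, so we return its coefficient."""
    z1, z2 = z
    w1, w2 = w
    # cross terms give (z1*w2 + z2*w1)*x1*x2, and x2^2 reduces to a*x1*x2
    return z1 * w2 + z2 * w1 + a * z2 * w2

# (x2, x2 - alpha_2) is a vanishing pair, reflecting x2^2 = alpha_2 * x2:
print(mult_h2((0, 1), (-A12, 1)))  # -> 0
```

Likewise $2x_2-\alpha_2$ squares to zero, matching the corollary on square-zero elements below.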
\begin{lemma}\label{lemma:masuda's tricky} A primitive vanishing pair $(z, \bar{z})$ is of the form $$ (ax_j+ u, \pm (a(x_j - \alpha_j) - u)) $$ for some $j$, where $a$ is a non-zero integer, $u$ is a linear combination of $x_i$'s with $i<j$, and $u(u+a \alpha_j) = 0$. \end{lemma} \begin{proof} Set $z= ax_j + u$ (resp. $\bar{z} = bx_k + v$), where $a$ (resp. $b$) is a non-zero integer and $u$ (resp. $v$) is a linear combination of $x_i$'s with $i<j$ (resp. $i<k$). If $k \neq j$, then the $abx_jx_k$ term in $z\bar{z}$ survives in $H^\ast(B_n)$ because of \eqref{eqn:HBn}, hence $k=j$. Therefore, \begin{equation} \label{eqn:zz} 0=z\bar{z} = abx_j^2 + (av + bu)x_j + uv = (ab \alpha_j + av + bu)x_j + uv. \end{equation} Since $u$ and $v$ are linear combinations of $x_i$'s with $i<j$, the identity \eqref{eqn:zz} implies that \begin{equation} \label{eqn:vanish} \text{$ab \alpha_j + av +bu =0$\quad and\quad $uv = 0$.} \end{equation} The former identity in \eqref{eqn:vanish} shows that $bu$ is divisible by $a$. However, $u$ is not divisible by any nontrivial factor of $a$ since $z=ax_j+u$ is primitive. Hence $a \mid b$. Similarly, $av$ is divisible by $b$ and hence $b \mid a$. Therefore, $b = \pm a$ and hence $v = \mp (u +a\alpha_j)$ by the former identity of \eqref{eqn:vanish}. This proves the first statement in the lemma because $\bar z=bx_j+v$. The last identity in the lemma follows from the latter identity of \eqref{eqn:vanish} since $v=u+a\alpha_j$ up to sign. \end{proof} \begin{corollary}\label{corollary: squared zero elements} A square zero primitive element in $H^2(B_n)$ is either $x_j - \frac{1}{2}\alpha_j$ or $2x_j - \alpha_j$ up to sign for some $j$, where $\alpha_j^2 = 0$ in both cases. In particular, the number of square zero primitive elements in $H^2(B_n)$ up to sign is equal to the number of $\alpha_j$'s with $\alpha_j^2 = 0$.
\end{corollary} \begin{proof} Since $z = \bar{z}$ in the proof of Lemma~\ref{lemma:masuda's tricky}, either $2u = -a \alpha_j$ or $2x_j = \alpha_j$. But the latter case does not occur since $\alpha_j$ is a linear combination of $x_i$'s with $i<j$. Hence, $2u = -a\alpha_j$. Thus, it follows from the primitiveness of $z$ that $z$ must be either $x_j-\frac{1}{2}\alpha_j$ or $2x_j - \alpha_j$ up to sign. Since $u(u+a\alpha_j)=0$ and $2u=-a\alpha_j$, we have $\alpha_j^2=0$, proving the corollary. \end{proof} \section{$\mathbb{Q}$-trivial Bott manifolds} \label{section : Q-trivial Bott manifold} The purpose of this section is to classify $\mathbb{Q}$-trivial Bott manifolds. We freely use the notation in Section~\ref{section:Bott manifolds}. \begin{proposition}\label{proposition:Q-trivial Bott manifold} $B_n$ is $\mathbb{Q}$-trivial if and only if $\alpha_j^2 = 0$ in $H^*(B_n)$ for all $j=1, \ldots, n$. In particular, if $B_n$ is $\mathbb{Q}$-trivial, then every Bott manifold $B_j$ in the tower \eqref{eqn:Bott tower} is $\mathbb{Q}$-trivial. \end{proposition} \begin{proof} If $\alpha_j^2 =0$, then $( x_j - \frac{\alpha_j}{2})^2=0$ in $H^\ast(B_n; \mathbb{Q})$ because $x_j^2=\alpha_j x_j$. Since $ x_j - \frac{\alpha_j}{2}$ for $j=1,\dots,n$ generate $H^*(B_n;\mathbb{Q})$ as a graded ring, this shows that $B_n$ is $\mathbb{Q}$-trivial. Conversely, if $B_n$ is $\mathbb{Q}$-trivial, there are $n$ primitive elements in $H^2(B_n)$ up to sign whose squares vanish. By Corollary \ref{corollary: squared zero elements}, the number of $\alpha_j$'s whose squares vanish is also $n$, which implies the converse. \end{proof} \begin{example} \label{example:Hirzebruch surface} For $a\in \mathbb{Z}$, let $\Sigma_a = P(\underline{\mathbb{C}} \oplus \gamma^{ax_1})$, where $\gamma^{ax_1}$ is the complex line bundle over $\mathbb{C}P^1=B_1$ whose first Chern class is $ax_1\in H^2(\mathbb{C} P^1)$.
$\Sigma_a$ is called a \emph{Hirzebruch surface}, first studied by Hirzebruch in \cite{Hirzebruch-1951}. Note that $$ H^\ast(\Sigma_a;\mathbb{Z}) = \mathbb{Z}[x_1, x_2]/ (x_1^2=0,\ x_2^2 =ax_1x_2), $$ so that $\alpha_1 = 0$ and $\alpha_2 = ax_1$ in this case. Since the squares of $\alpha_1$ and $\alpha_2$ are both $0$, $\Sigma_a$ is $\mathbb{Q}$-trivial. As is well-known, $\Sigma_a$ is diffeomorphic to $\mathbb{C}P^1 \times \mathbb{C}P^1$ if $a$ is even and to $\mathbb{C}P^2 \sharp \overline{\mathbb{C}P^2}$ if $a$ is odd. \end{example} Set $\mathcal{H}_1 = \mathbb{C}P^1$ and $\mathcal{H}_2 = \Sigma_1$, and let $\pi_2 : \mathcal{H}_2 \to \mathcal{H}_1$ be the canonical projection. We consider the pullback bundle $\pi_3 : \mathcal{H}_3 \to \mathcal{H}_2$ of $\pi_2:\mathcal{H}_2 \to \mathcal{H}_1$ via $\pi_2$; \begin{equation} \label{eqn:H32} \begin{CD} \mathcal{H}_3 @>\rho_3>> \mathcal{H}_2 = P(\underline{\mathbb{C}} \oplus \gamma^{x_1}) \\ @VV{\pi_3}V @VV{\pi_2}V\\ \mathcal{H}_2 = P(\underline{\mathbb{C}} \oplus \gamma^{x_1}) @>{\pi_2}>> \mathcal{H}_1 = \mathbb{C}P^1 \end{CD} \end{equation} where $\rho_3$ denotes the induced bundle map. Then $\mathcal{H}_3$ is a $3$-stage Bott manifold; in fact, $\mathcal{H}_3 = P(\underline{\mathbb{C}} \oplus \gamma^{x_1})$, where $\underline{\mathbb{C}}$ and $\gamma^{x_1}$ are both regarded as complex line bundles over $\mathcal{H}_2$. Therefore, the matrix corresponding to the Bott tower $$\mathcal{H}_3\stackrel{\pi_3}\longrightarrow \mathcal{H}_2\stackrel{\pi_2}\longrightarrow \mathcal{H}_1\stackrel{\pi_1}\longrightarrow \{\text{a point}\}$$ is given by $$ \left( \begin{array}{ccc} 0 & 1 & 1 \\ & 0 & 0 \\ & & 0 \\ \end{array} \right). $$ Since the pullback of the tautological line bundle over $\mathcal{H}_2$ by $\rho_3$ in \eqref{eqn:H32} is the tautological line bundle over $\mathcal{H}_3$, we have $\rho_3^*(x_2)=x_3$, while $\rho_3^*(x_1)=x_1$, which follows from the commutativity of the diagram \eqref{eqn:H32}.
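For example, applying \eqref{eqn:BH} at each stage of the tower $\mathcal{H}_3\to\mathcal{H}_2\to\mathcal{H}_1$ gives
\[
H^\ast(\mathcal{H}_3)=\mathbb{Z}[x_1,x_2,x_3]/\big(x_1^2,\ x_2^2-x_1x_2,\ x_3^2-x_1x_3\big),
\]
so that $\alpha_1=0$ and $\alpha_2=\alpha_3=x_1$, in accordance with the matrix above.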
Inductively, we shall define $\mathcal{H}_n$ as follows: \begin{equation} \label{eqn:suyoung_tower} \begin{CD} \mathcal{H}_n @>\rho_n>> \mathcal{H}_{n-1} @>\rho_{n-1}>> \dots @>\rho_4>> \mathcal{H}_3 @>\rho_3>> \mathcal{H}_2\\ @VV{\pi_n}V @VV{\pi_{n-1}}V @. @VV{\pi_3}V @VV{\pi_2}V\\ \mathcal{H}_{n-1} @>{\pi_{n-1}}>> \mathcal{H}_{n-2} @>{\pi_{n-2}}>> \dots @>\pi_3>> \mathcal{H}_2 @>{\pi_2}>> \mathcal{H}_1. \end{CD} \end{equation} Note that \begin{equation} \label{eqn:rhon} \mathcal{H}_n\stackrel{\pi_n}\longrightarrow \mathcal{H}_{n-1}\stackrel{\pi_{n-1}}\longrightarrow \dots \stackrel{\pi_2}\longrightarrow \mathcal{H}_1 \stackrel{\pi_1}\longrightarrow \{\text{a point}\} \end{equation} is a Bott tower of height $n$ corresponding to the $n\times n$-matrix \begin{equation} \label{eqn:matrixHn} \left( \begin{array}{ccccc} 0 & 1 & 1 &\cdots & 1 \\ & 0 & 0 & \cdots & 0 \\ & & 0 & \cdots & 0 \\ & & & \ddots & \vdots \\ & & & & 0 \\ \end{array} \right) \end{equation} and \begin{equation} \label{eqn:Hn} H^\ast(\mathcal{H}_n) = \mathbb{Z}[x_1, \ldots, x_n]/ (x_1^2 =0,\ x_j^2 =x_1x_j \text{ for }j=2, \ldots, n), \end{equation} so that $\alpha_1=0$ and $\alpha_j = x_1$ for all $j=2, \ldots, n$. Since $\alpha_j^2 = 0$ for any $j$, $\mathcal{H}_n$ is a $\mathbb{Q}$-trivial Bott manifold by Proposition \ref{proposition:Q-trivial Bott manifold}. We also note that $\rho_j\colon \mathcal{H}_j\to \mathcal{H}_{j-1}$ $(j>2)$ is a bundle map and pulls back the tautological line bundle over $\mathcal{H}_{j-1}$ to that of $\mathcal{H}_j$, so that \begin{equation} \label{eqn:rhoj} \begin{split} &\rho_j^*(x_{j-1})=x_j \quad\text{for $j>2$, while}\\ &\rho_j^*(x_1)=x_1\quad \text{by the commutativity of \eqref{eqn:suyoung_tower}.} \end{split} \end{equation} \begin{lemma} \label{lemm:mod2} Square zero primitive elements in $H^2(\mathcal{H}_n)$ are \[ \text{$\pm x_1$ and $\pm(2x_j-x_1)$ for $j>1$.} \] In particular, their mod 2 reductions are equal to the mod 2 reduction of $x_1$. 
\end{lemma} \begin{proof} Since $\alpha_1=0$ and $\alpha_j=x_1$ for $j>1$ in \eqref{eqn:Hn}, the lemma is an immediate consequence of Corollary~\ref{corollary: squared zero elements}. \end{proof} Note that the mod 2 reduction of a square zero element of $H^2(\mathcal{H}_n)$ is either zero or equal to the mod 2 reduction of $x_1$ by Lemma~\ref{lemm:mod2}. \begin{lemma} \label{lemma:bundle over H_n} If $\alpha$ is a square zero element in $H^2(\mathcal{H}_n)$, then \[ P(\underline{\mathbb{C}}\oplus\gamma^\alpha)\cong \begin{cases} P(\underline{\mathbb{C}}\oplus\underline{\mathbb{C}})=\mathcal{H}_n\times \mathcal{H}_1 \quad &\text{if $\alpha=0$ in $H^2(\mathcal{H}_n)\otimes\mathbb{Z}/2$,}\\ P(\underline{\mathbb{C}}\oplus\gamma^{x_1})=\mathcal{H}_{n+1} \quad&\text{if $\alpha=x_1$ in $H^2(\mathcal{H}_n)\otimes\mathbb{Z}/2$,} \end{cases} \] as bundles over $\mathcal{H}_n$. \end{lemma} \begin{proof} By Lemma~\ref{lemm:mod2}, $\alpha$ is either $ax_1$ or $a(2x_j-x_1)$ for $j>1$, where $a$ is an integer. Thus it suffices to prove \begin{enumerate} \item \label{item:1} $P(\gamma^{ax_1}\oplus \underline{\mathbb{C}})\cong P(\gamma^{(a+2b)x_1} \oplus \underline{\mathbb{C}})$ as bundles for any $b \in \mathbb{Z}$, \item \label{item:2} $P(\gamma^{a( 2x_j-x_1)} \oplus \underline{\mathbb{C}})\cong P(\gamma^{-ax_1} \oplus \underline{\mathbb{C}})$ as bundles for any $j>1$. \end{enumerate} We first prove (1). By Lemma~\ref{lemm:line} we have \begin{equation*} P(\gamma^{ax_1}\oplus \underline{\mathbb{C}})\cong P((\gamma^{ax_1}\oplus \underline{\mathbb{C}})\otimes \gamma^{bx_1})=P(\gamma^{(a+b)x_1}\oplus\gamma^{bx_1})\quad\text{as bundles}. \end{equation*} Therefore it suffices to prove \begin{equation} \label{eqn:(1)} P(\gamma^{(a+b)x_1}\oplus\gamma^{bx_1})\cong P(\gamma^{(a+2b)x_1}\oplus\underline{\mathbb{C}})\quad\text{as bundles}. 
\end{equation} All line bundles involved in \eqref{eqn:(1)} are pullbacks of line bundles over $\mathcal{H}_1$ by a composition of the projections $\pi_i$'s in the tower \eqref{eqn:rhon}. Therefore it suffices to prove \eqref{eqn:(1)} when the base space is $\mathcal{H}_1$. But then the two vector bundles $\gamma^{(a+b)x_1}\oplus\gamma^{bx_1}$ and $\gamma^{(a+2b)x_1}\oplus\underline{\mathbb{C}}$ in \eqref{eqn:(1)} are isomorphic because their total Chern classes are the same and, as is well-known, complex vector bundles over $\mathcal{H}_1=\mathbb{C}P^1$ are classified by their total Chern classes. The proof of (2) is similar to that of (1). By Lemma~\ref{lemm:line} we have \begin{equation*} P(\gamma^{a(2x_j-x_1)}\oplus \underline{\mathbb{C}})\cong P((\gamma^{a(2x_j-x_1)}\oplus \underline{\mathbb{C}})\otimes \gamma^{-ax_j})=P(\gamma^{a(x_j-x_1)}\oplus\gamma^{-ax_j}). \end{equation*} Therefore it suffices to prove \begin{equation} \label{eqn:(2)} P(\gamma^{a(x_j-x_1)}\oplus\gamma^{-ax_j})\cong P(\gamma^{-ax_1}\oplus\underline{\mathbb{C}})\quad\text{as bundles}. \end{equation} As remarked at \eqref{eqn:rhoj}, $\rho_i\colon \mathcal{H}_i\to \mathcal{H}_{i-1}$ for $i>2$ is a bundle map and pulls back the tautological line bundle over $\mathcal{H}_{i-1}$ to that over $\mathcal{H}_i$, so that $\rho_i^*(x_{i-1})=x_i$. Therefore $\gamma^{x_j}$ is the pullback of $\gamma^{x_2}$ over $\mathcal{H}_2$ by a composition of the bundle maps $\rho_i$'s. Moreover $\rho_i^*(x_1)=x_1$ as noted before. Therefore it suffices to prove \eqref{eqn:(2)} when $j=2$ and the base space is $\mathcal{H}_2$. But then the two vector bundles $\gamma^{a(x_j-x_1)}\oplus\gamma^{-ax_j}$ and $\gamma^{-ax_1}\oplus\underline{\mathbb{C}}$ in \eqref{eqn:(2)} are isomorphic because their total Chern classes are the same and complex vector bundles of complex dimension two over $\mathcal{H}_2$ are classified by their total Chern classes.
In fact, the last assertion follows from the exact sequence $$ [\mathcal{H}_2,U/U(2)] \to [\mathcal{H}_2,BU(2)] \to [\mathcal{H}_2,BU]=K(\mathcal{H}_2) $$ induced from the fibration $U/U(2)\to BU(2)\to BU$. Here $[\mathcal{H}_2,U/U(2)]=0$ because $\mathcal{H}_2$ is of real dimension 4 and $U/U(2)$ is 4-connected, and $K(\mathcal{H}_2)$ is torsion free since $H^{odd}(\mathcal{H}_2)=0$, so that elements in $[\mathcal{H}_2,BU(2)]$ can be distinguished by their Chern classes. \end{proof} \section{Cohomological rigidity of $\mathbb{Q}$-trivial Bott manifolds} \label{section: cohomological rigidity of Q-trivial} For $n \in \mathbb{N}$, a finite sequence $\lambda = (\lambda_1, \ldots, \lambda_m )$ of positive integers is called a \emph{partition} of $n$ if $\sum_{1 \leq i \leq m} \lambda_i = n$ and $\lambda_1 \geq \cdots \geq \lambda_m \geq 1$. We define $\mathcal{H}_\lambda$ by $$ \mathcal{H}_\lambda := \mathcal{H}_{\lambda_1} \times \cdots \times \mathcal{H}_{\lambda_m}. $$ For instance, $(\mathbb{C}P^1)^n$ is $\mathcal{H}_{(1, \ldots, 1)}$ and $\mathcal{H}_n$ is $\mathcal{H}_{(n)}$. Note that \begin{equation} \label{eqn:tensor} \text{$H^\ast(\mathcal{H}_{\lambda})=H^\ast(\mathcal{H}_{\lambda_1}) \otimes \cdots \otimes H^\ast(\mathcal{H}_{\lambda_m})$.} \end{equation} \begin{theorem} \label{theorem:Q-trivial Bott manifold} \begin{enumerate} \item An $n$-stage $\mathbb{Q}$-trivial Bott manifold is diffeomorphic to $\mathcal{H}_{\lambda}$ for some partition $\lambda$ of $n$. \item Let $\lambda$ and $\lambda'$ be two partitions of $n$. If $H^\ast(\mathcal{H}_\lambda)$ is isomorphic to $H^\ast(\mathcal{H}_{\lambda'})$ as graded rings, then $\lambda = \lambda'$. \end{enumerate} \noindent Therefore, $\mathbb{Q}$-trivial Bott manifolds are distinguished by their cohomology rings with $\mathbb{Z}$-coefficients, and the number of diffeomorphism classes of $n$-stage $\mathbb{Q}$-trivial Bott manifolds is equal to the number of partitions of $n$.
\end{theorem} \begin{proof} (1) We prove the statement (1) by induction on $n$. Let $B_n$ be an $n$-stage Bott manifold in the tower \eqref{eqn:Bott tower} and suppose that $B_n$ is $\mathbb{Q}$-trivial. When $n=1$, the statement is trivial since $B_1=\mathbb{C} P^1=\mathcal{H}_1$. Assume the statement (1) holds for $(n-1)$-stage $\mathbb{Q}$-trivial Bott manifolds. Then, since $B_{n-1}$ is also $\mathbb{Q}$-trivial by Proposition~\ref{proposition:Q-trivial Bott manifold}, we may assume that $B_{n-1}=\mathcal{H}_\mu$ for some partition $\mu$ of $n-1$ by the induction assumption and that $B_n = P(\gamma^{\alpha_n} \oplus \underline{\mathbb{C}})$ with $\alpha_n \in H^2(\mathcal{H}_{\mu})$. We note that $\alpha_n^2 = 0$ by Proposition~\ref{proposition:Q-trivial Bott manifold} because $B_n$ is $\mathbb{Q}$-trivial. If $\alpha_n=0$, then $B_n=\mathcal{H}_\mu\times \mathcal{H}_1$ and the theorem holds in this case. Suppose $\alpha_n\not=0$. Then $\alpha_n$ must sit in $H^2(\mathcal{H}_{\mu_j})$ for some component $\mu_j$ of the partition $\mu$ in \eqref{eqn:tensor} with $\lambda$ replaced by $\mu$, because otherwise $\alpha_n^2$ cannot vanish. Therefore the line bundle $\gamma^{\alpha_n}$ over $\mathcal{H}_\mu$ can be obtained by pulling back a line bundle over $\mathcal{H}_{\mu_j}$. It follows that $B_n$ is diffeomorphic to \[ P(\gamma^{\alpha_n}\oplus\underline{\mathbb{C}})\times \prod_{i\not=j}\mathcal{H}_{\mu_i} \] where $\gamma^{\alpha_n}$ is regarded as a line bundle over $\mathcal{H}_{\mu_j}$ and $\mu_i$ runs over all components of $\mu$ different from $\mu_j$. Then the statement (1) follows from Lemma~\ref{lemma:bundle over H_n}.
(2) Any (non-zero) square zero element in $H^2(\mathcal{H}_\lambda)$ sits in $H^2(\mathcal{H}_{\lambda_i})$ for some component $\lambda_i$ of $\lambda$ as noted above, and it follows from Lemma~\ref{lemm:mod2} that the mod 2 reductions of a square zero primitive element in $H^2(\mathcal{H}_{\lambda_i})$ and that in $H^2(\mathcal{H}_{\lambda_j})$ are the same if and only if $i=j$. Therefore, if $\varphi : H^\ast(\mathcal{H}_{\lambda}) \to H^\ast(\mathcal{H}_{\lambda'})$ is a graded ring homomorphism, then all square zero primitive elements in $H^2(\mathcal{H}_{\lambda_i})$ are mapped by $\varphi$ into $H^2(\mathcal{H}_{\lambda'_j})$ for some component $\lambda'_j$ of $\lambda'$. Since the square zero primitive elements in $H^2(\mathcal{H}_{\lambda_i})$ generate $H^*(\mathcal{H}_{\lambda_i})$ over $\mathbb{Q}$, this implies that $\varphi(H^*(\mathcal{H}_{\lambda_i}))$ is contained in $H^*(\mathcal{H}_{\lambda'_j})$. If $\varphi$ is in particular an isomorphism, then this together with \eqref{eqn:tensor} implies the statement (2). \end{proof} \begin{remark} One can show that the $\mathcal{H}_\lambda$'s, in other words $\mathbb{Q}$-trivial Bott manifolds, can be distinguished by their cohomology rings even with $\mathbb{Z}/2$- or $\mathbb{Z}_{(2)}$-coefficients. It is not true that all Bott manifolds can be distinguished by their cohomology rings with $\mathbb{Z}/2$-coefficients (there are such examples among 3-stage Bott manifolds, see \cite{Ch-Ma-Su-2010}), but it might be true with $\mathbb{Z}_{(2)}$-coefficients, see \cite{ch-su-pre}. \end{remark} \section{Automorphisms of $\mathbb{Q}$-trivial Bott manifolds} \label{sect:auto} By Theorem~\ref{theorem:Q-trivial Bott manifold} we may assume that an $n$-stage $\mathbb{Q}$-trivial Bott manifold is $\mathcal{H}_\lambda$ where $\lambda$ is a partition of $n$. In this section we shall study the group $\Aut(H^*(\mathcal{H}_\lambda))$ of graded ring automorphisms of $H^*(\mathcal{H}_\lambda)$ and prove the following.
\begin{theorem} \label{theo:diffeo} Any element of $\Aut(H^*(\mathcal{H}_\lambda))$ is induced from a diffeomorphism of $\mathcal{H}_\lambda$. \end{theorem} Since $\mathbb{Q}$-trivial Bott manifolds are distinguished by their cohomology rings by Theorem~\ref{theorem:Q-trivial Bott manifold}, the theorem above implies the following. \begin{corollary} \label{coro:diffeo} Any cohomology ring isomorphism between two $\mathbb{Q}$-trivial Bott manifolds is induced from a diffeomorphism. \end{corollary} The rest of this section is devoted to the proof of Theorem~\ref{theo:diffeo}. Remember that the square zero primitive elements in $H^2(\mathcal{H}_n)$ are $\pm x_1$ and $\pm(2x_j-x_1)$ for $j>1$ by Lemma~\ref{lemm:mod2}. \begin{lemma} An automorphism of $H^*(\mathcal{H}_n)$ permutes $\pm x_1$ and $\pm(2x_j-x_1)$ for $j>1$ up to sign. On the other hand, any permutation of $\pm x_1$ and $\pm(2x_j-x_1)$ for $j>1$ up to sign induces an automorphism of $H^*(\mathcal{H}_n)$. Therefore, $\Aut(H^*(\mathcal{H}_n))$ is isomorphic to a semi-direct product $(\mathbb{Z}/2)^n\rtimes \frak S_n$ where $\frak S_n$ denotes the symmetric group on $n$ letters and the action of $\frak S_n$ on $(\mathbb{Z}/2)^n$ is the natural permutation of factors of $(\mathbb{Z}/2)^n$. \end{lemma} \begin{proof} The first statement is obvious. Suppose that $\varphi$ is a permutation of $\pm x_1$ and $\pm(2x_j-x_1)$ for $j>1$ up to sign. Then $\varphi(x_1)=\pm x_1$ or $\pm(2x_k-x_1)$ for some $k>1$. In any case one can easily check that if we extend $\varphi$ linearly, then $\varphi(x_i)$ is integral (i.e., a linear combination of $x_\ell$'s over $\mathbb{Z}$) for any $i$. For instance, if \[ \varphi(x_1)=2x_k-x_1,\ \varphi(2x_i-x_1)=x_1,\ \varphi(2x_j-x_1)=-(2x_\ell-x_1) \text{ for $j\not=i$}, \] then a simple computation shows that \[ \varphi(x_i)=x_k\text{ and } \varphi(x_j)=x_k-x_\ell. \] Thus the linear extension of $\varphi$ defines an endomorphism of $H^2(\mathcal{H}_n)$. 
Moreover, one can also check that $\varphi(x_1)^2=0$ and $\varphi(x_j)^2=\varphi(x_1)\varphi(x_j)$ for $j>1$. This ensures that $\varphi$ extends to a graded ring endomorphism $\overline{\varphi}$ of $H^*(\mathcal{H}_n)$ since the ideal in \eqref{eqn:Hn} is generated by $x_1^2$ and $x_j^2-x_1x_j$ for $j>1$. Similarly, $\varphi^{-1}$ induces a graded ring endomorphism $\overline{\varphi^{-1}}$ of $H^*(\mathcal{H}_n)$ and clearly $\overline{\varphi^{-1}}$ gives the inverse of $\overline{\varphi}$, so $\overline{\varphi}$ is an automorphism of $H^*(\mathcal{H}_n)$. This proves the lemma. \end{proof} We write $\lambda=(d_1^{a_1},\dots,d_k^{a_k})$ where $d_1>\dots>d_k$ and $d_i^{a_i}$ denotes $a_i$ copies of $d_i$ for $i=1,\dots,k$. Then \[ H^*(\mathcal{H}_\lambda)=\bigotimes_{i=1}^k H^*(\mathcal{H}_{d_i})^{\otimes a_i}. \] The proof of (2) in Theorem~\ref{theorem:Q-trivial Bott manifold} shows that an automorphism of $H^*(\mathcal{H}_\lambda)$ maps the factors of $H^*(\mathcal{H}_{d_i})^{\otimes a_i}$ to themselves for each $i$, so that \begin{equation} \label{eqn:AutH} \Aut(H^*(\mathcal{H}_\lambda))=\prod_{i=1}^k\Aut(H^*(\mathcal{H}_{d_i})^{\otimes a_i})=\prod_{i=1}^k\big(\Aut(H^*(\mathcal{H}_{d_i}))^{a_i}\rtimes \frak S_{a_i}\big) \end{equation} where the action of $\frak S_{a_i}$ on $\Aut(H^*(\mathcal{H}_{d_i}))^{a_i}$ is the natural permutation of the factors of $\Aut(H^*(\mathcal{H}_{d_i}))^{a_i}$. A permutation of the factors of $\Aut(H^*(\mathcal{H}_{d_i}))^{a_i}$ is induced from a permutation of the factors of $\mathcal{H}_{d_i}^{a_i}$, which is a diffeomorphism, so it suffices to prove Theorem~\ref{theo:diffeo} when $\lambda=(n)$ by \eqref{eqn:AutH}. We first prove it when $n=2$. \begin{lemma} \label{lemm:H2} Any element of $\Aut(H^*(\mathcal{H}_2))$, which permutes $\pm x_1$ and $\pm(2x_2-x_1)$ up to sign, is induced from a diffeomorphism of $\mathcal{H}_2$.
\end{lemma} \begin{proof} As remarked in Example~\ref{example:Hirzebruch surface}, $\mathcal{H}_2=\Sigma_1$ is diffeomorphic to $\mathbb{C}P^2\#\overline{\mathbb{C}P^2}$. Let $u$ and $v$ be the elements of $H_2( \mathbb{C}P^2\#\overline{\mathbb{C}P^2})$ represented by a canonical submanifold $\mathbb{C}P^1$ in $\mathbb{C}P^2$ and $\overline{\mathbb{C}P^2}$ respectively. They form a basis of $H_2( \mathbb{C}P^2\#\overline{\mathbb{C}P^2})$. (Through Poincar\'e duality, $u$ and $v$ correspond to $x_2$ and $x_1-x_2$ up to sign, since the self-intersection numbers of $u$ and $v$ are $\pm 1$ while the squares of $x_2$ and $x_1-x_2$ are a cofundamental class $x_1x_2$ up to sign.) It suffices to show that any permutation of $\pm u$ and $\pm v$ up to sign can be represented by a diffeomorphism of $\mathbb{C}P^2\#\overline{\mathbb{C}P^2}=\mathcal{H}_2$, since the number of those permutations is 8, which agrees with the number of elements in $\Aut(H^*(\mathcal{H}_2))\cong (\mathbb{Z}/2)^2\rtimes \frak S_2$. We consider two involutions $s$ and $t$ on $\mathbb{C}P^2$ defined by \[ s\colon [z_1,z_2,z_3]\mapsto [\bar{z}_1,\bar{z}_2,\bar{z}_3],\qquad t\colon [z_1,z_2,z_3]\mapsto [z_1,z_2,-z_3] \] where $[z_1,z_2,z_3]$ denotes the homogeneous coordinates of $\mathbb{C}P^2$ and $\bar{z}$ denotes the complex conjugate of a complex number $z$. Observe that \begin{enumerate} \item $s$ leaves the submanifold $\mathbb{C}P^1=\{z_3=0\}$ of $\mathbb{C}P^2$ invariant, reverses an orientation on the $\mathbb{C}P^1$, and the fixed point set of $s$ is $\mathbb{R} P^2$, \item the induced action of $t$ on $H_\ast(\mathbb{C}P^2)$ is trivial and the fixed point set of $t$ is the disjoint union of $\mathbb{C}P^1=\{z_3=0\}$ and the point $[0,0,1]$. \end{enumerate} \noindent {\bf Type 1.} We consider the involution $s$ on both $\mathbb{C}P^2$ and $\overline{\mathbb{C}P^2}$.
Choose a point from the fixed point set $\mathbb{R} P^2$ in $\mathbb{C}P^2$ and $\overline{\mathbb{C}P^2}$ respectively and take the equivariant connected sum of $\mathbb{C}P^2$ and $\overline{\mathbb{C}P^2}$ around the chosen points. Then the resulting involution on $\mathbb{C}P^2\#\overline{\mathbb{C}P^2}$ sends $(u,v)$ to $(-u,-v)$. \noindent {\bf Type 2.} We consider the involution $s$ on $\mathbb{C}P^2$ and $t$ on $\overline{\mathbb{C}P^2}$. Choose a point from the fixed point set $\mathbb{R} P^2$ in $\mathbb{C}P^2$ and a point from the fixed point set $\mathbb{C}P^1$ in $\overline{\mathbb{C}P^2}$ and take the equivariant connected sum of $\mathbb{C}P^2$ and $\overline{\mathbb{C}P^2}$ around the chosen points. Then the resulting involution on $\mathbb{C}P^2\#\overline{\mathbb{C}P^2}$ sends $(u,v)$ to $(-u,v)$. \noindent {\bf Type 3.} $\mathbb{C}P^2\#\overline{\mathbb{C}P^2}$ is obtained by removing an open disk $D$ from $\mathbb{C}P^2$ and $\overline{\mathbb{C}P^2}$ respectively and gluing the results together along the boundary $S^3$ via the identity map, so that it admits a reflection with respect to the $S^3$, which maps $\mathbb{C}P^2\backslash D$ to $\overline{\mathbb{C}P^2}\backslash D$. This reflection sends $(u,v)$ to $(v,u)$. Combining the diffeomorphisms of the three types above, one can realize any element of $\Aut(H^*(\mathcal{H}_2))$ by a diffeomorphism of $\mathcal{H}_2$. \end{proof} We shall prove that any element of $\Aut(H^*(\mathcal{H}_n))$ is induced from a diffeomorphism of $\mathcal{H}_n$ for any $n$ by induction on $n$, so that the proof of Theorem~\ref{theo:diffeo} will be completed. For that we prepare three lemmas. We regard $H^*(\mathcal{H}_{j})$ for $j<n$ as a subring of $H^*(\mathcal{H}_n)$ as usual and remember that $\pm x_1$ and $\pm(2x_j-x_1)$ for $j>1$ are all the square zero primitive elements in $H^2(\mathcal{H}_n)$. \begin{lemma} \label{lemm:ext} Let $\psi$ be an element of $\Aut(H^*(\mathcal{H}_{j}))$ for $j<n$.
If $\psi$ is induced from a diffeomorphism of $\mathcal{H}_{j}$, then there is a diffeomorphism of $\mathcal{H}_n$ whose induced automorphism of $H^*(\mathcal{H}_n)$ preserves the subring $H^*(\mathcal{H}_{j})$ and agrees with the given $\psi$ on $H^*(\mathcal{H}_j)$. \end{lemma} \begin{proof} Let $f_j$ be a diffeomorphism of $\mathcal{H}_j$ whose induced automorphism of $H^*(\mathcal{H}_j)$ is $\psi$. The pullback of the bundle \begin{equation} \label{eqn:bundle} \text{$\mathcal{H}_{j+1}=P(\underline{\mathbb{C}}\oplus \gamma^{\alpha_{j+1}})\stackrel{\pi_{j+1}}\longrightarrow \mathcal{H}_j$} \end{equation} by $f_j$ is of the form $P(\underline{\mathbb{C}}\oplus\gamma^{f_j^*(\alpha_{j+1})})\to \mathcal{H}_j$, but this is isomorphic to \eqref{eqn:bundle} by Lemma~\ref{lemma:bundle over H_n} since $\alpha_{j+1}^2=0=f_j^*(\alpha_{j+1})^2$ and the mod 2 reductions of $\alpha_{j+1}$ and $f_j^*(\alpha_{j+1})$ are the same by Lemma~\ref{lemm:mod2}. It follows that there is a bundle automorphism $f_{j+1}$ of \eqref{eqn:bundle} which covers $f_j$. Since $f_{j+1}$ covers $f_j$, the automorphism $f_{j+1}^*$ of $H^*(\mathcal{H}_{j+1})$ induced by $f_{j+1}$ preserves the subring $H^*(\mathcal{H}_j)$ and agrees with $f_j^*$ on it. Repeating this argument for $f_{j+1}$ in place of $f_j$, we get a diffeomorphism $f_{j+2}$ of $\mathcal{H}_{j+2}$ which covers $f_{j+1}$ and so on. Then the last diffeomorphism $f_n$ of $\mathcal{H}_n$ is the desired one. \end{proof} \begin{lemma} \label{lemm:xn} There is a diffeomorphism of $\mathcal{H}_n$ whose induced automorphism of $H^*(\mathcal{H}_n)$ is the identity on the subring $H^*(\mathcal{H}_{n-1})$ and maps $x_n$ to $-x_n+x_1$ (equivalently, maps $2x_n-x_1$ to $-(2x_n-x_1)$).
\end{lemma} \begin{proof} Since the dual bundle of $\underline{\mathbb{C}}\oplus\gamma^{x_1}$ is isomorphic to $\underline{\mathbb{C}}\oplus\gamma^{-x_1}$, the proof of Lemma~\ref{lemm:line} shows that we have a bundle map \[ \text{$b\colon \mathcal{H}_n=P(\underline{\mathbb{C}}\oplus \gamma^{x_1})\to P(\underline{\mathbb{C}}\oplus\gamma^{-x_1})$} \] which covers the identity map on $\mathcal{H}_{n-1}$. The pullback of the tautological line bundle $\eta_-$ over $P(\underline{\mathbb{C}}\oplus\gamma^{-x_1})$ by $b$ is complex conjugate to the tautological line bundle $\eta_+$ over $P(\underline{\mathbb{C}}\oplus\gamma^{x_1})$ (see Remark~\ref{rema:line}); so we obtain \begin{equation} \label{eqn:c1} b^*(x)=-x_n \end{equation} where $x=c_1(\eta_-)$ and $x_n=c_1(\eta_+)$ by the definition of $x_n$. On the other hand, the proof of Lemma~\ref{lemm:line} shows that we have a bundle isomorphism \[ c\colon P(\underline{\mathbb{C}}\oplus\gamma^{-x_1})\to P((\underline{\mathbb{C}}\oplus\gamma^{-x_1})\otimes\gamma^{x_1})=P(\gamma^{x_1}\oplus\underline{\mathbb{C}})=\mathcal{H}_n \] which preserves the complex structures on each fiber. Therefore it induces a \emph{complex} vector bundle isomorphism $T_fP(\underline{\mathbb{C}}\oplus\gamma^{-x_1})\to T_fP(\gamma^{x_1}\oplus\underline{\mathbb{C}})$ between their tangent bundles along the fibers. According to the Borel-Hirzebruch formula \eqref{eqn:tf}, their first Chern classes are respectively $-2x-x_1$ and $-2x_n+x_1$, so \begin{equation} \label{eqn:c2} c^*(-2x_n+x_1)=-2x-x_1. \end{equation} Since the map $c$ covers the identity map on $\mathcal{H}_{n-1}$, $c^*(x_1)=x_1$. It follows from \eqref{eqn:c2} that $c^*(x_n)=x+x_1$. This together with \eqref{eqn:c1} shows that \begin{equation} \label{eqn:bc} b^*(c^*(x_n))=-x_n+x_1 \end{equation} because $b^*(x_1)=x_1$ which follows from the fact that $b$ covers the identity map on $\mathcal{H}_{n-1}$. 
The identity \eqref{eqn:bc} shows that the composition $c\circ b$ is the desired diffeomorphism. \end{proof} \begin{lemma} \label{lemm:xnxj} There is a diffeomorphism of $\mathcal{H}_n$ whose induced automorphism of $H^*(\mathcal{H}_n)$ interchanges $x_i$ and $x_j$ for $i,j>1$ and fixes $x_k$ for $k\not=i,j$. \end{lemma} \begin{proof} It suffices to show that there is a diffeomorphism $g_i$ of $\mathcal{H}_n$ for each $i>1$ whose induced automorphism of $H^*(\mathcal{H}_n)$ interchanges $x_i$ and $x_{i+1}$ and fixes $x_k$ for $k\not=i,i+1$, because the desired diffeomorphism can be obtained by composing those diffeomorphisms. Remember that $\mathcal{H}_{i+1}$ is obtained as the fiber product \[ \begin{CD} \mathcal{H}_{i+1} @>\rho_{i+1}>> \mathcal{H}_i\\ @VV\pi_{i+1}V @VV\pi_iV\\ \mathcal{H}_i @>\pi_i>> \mathcal{H}_{i-1}. \end{CD} \] Permuting the coordinates of $\mathcal{H}_i\times \mathcal{H}_i$ preserves the subset $\mathcal{H}_{i+1}$ and defines a diffeomorphism $\tau_{i+1}$ of $\mathcal{H}_{i+1}$. One notes that $\tau_{i+1}^*(x_i)=\rho_{i+1}^*(x_i)=x_{i+1}$ and $\tau_{i+1}^*(x_k)=x_k$ for $k<i$. Since $\pi_{i+1}\circ \tau_{i+1}=\pi_{i+1}$, the diffeomorphism $\tau_{i+1}$ naturally extends to a diffeomorphism $\tau_{i+2}$ of $\mathcal{H}_{i+2}$ and finally extends to a diffeomorphism $g_i$ of $\mathcal{H}_n$ because of \eqref{eqn:suyoung_tower}. Since $\tau_{i+1}^*(x_1)=x_1$, the pullback of the line bundle $\gamma^{x_1}$ over $\mathcal{H}_{i+1}$ is isomorphic to $\gamma^{x_1}$ itself. This implies that $\tau_{i+2}^*(x_{i+2})=x_{i+2}$ because $x_{i+2}$ is the first Chern class of the tautological line bundle over $P(\underline{\mathbb{C}}\oplus \gamma^{x_1})$. Therefore $g_i^*$ fixes $x_{i+2}$ since $g_i$ is an extension of $\tau_{i+2}$. Similarly, $g_i^*$ fixes $x_k$ for $k>i+1$. Thus $g_i$ is the desired diffeomorphism. 
\end{proof} \begin{remark} As remarked at \eqref{eqn:quotient}, one can regard $\mathcal{H}_n$ as the quotient of $(S^3)^n$ by a free action of $(S^1)^n$ associated with the matrix \eqref{eqn:matrixHn}. Then interchanging the $i$-th factor and the $j$-th factor of $(S^3)^n$ produces a desired diffeomorphism in Lemma~\ref{lemm:xnxj}. \end{remark} Now we shall prove that any element of $\Aut(H^*(\mathcal{H}_n))$ is induced from a diffeomorphism of $\mathcal{H}_n$ for any $n$ by induction on $n$. This claim is established for $n=2$ by Lemma~\ref{lemm:H2}. Suppose the claim holds for $n-1$. Let $\varphi$ be an element of $\Aut(H^*(\mathcal{H}_n))$. Then $\varphi$ permutes square zero primitive elements $\pm x_1, \pm(2x_j-x_1)$ $(j>1)$ up to sign. We distinguish three cases. {\bf Case 1.} The case where $\varphi(2x_n-x_1)=\pm(2x_n-x_1)$. In this case $\varphi$ preserves the subring $H^*(\mathcal{H}_{n-1})$ and let $\psi$ be the restriction of $\varphi$ to $H^*(\mathcal{H}_{n-1})$. By Lemma~\ref{lemm:ext} there is a diffeomorphism $f$ of $\mathcal{H}_n$ whose induced automorphism $f^*$ of $H^*(\mathcal{H}_n)$ agrees with $\psi$ on $H^*(\mathcal{H}_{n-1})$. Then the composition $(f^{-1})^*\circ \varphi$ is the identity on $H^*(\mathcal{H}_{n-1})$, so we may assume that $\varphi$ is the identity on $H^*(\mathcal{H}_{n-1})$. If $\varphi(2x_n-x_1)=2x_n-x_1$, then $\varphi$ is the identity so that it is induced from the identity diffeomorphism of $\mathcal{H}_n$. If $\varphi(2x_n-x_1)=-(2x_n-x_1)$, then $\varphi$ is induced from a diffeomorphism of $\mathcal{H}_n$ by Lemma~\ref{lemm:xn}. {\bf Case 2.} The case where $\varphi(2x_n-x_1)=\pm(2x_j-x_1)$ for some $1<j<n$. By Lemma~\ref{lemm:xnxj} there is a diffeomorphism $g$ of $\mathcal{H}_n$ whose induced automorphism $g^*$ of $H^*(\mathcal{H}_n)$ interchanges $x_j$ and $x_n$ and fixes $x_k$ for $k\not=j,n$. 
Therefore the composition $g^*\circ \varphi$ is an automorphism treated in Case 1, so that $g^*\circ \varphi$ is induced from a diffeomorphism of $\mathcal{H}_n$ by Case 1 and hence so is $\varphi$. {\bf Case 3.} The case where $\varphi(2x_n-x_1)=\pm x_1$. By Lemma~\ref{lemm:H2} and Lemma~\ref{lemm:ext}, there is a diffeomorphism $h$ of $\mathcal{H}_n$ whose induced automorphism $h^*$ of $H^*(\mathcal{H}_n)$ maps $x_1$ to $2x_2-x_1$. Therefore the composition $h^*\circ \varphi$ is an automorphism treated in Case 2, so that it is induced from a diffeomorphism of $\mathcal{H}_n$ and hence so is $\varphi$. This completes the proof of the desired claim and hence Theorem~\ref{theo:diffeo}. \medskip {\bf Concluding remark.} The cohomological rigidity problem asks whether two toric manifolds are diffeomorphic (or homeomorphic) if their cohomology rings are isomorphic. More strongly, it is asked in \cite{Ma-Su-2008} whether any cohomology ring isomorphism between two toric manifolds is induced from a diffeomorphism. We may call this problem the \emph{strong cohomological rigidity problem} for toric manifolds. Corollary~\ref{coro:diffeo} gives supporting evidence for the problem, and the authors do not know of any counterexample to it. \bigskip \bibliographystyle{amsplain}
https://arxiv.org/abs/1911.01151
Successive shortest paths in complete graphs with random edge weights
Consider a complete graph $K_n$ with edge weights drawn independently from a uniform distribution $U(0,1)$. The weight of the shortest (minimum-weight) path $P_1$ between two given vertices is known to be $\ln n / n$, asymptotically. Define a second-shortest path $P_2$ to be the shortest path edge-disjoint from $P_1$, and consider more generally the shortest path $P_k$ edge-disjoint from all earlier paths. We show that the cost $X_k$ of $P_k$ converges in probability to $2k/n+\ln n/n$ uniformly for all $k \leq n-1$. We show analogous results when the edge weights are drawn from an exponential distribution. The same results characterise the collectively cheapest $k$ edge-disjoint paths, i.e., a minimum-cost $k$-flow. We also obtain the expectation of $X_k$ conditioned on the existence of $P_k$.
\section{Introduction} It is a standard problem to find the shortest $s$--$t$\xspace path in a graph, i.e., the cheapest path $P_1$ between specified vertices $s$ and $t$, and its cost $X_1$, where the cost of a path is the sum of the costs of its edges. We will use the terms ``cost'' and ``weight'' interchangeably, and reserve ``length'' for the number of edges in a path. Consider the complete graph $G=K_n$ with each edge $\set{u,v}$ having weight $w(u,v)$, where the $w(u,v)$ are i.i.d.\ random variables with exponential distribution $\Exp(1)$ or uniform distribution $U(0,1)$ (we consider both versions). In this random setting, a well-known result of Janson~\cite{Janson123} is that as $n \to \infty$, \begin{equation}\label{eq:svante-X1} \frac{X_1}{\ln n / n } \pto 1 . \end{equation} We define the second cheapest path $P_2$, with cost $X_2$, to be the cheapest $s$--$t$\xspace path edge-disjoint from $P_1$, and in general define $P_k$, with cost $X_k$, to be the cheapest $s$--$t$\xspace path edge-disjoint from $P_1 \cup \cdots \cup P_{k-1}$, provided such a path exists. We also think of this as finding path $P_k$ after the preceding paths' edges have been removed. Our question is how the costs $X_k$ behave in the limit as $n \to \infty$ (this limit is implicit throughout). Our main result is the following. \begin{theorem}\label{Tmain} In the complete graph $K_n$ with i.i.d\xperiod uniform $U(0,1)$ edge weights, with $X_k$ the cost of the $k$th cheapest path, \begin{align} \frac{X_k}{2k/n + \ln n / n} & \pto 1 \label{eq:UnifMain} \end{align} uniformly for all $k \leq n-1$. That is, for any $\varepsilon>0$, asymptotically almost surely, for every $k =1,\ldots,n-1$, \begin{align}\label{Xkbounds} 1-\varepsilon &\leq \frac{X_k}{2k/n + \ln n / n} \leq 1+\varepsilon . \end{align} \end{theorem} Naturally, with $k=1$, \cref{eq:UnifMain} recovers Janson's result \cref{eq:svante-X1}, since $2/n=o(\ln n/n)$. 
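The successive paths $P_1,P_2,\ldots$ and their costs $X_k$ are easy to generate by direct simulation, which makes a convenient sanity check for small $n$. The sketch below (illustrative only, playing no role in the proofs; the vertex labels $s=0$, $t=n-1$ and all parameter values are our arbitrary choices) repeatedly runs Dijkstra's algorithm and deletes the edges of each path found:

```python
import heapq
import random

def successive_shortest_paths(n, k_max, seed=0):
    """Costs X_1..X_{k_max} of successive edge-disjoint shortest s-t
    paths in K_n with i.i.d. U(0,1) edge weights (s = 0, t = n-1)."""
    rng = random.Random(seed)
    # Undirected edge weights, keyed by (min, max); adjacency as sets.
    w = {(u, v): rng.random() for u in range(n) for v in range(u + 1, n)}
    adj = {v: set(range(n)) - {v} for v in range(n)}

    def weight(u, v):
        return w[min(u, v), max(u, v)]

    def dijkstra(s, t):
        dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                break                      # dist[t] is final when t pops
            if d > dist.get(u, float("inf")):
                continue                   # stale heap entry
            for v in adj[u]:
                nd = d + weight(u, v)
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        if t not in dist:
            return None, None
        path, v = [], t
        while v != s:                      # walk predecessors back to s
            path.append((prev[v], v))
            v = prev[v]
        return dist[t], path

    costs = []
    for _ in range(k_max):
        c, path = dijkstra(0, n - 1)
        if c is None:
            break
        costs.append(c)
        for u, v in path:                  # delete the path's edges
            adj[u].discard(v)
            adj[v].discard(u)
    return costs
```

Since each $P_k$ is the cheapest path over a shrinking feasible set, the simulated costs are nondecreasing in $k$, and for small $k$ one can check they are of order $\ln n/n$, in line with \cref{eq:UnifMain}.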
As discussed shortly, in contrast to many cases, the result for the uniform distribution does not extend immediately to all distributions with positive density at 0. However, we have a corresponding result for exponentially distributed edge weights. Given an edge-weight distribution, let $W\os k$ be the (random) weight of the $k$th cheapest edge out of a vertex (the $k$th order statistic of $n-1$ edge weights). \begin{theorem}\label{Texp} In the complete graph $K_n$ with i.i.d\xperiod exponential edge weights with mean 1, \begin{align} \frac{X_k}{2 \E W\os k + \ln n / n} & \pto 1 \label{eq:ExpMain} \end{align} uniformly for all $k \leq n-1$. \end{theorem} \noindent We give the guiding intuition behind the formula \cref{eq:ExpMain} in \cref{sec:paths-intution}. Note that $\E W\os k =\sum_{i=1}^k \tfrac{1}{n-i}$ in the exponential case (see e.g. \cref{lemma:edge-orderstat}). In the uniform case, $\E W\os k = k/n$, so \cref{eq:UnifMain} in \cref{Tmain} can also be written as \cref{eq:ExpMain}. Rather than finding the $k$ successive cheapest paths, we may alternatively wish to find the $k$ edge-disjoint paths of \emph{collective} minimum cost. Equivalently, where every edge of $G$ has capacity~$1$, we may be interested in the minimum-cost $k$-flow from $s$ to $t$ in $G$. The following remark shows that this problem leads to essentially the same costs. (The analogous ``collective'' problem for minimum spanning trees is solved in \cite{FrJo}, and \cite{JaSoMST} shows that for MSTs, the ``successive'' version leads to strictly larger costs.) \begin{remark}\label{rmk:pathsFk} In the complete graph $K_n$ with i.i.d.\ edge weights with distribution $U(0,1)$ or exponential with mean 1, the minimum-cost $k$-flow has cost $F_k$ satisfying \begin{align} \label{kflow} \frac{F_k}{ \sum_{i=1}^k (2 \E W_{(i)}+ \ln n/n) } \pto 1 \end{align} uniformly for all $k \leq n-1$. \end{remark} As in \cref{Xkbounds}, the statement consists of high-probability upper and lower bounds. 
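As an aside, the harmonic-sum formula for $\E W\os k$ in the exponential model can be checked numerically: the $k$th order statistic of $m=n-1$ i.i.d.\ $\Exp(1)$ variables has survival function $\Pr(W\os k>x)=\sum_{j=0}^{k-1}\binom{m}{j}(1-e^{-x})^j e^{-(m-j)x}$, and integrating it recovers $\sum_{j=0}^{k-1}1/(m-j)$. A small sketch (the values $m=20$, $k=5$ are arbitrary):

```python
import math

def orderstat_mean_quadrature(m, k, upper=30.0, step=1e-3):
    """E[k-th smallest of m i.i.d. Exp(1) variables], computed as the
    integral of the exact survival function
    P(W_(k) > x) = P(Bin(m, 1 - e^{-x}) < k) by the trapezoid rule."""
    def surv(x):
        p = 1.0 - math.exp(-x)            # P(one variable <= x)
        return sum(math.comb(m, j) * p**j * (1.0 - p)**(m - j)
                   for j in range(k))
    total, x, prev = 0.0, 0.0, surv(0.0)
    while x < upper:
        x += step
        cur = surv(x)
        total += 0.5 * (prev + cur) * step
        prev = cur
    return total

m, k = 20, 5
harmonic = sum(1.0 / (m - j) for j in range(k))   # 1/m + ... + 1/(m-k+1)
print(harmonic, orderstat_mean_quadrature(m, k))  # the two agree closely
```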
The upper bounds here, for the two models, follow immediately from the upper bounds of \cref{eq:UnifMain} and \cref{eq:ExpMain}. The lower bounds follow from the lower bound on $S_k \coloneqq \sum_{i=1}^k X_i$ (see \cref{Skdef}) in \cref{Sklower} and its analogue for the exponential case, as those bounds hold for any set of $k$ edge-disjoint paths. (The main work in \cref{lowerbound}, not needed here, is to extract lower bounds on $X_k$ from the lower bounds on $S_k$.) \begin{remark}\label{rmk:existence} $P_k$ is always defined for all $k \leq n/2$, but, at least for $n$ even, may be undefined for all $k>n/2$. \end{remark} \begin{proof} There are $n-2$ length-2 $s$--$t$\xspace paths. Any path $P_k$ can destroy (share an edge with) at most two such paths (since $P_k$ uses just one edge incident to each of $s$ and $t$). Also, the single-edge path \ensuremath{\set{s,t}}\xspace is destroyed only by the path $P_k$ consisting of just this edge. So, for $P_1,\ldots,P_{k}$ to destroy all length-1 and length-2 paths requires $k \geq (n-2)/2+1=n/2$, so for $k \leq n/2$, certainly path $P_k$ exists. Conversely, a construction described in 1892 by Lucas \cite[pp.~162--164]{Lucas}, which he attributes to Walecki, shows that a complete graph $K_{2r}$ can be decomposed into $r$ edge-disjoint Hamilton paths whose $2r$ terminals are all distinct. For $n$ even, decompose $G=K_n \setminus \set{s,t}$ in this way, then link $s$ to one ``start'' terminal of each such path and $t$ to the other ``end'' terminal, giving $(n-2)/2$ edge-disjoint $s$--$t$\xspace paths. The edge \ensuremath{\set{s,t}}\xspace gives another path, for $n/2$ paths in all. The only edges not used by these paths are a star from $s$ to the Hamilton paths' end terminals, and another star from $t$ to their start terminals, and as there are no other unused edges to connect these two stars, there is no further $s$--$t$\xspace path.
With nonzero probability, the edge weights are such that $P_1,\ldots,P_{n/2}$ are these $n/2$ paths, so that $P_{n/2+1}$ does not exist. \end{proof} \cref{rmk:existence} implies that, at least for $n$ even, $\E[X_k]$ is undefined for $k > n/2$. The following theorem establishes $\E X_k$ for $k \leq n/2$, and for all $k \leq n-1$, gives the expectation conditioned on the (high-probability) event that $P_k$ exists. \begin{theorem}\label{thm:expectation} In both the uniform and exponential models, for $k \leq n-1$, a.a.s.\ $P_k$ exists, and \begin{align} \E[X_k \mid P_k \textnormal{ exists}] &=(1+o(1))(2\E W\os k + \ln n / n), \label{EX} \end{align} uniformly in $k$. \end{theorem} For $k \leq n/2$, by \cref{rmk:existence} the conditioning is null, so it is immediate from \cref{thm:expectation} that $E[X_k]=(1+o(1))(2\E W\os k + \ln n / n)$. \subsection{Intuition}\label{sec:paths-intution} The intuitive picture is that path $P_k$ should use the $k$th cheapest edges out of $s$ and $t$, whose costs are denoted $W\os k^s$ and $W\os k^t$ respectively. Then, if we ignore previous paths' use of other edges in $G\setminus \set{s,t}$, by \cref{eq:svante-X1} the opposite endpoints of these two edges should be connected by a path of cost about $\ln n/n$. This suggests that $X_k \leq W\os k^s+W\os k^t + \ln n/n$, and this is our guiding intuition. Obviously, the path $P_k$ does not have to use the $k$th cheapest edge, its middle section may cost more or less than $\ln n / n$, and as earlier paths use up edges, the costs of these middle sections may rise. 
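As an aside, the Walecki construction used in the proof of \cref{rmk:existence} is easy to realise explicitly. One standard description (among several equivalent ones) of the $j$th Hamilton path of $K_{2r}$, for $j=0,\ldots,r-1$, is the zigzag $j,\,j+1,\,j-1,\,j+2,\,j-2,\,\ldots,\,j+r$ (indices mod $2r$). The sketch below builds these paths and the $n/2$ edge-disjoint $s$--$t$ paths of the proof (the labels \texttt{"S"}, \texttt{"T"} are ours):

```python
def walecki_paths(r):
    """Decompose K_{2r} into r edge-disjoint Hamilton paths
    (Walecki's construction); the 2r path endpoints are distinct."""
    paths = []
    for j in range(r):
        seq = [j]                       # zigzag: j, j+1, j-1, j+2, j-2, ...
        for i in range(1, r):
            seq.append((j + i) % (2 * r))
            seq.append((j - i) % (2 * r))
        seq.append((j + r) % (2 * r))   # opposite endpoint j + r
        paths.append(seq)
    return paths

def st_paths(n):
    """n/2 edge-disjoint s-t paths in K_n (n even), as in the proof:
    the edge {s,t}, plus one path through each Hamilton path of
    K_n minus {s,t}."""
    r = (n - 2) // 2
    paths = [["S", "T"]]                # the single-edge path {s,t}
    for p in walecki_paths(r):
        paths.append(["S"] + p + ["T"])
    return paths
```

One can verify directly that the $r$ zigzag paths are Hamiltonian and together use every edge of $K_{2r}$ exactly once, and that the resulting $n/2$ $s$--$t$ paths are pairwise edge-disjoint.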
It is true, though, that $\sum_{i=1}^k X_i \geq \sum_{i=1}^{k-1} \parens{W\os i^s + W\os i^t}$ (summing only to ${k-1}$ on the right-hand side to avoid doubly counting edge \ensuremath{\set{s,t}}\xspace), and we use this in proving the lower bounds on $X_k$ (in \cref{lowerbound} for uniform and \cref{ExpLB} for exponential) and, more surprisingly, in proving the upper bounds on $X_k$ for large $k$ (in \cref{largekUB} generically, the details treated in \cref{unifUB,ExpBounds}). Our upper bounds are obtained by reasoning as follows. Janson~\cite{Janson123} analyses the shortest $s$--$t$\xspace path, and shortest-path tree (SP tree or SPT) on $s$, in the randomly edge-weighted graph $G=K_n$, showing that the cost of $P_1$ is, asymptotically almost surely, almost exactly $\ln n/n$. When the path $P_1$ is deleted, this prunes away a root-level branch of the SP tree. The SP tree is a uniform random tree, and using known properties of such trees (see for example~\cite{Su}) it is not hard to show that what remains of the SP tree is likely to be large; capitalising on this we can find an almost equally cheap path $P'_2$. This line of argument also shows that there remains a cheap path after deleting $P'_2$, but we need to know what happens when we delete the true second-shortest path $P_2$, and at this point the argument fails because it gives no characterisation of $P_2$, only of $P_2'$. We do know, however, that $P_2$ is cheap (no more expensive than $P'_2$), and of course uses just one edge incident to each of $s$ and $t$, and we will show that deleting \emph{any} edge set with these properties (including $P_2$ as a possibility) must still leave a cheap path $P'_3$, and so forth. This ``adversarial'' deletion argument is developed in \cref{adversary} to prove \cref{Tmain}. \subsection{Context} The question fits with a broad research theme on optimisation (and satisfiability) problems on random structures.
The novel element here is the ``robustness'' aspect of finding cheap structures even after the cheapest has been removed, and in this we were motivated by a recent study by Janson and Sorkin~\cite{JaSoMST} of the same question for successive minimum spanning trees (MSTs), again for $K_n$ with uniform or exponential random edge weights. The results for shortest paths and MSTs are dramatically different. For MSTs, it is a celebrated result of Frieze~\cite{FriezeMST} that as $n \to \infty$ the cost of the MST $T_1$ satisfies $w(T_1) \pto \zeta(3) \eqdef \sum_{k=1}^{\infty} 1/k^3$, and~\cite{JaSoMST} shows that each subsequent tree's cost has $w(T_k) \pto \gamma_k$ with the $\gamma_k$ strictly increasing (and $2k-2\sqrt k <\gamma_k<2k+2\sqrt k$). That is very different from the case here, for paths, where for $k=o(\ln n)$ we have $X_k$ asymptotically equal to $X_1$. Further context is given in the discussion of open problems in \cref{otherModels}. \subsection{Edge weight distributions} As remarked earlier, in many contexts (including for the length $X_1$ of a shortest path) the result for any distribution with positive density at 0 follows immediately from that for the uniform distribution $U(0,1)$, but that is not the case for the successive paths considered here. \begin{remark}\label{remark:blackbox} Janson proves the $X_1$ case in the exponential model but provides standard ``black-box'' reasoning that it holds also for the uniform distribution, for any distribution with density 1 at 0 (i.e., with cumulative distribution function (CDF) $\Pr(X \leq x) = x+o(x)$ for $x \downto 0$), and, after simple rescaling, for any distribution with positive density at 0. Simply, if there is a path of cost $o(1)$ in some such model, each edge $w$ must also cost $o(1)$, and, coupling with the uniform distribution by replacing $w$ with $w'=F(w)$, with $F$ the CDF, $w' \leq (1+o(1)) w$, and thus the same path is similarly cheap in the uniform model. 
By the same token, if a path is cheap in any model, the same path has asymptotically the same cost in any other model, and thus the cheapest paths have asymptotically the same cost. \end{remark} \begin{remark}\label{remark:openbox} In our setting this argument does not apply: to find path $P_k$ we must know the nature of the $k-1$ previous paths; their costs are not enough. For $k=o(n)$, however, the standard argument applies within our proofs, since the proofs rely only on edges of cost $o(1)$. However, for larger $k$ there are genuine difficulties. Our argument for the exponential case, in \cref{ExpBounds}, largely parallels that for uniform but requires new calculations for the upper bound, and one new idea for the lower bound (in \cref{ExpLB}). It is not clear for what other edge-weight distributions (even those with density 1 at 0) \cref{eq:ExpMain} will hold. \end{remark} \section{Open problems} \subsection{Poisson multigraph model} The issue of possible non-existence of paths $P_k$ for $k>n/2$ (see \cref{rmk:existence}) is obviated if, as in \cite{JaSoMST}, we work in a Poisson multigraph model. Here, each pair of vertices $\set{u,v}$ of $K_n$ is joined by infinitely many edges, whose weights are drawn from a Poisson process of rate 1 (so that the cheapest $\set{u,v}$ edge has exponentially distributed cost of mean 1). By construction, in this model every $s$--$t$\xspace path is always available (possibly at a higher cost). \begin{conjecture} In the Poisson multigraph model, $ \frac{X_k}{2k/n + \ln n / n} \pto 1 $ uniformly for all $k \leq n-1$, and $ \frac{\E X_k}{2k/n + \ln n / n} \to 1 $ for all $k \leq n-1$. \end{conjecture} \noindent Actually, in this model there is no need to stop at $k=n-1$, but it is not clear how far out we can go (especially preserving uniform convergence). 
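Concretely, for the exponential model the coupling in \cref{remark:blackbox} uses $w'=F(w)=1-e^{-w}$, and the elementary bounds $w-w^2/2\leq F(w)\leq w$ give $w'=(1-o(1))\,w$ as $w \downto 0$. A minimal numeric sketch of these bounds:

```python
import math

def exp_cdf(w):
    """CDF of Exp(1): the coupling w -> F(w) maps exponential
    edge weights to uniform U(0,1) ones."""
    return 1.0 - math.exp(-w)

# F(w) <= w and F(w) >= w - w^2/2 for all w >= 0, so F(w)/w -> 1
# as w -> 0: cheap edges keep asymptotically the same cost.
for w in [10.0, 1.0, 0.1, 0.01, 1e-4]:
    assert exp_cdf(w) <= w
    assert exp_cdf(w) >= w - w * w / 2.0
```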
\subsection{Other models} \label{otherModels} Most narrowly, it would be interesting to characterise successive shortest paths that are vertex-disjoint rather than edge-disjoint, and (in the style of \Cref{rmk:pathsFk} for edge-disjoint paths) the $k$ vertex-disjoint paths of collective minimum cost. In this model, guessing that path lengths stay around $\log n$, we would expect $P_k$ to be defined up to $k$ about $n / \log n$. More broadly, it would be interesting to explore different edge-weight distributions, different structures, and different graphs. As noted earlier, we have results for uniformly and exponentially distributed edge weights, but not for arbitrary distributions. As mentioned, results for the single shortest path follow by standard arguments for any distribution with positive density near~0. For a distribution with density tending to 0 or $\infty$ at 0, shortest paths were studied in \cite{BH2012}. In particular, they consider the case when edge weights are i.i.d\xperiod and have the same distribution as $Z^{p}$, where $Z \sim \Exp(1)$ and $p>0$ is a fixed parameter; in this setting, the shortest path has length $p \ln n$ and its cost is $\ln n/n^p$ times a $p$-dependent constant. A variant where the edge-weight distribution may depend on $n$ is studied in \cite{Eckhoff}. To what distributions does \cref{Texp} extend? Restricting to distributions with positive density near~0, the arguments in \cref{ExpBounds} should immediately extend for all $k=o(n)$. For larger $k$, the ``middle'' of each path should remain short, so the issue is the edges incident on $s$ and $t$ in $P_k$. Certainly \cref{eq:ExpMain} will fail if the order statistics of edges incident to $s$ are not concentrated, for example if the edge distribution is a mixture of $U(0,1)$ and an atom at 2 or (for a continuous example) a mixture of $U(0,1)$ and the Pareto distribution with CDF $1-1/x$ for $x\geq 1$. 
It might be true that \cref{eq:ExpMain} holds more generally if the expectation $2\E W\os k$ is replaced by $W\os k^s+W\os k^t$. However, to obtain the needed lower bound for the exponential model (see \cref{ExpBounds}), we had to address the fact that the $k$th path does not necessarily use the edges of cost $W\os k^s$ and $W\os k^t$; we also needed exponential-specific calculations for the upper bound. One could explore other structural models. Minimum spanning trees (MSTs) have already been explored in \cite{JaSoMST} for the successive version and in \cite{FrJo} for the collective version. But for many other models the single cheapest structure is well studied but the successive and collective extensions have not been explored: this includes perfect matchings in complete bipartite graphs $K_{n,n}$ \cite{AldousAP,WastlundAP}, perfect matchings in complete graphs $K_n$ \cite{WastlundPM}, and Hamilton cycles (i.e., the Travelling Salesman Problem) in $K_n$ \cite{Was2010}. One could also consider graphs other than complete graphs, in the style of studies of the MST in a random regular graph \cite{BFM1998}, and of first-passage percolation in \ER random graphs \cite{Bhamidi} and hypercubes \cite{Martinsson}. \section{Upper bound for small \tp{$k$}{k}}\label{sec:k-small} In this section we prove the upper bound of \cref{Tmain} for all $k=o(\sqrt n)$; larger values are treated in the next section. As discussed in the introduction, we can characterise the cheapest path $P_1$ and subsequent paths that are \emph{cheap} but not necessarily \emph{cheapest}, putting us at a loss to characterise what remains on deletion of a subsequent \emph{cheapest} path. We address this in this section. Given $k$, we show a construction of a subgraph $\ensuremath{R}\xspace=R^{(k)}$ of $G$ designed so that, as we will show in turn, its $s$--$t$\xspace paths are all cheap, and no deletion of edges from $\ensuremath{R}\xspace$ subject to certain constraints can destroy all these paths. 
We show that the union of the $k$ shortest paths satisfies these constraints, so that there remains a cheap $s$--$t$\xspace path in $\ensuremath{R}\xspace$ and thus in $G$, and use this to prove \cref{Tmain}. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{Rsmall} \caption{Cartoon of a robust subgraph $\ensuremath{R}\xspace$ of $G$, showing the vertices $s$ and $t$, their respective structures $R_s$ and $R_t$ including shortest-path trees represented by triangles (some ``failed'' and thus not shown), and the cheap edges connecting triangles in $R_s$ and $R_t$. Vertices $s$ and $t$ have down-degree (number of children) $r_0$, and vertices at levels 1 and 2 (in $R_s$ and $R_t$) have down-degrees $r_1$ and $r_2$ respectively. }\label{Frobust} \end{figure} Specifically, we will define a structure $\ensuremath{R}\xspace$, sketched in \cref{Frobust}, that has many cheap and spread-out paths between $s$ and $t$, within which we will always find a cheap path. A crucial point is that each step of the construction occurs in a complete induced subgraph of $G$ of size $n-o(n)$ with all edges unconditioned. We will show, assuming that \begin{align}\label{indHyp} X_i \leq (1+\eps) \parens { \frac{2i}n+\frac{\ln n}n } \end{align} for all $i\leq k$, that the same holds for $i={k+1}$. We will do so by showing that after deleting $k$ paths, each of cost $\leq (1+\eps) (2k/n+\ln n/n)$ from $G$, some or all of whose edges may lie in $R$, there remains a path in $R$ satisfying the same cost bound, and so this must also be true of $P_{k+1}$. \medskip Consistent with this approach, and because to prove convergence in probability it suffices to consider an arbitrarily small, fixed $\varepsilon$ (see around \cref{Xkbounds}), throughout this section we assume that $\varepsilon>0$ is fixed. 
Thus, in the $n \to \infty$ limit implicit throughout, \begin{align} \varepsilon=\Theta(1) , \label{epsconst} \end{align} and $\varepsilon$ (and functions of $\varepsilon$) may be absorbed into the constants implicit in any Landau-notation expression. \begin{remark} \label{warning} Most of the calculations below hold for any $\varepsilon>0$, but a few (\cref{Bheavy} and \cref{pathlength} for example) hold only for $\varepsilon$ sufficiently small. This is not restrictive here, in proving convergence in probability, but to characterise expectation, \cref{expSmallk} requires $\varepsilon$ to be a large constant (to assure sufficiently small failure probabilities). The proof of \cref{lem:large-eps} addresses the changes needed. \end{remark} Before going into detail let us sketch the construction of $R$. We first build up a tree $R_s$ on $s$, starting from $s$ at level 0, the opposite endpoints of edges out of level $i$ forming level $i+1$. We will always choose ``cheap'' edges, but not always the cheapest ones, as explained later. From $s$ we will choose $k+\rr0$ cheap edges; from each of these $k+\rr0$ level-1 vertices we choose $\rr1$ cheap edges; from each of the $(k+\rr0)\rr1$ level-2 vertices we choose $\rr2$ cheap edges; and on each of the $(k+\rr0)\rr1 \rr2$ level-3 vertices we construct a shortest-path tree comprising $\dd$ vertices. We do a similar construction on $t$ to form $R_t$. Finally, we link $R_s$ and $R_t$ using cheap edges between their shortest-path trees. The values of the parameters $\rr0$, $\rr1$, $\rr2$ and $d$ are given in ~\cref{k0},~\cref{k1},~\cref{k2} and~\cref{ddef}, and it is confirmed in \cref{sec:Rsize} that the construction uses only a small fraction of $G$'s vertices, \begin{align}\label{n'} \card{V(R)} &= O((k+\rr0) \rr1 \rr2 d) = o(n) , \end{align} a fact we rely on in the construction. 
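For concreteness, the order-of-growth arithmetic behind \cref{n'}, with the parameter choices of \cref{k0}, \cref{k1}, \cref{k2} and \cref{ddef} and with $\varepsilon$ fixed as in \cref{epsconst} (a sketch, suppressing $\varepsilon$-dependent constants):

```latex
(k+\rr0)\,\rr1\,\rr2\, d
  = O(s)\cdot O(1)\cdot O(s)\cdot O\parens{\sqrt{\frac{n\ln n}{s^3}}}
  = O\parens{\sqrt{s\,n\ln n}} ,
```

which is $o(n)$ whenever $s=o(n/\ln n)$; in particular, in the range $k=o(\sqrt n)$ treated in this section it is $o\parens{n^{3/4}\sqrt{\ln n}}=o(n)$.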
We will repeatedly use the following Chernoff bound, which in fact holds under more general conditions; see for example~\cite[Theorem 1, eq.~(4)]{Janson2001}. \begin{lemma}\label{lemma:BinDev} Let $X \sim \Bi(n,p)$ be a binomial random variable with mean $\lambda = np$. Then for any $\varepsilon>0$, $\Pr(X< (1-\varepsilon)\lambda) \leq \exp(-\varepsilon^2 \lambda/2)$. \end{lemma} \subsection{Cheap paths are short} We show that, w.h.p\xperiod (with high probability), every cheap path in $G$ is also short. The following lemma asserts the contrapositive. The result is used in~\cref{Bany} to restrict the number of edges the adversary can delete. \begin{lemma}\label{LLenBd} In both the uniform and exponential models, with probability $1-\OO{n^{-1.9}}$, simultaneously for all $l$ with $\ln n \leq l < n$, every $s$--$t$\xspace path of length $l$ has cost $\geq l/(19 n)$. \end{lemma} \begin{proof} We start with the uniform distribution. Here, with $X=\sum_{i=1}^{l} X_i$, $X_i \sim U(0,1)$ i.i.d\xperiod, $X$ has the Irwin-Hall distribution and it is a standard result that $\Pr(X \leq a) \leq a^l/l!$ (see for example \cite[eq.\ 8]{SorkinClique}). Recall that Stirling's approximation, $l! \geq \sqrt{2\pi l} \, (l/e)^l$, is also a lower bound. Thus, \begin{align*} \Prr{X \leq \frac{l}{19n}} &\leq \frac{(l/19n)^l}{l!} \leq \frac{(l/19n)^l}{\sqrt{2\pi l} \, (l/e)^l} < \parens{\frac{e}{19n}}^l. \end{align*} The cost of a fixed path of length $l$ has the same law as $X$. Over the $\leq n^l$ choices for such a path, the number $M_l$ of ``cheap paths'' (of cost $<l/(19n)$) satisfies (by Markov's inequality) \[ \Pr(M_l>0) \leq \E M_l \leq n^l \Prr{X \leq \frac{l}{19n}} \leq n^l \parens{\frac{e}{19n}}^l = \parens{\frac e{19}}^l . \] Summing over $l \geq \ln n$, the probability that there is a cheap path of any such length is $\OO{{(e/19)}^{\ln n}} = \OO{n^{-1.9}}$.
Since an $\Exp(1)$ random weight $X'$ can be obtained from a $U(0,1)$ r.v.\ $X$ by setting $X' = -\ln(1-X)>X$, the exponential weight stochastically dominates the uniform, so the result for uniform immediately implies that for exponential. \end{proof} \subsection{Adversarial edge deletions}\label{adversary} As noted in the introduction, we introduce an edge-deleting adversary whose powers allow it to delete the paths $P_1,\ldots,P_k$, but which is more easily characterised than those paths are. We now specify what the adversary is permitted to do. Let \begin{align}\label{sdef} s = s(k) & \coloneqq 2k+\ln n . \end{align} (From context it should be easy to distinguish this use of $s$ from that as the source of an $s$--$t$\xspace path.) Let $w_0$ be the ``target cost'' of a path, namely \begin{align}\label{w0def} w_0 = w_0(k) & \coloneqq \frac s n = \frac{2k}n+\frac{\ln n}n . \end{align} Define a ``heavy'' edge to be one of cost \begin{align}\label{heavydef} & \geq \tfrac1{11} \varepsilon w_0. \end{align} Assuming that each of $P_1,\ldots,P_k$ has weight $\leq (1+\eps) w_0$, the \emph{number of heavy edges} in $P_1 \cup \cdots \cup P_k$ is at most \begin{align} \frac{k (1+\eps) w_0}{\tfrac1{11} \varepsilon w_0} < \frac{12k}{\varepsilon} < \frac{12 s}{\varepsilon} . \label{Bheavy} \end{align} Also, modulo the one-time failure probability $\OO{n^{-1.9}}$ from \cref{LLenBd}, by that lemma each path has length at most \begin{align} (1+\eps) w_0 \cdot 19n < 20 s . \label{pathlength} \end{align} Thus, the length of all $k$ paths taken together (i.e., the number of edges in $P_1 \cup \dots \cup P_k$) is at most \begin{align} 20 k s < 10 s^2 . 
\label{Bany} \end{align} And of course the $k$ paths include \begin{align} \label{Bincident} &\text{exactly $k$ edges incident on each of $s$ and $t$.} \end{align} Subject to these assumptions --- that each of $P_1,\ldots,P_k$ has weight $\leq (1+\eps) w_0$ and that the high-probability conclusion of \cref{LLenBd} holds --- $P_1 \cup \cdots \cup P_k$ satisfies all three of the constraints \cref{Bheavy}, \cref{Bany}, and \cref{Bincident} on heavy edges, all edges, and ``incident'' edges. An adversary who can delete any edge set subject to these constraints is able to delete $P_1 \cup \cdots \cup P_k$, which is all we require. However, to simplify analysis we will give the adversary even more power. At the root of $R$ we will allow the adversary to delete edges subject only to \cref{Bincident}; at level 1, additional edges subject only to the ``heavy-edge budget'' \cref{Bheavy}; and at levels 2 and 3 and for ``middle'' edges, additional edges subject only to the ``edge-count budget'' \cref{Bany}. \medskip We will show how to choose the parameters of $R$ so that every $s$--$t$\xspace path in $R$ has cost $\leq (1+\eps) w_0$, and so that $R$ is ``robust'': after the adversarial deletions, at least one path remains. Specifically, we will arrange that there remains a path in which the ``root'' edge incident to $s$ costs $\leq \tfrac k n + \frac19 \varepsilon w_0$, the edge out of level 1 is heavy but has cost $\leq \frac19 \varepsilon w_0$, the edge out of level 2 may be light or heavy and also has cost $\leq \frac19 \varepsilon w_0$, the path through the SP tree has total cost $\leq \frac12 \tfrac {\ln n} n + \frac19 \varepsilon w_0$, the central edge joining this to the opposite SP tree adds cost $\leq \frac19 \varepsilon w_0$, and the continuation of this path to $t$ has the symmetrical properties. It is immediate that such a path has total cost $\leq (2k+\ln n)/n + 9 \cdot \tfrac19 \varepsilon w_0 = (1+\eps) w_0$. 
(But see \cref{Rpathcost} for confirmation, after the construction is detailed.) \medskip \subsection{Level 0, cheapest edges}\label{level0} On $s$, add to $R$ the $k+\rr0$ edges of lowest cost, excluding $\set{s,t}$ from consideration, with \begin{align}\label{k0} \rr0 = \ceil{\tfrac1{10} \varepsilon s} = \Theta(s) . \end{align} Consider this step a \emph{failure} if any selected edge has cost greater than $\tfrac k n+\frac19 \varepsilon w_0$. There are $n'=n-2=(1-o(1))n$ edges under consideration, with weights i.i.d\xperiod $U(0,1)$, and failure occurs iff the number $X$ of edges with weights in the interval $[0, \tfrac k n+\frac19 \varepsilon w_0]$ is smaller than $k+\rr0$. Note that $X \sim \Bi(n', \tfrac k n+\frac19 \varepsilon w_0)$, thus $\E X = (1-o(1)) \, (k+\frac19 \varepsilon s)$, and failure means that $X <k+\rr0$, i.e., that \begin{align*} \frac{X}{\E X} &< (1+o(1)) \, \frac{k+\rr0}{k+\frac19 \varepsilon s} = (1+o(1)) \, \frac{k+\frac1{10} \varepsilon s}{k+\frac19 \varepsilon s} , \intertext{which by $s>2k$ is} &< (1+o(1)) \, \frac{k+\frac1{10} \varepsilon \cdot 2k}{k+\frac19 \varepsilon \cdot 2k} = (1+o(1)) \, \frac{(1+\frac2{10} \varepsilon)k}{(1+\frac29 \varepsilon)k} < 1-\tfrac1{50}\varepsilon = 1-\Omega(\varepsilon) . \end{align*} By \cref{lemma:BinDev}, then, the probability of failure is \begin{align} \Pr(X < (1-\Omega(\varepsilon)) \E X) &\leq \exp(-\Omega(\varepsilon^2) \E X/2) \leq \exp(-\Omega(\varepsilon^2 \cdot \varepsilon s)) \leq \exp(-\Theta(s)) , \label{level0fail} \end{align} the final expression using that $\varepsilon$ is constant (see \cref{epsconst}). So, modulo the given failure probability, every selected edge incident on $s$ has cost $\leq \tfrac k n+\frac19 \varepsilon w_0$, and after the adversarial deletion of $k$ of these edges, $\rr0$ remain. The selection of edges conditions the costs of the other edges incident on $s$, but none will play any role in the analysis. 
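The application of \cref{lemma:BinDev} here can be sanity-checked against the exact binomial lower tail (a sketch only; the values $n=2000$, $p=0.01$, so $\lambda=20$, are arbitrary):

```python
import math

def binom_lower_tail(n, p, x):
    """Exact P(X < x) for X ~ Bin(n, p)."""
    return sum(math.comb(n, j) * p**j * (1.0 - p)**(n - j)
               for j in range(math.ceil(x)))

# Compare the exact lower tail with the Chernoff bound
# exp(-eps^2 * lambda / 2) from the concentration lemma.
n, p = 2000, 0.01
lam = n * p
for eps in (0.2, 0.5, 0.8):
    exact = binom_lower_tail(n, p, (1 - eps) * lam)
    bound = math.exp(-eps**2 * lam / 2)
    assert exact <= bound
```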
The purpose of the next two levels is to expand the number of edges to the point where the adversary cannot delete all of them, because of the heavy-edge budget \cref{Bheavy} for edges out of level 1, and the edge-count budget \cref{Bany} for edges out of level 2 and beyond. At the same time, we try to minimise the number of vertices introduced into the construction so that it will remain $o(n)$ for as large a $k$ as possible. \subsection{Level 1, cheapest heavy edges}\label{level1} From each neighbour $v$ of $s$ along the edges just added, add to $R$ the \begin{align}\label{k1} \rr1 \coloneqq \ceil{10,000/\varepsilon^2}= \Theta(1) \end{align} cheapest \emph{heavy} edges from $v$ to any of the $n'=n(1-o(1))$ vertices not yet added (see \cref{n'}), as before also excluding vertex $t$. Consider this step a \emph{failure} if any added edge has cost greater than $\frac19 \varepsilon w_0$. For each neighbour $v$ there are $n'$ edges under consideration, with weights i.i.d\xperiod $U(0,1)$, and failure occurs iff the number $X$ of edges with weights in the interval $[\frac1{11} \varepsilon w_0, \frac19 \varepsilon w_0]$ is smaller than $\rr1$. Note that $X \sim \Bi(n', (\frac19-\frac1{11}) \varepsilon w_0)$, thus $\E X = (1-o(1)) \, (\frac19-\frac1{11}) \varepsilon s = \Theta(\varepsilon s)$. Failure means that $X < \rr1 < \E X/2$, so by \cref{lemma:BinDev} the probability of failure for a given $v$ is $\leq \exp(-\Theta(\varepsilon s))$. The number of level-1 vertices $v$ is $k+\rr0 = O(s)$, so by the union bound the probability of any failure is \begin{align}\label{level1fail} \leq O(s) \exp(-\Theta(\varepsilon s)) & \leq \exp(-\Theta(s)), \end{align} by suitable adjustment of the constants implicit in $\Theta$. This edge selection conditions the costs of the other edges incident on each $v$, but none will play any role in the analysis. 
The adversary must leave $\rr0$ edges out of the root, expanding to \[ \rr0 \rr1 \geq \tfrac1{10} \varepsilon s \cdot 10,000/\varepsilon^2 = 1,000 s/\varepsilon \] (heavy) edges out of level 1, of which (by~\cref{Bheavy}) he can delete at most $12 s/\varepsilon$, leaving (very generously calculated) at least \begin{align}\label{K1} \Rsub1 & \coloneqq 120 s/\varepsilon = \Theta(\rr0 \rr1) \end{align} edges out of level 1. The vertices at the opposite endpoints of these edges constitute level 2. \subsection{Level 2, cheapest edges}\label{level2} From each level 2 vertex $v$ in turn, add to $R$ the cheapest $\rr2$ edges to any of the $n'=n(1-o(1))$ vertices not yet added, again also excluding vertex $t$ from consideration. Here choose $\rr2$ so as to make \begin{align}\label{K2} \Rsub2 \coloneqq \Rsub1 \rr2 = 12 s^2 , \end{align} namely taking \begin{align} \rr2 = \frac{12 s^2}{\Rsub1} = \frac{12 s^2}{120 s/\varepsilon} = \tfrac1{10} \varepsilon s = \Theta(\varepsilon s) . \label{k2} \end{align} Consider this step a \emph{failure} if any added edge has cost greater than $\frac19 \varepsilon w_0$. For each neighbour $v$ there are $n'=(1-o(1))n$ edges under consideration, with weights i.i.d\xperiod $U(0,1)$, and failure occurs iff the number $X$ of edges with weights in the interval $[0, \frac19 \varepsilon w_0]$ is smaller than $\rr2$. Note that $X \sim \Bi(n', \frac19 \varepsilon w_0)$, thus $\E X = (1-o(1)) \, \frac19 \varepsilon s = \Theta(\varepsilon s)$. Failure means that $X < \rr2 < 0.99 \E X$, so by \cref{lemma:BinDev} the probability of failure for a given $v$ is $\leq \exp(-\Theta(\varepsilon s))$. The number of level-2 vertices $v$ is $(k+\rr0) \rr1 = \OO{s}$ so by the union bound the probability of any failure is \begin{align}\label{level2fail} & \leq \exp(-\Theta(\varepsilon s)) \end{align} This edge selection conditions the costs of the other edges incident on each $v$, but none will play any role in the analysis. 
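As a check of the level-2 failure condition used above: by \cref{k2} and the expectation just computed,
\[ \frac{\rr2}{\E X} = (1+o(1)) \, \frac{\frac1{10}\varepsilon s}{\frac19 \varepsilon s} = (1+o(1)) \cdot \frac9{10} < 0.99 , \]
so $\rr2 < 0.99 \E X$ for $n$ sufficiently large.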
The adversary had to leave at least $\Rsub1$ edges out of level 1, expanding to $\Rsub1 \rr2 = \Rsub2 = 12 s^2$ edges out of level 2, of which by~\cref{Bany} he can delete at most $10 s^2$, leaving at least $2 s^2$ edges out of level 2. The vertices at the opposite endpoints of these edges constitute level 3. \subsection{Level 3, shortest-path trees}\label{level3} We now grow each level-3 vertex $v$ to a tree $T_v$ with $d$ vertices, including $v$, choosing \begin{align} d \coloneqq \ceil{ \sqrt{\frac{n \ln n}{2 s^3}} \; } < \sqrt n. \label{ddef} \end{align} We grow these trees one after another, always working within the $n'=n(1-o(1))$ vertices not yet added, and again always excluding vertex $t$ from consideration. Controlling the lengths of the paths in $T_v$ would allow a choice of $d$ as large as $\sqrt n$, but we make it smaller to keep the number of vertices in $R$ as small as possible (and thus keep it to $o(n)$ for $s$ as large as possible). Here it will be convenient to work with exponentially rather than uniformly distributed edge weights. There are various easy ways to arrange this. We do so by temporarily replacing each uniform weight $w$ with a weight $w' = -\ln (1-w)$; it is standard that these transformed weights are exponentially distributed, and that $w' \geq w$. We construct a shortest-path tree (SPT) of order $d$ using the transformed weights; it will not be an SPT for the original weights, but its paths will be short under the original weights, which is all that we care about. Define the distance $\operatorname{dist}(u,v)$ between two vertices to be the cost of a minimum-weight path between them, and define the radius $\operatorname{rad}(T_v)$ of an SPT $T_v$ to be the maximum distance from $v$ to any vertex in $T_v$. The radius is described by the following claim, which we phrase in a generic setting with $n$ vertices and a root vertex $s$.
\begin{claim} \label{treeradius} In a complete graph $K_n$ with i.i.d\xperiod exponential edge weights with mean 1, the radius $X=\operatorname{rad}(T_s)$ of a shortest-path tree $T_s$ of order $d$ is \begin{align}\label{treeX} X = \sum_{i=1}^{d-1} X_i , \end{align} where the $X_i$ are independent random variables with $X_i \sim \Exp(i(n-i))$. \end{claim} \begin{proof} Following~\cite{Janson123}, think of the process of finding shortest paths from $s$ to other vertices as first-passage percolation or ``infection spreading'' starting from $s$. Let $L\coloneqq L(r)$ be the set of vertices within radius (distance) $r$ of $s$; we think of gradually increasing $r$, starting with $r=0$ where $L=\set s$. It is well known that each edge $(v,u) \in L(r) \times (V\setminus L(r))$ has exponentially distributed weight $W$ conditioned by $W+\operatorname{dist}(s,v) \geq r$, and that these random weights are independent. This can be seen by imagining that infection has spread to radius $r$ from $s$, including to the vertex $v$ and additionally a length $r-\operatorname{dist}(s,v)$ further along the edge $(v,u)$, and appealing to the memoryless property of the exponential distribution; it can also be verified by analysing Dijkstra's algorithm in this randomised setting. It follows that the distance $X_1$ to the vertex nearest $s$ is distributed as $X_1 \sim \Exp(n-1)$; the additional distance to the next vertex is $X_2$ with $X_2 \sim \Exp(2(n-2))$ and independent of $X_1$ (for total distance $X_1+X_2$); and when there are $i$ vertices in the tree, the additional distance to the next is $X_i \sim \Exp(i(n-i))$, with all the $X_i$ independent, for total distance as claimed. \end{proof} \emph{We will only use trees} $T_v$ whose radius is $X \leq (1+\tfrac29 \varepsilon) \, \tfrac12 \ln n/n$ $< \tfrac12 \ln n/n + \tfrac19 \varepsilon w_0$. Call a tree a failure (and do not include it in the structure $R$) if $X > (1+\tfrac29 \varepsilon) \tfrac12 \ln n/n$. 
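The second inequality in the radius threshold is a direct check: recalling from \cref{w0def} that $w_0 \geq \ln n/n$,
\[ \left(1+\tfrac29 \varepsilon\right) \tfrac12 \ln n/n = \tfrac12 \ln n/n + \tfrac19 \varepsilon \cdot \ln n/n \leq \tfrac12 \ln n/n + \tfrac19 \varepsilon w_0 . \]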
Declare the construction of level 3 a \emph{failure} if more than $0.01 s^2$ trees fail. Since~\cref{treeX} is monotone increasing in $d$, the larger the $d$, the greater the probability of failure, so in the next paragraphs we will pessimistically take $d$ to be $\sqrt n$ (ignoring integrality since $\sqrt n$ is large). In this case, applying \cref{treeradius} to $T_v$, constructed in a complete graph of order $n'=n(1-o(1))$, the expectation of $X$ is \begin{align} \mu \coloneqq \E X &= \sum_{i=1}^{d-1} \frac1{i(n'-i)} = \frac{1+o(1)}{n} \sum_{i=1}^{d-1} \frac1{i} = (1+o(1)) \frac{\ln d}n = (1+o(1)) \frac12 \frac{\ln n}n \label{treeDia} . \end{align} Thus, failure of $T_v$ implies that \begin{align}\label{treefail1} \frac X \mu &> 1+\frac15 \varepsilon . \end{align} To bound the probability of this event we require one more lemma (also used later in proving \cref{lemma:edge-orderstat}). \begin{lemma}[{\cite[Theorem 5.1]{JansonExpTail}}]\label{exptail} Let $X=\sum_{i=1}^{n} X_i$ with $X_i \sim \Exp(a_i)$ independent rate-$a_i$ random variables, where $a_i > 0$. Write $a_{*} \coloneqq \min_{i} a_i$ and $\mu \coloneqq \E X = \sum_{i=1}^n \frac 1{a_i}$. Then: \newline for any $\lambda = 1+\varepsilon > 1$, \begin{align}\label{exp.1} \Prob(X \geq \lambda \mu) & \leq \lambda^{-1} e^{-a_{*} \mu (\lambda - 1 - \ln \lambda)} \leq \exp(-\Omega(a_* \mu)) ; \end{align} for any $\lambda = 1- \varepsilon < 1$, \begin{align}\label{exp.2} \Prob(X \leq \lambda \mu) & \leq e^{-a_{*} \mu (\lambda - 1 - \ln \lambda)} \leq \exp(-\Omega(a_* \mu)) ; \end{align} and for any $\varepsilon>0$, \begin{align}\label{exp.3} \Prob( \abs{X-\mu} \geq \varepsilon \mu) & \leq 2 \exp(-\Omega(a_* \mu )). \end{align} The constants implicit in the $\Omega(\cdot)$ expressions are positive and only depend on $\varepsilon$. \end{lemma} \begin{proof} The inequalities in \cref{exp.1} and \cref{exp.2} in terms of $\lambda$ are directly from~\cite[Theorem 5.1]{JansonExpTail}.
The remaining expressions, including \cref{exp.3}, follow immediately. \end{proof} From \cref{exp.1} of \cref{exptail}, the probability of the event in \cref{treefail1} (and thus that of $T_v$ failing) is at most \begin{align} \Pr\big( {X-\mu} > \frac15 \varepsilon \mu \big) & \leq \exp(-\Omega(a_* \mu) ) = \exp( -\Omega(\ln n)), \label{treeFail} \end{align} using that $a_* = n'-1 = (1-o(1))n$, $\mu$ is given by \cref{treeDia}, and $\varepsilon=\Theta(1)$. The total number of trees built is $N=(k+\rr0) \rr1 \rr2$, which, with reference to \cref{sdef}, \cref{k0}, \cref{k1}, and \cref{k2}, is $\Theta(s^2)$. By \cref{treeFail}, each tree independently fails with at most some probability $p=o(1)$. Thus, the number of trees surviving dominates $\Bi(N,1-p)$, with expectation $\lambda=N(1-p)=N(1-o(1))$. Failure at level 3 means that at least $0.01 s^2=\Theta(N)$ trees fail, equivalently the number surviving is at most some $\lambda(1-\Theta(1))$, which by \cref{lemma:BinDev} has probability \begin{align}\label{level3fail} &\leq \exp(-\Omega(s^2)) . \end{align} \begin{remark}\label{rem1} When construction of a tree $T_v$ rooted at a level-3 vertex $v$ is finished, the edge between any vertex $a$ of $T_v$ and any vertex $b$ in $V' \setminus V(T_v)$ has weight $w(a,b)$ that --- still in the uniform model with edge weights temporarily transformed to be exponentially distributed --- is exponentially distributed conditional upon being $\geq \operatorname{rad}(T_v)-\operatorname{dist}(v,a)$. Equivalently, the edge $\set{a,b}$ gives a $v$-to-$b$ path (through $a$) with cost $\operatorname{rad}(T_v)+X_{a,b}$, where the ``excess'' $X_{a,b}$ has simple exponential distribution $X_{a,b} \sim \Exp(1)$ (with no conditioning). Furthermore, the $X_{a,b}$ are independent, over all choices of $a$ and $b$. \end{remark} Call $R_s$ the now-complete construction on $s$.
Note that there is no conditioning on edges between the remaining vertices; in particular, the SPT infection process (or equivalently Dijkstra's algorithm) as described in \cref{treeradius} never looked at edges between uninfected vertices. \subsection{Symmetric construction on vertex \tp{$t$}{t}} Just as we have constructed $R_s$, we now make a similar construction $R_t$ for vertex $t$, with the same branching factors out of levels 0, 1, and 2 and similar SPTs on level-3 vertices. Since the number $n'$ of vertices available after constructing $R_s$ still satisfies $n' = (1-o(1))n$, and because the construction on $s$ neither looked at nor conditioned any edge between these vertices, the construction on $t$ enjoys the same properties as that on $s$. \subsection{Edges between the trees on \tp{$s$}{s} and \tp{$t$}{t}} It remains only to complete paths between $s$ and $t$, which we do by adding cheap edges (where present) between the SPTs in $R_s$ and those in $R_t$. Let $T_u$ be an SPT rooted at a level-3 vertex $u$ of $R_s$, and $T_v$ one rooted at a level-3 vertex $v$ of $R_t$. Let $a$ and $b$ be any vertices in $T_u$ and $T_v$ respectively. By \cref{rem1}, edge $\set{a,b}$ gives a $u$-to-$b$ path with cost $\operatorname{rad}(T_u)+X_{a,b}$, the excesses $X_{a,b}$ being i.i.d\xperiod with distribution $\Exp(1)$. Thus, $\set{a,b}$ gives a $u$-to-$v$ path with cost $\leq \operatorname{rad}(T_u) + X_{a,b} + \operatorname{rad}(T_v)$. Select, and add to the full construction $R$, any such ``middle edge'' $\set{a,b}$ having $X_{a,b} \leq \frac19 \varepsilon w_0$. This completes the construction of $R$. \subsection{Order of \tp{$R$}{R}, failure probability, and path costs}\label{sec:Rsize} It is worth first confirming that the construction uses, as claimed, $o(n)$ vertices. The number of vertices used is of order $(k+\rr0)\rr1\rr2 d$, which by \cref{k0}, \cref{k1}, \cref{k2}, and \cref{ddef} is $O(s^2 d)$.
Recalling from~\cref{ddef} that $d = \ceil{ \sqrt{\lfrac{n \ln n}{2 s^3}} \;}$, as long as the ceiling function does not affect the order of $d$, the total number of vertices is $O(s^2 d) = O(\sqrt{n s \ln n})$, which is $o(n)$ for $s=o(n/\ln n)$. However, the ceiling function does affect the order of $d$ when $n \ln n/2s^3 < 1$, i.e., when $s> {(\frac12 n \ln n)}^{1/3}$; in this case, $d=1$, the total number of vertices used is $O(s^2)$, and this is still $o(n)$ if $s=o(\sqrt n)$. Taking the two cases together, the construction is valid up to any $s=o(\sqrt n)$, or equivalently for any $k=o(\sqrt n)$. Failures at levels 0, 1, and 2 each occur w.p.\ $\leq \exp(-\Theta(s))$ (by \cref{level0fail}, \cref{level1fail}, and \cref{level2fail}), and at level 3 w.p.\ $\leq \exp(-\Omega(s^2))$ (by \cref{level3fail}), so by the union bound the probability of any failure is $\leq \exp(-\Theta(s))$. We now confirm that, assuming that the construction was successful, any $s$--$t$\xspace path in $R$ \emph{through successful SPTs} has cost $\leq (1+\varepsilon) w_0$. (Remember that there may be some unsuccessful SPTs.) By assumption of success, any level-0 edge on $s$ or $t$ has cost $\leq \tfrac k n+\frac19 \varepsilon w_0$, any level-1 edge has cost $\leq \frac19 \varepsilon w_0$, and any level-2 edge also has cost $\leq \frac19 \varepsilon w_0$. Each successful level-3 tree $T$ in $R_s$ or $R_t$ has radius $\operatorname{rad}(T) \leq (1+\frac29 \varepsilon) \frac12 \ln n/n \leq \frac12 \ln n/n + \frac19 \varepsilon w_0$, and each selected middle edge $\set{a,b}$ connects the roots of two trees at an excess cost (above the sum of the two radii) of $X_{a,b} \leq \frac19 \varepsilon w_0$. The total of the 9 upper bounds in question is \begin{align}\label{Rpathcost} 2 \cdot \frac k n + 2 \cdot \frac12 \ln n/n + 9 \cdot \frac19 \varepsilon w_0 &= (1+\varepsilon) w_0 . 
\end{align} \subsection{Robustness of \tp{$R$}{R}} We now show that, after the deletion of the $k$ cheapest paths in $G$, there remains at least one path in $R$ (that uses successful SPTs). Recall from \cref{adversary} that deletion of the $k$ cheapest paths in $G$ is conservatively modeled as an adversarial deletion subject to: \cref{Bincident}, the deletion of exactly $k$ edges incident on each of $s$ and $t$; \cref{Bheavy}, the number of heavy edges deleted at level 1; and \cref{Bany}, the total number of edges deleted elsewhere in $R$ (at levels 2 and 3, and joining $R_s$ and $R_t$). Without loss of generality we may assume that the adversary does not delete an edge within an SPT $T$, nor a middle edge from such a tree to a facing one, since deleting the level-2 edge into the level-3 root of $T$ destroys more paths in $R$ at the same budgetary cost. By the assumption of success, there are at most $0.01 s^2$ failed SPTs on each of $s$ and $t$, and for simplicity we will deal with them by imagining all trees to be successful but allowing the adversary his choice of this many SPTs to delete; by the argument above we can model this as deletion of edges into the roots of these trees, and simply add $0.02 s^2$ to this budget. Let us now allow the adversary to delete $k$ edges from each of $s$ and $t$, $12s/\varepsilon$ edges out of level 1 for each (double-counting the heavy-edge budget), and $10.02 s^2$ edges out of level 2 for each (again double-counting). Can he destroy all $s$--$t$\xspace paths? We have not yet made any high-probability structural assertion about the middle edges, so this is a probabilistic question: what is the probability, over the randomness still present in the middle edges, that there is an adversarial deletion destroying all paths? Of the $k+\rr0=O(s)$ edges on $s$, the adversary chooses $k$ to delete; there are at most $2^{O(s)}$ ways to do so. 
Any choice leaves $\Theta(\rr0 \rr1) = \Theta(s)$ edges out of level 1, of which the adversary is able to delete a positive fraction, again in at most $2^{O(s)}$ ways. Any choice leaves $\Theta(s^2)$ edges out of level 2, of which the adversary is able to delete a positive fraction, in at most $2^{O(s^2)}$ ways. The adversary makes a similar set of choices on $t$, but still this comes to just $2^{O(s^2)}$ possible outcomes in all. A given deletion choice destroys all paths precisely if it leaves no middle edge of excess $\leq \frac19 \varepsilon w_0$. (Remember that, w.l.o.g., we have excluded deletions in and between the SPTs at level~3.) By construction, any deletion choice leaves $\Theta(s^2)$ edges out of level 2 and thus, by \cref{ddef}, $\Theta(s^2 d) = \Omega(\sqrt{n s \ln n})$ vertices in SPTs in each of $R_s$ and $R_t$, for $\Omega(n s \ln n)$ potential middle edges. A middle edge is \emph{selected} if its excess cost (in the exponential model) is $w'=-\ln(1-w) \leq \frac19 \varepsilon w_0$, i.e., if $1-w \geq \exp(-\frac19 \varepsilon w_0)$, thus is \emph{rejected} with probability $\exp(-\frac19 \varepsilon w_0)$. There is no path only if every one of the $\Omega(n s \ln n)$ potential edges is rejected, which happens w.p.\ $\leq \exp(-\Omega(\tfrac19 \varepsilon w_0 \cdot n s \ln n)) = \exp(-\Omega(s^2 \ln n))$. Taking the union bound over all adversarial choices, the probability that any choice leaves no paths is \begin{align} \label{smallkAdversaryFailure} 2^{O(s^2)} \exp(-\Omega(s^2 \ln n)) &= \exp(-\Omega(s^2 \ln n)) . \end{align} This is dominated by the failure probabilities $\exp(-\Theta(s))$ for other steps. \subsection{Success for each \tp{$k$}{k}, and for all \tp{$k$}{k}}\label{sec:small-success} We have shown that, for any $k=o(\sqrt n)$, subject to an absence of failures, we can generate a robust structure $R^{(k)}$ in which, after adversarial deletions, there remains an $s$--$t$\xspace path of cost $\leq (1+\varepsilon) w_0(k)$.
(Remember that $w_0$ and $s$ are simple functions of $k$, per~\cref{sdef} and~\cref{w0def}. Here we retain the argument $k$ we usually suppress.) There are two types of failures possible. The first is that the graph fails \cref{LLenBd}'s conclusion that ``cheap paths are short''; this occurs w.p.\ $\OO{n^{-1.9}}$. The second is that $R^{(k)}$ is not robust; this occurs w.p.\ $\OO{\exp(-\Omega(s(k)))}$. Assume success in generating $R^{(k)}$. We claim that $P_1,\ldots,P_{k+1}$ all have cost $\leq (1+\varepsilon) w_0(k)$ (call this ``cheap''). Suppose not. Then there is some $i\leq k$ for which $P_1,\ldots,P_i$ are cheap but $P_{i+1}$ is not. Our adversary's budget allows it to delete $P_1,\ldots,P_i$, and by assumption of success this leaves a cheap path $P$ in $R^{(k)}$. Thus there is a cheap $(i{+}1)$st path in $G$, a contradiction. It follows that \emph{for each} $k$, $X_{k+1} \leq (1+\varepsilon) w_0(k)$ with probability \begin{align} 1-O(n^{-1.9}) -O(\exp(-\Omega(s(k)))) . \label{ksucceeds} \end{align} \medskip A simple calculation shows that w.h.p\xperiod $X_{k+1} \leq (1+\varepsilon) w_0(k)$ \emph{simultaneously for all $k$} in this range, proving the upper bound in \cref{Xkbounds}. By the union bound, the probability of failure to build a robust structure for \emph{any} $k$ is at most \begin{align} \sum_{k=0}^{\infty} \exp(-\Omega(s(k))) &\leq \ln n \exp(-\Omega(\ln n)) + \sum_{k=\ln n}^{\infty} \exp(-\Omega(k)) \notag \\ &= \exp(-\Omega(\ln n)) = n^{-\Omega(1)}. \label{eq:all-k-small} \end{align} Including the probability of failure in applying \cref{LLenBd}, the total failure probability is $O(n^{-1.9} + n^{-\Omega(1)}) = o(1)$. \subsection{Limitation to small \tp{$k$}{k}} \label{small-k-limitation} We have established \cref{Tmain} up to any $k=o(\sqrt n)$, and the construction of $R^{(k)}$ was tailored to such values. For levels using heavy edges, fanout is limited to $O(s)$.
On the other hand, the meet-in-the-middle argument requires that each side grow large, to $\Omega(\sqrt{n/s})$. Thus, for small $k$, a more-than-constant number of levels is needed. Summing heavy edges over this many levels would exceed the target weight $(1+\varepsilon) w_0$, so light edges are needed. The adversary may delete $\Theta(s^2)$ light edges, so the construction must contain at least this many. The construction explicitly required each light edge to lead to a new vertex, and we do not readily see how to do otherwise as long as we are using shortest-path trees, thus intrinsically limiting $s$ (thus $k$) to $O(\sqrt n)$. For larger $k$, however, we can obtain sufficient heavy-edge fanout in constant depth, permitting a simpler construction described in \cref{largekUB}. \section{Edge order statistics}\label{sec:order-stat} In this section we establish results on order statistics needed in later sections. Let $\set{W\os k}_{k=1}^{n-1}$ be the order statistics of $n-1$ i.i.d\xperiod random variables, variously uniform $U(0,1)$ or exponential $\Exp(1)$. We choose $n-1$ rather than $n$ as the parameter both because many expressions are more natural in this parametrisation, and because this way $W\os k$ is the cost of the $k$th cheapest edge incident to a fixed vertex $v \in K_n$. The following lemma is used in \cref{pathweightsdetail}. \begin{lemma}\label{intervals} Let $l=n^{-0.99}$. Consider the unit interval $[0,1]$ with $n$ points placed uniformly and independently at random. Then w.h.p\xperiod every interval of length $l'\geq l$ contains at least $0.99 l' n$ points. \end{lemma} \begin{proof} Partition the unit interval into contiguous intervals $I_i$ each of length $L\coloneqq l/1000$, using $\floor{1/L}$ such intervals (possibly leaving a small interval near 1 not covered).
Any interval $I$ of length $l' \geq l$ has at least a $998/1000$ fraction of its length covered by intervals $I_i \subset I$, and we will show that w.h.p\xperiod every interval $I_i$ contains at least $0.999 L n$ points (that is, at least a $0.999$ fraction of the expectation). If so, it follows that $I$ has at least $0.999 \cdot 0.998 \, l' n \geq 0.99 l' n$ points. The number of points in each interval $I_i$ of length $L$ follows the binomial distribution $\Bi(n, L)$. By \cref{lemma:BinDev}, \begin{align*} \Prob \parens{\Bi(n, L) \leq 0.999 L n} \leq \exp(-\Omega(L n)) , \end{align*} with a positive constant implicit in the $\Omega$. The probability that any interval $I_i$ contains fewer than $0.999 L n$ points is, by the union bound, at most \begin{align}\label{eq:intervals} \floor{1/L} \cdot \exp(-{\Omega(L n)}) = \exp(-\Omega(n^{0.01})) = o(1) , \end{align} as desired. \end{proof} The following lemma is used in \cref{eq:implies-main} and \cref{eq:edge-orderstat}. \begin{lemma}\label{lemma:edge-orderstat} Let $\set{W\os k}_{k=1}^{n-1}$ be the order statistics of $n-1$ i.i.d\xperiod random variables, either all uniform $U(0,1)$ or all exponential $\Exp(1)$. For any $\varepsilon > 0$ and $a= a(n) = \omega(1)$, w.h.p\xperiod \[ 1-\varepsilon \leq \frac{W\os k}{\E W\os k} \leq 1+\varepsilon \] simultaneously for all $k$ in the range $a \leq k \leq n-1$. \end{lemma} \begin{proof} Without loss of generality, we may assume that $a \leq n/10$. \textbf{Exponential case}. It is standard that, where $Z_i \sim \Exp(i)$ are independent exponential r.v.s, we may generate the $W\os k$ as \begin{align} W\os k &= \sum_{i=1}^{k} Z_{n-i} .
\label{ordersumexp} \end{align} Using a superscripted $E$ to highlight the exponential model, $W\os k$ has mean \begin{align} \label{muX} \mu_k = \mu_k^{(E)} \coloneqq \E W\os k &= \sum_{i=1}^k \frac{1}{n-i} = H(n-1)-H(n-k-1) \asymp \ln(n)-\ln(n-k) ; \end{align} the change by 1 in the logarithms' arguments avoids $\ln(0)$ when $k=n-1$ and remains asymptotically correct. By~\cref{exp.3}, \begin{align} \Prob( \abs{ W\os k - \mu_k} \geq \varepsilon \mu_k) & \leq 2 \exp \parens{-\Omega((n-k) \mu_k)} .\label{eq:ExpSum} \end{align} By the union bound, it suffices to show that the sum over $k$ from $a$ to $n-1$ of the RHS of~\cref{eq:ExpSum} is $o(1)$. We treat the sum in two ranges. For $k \leq {\frac n2}$, $(n-k)\mu_k \geq \frac n 2 \cdot \frac k n = \frac k 2$. Thus, \begin{align} \sum_{k=a}^{{\lfrac n2}} \exp \parens{-\Omega((n-k) \mu_k)} \; \leq \; \sum_{k=a}^{{\lfrac n2}} \exp \parens{- \Omega(k)} \; \leq \; O(a e^{-\Omega(a)}) \to 0, \end{align} since $a = \omega(1)$. For $k > \frac n 2$, for brevity let $\bar{k}=n-k$. Then $\mu_k \asymp \ln n-\ln(\bar{k})$ by~\cref{muX} and \begin{align} \notag \sum_{\bar{k}=1}^{n/2} \exp\parens{ -\Omega((n-k) \mu_k) } &= \sum_{\bar{k}=1}^{n/2} \exp \parens{ - \bar{k} \, \Omega(\ln n-\ln(\bar{k}))} = \sum_{\bar{k}=1}^{n/2} \parens{\frac{\bar{k}}{n}}^{\Omega(\bar{k})} \\ & \leq \parens{\frac1n}^{\Omega(1)} \sum_{\bar{k}=1}^{n/2} \bar{k}^{\Omega(1)} \parens{\frac{\bar{k}}{n}}^{\Omega(\bar{k}-1)} = n^{-\Omega(1)} = o(1) , \end{align} where the explicit inequality factors out the $\bar{k}=1$ term, from which, since $\bar{k}/n \leq 1/2$, the later terms decrease geometrically. This concludes the exponential case. \medskip \textbf{Uniform case}: Let $U_i \sim U(0,1)$ be i.i.d\xperiod uniform random variables and $W_i \sim \Exp(1)$ i.i.d\xperiod exponential random variables.
Because the exponential distribution has CDF $F(x) = 1-\exp(-x)$, we may couple the two sets of variables as $U_i = F(W_i)$ or equivalently $W_i = f(U_i)$ with $f(x) = F^{-1}(x) = -\ln(1-x)$. Because $f$ is increasing, $W\os k = f(U\os k)$. Now using superscript $U$ to distinguish the uniform model, the mean is well known to be \begin{align}\label{muU} \mu_k = \mu_k^{(U)} & \coloneqq \E U\os k = \frac kn . \end{align} We want to show that with high probability, for all $k$ in the range $a \leq k \leq n-1$, \[ (1-\varepsilon) \mu_k^{(U)} \leq U\os k \leq (1+\varepsilon) \mu_k^{(U)} , \] or equivalently, \[ f\parens{ (1-\varepsilon) \mu_k^{(U)}} \leq W\os k \leq f\parens{ (1+\varepsilon) \mu_k^{(U)}} . \] From the exponential case already proved, taking error bound $\varepsilon/2$, we know that w.h.p\xperiod, for all $k$, \[ (1-\varepsilon/2) \mu_k^{(E)} \leq W\os k \leq (1+\varepsilon/2) \mu_k^{(E)} , \] so it suffices to show that, for all $k$ (deterministically), \[ f\parens{ (1-\varepsilon) \mu_k^{(U)}} \leq (1-\varepsilon/2) \mu_k^{(E)} \quad \text{ and } \quad f\parens{ (1+\varepsilon) \mu_k^{(U)}} \geq (1+\varepsilon/2) \mu_k^{(E)} . \] This is so: using~\cref{muU},~\cref{muX}, and convexity of $f$ (with $f(0)=0$), \[ f\left((1-\varepsilon) \mu_k^{(U)} \right) = f((1-\varepsilon)k/n) \leq (1-\varepsilon) f(k/n) = (1-\varepsilon) \ln \left( \frac{n}{n-k} \right) \leq (1-\varepsilon/2) \mu_k^{(E)} ; \] \[ f\left((1+\varepsilon) \mu_k^{(U)} \right) = f((1+\varepsilon)k/n) \geq (1+\varepsilon) f(k/n) = (1+\varepsilon) \ln \left( \frac{n}{n-k} \right) \geq (1+\varepsilon/2) \mu_k^{(E)} . \] \end{proof} \section{Upper bound for large \tp{$k$}{k}, sketch}\label{largekUB} \subsection{Introduction} \label{largekIntro} To address larger values of $k$ we use a different construction, generating $s$--$t$\xspace paths of length 4.
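Schematically (anticipating the construction in \cref{structR} and the cost accounting in \cref{pathweights}), each such path will have the form
\[ s \;\longrightarrow\; V_s \;\longrightarrow\; M \;\longrightarrow\; V_t \;\longrightarrow\; t , \]
with the two outer edges among the cheapest edges incident on $s$ and $t$ respectively, and the two middle edges of cost $O(\eps_k)$ each.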
A straightforward extension of the previous argument to this construction would let us get up to $k=n-f(n)$ for an arbitrarily slowly growing function $f$, but not to $k=n-1$, because it requires $(k+1)/\varepsilon^2$ edges incident on each of $s$ and $t$ (thus requires that $(k+1)/\varepsilon^2 \leq n-1$). Getting all the way to $k=n-1$ requires a couple of additional ideas. Again, we will introduce an adversary with a cost budget that with high probability exceeds the cost of the $k$ cheapest paths. First, we observe that much of the adversary's cost budget must be spent on edges incident to $s$ and $t$, leaving less to delete other edges, thus allowing a smaller structure $R$ to be sufficiently robust. In particular, the $k$ cheapest paths from $s$ to $t$ must use edges incident on $s$ of total weight at least $\sum_{i=1}^k W\os i^s$ where \begin{align} \label{Wkv} W\os i^v \end{align} is the cost of the $i$th cheapest edge incident to $v$. (We may omit the superscript when it is either generic or clear from context.) One technical detail is that, where $R$ includes the $k+\rr0$ cheapest edges incident to $s$, we will control $W\os{\kk}-W\os k$ directly, using results on order statistics from \cref{sec:order-stat}, rather than through a high-probability upper bound on $W\os{\kk}$ and a high-probability lower bound on $W\os k$. Finally, it is no longer adequate to allow path costs to exceed their nominal values by an $\varepsilon=\Theta(1)$ factor, as such large excesses would swell the adversary's budget too quickly, so we more tightly control the excess cost of each path $P_k$ as a function of $k$ (and $n$, implicitly). The details later will be clearer if we sketch the argument now, with most details but without the calculations. We will argue for $k$ from $n^{4/10}$ to $n-1$.
(We must start with some $k=o(n^{1/2})$ since that is as far as the ``small $k$'' argument extended, and we need $k=\omega(n^{1/3})$ since below this the new construction's path costs would exceed the $2k/n$ target.) \subsection{Structure \tp{$R$}{R}} \label{structR} \cref{figR2} illustrates the robust structure $R=R^{(k)}$ after adversarial deletion of root edges, as discussed in \cref{robustness} below. The construction is based on parameters $\rr0 = \rr0(k)$ and $\varepsilon_k$ to be defined later. Start with $R$ consisting of just the vertices $s$ and $t$. Add to $R$ the $k+\rr0$ edges incident on $s$ of lowest cost, and let $V_s$ be the set of opposite endpoints of these edges. Do the same for $t$, generating vertex set $V_t$. Take $M \coloneqq V(G)\setminus \set{s,t}$ as a collection of ``middle vertices''. Note that $V_s$, $V_t$, and $M$ may well have vertices in common, but our analysis will use a subgraph of $R$ where the relevant subsets of these three sets are disjoint, and it is easier to understand the construction imagining them to be disjoint. Add to $R$ each edge $e$ in $M \times V_s$ and $M \times V_t$ that is ``heavy but not too heavy'', with cost $W(e) \in (\eps_k, 2\eps_k)$. This concludes the construction of the structure $R$. \begin{figure} \centering \vspace*{2cm} \includegraphics[width=0.6\textwidth]{Rlarge} \caption{The robust structure $R=R^{(k)}$ after adversarial deletion of $k$ edges on $s$, leaving $\rr0$ edges to some vertices $V'_s \subseteq V_s$, and likewise for $t$ and $V'_t$. The middle vertices are pruned to $M'=M\setminus (V_s'\cup V_t')$, and edges from $M'$ to $V_s'$ and $V_t'$ are in $R$ if they have weight between $\varepsilon_k$ and $2\varepsilon_k$. Here, edges from just one representative vertex $v \in M'$ are illustrated. 
}\label{figR2} \end{figure} \subsection{Path weights} \label{pathweights} It is immediate that every $s$--$t$\xspace path in $R$ has cost at most \begin{align} W^s_{(k+\rr0)} +2\eps_k+2\eps_k+W^t_{(k+\rr0)} . \label{path1} \end{align} We will show (in \cref{eq:Wkk0} for uniform and \cref{eq:expWkk0} for exponential) that, subject to the non-occurrence of certain unlikely failure events, \cref{path1} is at most \begin{align} W\os{\kp}^s+W\os{\kp}^t+7\eps_k . \label{path+} \end{align} We will show in \cref{robustness} that, after deletion of the first $k$ paths, there remains an $s$--$t$\xspace path in $R$ (again subject to non-occurrence of unlikely failure events), whereupon it follows that \begin{align} X_{k+1} & \leq W\os{\kp}^s+W\os{\kp}^t+7\eps_k . \label{XkWok} \end{align} \subsection{Adversary} \label{subadv} We define an adversary who is ``sufficiently strong'' to delete the first $k$ paths. For $k \leq n^{4/10}$, taking $\varepsilon=0.1$, \cref{ksucceeds} implies that w.p.\ \begin{align} 1-O(n^{-1.9}) -O(\exp(-\Omega(n^{4/10}))) &= 1-O(n^{-1.9}) = 1-o(1), \label{Pn0.4} \end{align} we have that \begin{align} X_k &\leq X_{n^{4/10}} \leq 3 n^{4/10} / n . \label{Xn0.4} \end{align} For $k>n^{4/10}$, further assume the absence of the failure events alluded to just above, so that \cref{XkWok} holds. Then, hiding a sum of the $\ln n/n$ terms of~\cref{Xkbounds} in the $o(\,)$ term below, \begin{align} \sum_{i=1}^{k} X_i &= \sum_{i=1}^{n^{4/10}} X_i + \sum_{i=n^{4/10}+1}^k X_i \notag \\ &\leq \frac{3{(n^{4/10})}^2}{n} + \sum_{i=n^{4/10}+1}^{k} X_i \notag \\ &\leq 3n^{-2/10} + \sum_{i=n^{4/10}+1}^{k} \parens{W\os i^s+W\os i^t+7\varepsilon_{i-1}} \notag \\ & =: U_k . \label{Ukdef} \end{align} Thus, the first $k$ paths' edges have total weight at most $U_k$. Furthermore, the first $k$ paths' edges incident on $s$ and $t$ are all distinct except, possibly, for the edge \ensuremath{\set{s,t}}\xspace. 
Therefore, not counting edge $s$--$t$\xspace at all, the cost of these ``incident'' edges is at least \begin{align} \label{Ik} I_k & \coloneqq \sum_{i=1}^{k-1} \parens{ W\os i^s+W\os i^t } . \end{align} (In proving \cref{expkbig} we will use a slightly different lower bound $I_k$ on the weight of the incident edges.) It follows that the first $k$ paths' ``middle edges'' (edges other than the incident edges) cost at most $U_k-I_k$. We will explicitly define a budget $B_k$ satisfying \begin{align}\label{budget} B_k & \geq U_k-I_k . \end{align} We will allow the adversary to delete any $k$ edges in $G$ incident on each of $s$ and $t$, possibly including the edge $s$--$t$\xspace (enough to let it delete the incident edges of the first $k$ paths), and to delete any other edges in $G$ of total cost at most $B_k$ (enough to let it delete the middle edges of the first $k$ paths). Thus, the adversary is sufficiently strong to delete the first $k$ paths. The adversary's allowable deletions in $G$ mean that also in $R$ it deletes at most $k$ edges incident on each of $s$ and $t$, and middle edges of total cost at most $B_k$. \subsection{Budgets \tp{$B_k$}{B\_k}} \label{budgets} The budgets $B_k$ will be defined explicitly in the details. For the model with uniformly distributed edge weights we will do so in two ranges of $k$, corresponding to \cref{kmedium,kbig}, and likewise in the model with exponentially distributed edge weights, corresponding to \cref{expkmedium,expkbig}. For \cref{kbig,expkbig} we will establish \cref{budget} directly. For \cref{kmedium,expkmedium} we will establish \cref{budget} by the following reasoning; we will only need to check \cref{Bksuff}, \cref{Bkbase}, and \cref{epsfit} below. We will show that the budgets satisfy \begin{align}\label{Bksuff} B_{k+1} &\geq B_k + 8\eps_k . \end{align} (Roughly speaking, given $B_k$ we will set $\varepsilon_k$ as small as possible while keeping $R^{(k)}$ robust to the adversary with budget $B_k$. 
Then, we will set $B_{k+1}$ as small as possible, namely by taking equality in \cref{Bksuff}. Behind the scenes, we derive $B_k$ by solving the differential-equation equivalent of \cref{Bksuff} satisfied with equality.) We will show that \cref{budget} is satisfied in the base case, by showing that \begin{align} \label{Bkbase} B_k &\geq U_k \quad \text{for $k=n^{4/10}$} . \end{align} Then, \cref{budget} is established for all $k$ by induction on $k$: \begin{align} U_{k+1}-I_{k+1} &= (U_{k+1}-U_k)-(I_{k+1}-I_k)+ (U_k-I_k) \notag \intertext{which by \cref{Ukdef}, \cref{Ik}, and the inductive hypothesis \cref{budget} is} & \leq (\Wo {k+1}^s+\Wo {k+1}^t+7\eps_k)-(W\os k^s+W\os k^t) + B_k \notag \\ & \leq B_k + 8\eps_k \eqnote{see below} \label{epsfit} \\ & \leq B_{k+1} \eqnote{by \cref{Bksuff}} \label{Bkp} . \end{align} To justify \cref{epsfit} it suffices to show that $W\os{\kp}-W\os k$ is at most $0.1 \varepsilon_k$, and we do so in \cref{epsfitpf} for the uniform case and in \cref{expepsfitpf} for the exponential case. In both cases, $\rr0=\omega(1)$, and $W\os{\kk}-W\os{\kp}=O(\varepsilon_k)$ (as used in going from \cref{path1} to \cref{path+}), making this conclusion unsurprising.\footnote{In proving \cref{kbig,expkbig} we will set $\rr0=1$, so this reasoning does not apply. Indeed, in \cref{expkbig} (the large-$k$ exponential case) \cref{epsfit} would be false --- $W\os{\kp}-W\os k$ can be much larger than $\varepsilon_k$ --- but (to reiterate) it is not needed there, as we establish \cref{budget} directly. } \subsection{Robustness of \tp{$R$}{R}} \label{robustness} We wish to make $R$ robust against the adversary, so that after the deletions just described, $R$ retains an $s$--$t$\xspace path w.h.p\xperiod, whereupon \cref{XkWok} holds and $X_{k+1}$ is small.
It will suffice to show that, \begin{align}\label{robustblurb} \parbox{0.8 \textwidth}{\emph{after deletion of $k$ edges incident on each of $s$ and $t$, an adversary would still have to delete middle edges of total cost more than $B_k$,}} \end{align} and thus it is powerless to do so. Obtaining this robustness requires choosing $\varepsilon_k$ sufficiently large in the construction. With reference to \cref{figR2}, on deletion of any $k$ edges on each of $s$ and $t$, the level-1 sets are in effect pruned to $V'_s$ and $V'_t$, each of cardinality $\rr0$. Should $V'_s$ and $V'_t$ have vertices in common, or if $t \in V'_s$ or $s \in V'_t$, then there is an $s$--$t$\xspace path. So, assume that $V'_s$ and $V'_t$ are disjoint and contain neither $s$ nor $t$. Consider only middle vertices $M' \subseteq M$ not appearing in $V'_s$ or $V'_t$, i.e., $M'=M \setminus (V'_s \cup V'_t)$. We will have $\rr0=o(n)$, so $\card{M'} = n-2-2\rr0 > 0.99n$. Note that edges in $M' \times V'_s$, $M' \times V'_t$, $\set s \times V'_s$, and $\set t \times V'_t$ are all distinct. Consider a choice of the $k$ deletions on $s$ and $t$ to be fixed in advance. (We will eventually take a union bound over all such choices.) The weights of edges in $M' \times V'_s$ and $M' \times V'_t$ have not even been observed yet, so each has (unconditioned) $U(0,1)$ distribution, all are independent (by distinctness of the edges), and thus each such edge is included in $R$ with probability $\eps_k$, independently. A vertex $v \in M'$ is connected to $V'_s$ by \begin{align} \label{Zvs} Z_v^s & \sim \Bi(\rr0,\eps_k) \end{align} edges, with mean \begin{align}\label{kmedlambda} \lambda \coloneqq \E Z_v^s = \rr0 \eps_k . \end{align} Define $Z_v^t$ symmetrically, and note that $Z_v^s$ and $Z_v^t$ are i.i.d\xperiod. Intuitively, if $\lambda$ is small, $Z_v^s$ is usually 0, is 1 with probability about $\lambda$, and rarely any larger value.
So, the probability that $v$ is connected to both $V'_s$ and $V'_t$ is about $\lambda^2$, in which case to destroy $s$--$t$\xspace paths through $v$ the adversary must delete an edge of cost at least $\varepsilon_k$. So, to delete all $s$--$t$\xspace paths, over the nearly $n$ vertices in $M'$ the adversary would have to delete edges of expected total weight at least \begin{align} \varepsilon_k \, n \, \lambda^2 \label{kmedtotalweight} . \end{align} We will choose $\varepsilon_k$ so that \begin{align} \label{epsk} \varepsilon_k \, n \, \lambda^2 > B_k , \end{align} which hopefully will ensure (see \cref{failprob}) that a path must remain (i.e., that $R$ is robust). Let us give a back-of-the-envelope calculation. In the uniform case we expect $W\os k$ to be about $k/n$, so letting $\rr0= \varepsilon_k n$ means that $W\os{\kk}-W\os k$ will be about $\varepsilon_k$, justifying \cref{path+}. Then \cref{kmedlambda} gives $\lambda=\eps_k^2 n$, so \cref{epsk} indicates that we need to take $\eps_k^5 n^3 > B_k$. As noted after \cref{Bksuff}, roughly speaking, we obtain $B_k$ and $\eps_k$ by solving this and \cref{Bksuff} with equality as a system of differential equations. \begin{remark} \label{failprob} This intuitive argument proves to be essentially sound, but to make it rigorous will take some work. Chiefly, $\Pr(Z_v^s > 0)$ is of course not exactly $\E Z_v^s = \lambda$ even when $\lambda$ is small, and we will also have to consider the case when $\lambda$ is large. Also, where the intuition is based on expectations, we must calculate the probability of the ``failure'' event that all paths can be deleted at a cost less than $B_k$. Finally, we must take the union bound of this failure event over all choices of root edges at $s$ and $t$ (but, as in the small-$k$ case, this turns out to change nothing). 
\end{remark} \section{Upper bound for large \tp{$k$}{k}, uniform model} \label{unifUB} In this section we fill in the details of the steps from \cref{largekUB} and show that they conclude the proof of the upper bound in \cref{Tmain}. Specifically, to control the \emph{path weights} (these emphasised keywords match section titles) we must show that \cref{path1} is at most \cref{path+}. For the \emph{adversary} we need only show \cref{budget}; as noted earlier, for large $k$ (\cref{kbig}) we will do this directly, while for medium $k$ (\cref{kmedium}) we will argue that the \emph{budgets} $B_k$ satisfy \cref{Bkbase} and \cref{epsfit}. And for \emph{robustness} we will prove that the probability of failure is small (i.e., it is unlikely that the adversary can destroy all $s$--$t$\xspace paths in $R^{(k)}$). \subsection{Claims, and implications for \cref{Tmain}}\label{uniclaims} We first state the two precise claims we make for large $k$, in two ranges. We use symbolic constants $C_B$, $C_\eps$, $C'_B$, and $C'_\varepsilon$ in the claims and the proofs, as it makes the calculations clearer. Whenever we encounter an inequality that the constants must satisfy, we will highlight with a parenthetical ``check'' that they do so. \begin{claim}\label{kmedium} For $k \in [n^{4/10}, n-14\sqrt n \,]$, let ${B_k} = \parens{C_B k n^{-3/5}+{(2n^{-1/5})}^{4/5}}^{5/4}$ and $\eps_k =C_\eps n^{-3/5} {{B_k}}^{1/5}$, with $C_B=32$ and $C_\eps=5$. Then, asymptotically almost surely, simultaneously for all $k$ in this range, \begin{equation}\label{eq:XkWk} X_{k+1} \leq W\os{\kp}^s + W\os{\kp}^t + 8 \eps_k. \end{equation} \end{claim} \noindent\textbf{Remark:} In proving \cref{kmedium} we will set \begin{align}\label{k0med} \rr0 & \coloneqq \varepsilon_k n . \end{align} From the definitions of $B_k$ and $\eps_k$ in \cref{kmedium}, both are increasing in $k$, and we will make frequent use of the following inequalities. 
For $n$ sufficiently large, \newline \noindent \begin{minipage}[t]{.5\textwidth} \begin{align} {B_k} &= \Theta \parens{ k^{5/4} n^{-3/4} + n^{-{1/5}}} \label{eq:Bk} \\ B_k &\leq B_n \leq 1.01 C_B^{5/4} n^{1/2} \label{eq:Bkupper}\\ B_k &\geq B_{n^{4/10}} \geq 2n^{-{1/5}} \label{eq:Bklower} \end{align} \end{minipage}% \begin{minipage}[t]{.5\textwidth} \begin{align} \eps_k &= \Theta \parens{ k^{1/4} n^{-3/4}+ n^{-{16/25}} } \label{eq:ek} \\ \eps_k &\leq \varepsilon_n \leq 1.01 C_\eps C_B^{1/4} n^{-1/2} \label{eq:ekupper} \\ \eps_k &\geq \varepsilon_{n^{4/10}} \geq 1.14 C_\eps n^{-{16/25}} \label{eq:eklower} . \end{align} \end{minipage}% \begin{claim}\label{kbig} For $k \in ( n-14\sqrt n, n-2]$, let \begin{align} B_k&=C_B' \sqrt n \quad \text{and} \quad \eps_k = C'_\varepsilon n^{-1/6} \label{kbigparams} \end{align} with $C'_B=78$ and $C'_\varepsilon=5$. Then, asymptotically almost surely, simultaneously for all $k$ in this range, \[ X_{k+1} \leq W\os{\kp}^s + W\os{\kp}^t + 8 \eps_k. \] \end{claim} \noindent\textbf{Remark:} In proving \cref{kbig} we will set $\rr0 \coloneqq 1$. Note that here $B_k$ and $\varepsilon_k$ are constants independent of $k$, but we retain the subscript for consistency with the notation of \cref{largekIntro}. We will prove the two claims shortly. \medskip \begin{proof}[Proof of the upper bound of \cref{Xkbounds} in \cref{Tmain}] Given $\varepsilon > 0$ from \cref{Tmain}, apply \cref{lemma:edge-orderstat} to the order statistics $W\os k^s$ and $W\os k^t$ with $\varepsilon$ in the lemma as our $\varepsilon/2$ and $a=n^{4/10}$. Then by \cref{kmedium} w.h.p\xperiod, simultaneously for all $k \in [n^{4/10}+1, n-14\sqrt n]$, \begin{equation}\label{eq:implies-main} X_\k \leq W\os k^s + W\os k^t + 8\varepsilon_{k-1} \leq (1+\varepsilon/2) 2k/n + 8\varepsilon_{k-1} \leq (1+\varepsilon) (2k/n + \ln n/n); \end{equation} the key point is that $\varepsilon_{k-1} \leq \eps_k = o(k/n)$, which follows from~\cref{eq:ek}.
Specifically, by \cref{eq:ek}, $\eps_k/(k/n) = O(k^{-3/4} n^{1/4}+k^{-1}n^{9/25})$, which by $k \geq n^{4/10}$ is $O(n^{-0.3}n^{0.25}+n^{-4/10}n^{0.36})=o(1)$. Likewise, by \cref{kbig}, inequality~\cref{eq:implies-main} holds w.h.p\xperiod\ simultaneously for all $k \in [n-14\sqrt n, n-2]$. Again, we need only show that $\eps_k=o(k/n)$, which holds because here $k/n=\Theta(1)$ while by definition $\eps_k = o(1)$. \end{proof} We now prove the two claims, by filling in the details for \cref{structR,robustness}. \subsection{Structure \tp{$R$}{R}} With reference to \cref{structR}, all that we need to confirm is that $k+\rr0 \leq n-1$. For \cref{kmedium}, by hypothesis $k \leq n-14 \sqrt n$, and provided that $1.01 C_\eps C_B^{1/4} \leq 13$ (check), by~\cref{eq:ekupper} $\eps_k \leq 13 n^{-1/2}$, whereupon $\rr0 = \eps_k n \leq 13 \sqrt n$. For \cref{kbig}, with $\rr0=1$, $k+\rr0 \leq n-1$ is immediate. \subsection{Path weights} \label{pathweightsdetail} With reference to \cref{pathweights}, we establish that the bound \cref{path+} holds w.h.p\xperiod\ simultaneously for all $k \geq n^{4/10}$. With $W\os k$ representing the cost of the $k$th cheapest edge incident on some fixed vertex (which we will take to be $s$ and then $t$ in turn), it suffices to show that \begin{align}\label{eq:Wkk0} W\os{\kk} &\leq W\os{\kp} + 1.1\eps_k \end{align} holds with high probability for all $k \geq n^{4/10}$. For \cref{kbig}, with $\rr0=1$, \cref{eq:Wkk0} is immediate. For \cref{kmedium}, with $\rr0=\varepsilon_k n$, generate the variables $W\os k$ by placing $n-1$ points uniformly at random on the unit interval $I$, associating $W\os k$ with the $k$th smallest point. It suffices to show that, w.h.p\xperiod, each interval $(W\os{\kp}, W\os{\kp}+1.1\eps_k)$ contains at least $\rr0$ points.
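This uniform-points picture is easy to sanity-check numerically. The sketch below is illustration only, not part of the proof; it uses an illustrative $\eps = n^{-0.3}$, larger than the $\eps_k$ of \cref{kmedium}, so that a modest $n$ suffices for the concentration to be visible. It places $n-1$ uniform points on $[0,1]$ and verifies that every interval $(W\os{\kp}, W\os{\kp}+1.1\eps]$ not protruding past $1$ contains at least $\eps n$ points.

```python
import bisect
import random

n = 100_000
eps = n ** -0.3          # illustrative only: larger than the eps_k of the claim
r0 = int(eps * n)        # analogue of r0 = eps_k * n
rng = random.Random(0)

# Order statistics W_(1) <= ... <= W_(n-1) of n-1 uniform points on [0,1].
pts = sorted(rng.random() for _ in range(n - 1))

# For each k, count the points lying in (W_(k+1), W_(k+1) + 1.1*eps],
# skipping intervals that protrude past the right end of [0,1].
worst = min(
    bisect.bisect_right(pts, pts[k] + 1.1 * eps) - (k + 1)
    for k in range(n - 1)
    if pts[k] + 1.1 * eps <= 1.0
)
print(worst, r0)   # typically worst is a bit above r0
```

Each interval has expected occupancy about $1.1\eps n$, so the minimum over all (heavily overlapping) intervals staying above $\eps n$ is exactly the event the formal argument establishes via \cref{intervals}.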
For all $k \in [n^{4/10}, n-14\sqrt n-1\,]$, \cref{kmedium} has $\eps_k \geq n^{-0.99}$ by~\cref{eq:eklower}, so \cref{intervals} shows that w.p.\ $1-\exp(-\Omega(n^{0.01}))$, every interval of length $\geq 1.1\eps_k$ in $[0,1]$ contains at least $\rr0 \eqdef \eps_k n$ points, and in particular this holds for all the intervals $(W\os{\kp}, W\os{\kp}+1.1\eps_k)$. We assume henceforth that the graph $G$ is ``good'' in the sense that \cref{eq:Wkk0} holds for all $k \geq n^{4/10}$ for vertices $s$ and $t$, and that for all $k \leq n^{4/10}$ we have the upper bounds on $X_k$ from~\cref{Xkbounds}, as proved to hold w.h.p.\ in \cref{sec:k-small}. \subsection{Adversary} With reference to \cref{subadv}, we need only verify \cref{budget}, and this will be done in the next subsection. \subsection{Budgets \tp{$B_k$}{B\_k}} With reference to \cref{budgets}, we first establish \cref{epsfit}. This follows from \begin{align}\label{epsfitpf} \Wo {k+1}^s-W\os k^s & \leq 0.1 \eps_k . \end{align} The reasoning for this is the same as for \cref{eq:Wkk0}: each interval of length $0.1 \eps_k$ contains at least one point. The parameters are trivial to check. Next, we show that the parameters of \cref{kmedium} satisfy \cref{budget}, for which as argued in \cref{budgets} it suffices to show that they satisfy \cref{Bksuff} and \cref{Bkbase}. We start with \cref{Bkbase}, the base case. Here $k=n^{4/10}$, $B_k \geq 3n^{-2/10}$ from~\cref{eq:Bklower}, and $U_k = 3n^{-2/10}$ from \cref{Ukdef}, establishing \cref{Bkbase}. To establish \cref{Bksuff}, first note that $\frac{\partial}{\partial k} {B_k} = \tfrac 54 C_B {{B_k}}^{1/5}n^{-3/5}$ is an increasing function. Then, \[ B_{k+1}-{B_k} \geq \frac{\partial}{\partial k} {B_k} = \frac 54 C_B {{B_k}}^{1/5}n^{-3/5} = \frac 54 \frac {C_B}{C_\eps} \eps_k \geq 8\eps_k, \] since $C_B \geq \tfrac{8 \cdot 4}{5} C_\eps$ (check). We now establish \cref{budget} for the parameters of \cref{kbig}. 
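(As an aside: every parenthetical ``(check)'' in this section and in the robustness arguments below is a concrete numerical inequality in the symbolic constants $C_B=32$, $C_\eps=5$, $C'_B=78$, $C'_\varepsilon=5$. A minimal script verifying all of them at once:)

```python
# Constants from Claims kmedium and kbig: C_B = 32, C_eps = 5, C'_B = 78, C'_eps = 5.
C_B, C_eps = 32, 5
C_Bp, C_epsp = 78, 5

checks = {
    "structure R:              1.01*C_eps*C_B^(1/4) <= 13": 1.01 * C_eps * C_B ** 0.25 <= 13,
    "budget recursion:         C_B >= (8*4/5)*C_eps":       C_B >= (8 * 4 / 5) * C_eps,
    "budgets, large k:         C'_B >= 78":                 C_Bp >= 78,
    "robustness, big lambda:   0.15*C_eps^3/sqrt(C_B) > 1": 0.15 * C_eps ** 3 * C_B ** -0.5 > 1,
    "robustness, small lambda: 0.17*C_eps^5 > 1":           0.17 * C_eps ** 5 > 1,
    "robustness, large k:      0.98*C'_eps^3 > C'_B":       0.98 * C_epsp ** 3 > C_Bp,
}
for name, ok in checks.items():
    print(name, ok)
```

All six print `True`; for instance $1.01 \cdot 5 \cdot 32^{1/4} \approx 12.01 \leq 13$ and $0.15 \cdot 5^3/\sqrt{32} \approx 3.31 > 1$.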
With ${k^\star}= \floor{n-14\sqrt n}$, the point where \cref{kmedium} ends and just before \cref{kbig} begins, the previous case showed that $B_{k^\star} \geq U_{k^\star} - I_{k^\star}$, and by~\cref{eq:Bkupper} $B_{k^\star} \leq 77 \sqrt n$. Then, for $k$ from ${k^\star}+1$ to $n-2$, \begin{align} U_k - I_k &= (U_{k^\star} - I_{k^\star}) + [(U_k-U_{k^\star}) - (I_k - I_{k^\star})] \notag \\&\leq B_{k^\star} + \left[ \sum_{i={k^\star}+1}^k (W\os i^s + W\os i^t + 7 \varepsilon_{i-1}) - \sum_{i={k^\star}}^{k-1} (W\os i^s + W\os i^t)\right] \eqnote{see \cref{Ukdef} and \cref{Ik}} \notag \\&\leq B_{k^\star} + \sum_{i={k^\star}+1}^{n-2} 7 \varepsilon_{i-1} + (W\os k^s + W\os k^t - \Wo{k^\star}^s - \Wo{k^\star}^t) \notag \\&\leq 77 \sqrt n + (14\,\sqrt n) \cdot 7 C'_\varepsilon n^{-1/6} + 2 \eqnote{see \cref{kbigparams}} \notag \\& \leq 78 \sqrt n \notag \\& \leq B_k \eqnote{see \cref{kbigparams}}, \label{budgetsClaim2} \end{align} using that $C_B' \geq 78$ (check). \subsection{Minimum of two binomial variables} Before addressing robustness of the structure $R$, we require a lemma (\cref{lemma:BinMin}) on the minimum $Z$ of two i.i.d\xperiod binomial $\Bi(n,p)$ random variables. There is a genuine difference in the cases when the common mean $\lambda=np$ is large or small: if $\lambda$ is large then $Z$ is likely to be close to $\lambda$, making $\E Z = \Theta(\lambda)$; if $\lambda$ is small then $Z$ will most often be 0, occasionally 1 (with probability about $\lambda^2$), and rarely anything larger, making $\E Z = \Theta(\lambda^2)$. The lemma relies on the following property of the median of a binomial random variable. (A weaker form of~\cref{Lbigla} and thus of \cref{lemma:BinMin} can be obtained from \cref{lemma:BinDev} in lieu of using the median.) \begin{theorem}[Hamza~{\cite[Theorem 2]{Hamza1999}}] A binomial random variable $X$ has median satisfying $|\operatorname{Med}(X)-\E X| \leq \ln 2$. 
\end{theorem} \noindent In this discrete setting $\operatorname{Med}(X)$ is not unique: it can be any value $m$ for which $\Pr(X \leq m) \geq 1/2$ and $\Pr(X \geq m) \geq 1/2$. \cite{Hamza1999} defines it uniquely as the smallest integer $m$ such that $\Pr(X \leq m) >1/2$; as desired, this gives $\Pr(X \geq \operatorname{Med}(X)) = 1-\Pr(X \leq \operatorname{Med}(X)-1) \geq 1-1/2=1/2$. (For other results on the binomial median see Kaas and Buhrman~\cite{Kaas1980}, in particular, Corollary~1. Stronger results for the Poisson distribution are given by Choi~\cite{Choi1994}, proving a conjecture of Chen and Rubin, and by Adell and Jodr\'a~\cite{Adell2005}.) \begin{lemma}\label{lemma:BinMin} Let $Z_1, Z_2$ be i.i.d\xperiod $\Bi(n, p)$ random variables, $Z \coloneqq \min(Z_1, Z_2)$ and $\lambda \coloneqq \E Z_1 = np$. \begin{enumerate}[(1)] \item If $\lambda \geq 2$, then \begin{align} \Prob(Z \geq 0.65 \lambda) &> 1/4. \label{Lbigla} \end{align} \item If $\lambda \leq 2$, then \begin{align} \Prob(Z \geq 1) &> 0.18 \lambda^2 . \label{Lsmallla} \end{align} \end{enumerate} \end{lemma} \begin{proof} In the first case, \[ \operatorname{Med}(Z_1) \geq \lambda-\ln 2 = \frac{\lambda-\ln 2}\lambda \lambda \geq \frac{2-\ln 2}2 \lambda \geq 0.65 \lambda , \] so $\Prob \parens{Z_1 \geq 0.65 \lambda} \geq \Prob \parens{Z_1 \geq \operatorname{Med}(Z_1)} \geq 1/2$. The same holds of course for $Z_2$, and the result follows by independence. In the second case we again use independence, and here \begin{equation*} \Prob \parens{Z_1 \geq 1} = 1-{(1-p)}^n \geq 1-\exp(-\lambda) = \frac{1-\exp(-\lambda)}{\lambda} \cdot \lambda \geq 0.43\lambda . \end{equation*} The last inequality comes from minimising $\frac{1-\exp(-x)}{x}$ over $0 \leq x \leq 2$; the function is decreasing so the minimum is at $x=2$. 
\end{proof} \subsection{Robustness in \cref{kmedium}} \label{kmedrobust} With reference to \cref{robustness}, let us complete the robustness argument for \cref{kmedium}, showing that \cref{robustblurb} holds with high probability. Here we have taken $\rr0 = \varepsilon_k n$, so that the number of edges from a middle vertex to $V'_s$ (see \cref{Zvs}) is $Z^s_v \sim \Bi(\varepsilon_k n, \varepsilon_k)$, with mean $\lambda=\rr0 \varepsilon_k = \varepsilon_k^2 n$ (see \cref{kmedlambda}). Recall that if $\lambda$ is small we expect (see \cref{kmedtotalweight}) that to destroy all paths the adversary will have to delete edges of total weight at least $\varepsilon_k \, n \, \lambda^2 = \eps_k^5 n^3$, which will exceed $B_k$. And, if $\lambda$ is large, then each $Z_v$ will have expectation close to $\lambda=\eps_k^2 n$, for a total cost $\eps_k n$ times larger, namely $\eps_k^3 n^2$, and again this exceeds $B_k$. We now replace these rough calculations with detailed probabilistic ones, applying \cref{lemma:BinMin} to $Z_v$ in the two cases of $\lambda$ small and large. For the adversary to delete all $s$--$t$\xspace paths via $v$, he must delete at least \[ Z_v\coloneqq \min(Z_v^s, Z_v^t) \] edges, and to destroy all paths he must delete at least \[ N\coloneqq \sum_{v \in M'} Z_v \] edges. As described in \cref{robustness}, we imagine a fixed deletion of $k$ edges on each of $s$ and $t$ giving neighbour sets $V'_s$ and $V'_t$ and a set $M'$ of middle vertices; we will eventually take a union bound over all such choices. \medskip \noindent \tmbf{If $\lambda \geq 2$}, then by \cref{lemma:BinMin}, for each $v \in M'$, $\Pr(Z_v \geq 0.65 \lambda) > 1/4$. Thus, $N$ stochastically dominates $0.65\lambda \cdot \Bi(0.99n, 1/4)$, with expectation $> 0.1608 \lambda n$. We shall consider it a \emph{failure} if $N \leq 0.16 \lambda n$.
Assuming success, since each edge costs at least $\eps_k$ to delete, it costs at least $0.16 \eps_k \lambda n = 0.16 \eps_k^3 n^2$ to delete them all. This exceeds $B_k$: \begin{align*} \frac{0.16 \cdot \eps_k^3 n^2}{B_k} &= 0.16 \cdot C_\eps^3 n^{-9/5} B_k^{-2/5} n^2 \eqnote{by definition of $\varepsilon_k$} \\& \geq 0.15 \cdot C_\eps^3 C_B^{-1/2} n^{1/5} n^{-1/5} \eqnote{by~\cref{eq:Bkupper}} \\ &> 1, \end{align*} using that $0.15 \cdot C_\eps^3 C_B^{-1/2}>1$ (check). Failure, $N \leq 0.16 \lambda n$, requires the dominated variable $\Bi(0.99n, 1/4)$ to be at most $(0.16/0.65)n$. Noting that $0.99 \cdot 1/4 > 0.16/0.65$, by \cref{lemma:BinDev}, the probability of failure is $\exp(-\Omega(n))$. By the union bound, the total of the failure probabilities, over all rounds (values of $k$) and all adversary choices of the $k$ root edges at $s$ and $t$, is small: \begin{align} \label{case1failure} \sum_k {{\binom{k+\rr0}{\rr0}}^2} & \cdot \exp(-\Omega(n)) \\& \leq \sum_k \, {(n^{\rr0})}^2 \exp(-\Omega(n)) \notag \\ &= \sum_k \exp\parens{2\eps_k n \ln n-\Omega(n)} \quad\text{(by $\rr0=\eps_k n$)} \notag \\&\leq n \exp(-\Omega(n)) \quad\text{(using $\eps_k n = O(n^{1/2})$ from~\cref{eq:ekupper})} \notag \\& = o(1) . \notag \end{align} \medskip \noindent \tmbf{If $\lambda < 2$}, then by \cref{lemma:BinMin} $N$ stochastically dominates $\Bi(0.99n, 0.18\lambda^2)$, with expectation $> 0.175 \lambda^2 n$. We shall consider it a \emph{failure} if $N \leq 0.17 \lambda^2 n = 0.17 \eps_k^4 n^3$. Each edge costs at least $\eps_k$ to delete. Assuming success, it thus costs at least $0.17 \eps_k^5 n^3$ to delete them all, which exceeds $B_k$: \begin{align*} \frac{0.17 \eps_k^5 n^3}{B_k} &= 0.17 C_\eps^5 \eqnote{by definition of $\varepsilon_k$} \\ &> 1, \end{align*} using that $0.17 C_\eps^5>1$ (check). By \cref{lemma:BinDev}, the probability of failure is \begin{equation}\label{eq:N2} \Prob \parens{ N \leq 0.17 \eps_k^4 n^3} = \exp(-\Omega(\eps_k^4 n^3)).
\end{equation} Over all rounds (values of $k$) and adversary choices of edges incident to $s$ and $t$, the total failure probability is at most \begin{align} \sum_k {\binom{k+\rr0}{\rr0}}^2 & \cdot \Prob \parens{N < 0.17 \eps_k^4 n^3} \notag \\ &\leq \sum_k \exp\parens{2\eps_k n \ln n- \Omega(\eps_k^4 n^3)} \notag \\&\leq n \exp(-\Omega(\eps_k^4 n^3)) , \notag \intertext{because $\eps_k n \ln n$ is dominated by $\eps_k^4 n^3$: the latter is larger by a factor $\eps_k^3 n^2/\ln n$, which by~\cref{eq:eklower} is $\Omega(n^{-48/25}n^2/\ln n)=\Omega(n^{2/25}/\ln n)=\omega(1)$. Continuing, this is} &\leq n \exp(-\Omega(n^{11/25})) \eqnote{invoking~\cref{eq:eklower} again} \notag \\& = o(1) . \label{case2failure} \end{align} \bigskip \subsection{Robustness in \cref{kbig}} \label{kbigrobust} Again, our aim is to establish robustness of $R$ by showing that \cref{robustblurb} holds with high probability, and the argument is similar to but simpler than that of \cref{kmedrobust}. Since $\rr0=1$, both $V'_s$ and $V'_t$ have size 1. For a vertex $v \in M'$, let $Z_v$ be the number of paths from $V'_s$ to $V'_t$ via $v$. There is only one such possible path, hence \[ Z_v \sim \Bern \parens{\eps_k^2}. \] To destroy all $s$--$t$\xspace paths the adversary must delete at least \[ N\coloneqq \sum_{v \in M'} Z_v \] edges. $N$ stochastically dominates $\Bi(0.99n, \eps_k^2)$, which has expectation $0.99 \eps_k^2 n$. We declare the event $N \leq 0.98 \eps_k^2 n$ a \emph{failure}. Assuming success, destroying all $s$--$t$\xspace paths would cost at least $\eps_k N \geq 0.98 \eps_k^3 n$. This exceeds $B_k$, since \begin{equation*} \frac{0.98 \eps_k^3 n}{B_k} = \frac{0.98 {C'_\varepsilon}^3}{C'_B}, \end{equation*} and $0.98 {C'_\varepsilon}^3 > C'_B$ (check). The probability of failure is \begin{equation}\label{eq:N3} \Prob \parens{ N \leq 0.98 \eps_k^2 n} = \exp(-\Omega(\eps_k^2 n)) = \exp(-\Omega(n^{2/3})).
\end{equation} Over all rounds and adversary choices, using that $\binom{k+\rr0}{\rr0} = \binom{k+1}1 \leq n$, the total failure probability is at most \begin{align} \sum_k {\binom{k+\rr0}{\rr0}}^2 & \cdot \Prob(N \leq 0.98 \eps_k^2 n) \notag \\ &\leq (14\sqrt n) \, n^2 \, \exp (-\Omega(n^{2/3})) \eqnote{by~\cref{eq:N3}} \label{eq:unifailure3} \\& = o(1) . \notag \end{align} \section{Lower bound}\label{lowerbound} In this section, we establish the lower bound in \cref{Xkbounds} of \cref{Tmain}. \cref{LBsmallk} establishes the lower bound on $X_k$ directly for $k \leq \sqrt{\ln n}$. Values $k \geq \sqrt{\ln n}$ are treated in the subsequent parts. In \cref{LBrunning}, \cref{lemma:S_k-lower} establishes a lower bound on the running totals $S_k$, \begin{align}\label{Skdef} S_k \coloneqq \sum_{i=1}^{k} X_i . \end{align} In \cref{LBlargek}, \cref{lemma:X_k-lower} obtains a lower bound on $X_k$ using \cref{lemma:S_k-lower}'s lower bound on $S_k$, the previously established upper bound on $X_k$ from \cref{Tmain}, and the monotonicity of $X_k$. \subsection{Lower bound for small \tp{$k$}{k}} \label{LBsmallk} We begin with $k \leq \sqrt{\ln n}$. For any fixed $\varepsilon>0$, we know from~\cite{Janson123} that w.h.p\xperiod\ \begin{align}\label{X1lower} X_1 &> (1-\varepsilon/2)\frac{\ln n}{n} . \end{align} Assuming that~\cref{X1lower} holds, it follows immediately, and deterministically, that for all $k \leq \sqrt{\ln n}$, \begin{align}\label{Xklower1} X_k &\geq X_1 \geq (1-\varepsilon/2)\frac{\ln n}{n} \geq (1-\varepsilon)\frac{2k+\ln n}{n} . \end{align} The first inequality holds because the sequence $X_k$ is monotone increasing, the next by assumption on $X_1$, and the last because $2k \leq 2\sqrt{\ln n} = o(\ln n)$.
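The $X_1 \approx \ln n/n$ behaviour from~\cite{Janson123} underlying \cref{X1lower} is easy to observe empirically. The following sketch (illustration only, not used in the proof) runs Dijkstra between two fixed vertices of $K_n$ with i.i.d.\ $U(0,1)$ edge weights and compares the result to $\ln n/n$:

```python
import heapq
import math
import random

def x1(n, rng):
    """Cost of the cheapest s--t path in K_n with i.i.d. U(0,1) edge weights
    (Dijkstra from vertex 0 to vertex n-1)."""
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = rng.random()
    dist = [math.inf] * n
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == n - 1:
            return d
        if d > dist[u]:
            continue
        for v in range(n):
            if v != u and d + w[u][v] < dist[v]:
                dist[v] = d + w[u][v]
                heapq.heappush(heap, (dist[v], v))
    return dist[n - 1]

n = 300
rng = random.Random(1)
avg = sum(x1(n, rng) for _ in range(5)) / 5
print(avg, math.log(n) / n)   # both of order 0.02 for n = 300
```

Already at $n=300$ the averaged estimate sits within a small constant factor of $\ln n/n$, consistent with the concentration used above.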
\subsection{Lower bound on the running totals} \label{LBrunning} \begin{lemma}\label{lemma:S_k-lower} For any $\varepsilon>0$, w.h.p\xperiod, simultaneously for every $k \leq n-1$, \begin{align}\label{Sklower} S_k &\geq (1-\eps)\sum_{i=1}^{k}\left(\frac{2i+\ln n}{n}\right). \end{align} \end{lemma} \begin{proof} Write $W\os i^s$ and $W\os i^t$ for the order statistics of edge weights out of $s$ and $t$, respectively. By \cref{lemma:edge-orderstat}, w.h.p\xperiod, \begin{equation}\label{eq:edge-orderstat} W\os k^s, W\os k^t \in \left[\left(1-\varepsilon/2 \right) \frac k n, \left(1+\varepsilon/2 \right) \frac k n \right] \quad \text{ for all } k \geq \sqrt{\ln n} , \end{equation} and we will assume throughout the proof that~\cref{eq:edge-orderstat} holds. We prove the assertion in two ranges of $k$. \medskip\noindent\tmbf{For $\ln^{11/10} n \leq k \leq n-1$,} the $k$ paths must use at least $k-1$ edges on each of $s$ and $t$, all distinct ($k$ edges each, ignoring the edge $\set{s,t}$ if it is used). Then, using~\cref{eq:edge-orderstat}, we get that w.h.p\xperiod, for all $k$ in the range, \begin{align} S_k &\geq \sum_{i=1}^{k-1} \left( W\os i^s + W\os i^t \right) \geq \sum_{i=\sqrt{\ln n}}^{k-1} (1-\varepsilon /2) \frac{2 i}n \notag \\& = (1-\varepsilon/2) \parens{ \sum_{i=1}^{k} {\frac{2i+\ln n}n} - \sum_{i=1}^{k} \frac{\ln n}n - \sum_{i=1}^{\sqrt{\ln n}-1} {\frac{2i}n} - {\frac{2k}n} } \label{SbigK1} \\& \geq (1-o(1)) (1-\varepsilon/2) \sum_{i=1}^{k} {\frac{2i+\ln n}n} \label{SbigK2} \eqnote{see below} \\&\geq (1-\eps)\sum_{i=1}^{k}\left(\frac{2i+\ln n}{n}\right) \label{SbigK} . \end{align} To justify \cref{SbigK2} it suffices to show that the first sum in \cref{SbigK1} is of strictly larger order than the other terms.
The first sum is at least $\sum_{i=k/2}^{k} 2i/n = \Omega(k^2/n)$, which since $k \geq \ln^{11/10}n$ is also $\Omega(k \ln^{11/10}n/n)$ and $\Omega(\ln^{22/10}n/n)$; we will use all three formulations. The second term is of order $O(k \ln n/n)$, negligible compared with the middle formulation. The third term is $O(\ln n/n)$, negligible compared with the last formulation. And the fourth term, of order $O(k/n)$, is negligible compared with the first formulation. \medskip\noindent\tmbf{For $1 \leq k \leq \ln^{11/10} n$,} let $\delta=\varepsilon/3$ and let $G'=G-s-t$. Let $N_s$ and $N_t$ be the endpoints of the cheapest $\ln^3 n$ edges out of $s$ and $t$ respectively. Note that these sets are independent of the edge weights of $G'$. If any path $P_i$, $i \leq k$, uses a root edge (edge incident on $s$ or $t$) \emph{not} among the $\ln^3 n$ cheapest edges of $s$ or $t$, then by~\cref{eq:edge-orderstat} this edge costs at least $(1-\varepsilon) \ln^3 n / n$, thus $S_k \geq (1-\varepsilon) \ln^3 n / n$. Then \cref{Sklower} follows because this is larger than the RHS of~\cref{Sklower}, namely $\Theta((k^2+k\ln n)/n) = O(\ln^{11/5} n/n)$ for this range of $k$. Thus we may assume that for all $i \leq k$, each path $P_i$ goes via some $s' \in N_s, \; t' \in N_t$. For $s' \in N_s, \; t' \in N_t$, define $A(s',t')$ to be the event that $t'$ is one of the ${(n-2)}^{1-\delta}$ nearest vertices of $s'$, by cost, in $G'$. Clearly, for each pair $s', t'$, $\Pr(A(s',t')) = {(n-2)}^{-\delta}$. Let $A$ be the union of these events, i.e., the event that any such pair has this property. By the union bound, \[ \Prob(A) \leq {\left(\ln^3 n \right)}^2 {(n-2)}^{-\delta} = o(1) . \] We assume henceforth that $A$ does not hold: the $\ln^3n$ cheapest root edges at $s$ and $t$ do not happen to sample any ``nearest'' pairs in $G'$.
By assumption that $A$ does not hold, in the \emph{exponential} model (where each edge is i.i.d.\ $\Exp(1)$) for $G'$, for each $s' \in N_s, \; t' \in N_t$, the distance $d(s',t')$ stochastically dominates $Y \sim \sum_{i=1}^{n^{1-\delta}} \Exp(i(n-2-i))$ by \cref{treeX}. We have $\E Y = (1+o(1))(1-\delta) \ln n / n$ by~\cref{treeDia} (just adjusting its last equation where the value of $d$ is substituted in). Applying \cref{exptail}'s \cref{exp.3} with $\mu=\E Y$ as above, $a^\star=n-3$, and $\lambda=1-\delta$, we find that in the exponential model $G'$, \begin{align*} \Prob & \left(d_{G'}(s',t') \leq (1-\delta) \frac{(1+o(1))(1-\delta) \ln n}{n} \right) \; \leq \; \exp\parens{ -\Theta(n \cdot \ln n/n \cdot \delta^2) } \; = \; n^{-\Theta(1)}. \end{align*} Since ${(1+o(1))(1-\delta)}^2 \geq (1-\tfrac34 \varepsilon)$, by the union bound this implies, still in the exponential model $G'$, \begin{equation}\label{eq:not-too-close} \Prob\Big( \exists s' \in N_s, t' \in N_t \colon d_{G'}(s',t') \leq (1-\tfrac34 \varepsilon) \ln n / n \Big) \; \leq \; {\left(\ln^3 n \right)}^2 n^{-\Theta(1)} \; = \; o(1) . \end{equation} By standard coupling arguments (see \cref{remark:blackbox}), this also implies that \cref{eq:not-too-close} holds in the \emph{uniform} model $G$ in which we are working. Thus w.h.p\xperiod, for all $s' \in N_s, t' \in N_t$, we have $d_{G'}(s',t') \geq (1-\tfrac34 \varepsilon) \ln n / n$; assume this holds. We already assumed that each path $P_i$, $i\leq k$, goes via some $s' \in N_s, t' \in N_t$, so its non-root edges contribute at least $d_{G'}(s',t') \geq (1-\tfrac34 \varepsilon) \ln n/n$ to $S_k$.
Then, for all $k$ in this range, \begin{align} S_k &\geq \sum_{i=1}^{k} (1-\tfrac34 \varepsilon) \frac{\ln n}n + \sum_{i=1}^{k-1} \left( W\os i^s + W\os i^t \right) \notag \\& \geq \sum_{i=1}^{k} (1-\tfrac34 \varepsilon) \frac{\ln n}n + (1-\tfrac12 \varepsilon) \sum_{i=\sqrt{\ln n}}^{k-1} \frac{2i}n \eqnote{by \cref{eq:edge-orderstat}} \label{lbx1} \\& \geq (1-\eps)\sum_{i=1}^{k}\left(\frac{2i+\ln n}{n}\right). \notag \end{align} To justify the final inequality, rewrite the second sum in \cref{lbx1} as $\sum_{i=1}^{k} \frac{2i}n - \frac{2k}n - \sum_{i=1}^{\sqrt{\ln n}-1} \frac{2i}n$ and observe that both its second term, $2k/n$, and its final term, which is of order $O(\ln n/n)$, are negligible compared with the first sum in \cref{lbx1}, which is of order $\Omega(k \ln n/n)$. \end{proof} \subsection{Lower bound for large \tp{$k$}{k}} \label{LBlargek} \begin{lemma}\label{lemma:X_k-lower} For any $\varepsilon>0$, w.h.p\xperiod, simultaneously for every $k \in [\sqrt{\ln n}, n-1]$, \[ X_k \geq (1-\varepsilon)\parens{\frac{2k + \ln n}{n} }. \] \end{lemma} \begin{proof} Let $\delta=\varepsilon^2/9$ and define \begin{equation} c_k = \frac{2k+\ln n}{n}, \quad L_k = (1-\delta)\sum_{i=1}^k c_i, \quad U_k = (1+\delta) \sum_{i=1}^k c_i. \end{equation} W.h.p\xperiod, simultaneously for all $k$, $S_k \geq L_k$ (by \cref{lemma:S_k-lower}) and $S_k \leq U_k$ (by the upper bound of~\cref{Tmain}, already proved). Henceforth, assume that both hold, so $L_k \leq S_k \leq U_k$. The rest of the argument is deterministic. For any positive integer $t<k$, using that $X_k$ is monotone increasing, we have \begin{align} t X_k & \geq X_k +\cdots+ X_{k-t+1} \notag \\& = S_k - S_{k-t} \notag \\& \geq L_k - U_{k-t} \label{LBform} .
\end{align} Thus \begin{align*} X_k &\geq \frac1t \parens{L_k - U_{k-t}} = \frac1t \parens{ (1-\delta)\sum_{i=1}^k c_i - (1+\delta)\sum_{i=1}^{k-t} c_i } \\&\geq \frac1t \parens{ \sum_{i=k-t+1}^k c_i -2\delta \sum_{i=1}^{k} c_i } \geq \frac1t \parens{ t c_{k-t} - 2\delta k c_k } = c_{k-t} - \frac{2 \delta k c_k}{t} \\&= c_k- \frac{2t}{n} - \frac{2 \delta k c_k}{t} \\& \geq c_k- \frac{t c_k}{k} - \frac{2\delta k}{t} c_k \quad\text{(using that $c_k/k>2/n$)} \\&= c_k \parens{ 1- \frac{t}{k} - \frac{2\delta k}{t} } . \end{align*} Ignoring integrality for a moment, setting $t=k\sqrt{2\delta}$ would make the last expression $c_k (1- 2\sqrt{2\delta})$. Since this $t=\Theta(k)=\omega(1)$, rounding it can be seen to change the expression by a factor $1+o(1)$, so we may safely write \begin{align*} X_k &\geq c_k (1-3\sqrt{\delta}) = (1-\varepsilon) \frac{2k + \ln n}{n}. \end{align*} \end{proof} \section{Exponential model} \label{ExpBounds} In this section we prove \cref{Texp}, the analogue of \cref{Tmain} for exponentially distributed edge weights. For small $k$, results for the exponential case follow from those for the uniform. We first argue that the upper bound of \cref{Tmain} also holds in the exponential case for any $k=o(n)$. Couple the two models, so that any edge of weight $w=o(1)$ in one model has cost $w'=w(1+o(1))$ in the other. The uniform-model upper-bound constructions in \cref{sec:k-small} (for $k=o(n^{1/2})$) and \cref{largekUB,unifUB} (for larger $k$) only use edges of weight $o(1)$ (when $k=o(n)$), and therefore the same upper bounds hold for the exponential model; the multiplicative difference of $1+o(1)$ can be subsumed into the factor $1+\varepsilon$ already present. (In the construction of \cref{largekUB,unifUB}, the ``middle edges'' are of cost $o(1)$ for \emph{all} $k$, but the ``incident edges'' have larger cost for $k$ large.
In particular, for large $k$, \cref{eq:Wkk0} will no longer hold in the exponential case until we adjust $\rr0$ and $\varepsilon_k$ appropriately.) For the lower bound too, the argument in \cref{lowerbound} carries over for all $k=o(n)$. The lower bounds $L_k$ on the prefix sums $S_k$ derived in \cref{LBsmallk,LBrunning} carry over to the exponential case because the edge costs are equal to within $1+o(1)$ factors in the two models. The upper bounds $U_k$ on the prefix sums are simply the sums of the individual upper bounds on $X_k$, and we have just argued that these change only by a $1+o(1)$ factor. \cref{LBlargek} only uses $L_k$ and $U_k$ to derive lower bounds on $X_k$, so with these both changed only by $1+o(1)$ factors, its results carry over verbatim. Our task, then, is to prove the upper and lower bounds in \cref{Texp} for larger $k$. For the upper bound, arguing for $k> n^{0.4}$ (there is no advantage to a larger starting value), we use the same approach as for the uniform model in \cref{largekUB}. For the lower bound, we argue for $k \geq n^{9/10}$. Unfortunately, the method used in \cref{lowerbound} for the uniform distribution does not extend; let us explain why. The lower bound there came from \cref{LBform}, $t X_k \geq L_k - U_{k-t}$, valid for any functions $L$ and $U$ with $L_k \leq S_k \leq U_k$. Here, we would take $L_k$ as the sum $I_k$ of incident edges as in \cref{Ik} and $U_k$ as the sum of the $X_k$ upper bounds as in \cref{XkWok}. Recall that we defined $B_k$ so that $B_k \geq U_k-L_k$, as in \cref{budget}. Then we can rewrite the previous lower bound approach as $X_k \geq \frac1t (L_k-U_{k-t}) = \frac1t (L_k-L_{k-t}) + \frac1t (L_{k-t}-U_{k-t}) \geq \frac1t \sum_{i=k-t}^{k-1} W\os i - \frac1t B_{k-t}$. For large $k$, $W\os k$ and therefore $X_k$ are $\Theta(\ln n)$. Since the $B_k$ grow to size $\Theta(n^{1/2})$ (in the exponential case as well as the uniform case), the second term thus forces us to take $t = n^{1/2-o(1)}$ or larger.
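To make the obstruction concrete, here is a small numerical sketch (the parameters $n=10^6$, $\bar{k}=1000$, $t=\sqrt n$ are illustrative and not from the text): using the exact formula $\E W\os k = H_{n-1}-H_{n-1-k}$ for exponential order statistics, the average of $\E W\os i$ over a window of width $t$ below $k$ falls several percent short of $\E W\os k$ itself.

```python
# Illustrative sketch (parameters chosen for demonstration, not from the text):
# for n-1 i.i.d. Exp(1) weights, E[W_(k)] = H_{n-1} - H_{n-1-k}.  With
# kbar = n - k small, averaging E[W_(i)] over a window of width t = sqrt(n)
# below k falls noticeably short of E[W_(k)] itself, so a window that wide
# cannot recover the right lower bound on X_k.
n = 10**6
k = n - 1000            # kbar = 1000, so E[W_(k)] = Theta(ln n)
t = int(n ** 0.5)       # window width forced by B_k = Theta(sqrt(n))

# Precompute harmonic numbers H[0..n-1].
H = [0.0] * n
for j in range(1, n):
    H[j] = H[j - 1] + 1.0 / j

def EW(k):
    """Exact E[W_(k)] for the k-th smallest of n-1 i.i.d. Exp(1) weights."""
    return H[n - 1] - H[n - 1 - k]

window_avg = sum(EW(i) for i in range(k - t, k)) / t
print(EW(k), window_avg)  # the window average sits several percent below E[W_(k)]
```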
However, from \cref{muX}, such a large value of $t$ would mean that the average given by the first term is significantly different from $W\os k$. The desired lower bound would be immediate if we could claim that $P_k$ necessarily used the $k$th cheapest edge on $s$ (of cost $W\os k^s$) or a later one, and likewise for $t$. We will prove something close to this. We argue in \cref{ExpLB} that every pair of vertices (excluding both $s$ and $t$) is joined by a path of cost at most $\delta$ (for some small $\delta$ to be specified) that is edge-disjoint from \emph{all} $P_i$, $i=1,\ldots,n-1$. We will show that this implies that path $P_k$ uses an edge on $s$ that is at most $\delta$ cheaper than $W\os k^s$, and likewise for $t$, yielding a sufficient lower bound. \subsection{Claims, and implications for \cref{Texp}} \label{ExpUpper} In order to establish upper bounds on $X_k$ in the exponential model, we use the same structure $R^{(k)}$ as described in \cref{structR}. Then \cref{XkWok} follows as before, and we can continue to define $U_k$ as in \cref{Ukdef}. For convenience define \begin{align}\label{kbardef} \bar{k} &= n-k . \end{align} As before we will treat $k$ in two ranges, and we start now with the smaller range. \begin{claim}\label{expkmedium} For $k \in [n^{4/10}, n-\sqrt{n} \,]$, let \begin{align}\label{ExpBk} B_k \coloneqq \parens{ \frac{2 n^{1/25}+C_B(n^{3/5}- \bar{k}^{3/5})}{n^{1/5}} }^{5/4} \quad \text{and} \quad \varepsilon_k \coloneqq C_\eps B_k^{1/5} n^{-1/5} \bar{k}^{-2/5} , \end{align} with $C_B=44$ and $C_\eps=4$. Then, asymptotically almost surely, \begin{equation}\label{ExpXkWk} X_{k+1} \leq W\os{\kp}^s + W\os{\kp}^t + 8 \eps_k. \end{equation} \end{claim} \noindent\textbf{Remark:} In proving \cref{expkmedium} we will set \begin{align}\label{expk0med} \rr0 & \coloneqq \eps_k \bar{k} , \end{align} because it roughly equates $\Wo{k+\rr0}-W\os k$ and $\eps_k$; see \cref{muX}.
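A quick simulation sketch supporting this choice (illustrative parameters, and no part of the proof): for the order statistics of $n-1$ i.i.d.\ $\Exp(1)$ weights, the spacings satisfy $W\os j - \Wo{j-1} \sim \Exp(n-j)$, so $\E\bigl[\Wo{k+\rr0} - W\os k\bigr] = \sum_{j=k+1}^{k+\rr0} 1/(n-j) \approx \rr0/\bar{k}$, and taking $\rr0 = \eps_k \bar{k}$ indeed puts the gap near $\eps_k$.

```python
import random

# Simulation sketch (illustrative parameters): for the order statistics of
# n-1 i.i.d. Exp(1) weights, E[W_(k+r0) - W_(k)] = sum_{j=k+1}^{k+r0} 1/(n-j),
# which is approximately r0/kbar.  Taking r0 = eps * kbar therefore makes the
# gap comparable to eps, as the remark says.
random.seed(1)
n, k, r0, trials = 2000, 1000, 100, 300
kbar = n - k

gap_sum = 0.0
for _ in range(trials):
    w = sorted(random.expovariate(1.0) for _ in range(n - 1))
    gap_sum += w[k + r0 - 1] - w[k - 1]   # W_(k+r0) - W_(k), 1-indexed
mean_gap = gap_sum / trials

exact = sum(1.0 / (n - j) for j in range(k + 1, k + r0 + 1))
print(mean_gap, exact, r0 / kbar)  # all three agree to within a few percent
```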
In this regime integrality is not an issue: $\rr0$ is large, per \cref{eq:Expr0}. It is clear that both $B_k$ and $\varepsilon_k$ in \cref{ExpBk} are increasing in $k$, even over the larger range $k \in [0,n]$. We will make use of the following bounds, holding for $n$ sufficiently large. Here, \cref{eq:ExpBkUpper} uses that at $k=n-\Theta(\sqrt{n})$, $\bar{k}^{3/5}$ dominates $2n^{1/25}$, while \cref{eq:ExpBkLower} takes $k=0$. \begin{align} B_k &\leq B_{n-\sqrt{n}} \leq C_B^{5/4} n^{1/2} \label{eq:ExpBkUpper}\\ B_k &\geq B_{n^{4/10}} \geq 2n^{-{1/5}} \label{eq:ExpBkLower} \\ \eps_k & \leq C_\eps B_k^{1/5} n^{-1/5} n^{-1/2 \cdot 2/5} \leq C_\eps C_B^{1/4} n^{-3/10} \label{eq:ExpEkUpper} \\ \eps_k & \geq C_\eps {(B_{n^{4/10}})}^{1/5} \, n^{-1/5} \, \bar{k}^{-2/5} \geq C_\eps n^{-{6/25}} \bar{k}^{-2/5} \label{eq:ExpEkLower} \\ \rr0 &= \bar{k} \eps_k \stackrel{\cref{eq:ExpEkLower}}{\geq} C_\eps n^{-6/25} \bar{k}^{3/5} \geq C_\eps n^{3/50} . \label{eq:Expr0} \end{align} \begin{claim}\label{expkbig} For $k \in (n-\sqrt{n}, n-2 \,] $, let \begin{align}\label{ExpBk2} B_k \coloneqq C_B' \sqrt n \quad \text{and} \quad \varepsilon_k \coloneqq C'_\varepsilon n^{-1/6} , \end{align} with $C_B'=115$ and $C'_\varepsilon=5$. Then, asymptotically almost surely, simultaneously for all $k$ in this range, \begin{align} X_{k+1} &\leq W\os{\kp}^s + W\os{\kp}^t + 8 \eps_k. \label{ExpXkUB} \end{align} \end{claim} \noindent\textbf{Remark:} In proving \cref{expkbig} we will set \begin{align}\label{expk0big} \rr0 & \coloneqq 1 . \end{align} As in \cref{kbig}, $B_k$ and $\varepsilon_k$ are constants independent of $k$, but we retain the subscript for consistency with the notation of \cref{largekIntro}. \begin{proof}[Proof of the upper bounds in \cref{Texp}] Analogous to the argument in \cref{uniclaims}, it is sufficient to check that $\eps_k = o(\E W\os k)$.
Since $\E W\os k \sim \ln \parens{\frac{n}{n-k}} \geq \frac kn$ (see \cref{muX}), it is enough to show that $\varepsilon_k = o(k/n)$. For $k \leq n^{0.99} = o(n)$, by first-order approximation, \begin{align}\label{n35} n^{3/5}-\bar{k}^{3/5} &\eqdef n^{3/5}-(n-k)^{3/5} \sim \tfrac 35 n^{-2/5} k , \end{align} so $ B_k = \Theta \parens{n^{-1/5} + n^{-3/4} k^{5/4}} $. Hence, from \cref{expkmedium}, specifically \cref{ExpBk}, \begin{align} \varepsilon_k = \Theta( (n^{-1/25} + n^{-3/20}k^{1/4}) n^{-1/5} n^{-2/5} ) = \Theta( n^{-16/25} + n^{-3/4}k^{1/4}) = o(k/n) \end{align} as $k \geq n^{4/10}$. For $k > n^{0.99}$, we have in \cref{expkmedium} that $\varepsilon_k = O(n^{-3/10})$ by \cref{eq:ExpBkUpper}, and so $\varepsilon_k = o(k/n)$, while in \cref{expkbig}, $\varepsilon_k = \Theta(n^{-1/6}) =o(k/n)$. \end{proof} \subsection{Path weights} To show inequality \cref{path+} it suffices to show that \begin{align}\label{eq:expWkk0} W\os{\kk} - W\os{\kp} \leq 1.1 \eps_k. \end{align} In \cref{expkbig}, we have defined $\rr0 \coloneqq 1$, so \cref{eq:expWkk0} is trivial. For \cref{expkmedium}, $\Delta \coloneqq W\os{\kk}-W\os{\kp}$ has the same distribution as $\sum_{i=k+2}^{k+\rr0} X(n-i)$, where $X(a) \sim \Exp(a)$ and these variables are all independent. Thus $\Delta$ is stochastically dominated by the sum of $\rr0-1$ independent random variables $X(\bar{k}-\rr0)$. Since $\rr0 = \bar{k} \varepsilon_k$, we have that $\E \Delta \leq \rr0/(\bar{k}-\rr0) = \eps_k/(1-\eps_k)$, and from \cref{exptail} it follows that $\Pr(\Delta > 1.1 \eps_k) = O(\exp(-\Theta(\rr0)))$. From \cref{eq:Expr0}, by the union bound, there is a negligible chance that \cref{path+} fails in any round. \subsection{Budgets in \cref{expkmedium}} \label{ExpC1budgets} As before, we need to define a $B_k$ satisfying \cref{budget} and, as before, $\eps_k$ can be guessed from \cref{epsk}, then checked to yield robustness as in \cref{kmedrobust,kbigrobust}.
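As a numerical aside on the definitions in \cref{ExpBk} (the script below is illustrative only, with $C_B=44$ and $C_\eps=4$ as in \cref{expkmedium}): at the bottom of the range, $k=n^{4/10}$, the ratio $\eps_k/(k/n)$ decays only like $n^{-1/25}$, so although it is $o(1)$ it falls very slowly, and is still above $1$ even at $n=10^{18}$.

```python
# Numerical aside (constants C_B = 44, C_eps = 4 as in the claim; the check
# itself is illustrative): at the bottom of the range, k = n^{4/10}, the ratio
# eps_k / (k/n) decays like n^{-1/25}: it is o(1), but falls very slowly.
C_B, C_eps = 44.0, 4.0

def eps_over_k_n(n):
    k = n ** 0.4
    kbar = n - k
    B = ((2 * n ** (1 / 25) + C_B * (n ** (3 / 5) - kbar ** (3 / 5)))
         / n ** (1 / 5)) ** (5 / 4)
    eps = C_eps * B ** (1 / 5) * n ** (-1 / 5) * kbar ** (-2 / 5)
    return eps / (k / n)

ratios = [eps_over_k_n(10.0 ** e) for e in (6, 12, 18)]
print(ratios)  # strictly decreasing, but still above 1 even at n = 10^18
```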
The base case, confirming \cref{Bkbase}, is given by $k=n^{4/10}$, where by \cref{eq:ExpBkLower} \begin{align} B_k \geq 2 n^{-1/5} \geq U_k . \end{align} To verify \cref{Bksuff}, it is straightforward to check that $\frac{\partial^2}{\partial k^2} {B_k}$ is positive, so $\frac{\partial}{\partial k} {B_k}$ is increasing, and \[ B_{k+1}-{B_k} \geq \frac{\partial}{\partial k} {B_k} = \frac 54 \cdot \frac 35 \cdot \frac {C_B}{C_\eps} \, \eps_k = \frac 34 \frac {C_B}{C_\eps} \, \eps_k \geq 8\eps_k , \] since $\frac 34 \frac {C_B}{C_\eps} = \frac 34 \cdot \frac{44}{4} = 8.25 \geq 8$. Finally, we establish \cref{epsfit}. We show that w.h.p.\ for all $k$ in the range, \begin{align}\label{expepsfitpf} \Delta &\coloneqq \Wo {k+1}^s-W\os k^s \leq 0.1 \eps_k . \end{align} Note that $\Delta \sim \Exp(\bar{k}-1)$, so \begin{align*} \Pr \parens{\Delta > 0.1\varepsilon_k} = \exp \parens{-0.1 \eps_k \cdot (\bar{k}-1)} = \exp(-\Omega(\rr0)) = \exp(-n^{\Omega(1)}) \end{align*} by \cref{eq:Expr0}. Then, by the union bound there is a negligible chance that \cref{expepsfitpf} fails for any $k$. \subsection{Robustness in \cref{expkmedium}} With reference to \cref{robustness}, we complete the robustness argument for \cref{expkmedium}, showing that \cref{robustblurb} holds with high probability. Here we have taken $\rr0 = \eps_k \bar{k}$, so the number of edges from a middle vertex to $V'_s$ (see \cref{Zvs}) is $Z^s_v \sim \Bi(\eps_k \bar{k}, \eps_k)$, with mean \begin{align}\label{Expla} \lambda &= \rr0 \eps_k = \eps_k^2 \bar{k} \end{align} (see \cref{kmedlambda}). Recall that if $\lambda$ is small we expect (see \cref{kmedtotalweight}) that to destroy all paths the adversary will have to delete edges of total weight at least $\eps_k \, n \, \lambda^2 = \eps_k^5 n \bar{k}^2$, which will exceed $B_k$. And, if $\lambda$ is large, then each $Z_v$ will have expectation close to $\lambda=\eps_k^2 \bar{k}$, for a total cost $\eps_k n$ times larger, namely $\eps_k^3 n \bar{k}$, and again this exceeds $B_k$.
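The two comparisons just sketched can be checked numerically; the following illustrative script (with $n=10^6$ and the constants of \cref{expkmedium}) verifies only the final deterministic inequalities across the range, not the probabilistic statements.

```python
# Rough-calculation check (illustrative, n = 10^6): with C_B = 44, C_eps = 4,
# both candidate deletion costs, 0.16 eps^3 n kbar (lambda large) and
# 0.17 eps^5 n kbar^2 (lambda small), exceed the budget B_k across the
# whole range k in [n^{2/5}, n - sqrt(n)].
C_B, C_eps = 44.0, 4.0
n = 10**6

def params(k):
    kbar = n - k
    B = ((2 * n**(1/25) + C_B * (n**(3/5) - kbar**(3/5))) / n**(1/5)) ** (5/4)
    eps = C_eps * B**(1/5) * n**(-1/5) * kbar**(-2/5)
    return B, eps, kbar

worst_large = worst_small = float("inf")
k = n ** 0.4
while k <= n - n ** 0.5:
    B, eps, kbar = params(k)
    worst_large = min(worst_large, 0.16 * eps**3 * n * kbar / B)
    worst_small = min(worst_small, 0.17 * eps**5 * n * kbar**2 / B)
    k *= 1.1
print(worst_large, worst_small)  # both comfortably above 1
```

The second ratio is constant in $k$: algebraically it equals $0.17\,C_\eps^5 = 174.08$.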
We now flesh out these rough calculations, including the probabilistic details, applying \cref{lemma:BinMin} to $Z_v$ in the two cases of $\lambda$ small and large. For the adversary to delete all $s$--$t$\xspace paths via $v$, he must delete at least \[ Z_v\coloneqq \min(Z_v^s, Z_v^t) \] edges, and to destroy all paths he must delete at least \[ N\coloneqq \sum_{v \in M'} Z_v \] edges. As described in \cref{robustness}, we imagine a fixed deletion of $k$ edges on each of $s$ and $t$, giving neighbour sets $V'_s$ and $V'_t$ and a set $M'$ of middle vertices, eventually taking a union bound over all such choices. \medskip \noindent \tmbf{If $\lambda \geq 2$}, then by \cref{lemma:BinMin}, for each $v \in M'$, $\Pr(Z_v \geq 0.65 \lambda) \geq 1/4$. Thus, $N$ stochastically dominates $0.65\lambda \cdot \Bi(0.99n, 1/4)$, with expectation $> 0.1608 \lambda n$. We shall consider it a \emph{failure} if $N \leq 0.16 \lambda n$. Assuming success, since each edge costs at least $\eps_k$ to delete, it costs at least $0.16 \eps_k \lambda n = 0.16 \eps_k^3 n \bar{k}$ to delete them all. This exceeds $B_k$: \begin{align*} \frac{0.16 \, \eps_k^3 n \bar{k}}{B_k} &= 0.16 \, C_\eps^3 n^{-3/5} \bar{k}^{-6/5} B_k^{-2/5} n \bar{k} \eqnote{by definition of $\varepsilon_k$} \\&= 0.16 \, C_\eps^3 B_k^{-2/5} n^{2/5} \bar{k}^{-1/5} \\& \geq 0.15 \, C_\eps^3 C_B^{-1/2} n^{1/5} n^{-1/5} \eqnote{by~\cref{eq:ExpBkUpper}} \\ &> 1, \end{align*} using that $0.15 \cdot C_\eps^3 C_B^{-1/2} = 0.15 \cdot 4^3/\sqrt{44} \approx 1.45 > 1$. Failure means that $N \leq 0.16 \lambda n$, so the $\Bi(0.99n, 1/4)$ variable stochastically dominated by $N/(0.65 \lambda)$ is at most $(0.16\lambda n)/(0.65\lambda)=(0.16/0.65)n$. Noting that $0.99 \cdot 1/4 > 0.16/0.65$, by \cref{lemma:BinDev}, the probability of failure is $\exp(-\Omega(n))$.
By the union bound, the total of the failure probabilities, over all rounds and all adversary choices of the $k$ root edges at $s$ and $t$, is small: \begin{align} \label{expcase1failure} \sum_k {{\binom{k+\rr0}{\rr0}}^2} & \cdot \exp(-\Omega(n)) \\& \leq \sum_k \, {(n^{\rr0})}^2 \exp(-\Omega(n)) \notag \\ &= \sum_k \exp\parens{2\eps_k \bar{k} \ln n-\Omega(n)} \quad\text{(by $\rr0=\eps_k \bar{k}$)} \notag \\&\leq n \exp(-\Omega(n)) = o(1), \notag \end{align} the penultimate inequality using $\eps_k \bar{k} = O(n^{7/10})$ by~\cref{eq:ExpEkUpper}. \medskip \noindent \tmbf{If $\lambda < 2$}, then by \cref{lemma:BinMin} $N$ stochastically dominates $\Bi(0.99n, 0.18\lambda^2)$, with expectation $> 0.175 \lambda^2 n$. We shall consider it a \emph{failure} if $N \leq 0.17 \lambda^2 n = 0.17 \eps_k^4 n \bar{k}^2$. Each edge costs at least $\eps_k$ to delete. Assuming success, it thus costs at least $0.17 \eps_k^5 n \bar{k}^2$ to delete them all, which exceeds $B_k$: \begin{align*} \frac{0.17 \eps_k^5 n \bar{k}^2}{B_k} &= 0.17 C_\eps^5 \eqnote{by definition of $\varepsilon_k$} \\ &> 1, \end{align*} using that $0.17 C_\eps^5 = 0.17 \cdot 4^5 = 174.08 > 1$. By \cref{lemma:BinDev}, the probability of failure is \begin{equation}\label{eq:expN2} \Prob \parens{ N \leq 0.17 \eps_k^4 n \bar{k}^2} = \exp(-\Omega(\eps_k^4 n \bar{k}^2)). \end{equation} Over all rounds and adversary choices of edges incident to $s$ and $t$, the total failure probability is at most \begin{align} \sum_k {\binom{k+\rr0}{\rr0}}^2 & \cdot \Prob \parens{N < 0.17 \eps_k^4 n \bar{k}^2} \notag \\ &\leq \sum_k \exp\parens{2\eps_k \bar{k} \ln n- \Omega(\eps_k^4 n \bar{k}^2)} \notag \\&\leq n \exp(-\Omega(\eps_k^4 n \bar{k}^2)) , \notag \intertext{% because $\eps_k^4 n \bar{k}^2$ is larger than $\eps_k \bar{k}$ by a factor $\eps_k^3 n \bar{k}$, which by~\cref{eq:ExpEkLower} is $\Omega(n^{-18/25} \bar{k}^{-6/5} n \bar{k})=\Omega(n^{7/25} \bar{k}^{-1/5}) = \Omega(n^{2/25})$.
Continuing, this is} \notag \\ &\leq n \exp(-\Omega(n^{1/25} \, \bar{k}^{2/5})) \eqnote{invoking~\cref{eq:ExpEkLower} again} \label{expcase2failure} \\& = o(1) . \notag \end{align} \subsection{Budgets in \cref{expkbig}} \label{expbudgetsbig} We now establish \cref{budget} for the parameters of \cref{expkbig}. \cref{ExpC1budgets} showed that \cref{budget} holds for $k$ up to ${k^\star} \coloneqq \floor{n-\sqrt{n}}$, the point where \cref{expkmedium} ends and just before \cref{expkbig} begins, so in particular $B_{k^\star} \geq U_{k^\star} - I_{k^\star}$. For the regime of \cref{expkbig}, we redefine $I_k$ from \cref{Ik}. Recall that $I_k$ is a lower bound on the total weight of the edges incident to $s$ and $t$ used by the first $k$ paths. Previously, the sum defining $I_k$ in \cref{Ik} went to $k-1$ to avoid double counting the \ensuremath{\set{s,t}}\xspace edge. In this regime, however, we need the sum to go to $k$, as the $W\os i$ increase rapidly. The weight of the \ensuremath{\set{s,t}}\xspace edge is distributed as $\Exp(1)$, thus w.h.p.\ it costs at most $n^{0.01}$. For $k>{k^\star}$, define \begin{align} \label{Ikdefnew} I_k & \coloneqq \sum_{i=1}^k \parens{W\os i^s + W\os i^t} - n^{0.01}, \end{align} so that w.h.p.\ $I_k$ is a lower bound on the total weight of the incident edges: the $n^{0.01}$ term resolves the potential double-counting of \ensuremath{\set{s,t}}\xspace. We are now ready to check that \cref{budget} holds.
Following the derivation of \cref{budgetsClaim2}, for $k$ from ${k^\star}+1$ to $n-2$, \begin{align} U_k - I_k &= (U_{k^\star} - I_{k^\star}) + [(U_k-U_{k^\star}) - (I_k - I_{k^\star})] \notag \\ &\leq B_{k^\star} + \sum_{i={k^\star}+1}^{n-2} {7 \eps_i} - (\Wo{{k^\star}}^s+\Wo{{k^\star}}^t-n^{0.01}) \eqnote{see \cref{Ukdef}, \cref{Ik}, and \cref{Ikdefnew}} \notag \\&\leq 114 \sqrt n + \sqrt n \cdot 7 C'_\varepsilon n^{-1/6} + n^{0.01} \eqnote{see \cref{eq:ExpBkUpper} and \cref{ExpBk2}} \notag \\& \leq 115 \sqrt n \notag \\& \leq B_k \eqnote{see \cref{ExpBk2}}, \label{ExpBudget} \end{align} using that $C_B' = 115$. \subsection{Robustness in \cref{expkbig}} \label{exprobustbig} Again, our aim is to establish robustness of $R$ by showing that \cref{robustblurb} holds with high probability, and the argument is similar to but simpler than that for robustness in \cref{expkmedium}. Since $\rr0=1$, both $V'_s$ and $V'_t$ have size 1. For a vertex $v \in M'$, let $Z_v$ be the number of paths from $V'_s$ to $V'_t$ via $v$. There is only one such possible path, hence \[ Z_v \sim \Bern \parens{\eps_k^2}. \] To destroy all $s$--$t$\xspace paths the adversary must delete at least \[ N\coloneqq \sum_{v \in M'} Z_v \] edges. $N$ stochastically dominates $\Bi(0.99n, \eps_k^2)$, with expectation at least $0.99 \eps_k^2 n$. We declare the event $N \leq 0.98 \eps_k^2 n$ a \emph{failure}. Assuming success, destroying all $s$--$t$\xspace paths would cost at least $\eps_k N \geq 0.98 \eps_k^3 n$. This exceeds $B_k$, since by \cref{ExpBk2} and $0.98 {C'_\varepsilon}^3 = 0.98 \cdot 125 = 122.5 > 115 = C'_B$, \begin{equation*} \frac{0.98 \eps_k^3 n}{B_k} = \frac{0.98 {C'_\varepsilon}^3}{C'_B} >1 . \end{equation*} The probability of failure is \begin{equation}\label{eq:expN3} \Prob \parens{ N \leq 0.98 \eps_k^2 n} = \exp(-\Omega(\eps_k^2 n)) = \exp(-\Omega(n^{2/3})).
\end{equation} Over all rounds and adversary choices, using that $\binom{k+\rr0}{\rr0} = \binom{k+1}1 \leq n$, the total failure probability is at most \begin{align} \sum_k {\binom{k+\rr0}{\rr0}}^2 & \cdot \Prob(N \leq 0.98 \eps_k^2 n) \notag \\ &\leq \sqrt n \, n^2 \, \exp (-\Omega(n^{2/3})) \eqnote{by~\cref{eq:expN3}} \label{eq:expfailure3} \\& = o(1) . \notag \end{align} \subsection{Lower bound} \label{ExpLB} As argued in the introduction of this section, for any $k=o(n)$, the lower bound follows from the uniform case. Thus it suffices to show the lower bound for $k \geq n^{9/10}$, which we do now. \begin{remark}\label{remdelta} With high probability, for every pair of vertices $u$ and $v$ in $G' = G-s-t$, there is a $u$--$v$ path in $G'$ of cost at most $\delta = 20n^{-1/6}$ that is edge-disjoint from $P_1, \ldots, P_{n-1}$. \end{remark} \begin{proof} The proof of \cref{expkbig} showed that w.h.p., for all $k$ in the claim's range (up to $k=n-2$), there is a cheap $s$--$t$\xspace path (of cost given by \cref{ExpXkUB}) disjoint from $P_1,\ldots,P_k$, \emph{because} for a given pair of neighbours $u,v$ of $s$ and $t$, there is a $u$--$v$ path in $G'$ that is edge-disjoint from these $k$ paths and has cost at most $4 \eps_k = 20n^{-1/6} \eqdef \delta$ (see \cref{ExpBk2}). The existence of a $(k+1)$st $s$--$t$\xspace path limits $k$ to $n-2$ since after that there are no new neighbours $u$ and $v$ of $s$ and $t$, but the rest of the argument extends to $k={n-1}$. In particular, extending the definition \cref{ExpBk2} of $B_k$ and $\eps_k$ to $k={n-1}$, the derivation of \cref{ExpBudget} extends without change and shows that the budget $B_{n-1}$ covers the middle edges of all paths $P_1,\ldots,P_{n-1}$, and the robustness argument also extends and shows \cref{eq:expN3} to hold for $k={n-1}$.
Since the failure probability in \cref{eq:expN3} is exponentially small, and there are fewer than $n^2$ pairs $\set{u,v}$ in $G'$, w.h.p.\ there is a cheap path (of cost $\leq \delta$) for every pair. \end{proof} For the remainder of this section we assume that the high-probability conclusion of \cref{remdelta} holds. Let $H_k^s$ be the weight of the heaviest edge incident to $s$ used by the first $k$ paths, and let $L_k^s$ be the weight of the lightest edge incident to $s$ \emph{not} used by the first $k$ paths. Define $H_k^t$ and $L_k^t$ likewise. We claim that for all $k$ from 1 to ${n-1}$, with $\delta=20n^{-1/6}$ as in \cref{remdelta}, \begin{align}\label{HLdelta} H_k^s - L_k^s \leq \delta , \end{align} and likewise with $t$ in place of $s$. We argue by contradiction. Given $k$, let $P_i$, $i\leq k$, be the path using the edge of weight $H^s_k$. By \cref{remdelta}, we can construct an $s$--$t$\xspace path $Q$ whose $s$-incident edge is the one of weight $L^s_k$, whose $t$-incident edge is the same as that of $P_i$, and whose middle edges cost at most $\delta$ and are not used in $P_1,\ldots,P_{n-1}$. This path $Q$ is cheaper than $P_i$: its $s$-incident edge is cheaper by $H^s_k-L^s_k > \delta$, its $t$-incident edge has the same cost, and its middle edges (costing at most $\delta$) cost at most $\delta$ more than those of $P_i$. Also, $Q$ is edge-disjoint from the first $i-1$ paths: its $s$-incident edge $L^s_k$ is not used even by the first $k$ paths, the middle edges are disjoint from those of all $n-1$ paths, and its $t$-incident edge is that used by $P_i$ (so not used by a previous path). Thus, $Q$ should have been chosen in preference to $P_i$, a contradiction, establishing \cref{HLdelta}. Trivially, $H_k^s \geq W\os k^s$. Thus, from \cref{HLdelta}, \begin{align}\label{LW} L^s_k \geq H_k^s - \delta \geq W\os k^s - \delta . \end{align} For $k \leq n-2$, the edge of $P_{k+1}$ incident to $s$ costs at least $L_k^s$ and the edge incident to $t$ at least $L_k^t$.
If $P_{k+1}$ is not the single-edge path $\set{s,t}$ these two edges are distinct, so that $X_{k+1} \geq L^s_k + L^t_k$. If $P_{k+1}$ is the single-edge path $\set{s,t}$ then $P_k$ is not, and $X_{k+1} \geq X_k \geq L^s_{k-1} + L^t_{k-1}$. Either way, by \cref{LW}, \begin{align} X_{k+1} &\geq L^s_{k-1} + L^t_{k-1} \notag \\ &\geq \Wo{k-1}^s + \Wo{k-1}^t - 2\delta. \label{bigkLB1} \end{align} Recall that we are concerned here with $k \geq n^{9/10}$. By \cref{lemma:edge-orderstat}, for all such $k$, and for any $\gamma>0$, w.h.p.\ $W\os k \geq (1-\gamma) \EWW k$. Since the exponential random variable is stochastically greater than the uniform, $\EWW k > k/n = \Omega(n^{-1/10})$, while $\delta = 20n^{-1/6} = o(\EWW k)$. From \cref{muX} it is clear that $\EWW {k-1} \asymp \EWW {k+1}$ (for any $k=\omega(1)$), and we subsume the asymptotic error into the constant $\gamma$. Thus, from \cref{bigkLB1}, for any $\gamma>0$, w.h.p., for all $k \geq n^{9/10}$, \begin{align*} X_k \geq (1-\gamma) 2 \EWW k , \end{align*} completing the proof of the lower bound in \cref{Texp}. \section{Expectation}\label{sec:expectation} In this section we prove \cref{thm:expectation}. We treat the uniform and exponential models at the same time. Let $\evp k$ be the event that $P_k$ exists. Clearly $\Pr(\evp k) \geq \Pr(\evp {n-1})$. By \cref{Tmain} (for the uniformly random model) and \cref{Texp} (for the exponential model), $\Pr(\evp {n-1}) = 1-o(1)$. This establishes the first part of the theorem. Then, let $\mu_k = 2\E W\os k + \ln n /n$ (so for the uniform model, $\mu_k=w_0(k)$). It suffices to show that \begin{align} \E[X_k \mid \evp k] &= (1+o(1)) \mu_k \label{Emu} \end{align} uniformly in $k$. First, we show the lower bound implicit in \cref{Emu}. Fix $\varepsilon > 0$. Let $\evl k$ be the event that (jointly) $P_k$ exists and $X_k \geq (1-\varepsilon) \mu_k$.
By \cref{Tmain} (for the uniform model) and \cref{Texp} (for the exponential model), $\evl k$ holds with probability $1-o(1)$ uniformly in $k$. Thus, \begin{align*} \E[X_k \mid \evp k] &\geq \Pr(\evl k) \E[X_k \mid \evp k \wedge \evl k ] \geq (1-o(1)) \, (1-\varepsilon) \mu_k. \end{align*} Since this holds for any $\varepsilon$, we have that \[ \E[X_k \mid \evp k] \geq (1-o(1)) \mu_k. \] We now establish the corresponding upper bound. \subsection{Small \tp{$k$}{k}} \label{expSmallk} First, we consider the range $k \leq n^{4/10}$. We will need the following lemma in \cref{EXkU}. \begin{lemma}\label{lem:large-eps} There exists an absolute constant $C>0$ such that, for all $\varepsilon>C$, in both the exponential and uniform models, for all $k=o(\sqrt n)$ the probability of the event \begin{align}\label{eq:indC} X_k > (1+\varepsilon) \mu_k \end{align} is $\OO{n^{-1.9}}$. \end{lemma} \begin{proof} By the reasoning given in the introduction of \cref{ExpBounds}, it is sufficient to show the result in the uniform case, where $\mu_k=\frac{2k + \ln n}{n}$. We use the same argument as developed in \cref{sec:k-small}, where we prove \cref{Tmain} up to $k=o(\sqrt n)$. Our argument in \cref{sec:k-small} (see \cref{indHyp}) was that for any sufficiently small $\varepsilon>0$, \begin{align}\label{eq:ind-large} \text{if } X_i \leq (1+\eps) \parens { \frac{2i}n+\frac{\ln n}n } \text{ for all $i \leq k$, then w.h.p.\ the same holds for $i=k+1$.} \end{align} We proved this by constructing a structure $R=R^{(k)}$ in $G$, in which after deleting $k$ paths, each of cost $\leq (1+\eps) (2k/n+\ln n/n)$ from $G$, w.h.p.\ there remains a path in $R$ satisfying the same cost bound. By \cref{ksucceeds}, the probability of failure was $\OO{n^{-1.9}}+\exp(-\Theta(s(k)))$. This does not suffice since for $k$ small the second term may exceed $\OO{n^{-1.9}}$ (recall $s = 2k + \ln n$).
To prove the lemma, we will show that for some sufficiently \emph{large} constant $\varepsilon$, the failure probability in \cref{eq:ind-large} is $\OO{n^{-1.9}}$. As noted in \cref{warning}, a few parts of the argument developed in \cref{sec:k-small} rely on $\varepsilon$ being sufficiently small, and here we will detail the changes needed. Principally, we will make one modification (a simplification) to \cref{sec:k-small}'s construction of $R$. We will also track the dependence of key Landau-notation expressions on $\varepsilon$. \medskip Recall from \cref{sdef,w0def} that $s=2k + \ln n$ and $w_0 = s/n$. Paralleling the structure of \cref{sec:k-small}, we start by reviewing the adversary's edge-count budget. This was given by \cref{Bany} which, through its dependence on \cref{pathlength}, held only for sufficiently {small} $\varepsilon$. For sufficiently large $\varepsilon$, modulo the one-time failure probability $\OO{n^{-1.9}}$ from \cref{LLenBd}, each of the first $k$ paths has length $\leq (1+\varepsilon) w_0 \cdot 19 n < 20 s \varepsilon$, and the total length of the first $k$ paths is at most \begin{align}\label{eq:budget-large} 20 k s \varepsilon < 10 s^2 \varepsilon , \end{align} so we now take this to be the adversary's budget. We build level-0 edges of $R$ exactly as in \cref{level0}, using the same parameter $\rr0$. That is, we add the cheapest $k+\rr0$ edges incident on $s$, with $\rr0= \ceil{\tfrac1{10} \varepsilon s}$ as in \cref{k0}; the opposite endpoints of these edges are the level-1 vertices. Recall that we declared this step a failure if the number $X$ of edges with weights in the interval $[0, \tfrac k n+\frac19 \varepsilon w_0]$ is smaller than $k+\rr0$.
Note that $X \sim \Bi(n', \tfrac k n+\frac19 \varepsilon w_0)$, thus $\E X = (1-o(1)) \, (k+\frac19 \varepsilon s)$, and failure means that $X <k+\rr0$, i.e., that \begin{align*} \frac{X}{\E X} \leq (1+o(1)) \, \frac{k+\frac1{10} \varepsilon s}{k+\frac19 \varepsilon s} \leq \frac{10}{11} \end{align*} for $\varepsilon$ sufficiently large. Then, analogously to \cref{level0fail}, the failure probability by \cref{lemma:BinDev} is at most \begin{align} \Pr(X < \tfrac{10}{11} \E X) &\leq \exp(-\Omega(\E X)) \leq \exp(-\Omega(\varepsilon s)). \label{eq:large-level0fail} \end{align} We skip constructing level-1 edges as in \cref{level1}, instead setting the level-2 vertices identical to level-1 vertices. (There are no edges between these levels; we have ``level 2'' only to keep the level numbering the same as before.) We build level-2 edges exactly as before, with the same parameter $\rr2$, linking to each level-2 vertex its cheapest $\rr2= \frac{1}{10} \varepsilon s$ neighbours (which become the level-3 vertices). The calculations in \cref{level2} hold for any $\varepsilon>0$, and from \cref{level2fail} the probability of any failure on this level is \begin{align}\label{eq:large-level1fail} \leq \exp\parens{-\Theta(\varepsilon s)}. \end{align} The adversary's deletions of edges incident on $s$ must leave $\rr0$ vertices at level 1 (a.k.a.\ level 2), thus $\rr0 \rr2 = \varepsilon^2 s^2/100$ edges leading to level~3. By \cref{eq:budget-large} the adversary is allowed to delete at most $10 s^2 \varepsilon$ edges, so for $\varepsilon$ sufficiently large, at least $2 s^2$ level-3 vertices remain; this is the same as before, and will continue to suffice. From level~3 we construct shortest-path trees just as in \cref{level3}, whose calculations hold for any $\varepsilon > 0$.
To recapitulate, these trees are built to the size given in \cref{ddef}, independent of $\varepsilon$, the calculations made are valid for all $\varepsilon$, and the result (here as in \cref{sec:k-small}) is that each tree fails with some probability $o(1)$, but the level as a whole fails only if at least $0.01 s^2$ trees fail, which occurs with probability only $\exp(-\Omega(s^2))$ (see \cref{level3fail}). This concludes the modified construction of $R$. The remainder of the argument is unchanged from \cref{sec:k-small}. In the absence of failures, the maximum weight of any $s$--$t$\xspace path in $R$ remains at most $(1+\varepsilon) w_0$ per \cref{Rpathcost} (indeed, a little less as we've skipped the level-1 edges). The number of successful level-3 trees is $\Omega(s^2)$ as before, and the calculations leading to the probability that an adversary can destroy all cheap paths in $R$ are unaffected: this probability remains $\exp(-\Omega(s^2 \ln n))$ as in \cref{smallkAdversaryFailure}, which is dominated by other failure probabilities. Tallying up, as in \cref{sec:small-success}, we have a one-time failure probability of $\OO{n^{-1.9}}$ from \cref{LLenBd}. Out of levels 0, 2 and 3 we have failure probabilities given respectively by \cref{eq:large-level0fail}, \cref{eq:large-level1fail} and \cref{level3fail}, namely $\exp(-\Omega( \varepsilon s))$, $\exp(-\Omega( \varepsilon s))$ and $\exp(-\Omega( s^2))$. Since $s > \ln n$, for $\varepsilon$ sufficiently large, the net failure probability is $\OO{n^{-1.9}}$, as claimed. \end{proof} Let $C$ be the constant in \cref{lem:large-eps}. Separately, fix any sufficiently small $\varepsilon>0$. Let \begin{align*} U_1 &= [0, (1+\varepsilon){\mu_k}), \\ U_2 &= [(1+\varepsilon){\mu_k}, C{\mu_k}), \\ U_3 &= [C{\mu_k}, \infty). \end{align*} Let $\mathcal{A}_i$ be the event that $X_k \in U_i$. By \cref{Tmain}, $\Pr(\mathcal{A}_1) = 1-o(1)$ and $\Pr(\mathcal{A}_2)=o(1)$, and by \cref{lem:large-eps}, $\Pr(\mathcal{A}_3)=O(n^{-1.9})$.
Since here we are considering $k \leq n^{4/10} \leq n/2$, with reference to the proof of \cref{rmk:existence}, one possible choice for $P_k$ is some path of length 2 (there must remain at least one such), and thus, deterministically, \begin{align}\label{eq:length2} X_k \leq W_s + W_t , \end{align} where $W_v$ denotes the weight of the most expensive edge out of $v$ ($W_v = W^v_{\os{n-1}}$ in the notation of \cref{Wkv}). In the uniform model, \cref{eq:length2} means that, deterministically, $X_k \leq 2$. Then, \begin{align} \E[X_k] &=\Pr(\mathcal{A}_1) \E[X_k \mid \mathcal{A}_1] + \Pr(\mathcal{A}_2) \E[X_k \mid \mathcal{A}_2] + \Pr(\mathcal{A}_3) \E[X_k \mid \mathcal{A}_3] \notag \\ &\leq (1-o(1)) \cdot (1+\varepsilon) {\mu_k} + o(1) \cdot (1+C){\mu_k} + O(n^{-1.9}) \cdot 2 \notag \\ &\leq (1+\varepsilon+o(1)){\mu_k} , \label{EXkU} \end{align} since $\mu_k > \ln n/n$. As this holds for arbitrarily small $\varepsilon > 0$, \begin{align} \E[X_k] \leq (1+o(1)) {\mu_k} . \label{eq:expectLB} \end{align} For the exponential model the same argument applies, once we control $\E[X_k \mid \mathcal{A}_3]$. We make use of the following inequality. Let $Z$ be a random variable with CDF $F$, and $\mathcal{A}$ be an event with $\Pr(\mathcal{A}) = \alpha$. Then, \begin{align}\label{eq:F} \E[Z \mid \mathcal{A}] \leq \E[Z \mid Z > F^{-1} (1-\alpha) ]. \end{align} In the case that $Z$ is an exponential random variable with rate $\lambda$, $F(z)=1-\exp(-\lambda z)$, so $F^{-1} (1-\alpha) = -\ln(\alpha)/\lambda$. By the memoryless property of the exponential, the RHS of \cref{eq:F} is $\E[Z]+F^{-1} (1-\alpha)$, giving \begin{align} \E[Z \mid \mathcal{A}] & \leq \frac{1-\ln(\alpha)}{\lambda} . \label{ExpCondExp} \end{align} Recall from \cref{ordersumexp} that $W_v = \sum_{i=1}^{n-1} Z_i$ where $Z_i \sim \Exp(i)$. Condition on the event $\mathcal{A}_3$, taking $\alpha \coloneqq \Pr(\mathcal{A}_3) = O(n^{-1.9})$.
By \cref{ExpCondExp}, \begin{align} \E[W_v \mid \mathcal{A}_3] =\sum_{i=1}^{n-1} \E [Z_i \mid \mathcal{A}_3] \leq \sum_{i=1}^{n-1} \frac{1-\ln(\alpha)}{i} \sim (1-\ln(\alpha)) \ln n = O(\ln^2 n). \label{eq:Walpha} \end{align} By \cref{eq:length2}, \cref{eq:Walpha} and linearity of expectation, \begin{align} \Pr(\mathcal{A}_3) \E[X_k \mid \mathcal{A}_3] \leq \alpha \E[W_s + W_t \mid \mathcal{A}_3] = 2 \alpha O(\ln^2 n) = O(n^{-1.9} \, \ln^2 n) , \label{Exp2Path} \end{align} which is $o(\mu_k)$ since $\mu_k > \ln n/n$. Thus \cref{EXkU} holds also for the exponential model (the change to the middle line of the calculation affects nothing), whereupon so does \cref{eq:expectLB}. \subsection{Large \tp{$k$}{k}} For $k \geq n^{4/10}$, we gather the failure events in \cref{largekUB}. First, we have $X_{n^{4/10}} \leq 3 n^{4/10} / n$ with failure probability $O(n^{-1.9})$, from \cref{Xn0.4} and \cref{Pn0.4}. Then, we have to check two types of failures: failure of \cref{path+} to be an upper bound on \cref{path1} (because the edge order statistics are not as expected), and violation of \cref{XkWok} (because $R$ fails to be robust against the adversary). Failure of \cref{path+} as an upper bound is, in the uniform model, checked through violation of \cref{eq:Wkk0}; the paragraph after \cref{eq:Wkk0} shows this failure to occur w.p.\ at most $\exp(-\Omega(n^{0.01}))$. Likewise, in the exponential model it is checked in and following \cref{eq:expWkk0}, with a failure probability of $O(\exp(-\Omega(n^{3/50})))$. The failure probability of \cref{XkWok} in the uniform model is calculated for three cases: near \cref{case1failure} as $n \exp(-\Omega(n))$, near \cref{case2failure} as $n \exp(-\Omega(n^{11/25}))$, and near \cref{eq:unifailure3} as $14 n^{5/2} \exp(-\Omega(n^{2/3}))$.
The failure probability in the exponential model is also calculated for three cases: near \cref{expcase1failure} as $n \exp(-\Omega(n))$, near \cref{expcase2failure} as $n \exp(-\Omega(n^{1/25}))$, and near \cref{eq:expfailure3} as $n^{5/2} \exp(-\Omega(n^{2/3}))$. Thus, the failure probabilities for \cref{path+} and \cref{XkWok} are all $O(\exp(-n^{0.01}))$, so the probability of any failure affecting any $k>n^{4/10}$ is $O(n^{-1.9})$. Let \begin{align*} U_1 &= [0, (1+\varepsilon){\mu_k}) \\ U_2 &= [(1+\varepsilon){\mu_k}, \infty), \end{align*} and let $\mathcal{A}_i$ be the event that $P_k$ exists and $X_k \in U_i$. Thus $\Pr(\mathcal{A}_1)=1-o(1)$ and $\Pr(\mathcal{A}_2) = O(n^{-1.9})$. Conditioning on the event $\evp k$ that $P_k$ exists, this path clearly has cost \[ X_k \leq Z \coloneqq \sum_{v \in V(G)} W_v \] (analogous to \cref{eq:length2}). In the uniform model, deterministically, $Z \leq n$. In the exponential model, the event $\mathcal{A}_2$ here has the same probability as event $\mathcal{A}_3$ in \cref{expSmallk}, so we may reuse \cref{eq:Walpha}, obtaining \begin{align*} \E[Z \mid \mathcal{A}_2] &= \sum_{v \in V(G)} \E[W_v \mid \mathcal{A}_2] = n \, O(\ln^2 n) = o(n^{1.1}) . \end{align*} Thus, in both the uniform and exponential cases, \begin{align} \E[X_k \mid \evp k] &=\Pr(\mathcal{A}_1) \E[X_k \mid \mathcal{A}_1] + \Pr(\mathcal{A}_2) \E[X_k \mid \mathcal{A}_2] \notag \\ &\leq (1-o(1)) \cdot (1+\varepsilon) {\mu_k} + O(n^{-1.9}) \cdot o(n^{1.1}) \notag \\ &=(1-o(1))(1+\varepsilon){\mu_k} , \label{EXkUlarge} \end{align} since $\mu_k > 2k/n > n^{-6/10} = \omega(n^{-0.8})$. As this holds for arbitrarily small $\varepsilon > 0$, for all $k \geq n^{4/10}$, \begin{align} \E[X_k \mid \evp k] \leq (1+o(1)) \mu_k , \label{eq:expectUB} \end{align} completing the proof.
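The conditional-expectation bound \cref{eq:F} and its exponential specialisation \cref{ExpCondExp} lend themselves to a quick simulation check. The sketch below is ours, not part of the proof; the parameter choices ($\lambda=2$, $\alpha=0.05$) are arbitrary illustrations. It verifies that the upper tail of probability $\alpha$ attains the bound $(1-\ln\alpha)/\lambda$ (by memorylessness), while another event of the same probability falls below it.

```python
import numpy as np

# Monte Carlo check of E[Z | A] <= (1 - ln(alpha)) / lambda for Z ~ Exp(lambda):
# among all events A with P(A) = alpha, the conditional mean is maximised by the
# upper tail {Z > F^{-1}(1 - alpha)}. Parameters are illustrative.
rng = np.random.default_rng(0)
lam, alpha, n = 2.0, 0.05, 2_000_000
z = rng.exponential(1.0 / lam, size=n)

t = -np.log(alpha) / lam              # the (1 - alpha)-quantile of Exp(lam)
bound = (1.0 - np.log(alpha)) / lam   # = 1/lam + t

# E[Z | Z > t] = 1/lam + t by the memoryless property.
tail_mean = z[z > t].mean()
assert abs(tail_mean - bound) < 0.01

# A different event of probability ~alpha: a slice strictly below the tail.
lo, hi = np.quantile(z, 0.90), np.quantile(z, 0.95)
slice_mean = z[(z > lo) & (z <= hi)].mean()
assert slice_mean < bound
```

The same comparison underlies the proof's use of \cref{eq:F}: conditioning on the rare event $\mathcal{A}_3$ can inflate the mean of an exponential by at most an additive $-\ln(\alpha)/\lambda$.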
\section*{Acknowledgements} We thank Alan Frieze and Wes Pegden for an initial discussion of the second-shortest path, and Alan for noticing that minimum-cost $k$-flow (\cref{rmk:pathsFk}) was not an open problem but immediately implied by our other results. We also thank two anonymous referees for helpful suggestions. \printbibliography \end{document}
https://arxiv.org/abs/2204.11084
Decompositions of functions defined on finite sets in $\mathbb{R}^d$
A finite subset $M \subset \mathbb{R}^d$ is basic, if for any function $f \colon M \to \mathbb{R}$ there exists a collection of functions $f_1, \ldots, f_d \colon \mathbb{R} \to \mathbb{R}$ such that for each element $(x_1, \ldots, x_d)\in M$ we have $f(x_1, \ldots, x_d) = f_1(x_1) + \ldots + f_d(x_d)$. For certain finite sets, we prove a criterion for a set to be basic, and we show that it cannot be extended to the general case. In addition, we interpret the above criterion in terms of doubly-weighted graphs and give an estimation for the number of elements in certain basic and non-basic subsets.
\section{Introduction} The concept of \textit{basic subsets} arises in connection with Hilbert's thirteenth problem on the superposition of continuous functions. It was first introduced in explicit form by Sternfeld in 1989 \cite{Sternfeld1989}. He called a~subset $M \subset \mathbb{R}^d$ \emph{(continuously) basic}, if for any continuous function $f \colon M \to \mathbb{R}$ there exists a collection of continuous functions $f_1, \ldots, f_d \colon \mathbb{R} \to \mathbb{R}$ such that \begin{equation}\label{eq: basic set condition} f(x_1, \ldots, x_d) = f_1(x_1) + \ldots + f_d(x_d) \end{equation} for each element $(x_1, \ldots, x_d)\in M$. The origin of this concept goes back to 1958~\cite{Arnold1958}, when Arnold raised a question equivalent to the following: what are the basic subsets in the case $d=2$? Sternfeld showed that a closed bounded subset $M\subset\mathbb{R}^2$ is basic if and only if $M$ does not contain arbitrarily long arrays, where an \emph{array} is a (finite or infinite) sequence of points $(x_i,y_i)$ on the plane such that $x_i=x_{i+1}$, $y_i\ne y_{i+1}$ for odd $i$ and $y_i=y_{i+1}$, $x_i\ne x_{i+1}$ for even $i$. In this paper, we focus on finite subsets of $\mathbb{R}^d$, where $d\geqslant2$. In this case, the condition of continuity can be omitted. Without loss of generality, we identify finite subsets of~$\mathbb{R}^d$ with integer points inside the $d$-dimensional cube~$[n]^d$, where $[n] = \{1,\ldots,n\}$. We establish the following bound on the number of elements in basic subsets. \begin{theorem}\label{th: M is basic => |M| < dn - (d-2)} If $M \subset [n]^d$ is a basic subset, then $$ |M| \leqslant dn - (d-1). $$ This bound cannot be improved. \end{theorem} On the other hand, we estimate the number of elements in non-basic subsets that are minimal in the sense of inclusion (to be precise, we call a~non-basic subset $M\subset[n]^d$ \emph{minimal}, if any proper subset $K\subset M$ is basic).
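Theorem~\ref{th: M is basic => |M| < dn - (d-2)} admits a quick computational check: as made precise in Section~\ref{Section:algebra_app}, $M$ is basic exactly when the rows of the associated $0/1$ linear system are linearly independent. The numpy sketch below (the function name and parameter choices are ours, not from the paper) confirms that the "cross" set witnessing sharpness is basic, while any strictly larger subset is not.

```python
import numpy as np
from itertools import product

def basic(M, n, d):
    """Decide whether M ⊂ [n]^d is basic: build the 0/1 matrix whose row for a
    point X has ones in the dn columns indexed by (axis i, value x_i), and test
    whether the rows are linearly independent."""
    A = np.zeros((len(M), d * n))
    for r, x in enumerate(M):
        for i, xi in enumerate(x):
            A[r, i * n + (xi - 1)] = 1.0
    return np.linalg.matrix_rank(A) == len(M)

n, d = 4, 3
# The cross {(k,1,1), (1,k,1), (1,1,k)} of size dn - (d-1) = 10.
cross = sorted({(k, 1, 1) for k in range(1, n + 1)}
               | {(1, k, 1) for k in range(1, n + 1)}
               | {(1, 1, k) for k in range(1, n + 1)})
assert len(cross) == d * n - (d - 1)
assert basic(cross, n, d)

# Adding any further point exceeds the bound of Theorem 1, so basicness is lost.
for x in product(range(1, n + 1), repeat=d):
    if x not in cross:
        assert not basic(cross + [x], n, d)
```

The loop also illustrates the proof mechanism of the theorem: all rows lie in a subspace of dimension $dn-(d-1)$, so any $dn-(d-1)+1$ of them are dependent.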
In order to obtain non-trivial results, we suppose that each layer of $[n]^d$ contains an element of the subset, where by \emph{layers} we mean hyperplanes orthogonal to the coordinate axes, so that each layer can be determined by the equation $x_i=j$, where $i\in[d]$ and $j\in[n]$. \begin{theorem}\label{th: M is non-basic => 2n-1 < |M| < dn - (d-3)} If $M \subset [n]^d$ is a minimal non-basic subset such that every layer of~$[n]^d$ has a non-empty intersection with $M$, then $$ 2n \leqslant |M| \leqslant dn - (d-2). $$ \end{theorem} The condition for a subset $M \subset [n]^d$ to be basic is equivalent to the consistency of the corresponding system of linear equations with integer coefficients. In particular, there exists an algorithm, polynomial in $|M|$, for determining whether the subset $M$ is basic. For $d=2$, however, one can give a~simpler algorithm based on the following criterion: a finite subset $M \subset [n]^2$ is basic if and only if $M$ does not contain a \emph{closed array}, that is, a~non-trivial finite array whose first and last points coincide \cite{Skopenkov2010}. The reader can think about this criterion in the following way. Let us color the points of a closed array in two colors, say, odd points in red and even points in blue. Then every layer contains the same number of red and blue points. Extending the idea of coloring to higher dimensions, we get the following theorem. \begin{theorem} \label{th_set_to_graph} Let $M\subset[n]^d$ be a subset containing two or zero elements in every layer. Then $M$ is non-basic if and only if there exists a non-empty subset $K\subset M$ and a coloring of $K$ in two colors such that every layer contains the same number of elements of each color. \end{theorem} It turns out that the same ideas can be used for studying doubly-weighted graphs as well. In particular, we establish the following result, which is of independent interest.
\begin{theorem} \label{th_basis_graph} A graph $G = (V, E)$ does not contain a bipartite connected component if and only if for any vertex weight function $w_V \colon V \to \mathbb{R}$ there exists an edge weight function $w_E \colon E \to \mathbb{R}$ such that the weight of any vertex is equal to the sum of the weights of the edges incident to this vertex: \[ w_V(v) = \sum\limits_{e\colon v\in e} w_E(e). \] \end{theorem} Assigning colors to elements of a finite subset $M\subset[n]^d$ is a particular case of \emph{weight functions} $M\to\mathbb{Z}$. Identifying the red and blue colors with the values $-1$ and $1$, one can notice that if a non-basic subset $M$ satisfies Theorem~\ref{th_set_to_graph}, then, for a subset $K\subset M$, the sum of the weight function values taken over a fixed layer is zero. This observation leads us to the following definition. We call $f\colon M\to\mathbb{Z}$ an \emph{annihilation function of~$M$} if, for each layer $L$, \[ \sum\limits_{\EuScript X\in L\cap M}f(\EuScript X) = 0. \] It turns out that basic and minimal non-basic subsets admit the following descriptions in terms of their annihilation functions. \begin{lemma}\label{lemma: M is non-basic <=> annihilation weight function} A subset $M \subset [n]^d$ is non-basic if and only if there exists a~non-trivial annihilation function of $M$. \end{lemma} \begin{lemma}\label{lemma: M is min. non-basic <=> annihilation weight function is unique} If a non-basic subset $M \subset [n]^d$ is minimal, then the annihilation function is unique up to multiplication by a constant. \end{lemma} As we have just seen, when a non-basic subset $M$ satisfies the conditions of Theorem~\ref{th_set_to_graph}, one can choose an annihilation function of $M$ whose values are in $\{-1,0,1\}$. In general, however, the values are not bounded.
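Theorem~\ref{th_basis_graph} can be illustrated numerically. In the sketch below (the function name, graphs, and weights are our own illustrative choices), a triangle, being connected and not bipartite, realises an arbitrary vertex weight function via edge weights, whereas a single edge, a bipartite component, does not.

```python
import numpy as np

def solve_edge_weights(num_vertices, edges, w_V):
    """Least-squares solve A w_E = w_V, where A is the vertex-edge incidence
    matrix; report whether the solve is exact."""
    A = np.zeros((num_vertices, len(edges)))
    for j, (u, v) in enumerate(edges):
        A[u, j] = A[v, j] = 1.0
    w_E, *_ = np.linalg.lstsq(A, w_V, rcond=None)
    return w_E, bool(np.allclose(A @ w_E, w_V))

# Triangle (odd cycle, hence not bipartite): always solvable; indeed
# w_E({i,j}) = (w_i + w_j - w_k)/2 works for any vertex weights.
_, ok_triangle = solve_edge_weights(3, [(0, 1), (1, 2), (0, 2)],
                                    np.array([3.0, 5.0, 8.0]))
assert ok_triangle

# A single edge is a bipartite component: w(u) = w(v) = w_E(uv) forces w(u) = w(v).
_, ok_edge = solve_edge_weights(2, [(0, 1)], np.array([1.0, 2.0]))
assert not ok_edge
```

The failing case mirrors the proof of Lemma~\ref{claim2}: the co-boundaries of the two endpoints of an isolated edge are equal, hence linearly dependent.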
More precisely, consider the annihilation function of a minimal non-basic subset~$M$ whose values are setwise coprime integers (we will call such annihilation functions \emph{irreducible}). Then the following result holds. \begin{theorem}\label{th_irreducible_annihilation_function_is_unbounded} For any positive integer $m$ there exist a positive integer $n$ and a minimal non-basic subset $M\subset[n]^3$ whose irreducible annihilation function~$f$ has the value $f(\EuScript X)=m$ for some $\EuScript X \in M$. \end{theorem} Since the values of irreducible annihilation functions are unbounded, one cannot expect to have a criterion similar to Theorem~\ref{th_set_to_graph} in the general case. Nevertheless, one can obtain a simplification by considering an annihilation function $f$ of a subset $M\subset[n]^d$ as a function $f\colon [n]^d \to \mathbb{Z}$ such that $f(\EuScript X)=0$ as long as $\EuScript X\notin M$. The simplest non-trivial annihilation functions of $[n]^d$ correspond to the simplest closed arrays, i.e.\ rectangles (we call such annihilation functions \emph{simple}, see Definition~\ref{def: simple annihilation function}). It turns out that any annihilation function can be generated as a sum of simple annihilation functions. This observation gives us a rather simple way to construct non-trivial non-basic subsets of $[n]^d$ as domains of annihilation functions of $[n]^d$. \begin{theorem} \label{thBoyarov} Every annihilation function of $[n]^d$ can be decomposed into a~finite sum of simple annihilation functions. \end{theorem} The structure of the paper is as follows. In Section~\ref{Section:algebra_app}, we translate the concept of finite basic subsets into the language of systems of linear equations and study their properties.
Section~\ref{Section:estimations}, Section~\ref{Section:graphs_app} and Section~\ref{Section:structure_of_non-basic_sets} are devoted to the proofs of Theorems~\ref{th: M is basic => |M| < dn - (d-2)} and~\ref{th: M is non-basic => 2n-1 < |M| < dn - (d-3)}, Theorems~\ref{th_set_to_graph} and~\ref{th_basis_graph}, and Theorems~\ref{th_irreducible_annihilation_function_is_unbounded} and~\ref{thBoyarov}, respectively. We end the paper with Section~\ref{Section:conclusion}, which discusses possible directions for further research. \section{Finite basic subsets and systems of linear equations} \label{Section:algebra_app} Let $M$ be a subset of $[n]^d$. Then, in algebraic terms, condition~\eqref{eq: basic set condition} for~$M$ to be basic corresponds to the consistency of a system of $|M|$~linear equations in $dn$~variables. Indeed, for every $i \in [d]$, the variable $x_i$ generates $n$~different layers $x_i = 1$, $\ldots$, $x_i = n$. For every $j \in [n]$, the $j$-th layer corresponds to a unique value~$f_i(j)$, which can be interpreted as a~variable~$X_{ij}$. Thus, \eqref{eq: basic set condition} is equivalent to the system of linear equations of the form \begin{equation}\label{eq: basic set condition, algebraic form} f(\EuScript X) = f(x_1,\ldots,x_d) = \sum\limits_{i=1}^d X_{ix_i}, \end{equation} where $\EuScript X = (x_1,\ldots,x_d) \in M$. In particular, if this system is consistent for any function $f\colon M\to\mathbb{R}$, then $M$ is basic, and vice versa. \begin{notation}\label{notation: A_M} We denote by $A_M$ the matrix of system~\eqref{eq: basic set condition, algebraic form}. \end{notation} \begin{remark} The set of all functions $f\colon M \to \mathbb{R}$ forms a vector space of dimension~$|M|$ with the basis of \emph{indicator functions} $\textbf{1}_{\EuScript X}$, \[ \textbf{1}_{\EuScript X}(\EuScript Y) = \left\{\begin{array}{cl} 1, & \EuScript Y = \EuScript X \\ 0, & \EuScript Y \ne \EuScript X, \end{array}\right. \] where $\EuScript X,\EuScript Y \in M$.
Indeed, every function $f$ can be represented as a linear combination of the indicator functions: \[ f = \sum\limits_{\EuScript X \in M}f(\EuScript X)\textbf{1}_{\EuScript X}. \] \end{remark} \begin{proof}[Proof of Lemma~\ref{lemma: M is non-basic <=> annihilation weight function}] A subset $M \subset [n]^d$ is non-basic if and only if the rows of the matrix~$A_M$ of system~\eqref{eq: basic set condition, algebraic form} are linearly dependent. In other words, there exists a non-trivial linear combination of rows which equals zero. Since the rows and columns of~$A_M$ correspond to the elements of $M$ and the layers respectively, the coefficients of this linear combination are the values of an annihilation function. Note that the entries of~$A_M$ are zeroes and ones; therefore, it is possible to choose a~linear combination with integer coefficients. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma: M is min. non-basic <=> annihilation weight function is unique}] The existence of two non-proportional annihilation functions corresponds to the existence of two linearly independent linear combinations of rows of~$A_M$ that equal zero. The latter implies that we can construct a non-trivial linear combination of rows such that at least one of its coefficients is zero: indeed, if $f$ and $g$ are non-proportional annihilation functions, pick $\EuScript X\in M$ with $f(\EuScript X)\ne0$; then $g(\EuScript X)f - f(\EuScript X)g$ is a non-trivial annihilation function vanishing at $\EuScript X$. Hence, there is a proper subset $K\subset M$ which is non-basic, contradicting the minimality of $M$. \end{proof} \begin{remark} The converse of Lemma~\ref{lemma: M is min. non-basic <=> annihilation weight function is unique} does not hold. For example, let $$ M=\{(1,1,1),(2,1,1),(1,2,1),(1,1,2),(2,2,1)\}. $$ Then $M$ is not minimal since we can eliminate $(1,1,2)$ and obtain a~closed plane array (see Fig.~\ref{figure: non-minimal non-basic set}). Nevertheless, any annihilation function of $M$ is the product of $$ f = \textbf{1}_{(1,1,1)} - \textbf{1}_{(2,1,1)} + \textbf{1}_{(2,2,1)} - \textbf{1}_{(1,2,1)} $$ and a constant. We can see that in this example $f(1,1,2) = 0$.
In fact, the uniqueness of the annihilation function implies minimality of a non-basic subset $M$, if we additionally require that $f(\EuScript X)\ne0$ for any $\EuScript X\in M$. \end{remark} \begin{center} \begin{tikzpicture}[scale=1.2,x={(1cm,0cm)},y={(0.54cm,0.36cm)},z={(0cm,1cm)},line width=.5pt] \begin{scope} \coordinate (a1) at (0,0,0); \coordinate (a2) at (1,0,0); \coordinate (a3) at (1,1,0); \coordinate (a4) at (0,1,0); \coordinate (b1) at (0,0,1); \coordinate (b2) at (1,0,1); \coordinate (b3) at (1,1,1); \coordinate (b4) at (0,1,1); \draw [line width=.8pt] (b1)--(b4)--(b3)--(b2)--(b1)--(a1)--(a2)--(b2); \draw [line width=.8pt] (a2)--(a3)--(b3); \draw [dashed] (a1)--(a4); \draw [dashed] (b4)--(a4)--(a3); \foreach \EuScript P in {b2,b3,b4}{ \filldraw[white] (\EuScript P) circle (1.7pt); \draw (\EuScript P) circle (1.7pt); } \foreach \EuScript P in {a1,a2,a3,a4,b1} \filldraw (\EuScript P) circle (1.7pt); \refstepcounter{ris} \draw (0.5,0.5,-0.6) node {Figure \arabic{ris}.\label{figure: non-minimal non-basic set}}; \end{scope} \end{tikzpicture} \end{center} \begin{lemma}\label{lemma: one point in layer => basic} A subset $M \subset [n]^d$ is basic, if for any non-empty subset $K \subset M$ there is a layer containing only one element from $K$. \end{lemma} \begin{proof} In terms of matrix $A_M$, the conditions of Lemma~\ref{lemma: one point in layer => basic} mean that any collection of rows of $A_M$ generates a submatrix having a column with the unique non-zero entry. Hence, there is no non-trivial linear combination of rows of $A_M$ which equals zero. This implies that $M$ is basic. \end{proof} \section{Estimations for basic and non-basic subsets} \label{Section:estimations} The main goal of this section is to prove Theorems~\ref{th: M is basic => |M| < dn - (d-2)} and~\ref{th: M is non-basic => 2n-1 < |M| < dn - (d-3)}. 
\begin{lemma}\label{lemma: estimations} Let $n\in\mathbb{N}$ and $Y_{ij}$ be coordinates in $\mathbb{R}^{dn}$, where $i\in[d]$ and $j\in[n]$. If the entries of each of $m$ vectors from $\mathbb{R}^{dn}$ satisfy the $(d-1)$ equations \begin{equation}\label{eq: subspace condition} Y_{11} + \ldots + Y_{1n} = \ldots = Y_{d1} + \ldots + Y_{dn} \end{equation} and $m > dn-(d-1)$, then these $m$ vectors are linearly dependent. \end{lemma} \begin{proof} Each vector is contained in the subspace $V \subset\mathbb{R}^{dn}$ determined by condition~\eqref{eq: subspace condition}. Hence, $\dim V = dn - (d-1) < m$, which implies that any $m$~vectors in $V$ are linearly dependent. \end{proof} \begin{proof}[Proof of Theorem~\ref{th: M is basic => |M| < dn - (d-2)}] Let us suppose that $|M| > dn - (d-1)$. Then the rows of the matrix~$A_M$ satisfy the conditions of Lemma~\ref{lemma: estimations} for $m=|M|$. Hence, they are linearly dependent, meaning that $M$ is not basic. To show that the inequality cannot be improved, it is sufficient to present an appropriate example. For this purpose, set $M$ to be as follows: $$ M = \{(k,1,\ldots,1), (1,k,\ldots,1), \ldots, (1,1,\ldots,k) \mid k\in[n]\} $$ (Figure~\ref{figure: maximal basic set in 4x4x4} illustrates the set $M$ in the case $d=3$, $n=4$). Then $M$ is basic by Lemma~\ref{lemma: one point in layer => basic} and $|M| = dn - (d-1)$.
\end{proof} \begin{center} \begin{tikzpicture}[scale=1.2,x={(1cm,0cm)},y={(0.3cm,0.2cm)},z={(0cm,1cm)},line width=.5pt] \begin{scope} \coordinate (a) at (0,0,0); \coordinate (b) at (1,0,0); \coordinate (c) at (2,0,0); \coordinate (d) at (3,0,0); \coordinate (e) at (0,1,0); \coordinate (f) at (0,2,0); \coordinate (g) at (0,3,0); \coordinate (h) at (0,0,1); \coordinate (i) at (0,0,2); \coordinate (j) at (0,0,3); \foreach \EuScript P in {0,1,2,3}{ \draw [line width=.8pt] (0,\EuScript P,3)--++(3,0,0)--++(0,0,-3); \draw [line width=.8pt] (0,0,\EuScript P)--++(3,0,0)--++(0,3,0); } \foreach \EuScript P in {0,1,2}{ \draw [line width=.8pt] (\EuScript P,0,0)--++(0,0,3)--++(0,3,0); \draw [dashed] (0,\EuScript P+1,3)--++(0,0,-3)--++(3,0,0); \draw [dashed] (\EuScript P,0,0)--++(0,3,0)--++(0,0,3); } \foreach \EuScript P in {1,2}{ \draw [dashed] (0,0,\EuScript P)--++(0,3,0)--++(3,0,0); \foreach \EuScript Q in {1,2}{ \draw [dashed] (0,\EuScript P,\EuScript Q)--++(3,0,0); \draw [dashed] (\EuScript P,0,\EuScript Q)--++(0,3,0); \draw [dashed] (\EuScript P,\EuScript Q,0)--++(0,0,3); } } \foreach \EuScript X in {0,1,2,3}{ \foreach \EuScript Y in {0,1,2,3}{ \foreach \z in {0,1,2,3}{ \filldraw[white] (\EuScript X,\EuScript Y,\z) circle (1.7pt); \draw (\EuScript X,\EuScript Y,\z) circle (1.7pt); } } } \foreach \EuScript P in {a,b,c,d,e,f,g,h,i,j} \filldraw (\EuScript P) circle (1.7pt); \refstepcounter{ris} \draw (1.5,1.5,-0.7) node {Figure \arabic{ris}.\label{figure: maximal basic set in 4x4x4}}; \end{scope} \end{tikzpicture} \end{center} \begin{remark} Slightly modifying Lemma~\ref{lemma: estimations}, we can prove that if a basic subset~$M$ belongs to a~parallelepiped of size $n_1 \times \ldots \times n_d$, then $$ |M| \leqslant (n_1 + \ldots + n_d) - (d-1). $$ \end{remark} \begin{proof}[Proof of Theorem~\ref{th: M is non-basic => 2n-1 < |M| < dn - (d-3)}] To obtain the upper bound, it is sufficient to use Theorem~\ref{th: M is basic => |M| < dn - (d-2)}. 
To obtain the lower bound, suppose that $|M|<2n$. This supposition immediately implies that there exists a layer $L$ such that $|L\cap M|<2$. Due to the conditions of the statement, $L\cap M\ne\emptyset$, hence, there is a single element $\EuScript X\in L\cap M$. As a consequence, if $M$ is non-basic, so is $M\setminus\{\EuScript X\}$ (any annihilation function of $M$ vanishes at $\EuScript X$, since its sum over $L$ reduces to $f(\EuScript X)$). Therefore, $M$ is not minimal, which contradicts the assumption of the theorem. \end{proof} \begin{remark} \label{Remark: lower bound is reachable} The lower bound of Theorem~\ref{th: M is non-basic => 2n-1 < |M| < dn - (d-3)} cannot be improved. To see this, consider $$ M = \{(k,k,\ldots,k), (k+1,k,\ldots,k) \mid k\in[n-1]\} \cup \{(n,n,\ldots,n)\} \cup \{(1,n,\ldots,n)\}. $$ (see Fig.~\ref{figure: minimal non-basic set in 4x4x4}). We can see that $|M|=2n$ and $M$ is non-basic, since it is annihilated by $$ f = \sum\limits_{k=1}^{n-1} \Big(\textbf{1}_{(k,k,\ldots,k)} - \textbf{1}_{(k+1,k,\ldots,k)}\Big) + \Big(\textbf{1}_{(n,n,\ldots,n)} - \textbf{1}_{(1,n,\ldots,n)}\Big).
$$ \begin{center} \begin{tikzpicture}[scale=1.2,x={(1cm,0cm)},y={(0.3cm,0.2cm)},z={(0cm,1cm)},line width=.5pt] \begin{scope} \coordinate (a) at (0,0,0); \coordinate (b) at (1,0,0); \coordinate (c) at (1,1,1); \coordinate (d) at (2,1,1); \coordinate (e) at (2,2,2); \coordinate (f) at (3,2,2); \coordinate (g) at (3,3,3); \coordinate (h) at (0,3,3); \foreach \EuScript P in {0,1,2,3}{ \draw [line width=.8pt] (0,\EuScript P,3)--++(3,0,0)--++(0,0,-3); \draw [line width=.8pt] (0,0,\EuScript P)--++(3,0,0)--++(0,3,0); } \foreach \EuScript P in {0,1,2}{ \draw [line width=.8pt] (\EuScript P,0,0)--++(0,0,3)--++(0,3,0); \draw [dashed] (0,\EuScript P+1,3)--++(0,0,-3)--++(3,0,0); \draw [dashed] (\EuScript P,0,0)--++(0,3,0)--++(0,0,3); } \foreach \EuScript P in {1,2}{ \draw [dashed] (0,0,\EuScript P)--++(0,3,0)--++(3,0,0); \foreach \EuScript Q in {1,2}{ \draw [dashed] (0,\EuScript P,\EuScript Q)--++(3,0,0); \draw [dashed] (\EuScript P,0,\EuScript Q)--++(0,3,0); \draw [dashed] (\EuScript P,\EuScript Q,0)--++(0,0,3); } } \foreach \EuScript X in {0,1,2,3}{ \foreach \EuScript Y in {0,1,2,3}{ \foreach \z in {0,1,2,3}{ \filldraw[white] (\EuScript X,\EuScript Y,\z) circle (1.7pt); \draw (\EuScript X,\EuScript Y,\z) circle (1.7pt); } } } \foreach \EuScript P in {a,b,c,d,e,f,g,h} \filldraw (\EuScript P) circle (1.7pt); \refstepcounter{ris} \draw (1.5,1.5,-0.7) node {Figure \arabic{ris}.\label{figure: minimal non-basic set in 4x4x4}}; \end{scope} \end{tikzpicture} \end{center} On the other hand, it is not clear whether it is possible to improve the upper bound or not. For instance, if we add an element to the basic subset $$ M = \{(k,1,\ldots,1), (1,k,\ldots,1), \ldots, (1,1,\ldots,k) \mid k\in[n]\} $$ discussed in the proof of Theorem~\ref{th: M is basic => |M| < dn - (d-2)}, then we get a non-basic set which is not minimal. 
Say, if we add $\EuScript X=(x_1,\ldots,x_d)$, where $x_k>1$ for all $k\in[d]$, then the set $M\cup\{\EuScript X\}$ is annihilated by the function $$ (d-1)\mathbf{1}_{(1,1,\ldots,1)} + \mathbf{1}_{\EuScript X} - \big(\mathbf{1}_{(x_1,1,\ldots,1)} + \mathbf{1}_{(1,x_2,\ldots,1)} + \ldots + \mathbf{1}_{(1,1,\ldots,x_d)}\big), $$ and hence, $M\cup\{\EuScript X\}$ has a non-basic subset of size $d+2$. \end{remark} \section{Criterion for certain subsets to be basic} \label{Section:graphs_app} The main goal of this section is to prove Theorem~\ref{th_set_to_graph} and Theorem~\ref{th_basis_graph}. The second part of the section assumes that the reader is familiar with the basics of hypergraph theory. For an extensive account of this topic, we refer, for example, to~\cite{Bretto2013}. \begin{proof}[Proof of Theorem~\ref{th_set_to_graph}] Let us assume that there is an appropriate coloring of a~subset $K\subset M$. Then this coloring corresponds to the annihilation function $f\colon K\to\{\pm 1\}$. Extending $f$ to the set $M$ by defining $f(\EuScript X)=0$ for $\EuScript X\in M\setminus K$, we obtain an annihilation function of $M$. Hence, by Lemma~\ref{lemma: M is non-basic <=> annihilation weight function}, $M$~is non-basic. Conversely, let $M$ be non-basic. By Lemma~\ref{lemma: M is non-basic <=> annihilation weight function} this implies that there exists a~non-trivial annihilation function $f$ of $M$. Denote $K\subset M$ to be the domain of $f$ and define a function $g\colon M\to\{-1,0,1\}$ by \[ g(\EuScript X) = \left\{ \begin{array}{rl} f(\EuScript X)/|f(\EuScript X)|, & \mbox{if } \EuScript X\in K\\ 0, & \mbox{if } \EuScript X\in M\setminus K. \end{array} \right. \] Since every layer contains two or zero elements of $M$, the function $g$ is annihilation. Hence, it provides us a desired coloring. 
\end{proof} A subset $M\subset[n]^d$ can be naturally considered as a~hypergraph whose vertices are the elements of~$M$ and whose hyperedges are the subsets of elements of $M$ that lie in the same layer. We denote this hypergraph~$G(M)$. If every layer contains two or zero elements of $M$, then the hypergraph becomes a~graph, possibly with multiple edges. The notion of finite basic subsets can be naturally extended to hypergraphs as follows. \begin{definition} \label{basis_graph} We call a hypergraph $G = (V, E)$ \emph{basic}, if for any vertex weight function $w_V \colon V \to \mathbb{R}$ there exists an edge weight function $w_E \colon E \to \mathbb{R}$ such that the weight of any vertex is equal to the sum of the weights of the edges incident to this vertex. In other words, for any $v \in V$: \begin{equation}\label{eq: basic graph condition} w_V(v) = \sum\limits_{e\colon v \in e} w_E(e). \end{equation} \end{definition} \begin{lemma} \label{lemma: set_to_hypergraph} A subset $M\subset[n]^d$ is basic if and only if the corresponding hypergraph~$G(M)$ is basic. \end{lemma} \begin{proof} It follows directly from the observation that there is a natural bijection between the function $f$ in Eq.~\eqref{eq: basic set condition} and the vertex weight function $w_V$ in Eq.~\eqref{eq: basic graph condition}. Thus, a collection of functions $f_1, \ldots, f_d$ determines the edge weight function $w_E$ and vice versa. \end{proof} \begin{definition} \label{co-boundary} Let $G = (V, E)$ be a hypergraph. The \emph{co-boundary} of a~vertex $v \in V$ is an edge weight function $\delta_v$ defined by the formula $$ \delta_v(e) = \left\{\begin{array}{cc} 1, & v \in e \\ 0, & v \notin e. \end{array}\right. $$ In other words, the co-boundary $\delta_v$ is the indicator function of the subset of all edges incident to $v$. Also, the reader can interpret it as a row of the incidence matrix of $G$ corresponding to the vertex $v$.
\end{definition} \begin{remark} The notion of co-boundary comes from \cite{Prasolov2006}, although there it has a~slightly different form. Note that the co-boundary of a vertex of a~graph is a particular case of the co-boundary of a vertex of a~hypergraph. \end{remark} \begin{lemma} \label{claim1} A hypergraph $G = (V, E)$ is basic if and only if the co-boundaries of its vertices are linearly independent in $\mathbb{R}^{|E|}$. \end{lemma} \begin{proof} Let $V = \{v_1,\ldots,v_n\}$, $E = \{e_1,\ldots,e_m\}$ and $A$ be the incidence matrix of the hypergraph $G$. Then $G$ is basic if and only if for any vertex weight function $w_V$ there exists an edge weight function $w_E$ such that $$ A\begin{pmatrix} w_E(e_1) \\ \vdots \\ w_E(e_m) \\ \end{pmatrix} = \begin{pmatrix} w_V(v_1) \\ \vdots \\ w_V(v_n) \\ \end{pmatrix}, $$ meaning that the rows of $A$ (i.e.\ the co-boundaries) are linearly independent. \end{proof} \begin{lemma} \label{claim2} The co-boundaries of vertices of a graph $G=(V,E)$ are linearly independent in $\mathbb{R}^{|E|}$ if and only if $G$ does not contain a bipartite connected component. \end{lemma} \begin{proof} Assume that there is a linear dependence \begin{equation}\label{eq: dependence} \sum \limits_{i = 1}^{n} \lambda_i \delta_{v_i} = 0. \end{equation} If there is an edge between vertices $v_i$ and $v_j$, then $\lambda_i = -\lambda_j$. Thus, in a connected component containing a vertex with a non-zero coefficient (such a component exists whenever the dependence is non-trivial), the vertices are divided into two non-empty classes with coefficients $\lambda_i$ and $-\lambda_i$ respectively. Vertices from each of the classes are adjacent only to vertices from the other class. Hence, this component of the graph is bipartite. Conversely, if $G$ contains a bipartite component $C$ with parts $A$ and $B$, then there exists a linear dependence of form~\eqref{eq: dependence} with $$ \lambda_i = \left\{\begin{array}{rl} 1, & v_i \in A \\ -1, & v_i \in B \\ 0, & v_i \notin C. \end{array}\right.
$$ \end{proof} \begin{proof}[Proof of Theorem~\ref{th_basis_graph}] It is sufficient to apply Lemma~\ref{claim1} and Lemma~\ref{claim2}. \end{proof} \begin{corollary}\label{cor:graph_to_set} Let the intersection of a subset $M\subset[n]^d$ with each layer consist of two or zero points. Then $M$ is basic if and only if the corresponding graph $G(M)$ does not contain a bipartite connected component. \end{corollary} \begin{example} \label{example1} Let $M = \{(2,1,1), (1,2,1), (1,1,2), (2,2,2)\}$ (Fig.~\ref{figure: basic set, 4 points}). Then the corresponding graph $G(M)$ is the complete graph $K_4$ with four vertices. Since $K_4$ is connected and not bipartite, in accordance with Corollary~\ref{cor:graph_to_set} the set $M$ is basic. \end{example} \begin{center} \begin{tikzpicture}[scale=1.2,x={(1cm,0cm)},y={(0.54cm,0.36cm)},z={(0cm,1cm)},line width=.5pt] \begin{scope} \coordinate (a1) at (0,0,0); \coordinate (a2) at (1,0,0); \coordinate (a3) at (1,1,0); \coordinate (a4) at (0,1,0); \coordinate (b1) at (0,0,1); \coordinate (b2) at (1,0,1); \coordinate (b3) at (1,1,1); \coordinate (b4) at (0,1,1); \draw [line width=.8pt] (b1)--(b4)--(b3)--(b2)--(b1)--(a1)--(a2)--(b2); \draw [line width=.8pt] (a2)--(a3)--(b3); \draw [dashed] (a1)--(a4); \draw [dashed] (b4)--(a4)--(a3); \foreach \EuScript P in {a1,a3,b2,b4}{ \filldraw[white] (\EuScript P) circle (1.7pt); \draw (\EuScript P) circle (1.7pt); } \foreach \EuScript P in {a2,a4,b1,b3} \filldraw (\EuScript P) circle (1.7pt); \refstepcounter{ris} \draw (0.5,0.5,-0.6) node {Figure \arabic{ris}.\label{figure: basic set, 4 points}}; \end{scope} \end{tikzpicture} \end{center} \section{Structure of non-basic subsets}\label{Section:structure_of_non-basic_sets} By Lemma~\ref{lemma: M is min. non-basic <=> annihilation weight function is unique}, if a non-basic subset $M \subset [n]^d$ is minimal, then there exists the unique annihilation function up to multiplying by a constant. 
Since the matrix~$A_M$ has integer entries, the annihilation function $f$ can be chosen with integer values that are setwise coprime, so that the notion of an irreducible annihilation function is well-defined. The name \emph{irreducible} is chosen by analogy with integer fractions. \begin{example} \label{example: non-basic 5 points in 2x2x2} If, as shown in Fig.~\ref{figure: non-basic set, 5 points}, $$ M = \{(1,1,1), (2,1,1), (1,2,1), (1,1,2), (2,2,2)\}, $$ then the irreducible annihilation function of $M$ has the following form: $$ f = 2\cdot\textbf{1}_{(1,1,1)} - \textbf{1}_{(2,1,1)} - \textbf{1}_{(1,2,1)} - \textbf{1}_{(1,1,2)} + \textbf{1}_{(2,2,2)}. $$ \end{example} \begin{center} \begin{tikzpicture}[scale=1.2,x={(1cm,0cm)},y={(0.54cm,0.36cm)},z={(0cm,1cm)},line width=.5pt] \begin{scope} \coordinate (a1) at (0,0,0); \coordinate (a2) at (1,0,0); \coordinate (a3) at (1,1,0); \coordinate (a4) at (0,1,0); \coordinate (b1) at (0,0,1); \coordinate (b2) at (1,0,1); \coordinate (b3) at (1,1,1); \coordinate (b4) at (0,1,1); \draw [line width=.8pt] (b1)--(b4)--(b3)--(b2)--(b1)--(a1)--(a2)--(b2); \draw [line width=.8pt] (a2)--(a3)--(b3); \draw [dashed] (a1)--(a4); \draw [dashed] (b4)--(a4)--(a3); \foreach \p in {a3,b2,b4}{ \filldraw[white] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {a1,a2,a4,b1,b3} \filldraw (\p) circle (1.7pt); \refstepcounter{ris} \draw (0.5,0.5,-0.6) node {Figure \arabic{ris}.\label{figure: non-basic set, 5 points}}; \end{scope} \end{tikzpicture} \end{center} \begin{proof}[Proof of Theorem~\ref{th_irreducible_annihilation_function_is_unbounded}] Let $a_1,\ldots,a_m,b_1,\ldots,b_m,c_1,\ldots,c_m,d,e$ be pairwise distinct integers.
Define the set $M$ to consist of the following $6m+2$ elements (see Fig.~\ref{figure: non-basic set, 6k+2 points}): \begin{itemize} \item $3m$ points with coordinates $(d,a_k,a_k)$, $(b_k,d,b_k)$, $(c_k,c_k,d)$, $k \in [m]$; \item $3m$ points with coordinates $(e,a_k,a_{k+1})$, $(b_k,e,b_{k+1})$, $(c_k,c_{k+1},e)$,\\ $k \in [m]$ (here, we assume that $a_{m+1}=a_1$, $b_{m+1}=b_1$ and $c_{m+1}=c_1$); \item $1$ point with coordinates $(d,d,d)$; \item $1$ point with coordinates $(e,e,e)$. \end{itemize} \begin{center} \begin{tikzpicture}[line width=.8pt] \begin{scope} \filldraw[rounded corners, green, opacity = 0.2] (6.0,2.3) -- (-0.3,1.2) .. controls (-0.5,0.5) and (3.5,0.5) .. (3.3,1.2)--(6.0,2)--cycle; \filldraw[rounded corners, yellow, opacity = 0.4] (5.5,2.3) -- (3.7,1.2) .. controls (3.5,0.5) and (7.5,0.5) .. (7.3,1.2)--cycle; \filldraw[rounded corners, red, opacity = 0.2] (5.0,2.3) -- (11.3,1.2) .. controls (11.5,0.5) and (7.5,0.5) .. (7.7,1.2)--(5.0,2)--cycle; \draw[rounded corners] (6.0,2.3) -- (-0.3,1.2) .. controls (-0.5,0.5) and (3.5,0.5) .. (3.3,1.2)--(6.0,2)--cycle; \draw[rounded corners] (5.5,2.3) -- (3.7,1.2) .. controls (3.5,0.5) and (7.5,0.5) .. (7.3,1.2)--cycle; \draw[rounded corners] (5.0,2.3) -- (11.3,1.2) .. controls (11.5,0.5) and (7.5,0.5) .. (7.7,1.2)--(5.0,2)--cycle; \filldraw[rounded corners, red, opacity = 0.2] (6.0,-1.3) -- (-0.3,-0.2) .. controls (-0.5,0.5) and (3.5,0.5) .. (3.3,-0.2)--(6.0,-1)--cycle; \filldraw[rounded corners, yellow, opacity = 0.4] (5.5,-1.3) -- (3.7,-0.2) .. controls (3.5,0.5) and (7.5,0.5) .. (7.3,-0.2)--cycle; \filldraw[rounded corners, green, opacity = 0.2] (5.0,-1.3) -- (11.3,-0.2) .. controls (11.5,0.5) and (7.5,0.5) .. (7.7,-0.2)--(5.0,-1)--cycle; \draw[rounded corners] (6.0,-1.3) -- (-0.3,-0.2) .. controls (-0.5,0.5) and (3.5,0.5) .. (3.3,-0.2)--(6.0,-1)--cycle; \draw[rounded corners] (5.5,-1.3) -- (3.7,-0.2) .. controls (3.5,0.5) and (7.5,0.5) .. 
(7.3,-0.2)--cycle; \draw[rounded corners] (5.0,-1.3) -- (11.3,-0.2) .. controls (11.5,0.5) and (7.5,0.5) .. (7.7,-0.2)--(5.0,-1)--cycle; \foreach \x in {0,...,2}{ \foreach \y in {0,...,2}{ \draw (4*\x+\y,0)--++(1,1); } \draw (4*\x,1)--++(3,-1); } \foreach \x in {0,...,11}{ \draw (\x,0)--++(0,1); \filldraw (\x,0) circle (1.7pt); \filldraw (\x,1) circle (1.7pt); } \filldraw (5.5,2) circle (1.7pt); \filldraw (5.5,-1) circle (1.7pt); \refstepcounter{ris} \draw (5.5,-1.8) node {Figure \arabic{ris}.\label{figure: non-basic set, 6k+2 points}}; \end{scope} \end{tikzpicture} \end{center} Then the function $f$ taking the values $1$, $-1$, $-m$ and $m$ at the points of the first, second, third and fourth groups, respectively, is an irreducible annihilation function. Now we show that the set $M$ is minimal. Indeed, the $2m$ points with coordinates $(d,a_k,a_k)$ and $(e,a_k,a_{k+1})$ constitute a cycle of order $2m$, whose elements either all belong or all do not belong to a minimal non-basic subset $N$ of $M$. The two other cycles behave in the same way. To finish the proof, note that the points $(d,d,d)$ and $(e,e,e)$ necessarily belong to $N$, and once $(e,e,e)$ is in $N$, each of the three cycles is as well. Hence, all $6m+2$ points are in~$N$, and $N=M$. \end{proof} \begin{definition}\label{def: simple annihilation function} Let $\EuScript P,\EuScript Q,\EuScript R,\EuScript S$ be the consecutive vertices of a rectangle whose sides are parallel to the coordinate axes. A \emph{simple annihilation function} is $$ f_{\EuScript P\EuScript Q\EuScript R\EuScript S} = \textbf{1}_{\EuScript P} - \textbf{1}_{\EuScript Q} + \textbf{1}_{\EuScript R} - \textbf{1}_{\EuScript S}. $$ \end{definition} By double induction on $d$ and $n$, one can verify that every annihilation function of $[n]^d$ can be decomposed into a~finite sum of simple annihilation functions (Theorem~\ref{thBoyarov}). The proof is routine and will be omitted.
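As a small machine check (a Python sketch of our own, not part of the original text), one can verify directly that the function $f$ from Example~\ref{example: non-basic 5 points in 2x2x2} annihilates every layer of $[2]^3$:

```python
# Values of the irreducible annihilation function f on the 5-point set M
# from the example (coordinates in {1,2}^3); all other points carry 0.
f = {(1, 1, 1): 2, (2, 1, 1): -1, (1, 2, 1): -1, (1, 1, 2): -1, (2, 2, 2): 1}

# A layer of [2]^3 is obtained by fixing one coordinate to one value;
# an annihilation function must sum to zero over every layer.
for axis in range(3):
    for value in (1, 2):
        assert sum(v for p, v in f.items() if p[axis] == value) == 0
```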
\begin{example} \label{example: non-basic 5 points in 2x2x2 revisited} For the set $M$ from Example~\ref{example: non-basic 5 points in 2x2x2}, its annihilation function $$ f = 2\cdot\textbf{1}_{(1,1,1)} - \textbf{1}_{(2,1,1)} - \textbf{1}_{(1,2,1)} - \textbf{1}_{(1,1,2)} + \textbf{1}_{(2,2,2)} $$ is decomposed as the sum of the simple annihilation functions $$ \textbf{1}_{(1,1,1)} - \textbf{1}_{(1,2,1)} + \textbf{1}_{(2,2,1)} - \textbf{1}_{(2,1,1)}, $$ $$ \textbf{1}_{(1,1,1)} - \textbf{1}_{(1,1,2)} + \textbf{1}_{(2,1,2)} - \textbf{1}_{(2,1,1)} $$ and $$ \textbf{1}_{(2,1,1)} - \textbf{1}_{(2,2,1)} + \textbf{1}_{(2,2,2)} - \textbf{1}_{(2,1,2)}. $$ \end{example} \section{Conclusion} \label{Section:conclusion} As we have seen in Section~\ref{Section:estimations}, Theorem~\ref{th: M is non-basic => 2n-1 < |M| < dn - (d-3)} provides some bounds for the number of elements in certain non-basic subsets of $[n]^d$. While the lower bound is reachable (see Remark~\ref{Remark: lower bound is reachable}), it is not clear whether the upper bound and the intermediate values are reachable as well. This observation leads us to the following question. \begin{question}\label{question: reachability} Are the intermediate values in Theorem~\ref{th: M is non-basic => 2n-1 < |M| < dn - (d-3)} reachable? In other words, is it true that for any integer $k$, $2n < k \leqslant dn - (d-2)$, there exists a~minimal non-basic subset $M\subset[n]^d$ of size $k$ such that every layer of~$[n]^d$ has a non-empty intersection with $M$? \end{question} Figures~\ref{figure: non-basic sets in 3x3x3},~\ref{figure: non-basic sets, n=4, |M|=8,9} and~\ref{figure: non-basic sets, n=4, |M|=10,11} show that for $d=3$, $n\in\{3,4\}$ the answer to Question~\ref{question: reachability} is positive (here, the color of a point indicates the value of the irreducible annihilation function: red, blue, green and black are reserved for $-1$, $1$, $-2$ and $2$, respectively).
\begin{center} \begin{tikzpicture}[scale=1.2,x={(1cm,0cm)},y={(0.4cm,0.3cm)},z={(0cm,1cm)},line width=.5pt]
\begin{scope}
\coordinate (a) at (2,0,0); \coordinate (b) at (2,1,0); \coordinate (c) at (1,0,1); \coordinate (d) at (1,2,1); \coordinate (e) at (0,1,2); \coordinate (f) at (0,2,2);
\foreach \p in {0,1,2}{ \draw [line width=.8pt] (0,\p,2)--++(2,0,0)--++(0,0,-2); \draw [line width=.8pt] (0,0,\p)--++(2,0,0)--++(0,2,0); } \foreach \p in {0,1}{ \draw [line width=.8pt] (\p,0,0)--++(0,0,2)--++(0,2,0); \draw [dashed] (0,\p+1,2)--++(0,0,-2)--++(2,0,0); \draw [dashed] (\p,0,0)--++(0,2,0)--++(0,0,2); } \draw [dashed] (0,0,1)--++(0,2,0)--++(2,0,0); \draw [dashed] (0,1,1)--++(2,0,0); \draw [dashed] (1,0,1)--++(0,2,0); \draw [dashed] (1,1,0)--++(0,0,2);
\foreach \x in {0,1,2}{ \foreach \y in {0,1,2}{ \foreach \z in {0,1,2}{ \filldraw[white] (\x,\y,\z) circle (1.7pt); \draw (\x,\y,\z) circle (1.7pt); } } }
\foreach \p in {a,d,e}{ \filldraw[red] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {b,c,f}{ \filldraw[blue] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); }
\end{scope}
\begin{scope}[xshift=4cm]
\coordinate (a) at (2,0,0); \coordinate (b) at (2,2,1); \coordinate (c) at (1,1,1); \coordinate (d) at (1,2,2); \coordinate (e) at (0,0,0); \coordinate (f) at (0,1,2); \coordinate (g) at (0,2,2);
\foreach \p in {0,1,2}{ \draw [line width=.8pt] (0,\p,2)--++(2,0,0)--++(0,0,-2); \draw [line width=.8pt] (0,0,\p)--++(2,0,0)--++(0,2,0); } \foreach \p in {0,1}{ \draw [line width=.8pt] (\p,0,0)--++(0,0,2)--++(0,2,0); \draw [dashed] (0,\p+1,2)--++(0,0,-2)--++(2,0,0); \draw [dashed] (\p,0,0)--++(0,2,0)--++(0,0,2); } \draw [dashed] (0,0,1)--++(0,2,0)--++(2,0,0); \draw [dashed] (0,1,1)--++(2,0,0); \draw [dashed] (1,0,1)--++(0,2,0); \draw [dashed] (1,1,0)--++(0,0,2);
\foreach \x in {0,1,2}{ \foreach \y in {0,1,2}{ \foreach \z in {0,1,2}{ \filldraw[white] (\x,\y,\z) circle (1.7pt); \draw (\x,\y,\z) circle (1.7pt); } } }
\foreach \p in {g} \filldraw (\p) circle (1.7pt); \foreach \p in {b,d,e,f}{ \filldraw[red] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {a,c}{ \filldraw[blue] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); }
\refstepcounter{ris} \draw (1,1,-0.7) node {Figure \arabic{ris}.\label{figure: non-basic sets in 3x3x3}};
\end{scope}
\begin{scope}[xshift=8cm]
\coordinate (a) at (2,0,0); \coordinate (b) at (2,0,1); \coordinate (c) at (2,1,0); \coordinate (d) at (1,0,0); \coordinate (e) at (1,2,2); \coordinate (f) at (0,1,2); \coordinate (g) at (0,2,1); \coordinate (h) at (0,2,2);
\foreach \p in {0,1,2}{ \draw [line width=.8pt] (0,\p,2)--++(2,0,0)--++(0,0,-2); \draw [line width=.8pt] (0,0,\p)--++(2,0,0)--++(0,2,0); } \foreach \p in {0,1}{ \draw [line width=.8pt] (\p,0,0)--++(0,0,2)--++(0,2,0); \draw [dashed] (0,\p+1,2)--++(0,0,-2)--++(2,0,0); \draw [dashed] (\p,0,0)--++(0,2,0)--++(0,0,2); } \draw [dashed] (0,0,1)--++(0,2,0)--++(2,0,0); \draw [dashed] (0,1,1)--++(2,0,0); \draw [dashed] (1,0,1)--++(0,2,0); \draw [dashed] (1,1,0)--++(0,0,2);
\foreach \x in {0,1,2}{ \foreach \y in {0,1,2}{ \foreach \z in {0,1,2}{ \filldraw[white] (\x,\y,\z) circle (1.7pt); \draw (\x,\y,\z) circle (1.7pt); } } }
\foreach \p in {h} \filldraw (\p) circle (1.7pt); \foreach \p in {a}{ \filldraw[green] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {e,f,g}{ \filldraw[red] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {b,c,d}{ \filldraw[blue] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); }
\end{scope} \end{tikzpicture} \end{center}
\begin{center} \begin{tikzpicture}[scale=1.2,x={(1cm,0cm)},y={(0.3cm,0.2cm)},z={(0cm,1cm)},line width=.5pt]
\begin{scope}
\coordinate (a) at (0,0,0); \coordinate (b) at (1,0,0); \coordinate (c) at (1,1,1); \coordinate (d) at (2,1,1); \coordinate (e) at (2,2,2); \coordinate (f) at (3,2,2); \coordinate (g) at (3,3,3); \coordinate (h) at (0,3,3);
\foreach \p in {0,1,2,3}{ \draw [line width=.8pt] (0,\p,3)--++(3,0,0)--++(0,0,-3); \draw [line width=.8pt] (0,0,\p)--++(3,0,0)--++(0,3,0); } \foreach \p in {0,1,2}{ \draw [line width=.8pt] (\p,0,0)--++(0,0,3)--++(0,3,0); \draw [dashed] (0,\p+1,3)--++(0,0,-3)--++(3,0,0); \draw [dashed] (\p,0,0)--++(0,3,0)--++(0,0,3); } \foreach \p in {1,2}{ \draw [dashed] (0,0,\p)--++(0,3,0)--++(3,0,0); \foreach \q in {1,2}{ \draw [dashed] (0,\p,\q)--++(3,0,0); \draw [dashed] (\p,0,\q)--++(0,3,0); \draw [dashed] (\p,\q,0)--++(0,0,3); } }
\foreach \x in {0,1,2,3}{ \foreach \y in {0,1,2,3}{ \foreach \z in {0,1,2,3}{ \filldraw[white] (\x,\y,\z) circle (1.7pt); \draw (\x,\y,\z) circle (1.7pt); } } }
\foreach \p in {a,c,e,g}{ \filldraw[red] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {b,d,f,h}{ \filldraw[blue] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); }
\end{scope}
\begin{scope}[xshift=5cm]
\coordinate (a) at (0,3,3); \coordinate (b) at (0,3,2); \coordinate (c) at (0,2,3); \coordinate (d) at (1,3,3); \coordinate (e) at (1,0,1); \coordinate (f) at (2,2,0); \coordinate (g) at (3,1,2); \coordinate (h) at (2,1,1); \coordinate (i) at (3,0,0);
\foreach \p in {0,1,2,3}{ \draw [line width=.8pt] (0,\p,3)--++(3,0,0)--++(0,0,-3); \draw [line width=.8pt] (0,0,\p)--++(3,0,0)--++(0,3,0); } \foreach \p in {0,1,2}{ \draw [line width=.8pt] (\p,0,0)--++(0,0,3)--++(0,3,0); \draw [dashed] (0,\p+1,3)--++(0,0,-3)--++(3,0,0); \draw [dashed] (\p,0,0)--++(0,3,0)--++(0,0,3); } \foreach \p in {1,2}{ \draw [dashed] (0,0,\p)--++(0,3,0)--++(3,0,0); \foreach \q in {1,2}{ \draw [dashed] (0,\p,\q)--++(3,0,0); \draw [dashed] (\p,0,\q)--++(0,3,0); \draw [dashed] (\p,\q,0)--++(0,0,3); } }
\foreach \x in {0,1,2,3}{ \foreach \y in {0,1,2,3}{ \foreach \z in {0,1,2,3}{ \filldraw[white] (\x,\y,\z) circle (1.7pt); \draw (\x,\y,\z) circle (1.7pt); } } }
\foreach \p in {a} \filldraw (\p) circle (1.7pt); \foreach \p in {b,c,d,h,i}{ \filldraw[red] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {e,f,g}{ \filldraw[blue] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); }
\end{scope}
\refstepcounter{ris}\label{figure: non-basic sets, n=4, |M|=8,9} \draw (5.0,-1.8) node {Figure \arabic{ris}.};
\end{tikzpicture} \end{center}
\begin{center} \begin{tikzpicture}[scale=1.2,x={(1cm,0cm)},y={(0.3cm,0.2cm)},z={(0cm,1cm)},line width=.5pt]
\begin{scope}
\coordinate (a) at (0,3,3); \coordinate (b) at (3,0,0); \coordinate (c) at (3,3,2); \coordinate (d) at (2,3,0); \coordinate (e) at (3,1,3); \coordinate (f) at (1,0,3); \coordinate (g) at (0,2,0); \coordinate (h) at (0,0,1); \coordinate (i) at (2,1,1); \coordinate (j) at (1,2,2);
\foreach \p in {0,1,2,3}{ \draw [line width=.8pt] (0,\p,3)--++(3,0,0)--++(0,0,-3); \draw [line width=.8pt] (0,0,\p)--++(3,0,0)--++(0,3,0); } \foreach \p in {0,1,2}{ \draw [line width=.8pt] (\p,0,0)--++(0,0,3)--++(0,3,0); \draw [dashed] (0,\p+1,3)--++(0,0,-3)--++(3,0,0); \draw [dashed] (\p,0,0)--++(0,3,0)--++(0,0,3); } \foreach \p in {1,2}{ \draw [dashed] (0,0,\p)--++(0,3,0)--++(3,0,0); \foreach \q in {1,2}{ \draw [dashed] (0,\p,\q)--++(3,0,0); \draw [dashed] (\p,0,\q)--++(0,3,0); \draw [dashed] (\p,\q,0)--++(0,0,3); } }
\foreach \x in {0,1,2,3}{ \foreach \y in {0,1,2,3}{ \foreach \z in {0,1,2,3}{ \filldraw[white] (\x,\y,\z) circle (1.7pt); \draw (\x,\y,\z) circle (1.7pt); } } }
\foreach \p in {a,b} \filldraw (\p) circle (1.7pt); \foreach \p in {c,d,e,f,g,h}{ \filldraw[red] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {i,j}{ \filldraw[blue] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); }
\end{scope}
\begin{scope}[xshift=5cm]
\coordinate (a) at (0,3,3); \coordinate (b) at (3,0,0); \coordinate (c) at (1,2,3); \coordinate (d) at (1,2,2); \coordinate (e) at (1,1,1); \coordinate (f) at (2,2,1); \coordinate (g) at (2,3,0); \coordinate (h) at (3,0,2); \coordinate (i) at (0,0,1); \coordinate (j) at (3,3,1); \coordinate (k) at (0,1,0); \coordinate (l) at (2,3,0);
\foreach \p in {0,1,2,3}{ \draw [line width=.8pt] (0,\p,3)--++(3,0,0)--++(0,0,-3); \draw [line width=.8pt] (0,0,\p)--++(3,0,0)--++(0,3,0); } \foreach \p in {0,1,2}{ \draw [line width=.8pt] (\p,0,0)--++(0,0,3)--++(0,3,0); \draw [dashed] (0,\p+1,3)--++(0,0,-3)--++(3,0,0); \draw [dashed] (\p,0,0)--++(0,3,0)--++(0,0,3); } \foreach \p in {1,2}{ \draw [dashed] (0,0,\p)--++(0,3,0)--++(3,0,0); \foreach \q in {1,2}{ \draw [dashed] (0,\p,\q)--++(3,0,0); \draw [dashed] (\p,0,\q)--++(0,3,0); \draw [dashed] (\p,\q,0)--++(0,0,3); } }
\foreach \x in {0,1,2,3}{ \foreach \y in {0,1,2,3}{ \foreach \z in {0,1,2,3}{ \filldraw[white] (\x,\y,\z) circle (1.7pt); \draw (\x,\y,\z) circle (1.7pt); } } }
\foreach \p in {a,b}{ \filldraw[black] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {c}{ \filldraw[green] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {g,h,i,j,k}{ \filldraw[red] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); } \foreach \p in {d,e,f}{ \filldraw[blue] (\p) circle (1.7pt); \draw (\p) circle (1.7pt); }
\end{scope}
\refstepcounter{ris} \draw (5.0,-1.8) node {Figure \arabic{ris}.\label{figure: non-basic sets, n=4, |M|=10,11}};
\end{tikzpicture} \end{center}
Surprisingly, in all known examples, including the sets shown in Figs.~\ref{figure: minimal non-basic set in 4x4x4}--\ref{figure: non-basic sets, n=4, |M|=10,11}, the values of the irreducible annihilation functions exhibit a specific pattern. This allows us to state the following conjecture. \begin{conjecture} If $M \subset [n]^3$ is a minimal non-basic subset such that every layer of $[n]^3$ has a non-empty intersection with $M$, then its irreducible annihilation function $f$ satisfies $$ \sum\limits_{\EuScript X\in M} |f(\EuScript X)| = 2\big(|M| - n\big). $$ \end{conjecture} As we mentioned before, due to Theorem~\ref{th_irreducible_annihilation_function_is_unbounded}, there is no reason to expect the existence of a simple criterion (similar to Theorem~\ref{th_set_to_graph}) for a subset of $[n]^d$ to be basic in the general case $d\geqslant3$. Still, this does not mean that no simplification exists at all, and it would be interesting to find one, at least for $d=3$ or for small values of~$n$. Another possible direction of research comes from generalizing the initial problem to hypergraphs.
As we have seen in Section~\ref{Section:graphs_app}, the concept of basic hypergraphs admits almost the same interpretation in algebraic terms as that of basic subsets. We can make this similarity even deeper as follows. Let $G = (V, E)$ be a~hypergraph. Define a linear map $\Psi \colon \mathbb{R}^{|V|} \to \mathbb{R}^{|E|}$ on indicators, $$ \Psi(\textbf{1}_{v}) = \sum\limits_{e\colon v \in e} \textbf{1}_e = \delta_v, $$ and extend it to $\mathbb{R}^{|V|}$ by linearity. In other words, for any $v \in V$, we set the image of the indicator function~$\textbf{1}_{v}$ to be the co-boundary $\delta_v$. Then the analogue of Lemma~\ref{lemma: M is non-basic <=> annihilation weight function} is that the hypergraph is basic if and only if $\ker\Psi$ is trivial, while the analogue of Lemma~\ref{lemma: M is min. non-basic <=> annihilation weight function is unique} is that for minimal non-basic hypergraphs we have $\dim\ker\Psi=1$. At the same time, the analogue of Theorem~\ref{th: M is basic => |M| < dn - (d-2)} is that for a basic hypergraph $G = (V, E)$, one has $|V| \leqslant |E|$ (which is trivial). It is natural to state the following general question. \begin{question}\label{question: basic hypergraph} What are the conditions for a hypergraph to be basic? \end{question} At present, this question remains open in full generality. \section{Acknowledgements} \label{Section:acknowledgements} We thank A.B.~Skopenkov for useful discussions and criticism, N.~Volkov for finding examples of non-basic subsets, and I.~Boyarov for proving a weaker version of Theorem~\ref{thBoyarov}. This work was partly supported by Russian Science Foundation Grant No.~22-11-00177.
https://arxiv.org/abs/1201.6035
How Accurate is inv(A)*b?
Several widely-used textbooks lead the reader to believe that solving a linear system of equations Ax = b by multiplying the vector b by a computed inverse inv(A) is inaccurate. Virtually all other textbooks on numerical analysis and numerical linear algebra advise against using computed inverses without stating whether this is accurate or not. In fact, under reasonable assumptions on how the inverse is computed, x = inv(A)*b is as accurate as the solution computed by the best backward-stable solvers. This fact is not new, but obviously obscure. We review the literature on the accuracy of this computation and present a self-contained numerical analysis of it.
\section{Introduction} Can you accurately compute the solution to a linear equation $Ax=b$ by first computing an approximation $V$ to $A^{-1}$ and then multiplying $b$ by $V$ (\texttt{x=inv(A){*}b} in Matlab)? Unfortunately, most of the literature provides a misleading answer to this question. Many textbooks, including recent and widely-used ones, mislead the reader to think that \texttt{x=inv(A){*}b} is less accurate than \texttt{x=A\textbackslash{}b}, which computes the $LU$ factorization of $A$ with partial pivoting and then solves for $x$ using the factors~\emph{\cite[p. 31]{ForsytheMalcolmMoler}}, \emph{\cite[p. 53]{Moler}}, \emph{\cite[p. 50]{Heath}}, \emph{\cite[p. 77]{OLeary}}, \emph{\cite[p. 166]{ConteDeBoor}}, \emph{\cite[pp. 184, 235, and 246]{Stewart}}. Other textbooks warn against using a computed inverse for performance reasons without saying anything about accuracy. If you still dare use \texttt{x=inv(A){*}b} in Matlab code, Matlab's analyzer issues a wrong and misleading warning~\cite{MLint-R2010b}. As far as we can tell, only two sources in the literature present a correct analysis of this question. One is almost 50 years old~\cite[pp. 128--129]{Wilkinson}, and is therefore hard to obtain and somewhat hard to read. The other is recent, but relegates this analysis to the solution of an exercise, rather than including it in the 27-page chapter on the matrix inverse\emph{~\cite[p. 559; see also p. 260]{Higham}}; even though the analysis there shows that \texttt{x=inv(A){*}b} is as accurate as \texttt{x=A\textbackslash{}b}, the text ends by stating that {}``multiplying by an explicit inverse is simply not a good way to solve a linear system''. The reader must pay careful attention to the analysis if he or she is to answer our question correctly. Our aim in this article is to clarify to researchers (and perhaps also to educators and students) the numerical properties of a solution to $Ax=b$ that is obtained by multiplying by a computed inverse. 
We do not present new results; we present results that are known, but not as widely known as they should be. Computing the inverse requires more arithmetic operations than computing an $LU$ factorization. We do not address the question of computational efficiency, but we do note that there is evidence that using the inverse is sometimes preferable from the performance perspective~\cite{DitkowskiFibichGavish}. It also appears that explicit inverses are sometimes used when the inverse must be applied in hardware, as in some MIMO radios~\cite{eberli08,StuderEtAl2011}. The numerical analysis in the literature and in this paper does not apply as-is to these computations, because hardware implementations typically use fixed-point arithmetic rather than floating point. Still, the analysis that we present here provides guiding principles to all implementations (e.g., to solve for the rows of the inverse using a backward-stable solver), and it may also provide a template for an analysis of fixed-point implementations or alternative inversion algorithms. The rest of this paper is organized as follows. Section~\ref{sec:A-Loose-Bound} presents the naive numerical analysis that probably led many authors to claim that \texttt{x=inv(A){*}b} is inaccurate; the analysis is correct, but the error bound that it yields is too loose. Section~\ref{sec:Tightening-the-Bound} presents a much tighter analysis, due to Wilkinson; Higham later showed that this bound holds even in the componentwise sense. Section~\ref{sec:Left-and-Right} explains another aspect of computed inverses that is not widely appreciated: that they are typically good for applying either from the left or from the right, but not both. Even when \texttt{x=inv(A){*}b} is accurate, \texttt{x} is usually not backward stable; Section~\ref{sec:backward-stability-of-xv} discusses conditions under which \texttt{x} is also backward stable.
To help the reader fully understand all of these results, Section~\ref{sec:Numerical-Examples} demonstrates them using simple numerical experiments. We present concluding remarks in Section~\ref{sec:Closing-Remarks}. \section{\label{sec:A-Loose-Bound}A Loose Bound} Why did the inverse acquire its bad reputation? Good inversion methods produce a computed inverse $V$ that is, at best, \emph{conditionally} accurate, \begin{equation} \frac{\left\Vert V-A^{-1}\right\Vert }{\left\Vert A^{-1}\right\Vert }=O(\kappa(A)\epsilon_{\text{machine}})\;.\label{eq:conditionally-accurate-V} \end{equation} We cannot hope for an unconditional bound of $O(\epsilon_{\text{machine}})$ on the relative forward error. Some inversion methods guarantee conditional accuracy (for example, computing the inverse column by column using a backward stable linear solver). In particular, \noun{lapack}'s \texttt{xGETRI} satisfies (\ref{eq:conditionally-accurate-V}), and also a componentwise conditional bound~\cite[p. 268]{Higham}. That is, each entry in the computed inverse that \texttt{xGETRI} produces is conditionally accurate. It appears that Matlab's \texttt{inv} function also satisfies (\ref{eq:conditionally-accurate-V}). Let's try to use (\ref{eq:conditionally-accurate-V}) to obtain a bound on the forward error $\|x_{V}-x\|$. Multiplying $b$ by $V$ in floating point produces $x_{V}$ that satisfies $x_{V}=\left(V+\Delta\right)b$ for some $\Delta$ with $\|\Delta\|/\|V\|=O(\epsilon_{\text{machine}})$. 
Denoting $\Gamma=V-A^{-1}$, we have \begin{eqnarray*} x_{V} & = & \left(V+\Delta\right)b\\ & = & \left(A^{-1}+\Gamma+\Delta\right)b\\ & = & \left(A^{-1}+\Gamma+\Delta\right)Ax\\ & = & x+\Gamma Ax+\Delta Ax\;, \end{eqnarray*} so \begin{eqnarray} \left\Vert x_{V}-x\right\Vert & \leq & \left\Vert \Gamma\right\Vert \left\Vert A\right\Vert \left\Vert x\right\Vert +\left\Vert \Delta\right\Vert \left\Vert A\right\Vert \left\Vert x\right\Vert \nonumber \\ & \leq & O(\kappa(A)\epsilon_{\text{machine}})\left\Vert A^{-1}\right\Vert \left\Vert A\right\Vert \left\Vert x\right\Vert +O(\epsilon_{\text{machine}})\left\Vert V\right\Vert \left\Vert A\right\Vert \left\Vert x\right\Vert \nonumber \\ & = & O(\kappa^{2}(A)\epsilon_{\text{machine}})\left\Vert x\right\Vert +O(\epsilon_{\text{machine}})\left\Vert V\right\Vert \left\Vert A\right\Vert \left\Vert x\right\Vert \;.\label{eq:loose-accuracy-bound-pre} \end{eqnarray} Unless $A$ is so ill conditioned that the left-hand side of (\ref{eq:conditionally-accurate-V}) is larger than $1$ (any constant would do), $\|V\|=\Theta(\|A^{-1}\|)$. Therefore, \begin{equation} \left\Vert x_{V}-x\right\Vert \leq O(\kappa^{2}(A)\epsilon_{\text{machine}})\left\Vert x\right\Vert \;.\label{eq:loose-bound} \end{equation} In contrast, solving $Ax=b$ using a backward stable solver such as one based on the $QR$ factorization (or on an $LU$ factorization with partial pivoting provided there is no growth) yields $x_{\text{backward-stable}}$ for which \begin{equation} \left\Vert x_{\text{backward-stable}}-x\right\Vert \leq O(\kappa(A)\epsilon_{\text{machine}})\left\Vert x\right\Vert \;.\label{eq:backward-stable-accuracy} \end{equation} The bound (\ref{eq:loose-bound}) is correct, but it is just an upper bound on the error, and it turns out that it is loose by a factor of $\kappa(A)$. It appears that this easy-to-derive but loose bound gave the matrix inverse its bad reputation. 
In fact, $x_{V}$ satisfies an accuracy bound just like (\ref{eq:backward-stable-accuracy}). \section{\label{sec:Tightening-the-Bound}Tightening the Bound} The bound (\ref{eq:loose-bound}) is loose because of a single term, $\left\Vert \Gamma\right\Vert \left\Vert A\right\Vert $, which we used to bound the norm of $\Gamma A$. The other term in the bound, $O(\kappa(A)\epsilon_{\text{machine}})\left\Vert x\right\Vert $, is tight. The key insight is that rows of $\Gamma=V-A^{-1}$ tend to lie mostly in the directions of left singular vectors of $A$ that are associated with small singular values. The smaller the singular value of $A$, the stronger the influence of the corresponding singular vector (or singular subspace) on the rows of $\Gamma$. Therefore, the norm of the product of $\Gamma$ and $A$ is much smaller than the product of the norms; $A$ shrinks the strong directions of the error matrix $\Gamma$. This explains why the norm of $\Gamma A$ is small. This relationship between the singular vectors of $A$ and $\Gamma$ depends on a backward stability criterion on $V$, which we define and analyze below. Suppose that we use a backward stable solver to compute the rows of $V$ one by one by solving $v_{i}A=e_{i}$ where $e_{i}$ is row $i$ of $I$. Each computed row satisfies \[ v_{i}\left(A+\Xi_{i}\right)=e_{i} \] with $\|\Xi_{i}\|/\|A\|=O(\epsilon_{\text{machine}})$. Rearranging the equation, we obtain \[ VA-I=\left[\begin{array}{c} -v_{1}\Xi_{1}\\ \vdots\\ -v_{n}\Xi_{n} \end{array}\right]\;, \] so $\|VA-I\|=O(\|V\|\|A\|\epsilon_{\text{machine}})=O(\kappa(A)\epsilon_{\text{machine}})$. For a componentwise version of this bound and related bounds for other methods of computing $V$, see~\cite[section 14.3]{Higham}. This is the key to the conditional accuracy of $x_{V}$. Since $\Gamma A=(V-A^{-1})A=VA-I$, the norm of $\Gamma A$ is $O(\kappa(A)\epsilon_{\text{machine}})$. We therefore have the following theorem. 
\begin{thm} \label{thm:inv-accurate}Let $Ax=b$ be a linear system with a coefficient matrix that satisfies $\kappa(A)\epsilon_{\text{machine}}=O(1)$. Assume that $V$ is an approximate inverse of $A$ that satisfies $\|VA-I\|=O(\kappa(A)\epsilon_{\text{machine}})$. Then the floating-point product $x_{V}$ of $V$ and $b$ satisfies \[ \frac{\left\Vert x_{V}-x\right\Vert }{\left\Vert x\right\Vert }=O\left(\kappa(A)\epsilon_{\text{machine}}\right)\;. \] \end{thm} The essence of this analysis appears in Wilkinson's 1963 monograph~\cite[pp. 128--129]{Wilkinson}. Wilkinson did not account for the rounding errors in the multiplication $Vb$, which are not asymptotically significant, but otherwise his analysis is complete and correct. \section{\label{sec:Left-and-Right}Left and Right Inverses} In this article, we multiply $b$ by the inverse from the left to solve $Ax=b$. This implies that the approximate inverse $V$ should be a good left inverse. Indeed, we have seen that a $V$ with a small left residual $VA-I$ guarantees a conditionally accurate solution $x_{V}$. Whether $V$ is also a good right inverse, in the sense that $AV-I$ is small, is irrelevant for solving $Ax=b$. If we were trying to solve $x^{T}A=b^{T}$, we would need a good right inverse. Wilkinson noted that if rows of $V$ are computed using $LU$ with partial pivoting, then $V$ is usually both a good left inverse and a good right inverse, but not always~\cite[page~113]{Wilkinson}. Du~Croz and Higham show matrices for which this is not the case, but they also note that such matrices are the exception rather than the rule~\cite{DuCrozHigham}. Other inversion methods tend to produce a matrix that is either a left inverse or a right inverse but not both. A good example is Newton's method. If one iterates with $V^{(t)}=(2I-V^{(t-1)}A)V^{(t-1)}$ then $V^{(t)}$ converges to a left inverse.
If one iterates with $V^{(t)}=V^{(t-1)}(2I-AV^{(t-1)})$ then $V^{(t)}$ converges to a right inverse. Strassen's inversion formula~\cite{BaileyFergusonStrassenInv,Strassen69} sometimes produces an inverse that is neither a good left inverse nor a good right inverse~\cite[Section~26.3.2]{Higham}. \section{\label{sec:backward-stability-of-xv}Multiplication by the Inverse is (Sometimes) Backward Stable} The next section presents a simple example in which the computed solution $x_{V}$ is conditionally accurate but not backward stable. In this section we show that under certain conditions, the solution is also backward stable. The analysis also clarifies in what ways backward stability can be lost. Suppose that we use a $V$ that is a good right inverse, $\|AV-I\|=O(\kappa(A)\epsilon_{\text{machine}})$. We can produce such a $V$ by solving for its columns using a backward-stable solver. We have \begin{eqnarray*} Ax_{V}-b & = & A(V+\Delta)b-b\\ & = & \left(AV-I\right)b+A\Delta b \end{eqnarray*} for some $\Delta$ with $\|\Delta\|/\|V\|=O(\epsilon_{\text{machine}})$. Here too, the $\Delta$ term does not influence the asymptotic upper bound. The assumption that $V$ is a good right inverse bounds the other term, \begin{eqnarray} \left\Vert Ax_{V}-b\right\Vert & \leq & \left\Vert AV-I\right\Vert \left\Vert b\right\Vert +\left\Vert A\right\Vert \left\Vert \Delta\right\Vert \left\Vert b\right\Vert \nonumber \\ & \leq & O(\kappa(A)\epsilon_{\text{machine}})\left\Vert b\right\Vert +O(\epsilon_{\text{machine}})\left\Vert A\right\Vert \left\Vert V\right\Vert \left\Vert b\right\Vert \nonumber \\ & = & O(\kappa(A)\epsilon_{\text{machine}})\left\Vert b\right\Vert \;.\label{eq:loose-accuracy-bound-pre-1} \end{eqnarray} The relative backward error is given by the expression $\|Ax_{V}-b\|/(\|A\|\|x_{V}\|+\|b\|)$~\cite{RigalGaches}. 
Filling in the bound on the norm of the residual, we obtain \begin{eqnarray*} \frac{\left\Vert Ax_{V}-b\right\Vert }{\|A\|\|x_{V}\|+\|b\|} & \leq & \frac{\left\Vert Ax_{V}-b\right\Vert }{\|A\|\|x_{V}\|}\\ & = & O\left(\frac{\|A\|\|A^{-1}\|\epsilon_{\text{machine}}\|b\|}{\|A\|\|x_{V}\|}\right)\\ & = & O\left(\frac{\|A^{-1}\|\|b\|}{\|x_{V}\|}\epsilon_{\text{machine}}\right)\;. \end{eqnarray*} If we assume that $V$ is a reasonable enough left inverse so that at least $\|x_{V}\|$ is close to $\|x\|$ (that is, if the forward error is $O(1)$), then a solution $x$ that has norm close to $\|A^{-1}\|\|b\|$ guarantees backward stability to within $O(\epsilon_{\text{machine}})$. Let $A=L\Sigma R^{*}$ be the SVD of $A$, so \begin{eqnarray*} x & = & A^{-1}b\\ & = & R\Sigma^{-1}L^{*}b\\ & = & \sum_{i}\frac{L_{i}^{*}b}{\sigma_{i}}R_{i}\;, \end{eqnarray*} where $L_{i}$ and $R_{i}$ are the left and right singular vectors of $A$. If $|L_{n}^{*}b|=\Theta(\|b\|)$, then $\|x\|\geq\left|L_{n}^{*}b\right|/\sigma_{n}=\Theta(\|A^{-1}\|\|b\|)$ and $x_{V}$ is backward stable. If the projection of $b$ on $L_{n}$ is not large but the projection on, say, $L_{n-1}$ is large and $\sigma_{n-1}$ is close to $\sigma_{n}$, the solution is still backward stable, and so on. Perhaps more importantly, we have now identified the ways in which $x_{V}$ can fail to be backward stable: \begin{enumerate} \item $V$ is not a good right inverse, or \item $V$ is such a poor left inverse that $\|x_{V}\|$ is much smaller than $\|x\|$, or \item the projection of $b$ on the left singular vectors of $A$ associated with small singular values is small. \end{enumerate} The next section shows an example that satisfies the last condition. \section{\label{sec:Numerical-Examples}Numerical Examples} Let us demonstrate the theory with a small numerical example. We set $n=256$, $\sigma_{1}=10^{4}$ and $\sigma_{n}=10^{-4}$, generate a random matrix $A$ with $\kappa(A)=10^{8}$, and generate its inverse.
The matrix and the inverse are produced by matrix multiplications, and each multiplication has at least one unitary factor, so both are accurate to within a relative error of about $\epsilon_{\text{machine}}$. We also compute an approximate inverse $V$ using \noun{Matlab}'s \texttt{inv} function. \begin{lyxcode} {[}L,dummy,R{]}~=~svd(randn(n));~ svalues~=~logspace(log10(sigma\_1),~log10(sigma\_n),~n); S~=~diag(svalues); invS~=~diag(svalues.\textasciicircum{}-1); A~=~L~{*}~S~{*}~R'; AccurateInv~=~R~{*}~invS~{*}~L'; V~=~inv(A); \end{lyxcode} The approximate inverse $V$ is only conditionally accurate, as predicted by (\ref{eq:conditionally-accurate-V}), but its use as a left inverse leads to a conditionally small residual. \begin{lyxcode} Gamma~=~V~-~AccurateInv; norm(Gamma)~/~norm(AccurateInv) ~~ans~=~3.4891e-09 norm(V~{*}~A~-~eye(n)) ~~ans~=~1.6976e-08 \end{lyxcode} We now generate a random right-hand side $b$ and use the inverse to solve $Ax=b$. The result is backward stable to within a relative error of $\epsilon_{\text{machine}}$. \begin{lyxcode} b~=~randn(n,~1); x~=~R~{*}~(invS~{*}~(L'~{*}~b)); xv~=~V~{*}~b; norm(A~{*}~xv~-~b)~/~(norm(A)~{*}~norm(xv)~+~norm(b)) ~~ans~=~8.8078e-16~ \end{lyxcode} Obviously, the solution should be conditionally accurate, and it is. \begin{lyxcode} norm(xv~-~x)~/~norm(x) ~~ans~=~3.102e-09~ \end{lyxcode} We now perform a similar experiment, but with a random $x$, which leads to a right-hand side $b$ that is nearly orthogonal to the left singular vectors of $A$ that correspond to small singular values; now the solution is only conditionally backward stable. \begin{lyxcode} x~=~randn(n,~1); b~=~L~{*}~(S~{*}~(R'~{*}~x)); xv~=~V~{*}~b; norm(A~{*}~xv~-~b)~/~(norm(A)~{*}~norm(xv)~+~norm(b)) ~~ans~=~2.1352e-10 \end{lyxcode} Theorem~\ref{thm:inv-accurate} predicts that the solution should still be conditionally accurate. It is.
\noun{Matlab}'s backslash operator, which is a linear solver based on Gaussian elimination with partial pivoting, produces a solution with a similar accuracy. \begin{lyxcode} norm((A\textbackslash{}b)~-~x)~/~norm(x) ~~ans~=~4.0801e-09 norm(xv~-~x)~/~norm(x) ~~ans~=~4.5699e-09 \end{lyxcode} The magic is in the special structure of rows of $\Gamma$. Figure~\ref{fig:proj-Gamma-on-sign-vectors} displays this structure graphically. We can see that a row of $\Gamma$ is almost orthogonal to the left singular vectors of $A$ associated with large singular values, and that the magnitude of the projections increases with decreasing singular values. If we produce an approximate inverse with the same magnitude of error as in \texttt{inv(A)} but with a random error matrix, it will not solve $Ax=b$ conditionally accurately. \begin{lyxcode} BadInv~=~AccurateInv~+~norm(Gamma)~{*}~randn(n); xv~=~BadInv~{*}~b; norm(A~{*}~xv~-~b)~/~(norm(A)~{*}~norm(xv)~+~norm(b))~\\ ~~ans~=~0.075727 norm(xv~-~x)~/~norm(x) ~~ans~=~0.83552 \end{lyxcode} \begin{figure} \begin{centering} \includegraphics[width=0.75\textwidth]{proj_Gamma_on_singvects} \par\end{centering} \caption{\label{fig:proj-Gamma-on-sign-vectors}The magnitude of the projections of three rows of $\Gamma$ (the first, last, and middle, but all rows produce similar plots) on the left singular vectors of $A$, as a function of the corresponding singular values of $A$.} \end{figure} \section{\label{sec:Closing-Remarks}Closing Remarks} Solving a linear system of equations $Ax=b$ using a computed inverse $V$ produces a conditionally accurate solution, subject to an easy-to-satisfy condition on the computation of $V$. Using Gaussian elimination with partial pivoting or a $QR$ factorization produces a solution with errors that have the same order of magnitude as those produced by $V$.
If the right-hand side $b$ does not have any special relationship to the left singular subspaces of $A$, then the solution produced by $V$ is also backward stable (under a slightly different technical condition on $V$), and hence as good as a solution produced by GEPP or $QR$. As far as we know, this result is new. If $b$ is close to orthogonal to the left singular subspaces of $A$ corresponding to small singular values, then the solution produced by $V$ is conditionally accurate, but usually not backward stable. Whether this is a significant defect or not depends on the application. In most applications, it is not a serious problem. One difficulty with a conditionally-accurate solution that is not backward stable is that it does not come with a certificate of conditional accuracy. We normally take a small backward error to be such a certificate. There might be applications that require a backward stable solution rather than an accurate one. Strangely, this is exactly the case with the computation of $V$ itself; the analysis in this paper relies on rows being computed in a backward-stable way, not on their forward accuracy. We are not aware of other cases where this is important. \bibliographystyle{plain}
https://arxiv.org/abs/1201.6035
How Accurate is inv(A)*b?
Several widely-used textbooks lead the reader to believe that solving a linear system of equations Ax = b by multiplying the vector b by a computed inverse inv(A) is inaccurate. Virtually all other textbooks on numerical analysis and numerical linear algebra advise against using computed inverses without stating whether this is accurate or not. In fact, under reasonable assumptions on how the inverse is computed, x = inv(A)*b is as accurate as the solution computed by the best backward-stable solvers. This fact is not new, but obviously obscure. We review the literature on the accuracy of this computation and present a self-contained numerical analysis of it.
https://arxiv.org/abs/1809.10263
Counting Shellings of Complete Bipartite Graphs and Trees
A shelling of a graph, viewed as an abstract simplicial complex that is pure of dimension 1, is an ordering of its edges in which each edge after the first shares a vertex with some earlier edge. In this paper, we focus on complete bipartite graphs and trees. For complete bipartite graphs, we obtain an exact formula for their shelling numbers; for trees, we relate their shelling numbers to linear extensions of tree posets and bound them using vertex degrees and diameter.
\section{Introduction} In combinatorial topology, the shelling of a simplicial complex is a useful and important notion that has been well studied. \begin{definition}\label{def:shellingcomplex} An (abstract) simplicial complex $\Delta$ is called \textit{pure} if all of its maximal simplices have the same dimension. Given a finite (or countably infinite) simplicial complex $\Delta$ that is pure of dimension $d$, a \textit{shelling} is a total ordering of its maximal simplices $C_1,C_2,\ldots$ such that for every $k>1$, $C_k\cap\left(\bigcup_{i=1}^{k-1}C_i\right)$ is pure of dimension $d-1$. A simplicial complex that admits a shelling is called \textit{shellable}. \end{definition} Shellable complexes enjoy many strong algebraic and topological properties. For example, a shellable complex is homotopy equivalent to a wedge sum of spheres, and thus has torsion-free homology. The study of the combinatorial aspects of shellability has turned out to be very fruitful as well. Arguably the earliest notable result, that polytopes are shellable, is due to Bruggesser and Mani (Section 8 of \cite{ziegler2012lectures}). Later on, Bj\"{o}rner and Wachs developed the theory of lexicographic shellability (Section 12 of \cite{kozlov2007combinatorial}). In particular, shellable posets, which are posets whose order complexes are shellable, were studied, and powerful notions such as $EL$-shellability and $CL$-shellability were invented. In a recent work, testing shellability was proved to be NP-complete \cite{goaoc2017shellability}. While there is a rich literature on shellability, little work has been done on counting the number of shellings of a specific simplicial complex. It is generally believed that a shellable simplicial complex usually admits many shellings, but few precise statements of this kind are known. In this paper, we investigate the problem of counting shellings, aiming to start a new line of research.
We restrict our attention to finite simplicial complexes that are pure of dimension 1, namely, undirected graphs, where interesting combinatorial arguments already take place. Let us first reformulate Definition~\ref{def:shellingcomplex} in the language of graph theory. \begin{definition}[Graph Shelling]\label{def:shellinggraph} Given an undirected graph $G=(V,E)$, where $V$ is the vertex set of $G$ and $E$ is the edge set of $G$, a \textit{shelling} of $G$ is a total ordering of the edge set $\sigma\in\S_E$, where $\S$ stands for the symmetric group, such that $\sigma(1),\ldots,\sigma(k)$ form a connected subgraph of $G$ for all $k=1,\ldots,|E|$. \end{definition} We will adopt the following notation throughout the paper. \begin{definition} For a graph $G$, let $F(G)$ denote the number of shellings of $G$. \end{definition} Clearly, a graph admits a shelling if and only if it is connected, which is equivalent to $F(G)>0$. A few results are already known. \begin{theorem}[\cite{MO297411}] Let $K_n$ be the complete graph on $n$ vertices. Then $$F(K_n)=\frac{2^{n-2}}{C_{n-1}}\binom{n}{2}!$$ where $C_{n-1}=\binom{2n-2}{n-1}/n$ is the $(n-1)^{th}$ Catalan number. \end{theorem} As an overview of the paper: in Section~\ref{sec:cbg}, we will give an explicit formula for the number of shellings of complete bipartite graphs, resolving a MathOverflow question \cite{MO297385}; in Section~\ref{sec:trees}, we will provide methods to compute the number of shellings of trees and obtain some upper and lower bounds for them. \section{Complete Bipartite Graphs}\label{sec:cbg} Let $K_{m,n}$ denote the complete bipartite graph with part sizes $m$ and $n$. The following is our main theorem. \begin{theorem}\label{thm:bipartite} $$F(K_{m,n})=\frac{m!n!(mn)!}{(m+n-1)!}.$$ \end{theorem} The formula in Theorem~\ref{thm:bipartite} was conjectured in the MathOverflow post \cite{MO297385}. Partial progress has been made.
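Both closed forms above can be checked directly against Definition~\ref{def:shellinggraph} by exhaustive enumeration of small graphs. The following Python sketch (an illustration we add here, not part of the original arguments) counts edge orderings whose every prefix is connected:

```python
from itertools import permutations

def num_shellings(edges):
    """Count edge orderings in which every prefix forms a connected subgraph."""
    count = 0
    for order in permutations(edges):
        seen = set(order[0])            # vertices covered so far
        ok = True
        for u, v in order[1:]:
            if u in seen or v in seen:  # new edge must touch the current subgraph
                seen.update((u, v))
            else:
                ok = False
                break
        if ok:
            count += 1
    return count

# F(K_4) = 2^{4-2}/C_3 * 6! = 4/5 * 720 = 576
K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(num_shellings(K4))     # 576

# F(K_{2,3}) = 2! * 3! * 6! / 4! = 360
K23 = [(('a', i), ('b', j)) for i in range(2) for j in range(3)]
print(num_shellings(K23))    # 360
```

A prefix of edges is connected exactly when each new edge shares a vertex with the union of the previous ones, which is what the inner loop tests.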
In particular, Lemma~\ref{lem:bipartiteStanley}, given by Richard Stanley, serves as an important tool for our computation. \begin{lemma}\label{lem:bipartiteStanley} $F(K_{m,n})$ is equal to the following expression: $$m!n!(mn-1)!\sum_{\alpha}\frac{b_{1}b_{2}\cdots b_{m+n-2}}{b_{m+n-2}(b_{m+n-2}+b_{m+n-3})\cdots (b_{m+n-2}+b_{m+n-3}+\ldots+b_1)},$$ where the sum is over all sequences $\alpha = (a_1,a_2,\ldots,a_{m+n-2})$ of $(m-1)$ 0's and $(n-1)$ 1's, and $$b_{i} = 1 + |\{1\leq j\leq i: a_j \neq a_i\}|.$$ \end{lemma} \begin{proof} Let $\sigma$ be a shelling of $K_{m,n}$. In each part of $K_{m,n}$, consider the order of the appearance of the vertices. Here, we say that vertex $v$ appears in $\sigma$ at time $t$ if $t$ is the first index such that $v\in \sigma(t)$. There are $m!$ ways to choose such order in the part of size $m$ and $n!$ ways in the part of size $n$. Fix the order of vertex appearance in each part to be $(u_0,u_1,\ldots,u_{m-1}), (v_0,v_1,\ldots,v_{n-1})$, respectively. Consider a fixed order of appearance of all $(m+n)$ vertices $w = w_{-1}w_0\ldots w_{m+n-2}.$ Note that $\sigma(1)$ must be the edge $e_0=(u_0,v_0)$, so $\{w_{-1},w_0\} = \{u_0,v_0\}.$ For $1\leq i\leq m+n-2$, define $$a_{i} = \begin{cases} 0, & \text{ if } w_{i} = u_j \text{ for some }j, \\ 1, & \text{ if } w_{i} = v_k \text{ for some }k, \end{cases}$$ and $$b_i = 1 + |\{1\leq j\leq i: a_j \neq a_i\}|.$$ Now, for each $w_i$ ($i\geq 1$), consider the first edge $e_i$ incident to $w_i$ in $\sigma$. This edge must be of the form $(w_i,w_j)$ where $j<i$ and $w_i, w_j$ are in different parts of $K_{m,n}$. There are $b_{i}$ choices for this edge. Thus, there are $b_1b_2\cdots b_{m+n-2}$ ways to choose $e_1,e_2,\ldots,e_{m+n-2}.$ We further fix the edges $e_0,e_1,\ldots,e_{m+n-2}$. Note that the rest of the $b_{m+n-2}$ edges incident to $w_{m+n-2}$ must appear after $e_{m+n-2}$ in $\sigma$, so there are $(b_{m+n-2}-1)!$ ways to arrange these edges. 
After making this arrangement, the edges which are incident to $w_{m+n-3}$ and not yet arranged must appear after $e_{m+n-3}$, so there are $$(b_{m+n-2}+1)(b_{m+n-2}+2)\cdots (b_{m+n-2}+b_{m+n-3}+1) = \frac{(b_{m+n-2}+b_{m+n-3}+1)!}{b_{m+n-2}!}$$ ways to arrange them (since there are already $b_{m+n-2}$ edges arranged after $e_{m+n-3}$). Similarly, for each $i$, after making the arrangement of all edges incident to vertices appearing after $w_i$, there are $$\frac{(b_{m+n-2}+b_{m+n-3}+\ldots + b_i + 1)!}{(b_{m+n-2}+b_{m+n-3}+\ldots + b_{i+1})!}$$ ways to arrange all the edges which are incident to $w_i$ and not yet arranged. Therefore, after fixing $e_0,e_1,\ldots,e_{m+n-2}$, the number of shellings is $$\prod_{i=1}^{m+n-2}\frac{(b_{m+n-2}+\ldots + b_i + 1)!}{(b_{m+n-2}+\ldots + b_{i+1})!} = \frac{(mn-1)!}{b_{m+n-2}(b_{m+n-2}+b_{m+n-3})\cdots (b_{m+n-2}+\ldots+b_1)}.$$ Combining the discussions above, we obtain Lemma~\ref{lem:bipartiteStanley}. \end{proof} We first prove a few lemmas which are essential to Theorem~\ref{thm:bipartite}. These lemmas involve binomial coefficients whose entries are not necessarily integers. For this reason, in the rest of this section, we will use the generalized binomial coefficient $$\binom{x}{y} = \frac{\Gamma(x+1)}{\Gamma(y+1)\Gamma(x-y+1)},$$ where $\Gamma$ is the Gamma function that extends the factorial function. In particular, if $y$ is a positive integer, $$\binom{x}{y} = \frac{x(x-1)\cdots (x-y+1)}{y!}.$$ \begin{lemma}\label{lem:story} For positive integers $x,y$ and positive real numbers $z,w$ such that $w-z$ is a positive integer with $w-z\geq x$, $$\sum_{j=x}^{w-z}\binom{j}{y}\binom{w-j}{z}=\sum_{i=\max\{0,x+y+z-w\}}^y \binom{x}{i}\binom{w-x+1}{z+y-i+1}.$$ \end{lemma} \begin{proof} We first prove this lemma assuming that $z,w$ are both integers.
Consider the following problem: we want to arrange $(y+z+1)$ letter A's in $(w+1)$ positions, such that each position has at most one A and there are at most $y$ A's in the first $x$ positions. The number of such arrangements is $$\sum_{i=0}^y \binom{x}{i}\binom{w-x+1}{z+y-i+1} = \sum_{i=\max\{0,x+y+z-w\}}^y \binom{x}{i}\binom{w-x+1}{z+y-i+1}$$ by considering the number of A's in the first $x$ positions. On the other hand, consider the position of the $(y+1)^{th}$ A. It must be at some position $p>x$. For a fixed $p$, there are $\binom{p-1}{y}$ ways to arrange the first $y$ A's and $\binom{w-p+1}{z}$ ways to arrange the last $z$ A's, so the total number of such arrangements is $$\sum_{p=x+1}^{w-z+1}\binom{p-1}{y}\binom{w-p+1}{z} = \sum_{j=x}^{w-z}\binom{j}{y}\binom{w-j}{z}.$$ Thus, Lemma~\ref{lem:story} follows under the additional assumption. For the general case, we fix $z' = w-z \in \mathbb{N}$. Lemma~\ref{lem:story} is equivalent to \begin{equation}\label{eq:story} \sum_{j=x}^{z'}\binom{j}{y}\binom{w-j}{z'-j}=\sum_{i=\max\{0,x+y-z'\}}^y \binom{x}{i}\binom{w-x+1}{z'-x-y+i}. \end{equation} Both sides of Equation~\eqref{eq:story} are polynomials in $w$ of degree at most $z'$. From our previous discussion, every integer greater than $z'$ is a root of the difference of the two sides of~\eqref{eq:story}. Thus, the two sides of~\eqref{eq:story} agree as polynomials in $w$ and the proof is complete. \end{proof} Lemma~\ref{lem:story} serves to prove the following lemma, which will be crucial in calculating the sum in Lemma~\ref{lem:bipartiteStanley}. \begin{lemma}\label{lem:binomialComputation} For positive integers $k<n$ and $s<m+n-k-1$, \begin{equation}\label{eq:lemma} \begin{split} &\sum_{t=s+1}^{m+n-k-1}(t-n+k+1)(t+2)(t+3)\cdots(t+k)\binom{\frac{mn}{n-k}+n-k-t-2}{\frac{mk}{n-k}-1} \\ =&\frac{m}{m+n-k}(s+2)(s+3)\cdots(s+k+1)\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}}, \end{split} \end{equation} where generalized binomial coefficients are used.
\end{lemma} \begin{proof} First, note that $$(t-n+k+1)(t+2)(t+3)\cdots (t+k) = k!\bigg[\binom{t+k}{k} +\frac{k-n}{k}\binom{t+k}{k-1}\bigg].$$ We shall split the sum on the left-hand side of $(\ref{eq:lemma})$ based on the equation above. Applying Lemma~\ref{lem:story} with replacements $x=s+k+1$, $y=k$, $z=\frac{mk}{n-k}-1$, $w=\frac{mn}{n-k}+n-2$ (notice that $w-z = m+n-1$ is a positive integer), we obtain $$\sum_{j = s+k+1}^{m+n-1}\binom{j}{k}\binom{\frac{mn}{n-k}+n-2-j}{\frac{mk}{n-k}-1} = \sum_{i=i_0}^{k} \binom{s+k+1}{i}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i},$$ where $i_0 = \max\{0, s+2k+2-m-n\}.$ Writing $t=j-k$, we have $$\sum_{t = s+1}^{m+n-k-1}\binom{t+k}{k}\binom{\frac{mn}{n-k}+n-k-t-2}{\frac{mk}{n-k}-1} = \sum_{i=i_0}^{k} \binom{s+k+1}{i}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i}.$$ Similarly, replacing $x=s+k+1$, $y=k-1$, $z=\frac{mk}{n-k}-1$, $w=\frac{mn}{n-k}+n-2$ in Lemma~\ref{lem:story}, \begin{align*} \sum_{t=s+1}^{m+n-k-1}\binom{t+k}{k-1}\binom{\frac{mn}{n-k}+n-k-t-2}{\frac{mk}{n-k}-1} &= \sum_{i=i_1}^{k-1} \binom{s+k+1}{i}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i-1}\\ &=\sum_{i=i_1+1}^{k} \binom{s+k+1}{i-1}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i}, \end{align*} where $i_1 = \max\{0, s+2k+1-m-n\}.$ Therefore, dividing the left-hand side of (\ref{eq:lemma}) by $k!$ and combining the two identities above, we obtain \begin{equation*} \begin{split} &\frac{1}{k!}\sum_{t=s+1}^{m+n-k-1}(t-n+k+1)(t+2)(t+3)\cdots(t+k)\binom{\frac{mn}{n-k}+n-k-t-2}{\frac{mk}{n-k}-1} \\ =& \sum_{i=i_0}^{k} \binom{s+k+1}{i}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i}+\frac{k-n}{k}\sum_{i=i_1+1}^{k} \binom{s+k+1}{i-1}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i}. \end{split} \end{equation*} We claim that the following identity (\ref{eq:inductionLemma}) holds for all $i_0\leq \ell\leq k$.
\begin{equation}\label{eq:inductionLemma} \begin{split} & \sum_{i=i_0}^{\ell} \binom{s+k+1}{i}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i}+\frac{k-n}{k}\sum_{i=i_1+1}^{\ell} \binom{s+k+1}{i-1}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i} \\ =&\frac{\frac{mk}{n-k}+k-\ell}{\frac{mk}{n-k}+k}\binom{s+k+1}{\ell}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-\ell}. \end{split} \end{equation} There are two cases: $i_0=0$ and $i_0>0$. \noindent\textbf{Case 1.} $i_0 = i_1 = 0.$ In this case, the left hand side of (\ref{eq:inductionLemma}) is \begin{equation*} \binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k}+\sum_{i=1}^\ell \bigg[\binom{s+k+1}{i}+\frac{k-n}{k}\binom{s+k+1}{i-1}\bigg]\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i}. \end{equation*} Induct on $\ell$. When $\ell=0$, both sides of (\ref{eq:inductionLemma}) are equal to $\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k}.$ Assume that (\ref{eq:inductionLemma}) holds for $\ell-1$ and consider $\ell$ case. Then, the formula above becomes \begin{align*} & \bigg[\binom{s+k+1}{\ell}+\frac{k-n}{k}\binom{s+k+1}{\ell-1}\bigg]\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-\ell} + \\ &\qquad\frac{\frac{mk}{n-k}+k-\ell+1}{\frac{mk}{n-k}+k}\binom{s+k+1}{\ell-1}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-\ell+1} \\ =&\bigg[\binom{s+k+1}{\ell}+\frac{k-n}{k}\cdot\frac{\ell}{s+k+2-\ell}\binom{s+k+1}{\ell}\bigg]\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-\ell} + \\ &\ \frac{\frac{mk}{n-k}+k-\ell+1}{\frac{mk}{n-k}+k}\frac{\ell}{s+k+2-\ell}\binom{s+k+1}{\ell}\frac{m+n+\ell-2k-s-2}{\frac{mk}{n-k}+k-\ell+1}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-\ell} \\ =& \left(1+\frac{(k-n)\ell}{k(s+k+2-\ell)}+ \frac{(m+n+\ell-2k-s-2)\ell}{(\frac{mk}{n-k}+k)(s+k+2-\ell)}\right)\binom{s+k+1}{\ell}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-\ell} \\ =& \frac{\frac{mk}{n-k}+k-\ell}{\frac{mk}{n-k}+k}\binom{s+k+1}{\ell}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-\ell}. 
\end{align*} Thus, (\ref{eq:inductionLemma}) follows by induction. \noindent\textbf{Case 2.} $i_0 = s+2k+2-m-n > 0$ and $i_1 = i_0 - 1.$ We can simplify the left hand side of (\ref{eq:inductionLemma}) as \begin{equation*} \sum_{i=i_0}^{\ell} \bigg[\binom{s+k+1}{i}+\frac{k-n}{k}\binom{s+k+1}{i-1}\bigg]\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i}. \end{equation*} Induct on $\ell$. When $\ell = i_0$, \begin{align*} &\bigg[\binom{s+k+1}{i_0}+\frac{k-n}{k}\cdot \frac{i_0}{s+k+2-i_0}\binom{s+k+1}{i_0}\bigg]\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i_0} \\ =& \frac{\frac{mk}{n-k}+k-i_0}{\frac{mk}{n-k}+k}\binom{s+k+1}{i_0}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i_0}, \end{align*} as desired. The inductive step $(\ell-1) \Rightarrow \ell$ holds by the same calculation as the previous case $i_0=0$. \begin{comment} \begin{align*} (\ref{eq:simplification2}) =& \bigg[\binom{s+k+1}{l}+\frac{k-n}{k}\binom{s+k+1}{l-1}\bigg]\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-l} + \\ &\qquad\frac{\frac{mk}{n-k}+k-l+1}{\frac{mk}{n-k}+k}\binom{s+k+1}{l-1}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-l+1} \\ =& \frac{\frac{mk}{n-k}+k-l}{\frac{mk}{n-k}+k}\binom{s+k+1}{l}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-l}. \end{align*} \end{comment} Thus, the claim follows by induction. In particular, when $\ell=k$, (\ref{eq:inductionLemma}) becomes \begin{equation*} \begin{split} & \sum_{i=i_0}^{k} \binom{s+k+1}{i}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i}+\frac{k-n}{k}\sum_{i=i_1+1}^{k} \binom{s+k+1}{i-1}\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}+k-i} \\ =&\frac{1}{k!}\cdot\frac{m}{m+n-k}(s+2)(s+3)\cdots(s+k+1)\binom{\frac{mn}{n-k}+n-k-s-2}{\frac{mk}{n-k}}. \end{split} \end{equation*} Therefore, the proof of this lemma is complete. \end{proof} Now we are ready to prove Theorem~\ref{thm:bipartite}. 
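As a quick numerical sanity check (ours, not part of the proof), the integer case of Lemma~\ref{lem:story} can be verified by direct summation; \texttt{math.comb} returns $0$ when the lower index exceeds the upper, which matches the combinatorial convention used above.

```python
from math import comb

def lhs(x, y, z, w):
    # left-hand side of Lemma lem:story (integer case)
    return sum(comb(j, y) * comb(w - j, z) for j in range(x, w - z + 1))

def rhs(x, y, z, w):
    # right-hand side of Lemma lem:story (integer case)
    lo = max(0, x + y + z - w)
    return sum(comb(x, i) * comb(w - x + 1, z + y - i + 1)
               for i in range(lo, y + 1))

# Check the identity over a small grid of parameters with w - z >= x.
for x in range(1, 6):
    for y in range(1, 6):
        for z in range(1, 6):
            for w in range(x + z, x + z + 6):
                assert lhs(x, y, z, w) == rhs(x, y, z, w)
print("all checks passed")
```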
\begin{proof}[Proof of Theorem~\ref{thm:bipartite}] According to Lemma~\ref{lem:bipartiteStanley}, it suffices to show that \begin{equation*} (mn-1)!\sum_{\alpha}\frac{b_{1}b_{2}\cdots b_{m+n-2}}{b_{m+n-2}(b_{m+n-2}+b_{m+n-3})\cdots (b_{m+n-2}+b_{m+n-3}+\ldots+b_1)} = \frac{(mn)!}{(m+n-1)!}. \end{equation*} where the sum is over all sequences $\alpha=(a_1,a_2,\ldots,a_{m+n-2})$ consisting of $(m-1)$ 0's and $(n-1)$ 1's. Suppose $a_{r_1}=a_{r_2}=\ldots=a_{r_{n-1}} = 1$ where $1\leq r_1<r_2<\ldots<r_{n-1}\leq m+n-2$. Denote $r_0 = 0$. Then for $k=1,2,\ldots,n-1$, \begin{comment} \begin{align*} b_1 = b_2 = \ldots = b_{r_1-1} = 1,&\ b_{r_1} = r_1,\\ \vdots \qquad & \\ b_{r_{k-1}+1} = \ldots = b_{r_k-1} = k,&\ b_{r_k} = r_{k}-k+1,\\ \vdots \qquad & \\ b_{r_{n-1}+1} = \ldots = b_{m+n-2} = n. \end{align*} \end{comment} $$b_{r_{k-1}+1} = b_{r_{k-1}+2} = \cdots = b_{r_k-1} = k, b_{r_k} = r_{k}-k+1$$ and $$b_{r_{n-1}+1} = \cdots = b_{m+n-2} = n.$$ Therefore, \begin{equation*} \prod_{i=1}^{m+n-2}b_i = n^{m+n-2-r_{n-1}}\prod_{j=1}^{n-1} (r_j -j+1) j^{r_{j}-r_{j-1}-1}. \end{equation*} For $1\leq i\leq m+n-2$, write $c_i = b_{m+n-2}+\ldots + b_{i}$, then $$c_{m+n-2} = n, c_{m+n-3} = 2n, \ldots, c_{r_{n-1}+1} = mn+n(n-2-r_{n-1})$$ $$\implies c_{m+n-2}c_{m+n-3}\cdots c_{r_{n-1}+1} = n^{m+n-2-r_{n-1}}\Gamma(m+n-1-r_{n-1})$$ \begin{comment} $$c_{r_{n-1}} = mn +(n-1)(n-2-r_{n-1}),\ldots, c_{r_{n-2}+1} = mn+(n-1)(n-3-r_{n-2}),$$ \begin{align*} \implies c_{r_{n-1}}\cdots c_{r_{n-2}+1} &= (n-1)^{r_{n-1}-r_{n-2}}\prod_{i=r_{n-2}+1}^{r_{n-1}} (\frac{mn}{n-1}+n-2-i) \\ &= (n-1)^{r_{n-1}-r_{n-2}}\frac{\Gamma(\frac{mn}{n-1}+n-2-r_{n-2})}{\Gamma(\frac{mn}{n-1}+n-2-r_{n-1})}. 
\end{align*} \end{comment} For $k=1,2,\ldots,n-1,$ we have $$c_{r_{k}} = mn+k(k-1-r_k),\ldots, c_{r_{k-1}+1} = mn+k(k-2-r_{k-1})$$ \begin{align*} \implies c_{r_{k}}\cdots c_{r_{k-1}+1} &= k^{r_{k}-r_{k-1}}\prod_{i=r_{k-1}+1}^{r_{k}} (\frac{mn}{k}+k-1-i) \\ &= k^{r_{k}-r_{k-1}}\frac{\Gamma(\frac{mn}{k}+k-1-r_{k-1})}{\Gamma(\frac{mn}{k}+k-1-r_{k})}. \end{align*} \begin{comment} $$c_{r_1} = mn-r_1,\ldots, c_1 = mn-1.$$ $$\implies c_{r_1}\cdots c_1 = \frac{\Gamma(mn)}{\Gamma(mn-r_1)}.$$ \end{comment} Denote $r_n = m+n-2$, we have \begin{equation*} \begin{split} \prod_{i=1}^{m+n-2}c_i &= \prod_{j=1}^{n} j^{r_{j}-r_{j-1}}\frac{\Gamma(\frac{mn}{j}+j-1-r_{j-1})}{\Gamma(\frac{mn}{j}+j-1-r_{j})} \\ &=(mn-1)!\bigg(\prod_{j=1}^{n} j^{r_{j}-r_{j-1}}\bigg)\bigg(\prod_{k=1}^{n-1}\frac{\Gamma(\frac{mn}{k+1}+k-r_{k})}{\Gamma(\frac{mn}{k}+k-1-r_{k})}\bigg) \end{split} \end{equation*} Comparing the product of $b_i$'s and $c_i$'s, we obtain $$(mn-1)!\prod_{i=1}^{m+n-2}\frac{b_i}{c_i} = \frac{1}{(n-1)!}\prod_{k=1}^{n-1}(r_k-k+1)\frac{\Gamma(\frac{mn}{k}+k-1-r_{k})}{\Gamma(\frac{mn}{k+1}+k-r_{k})}.$$ \begin{align*} \implies (mn-1)!\sum_{\alpha}\prod_{i=1}^{m+n-2}\frac{b_i}{c_i} =& \frac{1}{(n-1)!}\sum_{1\leq r_1<\ldots<r_{n-1}\leq m+n-2}\prod_{j=1}^{n-1}(r_j-j+1)\frac{\Gamma(\frac{mn}{j}+j-1-r_{j})}{\Gamma(\frac{mn}{j+1}+j-r_{j})} \\ =& \frac{1}{(n-1)!}\sum_{1\leq r_1<\ldots<r_{n-1}\leq m+n-2}\prod_{j=1}^{n-1}R_j, \end{align*} where $R_j = (r_j-j+1)\frac{\Gamma(\frac{mn}{j}+j-1-r_{j})}{\Gamma(\frac{mn}{j+1}+j-r_{j})}.$ We claim that the sum \begin{equation}\label{eq:inductionTheorem} \begin{split} &\frac{1}{(n-1)!}\sum_{1\leq r_1<\ldots<r_{n-1}\leq m+n-2}\prod_{j=1}^{n-1}R_j \\ =&\frac{(m+k)!\Gamma(\frac{m(n-k)}{k})}{(m+n-1)!k!(n-k-1)!}\sum_{1\leq r_1<\ldots<r_{k}\leq m+k-1}(r_k-k+1)\frac{(r_k+n-k)!}{(r_k+1)!}\binom{\frac{mn}{k}+k-2-r_k}{\frac{m(n-k)}{k}-1}\prod_{j=1}^{k-1}R_j \end{split} \end{equation} for all $1\leq k\leq n-1$. To prove this claim, we reversely induct on $k$. 
When $k=n-1$, the right hand side of (\ref{eq:inductionTheorem}) \begin{align*} &\frac{\Gamma(\frac{m}{n-1})}{(n-1)!}\sum_{1\leq r_1<\ldots<r_{n-1}\leq m+n-2}(r_{n-1}-n+2)\frac{\Gamma(\frac{mn}{n-1}+n-2-r_{n-1})}{\Gamma(\frac{m}{n-1})\Gamma(m+n-1-r_{n-1})}\prod_{j=1}^{n-2}R_j \\ =& \frac{1}{(n-1)!}\sum_{1\leq r_1<\ldots<r_{n-1}\leq m+n-2}\prod_{j=1}^{n-1}R_j, \end{align*} as desired. Assume that the claim holds for $k+1$, then the left hand side of (\ref{eq:inductionTheorem}) becomes \begin{equation}\label{eq:inductionStep1} \frac{(m+k+1)!\Gamma(\frac{m(n-k-1)}{k+1})}{(m+n-1)!(k+1)!(n-k-2)!}\sum_{1\leq r_1<\ldots<r_{k+1}\leq m+k}(r_{k+1}-k)\frac{(r_{k+1}+n-k-1)!}{(r_{k+1}+1)!}\binom{\frac{mn}{k+1}+k-1-r_{k+1}}{\frac{m(n-k-1)}{k+1}-1}\prod_{j=1}^{k}R_j \end{equation} Setting $t=r_{k+1},s= r_{k}, k\rightarrow n-k-1$ in Lemma~\ref{lem:binomialComputation}, we have \begin{align*} &\sum_{1\leq r_1<\ldots<r_{k+1}\leq m+k}(r_{k+1}-k)\frac{(r_{k+1}+n-k-1)!}{(r_{k+1}+1)!}\binom{\frac{mn}{k+1}+k-1-r_{k+1}}{\frac{m(n-k-1)}{k+1}-1}\prod_{j=1}^{k}R_j \\ =& \sum_{1\leq r_1<\ldots<r_k\leq m+k-1}\bigg[\bigg(\prod_{j=1}^{k}R_j\bigg)\sum_{r_{k+1} = r_k+1}^{m+k}(r_{k+1}-k)\frac{(r_{k+1}+n-k-1)!}{(r_{k+1}+1)!}\binom{\frac{mn}{k+1}+k-1-r_{k+1}}{\frac{m(n-k-1)}{k+1}-1}\bigg] \\ =& \sum_{1\leq r_1<\ldots<r_k\leq m+k-1}\bigg[\bigg(\prod_{j=1}^{k}R_j\bigg)\frac{m}{m+k+1}(r_k+2)(r_k+3)\cdots(r_k+n-k)\binom{\frac{mn}{k+1}+k-r_k-1}{\frac{m(n-k-1)}{k+1}}\bigg] \\ =& \sum_{1\leq r_1<\ldots<r_k\leq m+k-1}\frac{m}{m+k+1}\cdot\frac{(r_k+n-k)!}{(r_k+1)!}\binom{\frac{mn}{k+1}+k-r_k-1}{\frac{m(n-k-1)}{k+1}}\prod_{j=1}^{k}R_j. 
\end{align*} Thus, \begin{align*} (\ref{eq:inductionStep1}) &= \frac{(m+k+1)!\Gamma(\frac{m(n-k-1)}{k+1})}{(m+n-1)!(k+1)!(n-k-2)!}\sum_{1\leq r_1<\ldots<r_k\leq m+k-1}\frac{m(r_k+n-k)!}{(m+k+1)(r_k+1)!}\binom{\frac{mn}{k+1}+k-r_k-1}{\frac{m(n-k-1)}{k+1}}\prod_{j=1}^{k}R_j \\ &=\frac{(m+k)!\Gamma(\frac{m(n-k-1)}{k+1})}{(m+n-1)!(k+1)!(n-k-2)!} \sum_{1\leq r_1<\ldots<r_k\leq m+k-1}\frac{m(r_k+n-k)!}{(r_k+1)!}\frac{\Gamma(\frac{mn}{k+1}+k-r_k)}{\Gamma(\frac{m(n-k-1)}{k+1}+1)\Gamma(m+k-r_k)}\prod_{j=1}^{k}R_j \\ &=\frac{(m+k)!}{(m+n-1)!k!(n-k-1)!}\cdot \\ &\qquad \sum_{1\leq r_1<\ldots<r_k\leq m+k-1}\frac{(r_k+n-k)!}{(r_k+1)!}\frac{\Gamma(\frac{mn}{k+1}+k-r_k)}{\Gamma(m+k-r_k)} (r_k-k+1)\frac{\Gamma(\frac{mn}{k}+k-1-r_{k})}{\Gamma(\frac{mn}{k+1}+k-r_{k})}\prod_{j=1}^{k-1}R_j\\ &=\frac{(m+k)!\Gamma(\frac{m(n-k)}{k})}{(m+n-1)!k!(n-k-1)!}\sum_{1\leq r_1<\ldots<r_{k}\leq m+k-1}(r_k-k+1)\frac{(r_k+n-k)!}{(r_k+1)!}\binom{\frac{mn}{k}+k-2-r_k}{\frac{m(n-k)}{k}-1}\prod_{j=1}^{k-1}R_j. \end{align*} Thus the claim follows by downward induction. In particular, when $k=1$, the right-hand side of $(\ref{eq:inductionTheorem})$ becomes \begin{equation*} \frac{(m+1)!(mn-m-1)!}{(m+n-1)!(n-2)!}\sum_{r_1=1}^m r_1\frac{(r_1+n-1)!}{(r_1+1)!}\binom{mn-r_1-1}{mn-m-1}. \end{equation*} Again, setting $t=r_1, s=0, k=n-1$ in Lemma~\ref{lem:binomialComputation}, $$\sum_{r_1=1}^m r_1\frac{(r_1+n-1)!}{(r_1+1)!}\binom{mn-r_1-1}{mn-m-1} = \frac{m}{m+1}\cdot n!\binom{mn-1}{mn-m}.$$ Therefore, \begin{align*} (\ref{eq:inductionTheorem}) &= \frac{(m+1)!(mn-m-1)!}{(m+n-1)!(n-2)!}\cdot\frac{m}{m+1}\cdot n!\binom{mn-1}{mn-m} \\ &= \frac{(mn)!}{(m+n-1)!}. \end{align*} This completes the proof of Theorem~\ref{thm:bipartite}. \end{proof} \section{Trees}\label{sec:trees} \subsection{Tree Shelling Number Computation} \ Trees are among the most fundamental types of graphs. However, unlike the complete bipartite graph case, there is no simple formula for tree shelling numbers.
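Although no simple closed formula exists, tree shelling numbers are easy to compute in practice. As a concrete cross-check on the method developed in this section, the following Python sketch (function names are ours) counts shellings of a small tree in two independent ways: by brute force from the definition (an ordering of the edges in which every edge after the first is adjacent to an earlier one), and via the product formula of Propositions~\ref{prop:rootedTreeCounting} and~\ref{prop:treeCounting} below.

```python
from itertools import permutations
from math import factorial

def shellings_bruteforce(n, edges):
    # Count orderings of the edges in which every edge after the
    # first shares a vertex with some earlier edge.
    count = 0
    for order in permutations(edges):
        used = {order[0][0], order[0][1]}
        ok = True
        for u, v in order[1:]:
            if u in used or v in used:
                used.add(u)
                used.add(v)
            else:
                ok = False
                break
        if ok:
            count += 1
    return count

def shellings_formula(n, edges):
    # F(T) = (1/2) * sum over roots v of F(T_v), where
    # F(T_v) = n! / prod over u of |T_v(u)| (product of subtree sizes).
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def rooted(v):
        parent = {v: None}
        order = []
        stack = [v]
        while stack:                       # DFS recording discovery order
            u = stack.pop()
            order.append(u)
            for w in adj[u]:
                if w not in parent:
                    parent[w] = u
                    stack.append(w)
        size = [1] * n
        for u in reversed(order):          # accumulate subtree sizes bottom-up
            if parent[u] is not None:
                size[parent[u]] += size[u]
        prod = 1
        for s in size:
            prod *= s
        return factorial(n) // prod

    return sum(rooted(v) for v in range(n)) // 2

# A path on 5 vertices has 2^(5-2) = 8 shellings; a star K_{1,4} has 4! = 24.
path = [(i, i + 1) for i in range(4)]
star = [(0, i) for i in range(1, 5)]
assert shellings_bruteforce(5, path) == shellings_formula(5, path) == 8
assert shellings_bruteforce(5, star) == shellings_formula(5, star) == 24
```

Both methods agree, reproducing $2^{n-2}$ for a path and $(n-1)!$ for a star, the two extreme cases discussed below.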
The goal of this section is to give a relatively easy method to compute the number of shellings of a tree. Throughout this section, let $T$ be a tree with $n$ vertices and $n-1$ edges. We first focus on computing the number of shellings of rooted trees, whose definition is given below. \begin{definition}\label{def:shellingOfRootedTree} Let $v$ be a vertex of $T$. The rooted tree induced by $T$ and rooted at $v$ is denoted as $T_v$. A \textit{shelling of rooted tree} $T_v$ is a shelling $\sigma$ of $T$ such that $\sigma(1)$ is an edge incident to $v$. \end{definition} The following definitions are used to efficiently describe structures in a (rooted) tree. \begin{definition}\label{def:parent} Let $T_v$ be a tree rooted at vertex $v$. We say a vertex $u$ is a \textit{parent} of vertex $w$ (and $w$ is a \textit{child} of $u$) if $(w,u)$ is an edge and $u$ lies closer to the root than $w$. A \textit{descending path} from $u$ to $w$ in the rooted tree $T_v$ is a structure $$u-v_1-v_2-\cdots -v_r-w$$ where each vertex is a parent of the subsequent vertex. We say $u$ is an \textit{ancestor} of $w$ (and $w$ is a \textit{descendant} of $u$) if there exists a descending path from $u$ to $w$. \end{definition} \begin{definition}\label{def:rootedSubtree} Let $u,v\in T.$ The (rooted)\textit{ subtree of $T_v$ rooted at $u$}, denoted as $T_v(u)$, is a subgraph of $T$ rooted at $u$ and induced by the set of vertices $$\{w\in T: w \text{ is a descendant of } u \text{ in }T_v\}.$$ See Figure~\ref{fig:definition} for an example. 
\end{definition} \begin{figure}[h] \begin{tikzpicture}[scale=1] \draw (2,-1)--(0,0)--(-2,-1); \draw (0,0)--(0.5,-1); \draw (-1.5,-2)--(-2,-1)--(-2.5,-2); \draw (0.5,-2)--(0.5,-1)--(1.5,-2); \draw (-0.5,-2)--(0.5,-1); \draw (-3,-3)--(-2.5,-2)--(-2,-3); \draw (-1.5,-3)--(-1.5,-2); \draw (0,-3)--(0.5,-2)--(1,-3); \draw (1.75,-3)--(1.5,-2); \node at (0,0){$\bullet$}; \node at (-2,-1){$\bullet$}; \node at (2,-1){$\bullet$}; \node at (0.5,-1){$\bullet$}; \node at (-2.5,-2){$\bullet$}; \node at (-1.5,-2){$\bullet$}; \node at (0.5,-2){$\bullet$}; \node at (-0.5,-2){$\bullet$}; \node at (1.5,-2){$\bullet$}; \node at (-3,-3){$\bullet$}; \node at (-2,-3){$\bullet$}; \node at (-1.5,-3){$\bullet$}; \node at (1,-3){$\bullet$}; \node at (0,-3){$\bullet$}; \node at (1.75,-3){$\bullet$}; \draw (0,0) node[above]{$v$}; \draw (-2,-1) node[above]{$u$}; \draw (-2.2,-2.2) ellipse (1.2 and 1.7); \draw (-3.1,-1) node[left]{$T_v(u)$}; \end{tikzpicture} \caption{Definition of $T_v(u)$.} \label{fig:definition} \end{figure} For a tree $T$, the edge set of $T$ is denoted as $E(T)$. The vertex set of $T$ is denoted as $V(T)$, or $T$ for simplicity. Accordingly, $|T|$ is the number of vertices in $T$. The same notations are used for rooted trees. The following proposition provides a way to calculate the number of shellings of a rooted tree $T_v$ based on the sizes of its rooted subtrees. \begin{proposition}\label{prop:rootedTreeCounting} $$F(T_v) = \frac{n!}{\prod_{u\in T} |T_v(u)|}.$$ \end{proposition} \begin{proof} The proposition holds for $n=2$ by direct inspection. Assume that it holds for all trees with fewer than $n$ vertices, and consider a tree $T$ with $n$ vertices. Suppose the neighbors of $v$ are $u_1, u_2,\ldots,u_r.$ For $1\leq i\leq r,$ define $T^{(i)}$ to be the tree $T_{v}(u_i)$ with an additional edge $(u_i,v)$. Given fixed shellings $\sigma_i$ of $T_v^{(i)}$ for all $1\leq i\leq r$, we can construct shellings of $T_v$ by merging the $\sigma_i$'s together while preserving the order of each $\sigma_i$.
Every shelling of $T_v$ can be uniquely constructed in this way. Therefore, $$F(T_v) = \binom{|E(T)|}{|E(T^{(1)})|,|E(T^{(2)})|,\ldots,|E(T^{(r)})|}\prod_{i=1}^r F(T_v^{(i)}).$$ By the induction hypothesis, \begin{align*} F(T_v^{(i)}) &= \frac{|T^{(i)}|!}{\prod_{w\in T^{(i)}}|T_v^{(i)}(w)|} = \frac{|T^{(i)}|!}{|T^{(i)}_v(v)|\prod_{w\neq v, w\in T^{(i)}}|T_v(w)|} \\ &=\frac{|E(T^{(i)})|!}{\prod_{w\neq v, w\in T^{(i)}}|T_v(w)|}. \end{align*} Therefore, \begin{align*} F(T_v) &= \frac{|E(T)|!}{|E(T^{(1)})|!\cdots |E(T^{(r)})|!}\prod_{i=1}^r\frac{|E(T^{(i)})|!}{\prod_{w\neq v, w\in T^{(i)}}|T_v(w)|} \\ &= \frac{|E(T)|!}{\prod_{w\neq v, w\in T} |T_v(w)|} \\ &= \frac{n!}{\prod_{w\in T} |T_v(w)|}. \end{align*} This completes the induction. \end{proof} \begin{corollary}\label{cor:rootedTreeRatio} Suppose that $(u,v)$ is an edge of $T$. Then $$\frac{F(T_v)}{F(T_u)} = \frac{|T_u(v)|}{|T_v(u)|} = \frac{|T_u(v)|}{n-|T_u(v)|}.$$ \end{corollary} \begin{proof} For any vertex $w\neq u,v$, $T_u(w)$ and $T_v(w)$ are the same subtree of $T$. Therefore, by Proposition~\ref{prop:rootedTreeCounting}, \begin{align*} \frac{F(T_v)}{F(T_u)} &= \frac{\prod_{w\in T} |T_u(w)|}{\prod_{w\in T} |T_v(w)|} = \frac{|T_u(v)|\cdot |T_u(u)|}{|T_v(u)|\cdot |T_v(v)|} \\ &= \frac{|T_u(v)|}{|T_v(u)|} = \frac{|T_u(v)|}{n-|T_u(v)|}. \end{align*} \end{proof} Corollary~\ref{cor:rootedTreeRatio} establishes a simple relationship between the numbers of shellings of $T$ rooted at adjacent vertices. In this way, by only calculating $F(T_v)$ for a single vertex $v$, one can quickly derive $F(T_u)$ for all $u\in T.$ For example, suppose $T$ is a path of length $n-1$, as shown in Figure~\ref{fig:path}. Then $F(T_{v_1}) = 1$, and $$F(T_{v_{i+1}}) = \frac{|T_{v_i}(v_{i+1})|}{n-|T_{v_i}(v_{i+1})|}F(T_{v_i}) = \frac{n-i}{i}F(T_{v_i})$$ by Corollary~\ref{cor:rootedTreeRatio}.
This gives $F(T_{v_i}) = \binom{n-1}{i-1}$ for all $i=1,2,\ldots,n.$ \begin{figure}[h] \begin{tikzpicture}[scale=1] \draw(0,0)--(4,0); \draw(5,0)--(7,0); \node at (0,0){$\bullet$}; \node at (1,0){$\bullet$}; \node at (2,0){$\bullet$}; \node at (3,0){$\bullet$}; \node at (7,0){$\bullet$}; \node at (6,0){$\bullet$}; \node[draw=none] (ellipsis2) at (4.5,0) {$\cdots$}; \draw (0,0) node[below]{$v_1$}; \draw (1,0) node[below]{$v_2$}; \draw (2,0) node[below]{$v_3$}; \draw (3,0) node[below]{$v_4$}; \draw (6,0) node[below]{$v_{n-1}$}; \draw (7,0) node[below]{$v_n$}; \end{tikzpicture} \caption{A path of length $n-1$. The shelling number is $2^{n-2}$.} \label{fig:path} \end{figure} Finally, the following proposition relates the number of shellings of $T$ with those of its rooted trees. \begin{proposition}\label{prop:treeCounting} $$F(T) = \frac{1}{2}\sum_{v\in T} F(T_v).$$ \end{proposition} \begin{proof} Note that any shelling of $T$ beginning with edge $(u,v)$ is counted exactly twice on the right-hand side: once as a shelling of $T_u$ and once as a shelling of $T_v$. Thus, Proposition~\ref{prop:treeCounting} follows. \end{proof} \begin{example}\label{ex:path} By Proposition~\ref{prop:treeCounting} and the discussion under Corollary~\ref{cor:rootedTreeRatio}, the number of shellings of a path of length $n-1$ is $$\frac{1}{2}\sum_{i=1}^n \binom{n-1}{i-1} = 2^{n-2}.$$ \end{example} \subsection{Bounds on Tree Shelling Number} \ The goal of this section is to give several bounds on tree shelling numbers in terms of various parameters of the tree, such as vertex degrees and diameter. A trivial upper bound is $(n-1)!$, since every shelling is also a permutation of the edges. This bound is achieved when $T$ is a star, in which any two edges are adjacent. Here are the main theorems of the section. \begin{theorem}\label{thm:lowerBoundDegree} \begin{equation*} F(T) \geq \prod_{v\in T} d(v)!, \end{equation*} where $d(v)$ is the degree of a vertex $v$ in $T$. The equality holds if and only if $T$ is a path of length $n-1$ or a star.
\end{theorem} \begin{remark} A weaker lower bound $F(T)\geq\prod_{v\in T}\big(d(v)-1\big)!$ can be shown easily by observation. However, the extra factor of $\prod_{v\in T}d(v)$ in Theorem~\ref{thm:lowerBoundDegree} requires considerably more effort. \end{remark} \begin{theorem}\label{thm:upperBoundDiameter} Suppose the diameter of $T$ is $\ell$. When $\ell$ is even, $$F(T)\leq \frac{(n-1-\frac{\ell}{2})!}{(\frac{\ell}{2})!}\bigg[\binom{n-2}{\frac{\ell}{2}}+\sum_{i=0}^{\frac{\ell}{2}-1}\binom{n-1}{i}\bigg].$$ When $\ell$ is odd, $$F(T) \leq \frac{(n-\frac{\ell+3}{2})!}{2(\frac{\ell+1}{2})!}\bigg[(n-1-\ell)\binom{n-2}{\frac{\ell-1}{2}}+n\sum_{i=0}^{\frac{\ell-1}{2}}\binom{n-1}{i}\bigg].$$ The equality holds if and only if $T$ has the following form: there exists a path $$v_0 - v_1 - \cdots -v_\ell$$ such that every edge not in this path is adjacent to $v_{\lfloor \frac{\ell}{2}\rfloor}.$ \end{theorem} Before proving Theorem~\ref{thm:lowerBoundDegree}, it is worth noting the following inequality, which relates the numbers of shellings of $T$ and $T_v$. \begin{lemma}\label{lem:weightInequality} Let $v$ be a vertex in $T$ and $\ell$ be the length of the longest descending path in $T_v$. Then $$F(T)\leq \bigg[\sum_{k=0}^{\ell-1}\binom{n-2}{k}\bigg] F(T_v).$$ In particular, $F(T)\leq 2^{n-2}F(T_v).$ \end{lemma} \begin{proof} Let $L = v - v_1 - v_2 -\cdots - v_\ell$ be the longest descending path in $T_v$. Consider the following operations on $T$: \begin{enumerate} \item[1.] Suppose $i\leq \ell-2$ is the first index such that $v_i$ has a child $v'\neq v_{i+1}$ in $T_v$. Remove $T_v(v')$ and attach it to $v_{i+1}$ (i.e., the children of $v'$ become children of $v_{i+1}$). Furthermore, remove the edge $(v',v_i)$ and add a new edge $(v',v_{i+1})$, so that $v'$ becomes a leaf attached to $v_{i+1}$. This operation is illustrated in Figure~\ref{fig:pathAdjustmentForWeight}. \item[2.] Repeat step 1 until no further operations can be performed.
\end{enumerate} \begin{figure}[h] \begin{tikzpicture}[scale=1] \draw(0,0)--(2,0); \draw(3,0)--(6,0); \draw(7,0)--(8,0); \draw(4,0)--(4.5,0.866); \draw(4,0)--(4.707,-0.707); \draw(4,0)--(4.259,-0.966); \node at (0,0){$\bullet$}; \node at (1,0){$\bullet$}; \node at (4,0){$\bullet$}; \node at (5,0){$\bullet$}; \node at (4.5,0.866){$\bullet$}; \node at (4.707,-0.707){$\bullet$}; \node at (4.259,-0.966){$\bullet$}; \node at (8,0){$\bullet$}; \draw[rotate around={-15:(4.5,-0.866)}] (5.5,-0.866) ellipse (1.5 and 0.5); \draw[rotate around={15:(4.5,0.866)}] (5.5,0.866) ellipse (1.5 and 0.5); \node[draw=none] (ellipsis2) at (2.5,0) {$\cdots$}; \node[draw=none] (ellipsis2) at (6.5,0) {$\cdots$}; \draw (0,0) node[below]{$v$}; \draw (1,0) node[below]{$v_1$}; \draw (4,0) node[below left]{$v_i$}; \draw (5,0) node[below]{$v_{i+1}$}; \draw (4.5,0.866) node[above]{$v'$}; \draw (5.5,1.1) node[]{$T_{v}(v')$}; \draw (8,0) node[below]{$v_\ell$}; \end{tikzpicture} $$\Big\Downarrow$$ \begin{tikzpicture}[scale=1] \draw(0,0)--(2,0); \draw(3,0)--(6,0); \draw(7,0)--(8,0); \draw(5,0)--(4.5,0.866); \draw(4,0)--(4.707,-0.707); \draw(4,0)--(4.259,-0.966); \node at (0,0){$\bullet$}; \node at (1,0){$\bullet$}; \node at (4,0){$\bullet$}; \node at (5,0){$\bullet$}; \node at (4.5,0.866){$\bullet$}; \node at (4.707,-0.707){$\bullet$}; \node at (4.259,-0.966){$\bullet$}; \node at (8,0){$\bullet$}; \draw[rotate around={-15:(4.5,-0.866)}] (5.5,-0.866) ellipse (1.5 and 0.5); \draw[rotate around={36:(5,0)}] (6,0) ellipse (1.5 and 0.5); \node[draw=none] (ellipsis2) at (2.5,0) {$\cdots$}; \node[draw=none] (ellipsis2) at (6.5,0) {$\cdots$}; \draw (0,0) node[below]{$v$}; \draw (1,0) node[below]{$v_1$}; \draw (4,0) node[below left]{$v_i$}; \draw (5,0) node[below]{$v_{i+1}$}; \draw (4.5,0.866) node[above]{$v'$}; \draw (5.8,0.6) node[]{$T_{v}(v')$}; \draw (8,0) node[below]{$v_\ell$}; \end{tikzpicture} \caption{Operation on $T$: moving edges away from root.} \label{fig:pathAdjustmentForWeight} \end{figure} Such 
operations preserve the length of the longest descending path in $T_v$ and terminate after finitely many steps. Let $T^{(k)}$ be the tree after the $k$th operation. For $u\in T$, define the weight of $u$ in $T^{(k)}$ by $$W_k(u) = \frac{F(T^{(k)}_u)}{F(T^{(k)}_v)}.$$ We claim that the sum of the weights of all vertices is non-decreasing after each operation, i.e., \begin{equation}\label{eq:weightInequality} \sum_{u\in T}W_k(u) \leq \sum_{u\in T}W_{k+1}(u). \end{equation} It suffices to prove the claim for $k=0.$ Suppose $(u,w)$ is an edge in $T^{(k)}$; then by Corollary~\ref{cor:rootedTreeRatio}, \begin{equation}\label{eq:weightRatio} \frac{W_k(u)}{W_k(w)} = \frac{|T^{(k)}_w(u)|}{|T^{(k)}_u(w)|} = \frac{|T^{(k)}_w(u)|}{n-|T^{(k)}_w(u)|}. \end{equation} Therefore, if $v-u_1-u_2-\cdots - u_r = u$ is a path in $T^{(k)}_v$, then \begin{equation*} W_k(u) = \prod_{j=1}^r \frac{|T^{(k)}_v(u_j)|}{n-|T^{(k)}_v(u_j)|}. \end{equation*} Note that for all $u\neq v', v_{i+1}$, $|T^{(1)}_v(u)| = |T_v(u)|$. For $w\not\in T_v(v')\cup T_v(v_{i+1})$, $v',v_{i+1}$ are not on the path from $v$ to $w$, so \begin{equation*} W_0(w) = W_1(w). \end{equation*} Write $|T_v(v')|=a, |T_v(v_{i+1})| = b$, then $|T^{(1)}_v(v')| = 1$, $|T^{(1)}_v(v_{i+1})| = |T_v(v')|+|T_v(v_{i+1})| = a+b.$ By (\ref{eq:weightRatio}), $$W_0(v') = \frac{a}{n-a}W_0(v_i).$$ $$W_0(v_{i+1}) = \frac{b}{n-b}W_0(v_i).$$ $$W_1(v_{i+1}) = \frac{a+b}{n-a-b}W_1(v_i).$$ Since $W_0(v_i) = W_1(v_i)$ and $\frac{a}{n-a}+\frac{b}{n-b}\leq \frac{a+b}{n-a-b}$ (note that $a+b<n$), \begin{equation*} W_0(v')+W_0(v_{i+1})\leq W_1(v_{i+1}). \end{equation*} For $w\in T_v(v')\setminus\{v'\}$, by (\ref{eq:weightRatio}), \begin{equation*} \frac{W_1(w)}{W_1(v_{i+1})} = \frac{W_0(w)}{W_0(v')} \implies W_0(w)\leq W_1(w). \end{equation*} Similarly, for $w\in T_v(v_{i+1})\setminus\{v_{i+1}\}$, \begin{equation*} \frac{W_1(w)}{W_1(v_{i+1})} = \frac{W_0(w)}{W_0(v_{i+1})}\implies W_0(w)\leq W_1(w).
\end{equation*} Therefore, we conclude that $$\sum_{w\in T} W_0 (w)\leq \sum_{w\in T} W_1(w),$$ and (\ref{eq:weightInequality}) is proved. Finally, suppose the operation stops after step $M$; then $T^{(M)}$ is the tree in which all vertices not in $L$ are adjacent to $v_{\ell-1}$. Thus, by (\ref{eq:weightRatio}), \begin{align*} \sum_{u\in T} W_{M}(u) &= W_M(v)+W_M(v_1)+\cdots+W_M(v_{\ell-1}) + (n-\ell)W_M(v_\ell) \\ &=\sum_{i=0}^{\ell-1}\binom{n-1}{i} + (n-\ell)\frac{\binom{n-1}{\ell-1}}{n-1}\\ &=2\sum_{i=0}^{\ell-1}\binom{n-2}{i}. \end{align*} By (\ref{eq:weightInequality}) and Proposition~\ref{prop:treeCounting}, $$\frac{F(T)}{F(T_v)} = \frac{1}{2}\sum_{u\in T}W_0(u) \leq \frac{1}{2}\sum_{u\in T}W_M(u) = \sum_{i=0}^{\ell-1}\binom{n-2}{i},$$ so the proof is complete. \end{proof} Now we are ready to prove Theorems~\ref{thm:lowerBoundDegree} and~\ref{thm:upperBoundDiameter}. \begin{proof}[Proof of Theorem~\ref{thm:lowerBoundDegree}] We induct on $|T|$. When $|T| = 2$, $F(T) = 1 = \prod_{v\in T} d(v)!$, and $T$ is both a path and a star, so the statement holds. Assume that the statement holds for all trees with $|T|<n$, and consider the case $|T| = n$. If $T$ is a path of length $n-1$, then by Example~\ref{ex:path}, $$F(T) = 2^{n-2} = \prod_{v\in T} d(v)!,$$ as desired. Suppose that $T$ is not a path; then there exists a vertex $v$ of degree $d\geq 3.$ Let $u_1,u_2,\ldots,u_d$ be the vertices adjacent to $v$ and write $|T_v(u_i)| = s_i$ for $i=1,2,\ldots,d$. Assume $s_1\leq s_2\leq \ldots \leq s_d.$ Let $T'$ be the subtree of $T$ obtained by removing all vertices in $T_v(u_1)$ and all edges incident to those vertices. Let $T''$ be the subtree of $T$ induced by the edges in $E(T)\setminus E(T').$ See Figure~\ref{fig:theoremLowerBound} for an illustration.
\begin{figure}[h] \begin{tikzpicture}[scale=1] \draw(0,0)--(1,0); \draw(0,0)--(0,1); \draw(0,0)--(0,-1); \draw(0,0)--(-0.866,-0.5); \draw(0,0)--(-0.866,0.5); \node at (0,0){$\bullet$}; \node at (1,0){$\bullet$}; \node at (0,1){$\bullet$}; \node at (0,-1){$\bullet$}; \node at (-0.866,-0.5){$\bullet$}; \node at (-0.866,0.5){$\bullet$}; \draw (1.5,0) ellipse (2 and 0.5); \draw (-0.7,0) ellipse (1.3 and 2); \draw (0,0) node[below right]{$v$}; \draw (1,0) node[below]{$u_1$}; \draw (0,1) node[above]{$u_2$}; \draw (-0.866,0.5) node[above]{$u_3$}; \draw (0,-1) node[below]{$u_d$}; \draw (2,0) node[]{$T''$}; \draw (-1.2,0) node[]{$T'$}; \end{tikzpicture} \caption{Merging a shelling of $T'$ and $T''_v$ into a shelling of $T$.} \label{fig:theoremLowerBound} \end{figure} Suppose $\sigma'$ is a shelling of $T'$ and $\sigma''$ a shelling of $T''_v$. Consider the following method to merge $\sigma'$ and $\sigma''$ into $\sigma$, a permutation of $E(T)$, such that (i) the orders of the edges in $\sigma'$ and in $\sigma''$ are preserved; (ii) $\sigma''(1) = (v,u_1)$ is not one of the first $s_d$ edges after the merge. Note that $\sigma$ must be a shelling of $T$, since at least one of $\{\sigma'(k): 1\leq k\leq s_d\}$ is incident to $v$, so $(v,u_1)$ is adjacent to some previous edge in $\sigma$. For fixed $\sigma'$ and $\sigma''$, the number of $\sigma$ constructed by the above merging method is $$\binom{n-1-s_d}{|E(T'')|} = \binom{s_1+s_2+\cdots+s_{d-1}}{s_1}.$$ Therefore, \begin{equation}\label{eq:shellingExtensionNumber} F(T) \geq F(T')F(T''_v)\binom{s_1+s_2+\cdots+ s_{d-1}}{s_1}. \end{equation} Note that the shellings of $T$ constructed above do not include those whose first edge is $(v,u_1)$, so we can replace ``$\geq$'' with ``$>$'' in (\ref{eq:shellingExtensionNumber}).
Furthermore, by Lemma~\ref{lem:weightInequality}, $$F(T''_v) \geq \frac{F(T'')}{2^{s_1-1}}.$$ By the induction hypothesis, $$F(T')F(T'') \geq \frac{1}{d}\prod_{u\in T} d(u)!.$$ Thus, (\ref{eq:shellingExtensionNumber}) implies $$F(T) > \frac{1}{2^{s_1-1}d}\binom{s_1+s_2+\cdots+s_{d-1}}{s_1}\prod_{u\in T}d(u)!.$$ If $\binom{s_1+s_2+\cdots+s_{d-1}}{s_1} \geq 2^{s_1-1}d$ for some choice of $v$ with degree $d\geq 3$, then $F(T) > \prod_{u\in T} d(u)!$ and equality never holds. Otherwise, $\binom{s_1+s_2+\cdots+s_{d-1}}{s_1} < 2^{s_1-1}d$ for all such choices of $v$. By Lemma~\ref{app:ineq1}, $s_1 = s_2 = \cdots = s_{d-1} = 1.$ Therefore, $T$ must be of the following type: every vertex $v$ of degree $d(v) \geq 3$ is adjacent to at least $d(v) - 1$ leaves. If $T$ is a star, then $F(T) = (n-1)!$ is an equality case. If not, $T$ has the form shown in Figure~\ref{fig:Stars}, where $v_0$ and $v_m$ are the only two possible vertices with degree at least 3. \begin{figure}[h] \begin{tikzpicture}[scale=1] \draw(0,0)--(2,0); \draw(3,0)--(4,0); \draw(0,0)--(0,1); \draw(0,0)--(0,-1); \draw(0,0)--(-0.707,0.707); \draw(0,0)--(-1,0); \draw(4,0)--(4,-1); \draw(4,0)--(4,1); \draw(4,0)--(5,0); \draw(4,0)--(4.707,0.707); \node[draw=none] (ellipsis2) at (2.5,0) {$\cdots$}; \node at (0,0){$\bullet$}; \node at (1,0){$\bullet$}; \node at (0,1){$\bullet$}; \node at (0,-1){$\bullet$}; \node at (-0.707,0.707){$\bullet$}; \node at (-1,0){$\bullet$}; \node at (4,0){$\bullet$}; \node at (5,0){$\bullet$}; \node at (4,1){$\bullet$}; \node at (4,-1){$\bullet$}; \node at (4.707,0.707){$\bullet$}; \node[rotate around={45:(0,0)}] (ellipsis1) at (-0.55,-0.45) {$\vdots$}; \node[rotate around={45:(0,0)}] (ellipsis1) at (4.5,-0.5) {$\cdots$}; \draw (0,0) node[below right]{$v_0$}; \draw (1,0) node[below]{$v_1$}; \draw (4,0) node[below left]{$v_m$}; \end{tikzpicture} \caption{The only type of trees that satisfy the second case's condition.} \label{fig:Stars} \end{figure} Suppose $d(v_0) = d_1$, $d(v_m) = d_2$ with $2\leq
d_1\leq d_2$. If $m=1$, then by Propositions~\ref{prop:rootedTreeCounting} and~\ref{prop:treeCounting} and Lemma~\ref{app:ineq2}, \begin{align*} F(T) &= \frac{d_1^2+d_2^2+d_1d_2-d_1-d_2}{d_1d_2}(d_1+d_2-2)! \\ &\geq 2\cdot(d_1+d_2-2)!\\ &\geq d_1!d_2! = \prod_{u\in T} d(u)!. \end{align*} The equality holds only if $d_1 = d_2 = 2$ and $T$ is a path. Now suppose $m\geq 2.$ Consider the following type of shelling of $T$: the first $m-1$ edges of $\sigma$ consist of $\{(v_i,v_{i+1}): 0\leq i\leq m-2\}$. The number of shellings of this type is $$2^{m-2}(d_1-1)!(d_2-1)!\binom{d_1+d_2-1}{d_2}.$$ Similarly, the number of shellings whose first $m-1$ edges consist of $\{(v_i,v_{i+1}): 1\leq i\leq m-1\}$ is $$2^{m-2}(d_1-1)!(d_2-1)!\binom{d_1+d_2-1}{d_1}.$$ Thus, by Lemma~\ref{app:ineq3}, \begin{align*} F(T) &\geq 2^{m-2}(d_1-1)!(d_2-1)!\bigg[\binom{d_1+d_2-1}{d_2}+\binom{d_1+d_2-1}{d_1}\bigg] \\ &= 2^{m-2}(d_1-1)!(d_2-1)!\binom{d_1+d_2}{d_1}\\ &> 2^{m-1}d_1!d_2! = \prod_{u\in T}d(u)! \end{align*} unless $d_1=2$ and $d_2\leq 4$, in which case we have: \begin{itemize} \item $(d_1,d_2) = (2,2)$: $F(T) = 2^{n-2} = \prod_{u\in T}d(u)!.$ In this equality case, $T$ is a path. \item $(d_1,d_2) = (2,3)$: $F(T) = 2^{n-1}-2 > 3!\cdot 2^{n-4} = \prod_{u\in T}d(u)!.$ \item $(d_1,d_2) = (2,4)$: $F(T) = 6(2^{n-2}-n+1) > 4!\cdot 2^{n-5} = \prod_{u\in T}d(u)!.$ \end{itemize} By induction, the proof of Theorem~\ref{thm:lowerBoundDegree} is complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:upperBoundDiameter}] Let $L = v_0 - v_1 -\cdots - v_\ell$ be a longest path in $T$. First, we reduce the problem to the case where all edges in $T$ are incident to $\{v_1,v_2,\ldots,v_{\ell-1}\}$. If not, construct a new tree $T'$ by removing every edge $e$ not incident to $\{v_i:1\leq i\leq \ell-1\}$ and adding a corresponding edge incident to $v_j$, where $v_j$ is the vertex of $L$ closest to $e$. Identifying corresponding edges, every shelling of $T$ is also a shelling of $T'$.
Thus, $F(T)\leq F(T')$ while the longest path remains the same. Under this assumption, denote $V' = T\setminus \{v_0,v_1,\ldots,v_\ell\}.$ Consider the following operations: \begin{enumerate} \item[1.] Let $i$ be the smallest index such that $v_i$ has degree $\geq 3.$ If $i<\frac{\ell}{2}$, we remove all edges of the form $(v_i,u)$ for $u\in V'$ and add edges $(u,v_{i+1}).$ \item[2.] Repeat step 1 until no further operations can be performed. \item[3.] Let $j$ be the largest index such that $v_j$ has degree $\geq 3.$ If $j>\frac{\ell}{2}$, we remove all edges of the form $(v_j,u)$ for $u\in V'$ and add edges $(u,v_{j-1}).$ \item[4.] Repeat step 3 until no further operations can be performed. \end{enumerate} \begin{figure}[h] \begin{tikzpicture}[scale=1.3] \draw(1,0)--(7,0); \draw(2.293,0.707)--(3,0)--(3.707,0.707); \draw(3,0)--(3,1); \draw(4,-1)--(4,0); \draw(4.5,-0.866)--(5,0)--(5.5,-0.866); \draw(6,0)--(6,1); \node at (1,0){$\bullet$}; \node at (2,0){$\bullet$}; \node at (3,0){$\bullet$}; \node at (4,0){$\bullet$}; \node at (5,0){$\bullet$}; \node at (6,0){$\bullet$}; \node at (7,0){$\bullet$}; \node at (2.293,0.707){$\bullet$}; \node at (3,1){$\bullet$}; \node at (3.707,0.707){$\bullet$}; \node at (4,-1){$\bullet$}; \node at (4.5,-0.866){$\bullet$}; \node at (5.5,-0.866){$\bullet$}; \node at (6,1){$\bullet$}; \draw (1.5,0) node[below]{5}; \draw (2.5,0) node[below]{2}; \draw (3.5,0) node[below]{4}; \draw (4.5,0) node[below]{6}; \draw (5.5,0) node[below]{10}; \draw (6.5,0) node[below]{11}; \draw (2.5,0.5) node[below]{1}; \draw (2.95,0.55) node[right]{8}; \draw (3.5,0.5) node[below]{3}; \draw (4,-0.5) node[left]{9}; \draw (4.7,-0.5) node[left]{7}; \draw (5.3,-0.5) node[right]{13}; \draw (6,0.5) node[right]{12}; \end{tikzpicture} $$\Big\Downarrow$$ \begin{tikzpicture}[scale=1.3] \draw(1,0)--(7,0); \draw(3.293,0.707)--(4,0)--(4.707,0.707); \draw(4,0)--(4,1); \draw(4,-1)--(4,0); \draw(4.5,-0.866)--(5,0)--(5.5,-0.866); \draw(6,0)--(6,1); \node at (1,0){$\bullet$}; 
\node at (2,0){$\bullet$}; \node at (3,0){$\bullet$}; \node at (4,0){$\bullet$}; \node at (5,0){$\bullet$}; \node at (6,0){$\bullet$}; \node at (7,0){$\bullet$}; \node at (3.293,0.707){$\bullet$}; \node at (4,1){$\bullet$}; \node at (4.707,0.707){$\bullet$}; \node at (4,-1){$\bullet$}; \node at (4.5,-0.866){$\bullet$}; \node at (5.5,-0.866){$\bullet$}; \node at (6,1){$\bullet$}; \draw (1.5,0) node[below]{10}; \draw (2.5,0) node[below]{6}; \draw (3.5,0) node[below]{4}; \draw (4.5,0) node[below]{2}; \draw (5.5,0) node[below]{5}; \draw (6.5,0) node[below]{11}; \draw (3.5,0.5) node[below]{1}; \draw (3.95,0.55) node[right]{8}; \draw (4.5,0.5) node[below]{3}; \draw (4,-0.5) node[left]{9}; \draw (4.7,-0.5) node[left]{7}; \draw (5.3,-0.5) node[right]{13}; \draw (6,0.5) node[right]{12}; \end{tikzpicture} \caption{An example of the operation on $T$: moving edges toward the middle. The shellings are indicated by the numbers on the edges. The map $g$ sends a shelling of the first tree to a shelling of the second tree.} \label{fig:theoremUpperBound} \end{figure} Suppose the above operations terminate after step $M$. Let $T^{(t)}$ be the tree after the $t^{\text{th}}$ operation. We claim that for all $t<M$, \begin{equation*} F(T^{(t+1)}) \geq F(T^{(t)}). \end{equation*} It suffices to prove the case $t = 0.$ By symmetry, we can assume $i<\frac{\ell}{2}$. Let $V_i$ be the set of vertices adjacent to $v_i$ in $T$ other than $v_{i-1}$ and $v_{i+1}$.
Define \begin{align*} S_{T\cap T^{(1)}} &:= \{\sigma\text{ is a shelling of }T: \exists u\in V_i, (v_i,u) \text{ appears after }(v_i,v_{i+1}) \text{ in } \sigma\}, \\ S_{T^{(1)}\cap T} &:= \{\tau\text{ is a shelling of }T^{(1)}: \exists u\in V_i, (v_{i+1},u) \text{ appears after }(v_i,v_{i+1}) \text{ in } \tau\}, \\ S_{T\setminus T^{(1)}} &:= \{\sigma\text{ is a shelling of }T: \exists u\in V_i, (v_i,u) \text{ appears before }(v_i,v_{i+1}) \text{ in } \sigma\},\\ S_{T^{(1)}\setminus T} &:= \{\tau\text{ is a shelling of }T^{(1)}: \exists u\in V_i, (v_{i+1},u) \text{ appears before }(v_i,v_{i+1}) \text{ in } \tau\}. \end{align*} Note that there is a bijection between $S_{T\cap T^{(1)}}$ and $S_{T^{(1)}\cap T}$ given by replacing the edges of the form $(v_i,u)$ in every $\sigma\in S_{T\cap T^{(1)}}$ with $(v_{i+1},u)$, for all $u\in V_i$. Thus, $|S_{T\cap T^{(1)}}|=|S_{T^{(1)}\cap T}|$ and $$F(T^{(1)}) - F(T) = |S_{T^{(1)}\setminus T}| - |S_{T\setminus T^{(1)}}|.$$ We now define a function $g: S_{T\setminus T^{(1)}} \rightarrow S_{T^{(1)}\setminus T}$. For $\sigma \in S_{T\setminus T^{(1)}}$, define $\tau = g(\sigma)$ as follows. If $\sigma(k) = (v_i, u)$ for some $u\in V_i$, then $\tau(k) = (v_{i+1}, u)$; if $\sigma(k) = (v_j,u)$ for some $j\neq i$ and $u\in V'$, then $\tau(k) = (v_j,u)$. It remains to define $\tau(k)$ where $\sigma(k)$ is an edge of $L$. Write $e(j) = (v_{j},v_{j+1})$ for $j=0,1,\ldots,\ell-1.$ For $1\leq r\leq \ell$, suppose $\sigma(k_r) = e(j_r)$ where $k_1<k_2<\cdots < k_\ell.$ Define $\tau(k_r)$ inductively: when $r=1$, $\tau(k_1) = e(2i-j_1).$ When $r\geq 2$, $$\tau(k_{r}) = \begin{cases} e(j_{r}), & \text{ if } \{\tau(k_1), \tau(k_2),\ldots, \tau(k_{r-1})\} =\{e(j_1), e(j_2),\ldots, e(j_{r-1})\}, \\ e(2i-j_{r}), & \text{ otherwise.} \end{cases}$$ The idea is that $g$ maps edges not in $L$ to the corresponding edges. For edges in $L$, $g$ acts as a reflection with respect to $e(i)$ until the reflection image matches the preimage.
An example of $g$ is shown in Figure~\ref{fig:theoremUpperBound}. We check the following properties of $g$: \begin{itemize} \item $g$ is well-defined. We first note that for any $r\leq \ell$, the sets $\{\sigma(k_1),\sigma(k_2),\ldots,\sigma(k_r)\}$ and $\{\tau(k_1),\tau(k_2),\ldots,\tau(k_r)\}$ form paths $P_r$ and $P^{(1)}_r$ in $L$, respectively. Since $j_1 \leq i$, the right endpoint of $P^{(1)}_r$ is never to the left of the right endpoint of $P_r$ (assuming that $L$ is a horizontal path with left endpoint $v_0$ and right endpoint $v_\ell$, as illustrated in Figure~\ref{fig:theoremUpperBound}). Furthermore, since the ``branching edges'' of $T$ (edges in $E(T)\setminus E(L)$) do not lie to the left of $v_i$, every branching edge adjacent to $P_r$ must be adjacent to $P^{(1)}_r$. Thus, $\tau$ is a shelling of $T^{(1)}$. Moreover, $\tau\in S_{T^{(1)}\setminus T}$ by the correspondence between $(v_i,u)\in \sigma$ and $(v_{i+1},u)\in \tau$ for all $u\in V_i$. Therefore, $g$ is well-defined. \item $g$ is injective. Suppose $g(\sigma) = \tau.$ By the definition of $g$, $\sigma(k)$ is uniquely determined whenever $\tau(k) \not\in L.$ Suppose $\tau(k_r) = e(i_r)$ for $1\leq r\leq \ell$. We can recover $\sigma(k_r)$ from $\tau$: $\sigma(k_1) = e(2i-i_1)$. When $r\geq 2$, $$\sigma(k_{r}) = \begin{cases} e(i_{r}), & \text{ if } \{\sigma(k_1), \sigma(k_2),\ldots, \sigma(k_{r-1})\} =\{e(i_1), e(i_2),\ldots, e(i_{r-1})\}, \\ e(2i-i_{r}), & \text{ otherwise.} \end{cases}$$ Therefore, $\sigma$ is uniquely determined by $\tau$, and $g$ is injective. \end{itemize} Since $g$ is injective, $|S_{T^{(1)}\setminus T}| \geq |S_{T\setminus T^{(1)}}|$ and thus $F(T^{(1)}) \geq F(T).$ Finally, note that $T^{(M)}$ is the tree where all edges not in $L$ are incident to $v_{\lfloor\frac{\ell}{2}\rfloor}$.
By Propositions~\ref{prop:rootedTreeCounting} and~\ref{prop:treeCounting}, $$F(T^{(M)})= \begin{cases} \frac{(n-1-\frac{\ell}{2})!}{(\frac{\ell}{2})!}\bigg[\binom{n-2}{\frac{\ell}{2}}+\sum_{i=0}^{\frac{\ell}{2}-1}\binom{n-1}{i}\bigg], &\text{ if }\ell \text{ is even,} \\ \frac{(n-\frac{\ell+3}{2})!}{2(\frac{\ell+1}{2})!}\bigg[(n-1-\ell)\binom{n-2}{\frac{\ell-1}{2}}+n\sum_{i=0}^{\frac{\ell-1}{2}}\binom{n-1}{i}\bigg], &\text{ if } \ell \text{ is odd.} \end{cases} $$ This proves the inequality. Furthermore, we show that $g$ is surjective only if $T$ is isomorphic to $T^{(M)}.$ If not, there are two cases. \noindent \textbf{Case 1.} $i< \frac{\ell-1}{2}$. In this case, $2i<\ell-1$. Thus, for every $\sigma \in S_{T\setminus T^{(1)}}$, $g(\sigma)(1)\neq e(\ell-1) = (v_{\ell-1},v_\ell)$. However, there exists $\tau\in S_{T^{(1)}\setminus T}$ whose first edge is $(v_{\ell-1},v_{\ell})$, a contradiction. \noindent \textbf{Case 2.} $i= \frac{\ell-1}{2}$ and there exists another vertex $v_j$ of degree at least 3. Suppose $(v_j, u)$ is an edge not in $L$; then for every $\sigma \in S_{T\setminus T^{(1)}}$, $g(\sigma)(1)$ cannot be this edge. However, there exists $\tau\in S_{T^{(1)}\setminus T}$ whose first edge is $(v_j,u)$, a contradiction. Therefore, $g$ is surjective only if $T$ is isomorphic to $T^{(M)}$, so $$F(T) = F(T^{(M)})$$ if and only if $T$ is isomorphic to $T^{(M)}$. This completes the proof of Theorem~\ref{thm:upperBoundDiameter}. \end{proof} \section*{Acknowledgements} This research was carried out as part of the 2018 Summer Program in Undergraduate Research (SPUR) of the MIT Mathematics Department. The authors would like to thank Prof. Richard Stanley for suggesting the project and Prof. Ankur Moitra and Prof. Davesh Maulik for helpful suggestions. \bibliographystyle{plain}
https://arxiv.org/abs/1611.01153
On Perfectness of Intersection Graph of Ideals of $\mathbb{Z}_n$
In this paper, we characterize the positive integers $n$ for which intersection graph of ideals of $\mathbb{Z}_n$ is perfect.
\section{Introduction} The idea of associating graphs to algebraic structures for characterizing the algebraic structures with graphs and vice versa dates back to Bosak \cite{bosak}. Since then, a lot of research, e.g., \cite{graph-ideal,anderson-livingston,badawi,power2,mks-ideal,angsu-comm-alg-1,angsu-lin-mult-alg,angsu-comm-alg-2,angsu-jaa,survey2}, has been done connecting graph structures to various algebraic objects such as groups, rings and vector spaces. However, the most prominent among them are the zero-divisor graphs \cite{anderson-livingston} and the intersection graphs of ideals of rings \cite{mks-ideal}. Recently, the authors of \cite{weakly-perfect} proved that the intersection graph of ideals of $\mathbb{Z}_n$ is weakly perfect for all $n>0$. In this paper, we characterize the values of $n$ for which the intersection graph of ideals of $\mathbb{Z}_n$ is perfect. In particular, we prove the following theorem. \begin{theorem*} The intersection graph of ideals of $\mathbb{Z}_n$ is perfect if and only if $n={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$ where $p_i$'s are distinct primes and $\alpha_i \in \mathbb{N}\cup\{0\}$, i.e., the number of distinct prime factors of $n$ is less than or equal to $4$. \end{theorem*} \section{Definition, Preliminaries and Known Results} In this section, for convenience of the reader and also for later use, we recall some definitions, notations and results concerning elementary graph theory and intersection graphs of ideals of a ring. For undefined terms and concepts the reader is referred to \cite{west-graph-book}. By a graph $G=(V,E)$, we mean a non-empty set $V$ and a symmetric binary relation (possibly empty) $E$ on $V$. The set $V$ is called the set of vertices and $E$ is called the set of edges of $G$. Two elements $u$ and $v$ in $V$ are said to be adjacent if $(u,v) \in E$. 
$H=(W,F)$ is called an {\it induced subgraph} of $G$ if $\emptyset \neq W \subseteq V$ and $F$ consists of all the edges between the vertices in $W$ in $G$. A complete subgraph of a graph $G$ is called a {\it clique}. A {\it maximal clique} is a clique which is maximal with respect to inclusion. The {\it clique number} of $G$, written as $\omega(G)$, is the maximum size of a clique in $G$. The {\it chromatic number} of $G$, denoted as $\chi(G)$, is the minimum number of colours needed to label the vertices so that the adjacent vertices receive different colours. It is easy to observe that $\omega(G)\leq \chi(G)$. A graph $G$ is said to be {\it weakly perfect} if $\omega(G)= \chi(G)$ and it is said to be {\it perfect} if $\omega(H)= \chi(H)$ for all induced subgraphs $H$ of $G$. Chudnovsky {\it et al.} \cite{spgt} settled a long-standing conjecture regarding perfect graphs and provided a characterization of perfect graphs. {\theorem[Strong Perfect Graph Theorem] \label{spgt} \cite{spgt} A graph $G$ is perfect if and only if neither $G$ nor its complement contains an odd cycle of length at least $5$ as an induced subgraph.} Let $R$ be a ring. The intersection graph of ideals of $R$ (introduced in \cite{mks-ideal}), denoted by $G(R)$, consists of all non-trivial ideals as vertices and two ideals $I$ and $J$ are adjacent if and only if $I \cap J\neq \{0\}$. Throughout this paper, we take the ring $R$ to be $\mathbb{Z}_n$, the ring of integers modulo $n$. We know that $\mathbb{Z}_n$ is a principal ideal ring and each of its ideals is generated by $\overline{m}\in \mathbb{Z}_n$ where $m$ is a factor of $n$. For convenience, we denote this ideal by $(m)$. Also, without loss of generality, whenever we take an ideal $(m)$ of $\mathbb{Z}_n$, we assume that $m$ is a factor of $n$. It was proved in \cite{weakly-perfect} that the intersection graph of ideals of $\mathbb{Z}_n$ is weakly perfect, i.e., $\omega(G( \mathbb{Z}_n))=\chi(G( \mathbb{Z}_n))$ for all $n>0$. 
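The intersection graph is easy to experiment with directly from this definition. The following sketch (an illustration added to this text, not part of the paper) builds $G(\mathbb{Z}_n)$ for small $n$ by representing each ideal $(m)$ by its set of elements in $\mathbb{Z}_n$:

```python
from itertools import combinations

def intersection_graph(n):
    """Build G(Z_n) from the definition: vertices are the non-trivial
    ideals (m), where m is a divisor of n with 1 < m < n, and (a) ~ (b)
    iff the ideals meet in more than {0}."""
    divisors = [m for m in range(2, n) if n % m == 0]
    # Each ideal (m) as a subset of Z_n: all multiples of m modulo n.
    ideal = {m: {(m * k) % n for k in range(n)} for m in divisors}
    edges = {(a, b) for a, b in combinations(divisors, 2)
             if ideal[a] & ideal[b] != {0}}
    return divisors, edges

# Example: in Z_12 the non-trivial ideals are (2), (3), (4), (6);
# the pairs {(3),(4)} and {(4),(6)} intersect trivially, all others meet.
vertices, edges = intersection_graph(12)
```

This brute-force construction is only practical for small $n$, but it suffices for checking the adjacency claims made in the proofs below.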
\section{Perfectness of Intersection Graph of Ideals of $\mathbb{Z}_n$} In this section, we prove some preparatory results and subsequently use them to prove the main theorem of the paper. {\proposition \label{lcm-lemma} Let $G(\mathbb{Z}_n)$ be the intersection graph of ideals of $\mathbb{Z}_n$ and let $(a)$ and $(b)$ be two ideals in $\mathbb{Z}_n$ such that $a \mid n$ and $b\mid n$. Then $(a)$ and $(b)$ are adjacent in $G(\mathbb{Z}_n)$ if and only if $lcm(a,b)$ is a factor of $n$ and $1<lcm(a,b)<n$.}\\ \\ \noindent {\bf Proof: } Since $\mathbb{Z}_n$ is isomorphic to $\mathbb{Z}/n\mathbb{Z}$ as a ring via the correspondence $\overline{a}\leftrightarrow a+n\mathbb{Z}$, the ideal $(a)$ in $\mathbb{Z}_n$ corresponds to the ideal $\langle a \rangle + n\mathbb{Z}$ in $\mathbb{Z}/n\mathbb{Z}$ where $\langle a \rangle$ denotes the set of integer multiples of $a$. Now, let $(a)\sim (b)$ in $G(\mathbb{Z}_n)$, i.e., $(a)\cap(b)\neq \{\overline{0}\}$. Since $a \mid n$ and $b\mid n$, we have $lcm(a,b)\mid n$. On the other hand, using the correspondence described above, we have $(\langle a \rangle + n\mathbb{Z}) \cap (\langle b \rangle + n\mathbb{Z})\neq \{n\mathbb{Z}\}$. But, we know that $(\langle a \rangle + n\mathbb{Z}) \cap (\langle b \rangle + n\mathbb{Z})=\langle lcm(a,b) \rangle + n\mathbb{Z}$. Hence, we have $\langle lcm(a,b) \rangle + n\mathbb{Z} \neq \{n\mathbb{Z}\}$. This, together with the fact that $lcm(a,b)\mid n$, implies that $1<lcm(a,b)<n$. Conversely, let $lcm(a,b)$ be a factor of $n$ with $1<lcm(a,b)<n$. Clearly, $\overline{0} \neq \overline{lcm(a,b)} \in (a) \cap (b)$ in $\mathbb{Z}_n$ and hence $(a)\sim (b)$ in $G(\mathbb{Z}_n)$. \hfill \rule{2mm}{2mm} {\theorem \label{not-perfect} Let $n={p_1}^{\alpha_1}{p_2}^{\alpha_2}\cdots {p_k}^{\alpha_k}$. If $k\geq 5$, then $G(\mathbb{Z}_n)$ is not perfect.}\\ \\ \noindent {\bf Proof: } Let $n={p_1}^{\alpha_1}{p_2}^{\alpha_2}\cdots {p_5}^{\alpha_5}\cdot s$ where $s=1$ if $k=5$ and $s={p_6}^{\alpha_6}\cdots {p_k}^{\alpha_k}$ if $k>5$. 
Consider the cycle $C$ given by $({p_1}^{\alpha_1}{p_2}^{\alpha_2} {p_3}^{\alpha_3}s)\sim({p_2}^{\alpha_2}{p_3}^{\alpha_3} {p_4}^{\alpha_4}s)\sim({p_3}^{\alpha_3}{p_4}^{\alpha_4} {p_5}^{\alpha_5}s)\sim({p_4}^{\alpha_4}{p_5}^{\alpha_5} {p_1}^{\alpha_1}s)\sim({p_5}^{\alpha_5}{p_1}^{\alpha_1}{p_2}^{\alpha_2} s)\sim ({p_1}^{\alpha_1}{p_2}^{\alpha_2} {p_3}^{\alpha_3}s)$. A simple calculation using Proposition \ref{lcm-lemma} shows that $C$ is an induced $5$-cycle in $G(\mathbb{Z}_n)$ and hence, by Theorem \ref{spgt}, $G(\mathbb{Z}_n)$ is not perfect.\hfill \rule{2mm}{2mm} {\theorem \label{no-odd-hole} Let $n={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$. Then $G(\mathbb{Z}_n)$ does not contain any induced cycle of length greater than $4$.}\\ \\ \noindent {\bf Proof: } Suppose, if possible, that $G(\mathbb{Z}_n)$ contains an induced cycle $C$ of length greater than $4$, say $(a_1)\sim(a_2)\sim(a_3)\sim(a_4)\sim(a_5)\sim \cdots \sim (a_1)$. By Proposition \ref{lcm-lemma}, we have $$lcm(a_1,a_3)=lcm(a_1,a_4)=lcm(a_2,a_4)=lcm(a_2,a_5)=lcm(a_3,a_5)=n.$$ {\it Claim: $gcd(a_1,a_3)>1$.} If possible, let $gcd(a_1,a_3)=1$. Since $gcd(a_1,a_3)\cdot lcm(a_1,a_3)=a_1a_3$, we have $lcm(a_1,a_3)=a_1a_3=n$. Note that as $lcm(a_3,a_5)=n$, we have $n=\frac{a_1a_3}{gcd(a_1,a_3)}=\frac{a_3a_5}{gcd(a_3,a_5)}$, i.e., $a_1\cdot gcd(a_3,a_5)=a_5\cdot gcd(a_1,a_3)$, i.e., $a_5=a_1\cdot gcd(a_3,a_5)$, i.e., $a_5$ is a multiple of $a_1$. Now as $a_1$ and $a_3$ are coprime and their lcm is $n$, without loss of generality, two cases may arise: either $a_1={p_1}^{\alpha_1}{p_2}^{\alpha_2}; a_3={p_3}^{\alpha_3}{p_4}^{\alpha_4}$ or $a_1={p_1}^{\alpha_1};a_3={p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$. If $a_1={p_1}^{\alpha_1}{p_2}^{\alpha_2}; a_3={p_3}^{\alpha_3}{p_4}^{\alpha_4}$, we have $a_5={p_1}^{\alpha_1}{p_2}^{\alpha_2}\cdot s$ for some natural number $s$ such that $a_5 \mid n$. 
Also as $lcm(a_1,a_4)=n$, we have $a_4={p_3}^{\alpha_3}{p_4}^{\alpha_4}\cdot t$, for some natural number $t$ such that $a_4 \mid n$. Thus $lcm(a_4,a_5)=n$, contradicting Proposition \ref{lcm-lemma} and the fact that $a_4\sim a_5$ in $C$. If $a_1={p_1}^{\alpha_1};a_3={p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$, similarly we have $a_5={p_1}^{\alpha_1}\cdot s$ and $a_4={p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4} \cdot t$ and hence $lcm(a_4,a_5)=n$, thereby leading to a contradiction. Thus, combining the above two cases, we have $gcd(a_1,a_3)>1$. Thus we have $lcm(a_1,a_3)=n$ and $gcd(a_1,a_3)>1$ with $a_1 \mid n$ and $a_3\mid n$. Without loss of generality, let $p_1$ be a common factor of $a_1$ and $a_3$ and let $a_1={p_1}^x\cdot s$ and $a_3={p_1}^y \cdot t$ where $p_1$ is coprime with $s$ and $t$. Now, if $max\{x,y\}<\alpha_1$, then $lcm(a_1,a_3)<n$, a contradiction. Thus either $x=\alpha_1$ or $y=\alpha_1$, i.e., for any common prime divisor $p_i$ of $a_1$ and $a_3$, either ${p_i}^{\alpha_i} \mid a_1$ or ${p_i}^{\alpha_i} \mid a_3$ or both. Also as $lcm(a_1,a_3)=n$, all the ${p_i}^{\alpha_i}$ are factors of either $a_1$ or $a_3$ or both. Thus, without loss of generality, the forms of $a_1$ and $a_3$ are as follows: either $$\mathsf{Case ~ 1:}~ a_1={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\beta_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$$ or $$\mathsf{Case ~ 2:}~a_1={p_1}^{\alpha_1}{p_2}^{\beta_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$$ or $$\mathsf{Case ~ 3:}~a_1={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$$ or $$\mathsf{Case ~ 4:}~a_1={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$$ where $\beta_i < \alpha_i$. 
Note that in the first two cases, $a_1$ and $a_3$ do not share any ${p_i}^{\alpha_i}$ as a common factor. In the third case, they share only one ${p_i}^{\alpha_i}$ as a common factor and in the fourth case, they share two ${p_i}^{\alpha_i}$'s as common factors. {\it Case 1: ($a_1={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\beta_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$)} Since $lcm(a_1,a_4)=n$, we have $a_4={p_1}^{\gamma_1}{p_2}^{\gamma_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$ where $\gamma_1\leq \alpha_1,\gamma_2 \leq \alpha_2$ and $(\gamma_1,\gamma_2)\neq (\alpha_1,\alpha_2)$. Again, since $lcm(a_3,a_5)=n$, we have $a_5={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\delta_3}{p_4}^{\delta_4}$ where $\delta_3\leq \alpha_3,\delta_4 \leq \alpha_4$ and $(\delta_3,\delta_4)\neq (\alpha_3,\alpha_4)$. Hence, we have $lcm(a_4,a_5)=n$, a contradiction to the fact that $a_4 \sim a_5$. Thus Case 1 is an impossibility. {\it Case 2: ($a_1={p_1}^{\alpha_1}{p_2}^{\beta_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$)} Since $lcm(a_1,a_4)=n$, we have $a_4={p_1}^{\gamma_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$ where $\gamma_1< \alpha_1$. Again, since $lcm(a_3,a_5)=n$, we have $a_5={p_1}^{\alpha_1}{p_2}^{\delta_2}{p_3}^{\delta_3}{p_4}^{\delta_4}$ where $\delta_i\leq \alpha_i$ and $(\delta_2,\delta_3,\delta_4)\neq (\alpha_2,\alpha_3,\alpha_4)$. Hence, we have $lcm(a_4,a_5)=n$, a contradiction to the fact that $a_4 \sim a_5$. Thus Case 2 is an impossibility. {\it Case 3: ($a_1={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$)} Since $lcm(a_1,a_4)=n$, we have ${p_3}^{\alpha_3}{p_4}^{\alpha_4}\mid a_4$. Again, since $lcm(a_3,a_5)=n$, we have ${p_1}^{\alpha_1}\mid a_5$. Now, as $lcm(a_2,a_5)=n$, we have either ${p_2}^{\alpha_2}\mid a_2$ or ${p_2}^{\alpha_2}\mid a_5$. 
But if ${p_2}^{\alpha_2}\mid a_5$, then we have $lcm(a_4,a_5)=n$, a contradiction. Thus, we have ${p_2}^{\alpha_2}\mid a_2$. Again, as $lcm(a_2,a_4)=n$, we have either ${p_1}^{\alpha_1}\mid a_2$ or ${p_1}^{\alpha_1}\mid a_4$. If ${p_1}^{\alpha_1}\mid a_2$, then $lcm(a_2,a_3)=n$, a contradiction. On the other hand, if ${p_1}^{\alpha_1}\mid a_4$, then $lcm(a_3,a_4)=n$, a contradiction. Thus Case 3 is an impossibility. {\it Case 4: ($a_1={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$)} Since $lcm(a_1,a_4)=n$, we have ${p_4}^{\alpha_4}\mid a_4$. Now, as $lcm(a_2,a_4)=n$, we have either ${p_1}^{\alpha_1}\mid a_2$ or ${p_1}^{\alpha_1}\mid a_4$. If ${p_1}^{\alpha_1}\mid a_2$, then $lcm(a_2,a_3)=n$, a contradiction. On the other hand, if ${p_1}^{\alpha_1}\mid a_4$, then $lcm(a_3,a_4)=n$, a contradiction. Thus Case 4 is an impossibility. Thus, combining all the cases, we conclude that $G(\mathbb{Z}_n)$ does not contain any induced cycle of length greater than $4$.\hfill \rule{2mm}{2mm} {\theorem \label{no-odd-antihole} Let $n={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$. Then $\overline{G(\mathbb{Z}_n)}$, the complement of $G(\mathbb{Z}_n)$, does not contain any induced cycle of length greater than $4$.}\\ \\ \noindent {\bf Proof: } Suppose, if possible, that $\overline{G(\mathbb{Z}_n)}$ contains an induced cycle $C$ of length greater than $4$, say $(a_1)\sim(a_2)\sim(a_3)\sim(a_4)\sim \cdots \sim(a_t)\sim (a_1)$ with $t\geq 5$. Then, by Proposition \ref{lcm-lemma}, $lcm(a_1,a_2)=lcm(a_2,a_3)=lcm(a_3,a_4)=\cdots =lcm(a_t,a_1)=n$. {\it Claim: $gcd(a_2,a_3)>1$.} If possible, let $gcd(a_2,a_3)=1$. Since $lcm(a_2,a_3)=n$, we have $n=a_2a_3$. 
Thus without loss of generality, either $$a_2={p_1}^{\alpha_1}{p_2}^{\alpha_2};a_3={p_3}^{\alpha_3}{p_4}^{\alpha_4} \mbox{ or } a_2={p_1}^{\alpha_1};a_3={p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$$ If $a_2={p_1}^{\alpha_1}{p_2}^{\alpha_2};a_3={p_3}^{\alpha_3}{p_4}^{\alpha_4}$, as $lcm(a_3,a_4)=lcm(a_1,a_2)=n$, we have $a_1={p_3}^{\alpha_3}{p_4}^{\alpha_4}\cdot s$ and $a_4={p_1}^{\alpha_1}{p_2}^{\alpha_2}\cdot t$ for some positive integers $s,t$. But this implies that $lcm(a_1,a_4)=n$, i.e., $a_1\sim a_4$ in $\overline{G(\mathbb{Z}_n)}$, a contradiction. On the other hand, if $a_2={p_1}^{\alpha_1};a_3={p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$, as $lcm(a_3,a_4)=lcm(a_1,a_2)=n$, we have $a_1={p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}\cdot s$ and $a_4={p_1}^{\alpha_1}\cdot t$ for some positive integers $s,t$. But this implies that $lcm(a_1,a_4)=n$, i.e., $a_1\sim a_4$ in $\overline{G(\mathbb{Z}_n)}$, a contradiction. Hence the claim is true. Now, we have $lcm(a_2,a_3)=n$ and $gcd(a_2,a_3)>1$ with $a_2 \mid n$ and $a_3\mid n$. Without loss of generality, let $p_1$ be a common factor of $a_2$ and $a_3$ and let $a_2={p_1}^x\cdot s$ and $a_3={p_1}^y \cdot t$ where $p_1$ is coprime with $s$ and $t$. Now, if $max\{x,y\}<\alpha_1$, then $lcm(a_2,a_3)<n$, a contradiction. Thus either $x=\alpha_1$ or $y=\alpha_1$, i.e., for any common prime divisor $p_i$ of $a_2$ and $a_3$, either ${p_i}^{\alpha_i} \mid a_2$ or ${p_i}^{\alpha_i} \mid a_3$ or both. Also as $lcm(a_2,a_3)=n$, all the ${p_i}^{\alpha_i}$ are factors of either $a_2$ or $a_3$. 
Thus, without loss of generality, the forms of $a_2$ and $a_3$ are as follows: either $$\mathsf{Case ~ 1:}~ a_2={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\beta_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$$ or $$\mathsf{Case ~ 2:}~a_2={p_1}^{\alpha_1}{p_2}^{\beta_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$$ or $$\mathsf{Case ~ 3:}~a_2={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$$ or $$\mathsf{Case ~ 4:}~a_2={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$$ where $\beta_i < \alpha_i$. Note that in the first two cases, $a_2$ and $a_3$ do not share any ${p_i}^{\alpha_i}$ as a common factor. In the third case, they share only one ${p_i}^{\alpha_i}$ as a common factor and in the fourth case, they share two ${p_i}^{\alpha_i}$'s as common factors. {\it Case 1: ($a_2={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\beta_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$)} Since $lcm(a_1,a_2)=lcm(a_3,a_4)=n$, we have ${p_3}^{\alpha_3}{p_4}^{\alpha_4} \mid a_1$ and ${p_1}^{\alpha_1}{p_2}^{\alpha_2}\mid a_4$. But this implies $lcm(a_1,a_4)=n$, i.e., $a_1\sim a_4$ in $\overline{G(\mathbb{Z}_n)}$, a contradiction and hence Case 1 is an impossibility. {\it Case 2: ($a_2={p_1}^{\alpha_1}{p_2}^{\beta_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$)} Since $lcm(a_1,a_2)=lcm(a_3,a_4)=n$, we have ${p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}\mid a_1$ and ${p_1}^{\alpha_1}\mid a_4$. But this implies $lcm(a_1,a_4)=n$, a contradiction and hence Case 2 is an impossibility. 
{\it Case 3: ($a_2={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\beta_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$)} Since $lcm(a_1,a_2)=n$, we have ${p_3}^{\alpha_3}{p_4}^{\alpha_4} \mid a_1$. Also, since $lcm(a_t,a_1)=n$, either ${p_1}^{\alpha_1}\mid a_1$ or ${p_1}^{\alpha_1}\mid a_t$. If ${p_1}^{\alpha_1}\mid a_1$, then we have ${p_1}^{\alpha_1}{p_3}^{\alpha_3}{p_4}^{\alpha_4} \mid a_1$ which implies $lcm(a_1,a_3)=n$, i.e., $a_1\sim a_3$ in $\overline{G(\mathbb{Z}_n)}$, a contradiction. On the other hand, if ${p_1}^{\alpha_1}\mid a_t$, we have $lcm(a_t,a_3)=n$, i.e., $a_t \sim a_3$ in $\overline{G(\mathbb{Z}_n)}$, a contradiction. Thus combining both the possibilities, Case 3 is an impossibility. {\it Case 4: ($a_2={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\beta_4};a_3={p_1}^{\beta_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$)} Since $lcm(a_1,a_2)=n$, we have ${p_4}^{\alpha_4} \mid a_1$. Also, since $lcm(a_t,a_1)=n$, either ${p_1}^{\alpha_1}\mid a_1$ or ${p_1}^{\alpha_1}\mid a_t$. If ${p_1}^{\alpha_1}\mid a_1$, then we have ${p_1}^{\alpha_1}{p_4}^{\alpha_4} \mid a_1$ which implies $lcm(a_1,a_3)=n$, i.e., $a_1\sim a_3$ in $\overline{G(\mathbb{Z}_n)}$, a contradiction. On the other hand, if ${p_1}^{\alpha_1}\mid a_t$, we have $lcm(a_t,a_3)=n$, i.e., $a_t \sim a_3$ in $\overline{G(\mathbb{Z}_n)}$, a contradiction. Thus combining both the possibilities, Case 4 is an impossibility. Thus, combining all the cases we conclude that $\overline{G(\mathbb{Z}_n)}$ does not contain any induced cycle of length greater than $4$.\hfill \rule{2mm}{2mm} Finally, with Theorems \ref{spgt}, \ref{not-perfect}, \ref{no-odd-hole} and \ref{no-odd-antihole} in hand, we are now in a position to prove the main result of this paper. 
\begin{theorem*} The intersection graph of ideals of $\mathbb{Z}_n$ is perfect if and only if $n={p_1}^{\alpha_1}{p_2}^{\alpha_2}{p_3}^{\alpha_3}{p_4}^{\alpha_4}$ where $p_i$'s are distinct primes and $\alpha_i \in \mathbb{N}\cup\{0\}$, i.e., the number of distinct prime factors of $n$ is less than or equal to $4$. \end{theorem*} \noindent {\bf Proof: } Clearly, Theorem \ref{not-perfect} shows that the condition is necessary. For the sufficiency part, first with the help of Theorems \ref{no-odd-hole} and \ref{no-odd-antihole}, along with Theorem \ref{spgt}, we conclude that the intersection graph of ideals of $\mathbb{Z}_n$ is perfect if $n$ has exactly four distinct prime factors. The proofs for the cases when $n$ has exactly three, two or one distinct prime factors follow similarly by suitably taking some of the $\alpha_i$'s to be zero. \hfill \rule{2mm}{2mm} \section*{Acknowledgement} The author is thankful to Sabyasachi Dutta and Jyotirmoy Pramanik for some fruitful discussions on the paper. The research is partially funded by NBHM Research Project Grant (Sanction No. 2/48(10)/2013/ NBHM(R.P.)/R\&D II/695), Govt. of India.
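As a computational sanity check (an illustration added to this text, not part of the paper), the induced $5$-cycle constructed in the proof of Theorem \ref{not-perfect} can be verified for the smallest $n$ with five distinct prime factors, using the adjacency criterion of Proposition \ref{lcm-lemma}:

```python
from math import lcm
from itertools import combinations

n = 2 * 3 * 5 * 7 * 11   # smallest n with five distinct prime factors
# The cycle from Theorem `not-perfect` with alpha_i = 1 and s = 1:
# (p1 p2 p3) ~ (p2 p3 p4) ~ (p3 p4 p5) ~ (p4 p5 p1) ~ (p5 p1 p2)
cycle = [2 * 3 * 5, 3 * 5 * 7, 5 * 7 * 11, 7 * 11 * 2, 11 * 2 * 3]

def adjacent(a, b):
    # Proposition `lcm-lemma`: (a) ~ (b) iff lcm(a, b) divides n
    # and 1 < lcm(a, b) < n.
    m = lcm(a, b)
    return n % m == 0 and 1 < m < n

# Exactly the consecutive pairs should be adjacent: an induced C_5.
is_induced_c5 = all(
    adjacent(a, b) == (j - i == 1 or (i, j) == (0, 4))
    for (i, a), (j, b) in combinations(enumerate(cycle), 2)
)
```

Together with the Strong Perfect Graph Theorem, such a check confirms concretely that $G(\mathbb{Z}_{2310})$ is not perfect (`math.lcm` requires Python 3.9 or later).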
https://arxiv.org/abs/1508.02851
On interval edge-colorings of bipartite graphs of small order
An edge-coloring of a graph $G$ with colors $1,\ldots,t$ is an interval $t$-coloring if all colors are used, and the colors of edges incident to each vertex of $G$ are distinct and form an interval of integers. A graph $G$ is interval colorable if it has an interval $t$-coloring for some positive integer $t$. The problem of deciding whether a bipartite graph is interval colorable is NP-complete. The smallest known examples of interval non-colorable bipartite graphs have $19$ vertices. On the other hand it is known that the bipartite graphs on at most $14$ vertices are interval colorable. In this work we observe that several classes of bipartite graphs of small order have an interval coloring. In particular, we show that all bipartite graphs on $15$ vertices are interval colorable.
\section{Introduction} In this paper we consider only finite, undirected graphs, without loops and multiple edges. $V(G)$ and $E(G)$ denote the sets of vertices and edges, respectively. The degree of the vertex $v \in V(G)$ is denoted by $d_G(v)$. The concepts and notations not defined here can be found in \cite{West}. A proper edge-coloring of a graph $G$ is a coloring of the edges of $G$ such that no two adjacent edges receive the same color. If $\alpha$ is a proper edge-coloring of $G$ and $v \in V(G)$, then by $S(v,\alpha)$ we denote the set of colors of the edges incident to $v$. The largest color of $S(v,\alpha)$ is denoted by $\overline{S}(v,\alpha)$. A proper edge-coloring of a graph $G$ with colors $1,\ldots,t$ is called an interval $t$-coloring if all colors are used, and for any vertex $v \in V(G)$, the set $S(v,\alpha)$ is an interval of integers. A graph $G$ is interval colorable if it has an interval $t$-coloring for some $t \in \mathbb{N}$. The set of all interval colorable graphs is denoted by $\mathfrak{N}$. The concept of interval edge-coloring of graphs was introduced by Asratian and Kamalian \cite{AK1987,AK1994}. In \cite{K1989}, Kamalian proved that all complete bipartite graphs and trees are interval colorable. In \cite{P2010,PKT2013}, it was shown that $n$-dimensional cubes have an interval $t$-coloring if and only if $n \leq t \leq \frac{n(n+1)}{2}$. In \cite{Sevastjanov1990}, Sevastjanov proved that it is an NP-complete problem to decide whether a bipartite graph has an interval coloring or not. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{fig3.jpg} \caption{Interval non-colorable bipartite graph $F$ on $19$ vertices} \label{fig19} \vspace{1.7cm} \end{center} \end{figure} In \cite{Hansen1992}, Hansen proved that every bipartite graph $G$ with $\Delta(G) \leq 3$ is interval colorable. 
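To make the definition concrete, here is a small checker (an illustrative sketch added to this text, not code from the paper) that tests the three conditions of an interval $t$-coloring: the coloring is proper, every color $1,\ldots,t$ occurs, and each $S(v,\alpha)$ is an interval of integers:

```python
def is_interval_coloring(edges, coloring, t):
    """edges: list of pairs (u, v); coloring maps each edge to a color
    in 1..t. Returns True iff coloring is a proper edge-coloring that
    uses every color 1..t and makes each S(v) an interval of integers."""
    spectra, degree = {}, {}
    for e in edges:
        for v in e:
            spectra.setdefault(v, set()).add(coloring[e])
            degree[v] = degree.get(v, 0) + 1
    if set(coloring.values()) != set(range(1, t + 1)):
        return False                               # some color in 1..t unused
    return all(len(s) == degree[v]                 # proper: no repeats at v
               and max(s) - min(s) + 1 == len(s)   # S(v) is an interval
               for v, s in spectra.items())

# Path u - v - w: colors 1, 2 give an interval 2-coloring,
# while colors 3, 1 violate the conditions at v.
edges = [("u", "v"), ("v", "w")]
ok = is_interval_coloring(edges, {("u", "v"): 1, ("v", "w"): 2}, 2)
bad = is_interval_coloring(edges, {("u", "v"): 3, ("v", "w"): 1}, 3)
```

A checker of this kind is exactly what the verification step described later needs: given a candidate coloring, confirming it is an interval coloring is cheap.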
In \cite{JensenToft}, Jensen and Toft formulated the following question: is there an interval non-colorable bipartite graph $G$ with $4 \leq \Delta(G) \leq 12$? A partial answer to this question was given by Petrosyan and the first author in \cite{PK2014}. In particular, they constructed two interval non-colorable bipartite graphs, $G$ and $H$, where $|V(G)|=20$, $\Delta(G)=11$, and $|V(H)|=19$, $\Delta(H)=12$. Another example of an interval non-colorable bipartite graph $F$ on $19$ vertices (Fig. \ref{fig19}) was discovered by Mirumyan in 1989, but was not published, and was independently found by Giaro, Kubale and Malafiejski in \cite{GKM1999}. On the other hand, in \cite{G1999} it was shown that all bipartite graphs on at most $14$ vertices are interval colorable. Based on these results, the following problem was posed in \cite{PK2014}: \begin{problem} Is there a bipartite graph $G$ with $15 \leq |V(G)| \leq 18$ and $G \notin \mathfrak{N}$? \end{problem} In this work we partially solve this problem by showing that all bipartite graphs on at most $15$ vertices are interval colorable. We also show that some other classes of bipartite graphs of small order are interval colorable. \section{Related work} In 1999, Giaro \cite{G1999} used a computer search to show that the following result holds. \begin{theorem} \label{Giaro14} All bipartite graphs of order at most $14$ are interval colorable. \end{theorem} In \cite{GKM1999} it was observed that both parts of an interval non-colorable bipartite graph should be relatively large. \begin{theorem} \label{GiaroBipartition} If $G$ is a bipartite graph with bipartition $(X,Y)$ and $\min\{|X|,|Y|\} \leq 3$, then $G \in \mathfrak{N}$. \end{theorem} Several algorithms for finding a generalized version of interval edge-colorings of graphs are presented and compared in \cite{BHD2009}. \section{Implementation details} We use a computer search to look for interval non-colorable graphs of small order. 
First we generate a set of candidate graphs, then we try to color them using a distributed computing system, and finally we double check the obtained colorings for possible errors. \subsection{Graph generation} We use the \textbf{nauty} package \cite{McKay} to generate the graphs. In particular, the \textbf{genbg} program from the package generates all bipartite graphs according to the given bipartition. Moreover, it allows one to specify the minimum and maximum degrees of the graphs, whether the generated graphs should be connected, and several other options. \subsection{Distributed computing} In order to color the generated graphs we use CrowdProcess, a web-based distributed computing system \cite{CrowdProcess}. CrowdProcess provides an easy-to-use interface and a REST API to submit a program written in the JavaScript language together with a list of tasks as a JSON file. It distributes the program and the tasks to multiple computers (between 1000 and 5000 computers during our experiments), gathers the results, and passes them back through a web interface or an API. Most of the graphs are colored in less than 1 millisecond, so we send up to 200 graphs to each computer. \subsection{Coloring algorithm} We represent the bipartite graph $G$ with bipartition $(X,Y)$ and its coloring as a biadjacency matrix $B(G) = (b_{ij})_{n \times m}$, where $|X|=n$ and $|Y|=m$. Here $b_{ij}$ is the color of the edge joining the $i$-th vertex of $X$ and the $j$-th vertex of $Y$, if the edge exists, and is set to $0$ otherwise. We use backtracking to fill in the matrix with colors. At each step we calculate the set of possible colors for the current matrix cell (by taking into account already colored edges). We color the edge by a randomly selected color from the set of possible colors and move to the next cell. If for some edge the set of possible colors is empty, we return to the previous edge and change the color (if there still exists a color in the set of possible colors). 
The algorithm stops when all the edges are colored, or when all the possibilities are tested and the graph has no interval coloring. In practice the latter never happens because CrowdProcess puts a 5-minute time limit on the computation per computer. \subsection{Verification} After downloading the colorings from CrowdProcess we use a C++ program to verify the colorings. We discovered one case when the coloring returned from CrowdProcess contained an error. We do not know how such an error could occur. We use one more C++ program to gather the graphs which were not colored (due to an error in the coloring or timeout) and send them again. We repeat this process until all the graphs are colored. \section{Results} Let $\mathfrak{F}$ be some set of bipartite graphs. Denote by $C(\mathfrak{F})$ the set of all connected bipartite graphs from $\mathfrak{F}$ having minimum degree at least $2$ and having at least $4$ vertices on each of its parts. Also let $M(\mathfrak{F})$ be the set of all graphs obtained by taking any graph $G \in \mathfrak{F}$ and removing any vertex from it. The following lemma holds. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{fig6.eps} \caption{Interval non-colorable bipartite graph $H$ on $19$ vertices} \label{fig19-2} \vspace{1.7cm} \end{center} \end{figure} \begin{lemma} \label{mainLemma} If for any set of bipartite graphs $\mathfrak{F}$, all graphs from the sets $M(\mathfrak{F})$ and $C(\mathfrak{F})$ are interval colorable, then all graphs from $\mathfrak{F}$ are also interval colorable. \end{lemma} Let $G$ be a bipartite graph from the set $\mathfrak{F}$. If $G \in C(\mathfrak{F})$, then it is interval colorable. Otherwise it is disconnected, has a minimum degree $1$, or has less than $4$ vertices on at least one of its parts. If $G$ is disconnected, then each of its connected components belongs to $M(\mathfrak{F})$, so the union of the colorings of the connected components will be a coloring of $G$. 
If there exists some pendant edge $uv \in E(G)$, where $d_G(v)=1$, then we take the coloring $\alpha$ of the graph $G-v$ (which belongs to $M(\mathfrak{F})$) and color the edge $uv$ by the color $\overline{S}(u,\alpha) + 1$. Finally, if one of the parts of $G$ has less than $4$ vertices, then $G$ is interval colorable by Theorem \ref{GiaroBipartition}. \endproof \begin{theorem} \label{th15} All bipartite graphs on $15$ vertices are interval colorable. \end{theorem} Let $\mathfrak{F}$ be the set of all bipartite graphs on $15$ vertices. All graphs from the set $M(\mathfrak{F})$ are interval colorable due to Theorem \ref{Giaro14}. According to Lemma \ref{mainLemma} it is sufficient to show that all graphs from the set $C(\mathfrak{F})$ are interval colorable. The number of graphs in the set $C(\mathfrak{F})$ is 288 643 868. We color them all by using a computer algorithm described in the previous section. Some details of the performed computation are presented in Table \ref{table15}. \endproof \begin{theorem} \label{th4x} All bipartite graphs having $4$ vertices on one part and up to $15$ vertices on the other part are interval colorable except for the graph $F$ in Fig. \ref{fig19}. \end{theorem} Let $\mathfrak{F}_{i,j}$ be the set of all bipartite graphs with bipartition $(X,Y)$ where $|X|=i$ and $|Y| = j$, $i, j \in \mathbb{N}$. Note that $M(\mathfrak{F}_{i,j})=\mathfrak{F}_{i-1,j} \cup \mathfrak{F}_{i,j-1}$, for any $i,j>1$. We need to prove that all graphs from the sets $\mathfrak{F}_{4,j}$, $12 \leq j \leq 15$, are interval colorable except for the graph $F$ from Fig. \ref{fig19}. Note that all the graphs from the sets $\mathfrak{F}_{3,j}$ are interval colorable due to Theorem \ref{GiaroBipartition}. All graphs from the set $\mathfrak{F}_{4,11}$ are also interval colorable due to Theorem \ref{th15}. 
We use the computer algorithm described in the previous section to color all graphs from the sets $C(\mathfrak{F}_{4,j})$, $12 \leq j \leq 15$, except for the graph $F$. The details of the computation are presented in Table \ref{table4x}. To complete the proof, we iteratively apply Lemma \ref{mainLemma} to the sets $\mathfrak{F}_{4,j}$, for $j=12,13,14,15$. \endproof \begin{table}[t] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|c|c|} \hline No. of vertices & No. of graphs & CPU hours\\ \hline 4 / 11 & 16308 & 3.04 \\ \hline 5 / 10 & 1583646 & 146.35 \\ \hline 6 / 9 & 43739172 & 340.51\\ \hline 7 / 8 & 243304742 & 15537.42\\ \hline \end{tabular} \caption{Details of the computation for coloring bipartite graphs of order $15$. CPU hours are reported by CrowdProcess} \label{table15} \vspace{0.3cm} \end{center} \end{table} \begin{table}[t] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|c|c|} \hline No. of vertices & No. of graphs & CPU hours\\ \hline 4 / 12 & 29515 & 4.96 \\ \hline 4 / 13 & 51616 & 19.19 \\ \hline 4 / 14 & 87609 & 96.95\\ \hline 4 / 15 & 144766 & N/A \\ \hline \end{tabular} \caption{Details of the computation for coloring bipartite graphs having $4$ vertices on one part and $j$ vertices on the other part, $12 \leq j \leq 15$. CPU hours are reported by CrowdProcess} \label{table4x} \vspace{1.7cm} \end{center} \end{table} \section{Future work} Different algorithms can be tried to find interval colorings of the graphs even faster. In fact, the experiments show that most of the graphs have many different interval colorings and possibly some approximation algorithms will be sufficient to find the colorings easily. So far we have tried algorithms based on simulated annealing with no luck. Currently we are working on coloring the bipartite graphs on $16$ vertices. The number of such graphs after filtering out easily colorable ones (similar to the case of $15$-vertex bipartite graphs) is 12 322 367 816. 
We have colored more than 98\% of these graphs, but it still cannot be excluded that there exist interval non-colorable bipartite graphs on $16$ vertices. \section{Acknowledgements} We would like to thank the CrowdProcess team for donating practically unlimited computational time to us and for the excellent technical support. We would also like to thank the UtopianLab coworking space for providing broadband internet access. Finally, we would like to thank P. A. Petrosyan for many valuable comments and ideas. This work was made possible by a research grant from the Armenian National Science and Education Fund (ANSEF) based in New York, USA.
https://arxiv.org/abs/1004.4953
The Number of Eigenvalues of a Tensor
Eigenvectors of tensors, as studied recently in numerical multilinear algebra, correspond to fixed points of self-maps of a projective space. We determine the number of eigenvectors and eigenvalues of a generic tensor, and we show that the number of normalized eigenvalues of a symmetric tensor is always finite. We also examine the characteristic polynomial and how its coefficients are related to discriminants and resultants.
\section{Introduction} In the current numerical analysis literature, considerable interest has arisen in extending concepts that are familiar from linear algebra to the setting of multilinear algebra. One such familiar concept is that of an eigenvalue of a square matrix. Several authors have explored definitions of eigenvalues and eigenvectors for higher-dimensional tensors, and they have argued that these notions are useful for a wide range of applications \cite{Lim}. Our approach in this paper is based on the definition of {\em E-eigenvalues} of tensors introduced by Liqun Qi in \cite{NQWW,Qi}. Throughout this paper, eigenvalue will mean E-eigenvalue, as defined in Definition~\ref{def:eig}. We fix two positive integers $m$ and $n$, and we consider order-$m$ tensors $A = (a_{i_1 i_2 \cdots i_m})$ of format $\,n \times n \times \cdots \times n\,$ with entries in the field of complex numbers $\mathbb C$. \begin{defn} \label{def:eig} \rm Let $x$ be in $\mathbb C^n$ and $A$ a tensor as above. Adopting the notation introduced in \cite{NQWW, Qi}, we define $Ax^{m-1}$ to be the vector in $\mathbb C^n$ whose $j$-th coordinate is the scalar \begin{equation} \label{eq:Am} (Ax^{m-1})_{j} \quad = \quad \sum_{i_2 = 1}^n \cdots \sum_{i_m=1}^n a_{j i_2 \cdots i_m} x_{i_2}\cdots x_{i_m}. \end{equation} If $\lambda$ is a complex number and $x \in \mathbb C^n$ a non-zero vector such that $A x^{m-1} = \lambda x$, then $\lambda$ is an \emph{eigenvalue} of~$A$ and $x$ is an \emph{eigenvector} of $A$. We will refer to the pair of $\lambda$ and~$x$ as an \emph{eigenpair}. Two eigenpairs $(\lambda,x)$ and $(\lambda', x')$ of the same tensor $A$ are considered to be {\em equivalent} if there exists a complex number $t \neq 0$ such that $t^{m-2}\lambda = \lambda'$ and $t x = x'$. \end{defn} If $m=2$ then $Ax^1$ is the ordinary matrix-vector product, and Definition \ref{def:eig} recovers the familiar eigenvalues and eigenvectors of a square matrix $A$.
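The contraction in (\ref{eq:Am}) is easy to compute numerically. The following sketch (our own illustration, not part of the paper) applies an order-$m$ tensor to a vector by contracting the last index $m-1$ times, and checks that $m=2$ recovers the matrix-vector product:

```python
import numpy as np

def apply_tensor(A, x):
    """Compute A x^{m-1}: contract the last m-1 indices of A with copies of x,
    i.e. (A x^{m-1})_j = sum_{i_2..i_m} a_{j i_2 ... i_m} x_{i_2} ... x_{i_m}."""
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x  # each product contracts the current last index with x
    return out

# For m = 2 this is the ordinary matrix-vector product.
A2 = np.array([[2.0, 1.0], [0.0, 3.0]])
x = np.array([1.0, 1.0])
print(apply_tensor(A2, x))   # same as A2 @ x

# For the diagonal 2x2x2 tensor with A_iii = 1, the vector e_1 is an
# eigenvector with eigenvalue 1: A e_1^2 = e_1.
A3 = np.zeros((2, 2, 2))
A3[0, 0, 0] = A3[1, 1, 1] = 1.0
e1 = np.array([1.0, 0.0])
print(apply_tensor(A3, e1))  # [1. 0.]
```

Verifying a candidate eigenpair then amounts to checking `apply_tensor(A, x) == lam * x` entrywise.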
In that case, our notion of equivalence amounts to rescaling the eigenvector, but the eigenvalue is uniquely determined. For $m \geq 3$, Qi \cite{Qi} normalizes the eigenvectors $x$ of $A$ by additionally requiring $x \cdot x = 1$. When $x \cdot x$ is non-zero, this has the effect of choosing two distinguished representatives, related by $\lambda' = (-1)^m \lambda$ and $x' = -x$, from each equivalence class. In particular, when $m$ is even, the eigenvalue is uniquely determined, and when $m$ is odd, it is determined up to sign. However, since equivalence classes with $x \cdot x = 0$ are not allowed by Qi's normalization, his definition does not strictly generalize the classical eigenvalues of a matrix. We will call eigenvalues $\lambda$ with an eigenvector satisfying $x \cdot x = 1$ \emph{normalized eigenvalues} of the tensor~$A$. In Section~6 of~\cite{Qi}, Qi considers an alternative normalization by requiring $x \cdot \overline x = 1$, where $\overline x$ is the complex conjugate. This reduces the equivalence classes from two real dimensions to one real dimension. One can still get an equivalent eigenpair $(t^{m-2}\lambda, t x)$ for any complex number $t$ with unit modulus. Yet another normalization, based on the $p$-norm over the real numbers $\mathbb R$, was introduced by Lek-Heng Lim in his variational approach \cite{Lim}. For most of this paper, we prefer not to choose any normalization whatsoever. Instead, we depend on the notion of equivalence in Definition~\ref{def:eig} in order to have a finite number of equivalence classes of eigenpairs in the generic case. This equivalence is a generalization of the usual ambiguity of eigenvectors of an $n {\times} n$-matrix $A$, which at best, are only unique up to scaling. The following theorem generalizes, from $m=2$ to $m \geq 3$, the familiar linear algebra fact that an $n {\times} n$-matrix $A$ has precisely $n$ eigenvalues over the complex numbers. 
\begin{theorem}\label{thm:count} If a tensor $A$ has finitely many equivalence classes of eigenpairs over~$\mathbb C$ then their number, counted with multiplicity, is equal to $\,((m-1)^n - 1)/(m-2)$. If the entries of $A$ are sufficiently generic, then all multiplicities are equal to $1$, so there are exactly $\,((m-1)^n - 1)/(m-2)\,$ equivalence classes of eigenpairs. \end{theorem} For the case when the tensor order $m$ is even, the above formula was derived in \cite[Theorem~3.4]{NQWW} by means of a detailed analysis of the Macaulay matrix for the multivariate resultant. For arbitrary $m$, it was stated as a conjecture in line 2 on page 1228 of \cite{NQWW}. We here present a short proof of this conjecture that is based on techniques from toric geometry \cite{CS,fulton}. The rest of this paper is organized as follows. In Section~\ref{sec:count}, we prove Theorem~\ref{thm:count}. In Section~\ref{sec:characteristic}, we investigate the characteristic polynomial. Tensors with vanishing characteristic polynomial are interpreted as singular tensors. In Section~\ref{sec:dynamics}, we relate eigenvalues of tensors to dynamics on projective space. Finally, in Section~\ref{sec:symmetric}, we specialize to the case of symmetric tensors. We show that, in that case, the set of normalized eigenvalues is always finite. \section{Intersections in a Weighted Projective Space}\label{sec:count} We shall formulate the problem of computing the eigenvalues and eigenvectors of the tensor~$A$ as an intersection problem in the $n$-dimensional weighted projective space \begin{equation*} X \,\,\, = \,\,\, \mathbb P(1,1,\ldots, 1, m-2). \end{equation*} The textbook definition of~$X$ can be found, for example, in \cite[\S 2.0]{CS} and \cite[page 35]{fulton}. Points in~$X$ are represented by vectors of complex numbers $(u_1: \cdots: u_n: \lambda)$, not all zero, modulo the rescaling $(t u_1:\cdots: t u_n: t^{m-2} \lambda)$ for any non-zero complex number~$t$. 
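Note that the count in Theorem~\ref{thm:count} equals the geometric sum $1 + (m-1) + \cdots + (m-1)^{n-1}$, so it is a positive integer for every $m \geq 3$ and specializes to $n$ when $m=2$. A quick check (our own illustration, not part of the paper):

```python
def eigenpair_count(m, n):
    """Number of equivalence classes of eigenpairs of a generic tensor,
    written as the geometric sum 1 + (m-1) + ... + (m-1)^(n-1)."""
    return sum((m - 1) ** k for k in range(n))

# The geometric sum agrees with the closed form ((m-1)^n - 1)/(m-2) for m >= 3,
# which shows in particular that the closed form is always an integer.
for m in range(3, 8):
    for n in range(1, 8):
        assert ((m - 1) ** n - 1) % (m - 2) == 0
        assert eigenpair_count(m, n) == ((m - 1) ** n - 1) // (m - 2)

print(eigenpair_count(2, 5))  # 5, the eigenvalue count of a 5x5 matrix
print(eigenpair_count(3, 2))  # 3 classes for a 2x2x2 tensor (m = 3, n = 2)
```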
The corresponding algebraic representation of our weighted projective space is $\, X = \operatorname{Proj}(R)$, where $R = \mathbb C[x_1, \ldots, x_n, \lambda]$ is the polynomial ring with $x_1,\ldots,x_n$ having degree~$1$ and $\lambda$ having degree~$m-2$. The following proof uses basic toric intersection theory as in \cite[Ch.~5]{fulton}. \begin{proof}[Proof of Theorem \ref{thm:count}] For $m = 2$, the expression $((m-1)^n-1)/(m-2)$ simplifies to $ n$, which is the number of eigenvalues of an ordinary $n {\times} n$-matrix. Hence we shall now assume that $\,m \geq 3$. For a fixed tensor~$A$, the $n$ equations determined by $\,Ax^{m-1} = \lambda x\,$ correspond to $n$ homogeneous polynomials of degree $m-1$ in our graded polynomial ring $R$. Since $R$ is generated in degree~$m-2$, the line bundle $\mathcal O_X(m-2)$ is very ample. The corresponding lattice polytope~$\Delta$ is an $n$-dimensional simplex with vertices at $(m-2)e_i$ for $1 \leq i \leq n$, and $e_{n+1}$, where the $e_i$ are the basis vectors in $\mathbb R^{n+1}$. The affine hull of~$\Delta$ is the hyperplane $\,x_1+\cdots + x_n + (m{-}2)\lambda = m{-} 2$. The normalized volume of this simplex equals \begin{equation} \label{eq:simplexvolume} {\rm Vol}(\Delta) \,\, = \,\, (m-2)^{n-1} . \end{equation} The lattice polytope $\Delta$ is smooth, except at the vertex~$e_{n+1}$, where it is simplicial with index $m-2$. Therefore, the projective toric variety $X$ is simplicial, with precisely one isolated singular point corresponding to the vertex~$e_{n+1}$. By \cite[p.~100]{fulton}, the variety $X$ has a rational Chow ring $A^*(X)_{\mathbb Q}$, which we can use to compute intersection numbers of divisors on~$X$. Our system of equations $Ax^{m-1} = \lambda x$ consists of $n$~polynomials of degree $m-1$ in~$R$. Let $D$ be the divisor class corresponding to $\mathcal O_X(m-1)$, and let $H$ be the very ample divisor class corresponding to $\mathcal O_X(m-2)$. 
The volume formula (\ref{eq:simplexvolume}) is equivalent to $\,H^n = (m-2)^{n-1}\,$ in $A^*(X)_{\mathbb Q}$, and we compute the self-intersection number of $D$ as the following rational number: \begin{equation*} D^n \quad = \quad \left(\frac{m-1}{m-2} \cdot H\right)^{\!\! n} \quad = \quad \left(\frac{m-1}{m-2}\right)^{\!\! n} \! \cdot (m-2)^{n-1} \quad = \quad \frac{(m-1)^n}{m-2}. \end{equation*} From this count we must remove the trivial solution $\{x=0\}$ of $\,Ax^{m-1} = \lambda x$. That solution corresponds to the singular point $e_{n+1}$ on~$X$. Since that point has index $m-2$, the trivial solution counts for $1/(m-2)$ in the intersection computation, as shown in \cite[p.~100]{fulton}. Therefore the number of non-trivial solutions in $X$ is equal to \begin{equation} \label{nicenumber} D^n \, - \, \frac{1}{m-2} \quad \, = \, \quad \frac{(m-1)^n-1}{m-2}. \end{equation} Hence, when the tensor $A$ admits only finitely many equivalence classes of eigenpairs, their number, counted with multiplicities, coincides with the positive integer in (\ref{nicenumber}). In Example~\ref{ex:diagonal} below we exhibit a tensor $A$ that attains the upper bound (\ref{nicenumber}). For that $A$, each solution $(x,\lambda)$ has multiplicity $1$. It follows that the same holds for generic $A$. \end{proof} \begin{remark} An alternative presentation of our proof is to perform the substitution $\lambda = \tilde\lambda^{m-2}$ in the equations $Ax^{m-1} = \lambda x$. This makes the system of equations homogeneous of degree~$m-1$. B\'ezout's theorem says that there are generically $(m-1)^n$ solutions in the projective space $\mathbb P^n$. If we remove the trivial solution, this leaves $(m-1)^n - 1$ solutions. Assuming that none of these have $\tilde\lambda = 0$, the orbits formed by multiplying $\tilde \lambda$ by $e^{2 \pi i/(m-2)}$ each yield the same value of $\lambda = \tilde \lambda^{m-2}$ and $x$. Thus, there are $((m - 1)^n - 1)/(m - 2)$ classes.
The delicate point in such a proof would be to argue that the solution to $A x^{m-1} = \tilde \lambda^{m-2} x$ has multiplicity $m-2$ even when $\tilde \lambda = 0$. In effect, toric geometry conveniently does the bookkeeping in the correspondence between solutions to $A x^{m-1} = \lambda x$ and solutions to $A x^{m-1} = \tilde \lambda^{m-2} x$. \end{remark} \begin{ex}\label{ex:diagonal} Let $A$ be the diagonal tensor of order $m$ and size $n$ defined by setting $A_{i i \ldots i} = 1$ and all other entries zero. An eigenpair $(\lambda,x)$ is a solution to the equations \begin{equation} \label{eq:binomials} x_i^{m-1} \,\,=\,\, \lambda x_i \qquad \hbox{for $\,1 \leq i \leq n$.} \end{equation} All non-trivial solutions in $X = \mathbb P(1,\ldots,1,m-2)$ satisfy $\lambda \not= 0 $. By rescaling, we can assume that $\lambda = 1$. Fix the root of unity $\zeta = e^{2\pi i/(m-2)}$, and let $S = \{0, \ldots, m-3, *\}$. For any string $\sigma$ in $S^n$ other than the all $*$s string, we define $x_i = \zeta^{\sigma_i}$ if $\sigma_i$ is an integer and $x_i = 0$ if $\sigma_i = *$. This defines $(m-1)^n - 1$ eigenpairs. However, some of these are equivalent. Incrementing each integer in our string by $1$ modulo $m-2$ corresponds to multiplying our eigenvector by~$\zeta$. Thus, we have defined $((m-1)^n - 1)/(m-2)$ equivalence classes of eigenvalues and eigenvectors. These are all equivalence classes of solutions to (\ref{eq:binomials}). More generally, suppose that $A$ is a diagonal tensor with $A_{ii\ldots i}$ equal to some non-zero complex number~$a_i$. Then the eigenpairs are similarly given by $\lambda = 1$ and $x_i = a_i^{1/{m-2}}\zeta^{\sigma_i}$ or $x_i = 0$, as above, where $a_i^{1/{m-2}}$ is a fixed root of~$a_i$. In particular, for generic $a_i$ all $((m-1)^n - 1)/(m-2)$ eigenpairs will have distinct normalized eigenvalues. \qed \end{ex} From Theorem~\ref{thm:count}, we get the following result guaranteeing the existence of real eigenpairs. 
\begin{corollary}\label{cor:real-eig} If $A$ has real entries and either $m$ or $n$ is odd, then $A$ has a real eigenpair. \end{corollary} \begin{proof} When either $m$ or $n$ is odd, then one can check that the integer $\,((m-1)^n - 1)/(m-2)\,$ in Theorem~\ref{thm:count} is odd. This implies that $A$ has a real eigenpair by \cite[Corollary 13.2]{fulton-intersection}. \end{proof} Corollary~\ref{cor:real-eig} is sharp, in the sense that there exist real tensors with no real eigenpairs whenever both $m$ and~$n$ are even. We illustrate this in the following example. \begin{ex} Let $m$ be even, $n=2$, and $A$ the $2 {\times} \cdots {\times} 2$ tensor which is zero except for the entries $a_{12\cdots2} = 1$ and $a_{21\cdots1} = -1$. The eigenpairs of $A$ are the solutions to the equations: \begin{align*} x_2^{m-1} &\,=\, \lambda x_1 \\ -x_1^{m-1} &\,=\, \lambda x_2. \end{align*} Eliminating $\lambda$, we obtain $x_1^m + x_2^m = 0$, which has no non-zero real solutions for even~$m$. For $n$ any even integer, let $B$ be the tensor whose $n/2$ diagonal $2\times \cdots \times 2$ blocks are the tensor~$A$ above, and which is zero elsewhere. A non-trivial eigenpair must be an eigenpair for at least one of the blocks, and therefore cannot be real. Thus, $B$ has no real eigenpairs. \qed \end{ex} \section{Characteristic Polynomial and Singular Tensors}\label{sec:characteristic} The {\em characteristic polynomial} $\phi_A(\lambda)$ of a generic tensor $A$ was defined as follows in \cite{NQWW, Qi}. Consider the univariate polynomial in $\lambda $ that arises by eliminating the unknowns $x_1,\ldots,x_n$ from the system of equations $A x^{m-1} = \lambda x$ and $x \cdot x = 1$. If $m$ is even then this polynomial equals $\phi_A(\lambda)$. If $m$ is odd then this polynomial has the form $\phi_A(\lambda^2)$, i.e.\ the characteristic polynomial evaluated at $\lambda^2$. 
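For $m = 2$, this elimination recovers the classical characteristic polynomial. The following SymPy sketch (our own illustration, for one concrete matrix; not part of the paper) eliminates $x_1, x_2$ from $Ax = \lambda x$ and $x \cdot x = 1$ via a lexicographic Gr\"obner basis:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
A = sp.Matrix([[2, 1], [0, 3]])  # eigenvalues 2 and 3

eqs = [A[0, 0]*x1 + A[0, 1]*x2 - lam*x1,
       A[1, 0]*x1 + A[1, 1]*x2 - lam*x2,
       x1**2 + x2**2 - 1]       # the normalization x . x = 1
G = sp.groebner(eqs, x1, x2, lam, order='lex')

# The basis element involving only lam generates the elimination ideal.
phi = [g for g in G.exprs if g.free_symbols <= {lam}][0]
phi = sp.Poly(phi, lam).monic().as_expr()
print(sp.factor(phi))  # equals (lam - 2)*(lam - 3), i.e. det(A - lam*I)
```

The pure-$\lambda$ basis element here is $\lambda^2 - 5\lambda + 6$, matching $\det(A - \lambda I)$, as expected for $m=2$.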
With these definitions, Theorem \ref{thm:count} implies the following: \begin{corollary} \label{cor:charpol} For a generic tensor $A$, the characteristic polynomial $\phi_A(\lambda)$ is irreducible and has degree $((m-1)^n-1)/(m-2)$. Hence this is the number of normalized eigenvalues. \end{corollary} For any particular tensor $A$, the {\em characteristic polynomial} $\phi_A(\lambda)$ is obtained by specializing the entries in the coefficients of the generic characteristic polynomial. Ni {\it et al.} \cite{NQWW} expressed $\phi_A(\lambda)$ as a Macaulay resultant, which implies a formula as a ratio of determinants. For the present work, we used Gr\"obner-based software to compute the characteristic polynomials of various tensors. It is tempting to surmise that all zeros of the characteristic polynomial $\phi_A(\lambda)$ are normalized eigenvalues of the tensor $A$. This statement is almost true, but not quite. There is some subtle fine print, to be illustrated by Example \ref{ex:fineprint} below. Qi \cite[Question 1]{Qi} asked whether the set of normalized eigenvalues of a tensor is either finite or all of~$\mathbb C$. We answer this question by showing a tensor where neither of these alternatives holds: \begin{ex} \label{ex:fineprint} Consider the complex $2 \times 2 \times 2$ tensor $A$ whose nonzero entries are \begin{equation*} a_{111} = a_{221} \, =\, 1 \quad \hbox{and} \quad a_{112} = a_{222} \,=\, i = \sqrt{-1} . \end{equation*} We claim that any complex number other than $0$ is a normalized eigenvalue of $A$, but $0$ is not a normalized eigenvalue. The equations for an eigenvalue and eigenvector of~$A$ are $$ x_1^2 + ix_1 x_2 \,= \, \lambda x_1 \quad \hbox{and} \quad x_1 x_2 + ix_2^2 \,= \, \lambda x_2. $$ For any $\lambda \neq 0$ we obtain a matching eigenvector that also satisfies $\,x \cdot x = 1\,$ by taking \begin{equation*} x = \left( \frac{\lambda^2 + 1}{2\lambda}, \,\frac{\lambda^2 - 1}{2 i \lambda} \right). 
\end{equation*} Hence $\lambda$ is a normalized eigenvalue. However, if $\lambda = 0$, then an eigenvector must satisfy $$ x_1^2 + i x_1 x_2 \, =\, 0 \quad \hbox{and} \quad x_1 x_2 + i x_2^2 \, = \, 0. $$ These imply that $\,x \cdot x = x_1^2 + x_2^2 \,$ is zero, so $\lambda = 0$ cannot be a normalized eigenvalue. \qed \end{ex} However, we have the following weaker statement: \begin{proposition}\label{prop:normalized-eigen} The set of normalized eigenvalues of a tensor is either finite or it consists of all complex numbers in the complement of a finite set. \end{proposition} \begin{proof} The set $\mathcal{E}(A)$ of normalized eigenvalues $\lambda$ of the tensor $A$ is defined by the condition \begin{equation*} \exists \,x \in \mathbb C^n \,\,: \,\,A x^{m-1} = \lambda x \,\,\, \hbox{and} \,\,\, x \cdot x = 1. \end{equation*} Hence $\mathcal{E}(A)$ is the image of an algebraic variety in $\mathbb C^{n+1}$ under the projection $(x,\lambda) \mapsto \lambda$. Chevalley's Theorem states that the image of an algebraic variety under a polynomial map is constructible, that is, defined by a Boolean combination of polynomial equations and inequations. We conclude that the set $\mathcal{E}(A)$ of normalized eigenvalues is a constructible subset of $\mathbb C$. This means that $\mathcal{E}(A)$ is either a finite set or the complement of a finite set. \end{proof} The relationship between the normalized eigenvalues and the characteristic polynomial is summarized in the following proposition. \begin{proposition} \label{prop:sing} For a tensor~$A$, each of the following conditions implies the next: \begin{enumerate} \item The set $\,\mathcal E(A)$ of all normalized eigenvalues consists of all complex numbers \item The set $\,\mathcal{E}(A)$ is infinite. \item The characteristic polynomial $\phi_A(\lambda)$ vanishes identically. \end{enumerate} \end{proposition} \begin{proof} Clearly, (1) implies (2). 
By the projection argument in the proof above, the zero set in $\mathbb C$ of the characteristic polynomial $\phi_A(\lambda)$ contains the set $\mathcal{E}(A)$. Hence, if $\mathcal{E}(A)$ is infinite, then $\phi_A(\lambda)$ has infinitely many roots and must vanish identically, so (2) implies (3). \end{proof} From Example~\ref{ex:fineprint}, we see that (2) does not necessarily imply (1), and in Example~\ref{ex:dreizwei}, we shall see that (3) does not imply (2). Qi defines a singular tensor to be one for which the first statement of Proposition~\ref{prop:sing} holds. However, we suggest that the last condition is a better definition: a \emph{singular tensor} is a tensor such that $\phi_A(\lambda)$ vanishes identically. This definition has the advantage that the limit of singular tensors is again singular. In particular, the set of all singular tensors is a closed subvariety in the $n^m$-dimensional tensor space $\mathbb C^{n \times \cdots \,\times n}$. Its defining polynomial equations are the coefficients of the characteristic polynomial $\phi_A(\lambda)$, where $A$ is the tensor whose entries $a_{i_1 \cdots i_m}$ are indeterminates. \begin{example} \label{ex:dreizwei} Let $m = 3$ and $n=2$. Here $A = (a_{ijk})$ is a general tensor of format $2 \times 2 \times 2$. The characteristic polynomial $\phi_A$ is obtained by eliminating $x_1$ and $x_2$ from the ideal $$ \langle \, a_{111} x_1^2 + (a_{112} + a_{121}) x_1 x_2 + a_{122} x_2^2 - \lambda x_1\, , \, a_{211} x_1^2 + (a_{212} + a_{221}) x_1 x_2 + a_{222} x_2^2 - \lambda x_2 \,, \, x_1^2 + x_2^2 - 1 \, \rangle . $$ We find that $\phi_A$ has degree $3$, as predicted by Theorem \ref{thm:count}. Namely, the elimination yields $$ \phi_A(\lambda^2) \quad = \quad C_2 \lambda^6 \, + \,C_4 \lambda^4 \,+\, C_6 \lambda^2 + C_8, $$ where $C_i$ is a certain homogeneous polynomial of degree $i$ in the eight unknowns $a_{ijk}$.
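The shape of this eliminant can be confirmed for a concrete tensor. In the SymPy sketch below (our own illustration; the entries are an arbitrary choice, not taken from the example), the pure-$\lambda$ element of a lexicographic Gr\"obner basis is an even polynomial of degree $6$, in agreement with the displayed form of $\phi_A(\lambda^2)$:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
# An arbitrary 2x2x2 tensor with a111 = 1, a112 + a121 = 2, a122 = 1,
# a211 = 0, a212 + a221 = 2, a222 = 3 (hypothetical values chosen by us).
eqs = [x1**2 + 2*x1*x2 + x2**2 - lam*x1,  # (A x^2)_1 - lam*x1
       2*x1*x2 + 3*x2**2 - lam*x2,        # (A x^2)_2 - lam*x2
       x1**2 + x2**2 - 1]                 # the normalization x . x = 1
G = sp.groebner(eqs, x1, x2, lam, order='lex')

phi2 = [g for g in G.exprs if g.free_symbols <= {lam}][0]
phi2 = sp.Poly(phi2, lam).monic().as_expr()
print(phi2)

# Only even powers of lam appear, and the degree is 6 = 2*3,
# matching phi_A(lam^2) = C2*lam^6 + C4*lam^4 + C6*lam^2 + C8.
```

Here $\lambda = \pm 1$ with $x = (\pm 1, 0)$ are visibly eigenpairs, so $\pm 1$ are roots of the eliminant.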
The set of singular $2 {\times} 2 {\times} 2$-tensors is the variety in the projective space $\mathbb P^7 = \mathbb P(\mathbb C^{2 \times 2 \times 2})$ given by the ideal \begin{equation} \label{eq:ideal} \langle C_2, C_4, C_6, C_8 \rangle \,\,\subset \,\, \mathbb C[a_{111},a_{112}, \ldots,a_{222}]. \end{equation} This is an irreducible variety of codimension~$2$ and degree~$4$, but the ideal (\ref{eq:ideal}) is not prime. The constant coefficient $C_8$ is the square of a quartic. That quartic is the {\em Sylvester resultant} $$ {\rm Res}_x (A x^2) \quad = \quad {\rm det} \begin{pmatrix} \, a_{111} & a_{112} + a_{121} & a_{122} & 0 \, \\ \, 0 & a_{111} & a_{112} + a_{121} & a_{122} \, \\ \, a_{211} & a_{212} + a_{221} & a_{222} & 0 \, \\ \, 0 & a_{211} & a_{212} + a_{221} & a_{222} \, \end{pmatrix}. \qquad $$ The leading coefficient of the characteristic polynomial is a sum of squares: $$ C_2 \, \,\,= \, \,\, (-a_{111} + a_{122} + a_{212} + a_{221} )^2 \,+\, ( a_{112} + a_{121} + a_{211} - a_{222})^2. $$ This indicates that singular $2 {\times} 2 {\times} 2$-tensors with real entries are scarce. Indeed, the real variety of (\ref{eq:ideal}) is the union of two linear spaces of codimension~$4$, with defining ideal \begin{equation*} \langle a_{122}, a_{211}, a_{112}+a_{121}-a_{222}, a_{212}+a_{221}-a_{111} \rangle \,\cap \, \langle a_{111} - a_{122}, a_{211} - a_{222}, a_{112} + a_{121}, a_{212} + a_{221} \rangle. \end{equation*} This explains why the singular tensor in Example \ref{ex:fineprint} had to have a non-real coordinate. We now look more closely at the real singular tensors defined by the second ideal in this intersection. These are tensors $A$ for which $\,Ax^2 = \bigl( a_{111}(x_1^2 + x_2^2) \,,\, a_{211}(x_1^2 + x_2^2) \bigr)$.
It is easy to see that, so long as $a_{111}$ and~$a_{211}$ are not both zero, the only normalized eigenvector is \begin{equation*} \left( \frac{a_{111}}{\sqrt{a_{111}^2 + a_{211}^2}}, \frac{a_{211}}{\sqrt{a_{111}^2 + a_{211}^2}} \right), \mbox{ which has eigenvalue } \lambda = \sqrt{a_{111}^2 + a_{211}^2}. \end{equation*} In particular, the number of eigenvalues of such a tensor must be finite. This example shows that (3) does not imply (2) in Proposition~\ref{prop:sing}. \qed \end{example} The reader will not have failed to notice that the notion of ``singular'' used here (and in \cite{Qi}) is more restrictive than the one familiar from the classical case $m=2$. Indeed, a matrix is singular if it has $\lambda = 0$ as an eigenvalue, or, equivalently, if the constant term of the characteristic polynomial vanishes. That constant term is a power of the resultant $\,{\rm Res}_x(A x^{m-1})$, and its vanishing means that the homogeneous equations $A x^{m-1} = 0$ have a non-trivial solution $x \in \mathbb P^{n-1}$. This holds when the tensor $A$ is singular but not conversely. Yet another possible notion of singularity for a tensor $A$ arises from its {\em hyperdeterminant} ${\rm Det}(A)$, as defined in \cite{gkz}. For example, the hyperdeterminant of a $2 {\times} 2 {\times} 2 $-tensor equals $$ \begin{matrix} {\rm Det}(A) &=& a_{122}^2 a_{211}^2 +a_{121}^2 a_{212}^2 +a_{112}^2 a_{221}^2 +a_{111}^2 a_{222}^2 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \\ & & - 2 a_{121} a_{122} a_{211} a_{212} -2 a_{112} a_{122} a_{211} a_{221} -2 a_{112} a_{121} a_{212} a_{221} -2 a_{111} a_{122} a_{211} a_{222} \\ & & - 2 a_{111} a_{121} a_{212} a_{222} - 2 a_{111} a_{112} a_{221} a_{222} +4 a_{111} a_{122} a_{212} a_{221} + 4 a_{112} a_{121} a_{211} a_{222} . \end{matrix} $$ The hyperdeterminant vanishes if the hypersurface defined by the multilinear form associated with $A$ has a singular point in $(\mathbb P^{n-1})^m$. 
This property is unrelated to the coefficients of the characteristic polynomial $\phi_A(\lambda)$. In particular, $\,{\rm Det}(A) \not= {\rm Res}_x(A x^{m-1}) $. For example, the $2 {\times} 2 {\times} 2$-tensor $A$ all of whose entries are $a_{ijk} = 1$ is not singular but satisfies ${\rm Det}(A) = 0$. On the other hand, the following tensor $A$ is singular but has ${\rm Det}(A) = -1$: \begin{equation*} a_{111} = -1, \, a_{112} = 0, \, a_{121} = 0, \, a_{122} = -1, \,\, a_{211} = 1,\, a_{212} = -1, \, a_{221} = 0, \, a_{222} = 1-i. \end{equation*} This highlights the distinction between our setting here and that in \cite[Proposition 2]{Lim}. \section{Dynamics on Projective Space}\label{sec:dynamics} The purpose of this short section is to point out a connection to dynamical systems. Dynamics on projective space is a well-established field of mathematics \cite{BJ, ivashkovich}. We believe that the interpretation of eigenpairs of tensors in terms of fixed points on $\mathbb P^{n-1}$ could be of interest to applied mathematicians as a new tool for modeling and numerical computations. We consider the map $\psi_A$ defined by the formula $\psi_A(x) = Ax^{m-1}$. This is a rational map from complex projective space $\mathbb P^{n-1}$ to itself. The fixed points of the map $\psi_A \colon \mathbb P^{n-1} \dashrightarrow \mathbb P^{n-1}$ are exactly the eigenvectors of the tensor $A$ with non-zero eigenvalue, and the base locus of $\psi_A$ is the set of eigenvectors with eigenvalue zero. In particular, the map $\psi_A$ is defined everywhere if and only if $0$ is not an eigenvalue of~$A$. Note that every such rational map arises from some tensor~$A$, but the tensor is not unique. Indeed, $A$ has $n^m$ entries while the map is determined by $n$ polynomials, which have only $n \binom{n +m -2}{ m-1}$ distinct coefficients.
For instance, in Example~\ref{ex:dreizwei}, with $m=3,n=2$, the eight entries of the tensor translate into six distinct coefficients of the two binary quadrics that specify the self-map of the projective line $\,\psi_A\colon \mathbb P^1 \dashrightarrow \mathbb P^1$. It is instructive to revisit the classical case $m=2$, where $\psi_A\colon \mathbb P^{n-1} \dashrightarrow \mathbb P^{n-1}$ is a linear map. The condition that every eigenvalue of the matrix~$A$ is zero is equivalent to saying that $A$ is {\em nilpotent}, that is, some matrix power of~$A$ is zero. Geometrically, this means that some iterate of the rational map $\psi_A$ is defined nowhere in projective space $\mathbb P^{n-1}$. We use the same definition for tensors: $A$ is {\em nilpotent} if some iterate of $\psi_A$ is nowhere defined. \begin{proposition} If the tensor $A$ is nilpotent then $0$ is the only eigenvalue of $A$. The converse is not true: there exist tensors whose only eigenvalue is $0$ but which are not nilpotent. \end{proposition} \begin{proof} Suppose $\lambda \not= 0 $ is an eigenvalue and $x \in \mathbb C^n\backslash \{0\}$ a corresponding eigenvector. Then $x$ represents a point in $\mathbb P^{n-1}$ that is fixed by $\psi_A$. Hence it is fixed by every iterate $\psi_A^{(r)}$ of $\psi_A$. In particular, $\psi_A^{(r)}$ is defined on an open neighborhood of $x \in\mathbb P^{n-1}$, and $A$ is not nilpotent. Let $A$ be the $2 {\times} 2 {\times} 2$-tensor with $a_{111} = a_{211} = a_{212} = 1$ and the other five entries zero. The eigenpairs of $A$ are the solutions to $\, x_1^2 \, = \, \lambda x_1 \,$ and $\, x_1^2 + x_1x_2 \, = \, \lambda x_2 $. Up to equivalence, the only eigenpair is $x = (0, 1)$ and $\lambda = 0$. However, the self-map $\psi_A$ on $\mathbb P^1$ is dominant. To see this, note that $\psi_A$ acts by translation on the affine line $\,\mathbb A^1 = \{ x_1 \neq 0\}$ because $(x_1^2: x_1^2 + x_1x_2) = (x_1: x_1+x_2)$.
All iterates of $\psi_A$ are defined on $\mathbb A^1$, i.e.~there are no base points with $x_1 \not= 0$, and hence $A$ is not nilpotent. \end{proof} The example in the previous proof works because the two binary quadrics in $Ax^{m-1}$ have $x_1$ as a common factor. Indeed, whenever $n=2$, an eigenvector~$x$ has eigenvalue zero if and only if $x$ is a solution to a common linear factor of the two binary forms of $Ax^{m-1}$. However, for $n \geq 3$, this is no longer true. Work in dynamics by Ivashkovich \cite[Theorem~1]{ivashkovich} implies that, for $n=3$, one can construct tensors~$A$ such that zero is the only eigenvalue, the polynomials in $Ax^{m-1}$ have no common factors, but $A$ is not nilpotent. \begin{example} This example is taken from \cite[Example~4.1]{ivashkovich}. Let $m=n=3$ and take $A$ to be any tensor whose corresponding map is the Cremona transformation $$ \psi_A \colon \mathbb P^2 \dashrightarrow \mathbb P^2 \,:\, (x_1,x_2,x_3) \mapsto (x_1 x_2, x_1 x_3, 2 x_2 x_3).$$ This map has no fixed points, but it is not nilpotent. The base locus of $\psi_A$ consists of the three points $(1,0,0)$, $(0,1,0)$, and~$(0,0,1)$, and, up to scaling, these are the only eigenvectors of~$A$, all with eigenvalue~$0$. \qed \end{example} \section{Symmetric Tensors}\label{sec:symmetric} Of particular interest in numerical multilinear algebra is the situation when the tensor $A$ is symmetric and has real entries. Here $A$ being {\em symmetric} means that the entries $a_{i_1 i_2 \cdots i_m}$ are invariant under permuting the $m$ indices $i_1, i_2,\ldots,i_m$. Each symmetric $n {\times} \cdots {\times} n$ tensor $A$ of order~$m$ corresponds to a unique homogeneous polynomial~$f(x)$ of degree~$m$ in~$n$ unknowns. The symmetric case is of interest because a real polynomial~$f(x)$ of even degree~$m$ is positive semidefinite if and only if every real eigenpair of the corresponding symmetric tensor~$A$ has non-negative eigenvalue~\cite[Theorem~5(a)]{Qi-sym}.
This is illustrated in Example~\ref{ex:motzkin}. In the notation of \cite{Qi-sym}, the relation between the tensor and the polynomial is written as \begin{equation} \label{eqn:iswrittenas} Ax^m \,\,=\,\, mf(x) \quad \hbox{and} \quad A x^{m-1} \,\, =\,\, \nabla f(x) , \end{equation} where $Ax^m$ is defined to be \begin{equation*} \sum_{i_1 = 1}^n \sum_{i_2 = 1}^n \cdots \sum_{i_m = 1}^n a_{i_1 \ldots i_m} x_{i_1} x_{i_2} \cdots x_{i_m} = x \cdot Ax^{m-1}. \end{equation*} The first equation in~(\ref{eqn:iswrittenas}) follows from the second because $x \cdot \nabla f(x) = mf(x)$. Note that the second equation in~(\ref{eqn:iswrittenas}) says that the coordinates of the gradient of $f(x)$ are precisely the entries of $A x^{m-1}$. The gradient $\nabla f(x)$ vanishes at a point $x$ in $\mathbb P^{n-1}$ if and only if $x$ is a singular point of the hypersurface in $\mathbb P^{n-1}$ defined by the polynomial $f(x)$. This implies: \begin{corollary} \label{cor:sing2} The singular points of the projective hypersurface $\{x \in \mathbb P^{n-1}:f(x) = 0\}$ are precisely the eigenvectors of the corresponding symmetric tensor $A$ which have eigenvalue~$0$. \end{corollary} The other eigenvectors of $A$ can also be characterized in terms of the polynomial $f(x)$. \begin{proposition} Fix a non-zero $\lambda$ and suppose $m \geq 3$. Then $x \in \mathbb C^n$ is a normalized eigenvector with eigenvalue $\lambda$ if and only if $x$ is a singular point of the affine hypersurface defined by the polynomial \begin{equation}\label{eqn:eig-char} f(x) - \frac{\lambda}{2} x \cdot x - \left(\frac{1}{m} - \frac{1}{2}\right) \lambda. \end{equation} \end{proposition} \begin{proof} The gradient of~(\ref{eqn:eig-char}) is $\nabla f - \lambda x = Ax^{m-1} - \lambda x$, so every singular point $x$ is an eigenvector with eigenvalue~$\lambda$. Furthermore, if we substitute $f(x) = \frac{1}{m} x \cdot \nabla f = \frac{\lambda}{m} x\cdot x\,$ into (\ref{eqn:eig-char}), then we obtain $x\cdot x = 1$. 
This argument is reversible: if $x$ is a normalized eigenvector of $A$ then $x \cdot x = 1$ and $\nabla f(x) = \lambda x$, and this implies that (\ref{eqn:eig-char}) and its derivatives vanish. \end{proof} \begin{corollary} The characteristic polynomial $\phi_A(\lambda)$ is a factor of the discriminant of (\ref{eqn:eig-char}). \end{corollary} Here we mean the classical multivariate discriminant \cite{gkz} of an inhomogeneous polynomial of degree $m$ in $n$ variables $x$ evaluated at (\ref{eqn:eig-char}), where $\lambda$ is regarded as a parameter. Besides the characteristic polynomial $\phi_A(\lambda)$, this discriminant may contain other irreducible factors. \begin{example}[\em Discriminantal representation of the characteristic polynomial of a symmetric tensor] If $n=2$ and $m=3$ then the discriminant of the bivariate cubic in (\ref{eqn:eig-char}) equals $\,\lambda^4 \cdot \phi_A(\lambda)$. If $n=2$ and $m=4$ then we evaluate the discriminant of the corresponding ternary quartic using Sylvester's formula \cite[\S 3.4.D]{gkz}. The output has the discriminant of the binary quartic as an extraneous factor: $$ \hbox{\rm Discriminant of (\ref{eqn:eig-char})} \quad = \quad \bigl(\phi_A(\lambda) \bigr)^2 \cdot \lambda^9 \cdot \hbox{\rm Discriminant of $f(x)$} $$ It would be interesting to determine the analogous factorization for arbitrary $m$ and $n$. \qed \end{example} \smallskip The subject of this paper is the number of normalized eigenvalues of a tensor. In Section~2 we gave an upper bound for that number under the hypothesis that the number is finite. Remarkably, this hypothesis is not needed if we restrict our attention to symmetric tensors. \begin{theorem}\label{thm:finite-norm-eig} Every symmetric tensor $A$ has at most $((m-1)^n-1)/(m-2)$ distinct normalized eigenvalues. This bound is attained for generic symmetric tensors $A$. \end{theorem} \begin{proof} It suffices to show that the number of normalized eigenvalues of \underbar{every} symmetric tensor $A$ is finite.
Recall from the proof of Theorem~\ref{thm:count} that the set of eigenpairs is the intersection of $n$ linearly equivalent divisors on a weighted projective space. Since these divisors are ample, each connected component of the set of eigenpairs contributes at least one to the intersection number. Therefore, the number of connected components of eigenpairs can be no more than $((m-1)^n - 1)/(m-2)$. We conclude that the number of normalized eigenvalues of $A$, if finite, must be bounded above by that quantity as well. Finally, Example~\ref{ex:diagonal} shows that the bound is tight. We now prove that the number of normalized eigenvalues of a symmetric tensor $A$ is finite. Let $S$ be the affine hypersurface in $\mathbb C^n$ defined by the equation $x_1^2 + \cdots + x_n^2 = 1$. We claim that a point $x \in S$ is an eigenvector of~$A$ if and only if $x$ is a critical point of $f$ restricted to~$S$, in which case, the corresponding eigenvalue $\lambda$ equals $mf(x)$. By definition, a point $x \in S$ is a critical point of $f \vert_S$ if and only if the gradient $\nabla (f \vert_S)$ is zero at~$x$. The latter condition is equivalent to the gradient $\nabla f$ being a multiple of $\nabla(x_1^2 + \cdots + x_n^2 - 1) = 2x$. This is exactly the definition of an eigenvector. Finally, if $x\in S$ is a critical point of $f \vert_S$, then $mf(x) = x \cdot \nabla f(x) = \lambda x \cdot x = \lambda$, and hence $\,\lambda = mf(x)$. To complete the proof of Theorem~\ref{thm:finite-norm-eig}, we note that, by generic smoothness~\cite[Cor. III.10.7]{Hartshorne}, a polynomial function on a smooth variety has only finitely many critical values. Equivalently, Sard's theorem in differential geometry says that the set of critical values of a differentiable function has measure zero, so, by Proposition~\ref{prop:normalized-eigen}, that set must be finite. \end{proof} We note two subtleties about Theorem~\ref{thm:finite-norm-eig}.
First, it does not imply that the characteristic polynomial of every symmetric tensor is non-trivial. Second, the result is intrinsically tied to the normalization $x \cdot x = 1$. We begin with an example of the first. \begin{example} Let $A$ be the symmetric $2 \times 2 \times 2$ tensor with \begin{equation*} a_{111} = -2i \,,\quad a_{112} = a_{121} = a_{211} = 1 \,,\quad a_{122} = a_{212} = a_{221} = 0 \,, \quad a_{222} = 1. \end{equation*} Then, up to equivalence, the only eigenvectors are $(0, 1)$ with eigenvalue~$1$ and $(1, i)$ with eigenvalue~$0$. Note that the second cannot be rescaled to be a normalized eigenvector, so the only normalized eigenvalue is~$1$. However, the characteristic polynomial of~$A$ is identically zero. The reason is that, for a small perturbation of $A$, the perturbation of the eigenvector~$(1,i)$ can take on any given normalized eigenvalue. \qed \end{example} No analogue of Theorem~\ref{thm:finite-norm-eig} is possible with the alternative normalization of requiring $x \cdot \overline x = 1$. In this case, each equivalence class yields infinitely many eigenvalues, which nonetheless have the same magnitude. However, the following example shows that the magnitudes of the eigenvalues with $x \cdot \overline x = 1$ may still be an infinite set. \begin{example} Let $A$ be the symmetric $3 {\times} 3 {\times} 3$ tensor whose non-zero entries are \begin{equation*} a_{111} = 2 \quad \hbox{and} \quad a_{122} = a_{212} = a_{221} \,=\, a_{133} = a_{313} = a_{331} \,=\, 1. \end{equation*} The eigenpairs of $A$ are the solutions to the equations \begin{align*} 2 x_1^2 + x_2^2 + x_3^2 &\,=\, \lambda x_1, \\ 2 x_1 x_2 &\,=\, \lambda x_2, \\ 2 x_1 x_3 &\,=\, \lambda x_3. \end{align*} For any $\alpha \in \mathbb C$, the vector $\,x = (1, i \alpha, \alpha)$ is an eigenvector with eigenvalue $\lambda = 2$. 
Rescaling, $x/\sqrt{x \cdot \overline x}$ is an eigenvector with unit length and eigenvalue \begin{equation*} \frac{2}{\sqrt{1 + 2 \lvert \alpha \rvert^2}}. \end{equation*} The magnitude of this eigenvalue can be any real number in the interval $(0,2\,]$. Note that the family of eigenvectors above all satisfy $x \cdot x = 1$, so $\lambda = 2$ is the only normalized eigenvalue. \qed \end{example} One application of eigenvalues of symmetric tensors is that these can be used to decide whether a polynomial $f$ is {\em positive semidefinite}, i.e., whether $f(x) \geq 0$ for all $x \in \mathbb R^n$. \begin{example}\label{ex:motzkin} The {\em Motzkin polynomial} $f(x,y,z) = z^6 + x^4 y^2 + x^2 y^4 - 3 x^2 y^2 z^2$ is a well-known example of a positive semidefinite polynomial which cannot be written as a sum of squares. Let $A$ be the corresponding $3 {\times} 3 {\times} 3 {\times} 3 {\times} 3 {\times} 3$-tensor. This tensor has $25$ eigenvalues, counting multiplicities, six less than our upper bound of $31$. Disregarding multiplicities, there are only four distinct eigenvalues. All four are real and they are equal to: $0$ (with multiplicity~$14$), $3/32$ (with multiplicity~$8$), $3/2$ (with multiplicity~$2$), and $6$ (with multiplicity~$1$). By~\cite[Theorem 5(a)]{Qi-sym}, this confirms the fact that the Motzkin polynomial $f$ is positive semidefinite. \qed \end{example} \bigskip {\bf Acknowledgments.} We thank Tamara Kolda for inspiring this project, with a question she asked us at the {\em Berkeley Optimization Day} on March 6, 2010. Both authors were supported in part by the National Science Foundation (DMS-0456960 and DMS-0757207). \bigskip
% https://arxiv.org/abs/1509.07908

\title{Helly-type theorems for the diameter}

\begin{abstract}
We study versions of Helly's theorem that guarantee that the intersection of a family of convex sets in $\mathds{R}^d$ has a large diameter. This includes colourful, fractional and $(p,q)$ versions of Helly's theorem. In particular, the fractional and $(p,q)$ versions work with conditions where the corresponding Helly theorem does not. We also include variants of Tverberg's theorem, B\'ar\'any's point selection theorem and the existence of weak $\varepsilon$-nets for convex sets with diameter estimates.
\end{abstract}
\section{Introduction} Quantitative results in combinatorial geometry have recently caught new interest. Those surrounding Helly's theorem aim to show that \textit{given a finite family of convex sets in $\mathds{R}^d$, if the intersection of every small subfamily is large, then the intersection of the whole family is also large} \cite{Amenta:2015tp}. When the size of a convex set is measured according to a function that varies discretely, such as the number of points in a lattice, there are very sharp results (e.g. \cite{Aliev:2014va}). However, when the size of a convex set is measured according to a function that varies continuously, such as the volume or diameter, the behaviour changes considerably. The first results of this kind were presented by B\'ar\'any, Katchalski and Pach \cite{Barany:1982ga, Barany:1984ed}, who proved Helly-type theorems regarding the volume and diameter of the intersection of families of convex sets. They showed that, given a finite family of convex sets in $\mathds{R}^d$, if the intersection of every $2d$ of them has volume at least one, one can obtain lower bounds on the volume of the intersection, and the same holds for the diameter. The constant $2d$ is optimal, but the downside is that the guarantee on the diameter or volume of the intersection decreases quickly with the dimension. Helly's theorem has an impressive number of variations and generalisations (see, for instance, the surveys \cite{Danzer:1963ug,Eckhoff:1993uy, Matousek:2002td, Wenger:2004uf, Amenta:2015tp}). Thus, it is natural to determine which results can be extended in this quantitative framework. For the volume, several advances have been made in this direction \cite{Naszodi:2015vi, DeLoera:2015wp, Soberon:2015tsa}. This includes optimising the original result by B\'ar\'any, Katchalski and Pach, and finding colourful versions, fractional versions and $(p,q)$ type theorems.
These are classical variations of Helly's theorem found in \cite{Barany:1982va}, \cite{Katchalski:1979bq} and \cite{Alon:1992gb}, respectively. The aim of this paper is to present analogues to these results for the diameter. For example, the guarantee on the size of the intersection can be improved if we are willing to check larger families. Regarding the diameter, the following result makes this clear. \begin{theoremp}[Helly's theorem for diameter, De Loera et al. {\cite[Thm 1.5]{DeLoera:2015wp}}] Let $d$ be a positive integer and $1>\delta>0$. Then, there is an integer $n=n(\diam, d, \delta)$ such that for any finite family $\mathcal{F}$ of convex sets in $\mathds{R}^d$, if the intersection of every subfamily of size $n$ has diameter greater than or equal to one, then $\diam (\cap \mathcal{F}) \ge 1- \delta$. Moreover, $n(\diam, d, \delta) = \Omega_d (\delta^{-(d-1)/2})$. \end{theoremp} Given two functions $g(d,\delta)$ and $f(d,\delta)$, we say $g(d,\delta)=\Omega_d (f(\delta))$ if, for any fixed $d$, $g(d,\delta) = \Omega (f(\delta))$, and similarly with other notation for asymptotic bounds. For an upper bound to the result above, one can apply the main theorem of \cite{Langberg:2009go} to get $n(\diam, d, \delta) = O(\delta^{-d/2})$. The equivalent result for volume {\cite[Thm 1.4]{DeLoera:2015wp}} has similar upper and lower bounds in terms of $\delta$, giving $n(\vol, d, \delta)= \Theta_d (\delta^{-(d-1)/2})$. In the same spirit as Lov\'asz's generalisation of Helly's theorem \cite{Barany:1982va}, we show a ``colourful version'' of De Loera et al.'s diameter Helly in Section \ref{section-colourful}. Moreover, Theorem \ref{theorem-colourful-diameter} implies an asymptotically optimal bound for the diameter as well: $n(\diam, d, \delta) = \Theta_d (\delta^{-(d-1)/2})$. Asymptotic results such as the one above hold for a very general family of functions, and are closely related to the approximability of convex sets by polyhedra \cite{Amenta:2015tp}.
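The exponent $-(d-1)/2$ can already be seen in the plane: for halfplanes tangent to a disc, leaving one halfplane out of a family of $k$ tangent halfplanes inflates the diameter of the intersection by an error of order $k^{-2}$, so certifying the diameter up to an error $\delta$ requires on the order of $\delta^{-1/2}$ sets. The following numerical sketch (our own illustration, not taken from the references) checks this quadratic decay:

```python
import numpy as np
from itertools import combinations

def tangent_polytope_diameter(k, removed=()):
    """Diameter of {x in R^2 : <x, u_j> <= 1, j kept}, where the u_j are
    k equally spaced unit normals, so each line is tangent to the unit disc."""
    kept = [j for j in range(k) if j not in removed]
    U = np.array([(np.cos(2 * np.pi * j / k), np.sin(2 * np.pi * j / k))
                  for j in kept])
    verts = []
    for a, b in combinations(range(len(kept)), 2):
        M = np.array([U[a], U[b]])
        if abs(np.linalg.det(M)) < 1e-12:        # parallel tangent lines
            continue
        x = np.linalg.solve(M, np.ones(2))       # intersection of the two lines
        if np.all(U @ x <= 1 + 1e-9):            # keep feasible vertices only
            verts.append(x)
    V = np.array(verts)
    d2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.max()))              # diameter = max vertex distance

# Dropping one halfplane inflates the diameter by an error of order 1/k^2:
gaps = {k: tangent_polytope_diameter(k, removed=(0,)) - tangent_polytope_diameter(k)
        for k in (24, 48)}
print(gaps[24] / gaps[48])                       # roughly 4, i.e. quadratic decay
```

Doubling the number of tangent halfplanes shrinks the leave-one-out error by roughly a factor of four, matching $\delta \sim k^{-2}$, i.e. $k \sim \delta^{-1/2}$ in dimension $d=2$.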
If we allow for a loss of diameter $\delta$, other versions of Helly's theorem can be recreated. In particular, we show a version of Alon and Kleitman's $(p,q)$ theorem \cite{Alon:1992gb} for the diameter. The $(p,q)$ theorem was conjectured originally by Hadwiger and Gr\"unbaum \cite{Hadwiger:1957we}, and asks if a slight weakening of Helly's condition is still enough information to bound the number of points needed to intersect all members of a finite family of convex sets in $\mathds{R}^d$. A lucid description of the theorem and its variations is contained in a survey by Eckhoff \cite{Eckhoff:2003ed}, and more recent results are summarised in \cite{Amenta:2015tp}. \begin{theorem}[$(p,q)$ theorem for diameter]\label{theorem-p,q-diameter} Let $p \ge q \ge 2d$ be positive integers and $1 > \delta > 0$. Then, there is a $c = c(p,q,d,\delta)$ such that for any finite family $\mathcal{F}$ of at least $p$ convex sets in $\mathds{R}^d$ of diameter at least one each, if out of every $p$ sets in $\mathcal{F}$, there are $q$ of them whose intersection has diameter at least one, then we can find $c$ convex sets $K_1, K_2, \ldots, K_c$ of diameter at least $1-\delta$ such that every set in $\mathcal{F}$ contains at least one $K_i$. \end{theorem} In the volumetric version \cite[Thm. 1.2]{Soberon:2015tsa}, the lower bound on $q$ depends heavily on $\delta$. The proof of the original $(p,q)$ theorem is a tour de force of combinatorial geometry, and requires many classic results. In order to prove Theorem \ref{theorem-p,q-diameter}, we give diameter versions of these as well. Notice that if one stubbornly refuses to check large families, as Helly's theorem for the diameter requires, the result above gives non-trivial consequences with the same condition as B\'ar\'any, Katchalski and Pach used. \begin{corollary}\label{corollary-chido} Let $d$ be a positive integer and $1 > \delta > 0$.
Let $\mathcal{F}$ be a finite family of convex sets in $\mathds{R}^d$ such that the intersection of every $2d$ of them has diameter greater than or equal to one. Then, $\mathcal{F}$ may be split into $c(2d,2d,d,\delta)$ parts such that the diameter of the intersection of each of them is at least $1-\delta$. \end{corollary} The loss of diameter $\delta$ is necessary in Theorem \ref{theorem-p,q-diameter} and Corollary \ref{corollary-chido}. This is shown in section \ref{section-remarks} with a construction. The rest of the paper is organised as follows. In section \ref{section-width} we show Helly-type results for the property \textit{having $v$-width at least one}, for some fixed direction $v$. In section \ref{section-colourful} we show a colourful version of Helly's theorem for the diameter. In section \ref{section-fractional} we prove diameter versions of the fractional Helly theorem \cite{Katchalski:1979bq}, Tverberg's theorem \cite{Tverberg:1966tb}, B\'ar\'any's selection theorem (sometimes called the ``first selection lemma'') \cite{Barany:1982va} and the existence of weak $\varepsilon$-nets for convex sets \cite{Alon:2008ek} in order to prove Theorem \ref{theorem-p,q-diameter}. Finally, in section \ref{section-remarks} we include some remarks and open problems. \section{Results for fixed direction width}\label{section-width} If instead of looking at the diameter, one is interested in the width in a fixed direction $v$, we can obtain similar Helly-type results to the ones mentioned in the introduction. The original proofs for Helly-type theorems can be translated to this setting with minimal effort. However, since some of them are useful for the results regarding the diameter, we present them completely here. Let $v$ be a unit vector in $\mathds{R}^d$.
Given a compact convex set $K \subset \mathds{R}^d$, we say that $p \in K$ is a $v$-directional minimum if $\langle v, p \rangle \le \langle v, x\rangle$ for all $x \in K$, where $\langle \cdot, \cdot \rangle$ denotes the usual dot product. We define a $v$-directional maximum similarly. Throughout the rest of the paper we will assume that all convex sets we work with are compact and their boundary contains no segments. This guarantees that the $v$-directional minima for the sets and their non-empty finite intersections exist and are unique. Standard approximation techniques show that there is no loss of generality. Given a compact convex set $K$, we define its $v$-width as $\langle q, v\rangle - \langle p, v\rangle$, where $q, p$ are its $v$-directional maximum and $v$-directional minimum, respectively. \begin{theorem}[Helly for $v$-width]\label{theorem-v-width-basic-helly} Let $v$ be a unit vector in $\mathds{R}^d$ and $\mathcal{F}$ be a finite family of convex sets in $\mathds{R}^d$ such that the intersection of every $2d$ sets of $\mathcal{F}$ has $v$-width greater than or equal to one. Then, the $v$-width of $\cap \mathcal{F}$ is greater than or equal to one. \end{theorem} \begin{proof} Let $A$ be a subfamily of size $d$ whose $v$-directional minimum $p$ maximizes $\langle p, v\rangle$. Given any other set $K_0 \in \mathcal{F}$, let us show $p \in K_0$. We know that $A \cup \{K_0\}$ must be intersecting, so let $u$ be a point of the intersection. The minimality of $p$ implies $\langle u, v\rangle \ge \langle p, v\rangle$. If we denote by $K_1, K_2, \ldots, K_d$ the sets in $A$, every $d$-tuple of $A \cup \{K_0\}$ must be intersecting. We call $p_i$ the $v$-directional minimum of $\cap\left((A \cup \{K_0\})\setminus \{K_i\}\right)$. We know that $\langle p_i, v\rangle \le \langle p, v\rangle$ for each $i$ (also notice $p_0 = p$). By convexity, there is a point $u_i \in \cap\left((A \cup \{K_0\})\setminus \{K_i\}\right)$ such that $\langle u_i, v \rangle = \langle p, v \rangle$ for each $i$.
This gives us $d+1$ points in the hyperplane $\{y : \langle y , v\rangle = \langle p, v \rangle \}$, of dimension $d-1$. By Radon's lemma \cite{Radon:1921vh}, these points can be partitioned into two sets $B, C$ such that $\conv (B) \cap \conv (C) \neq \emptyset$. Let $p'$ be a point in $\conv (B) \cap \conv (C)$. It is immediate that $p' \in K_i$ for all $0 \le i \le d$. Thus, $p = p'$ and we have $p \in K_0$, as desired. Let $B$ be a subfamily of size $d$ with minimal $v$-directional maximum $q$. Again, every set in the family contains $q$. Since the $v$-width of $\cap(A \cup B)$ is at least one, and this is realised by the segment $[p,q]$, the $v$-width of $\cap \mathcal{F}$ is at least one. \end{proof} \begin{theoremp}[Colourful Carath\'eodory for two points]\label{theorem-colourful-caratheodory} Given $2d$ sets of points $S_1, S_2, \ldots, S_{2d}$, and a set $S$ of two points $x,y$ such that $\{x,y\} \subset \conv (S_i)$ for each $1 \le i \le 2d$, there is a choice of points $s_1 \in S_1, \ldots, s_{2d} \in S_{2d}$ such that $\{x,y\} \subset \conv \{s_1, \ldots, s_{2d}\}$. \end{theoremp} This is a particular case of Theorem 1.3 in \cite{DeLoera:2015wp}. \begin{theorem}[Colourful Helly for $v$-width] Let $\mathcal{F}_1, \mathcal{F}_2, \ldots, \mathcal{F}_{2d}$ be finite families of convex sets in $\mathds{R}^d$, considered as colour classes. If the $v$-width of the intersection of every rainbow choice $F_1 \in \mathcal{F}_1, \ldots, F_{2d} \in \mathcal{F}_{2d}$ is at least one, then there is a colour class $\mathcal{F}_i$ such that the $v$-width of $\cap \mathcal{F}_i$ is at least one. \end{theorem} \begin{proof} A systematic way to obtain a colourful Helly theorem from a ``monochromatic'' version was presented in \cite[Thm. 5.3]{DeLoera:2015tc}. Thus, it is sufficient to check the conditions of that result. Let $\mathcal{P}(K)$ stand for \textit{``$K$ has $v$-width at least one''}. Then, the following properties are satisfied.
\begin{itemize} \item $\mathcal{P}$ is a Helly property (Theorem \ref{theorem-v-width-basic-helly}), with Helly number $2d$. \item $\mathcal{P}$ is a monotone property, i.e., if $K \subset K'$ then $\mathcal{P}(K)$ implies $\mathcal{P}(K')$. \item Let $v'$ be a unit vector in $\mathds{R}^d$ sufficiently close to $v$, but different. We call a set of the form $\{x \in \mathds{R}^d : \langle x, v' \rangle \le \alpha \}$, for some $\alpha$, a $v'$-semispace. Then, for every compact convex $K$ without segments in its boundary such that $\mathcal{P}(K)$ holds, there is a containment-minimal $v'$-semispace $H$ such that $\mathcal{P}(K \cap H)$ holds. Moreover, if we denote by $p, q$ the $v$-directional minimum and $v$-directional maximum of $K \cap H$ (which exist and are unique since $v' \neq v$), then every closed convex subset $K' \subset K \cap H$ with $\mathcal{P}(K')$ satisfies $\conv\{p,q\} \subset K'$. \end{itemize} Having these properties, \cite[Thm. 5.3]{DeLoera:2015tc} implies the result we were seeking. \end{proof} The same idea leads to a fractional version. \begin{theorem}\label{theorem-helly-v-width} Let $\alpha > 0$, $d$ a positive integer and $v$ a unit vector in $\mathds{R}^d$. Then, there is a positive constant $\beta$ depending only on $\alpha, d$ such that for any family $\mathcal{F}$ of $n$ convex sets in $\mathds{R}^d$ such that the intersection of at least $\alpha {{n}\choose{2d}}$ of the $2d$-tuples has $v$-width greater than or equal to one, there is a subfamily $\mathcal{F}'$ of $\mathcal{F}$ of cardinality at least $\beta n$ such that its intersection has $v$-width at least one. \end{theorem} \begin{proof} Let $v' \neq v$ be a unit vector in $\mathds{R}^d$ sufficiently close to $v$. For each subfamily $A$ of $\mathcal{F}$ of cardinality $2d-1$ whose intersection has $v$-width at least one, let $H_A$ be the containment-minimal $v'$-semispace such that $\cap (A \cup \{H_A\})$ has $v$-width at least one.
Notice that if we consider $p_A, q_A$ the $v$-directional maximum and minimum of $\cap (A \cup \{H_A\})$ respectively (which exist and are unique since $v' \neq v$), then every convex set $C$ of $v$-width at least one such that $C \subset \cap (A \cup \{H_A\})$ satisfies $\{p_A, q_A\} \subset C$. For each subfamily $B$ of $2d$ sets of $\mathcal{F}$ whose intersection has $v$-width greater than or equal to one, let $A$ be its subfamily of size $2d-1$ with containment-maximal $H_A$. Then, Theorem \ref{theorem-v-width-basic-helly} implies that the intersection of $B \cup \{H_A\}$ has $v$-width at least one, so all the sets in $B$ contain $\{p_A, q_A\}$. Consider the function that assigns to each $2d$-tuple $B$ whose intersection has $v$-width at least one a $(2d-1)$-tuple $A$ as above. Since a positive fraction of the $2d$-tuples satisfy this property, a direct double counting argument shows that there is a $(2d-1)$-tuple $A_0$ which was assigned at least $\beta n$ times for some positive $\beta$ depending only on $d$ and $\alpha$. Thus, at least $\beta n$ sets contain $\{p_{A_0}, q_{A_0}\}$, as desired. \end{proof} With the results above, the methods in Section \ref{section-fractional} can be carried over verbatim to the $v$-width case to prove the following result. \begin{theorem}[$(p,q)$ theorem for $v$-width] Let $p \ge q \ge 2d$ be positive integers and $v$ a unit vector in $\mathds{R}^d$.
Then, there is a $c' = c'(p,q,d)$ such that for any finite family $\mathcal{F}$ of at least $p$ convex sets of $v$-width at least one each, if out of every $p$ sets in $\mathcal{F}$, there are $q$ of them whose intersection has $v$-width at least one, then we can find $c'$ convex sets $K_1, K_2, \ldots, K_{c'}$ of $v$-width at least one such that every set in $\mathcal{F}$ contains at least one $K_i$.\end{theorem} \section{Colourful Helly for the diameter}\label{section-colourful} \begin{theorem}\label{theorem-colourful-diameter} There is an $n'=n'\left(\diam, d, \delta\right)$ such that for any $n'$ finite families $\mathcal{F}_1, \mathcal{F}_2, \ldots, \mathcal{F}_{n'}$ of convex sets in $\mathds{R}^d$, considered as colour classes, if the intersection of every colourful choice $F_1 \in \mathcal{F}_1, \ldots, F_{n'} \in \mathcal{F}_{n'}$ has diameter at least one, then there is a colour class $\mathcal{F}_i$ with $\diam (\cap \mathcal{F}_i) \ge 1- \delta$. Moreover, $n'(\diam , d, \delta ) = \Theta_d \left(\delta^{-(d-1)/2}\right)$. \end{theorem} If $\mathcal{F}_1 = \ldots = \mathcal{F}_{n'}$, we obtain the monochromatic result, and the upper bound matches the one mentioned in the introduction. The equivalent result for the volume \cite[Thm. 1.5]{Soberon:2015tsa} has a worse bound $n'(\vol, d, \delta) = O_d(\delta^{-(d^2-1)/4})$. \begin{proof} Given a point $x \in S^{d-1}$, we denote by $C_{\delta}(x)$ the cap \[ C_{\delta}(x):= \left\{y \in S^{d-1}: \langle x, y \rangle \ge 1-{\delta} \right\}. \] We denote by $c_{\delta}$ its measure under the usual probability Haar measure of $S^{d-1}$. It is known that $c_{\delta} = \Omega (\delta^{(d-1)/2})$ {\cite[Lemma 2.3]{ball1997elementary}}. Let $m= \left\lfloor\frac{1}{c_{\delta / 4}}\right\rfloor$ and consider $n' = 2dm$. Assume that $\diam (\cap \mathcal{F}_i) < 1-\delta$ for all $i$. We look for a contradiction.
We can find $v_1, v_2, \ldots, v_{m}$ directions in $S^{d-1}$ such that for any $v \in S^{d-1}$, there is a $v_j$ with $\langle v, v_j \rangle \ge 1-\delta$. In order to see this, take a set $S$ of points in $S^{d-1}$ of maximal cardinality such that the caps $C_{\delta /4} (x)$ for $x \in S$ have pairwise disjoint interiors. By counting surface area one gets $|S| \le \lfloor{(c_{\delta / 4})^{-1}}\rfloor$. However, if there were a direction $v$ not satisfying the conditions, an elementary geometric argument shows that we would be able to include $v$ in $S$, contradicting its maximality. For each $v_j$, consider $2d$ colour classes associated to it. Since $\diam (\cap \mathcal{F}_i) < 1- \delta$ for all $i$, their $v_j$-widths are also smaller than $1-\delta$. Thus, by the colourful Helly theorem for $v$-width, there must be a rainbow choice of these $2d$ colours such that the $v_j$-width of its intersection is strictly smaller than $1-\delta$. Take $X$ to be the union of all these $2d$-tuples. Notice that the $v_j$-width of $\cap X$ is strictly smaller than $1-\delta$ for all $v_j$. Since $X$ contains a set from each colour class, we have $\lambda = \diam (\cap X) \ge 1$; let $v$ be a direction realising it. Thus, $\cap X$ contains a segment parallel to $v$ of length $\lambda$. Let $v_j$ be such that $\langle v_j, v \rangle \ge 1- \delta$. This implies that the $v_j$-width of $\cap X$ is at least $\lambda(1-\delta) \ge 1-\delta$, a contradiction. \end{proof} \section{Fractional and $(p,q)$ results for the diameter}\label{section-fractional} In order to prove Theorem \ref{theorem-p,q-diameter}, we need to recreate the results needed for the proof of the original $(p,q)$ theorem for the diameter. Simplified versions of Alon and Kleitman's method can be found in \cite{Alon:1996uf, Matousek:2002td}. There are two main ingredients needed. One is a fractional Helly theorem and the second is the existence of weak $\varepsilon$-nets of small size for convex sets. Their equivalents are Theorems \ref{theorem-fractional} and \ref{theorem-weak-nets}, described below.
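The spherical cap measure $c_\delta$, which drives the pigeonhole arguments above and below, is easy to sanity-check numerically. A Monte Carlo sketch (our own, assuming only the definition of the cap $C_\delta(x)$; for $d = 3$ the exact value is $c_\delta = \delta/2$ by Archimedes' hat-box theorem, in line with the $\delta^{(d-1)/2}$ scaling):

```python
import numpy as np

def cap_measure_mc(d, delta, n=200_000, seed=0):
    """Monte Carlo estimate of c_delta, the normalized surface measure of
    the cap C_delta(e_1) = {y in S^{d-1} : <e_1, y> >= 1 - delta}."""
    rng = np.random.default_rng(seed)
    y = rng.normal(size=(n, d))
    y /= np.linalg.norm(y, axis=1, keepdims=True)    # uniform points on S^{d-1}
    return float(np.mean(y[:, 0] >= 1 - delta))

# d = 3: the exact cap measure is delta/2 (uniform in height), so delta = 0.2
# should give about 0.1; d = 2: the exact value is arccos(1 - delta)/pi.
print(cap_measure_mc(3, 0.2), cap_measure_mc(2, 0.2))
```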
The structure of the proof we present here is the same. However, some definitions, such as the one for weak $\varepsilon$-net, must be adapted. In the case of volume, it is possible to recreate these results using properties of floating bodies \cite{Soberon:2015tsa}. Namely, given a convex set $K$ of volume one, and $\varepsilon >0$, we define its floating body $K_{\varepsilon}$ as \[ K_{\varepsilon} = K \setminus \cup \{H : H \ \mbox{is a halfspace}, \vol (H \cap K) \le \varepsilon \}. \] Estimates on $\vol (K_{\varepsilon})$ then allow the proofs to work \cite{Barany:2010cy}. For the diameter, there is no similar ``floating body''. However, pigeonhole arguments on the directions realising the diameter, as in section \ref{section-colourful}, are sufficient. Let us begin with a fractional Helly for the diameter in the same spirit as \cite{Katchalski:1979bq}. \begin{theorem}[Fractional Helly for the diameter]\label{theorem-fractional} Let $\alpha >0$, $1> \delta >0$ and $d$ a positive integer. Then, there is a positive constant $\beta$ depending only on $\alpha, d, \delta$ such that for any finite family $\mathcal{F}$ of $n$ convex sets in $\mathds{R}^d$ such that the intersection of at least $\alpha {{n}\choose{2d}}$ of the $2d$-tuples has diameter greater than or equal to one, there is a subfamily $\mathcal{F}'$ of $\mathcal{F}$ of cardinality at least $\beta n$ such that its intersection has diameter at least $1-\delta$. \end{theorem} The equivalent result for volume \cite[Thm. 1.4]{Soberon:2015tsa} has the disadvantage that the size of the subfamilies needed to check grows as $\delta$ decreases. Namely, it needs to check families of size $O(\delta^{-(d^2-1)/4})$, which is much worse than the requirements of the volumetric Helly theorem. Theorem \ref{theorem-fractional} is an example of a fractional Helly-type theorem which goes beyond its corresponding Helly theorem.
Such examples have appeared previously for set systems of bounded VC-dimension \cite{Matousek:2004cs}, for convexity on the integer lattice \cite{Anonymous:PHt9HPGF} or for checking the existence of hyperplane transversals in $\mathds{R}^d$ \cite{Alon:1995fs}. \begin{proof}[Proof of Theorem \ref{theorem-fractional}] Consider the usual probability Haar measure on $S^{d-1}$. For $y \in S^{d-1}$, let $C_\delta (y)$ be the set of points $x \in S^{d-1}$ such that $\langle x, y \rangle \ge 1-\delta$. Let $c_{\delta}$ be the measure of $C_{\delta} (y)$. A double counting argument shows that for any set $D$ of points in $S^{d-1}$, there must be a subset $D'$ of cardinality at least $c_{\delta} |D|$ and a point $v$ in $S^{d-1}$ such that $D' \subset C_{\delta} (v)$. For each $2d$-tuple $B \subset \mathcal{F}$ whose intersection has diameter at least one, consider a direction $v_B$ realising the diameter of $\cap B$. Each of these directions can be represented by an antipodal pair on $S^{d-1}$. Using the observation above, there must be a direction $v$ and a set of at least $2 \alpha c_{\delta}{{n}\choose{2d}}$ of the $2d$-tuples of $\mathcal{F}$ whose intersections have $v$-width greater than or equal to $1-\delta$. Applying Theorem \ref{theorem-helly-v-width} (rescaled by a factor of $1-\delta$), we are done. \end{proof} In order to get to the existence of weak $\varepsilon$-nets, we start by proving results showing that given a set of objects in $\mathds{R}^d$, there is a point $p$ that is ``sufficiently well surrounded'' by them. The first result of this type is Tverberg's theorem \cite{Tverberg:1966tb}, which says that given enough points in $\mathds{R}^d$, they can be split into $m$ parts such that the convex hulls of the parts intersect. A point in this intersection is in some sense ``very deep'' within the original set of points. In order to recreate this for the diameter, we need to work with sets with large diameter instead of points, giving the following statement.
\begin{theorem}[Tverberg's theorem for diameter]\label{theorem-tverberg-diameter} Let $d, m$ be positive integers, $1>\delta > 0$ and $n = \left\lfloor 4d^2(m-1)c_{\delta}^{-1}\right\rfloor+1$. Given a family $\mathcal{T} = \{T_1, T_2, \ldots, T_{n}\}$ of sets in $\mathds{R}^d$ of diameter greater than or equal to one each, there is a partition of them into $m$ parts $A_1, A_2, \ldots, A_m$ so that \[ \diam \left(\bigcap_{i=1}^m \conv(\cup A_i)\right) \ge 1- \delta. \] \end{theorem} \begin{proof} By a double counting as before, there is a subfamily $\mathcal{T}' \subset \mathcal{T}$ of cardinality greater than or equal to $4d^2(m-1)+1$ and a direction $v$ such that the $v$-width of every member of $\mathcal{T}'$ is at least $1-\delta$. Discarding elements if necessary, we may assume that $\mathcal{T}'$ has cardinality exactly $4d^2(m-1)+1$. Now consider \[ \mathcal{F} = \{\conv (\cup \mathcal{G}): \mathcal{G} \subset \mathcal{T}', |\mathcal{G}| = (m-1)(2d-1)2d+1 \}. \] Notice that every family forming an element of $\mathcal{F}$ is missing at most $2d(m-1)$ members of $\mathcal{T}'$. Thus, the intersection of every $2d$ of them contains some $T_i \in \mathcal{T}'$, which in turn implies that the $v$-width of their intersection is at least $1-\delta$. Thus, by Theorem \ref{theorem-v-width-basic-helly}, the $v$-width of $\cap \mathcal{F}$ is at least $1-\delta$. Take two points $x, y \in \cap \mathcal{F}$ that realise its $v$-width. Every half-space containing either of them has non-empty intersection with at least $2d(m-1)+1$ sets of $\mathcal{T}'$. Otherwise, it would contradict the fact that they are contained in $\cap \mathcal{F}$. Thus, $x,y$ are contained in the convex hull of $\cup \mathcal{T}'$. By the colourful Carath\'eodory theorem for two points (see Section \ref{section-width}) with $S_i = \cup \mathcal{T}'$ for $1 \le i \le 2d$, the set $\{x,y\}$ is contained in the convex hull of $2d$ points of $\cup \mathcal{T}'$.
If we remove the sets $T_i$ that generated these points and set them aside in a set called $A_1$, we have that every half-space containing either of $x,y$ has non-empty intersection with at least $2d(m-2)+1$ sets of what is left of $\mathcal{T}'$. Thus, we can continue this process and generate the desired sets $A_1, A_2, \ldots, A_m$ inductively.
\end{proof}

Using Tverberg's theorem and the colourful Carath\'eodory theorem, one can prove B\'ar\'any's selection theorem \cite{Barany:1982va}, also called the ``first selection lemma'' in \cite{Matousek:2002td}. It says that, given a finite set $S$ of points in $\mathds{R}^d$, there is a point $p$ in a positive fraction of the simplices spanned by $S$. This result holds in much more general settings, by either replacing the word ``simplex'' by the image of a different operator or requiring additional properties on the simplices containing the point \cite{Gromov:2010eb, Karasev:2012bj, Pach:1998vx, Magazinov:2015td}. For our purposes we only need a diameter version of the original result by B\'ar\'any.

\begin{theorem}[Selection lemma for diameter]\label{theorem-selection-diameter}
Let $d$ be a positive integer and $1 > \delta > 0$. There is a constant $\lambda = \lambda (\delta, d)$ such that for any finite family $\mathcal{F}$ of convex sets in $\mathds{R}^d$ of diameter one each, there is a set $K$ of diameter at least $1-\delta$ such that $K \subset \conv (\cup A)$ for at least $\lambda {{|\mathcal{F}|}\choose{2d}}$ subsets $A \subset \mathcal{F}$ of cardinality $2d$. Moreover, $\lambda = \Omega_d( \delta ^{d(d-1)})$.
\end{theorem}

\begin{proof}
First, there is a subfamily $\mathcal{F}' \subset \mathcal{F}$ of cardinality at least $c_{\delta} |\mathcal{F}|$ and a direction $v$ such that the $v$-width of every member of $\mathcal{F}'$ is at least $1-\delta$.
By the second part of the proof of Theorem \ref{theorem-tverberg-diameter}, there is a partition of $\mathcal{F}'$ into at least $\frac{|\mathcal{F}'|-1}{4d^2}+1$ parts such that the intersection of the convex hulls of the unions of the parts contains a set $K$ of $v$-width at least $1-\delta$. Moreover, we may assume that $K$ is the convex hull of two points. Colour each part by a different colour. By the colourful Carath\'eodory theorem for two points (see section \ref{section-width}), for each $2d$-tuple of colours, there is a heterochromatic set whose convex hull contains $K$. Thus, up to constants depending on the dimension, there are at least
\[
{{\frac{|\mathcal{F}'|-1}{4d^2}+1}\choose{2d}} \sim (4d)^{-2d}{{c_{\delta}|\mathcal{F}|}\choose{2d}} \sim c_{\delta}^{2d}(4d)^{-2d}{{|\mathcal{F}|}\choose{2d}}
\]
subsets $A \subset \mathcal{F}$ of cardinality $2d$ such that $K \subset \conv(\cup A)$.
\end{proof}

The final ingredient needed is the existence of weak $\varepsilon$-nets for the diameter. The original result aims to find, for a given $S \subset \mathds{R}^d$, a set $T$ whose cardinality depends only on $\varepsilon$ and $d$ that intersects the convex hull of every subset of $S$ of cardinality at least $\varepsilon|S|$ \cite{Alon:2008ek}. For our purposes we need both $S$ and $T$ to be families of sets with large diameter.

\begin{theorem}[Weak $\varepsilon$-nets for diameter]\label{theorem-weak-nets}
Let $d$ be a positive integer, $1 > \delta > 0$, $1 > \varepsilon > 0$. Then, there is a constant $m = m(d, \delta, \varepsilon)$ such that for any finite family $\mathcal{F}$ of sets of diameter one each in $\mathds{R}^d$, there are $m$ sets $K_1, K_2, \ldots, K_m$ of diameter at least $1-\delta$ each such that for any subfamily $\mathcal{F}' \subset \mathcal{F}$ with $|\mathcal{F}'| \ge \varepsilon|\mathcal{F}|$, there is an $i$ satisfying $K_i \subset \conv (\cup \mathcal{F}')$. Moreover, $m(d, \delta, \varepsilon) = O_d (\varepsilon^{-2d}\cdot\delta^{-d(d-1)})$.
\end{theorem}

\begin{proof}
We construct the set $\mathcal{K}=\{K_1, \ldots, K_m\}$ inductively, starting from the empty family. Let $T$ be the number of $2d$-tuples $A \subset \mathcal{F}$ such that $\conv (\cup A)$ contains no set in $\mathcal{K}$. If there is a subfamily $\mathcal{F}'$ with $|\mathcal{F}'| \ge \varepsilon|\mathcal{F}|$ such that $\conv (\cup \mathcal{F}')$ does not contain any set of $\mathcal{K}$, we can apply Theorem \ref{theorem-selection-diameter} to $\mathcal{F}'$. Thus, we can find a set $K$ contained in the convex hull of the union of at least $\lambda {{|\mathcal{F}'|}\choose{2d}} \sim \lambda \varepsilon^{2d} {{|\mathcal{F}|}\choose{2d}}$ different subsets $A \subset \mathcal{F}$ of cardinality $2d$, so adding $K$ to $\mathcal{K}$ reduces $T$ by at least that number. Since initially $T \le {{|\mathcal{F}|}\choose{2d}}$, the process cannot be repeated more than $O_d\left((\lambda \varepsilon^{2d})^{-1}\right)$ times, as desired.
\end{proof}

We call $\mathcal{K}$ a diameter weak $\varepsilon$-net for the pair $(\mathcal{F} , \delta)$. At this point we have all the ingredients needed to prove Theorem \ref{theorem-p,q-diameter}. The proof of this theorem relies on the linear programming technique of Alon and Kleitman. For this, we need the following definitions. We consider $\mathcal{C}_{d, \delta} = \{F \subset \mathds{R}^d: \diam (F) \ge 1- \delta, \ F \ \mbox{is convex}\}$.
Then, given a finite family of convex sets $\mathcal{F}$ in $\mathds{R}^d$, we define
\begin{itemize}
\item the diameter transversal number $\tau_{\delta} (\mathcal{F})$ as the minimum $\sum_{C \in \mathcal{C}_{d,\delta}} w(C)$ over all functions $w:\mathcal{C}_{d,\delta} \to \{0,1\}$ such that
\[
\sum_{{C:C \subset F, \ C \in \mathcal{C}_{d,\delta}}} w(C) \ge 1
\]
for all $F \in \mathcal{F}$,
\item the fractional diameter transversal number $\tau^*_{\delta} (\mathcal{F})$ as the minimum $\sum_{C \in \mathcal{C}_{d,\delta}} w(C)$ over all functions $w:\mathcal{C}_{d,\delta} \to [0,1]$ such that
\[
\sum_{C: C \subset F, \ C \in \mathcal{C}_{d,\delta}} w(C) \ge 1
\]
for all $F \in \mathcal{F}$, and
\item the fractional diameter packing number $\nu^*_{\delta} (\mathcal{F})$ as the maximum $\sum_{F \in \mathcal{F}} w(F)$ over all functions $w: \mathcal{F} \to [0,1]$ such that
\[
\sum_{F: C \subset F, \ F \in \mathcal{F}} w(F) \le 1
\]
for all $C \in \mathcal{C}_{d,\delta}$.
\item Given two finite families $\mathcal{F}$, ${\mathcal T}$ of convex sets in $\mathds{R}^d$, we say ${\mathcal T}$ is a $(1-\delta)$-diameter transversal for $\mathcal{F}$ if every set in ${\mathcal T}$ has diameter at least $1- \delta $ and every set in $\mathcal{F}$ contains at least one set in ${\mathcal T}$. Note that if $w:\mathcal{C}_{d,\delta} \to \{0,1\}$ is a function satisfying the condition of $\tau_{\delta}(\mathcal{F})$, then the family $\{K\in \mathcal{C}_{d, \delta}: w(K)=1\}$ is a $(1-\delta)$-diameter transversal of $\mathcal{F}$.
\item We refer to the conditions of Theorem \ref{theorem-p,q-diameter} as the diameter $(p,q)$ condition.
\end{itemize}

\begin{lemma}\label{lemma-one}
Let $\mathcal{F}$ be a finite family of convex sets in $\mathds{R}^d$, all with diameter at least one. Then, $\tau_{\delta} (\mathcal{F})$ is bounded by a function depending only on $\tau^*_{\delta/2}(\mathcal{F})$, $d$ and $\delta$.
\end{lemma}

\begin{proof}
Consider a function $w:\mathcal{C}_{d, \delta/2} \to [0,1]$ which realises $\tau^{*}_{\delta/2}$. Without loss of generality, we may assume that $w$ has finite support and takes only rational values. Let $M$ be the common denominator of the values $w(K)$ for $K \in \mathcal{C}_{d, \delta/2}$. Let ${\mathcal T}$ be the family formed by the disjoint union of $M\cdot w(K)$ copies of $K$, for each $K \in \mathcal{C}_{d, \delta/2}$. Now let $\mathcal{K}$ be a diameter weak $\left(\frac{1}{\tau^*_{\delta/2}(\mathcal{F})}\right)$-net for the pair $({\mathcal T}, \delta/2)$, as in Theorem \ref{theorem-weak-nets}. Notice that the diameter of every member of $\mathcal{K}$ is at least $(1-\frac{\delta}{2})^2 \ge 1-\delta$.

By the definition of $\tau^*_{\delta/2}$, for each $F \in \mathcal{F}$, the number of copies of sets in ${\mathcal T}$ which are contained in $F$ is at least $(\tau^*_{\delta/2} (\mathcal{F}))^{-1} |{\mathcal T}|$. Thus, there is an element of $\mathcal{K}$ contained in $F$. In other words, $\mathcal{K}$ is a $(1-\delta)$-diameter transversal to $\mathcal{F}$, so $\tau_{\delta} (\mathcal{F}) \le |\mathcal{K}|$, which in turn is bounded by a function of $\tau^*_{\delta/2} (\mathcal{F})$, $d$ and $\delta$ only.
\end{proof}

\begin{lemma}\label{lemma-two}
If $p \ge q \ge 2d$ and $\mathcal{F}$ is a finite family of convex sets with the diameter $(p,q)$ condition, then $\nu^*_{\delta/2} (\mathcal{F})$ is bounded by a function that depends only on $p,q,d,\delta$.
\end{lemma}

\begin{proof}
Let $w: \mathcal{F} \to [0,1]$ be a function that realises $\nu^*_{\delta/2} (\mathcal{F})$. We may assume without loss of generality that $w(C)$ is rational for all $C \in \mathcal{F}$. Write $w(C) = \frac{n_C}{m}$, where $m$ is the common denominator of the values $w(C)$ for $C \in \mathcal{F}$. Let $\mathcal{F}'$ be the family consisting of $n_C$ copies of $C$ for each $C \in \mathcal{F}$ and let $N= |\mathcal{F}'|$.
Note that $\frac{N}{m}= \sum_{C \in \mathcal{F}} \frac{n_C}{m} = \nu^*_{\delta/2}(\mathcal{F})$. The family $\mathcal{F}'$ satisfies the diameter $((q-1)(p-1)+1,q)$ condition. This follows immediately from the fact that every $[(q-1)(p-1)+1]$-tuple from $\mathcal{F}'$ contains either $q$ copies of the same set or $p$ different sets of $\mathcal{F}$; in either case we have a $q$-tuple whose intersection has diameter at least one. Since $q\ge 2d$, this implies that a positive fraction of the $2d$-tuples of $\mathcal{F}'$ have intersection of diameter at least one. Theorem \ref{theorem-fractional} then implies that there is a positive fraction $\beta$, depending only on $p,q,d,\delta$, such that there is a set $K_0$ of diameter at least $1-\frac{\delta}{2}$ contained in the intersection of at least $\beta N$ sets of $\mathcal{F}'$. Thus
\[
1 \ge \sum_{C \in \mathcal{F}: K_0 \subset C}w(C) = \sum_{C \in \mathcal{F}: K_0 \subset C} \frac{n_C}{m} \ge \frac{1}{m} \cdot \beta N = \beta \nu^*_{\delta/2}(\mathcal{F}).
\]
This implies $\nu^*_{\delta/2} (\mathcal{F}) \le \frac{1}{\beta}$, as desired.
\end{proof}

\begin{proof}[Proof of Theorem \ref{theorem-p,q-diameter}]
As in the Alon-Kleitman proof of the $(p,q)$ theorem, linear programming duality implies that $\nu_{\delta/2}^*(\mathcal{F}) = \tau^*_{\delta/2}(\mathcal{F})$. Thus, Lemmata \ref{lemma-one} and \ref{lemma-two} finish the proof.
\end{proof}

\section{Remarks and open problems}\label{section-remarks}

The Helly-type results we obtain for the diameter improve upon their volumetric analogues. However, we see no reason why they should not hold with the same strength in that setting. Corollary \ref{corollary-chido} seems like a good result to test for that purpose.
\begin{question}
Is there a constant $r(d, \delta)$ such that any finite family $\mathcal{F}$ of convex sets in $\mathds{R}^d$, in which the intersection of every $2d$ sets has volume at least one, can be partitioned into $r(d, \delta)$ parts so that the volume of the intersection of each part is at least $1-\delta$?
\end{question}

Let us construct an example to show that the diameter loss $\delta$ is needed in Corollary \ref{corollary-chido}, Theorem \ref{theorem-p,q-diameter} and Theorem \ref{theorem-fractional}.

\begin{claim}
For any $k$, there is a family $\mathcal{F}$ of $2dk+1$ convex sets in $\mathds{R}^d$ such that the intersection of any $2d$ of them has diameter at least one and for any partition of $\mathcal{F}$ into $k$ parts, there is one whose intersection has diameter strictly smaller than one.
\end{claim}

\begin{proof}
Let $n={{2kd+1}\choose{2d}}$, and for each $2d$-tuple $A \subset \{1,2,\ldots, 2kd+1\}$, let $v_A$ be a pair of antipodal points in $\frac12S^{d-1}$. For each $i \in \{1,2,\ldots, 2kd+1\}$, let
\[
K_i = \conv \left\{v_A: A \in {{[2kd+1]}\choose{2d}}, \ i \in A \right\}.
\]
For any $2d$-tuple of these sets, by construction their intersection contains the corresponding $v_A$, so its diameter is at least one. Given a partition of $\{K_i : i\in [2kd+1]\}$ into $k$ parts, there must be one, say $P$, of cardinality at least $2d+1$. For any $A \in {{[2kd+1]}\choose{2d}}$, there is a $K_i \in P$ for which $i \not\in A$, so $v_A \not\in \conv (\cap P)$. Thus, $\cap P$ is contained in the interior of $\conv (\frac12S^{d-1})$ and is closed, so its diameter is strictly less than one. If one wants sharper estimates, we can choose the antipodal points $v_A$ so that the circular caps $C_{\delta} (v_A)$ are pairwise disjoint for some $\delta$, similarly to the argument of Theorem \ref{theorem-colourful-diameter}.
\end{proof}

The original conjecture by B\'ar\'any, Katchalski and Pach is still open, so we state it again to bring more attention to it.
\begin{conjecture}[B\'ar\'any, Katchalski, Pach \cite{Barany:1982ga}]
Let $\mathcal{F}$ be a finite family of convex sets such that the intersection of every $2d$ of them has diameter at least one. Then, $\diam (\cap \mathcal{F}) \ge c d^{-1/2}$ for some absolute constant $c$.
\end{conjecture}

For the conjecture above, the best known lower bound on $\diam (\cap \mathcal{F})$ is of order $d^{-2d}$ \cite{Barany:1982ga}.

\bibliographystyle{amsalpha}
https://arxiv.org/abs/2209.10045
New Lower Bounds for Cap Sets
A cap set is a subset of $\mathbb{F}_3^n$ with no solutions to $x+y+z=0$ other than when $x=y=z$. In this paper, we provide a new lower bound on the size of a maximal cap set. Building on a construction of Edel, we use improved computational methods and new theoretical ideas to show that, for large enough $n$, there is always a cap set in $\mathbb{F}_3^n$ of size at least $2.218^n$.
\section{Introduction}

\begin{definition}\label{def:cap}
A \emph{cap set} is a set $A \subseteq \mathbb{F}_3^n$ with no solutions to $x+y+z=0$ other than when $x=y=z$, or equivalently $A$ has no 3 distinct elements in arithmetic progression.
\end{definition}

In this paper, we prove the following result.

\begin{theorem} \label{main}
There is a cap set in $\mathbb{F}_3^{56232}$ of size
\[\binom{11}{7}^{141} \cdot 6^{572} \cdot 12^{572} \cdot 112^{8800} \cdot 37 \cdot 142\]
and hence, for large $n$, there is a cap set $A \subseteq \mathbb{F}_3^n$ of size $(2.218021\ldots)^n$.
\end{theorem}

Since \cite{Pellegrino} in 1970, there have been 2 improvements to the lower bound on the size of a maximal cap set. A lower bound of $(2.210147\ldots)^n$ was given in \cite{CalderbankFishburn}, and then Edel gave an improved lower bound of $(2.217389\ldots)^n$ in \cite{Edel}. In this paper, we obtain the first new lower bound in nearly 2 decades, and prove that a maximal cap set has size at least $(2.218021\ldots)^n$. Our method is based on that of Edel, which in turn is based on the work of Calderbank and Fishburn, and involves taking cap sets which are known to be maximal in a relatively low dimension and then combining them carefully to produce large cap sets in higher dimensions.
\bigbreak
The upper bound, by contrast, has received significant attention. In \cite{Meshulam}, an upper bound of $\frac{3^n}{n}$ was shown, which was then improved to $\frac{3^n}{n^{1+\varepsilon}}$ for some $\varepsilon > 0$ in \cite{BK}. It was an open problem for over 20 years whether a cap set in $\mathbb{F}_3^n$ has size at most $c^n$ for some $c<3$. This was finally solved by Ellenberg and Gijswijt in \cite{EllenbergGijswijt}, who used the polynomial method developed by Croot, Lev and Pach in \cite{CLP} to show that a cap set in $\mathbb{F}_3^n$ has size at most $(2.7552\ldots)^n$.
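The defining condition of Definition \ref{def:cap} is easy to check by brute force in small dimensions. The following sketch (our own illustrative code with hypothetical names, not taken from any implementation cited in this paper) verifies the cap set property directly, and confirms that $\{0,1\}^n$ is a cap set while adjoining a vector containing a 2 can destroy the property.

```python
from itertools import product

def is_cap_set(A):
    """Check Definition 1.1 directly: A, a collection of tuples over
    {0,1,2}, is a cap set iff x + y + z = 0 (coordinatewise mod 3)
    has no solution with x, y, z not all equal."""
    S = set(A)
    for x in S:
        for y in S:
            if x == y:
                continue
            # z is forced by x and y: z = -(x + y) mod 3
            z = tuple((-a - b) % 3 for a, b in zip(x, y))
            # over F_3, any solution with x != y is automatically
            # non-trivial, so such a z in S violates the definition
            if z in S:
                return False
    return True

# {0,1}^n is a cap set of size 2^n, giving the basic lower bound
n = 4
A = list(product([0, 1], repeat=n))
print(is_cap_set(A))                    # True
print(is_cap_set(A + [(2, 0, 0, 0)]))   # False: (2000)+(1000)+(0000) = 0
```

Exhaustive checks of this kind become infeasible quickly, which is one reason the constructions below build large cap sets out of small, verified pieces.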
A symmetric version of the polynomial method proof for the upper bound has since been formulated by Tao in \cite{tao_2016}, and Ellenberg and Gijswijt's result was formalised in the Lean theorem prover in \cite{dahmen2019formalizing}. The upper bound has since been improved by an extra factor of $\sqrt{n}$ in \cite{jiang2021improved}. \bigbreak One reason for the interest in cap sets is that they can provide useful insights into analogous problems in more complicated sets. For example, the current best upper bound for Roth's theorem due to Bloom and Sisask in \cite{bloom2020}, which asks how large a subset of integers can be without containing a 3 term arithmetic progression, employs several of the methods which were used in the upper bound for the cap set problem given by Bateman and Katz in \cite{BK}. \smallbreak Cap sets are also of interest in finite geometry, design of experiments and various other problems in combinatorics and number theory. For an excellent survey on the motivation and background of the problem, and its application to several other interesting questions, we recommend the article \cite{grochow2019new}. \subsection*{Structure of the paper} In section 2, we present the extended product construction of Edel, and combine it with improved computational techniques to obtain some new lower bounds. We then introduce a new construction in section 3, which takes the extended product from section 2 up a level, and use this to achieve the best lower bound in this paper. In section 4, we discuss possible ideas for future work, as well as the limitations of our approach. Finally, we describe our computational methods, which make use of a SAT solver, in section 5. \section{The Extended Product Construction} In this section, we describe a construction due to Edel in \cite{Edel}, which extends the ideas from \cite{CalderbankFishburn} to produce larger cap sets. 
We use Edel's method to construct a cap set in 396 dimensions which gives a lower bound of $(2.217981\ldots)^n$, already an improvement on Edel's lower bound of $(2.217389\ldots)^n$. We first describe a relatively simple construction. \begin{proposition}[Product Caps] \label{def:ProdCap} Let $A \subseteq \mathbb{F}_3^n$, $B \subseteq \mathbb{F}_3^m$ be cap sets. Then, by taking a direct product of $A$ and $B$, there is a cap set of size $|A||B|$ in $\mathbb{F}_3^{n+m}$. \end{proposition} \begin{proof} Define $A \times B = \{(a,b) : a \in A, b \in B\}$. Clearly $|A \times B| = |A||B|$, and we show that $A \times B$ is a cap set. \smallbreak Assume we have a solution to $x+y+z = 0$ in $A \times B$. So $x = (x_a, x_b)$, $y = (y_a, y_b)$ and $z = (z_a, z_b)$ where $x_a, y_a, z_a \in A$ and $x_b, y_b, z_b \in B$. Therefore, $x_a + y_a + z_a = 0 = x_b + y_b + z_b$. Since $A,B$ are both cap sets, we must have $x_a = y_a = z_a$ and $x_b = y_b = z_b$, so $x=y=z$. Hence $A \times B$ has no non trivial solutions to $x+y+z = 0$, and is therefore a cap set. \end{proof} \smallbreak The above proposition shows that there is always a cap set in $\mathbb{F}_3^n$ of size $2^n$, by taking direct products of the set $\{0,1\} \subseteq \mathbb{F}_3$, to form the cap set $\{0,1\}^n \subseteq \mathbb{F}_3^n$. We now use the direct product construction to show how we can derive an asymptotic lower bound for the cap set problem. \begin{proposition} \label{prop:lower bound asymptotic} Let $A \subseteq \mathbb{F}_3^n$ be a cap set of size $c^n$. Then for any $\varepsilon > 0$, there is an $M$ such that for all $m \geq M$, there is a cap set of size greater than $\left(c - \varepsilon\right)^{m}$ in $\mathbb{F}_3^m$. \end{proposition} \begin{proof} Let $A \subseteq \mathbb{F}_3^n$ be a cap set of size $c^n$. For $m > n$, there is $k$ such that $m = nk + r$, where $0 \leq r<n$. 
Applying the product construction $k$ times to $A$ and appending $r$ zero coordinates, we obtain a cap set in $\mathbb{F}_3^m$ of size $c^{nk} = \left(c^{1-r/m}\right)^m$.
\smallbreak
Given $\varepsilon>0$, we can always choose a large enough $M$ such that $c^{1-n/M} > c - \varepsilon$. Since $r<n$, for all $m \geq M$ we have $\frac{r}{m}< \frac{n}{M}$, and hence $c^{1-r/m} > c - \varepsilon$. Therefore, we have constructed a cap set in $\mathbb{F}_3^m$ of size greater than $(c - \varepsilon)^m$.
\end{proof}

\begin{remark}
This proposition demonstrates that finding an asymptotic lower bound for the cap set problem amounts to finding a cap set $A \subseteq \mathbb{F}_3^n$ where $|A|^{1/n}$ is as large as possible. If $A \subseteq \mathbb{F}_3^n$ and $B \subseteq \mathbb{F}_3^m$ are cap sets, we can use the previous proposition to achieve a lower bound of $|A|^{1/n}$ and $|B|^{1/m}$ respectively. We know we can form the direct product $A \times B$, but since $|A \times B|^{\frac{1}{n+m}} \leq \text{max}\left(|A|^{\frac{1}{n}},|B|^{\frac{1}{m}}\right)$, the bound from $A \times B$ will never beat the better of the bounds from $A$ and $B$.
\end{remark}

In \cite{Edel}, Edel introduced a new construction based on the work in \cite{CalderbankFishburn}, which one can think of as a sort of twisted product. The idea is simple: if we start with a collection of cap sets, and construct several different direct products, can we take the union of the direct products and still have a cap set? The answer is yes, under certain conditions on the cap sets and the way we combine them.

\begin{definition}[Extendable collection] \label{def:extendable}
Let $A_0, A_1, A_2 \subseteq \mathbb{F}_{3}^{n}$ be cap sets. We say that this collection of cap sets is \emph{extendable} if the following 2 conditions hold:
\begin{enumerate}
\item If $x,y\in A_0$ and $z\in A_1\cup A_2$ then $x+y+z \neq 0$.
\item If $x\in A_0$, $y\in A_1$ and $z\in A_2$ then $x+y+z\neq 0$.
\end{enumerate} Note that taking $x=y$ in condition (1) shows that $A_0$ is disjoint from $A_1$ and $A_2$. \end{definition} \smallbreak \begin{definition}[Admissible set\footnote{To avoid confusion, we note that our terminology and definitions are not the same as those in \cite{Edel}. In particular, the object which Edel calls a `cap' we call a `cap set', and Edel's `capset' is our `admissible set'.}] \label{def:admissible} Let $S \subseteq \{ 0,1,2\}^m$. $S$ is \emph{admissible} if: \begin{enumerate} \item For all distinct $s, s' \in S$, there are coordinates $i$ and $j$ such that $s_i = 0 \neq s_i'$ and $s_j \neq 0 = s_j'$. \item For all distinct $s, s', s'' \in S$, there is a coordinate $k$ such that $\{s_k, s_k', s_k''\} = \{0, 1, 2\}$, $\{0, 0, 1\}$ or $\{0, 0, 2\}$. \end{enumerate} \end{definition} \begin{definition}[Extended product construction] \label{def:extended product} As the name suggests, we can extend an extendable collection of cap sets by an admissible set. The construction is as follows: for $s = (s_1, \ldots, s_m) \in \{0,1,2\}^m$ and $A_0, A_1, A_2 \subseteq \mathbb{F}_{3}^{n}$, we define \[s(A_0, A_1, A_2) = A_{s_1} \times \cdots \times A_{s_m} \subseteq \mathbb{F}_{3}^{nm}.\] If $S \subseteq \{0,1,2\}^{m}$ is an admissible set, we define \[S(A_0, A_1, A_2) = \bigcup_{s \in S} \ s\left(A_0, A_1, A_2\right) \subseteq \mathbb{F}_{3}^{nm}.\] \end{definition} The following lemma, which although rather different in presentation is essentially Lemma 10 of \cite{Edel}, demonstrates the usefulness of these definitions. \begin{lemma} \label{lemma:extended product cap} If $(A_0, A_1, A_2)$ is an extendable collection of cap sets in $\mathbb{F}_{3}^{n}$, and $S \subseteq \{ 0,1,2\}^m$ is an admissible set, then $S(A_0, A_1, A_2)$ is a cap set in $\mathbb{F}_{3}^{nm}$. \end{lemma} \begin{proof} We want to prove that \[\bigcup_{s \in S} s(A_0, A_1, A_2) \] is a cap set, where $s(A_0,A_1, A_2)= A_{s_1} \times \cdots \times A_{s_{m}}$. 
\smallbreak
Suppose we have distinct $x,y,z \in S(A_0, A_1, A_2)$ such that $x+y+z=0$. We have 3 cases, depending on where $x,y,z$ come from.
\smallbreak
\textbf{Case 1:} Suppose $x,y,z$ all lie in the same $s(A_0,A_1, A_2)$. By the direct product construction, we know that each $s(A_0,A_1, A_2)$ is a cap set, so there are no distinct $x,y,z \in s(A_0,A_1, A_2)$ such that $x+y+z=0$.
\smallbreak
\textbf{Case 2:} Suppose we have $x,y\in s(A_0,A_1, A_2)$ and $z \in s'(A_0,A_1, A_2)$, where $s \neq s'$. So $x=(x_{s_1},\ldots,x_{s_m})$, $y=(y_{s_1},\ldots,y_{s_m})$ and $z=(z_{s_1'},\ldots,z_{s_m'})$. By property (1) of being admissible, there is some coordinate $j$ in which $s_j=0$ and $s_j'\neq 0$. So $x_{s_j}+y_{s_j}+z_{s'_j}=0$ where $x_{s_j},y_{s_j}\in A_0$ and $z_{s'_j} \in A_1 \cup A_2$, contradicting property (1) of extendable.
\smallbreak
\textbf{Case 3:} Suppose $x,y,z$ come from the distinct vectors $s,s',s''$. By condition (2) of admissible, there is a coordinate $k$ such that $\{s_k, s_k', s_k''\}$ is $\{0, 1, 2\}$, $\{0, 0, 1\}$ or $\{0, 0, 2\}$. If $\{s_k,s_k',s_k''\}=\{0,0,1\}$ or $\{0,0,2\}$, then we have a contradiction of property (1) of extendable, as above. Otherwise, if $\{s_k,s_k',s_k''\}=\{0,1,2\}$, we have a contradiction of property (2) of extendable.
\end{proof}

We will now discuss some important types of admissible sets, which will be used in our later constructions.

\begin{definition}[Recursively admissible set\footnote{Again, note that our terminology is different to Edel's. In \cite{Edel}, these objects are simply called `admissible sets'.}] \label{def:recursive}
$S$ is a \emph{recursively admissible} set if $S$ is an admissible set, $|S| \geq 2$ and for all distinct pairs $s, s' \in S$ at least one of the following holds:
\begin{enumerate}[label=(\roman*)]
\item There are coordinates $i, j$ such that $\{s_i, s_i'\} = \{0, 1\}$ and $\{s_j, s_j'\} = \{0, 2\}$.
\item There is a coordinate $k$ such that $s_k = s_k' = 0$.
\end{enumerate}
\end{definition}

Given the name `recursively admissible', the reader may not be too surprised by the flavour of the following lemma.

\begin{lemma} \label{lemma:recursive}
If $(A_0, A_1, A_2)$ is an extendable collection of cap sets, and $S \subseteq \{0,1,2\}^m$ is a recursively admissible set, then $\left(S\left(A_0, A_1, A_2\right), A_1^m, A_2^m\right)$ is an extendable collection of cap sets.
\end{lemma}

\begin{proof}
$S(A_0, A_1, A_2)$ is a cap set by Lemma \ref{lemma:extended product cap}, and takes the place of $A_0$ from the definition of extendable. First of all, we show that there are no $x,y\in S(A_0, A_1, A_2)$ and $z\in A_1^m\cup A_2^m$ such that $x+y+z=0$.
\bigbreak
Assume $x,y \in S(A_0, A_1, A_2)$ and $z \in A_1^m \cup A_2^m$. If $x,y \in s(A_0, A_1, A_2)$, then since $|S|\geq 2$, property (1) of admissible (applied to $s$ and any other element of $S$) shows that $s$ has at least one coordinate 0. Let $s_k= 0$, and let the corresponding blocks of $x,y,z$ be $x_k, y_k, z_k$ respectively. Then $x_k, y_k \in A_0$ and $z_k \in A_1 \cup A_2$, so by property (1) of $(A_0, A_1, A_2)$ being extendable, $x_k + y_k + z_k \neq 0$.
\smallbreak
Now assume $x \in s(A_0, A_1, A_2)$ and $y \in s'(A_0, A_1, A_2)$ with $s \neq s'$. Since $S$ is recursively admissible, either there is a coordinate $k$ such that $s_k = s'_k = 0$ or there are coordinates $j,k$ such that $\{s_j, s'_j\} = \{0,1\}$ and $\{s_k, s'_k\} = \{0,2\}$.
\smallbreak
In the first case, where $s_k = s'_k = 0$, we have $x_k, y_k \in A_0$ and $z_k \in A_1 \cup A_2$, so by the same reasoning as above $x_k + y_k + z_k \neq 0$ by property (1) of extendable.
\smallbreak
In the case where $\{s_j, s'_j\} = \{0,1\}$ and $\{s_k, s'_k\} = \{0,2\}$, either $z_j, z_k \in A_1$ or $z_j, z_k \in A_2$. Assume first that $z_j, z_k \in A_2$ and $s_j =1$. Then $x_j \in A_1$, $y_j \in A_0$ and $z_j \in A_2$. By property (2) of extendable, we deduce that $x_j + y_j + z_j \neq 0$.
\smallbreak
Similarly, if $z_j, z_k \in A_2$ and $s'_j = 1$, then $x_j \in A_0$, $y_j \in A_1$ and $z_j \in A_2$, so $x_j+y_j+z_j \neq 0$ by property (2) of extendable again. Finally, if $z_j, z_k \in A_1$, then either $s_k = 0$ and $s'_k = 2$, or $s_k = 2$ and $s'_k = 0$, and once again we use property (2) of extendable to show that $x_k + y_k + z_k \neq 0$.
\smallbreak
Hence, we can never have $x,y\in S(A_0, A_1, A_2)$ and $z\in A_1^m\cup A_2^m$ such that $x+y+z=0$, so condition (1) of extendable holds.
\bigbreak
Now we want to show that if $x\in S(A_0, A_1, A_2)$, $y\in A_1^m$ and $z\in A_2^m$ then $x+y+z\neq 0$. This is relatively straightforward: by the same reasoning as above, $s$ has a coordinate $k$ where $s_k = 0$. So, $x_k \in A_0$, $y_k \in A_1$ and $z_k \in A_2$. Then, by property (2) of extendable, $x_k + y_k + z_k \neq 0$, so $x + y + z \neq 0$.
\smallbreak
So we have condition (2) for extendable, and hence $\left(S\left(A_0, A_1, A_2\right), A_1^m, A_2^m\right)$ is an extendable collection of cap sets, as required.
\end{proof}

In addition to recursively admissible sets, we will also be making use of admissible sets where every element has the same weight. By `weight', we mean the number of non zero entries in a vector. We will also talk about the `support' of a vector, meaning the set of non zero coordinates.

\begin{definition}[Constant weight admissible sets]
Write $S = I(m,w)$ if $S \subseteq \{0,1,2\}^{m}$ is an admissible set consisting of $\binom{m}{w}$ vectors, each of weight $w$. If in addition $S$ is recursively admissible, write $S = \tilde{I}(m,w)$.\footnote{Once again, we mention the difference between our notation and that of Edel. In \cite{Edel}, the second parameter is not the weight $w$ of each vector, but the number of zeroes $t$. Note that $t = m -w$, and so $\binom{m}{w}$ = $\binom{m}{t}$.
It is also worth pointing out that the meanings of $I$ and $\tilde{I}$ are swapped in \cite{Edel} - whereas we use $\tilde{I}$ to denote a stronger property than $I$, Edel uses $\tilde{I}$ to denote the weaker property of being an admissible set (which is referred to there as a capset), and uses $I$ for a recursively admissible set (which Edel simply calls an admissible set).}
\end{definition}
\smallbreak
One good thing about this class of admissible sets is that they satisfy the pairwise condition for admissible automatically. Two distinct weight $w$ vectors with the same support would violate that condition, so the $\binom{m}{w}$ vectors of $I(m,w)$ contain exactly one vector for each of the $\binom{m}{w}$ ways to choose $w$ of the $m$ coordinates to be non zero, and any 2 distinct vectors $x,y$ then necessarily have coordinates $i,j$ such that $x_i = 0 \neq y_i$ and $x_j \neq 0 = y_j$. This will turn out to be helpful when we need to find admissible sets later, as we can start with the $\binom{m}{w}$ different support sets, which immediately gives us the first condition for admissible, and then we can view the second condition as a 2-colouring problem on the non zero entries of each vector.
\smallbreak
Another reason this type of admissible set is useful to work with is that it makes calculating the size of the extended product cap sets relatively simple.

\begin{lemma} \label{lemma:size}
If we extend a collection of cap sets $(A_0, A_1, A_2)$ by $S = I(m,w) \subseteq \{ 0,1,2\}^m$, where $|A_1| = |A_2|$, then
\[|S(A_0, A_1, A_2)| = \binom{m}{w}|A_0|^{m-w} |A_1|^{w} .\]
\end{lemma}

\begin{proof}
Let $s \in S$. If $s_i = 1$ or $s_i = 2$, then $|A_{s_i}| = |A_1|$, and if $s_i = 0$ then $|A_{s_i}| = |A_0|$. Recall that we defined $s(A_0, A_1, A_2) = A_{s_1} \times \cdots \times A_{s_m}$. Since there are $m-w$ zero coordinates and $w$ non zero coordinates in $s$, a simple counting argument gives $|s(A_0, A_1, A_2)| = |A_0|^{m-w} |A_1|^{w}$.
Then, as $S(A_0, A_1, A_2) = \bigcup\limits_{s \in S} s(A_0, A_1, A_2)$, the $s(A_0, A_1, A_2)$ are all disjoint (for distinct $s, s'$ there is a coordinate where one product uses $A_0$ and the other uses $A_1$ or $A_2$, and $A_0$ is disjoint from both) and $|S| = \binom{m}{w}$, it follows that $|S(A_0, A_1, A_2)| = \binom{m}{w}|A_0|^{m-w} |A_1|^{w}$.
\end{proof}

It is finally time for some examples of admissible sets. For our first example, we prove the existence of an important family of recursively admissible sets, by a relatively simple construction due to Edel. We then use a computer search to produce particular examples of admissible sets.

\begin{lemma} \label{lemma:m-1}
For any $m \geq 2$, there exists a recursively admissible set $\tilde{I}(m,m-1)$.
\end{lemma}

This is a special case of Lemma 13 in \cite{Edel}, when $c=2$.

\begin{proof}
Construct a set $S$ as follows: consider the $m$ vectors that have exactly one coordinate $0$, and all others non zero. For each vector, let all entries before the $0$ be $1$, and all entries after the $0$ be $2$. We show $S$ is a recursively admissible set.
\smallbreak
Let $x,y$ be distinct elements of $S$. Then they must have zeroes in different coordinates, so the pairwise condition for admissible holds: there are $i,j$ such that $x_i = 0 \neq y_i$ and $x_j \neq 0 = y_j$.
\smallbreak
Without loss of generality, let $i<j$. Then $y_i = 1$ and $x_j = 2$ by our construction, so we also have the condition for recursively admissible. That is, there exist $i,j$ such that $\{x_i, y_i\} = \{0,1\}$ and $\{x_j, y_j\} = \{0,2\}$.
\smallbreak
Let $x,y,z$ be distinct elements of $S$, with zeroes in coordinates $i,j,k$ respectively. Without loss of generality, let $i<k<j$. Then $x_k = 2$, $y_k = 1$, $z_k = 0$, so we have a coordinate $k$ such that $\{x_k, y_k, z_k\} = \{0,1,2\}$, which gives the triples condition for admissible.
\end{proof}

\begin{lemma} \label{computer caps}
There exist admissible sets $I(11,7)$, $I(11,6)$ and $I(10,6)$.
\end{lemma}

The admissible sets can be found on the author's webpage at \url{fredtyrrell.com/cap-sets}.
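Both conditions of Definition \ref{def:admissible} can be checked mechanically, and the construction of Lemma \ref{lemma:m-1} can be confirmed for small $m$. The sketch below is our own illustrative code with hypothetical names (the searches behind Lemma \ref{computer caps} use a SAT solver, as described in section 5).

```python
from itertools import combinations

def is_admissible(S):
    """Brute-force check of the two admissibility conditions for
    S, a collection of vectors over {0,1,2} given as tuples."""
    # Condition (1): each of s, t has a 0 in a coordinate where the
    # other is non-zero
    for s, t in combinations(S, 2):
        if not (any(a == 0 != b for a, b in zip(s, t))
                and any(a != 0 == b for a, b in zip(s, t))):
            return False
    # Condition (2): some coordinate of (s, t, u) reads {0,1,2},
    # {0,0,1} or {0,0,2} as a multiset
    for s, t, u in combinations(S, 3):
        if not any(sorted(col) in ([0, 1, 2], [0, 0, 1], [0, 0, 2])
                   for col in zip(s, t, u)):
            return False
    return True

def edel_set(m):
    """The ~I(m, m-1) construction: one 0 per vector, with 1s before
    the 0 and 2s after it."""
    return [tuple([1] * i + [0] + [2] * (m - i - 1)) for i in range(m)]

print(is_admissible(edel_set(6)))   # True
```

For sets as large as $I(11,7)$, an exhaustive search over all sign patterns is infeasible, which motivates the SAT-based approach of section 5.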
The computational methods we employed, including the use of a SAT solver, are described in Section 5. Several other admissible sets were given by Edel, including the $I(10,5)$ used to find the previous lower bound in \cite{Edel}. The admissible sets mentioned in \cite{Edel} can be found on Edel's webpage \cite{EdelSets}. \bigbreak We now present an example of an extendable collection of cap sets in $\mathbb{F}_3^6$. This is exactly the collection used in \cite{Edel} and \cite{CalderbankFishburn}, presented slightly differently. We will summarise the construction - for more details, see Section 3 of \cite{Edel} or Section 2, Figure 3 of \cite{CalderbankFishburn}. \begin{lemma} \label{extendable caps} There is an extendable collection $(A_0, A_1, A_2)$ of cap sets in $\mathbb{F}_3^6$, where $|A_0| = 12$, $|A_1| = |A_2| = 112$. \end{lemma} \begin{proof} \textbf{Constructing the collection:} \smallbreak Consider the following $6 \times 10$ matrix: \begin{center} $\begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0\\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1\\ 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1\\ 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0\\ 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1\\ \end{pmatrix}$ \end{center} This is the incidence matrix of a $(6,3,2)$-design, an example of a balanced incomplete block design, meaning any pair of rows are both 1 in exactly 2 coordinates. Let $D \subseteq \mathbb{F}_{3}^{6}$ be the vectors with non zero entries given in the coordinates corresponding to the 1s in the matrix. By this, we mean $D$ is the set of vectors whose non-zero coordinates are 123, 124, 135, 146, 156, 236, 245, 256, 345 or 346. \smallbreak Let $D'$ be the remaining vectors of $\mathbb{F}_{3}^{6}$ with 3 non zero entries. There are $\binom{6}{3} \times 2^3 = 160$ vectors of weight 3, $|D| = 2^3 \times 10 = 80$, so $|D| = |D'| = 80$.
\smallbreak Let $R$ be the vectors with no zeros, and an even number of 1s, and let $R'$ be the other weight 6 vectors, with an odd number of 1s. Then $|R| = |R'| = 32$. \smallbreak Define $A_1 = D \cup R$, $A_2 = D' \cup R$, and let $A_0$ be the vectors of weight 1. Then $A_0$ is a cap set of size 12 and $A_1, A_2$ are cap sets of size 112 in $\mathbb{F}_{3}^{6}$. Furthermore, $|A_1 \cap A_2| = 32$ and $A_1 + A_2 = \mathbb{F}_{3}^{6} \setminus A_0$. This all follows from the properties of a block design, and by checking that $D + D'$, $R+R$, $D+R$ and $D' + R$ don't contain any weight 1 vectors. \bigbreak \textbf{This collection is extendable:} \smallbreak Let $x,y \in A_0$. Since all elements of $A_0$ have weight 1, $x+y$ must have weight $0, 1$ or $2$. If $z \in A_1 \cup A_2$ is such that $x + y + z = 0$, then $z$ needs to have the same weight as $x+y$. Since $A_1 \cup A_2$ consists only of vectors of weights 3 or 6, we cannot have such a $z$. So, there are no solutions to $x+y+z=0$ where $x,y \in A_0$ and $z \in A_1 \cup A_2$, which is condition (1) for extendable. \smallbreak Let $x \in A_1$ and $y \in A_2$. Since $A_1 + A_2 = \mathbb{F}_{3}^{6} \setminus A_0$, and $z \in A_0 \iff 2z \in A_0$, there is no $z \in A_0$ such that $x + y + z = 0$. This is condition (2) for extendable, and so we have an extendable collection of cap sets. \end{proof} We are now ready to prove the main result of this section, and obtain our first new lower bound for maximal cap sets. We will use several results and examples from this section, to show the following. \begin{theorem} \label{thm:396} There exists a cap set $A \subseteq \mathbb{F}_3^{396}$ of size \[\binom{11}{7} \cdot 6^4 \cdot 12^4 \cdot 112^{62}.\] \end{theorem} \begin{proof} First, we take the extendable collection $(A_0, A_1, A_2)$ from \eqref{extendable caps}, and extend it by the recursively admissible set $S = \tilde{I}(6,5)$, which exists by \eqref{lemma:m-1}.
This gives a cap set $B \subseteq \mathbb{F}_{3}^{36}$ by \eqref{lemma:extended product cap}, where $|B| = 6 \times 112^5 \times 12$ by \eqref{lemma:size}, and an extendable collection $(B, A_{1}^6, A_{2}^6)$ by \eqref{lemma:recursive}. \smallbreak Then we extend our new collection $(B, A_{1}^6, A_{2}^6)$ by $T = I(11,7)$ from \eqref{computer caps}, to produce a cap set in $\mathbb{F}_3^{396}$, which by \eqref{lemma:size} has size \[\binom{11}{7} \cdot (6 \times 112^5 \times 12)^4 \cdot (112^6)^7 .\] \end{proof} \begin{remark} \label{otherbounds} Since $|A|^{1/396} \approx 2.217981$, we see that our cap in $\mathbb{F}_3^{396}$ gives an exponential improvement on the asymptotic lower bound on the size of a cap set. We note that Edel's lower bound uses essentially the same method, but with the admissible sets $S=\tilde{I}(8,7)$ and $T=I(10,5)$. Using either $I(10,6)$ or $I(11,6)$ from \eqref{computer caps} produces a better lower bound than Edel's, but neither is better than our new bound of $\approx 2.217981^n$. \end{remark} \section{Extending The Admissible Set Construction} In this section we extend Edel's methods by mimicking the extended product construction for cap sets, in order to find large admissible sets. This will allow us to construct admissible sets far larger than could be found by computer search, which will be used to obtain a lower bound of $2.218^n$. \smallbreak The following proposition is similar to \eqref{def:ProdCap}, where we showed that the direct product of cap sets is a cap set. \begin{proposition} \label{prop: ad prod} If $S, T$ are admissible sets, then so is their direct product $S \times T$. \end{proposition} \begin{proof} \textbf{Pairwise condition:} \\Let $a,b \in S \times T$ be distinct. Then $a,b$ are of the form $(s,t)$, $(s',t')$ for some $s,s' \in S$ and $t,t' \in T$. If $s,s' \in S$ are distinct, there are coordinates $i,j$ such that $s_i = 0 \neq s_i'$ and $s_j \neq 0 = s_j'$ by condition (1) of admissibility for $S$.
The same is true for $t,t' \in T$, and since $(s,t) \neq (s',t')$, we must have $s \neq s'$ or $t \neq t'$. It follows that for all $a \neq b \in S \times T$, we have coordinates $i,j$ such that $a_i = 0 \neq b_i$ and $a_j \neq 0 = b_j$. So condition (1) for admissibility is satisfied. \bigbreak \textbf{Triples condition:} \\Let $a,b,c \in S \times T$ be distinct. As before, we must have $a = (s,t)$, $b = (s',t')$ and $c = (s'',t'')$ for some (not necessarily distinct) $s,s',s'' \in S$ and $t,t',t'' \in T$. \\ \textbf{Case 1:} If $s,s',s''$ or $t,t',t''$ are all distinct, then condition (2) of admissibility follows immediately by the admissibility of $S$ or $T$. \\ \textbf{Case 2:} Assume that neither $s,s',s''$ nor $t,t',t''$ are all distinct. Without loss of generality, we may assume $s = s'$. Since $(s,t) \neq (s',t')$, we cannot also have $t=t'$, but since $t,t',t''$ are not all distinct, without loss of generality $t=t''$. So, we must have: $a = (s,t)$, $b = (s,t')$ and $c = (s'',t)$. Since $a,b,c$ are distinct, we must have $s \neq s''$. By condition (1) of admissibility of $S$, there is a coordinate $k$ such that $s_k = 0 \neq s''_k$. So, $\{a_k, b_k, c_k\} = \{0,0,1\}$ or $\{0,0,2\}$, hence condition (2) for admissibility also holds for $S \times T$. \end{proof} \begin{remark} Now that we have a direct product construction for admissible sets, there are at least two natural questions: \begin{enumerate} \item Does the direct product construction allow us to produce better admissible sets, by combining known admissible sets? \item Can we generalise the direct product construction for admissible sets, in an analogous way to the extension construction \eqref{def:extended product} for cap sets? \end{enumerate} \end{remark} The answer to the first question is no: much like the situation for direct products of cap sets, taking a direct product of admissible sets only ever does as well as the best of the individual admissible sets.
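Proposition \eqref{prop: ad prod} can also be checked on small examples. The sketch below (a brute-force check; the helper name is ours, and the example sets are the $\tilde{I}(3,2)$ and $\tilde{I}(4,3)$ constructions from Lemma \eqref{lemma:m-1}) forms a direct product by concatenating vectors and verifies both admissibility conditions:

```python
from itertools import combinations

def is_admissible(vectors):
    n = len(vectors[0])
    # Pairwise condition on every distinct pair.
    pairs_ok = all(
        any(x[i] == 0 != y[i] for i in range(n)) and any(x[j] != 0 == y[j] for j in range(n))
        for x, y in combinations(vectors, 2))
    # Triples condition: some coordinate with multiset {0,1,2}, {0,0,1} or {0,0,2}.
    good = {(0, 1, 2), (0, 0, 1), (0, 0, 2)}
    triples_ok = all(
        any(tuple(sorted((x[k], y[k], z[k]))) in good for k in range(n))
        for x, y, z in combinations(vectors, 3))
    return pairs_ok and triples_ok

# Two small admissible sets: the explicit constructions of I~(3,2) and I~(4,3).
S = [(0, 2, 2), (1, 0, 2), (1, 1, 0)]
T = [(0, 2, 2, 2), (1, 0, 2, 2), (1, 1, 0, 2), (1, 1, 1, 0)]
product = [s + t for s in S for t in T]   # direct product via concatenation
assert is_admissible(S) and is_admissible(T) and is_admissible(product)
```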
\smallbreak We focus on the second question: can we improve the product construction for admissible sets, in a similar way to the extended product construction for cap sets? We will answer this question in the affirmative. In particular, in this section we will construct an admissible set in $\{0,1,2\}^{1562}$, which we use to prove \eqref{main} and obtain the lower bound of $2.218^n$. \begin{definition}[Meta-admissible] \label{def:meta ad} We say a set $T \subseteq \{0, 1,2\}^r$ is \emph{meta-admissible} if it is admissible, as in \eqref{def:admissible}. Recall that admissible means: \begin{enumerate} \item For all distinct $t, t' \in T$, there are coordinates $i$ and $j$ such that $t_i = 0 \neq t_i'$ and $t_j \neq 0 = t_j'$. \item For all $t, t', t'' \in T$ distinct, there is a coordinate $k$ such that $\{t_k, t_k', t_k''\} = \{0, 1, 2\}$, $\{0,0,1\}$ or $\{0, 0, 2\}$. \end{enumerate} \end{definition} \begin{definition}[Meta-extendable] \label{def:meta ext} A collection $S_0, S_1, S_2 \subseteq \{0,1,2\}^m$ of admissible sets is \emph{meta-extendable} if: \begin{enumerate} \item For any $s \in S_0$ and $s' \in S_1 \cup S_2$, the weight of $s$ is less than the weight of $s'$, so all the vectors in $S_0$ have more zeroes than any vector in $S_1$ or $S_2$. \item If $x,y \in S_0$ and $z \in S_1 \cup S_2$ then there is a coordinate $k$ such that $\{x_k, y_k, z_k\} = \{0, 1, 2\}$, $\{0,0,1\}$ or $\{0, 0, 2\}$. \item If $x \in S_0$, $y \in S_1$ and $z \in S_2$, then there is a coordinate $k$ such that $\{x_k, y_k, z_k\} = \{0, 1, 2\}$, $\{0,0,1\}$ or $\{0, 0, 2\}$. \end{enumerate} \end{definition} We have a similar construction to \eqref{def:extended product}, but this time we extend a collection of admissible sets rather than cap sets.
\begin{definition} \label{def:meta con} If $S_0, S_1, S_2$ are admissible sets, and $T \subseteq \{0, 1,2\}^r$, for each $t = (t_1, \ldots, t_r) \in T$ we define \[t(S_0, S_1, S_2) = S_{t_1} \times \cdots \times S_{t_r}.\] Predictably, we then define \[T(S_0, S_1, S_2) = \bigcup_{t \in T} t\bra{S_0, S_1, S_2} = \bigcup_{t \in T} \bra{S_{t_1} \times \cdots \times S_{t_r}}.\] \end{definition} \bigbreak It will not come as a shock that our new definitions of meta-admissible \eqref{def:meta ad} and meta-extendable \eqref{def:meta ext} allow us to use the extended product construction on admissible sets \eqref{def:meta con} to produce new admissible sets. \begin{lemma} If $S_0, S_1, S_2 \subseteq \{0,1,2\}^m$ is a meta-extendable collection of admissible sets, and $T \subseteq \{0, 1,2\}^r$ is meta-admissible, then $T(S_0, S_1, S_2)$ is an admissible set. \end{lemma} \begin{proof} \textbf{First condition for admissible (pairs):} Let $x,y \in T(S_0, S_1, S_2)$ be distinct. So we know $x \in S_{t_1} \times \cdots \times S_{t_r}$ and $y \in S_{t'_1} \times \cdots \times S_{t'_r}$ for some $t,t' \in T$. There are two cases: $t=t'$ or $t \neq t'$. \smallbreak If $t=t'$, we know $x,y \in S_{t_1} \times \cdots \times S_{t_r}$, which is admissible by \eqref{prop: ad prod}. \smallbreak Assume $t \neq t'$. By condition (1) of meta-admissible, there is a coordinate $i$ such that $t_i = 0 \neq t_i'$, and $j$ such that $t_j \neq 0 = t'_j$. Without loss of generality, assume $i < j$. So \[x \in S_{t_1} \times \cdots \times S_{0} \times \cdots \times S_{t_j} \times \cdots \times S_{t_r}\] and \[y \in S_{t'_1} \times \cdots \times S_{t'_i} \times \cdots \times S_0 \times \cdots \times S_{t'_r}.\] Let $s \in S_0$, $s' \in S_{t'_i}$. Since $t'_i \neq 0$, by property (1) of meta-extendable $s$ has lower weight than $s'$.
By the pigeonhole principle there must be a coordinate $k$ such that $s_{k} = 0 \neq s'_k$, as $s$ has more zero entries than $s'$, so $s$ must be zero somewhere $s'$ is not. Similarly, for $s'' \in S_0$, $s''' \in S_{t_j}$, since $t_j \neq 0$ there is a coordinate $\ell$ such that $s''_{\ell} = 0 \neq s'''_{\ell}$. So, it follows that there are coordinates $i', j'$ such that $x_{i'} = 0 \neq y_{i'}$ and $x_{j'} \neq 0 = y_{j'}$, so we have the first condition for admissibility in this case too. \bigbreak \textbf{Second condition of admissibility (triples):} Let $x,y,z \in T(S_0, S_1, S_2)$ be distinct. If $x,y,z$ all come from the same $t \in T$ then we are done, by the direct product construction. So, we need to consider the other cases: $x,y$ come from $t$ and $z$ comes from $t'$, or $x,y,z$ come from distinct $t, t', t''$. \smallbreak If $x,y$ are from $t$ and $z$ is from $t'$ where $t \neq t'$, then by definition of $T$ being meta-admissible, there is a coordinate $i$ such that $t_i = 0 \neq t'_i$. So, the $i$-th blocks $x_i, y_i$ of $x,y$ are from $S_0$, and the $i$-th block $z_i$ of $z$ is in $S_1 \cup S_2$. Then by condition (2) of meta-extendable applied to $x_i, y_i, z_i$, there is a coordinate $k$ where $\{x_k, y_k, z_k\} = \{0, 1, 2\}$, $\{0,0,1\}$ or $\{0, 0, 2\}$. \smallbreak Finally, if $x,y,z$ come from distinct $t, t', t''$ then by condition (2) of meta-admissible, we have a coordinate $k$ where either $\{t_k, t'_k, t''_k\} = \{0, 0, 1\}$, $\{0, 0, 2\}$ or $\{0, 1, 2\}$. \smallbreak In the first two cases, two of $x_k$, $y_k$, $z_k$ are in $S_0$ and the other one is from $S_1 \cup S_2$, and we are done as above, with condition (2) of meta-extendable. \smallbreak If $\{t_k, t'_k, t''_k\} = \{0, 1, 2\}$, then exactly one of $x_k$, $y_k$, $z_k$ is in each of $S_0$, $S_1$, $S_2$. Then by condition (3) of meta-extendable, we are done.
\end{proof} \smallbreak \begin{lemma} \label{metacoll} There is a meta-extendable collection of admissible sets $S_0, S_1, S_2$, where $S_1$ and $S_2$ are both $I(11,7)$ admissible sets, and $S_0 \subseteq I(11,3)$ has size 37. \end{lemma} \begin{proof} We use $S_1 = I(11,7)$ from \eqref{computer caps}, and then we take $S_2$ as the set formed by swapping all 1s and 2s in $S_1$. Note that this is still an admissible set, since the pairwise condition is unaffected by swapping non zero entries, and if there is a coordinate $k$ where $\{x_k, y_k, z_k\} = \{0,0,1\}$, $\{0,0,2\}$ or $\{0,1,2\}$, then after swapping 1s and 2s we will have $\{x_k, y_k, z_k\} = \{0,0,1\}$, $\{0,0,2\}$ or $\{0,1,2\}$. \smallbreak Using a computer search, we can find $S_0 \subseteq I(11,3)$ such that $|S_0| = 37$ and $S_0, S_1, S_2$ satisfy the conditions for meta-extendable given in \eqref{def:meta ext}. The admissible set $S_0 \subseteq I(11,3)$ can be found on the author's webpage at \url{fredtyrrell.com/cap-sets}. \end{proof} \begin{lemma} There is an admissible set $T' \subseteq \{0, 1,2\}^{1562}$ such that $|T'| = 142 \cdot 37 \cdot \binom{11}{7}^{141}$ and all $t \in T'$ have weight 990. \end{lemma} \begin{proof} Let $T = I(142,141)$, which exists by \eqref{lemma:m-1}. Using the meta-extendable collection $(S_0, S_1, S_2)$ from \eqref{metacoll} above, we can then produce a new admissible set $T'=T(S_0, S_1, S_2)$. Since $S_0, S_1, S_2 \subseteq \{0,1,2\}^{11}$, and $11 \times 142 = 1562$, we see that $T(S_0, S_1, S_2) \subseteq \{0,1,2\}^{1562}$. \smallbreak Each element of $T(S_0, S_1, S_2)$ contains 141 blocks from $S_1$ or $S_2$ and one block from $S_0$, so has weight $141 \times 7 + 3 = 990$. For each $t \in T$, the set $t(S_0, S_1, S_2)$ has $\binom{11}{7}^{141} \cdot 37$ different vectors, so $|T'| = 142 \cdot 37 \cdot \binom{11}{7}^{141}$. \end{proof} Using this new admissible set, we can finally prove the main result of this paper.
\begin{proof}[Proof of Theorem \eqref{main}] We use the admissible sets $S=\tilde{I}(6,5)$, which exists by \eqref{lemma:m-1}, and $T' \subseteq \{0, 1,2\}^{1562}$ from the previous lemma. First, we apply the recursively admissible set $S$ to $(A_0, A_1, A_2)$, the 6 dimensional extendable collection of cap sets from \eqref{extendable caps}, to produce the extendable collection $(B, A_1^6, A_2^6)$, where $B$ has size $6 \cdot 112^5 \cdot 12$. Then we apply $T'$, and our final cap set has size \[|T'| \cdot |B|^{1562-990} \cdot (112^6)^{990} = \binom{11}{7}^{141} \cdot 6^{572} \cdot 12^{572} \cdot 112^{8800} \cdot 37 \cdot 142.\] \end{proof} \begin{remark} This cap set in $\mathbb{F}_3^{56232}$ has size $|A| \approx 2.21 \times 10^{19454} \approx 10^{10^{4.3}}$, and since $|A|^{1/56232} \approx 2.218021$, this example gives a slight improvement to the lower bound in \eqref{thm:396}. Similarly to remark \eqref{otherbounds} at the end of Section 2, it is possible to find other meta-extendable collections. For example, there is a collection where $S_1 = I(11,6)$, $S_2$ is $S_1$ with 1s and 2s swapped, and $S_0 \subseteq I(11,2)$ has size 20. However, none of these give a better bound than the above. \end{remark} \subsection{Summary} Now that we have proved all of the lower bounds in this paper, we present the results we have achieved alongside the previous lower bounds. Note that the bounds of Edel in \cite{Edel} and Calderbank and Fishburn in \cite{CalderbankFishburn} both come from what we are now calling the extended product construction.
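As a sanity check, the constants $2.217981\ldots$ and $2.218021\ldots$ stated above can be reproduced numerically. The Python sketch below (the helper function is ours) works with logarithms, since the cap set sizes themselves are astronomically large; it also computes the limiting constant $124^{1/6}$ discussed in the next section:

```python
from math import comb, exp, log

def root_of_product(factors, n):
    """n-th root of prod(base ** exponent), computed via logarithms."""
    return exp(sum(e * log(b) for b, e in factors) / n)

# The cap set in F_3^396: size binom(11,7) * 6^4 * 12^4 * 112^62.
b396 = root_of_product([(comb(11, 7), 1), (6, 4), (12, 4), (112, 62)], 396)

# The cap set in F_3^56232: size binom(11,7)^141 * 6^572 * 12^572 * 112^8800 * 37 * 142.
b56232 = root_of_product(
    [(comb(11, 7), 141), (6, 572), (12, 572), (112, 8800), (37, 1), (142, 1)], 56232)

# Limiting constant for the 6-dimensional building blocks: (12 + 112)^(1/6).
limit = 124 ** (1 / 6)

assert abs(b396 - 2.217981) < 1e-4
assert abs(b56232 - 2.218021) < 1e-4
assert b396 < b56232 < limit
```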
\smallbreak \begin{center} \renewcommand{\arraystretch}{1.5} \begin{tabular}{ |c|c|c|c| } \hline \textbf{Bound} & \textbf{Construction} & \textbf{Dimension} & \textbf{Appears in} \\ [1ex] \hline\hline 2 & $\{0,1\}^n$ & All & Trivial \\[1ex] \hline 2.114742\ldots & Maximal cap of size 20 in $\mathbb{F}_3^4$ & 4 & \cite{Pellegrino} \\ [1ex] \hline 2.141127\ldots & Maximal cap of size 45 in $\mathbb{F}_3^5$ & 5 & \cite{10.1006/jcta.2002.3261} \\ [1ex] \hline 2.195514\ldots & Maximal cap of size 112 in $\mathbb{F}_3^6$ & 6 & \cite{dim6} \\ [1ex] \hline 2.210147\ldots & $\tilde{I}(25,24)$ and $I(90,89)$ & 13500 & \cite{CalderbankFishburn} \\ [1ex] \hline 2.217389\ldots & $\tilde{I}(8,7)$ and ${I}(10,5)$ & 480 & \cite{Edel} \\ [1ex] \hline \hline 2.2175608\ldots & $\tilde{I}(7,6)$ and ${I}(10,6)$ & 420 & \eqref{otherbounds} \\[1ex] \hline 2.217950\ldots & $\tilde{I}(7,6)$ and ${I}(11,6)$ & 462 & \eqref{otherbounds} \\[1ex] \hline 2.217981\ldots & $\tilde{I}(6,5)$ and ${I}(11,7)$ & 396 & \eqref{thm:396} \\[1ex] \hline 2.218021\ldots & Meta-extendable collection & 56232 & \eqref{main} \\[1ex] \hline \end{tabular} \end{center} \section{Limits to the admissible set construction} Given that our new lower bounds have come from finding new admissible sets, it is natural to ask what happens if we continue to find more admissible sets. In particular, we would like to know how much we could improve the lower bound by, if all admissible sets were to exist. Edel gave an answer to this question in the very final section of \cite{Edel}, which we present as the following proposition. \begin{proposition} \label{prop:asymptotic limit} For a collection $(A_0, A_1, A_2)$ of extendable cap sets in $\mathbb{F}_3^{n}$, where $|A_1| = |A_2|$, the best admissible sets are those of the form $I(m,m \alpha)$ where $\alpha = \frac{|A_1|}{|A_0|+|A_1|}$ and $m$ is large. 
Using the extended product construction, the best constant we could achieve in our asymptotic lower bound is $c = \left(|A_0| + |A_1|\right)^{1/{n}}$. \end{proposition} \begin{proof} Let $\alpha \in (0,1)$. If we apply the admissible set $I(m,m \alpha)$ to $(A_0, A_1, A_2)$, we have a cap set in $\mathbb{F}_3^{nm}$ of size \[\binom{m}{m \alpha} \cdot |A_1|^{m \alpha} \cdot |A_0|^{m(1-\alpha)}.\] For large $m$, we can use the well known asymptotic estimate $\binom{m}{m\alpha} = 2^{(h(\alpha)+o(1))m}$, where $h(x) = -x \log_2(x) - (1-x)\log_2(1-x)$ is the binary entropy function. Taking logs, applying a change of base and removing constants, we see that we want to maximise the function \[f(x) = x \log\bra{\frac{|A_1|}{|A_0|}} - x\log(x) - (1-x)\log(1-x).\] The derivative is given by $f'(x) = \log\bra{\frac{|A_1|}{|A_0|}} + \log(1-x) - \log(x)$, which has its root at $x = \frac{|A_1|}{|A_0|+|A_1|}$. We can then substitute $w=m \alpha = m\frac{|A_1|}{|A_0|+|A_1|}$ back into our formula from \eqref{lemma:size} for the size of the cap set, giving a cap set in $\mathbb{F}_3^{nm}$ of size $\left(|A_0|+|A_1|\right)^{m+o(m)}$. \end{proof} \begin{remark} For the collection of cap sets in $\mathbb{F}_3^6$ from \eqref{extendable caps}, the best admissible sets are those of the form $I\left(m, \frac{28m}{31}\right)$ for large $m$, and the best asymptotic lower bound we could get using these 6 dimensional cap sets is $\left(124^{1/6}\right)^n = (2.233\ldots)^n$. \end{remark} Our experimental evidence, combined with some heuristic arguments (and perhaps a little wishful thinking), leads us to explicitly state the following conjecture, which was implied at the end of \cite{Edel}. \begin{conjecture} \label{conj1} For any $m>w>0$, there always exists an $I(m,w)$ admissible set. Therefore, the maximum size of a cap set in $\mathbb{F}_3^n$ is at least $124^{n/6} \approx 2.233^n$.
\end{conjecture} \begin{remark} In addition to all admissible sets in dimensions up to 11, we can prove the existence of admissible sets of weights 2 and 3 for all $m$, via a similar construction to \eqref{lemma:m-1}. These, combined with the admissible sets $I(m,0)$, $I(m,1)$, $I(m,m-1)$ and $I(m,m)$, are all of the admissible sets currently known to exist. \end{remark} The following table shows the lowest dimension admissible set which would be needed to achieve new bounds, using the extended product construction in \eqref{def:extended product} with the cap sets in $\mathbb{F}_3^6$ from \eqref{extendable caps}. \smallbreak \begin{center} \renewcommand{\arraystretch}{1.4} \begin{tabular}{ |c|c|c| } \hline \textbf{Bound} & \textbf{Admissible Sets} & \textbf{Dimension} \\[1ex] \hline\hline 2.220\ldots & $\tilde{I}( 5 , 4 )$ and $I( 17 , 11 )$ & 510 \\[1ex] \hline 2.225\ldots & $\tilde{I}( 3 , 2 )$ and $I( 54 , 41 )$ & 972 \\[1ex] \hline 2.230\ldots & $I( 311 , 281 )$ & 1866 \\[1ex] \hline 2.233\ldots & $I( 22948 , 20727 )$ & 137688 \\[1ex] \hline 2.233076\ldots & $I\left(m, \frac{28m}{31}\right)$ for large $m$ & $6m$ \\[1ex] \hline \end{tabular} \end{center} \smallbreak While small improvements to our bound are possible by constructing better admissible sets, we do not expect that a significant increase in the lower bound is possible by simply finding more admissible sets through a computer search. It is perhaps not a huge surprise then that although Edel was able to find an $I(10,5)$ about 20 years ago, the best we have been able to do is an $I(11,7)$. \section{The SAT Solver} \begin{definition}[Boolean satisfiability] The Boolean satisfiability problem asks whether, given a propositional formula, there exists an assignment of true or false to each variable in the formula such that the formula is true. If this is the case, we say that the formula is \emph{satisfiable}.
\end{definition} Significant research has gone into finding efficient algorithms for the Boolean satisfiability problem, known as SAT solvers. We used the kissat SAT solver, which is described in \cite{SAT}. Most SAT solvers, including kissat, take inputs in a format called conjunctive normal form (CNF). \begin{definition}[Conjunctive normal form] A propositional formula is in \emph{conjunctive normal form} if the formula consists of a conjunction of clauses, where each clause is a disjunction of propositional variables or their negations. \end{definition} Our use of the SAT solver requires three steps. First, we convert the problem of a particular admissible set existing into a statement in CNF. Once we have defined the problem in CNF, we can use a SAT solver program to check whether this formula is satisfiable. If our formula is satisfiable, the SAT solver returns a satisfying assignment of the variables, which we convert back into a set of vectors, producing our admissible set. The first step is the interesting one, which we will discuss in this section. \subsection{CNF algorithm} \subsubsection{Variables} We begin by generating the $\binom{m}{w}$ different support sets. In other words, for each $1 \leq i \leq \binom{m}{w}$ we have a different $S_i \subseteq \{1, \ldots, m\}$, where $|S_i| = w$. We then define a variable for each non zero coordinate in each support set - let $S_i^k$ correspond to the element $k$ in $S_i$, meaning the $k$th coordinate of the $i$th support set; the truth value of $S_i^k$ records whether this entry is a 1. There are $w \cdot \binom{m}{w}$ such variables $S_i^k$. \smallbreak For any pair of support sets, we want to record when a given coordinate is different. So, for each coordinate in the first support set, we define a variable which is true if this coordinate is the same in the second support set as well, and false if it is not. We can represent this variable as $S_{i,j}^k$, corresponding to the pair $S_i, S_j$ of support sets, and the coordinate $k$.
This variable $S_{i,j}^k$ is true if coordinate $k$ is in both support sets and $S_i^k = S_j^k$, and is false if coordinate $k$ is only in $S_i$ or $S_i^k \neq S_j^k$. \subsubsection{Reconciling the single and pair variables} We now add constraints, starting with constraints which combine the single variables and the pairwise variables. This is a consistency constraint, making sure that the variables describing individual coordinates and the variables describing pairs of coordinates are compatible. \smallbreak For any pair $S_i$ and $S_j$ of different support sets, for every coordinate $k$ in both $S_i$ and $S_j$ we ask that either $S_{i,j}^k$ is true or $\{S_i^k, S_j^k\} = \{1,2\}$. An equivalent way to phrase this is as follows: take any pair $x,y$ with different support sets. For each coordinate $k$ where both $x_k$ and $y_k$ are non zero we add the constraints $(x_k=y_k) \lor (x_k=1) \lor (y_k=1)$ and $(x_k=y_k) \lor (x_k=2) \lor (y_k=2)$. \smallbreak In other words, if the variable recording that $x_k \neq y_k$ with $x_k, y_k > 0$ is true, then exactly one of the two variables recording whether $x_k$ or $y_k$ is a 1 is true. This is essentially just saying that $x_k \neq 0 \neq y_k$ and $x_k \neq y_k$ implies $\{x_k, y_k\} = \{1,2\}$. \subsubsection{Checking the condition on triples} We now ensure every triple $x,y,z$ in our set has a coordinate $k$ such that $\{x_k,y_k,z_k\} = \{0,0,1\}$, $\{0,0,2\}$ or $\{0,1,2\}$. This is the triples condition for admissible sets from \eqref{def:admissible}. \smallbreak Take any 3 distinct support sets, and check all of the coordinates from $1$ to $m$. If a coordinate is in exactly one of the 3 support sets, we are done and don't need to worry about this triple of support sets - we will be automatically guaranteed a coordinate where our triple is $\{0,0,1\}$ or $\{0,0,2\}$. \bigbreak The problem we need to consider is when there is no coordinate in exactly one of the 3 supports.
That is, every coordinate in one of the three supports is in at least one of the others. In this case, we need to look at the coordinates in exactly 2 of the supports, and force one of these coordinates to give us $\{0,1,2\}$. If $k$ is in the support of $x,y$, but not $z$, we can add the condition $x_k \neq y_k$, which would mean $\{x_k, y_k, z_k\} = \{0,1,2\}$. \smallbreak We do this for every coordinate in exactly 2 of the support sets of $x,y,z$, and we then take the disjunction of these conditions, meaning we require this to be true for at least one coordinate. This produces a constraint which asks for $\{x_k,y_k,z_k\} = \{0,1,2\}$ in at least one coordinate $k$. So, if this is satisfied for all triples without a coordinate in exactly one of the three supports, we are done, and have an admissible set. \subsection{Improving the SAT solver} In order for the SAT solver to return an output in a reasonable time, we add more constraints to the problem, reducing the search space of all potential assignments and hence hopefully allowing the SAT solver to work faster. We need to be clever with our choice of extra conditions, to make the algorithm more efficient without turning the problem into one which is unsatisfiable. By studying smaller examples of admissible sets, heuristic arguments and a healthy dose of educated guesswork, we tried various combinations of further constraints on our problem. Some made the problem unsatisfiable, and some still did not allow the SAT solver to finish within a reasonable time. However, we used this information to refine our method, and eventually we were successful in producing 3 new admissible sets. The following additional constraints allowed us to find the admissible sets $I(11,7)$, $I(11,6)$ and $I(10,6)$. \subsubsection{Extra constraints for $I(11,7)$} \begin{enumerate}[label=(\roman*)] \item Every vector must have the first 2 non zero entries different - that is, the first 2 non zero coordinates are always $(1,2)$ or $(2,1)$.
\item Every vector must have at least one of each digit 1 and 2 in their 4th, 5th or 6th coordinates - they cannot all be 1 or all be 2. \item If the third non zero entry is in coordinates 1 to 7, it is always a 1. \item If the fourth non zero entry is in coordinates 1 to 7, it is always a 2. \end{enumerate} \subsubsection{Extra constraints for $I(11,6)$} \begin{enumerate}[label=(\roman*)] \item Every vector must have the first 2 non zero entries different - that is, the first 2 non zero coordinates are always $(1,2)$ or $(2,1)$. \item Every vector must have at least one of each digit 1 and 2 in the final 3 non zero coordinates - they cannot all be 1 or 2. \item If the third non zero entry is in coordinates 1 to 7, it is always a 1. \end{enumerate} \subsubsection{Extra constraints for $I(10,6)$} \begin{enumerate}[label=(\roman*)] \item If the second non zero entry is in the first 6 coordinates, make it a 1. \item If the third non zero entry is in the first 6 coordinates, make it a 2. \item No vector ends in $(2,2)$. \end{enumerate} \begin{remark} The CNF generation code and the code to transform the SAT output back to vectors can be found on the author's website, at \url{fredtyrrell.com/cap-sets}. We would like to make it clear that our use of the SAT solver is unlikely to be the most efficient way to produce admissible sets, and we would be very interested in suggestions to improve this aspect of our construction, from those with more experience and knowledge of SAT solvers and other computational methods. \end{remark} \section*{Acknowledgements} I am extremely grateful to Thomas Bloom for suggesting the problem and supervising my research. I would like to thank Thomas for his support, encouragement and advice throughout my project. In addition, I thank him for many helpful discussions and useful suggestions in preparing this paper. 
\smallbreak Thanks also go to Akshat, Albert, Ittihad, Maria, and Yifan for being an excellent audience for my presentations in the Maths Institute, where they provided me with a valuable opportunity to share and discuss my research. \smallbreak Finally, I would like to thank Anubhab, Chris, Claire, Flora, James and Jess for allowing me to try and explain my work to them, with varying levels of success. \smallbreak The work in this paper was completed while the author was employed as a Summer Project Intern in the Mathematical Institute, University of Oxford, under the supervision of Dr Thomas Bloom, and was supported by the supervisor's Royal Society University Research Fellowship. \printbibliography \end{document}
https://arxiv.org/abs/2004.06097
Saturation problems in the Ramsey theory of graphs, posets and point sets
In 1964, Erdős, Hajnal and Moon introduced a saturation version of Turán's classical theorem in extremal graph theory. In particular, they determined the minimum number of edges in a $K_r$-free, $n$-vertex graph with the property that the addition of any further edge yields a copy of $K_r$. We consider analogues of this problem in other settings. We prove a saturation version of the Erdős-Szekeres theorem about monotone subsequences and saturation versions of some Ramsey-type theorems on graphs and Dilworth-type theorems on posets. We also consider semisaturation problems, wherein we allow the family to have the forbidden configuration, but insist that any addition to the family yields a new copy of the forbidden configuration. In this setting, we prove a semisaturation version of the Erdős-Szekeres theorem on convex $k$-gons, as well as multiple semisaturation theorems for sequences and posets.
\section{Introduction} Extremal problems have a long history in combinatorics originating with the results of Mantel~\cite{Mantel} in 1907 and Turán~\cite{Turan} in 1941 determining the maximum number of edges in a triangle- and $K_r$-free, $n$-vertex graph, respectively. Erdős, Hajnal and Moon~\cite{ehm} investigated the dual problem, called the saturation problem, wherein one aims to minimize the number of edges in a $K_r$-free, $n$-vertex graph, such that the addition of any edge yields a copy of $K_r$. Since their initial result, the saturation problem has been considered for a variety of graphs. Of particular note is a theorem of Kászonyi and Tuza~\cite{Tuza}, which states that for any graph $H$, the minimum number of edges in an $H$-saturated, $n$-vertex graph is at most linear in $n$. Going beyond graphs, saturation problems have been considered in several other settings. A structure which is maximal with respect to some property is said to \emph{saturate} that property. A maximum size saturating structure is called an \emph{extremal structure}, while a minimum size saturating structure is called a \emph{minimal saturating structure}. For intersecting hypergraphs, a saturation version of the Erdős-Ko-Rado theorem~\cite{ekr} (uniform setting) was proven by Füredi~\cite{furedi}. In particular, he showed that for a given uniformity $r$, there exists a family of approximately $3r^2/4$ sets of size $r$ with the property that adding any further set yields a pair of disjoint sets, disproving a conjecture of Meyer~\cite{meyer}. In the nonuniform setting, it is well known that all maximal intersecting families of subsets of an $n$-element set have the same size, namely $2^{n-1}$. This result was extended to the case of families without $k$-matchings by Buci\'c~\emph{et al.}~\cite{bucic}.
In the setting of forbidden (induced or non-induced) posets in the Boolean lattice the saturation problem has been investigated by Ferrara~\emph{et al.}~\cite{Ferrara}, and further results in this direction were obtained in~\cite{mar} and~\cite{satsolved}. In parallel with the development of extremal combinatorics, Ramsey theory has been investigated extensively. This topic begins with the seminal result of Ramsey~\cite{ramsey}, which states that for any integers $c,r,k$ there is an integer $N$ such that any $c$-coloring of the edges of the complete $r$-uniform hypergraph on $N$ vertices contains a monochromatic complete subhypergraph on $k$ vertices in some color. This initial result gave rise to a variety of problems where in place of a complete hypergraph one is given hypergraphs $F_1,F_2,\dots,F_c$, and seeks to minimize the value of $N$ which yields, for all $c$-colorings of the complete $r$-uniform, $N$-vertex hypergraph, a copy of $F_i$ in color $i$ for some $i=1,2,\dots,c$. Ramsey-type problems may be interpreted as extremal problems in the following way. One wishes to maximize the number of vertices $n$ in such a way that there exists a coloring, such that for all $i$, we find no copy of $F_i$ in color $i$ (so $n=N-1$, where $N$ is defined as above). With this interpretation, it becomes natural to ask the corresponding minimal saturation problem, wherein we seek to minimize $n$ such that the hypergraph can be $c$-colored without a monochromatic copy of $F_i$ in color $i$, but if we extend this $c$-colored hypergraph to a $c$-colored hypergraph on $n+1$ vertices, then for some $i$ we obtain a monochromatic copy of $F_i$ in color $i$. Finally, we mention that many classical results, such as Dilworth's theorem on posets and the Erd\H{o}s-Szekeres theorem for sequences or cups and caps, can be interpreted as Ramsey-type problems where the allowed colorings of the hypergraph are restricted in some way.
As such, we may again consider the corresponding saturation versions of these results. In this paper, we initiate such a study of Ramsey-type saturation problems. We concentrate on well-known settings (graphs, posets, monotone and convex subsets of point sets), and in several cases we manage to prove tight bounds. In addition, we also consider the corresponding semisaturation problems, a notion introduced by F\"{u}redi and Kim~\cite{semi} (also sometimes called oversaturation or strong saturation). In the graph setting the semisaturation problem is the following: Given a graph $F$, what is the minimum number of edges in an $n$-vertex graph $G$ with the property that adding any edge to $G$ yields a copy of $F$ containing that edge? Note that now we allow the graph $G$ to contain $F$ as a subgraph. We will consider semisaturation problems for sequences, cups and caps, posets and point sets as well as for the Ramsey problem on graphs. Note that by definition, the semisaturation number is always at most the saturation number, which is in turn at most the extremal (Ramsey) number. In the rest of this section we provide a precise formulation of each of the saturation and semisaturation problems that are considered in the paper and our results for each case. We also briefly summarize the known results about the corresponding extremal problem in order to contrast them with our minimal saturation results. Sections~\ref{section:graph}--\ref{section:conv} contain the proofs of these results. Finally, in Section~\ref{general} we rigorously define the general framework that was hinted at above and illustrate how these problems fit into it. \subsection*{Graphs} Let $\mbox{\ensuremath{\mathcal G}}\xspace$ be the family of (labeled) complete graphs whose edges are colored with $c$ colors (numbered by $1,2,\dots, c$).
Given $G, G'\in \mbox{\ensuremath{\mathcal G}}\xspace$, we say $G'$ \emph{extends} $G$ if $G$ is a proper subgraph of $G'$, i.e., $G'$ can be obtained from $G$ by iteratively adding a new vertex and colored edges connecting the new vertex with each of the existing vertices. A member $G$ of $\mbox{\ensuremath{\mathcal G}}\xspace$ is called \emph{$(k_1, k_2, \ldots, k_c)$-saturated} if for every $i\in [c]$, the graph $G$ does not contain a monochromatic $K_{k_i}$ of color $i$, but every $G' \in \mbox{\ensuremath{\mathcal G}}\xspace$ that extends $G$ contains a monochromatic $K_{k_i}$ of color $i$ for some $i$. A graph $G \in \mbox{\ensuremath{\mathcal G}}\xspace$ is called \emph{$(k_1, k_2, \ldots, k_c)$-semisaturated} if for every $G'\in \mbox{\ensuremath{\mathcal G}}\xspace$ that extends $G$, there exists some $i\in [c]$ such that $G'$ contains a copy of a monochromatic $K_{k_i}$ of color $i$ which is not in $G$. Clearly the size of the largest $(k_1, k_2, \ldots, k_c)$-saturated graph in $\mbox{\ensuremath{\mathcal G}}\xspace$, which we denote by $\ram_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1,\dots, k_c)$, is equal to the usual Ramsey number minus one. Let $\sat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1,\dots, k_c)$ denote the size of the smallest saturated $G\in \mbox{\ensuremath{\mathcal G}}\xspace$, and finally let $\osat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1,\dots, k_c)$ denote the size of the smallest $(k_1, \dots, k_c)$-semisaturated $G\in \mbox{\ensuremath{\mathcal G}}\xspace$.
From the definition it is clear that \[\osat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1,\dots, k_c)\le \sat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1,\dots, k_c) \le \ram_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1,\dots, k_c).\] For convenience, we also use $\sat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k;c)$ to denote $\sat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1, k_2, \dots, k_c)$ and $\osat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k;c)$ to denote $\osat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1, k_2, \dots, k_c)$ when $k_1 = k_2 = \cdots = k_c = k$. For a fixed $k$ and growing $l$, the following results about Ramsey numbers are known (due to Bohman and Keevash~\cite{BK} and Ajtai, Komlós and Szemerédi~\cite{AKS}, respectively): \[c'_k\frac{l^{\frac{k+1}{2}}}{(\log l)^{\frac{k+1}{2}-\frac{1}{k-2}}}\le \ram_{\mbox{\ensuremath{\mathcal G}}\xspace}(k,l)\le c_k\frac{l^{k-1}}{(\log l)^{k-2}}.\] For the case $l=3$, it is known that \[\ram_{\mbox{\ensuremath{\mathcal G}}\xspace}(k,3)=\Theta\left(\frac{k^2}{\log k}\right).\] The upper bound was proven by Ajtai, Komlós, and Szemerédi~\cite{AKS}; the lower bound was obtained originally by Kim~\cite{Kim}, and subsequently improved by Fiz Pontiveros, Griffiths and Morris~\cite{FGM} and Bohman and Keevash~\cite{BK}. We prove the following results: \begin{theorem}\label{theorem:satgraphs} For two colors, \[\osat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k,l)= \sat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k,l)= (k-1)(l-1),\] and for more than two colors, \[(k_1-1)(k_2+\dots+k_c-2c+3)\le \osat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1, \ldots, k_c)\le \sat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1, \ldots, k_c)\le (k_1-1)\cdots (k_c-1).\] In the latter lower bound we can exchange $k_1$ with any other $k_i$. \end{theorem} In the case when $k_1=k_2=\dots =k_c=k$, Theorem~\ref{theorem:satgraphs} implies that $\sat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k;c)\le (k-1)^c$. 
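As a quick illustration of the relation $\ram_{\mbox{\ensuremath{\mathcal G}}\xspace}(k,l)=R(k,l)-1$ noted earlier, the smallest case $k=l=3$ can be verified exhaustively. The following Python sketch (all helper names are ours, not notation from the paper) checks that every $2$-coloring of $K_6$ contains a monochromatic triangle, while the pentagon coloring of $K_5$ avoids one, so $\ram_{\mbox{\ensuremath{\mathcal G}}\xspace}(3,3)=5$.

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """colouring maps each pair (a, b) with a < b to 0 (blue) or 1 (red)."""
    return any(
        colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

# Every 2-colouring of K_6 has a monochromatic triangle, i.e. R(3,3) <= 6.
edges6 = list(combinations(range(6), 2))
all_k6_forced = all(
    has_mono_triangle(6, dict(zip(edges6, colours)))
    for colours in product((0, 1), repeat=len(edges6))
)

# The pentagon colouring of K_5 (blue = 5-cycle, red = the complementary
# 5-cycle) has no monochromatic triangle, so ram_G(3,3) = R(3,3) - 1 = 5.
pentagon = {(a, b): 0 if (b - a) % 5 in (1, 4) else 1
            for a, b in combinations(range(5), 2)}
k5_free = not has_mono_triangle(5, pentagon)
```

The exhaustive loop runs over all $2^{15}$ colorings of $K_6$, which is instantaneous at this size.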
Using an idea of Pálvölgyi~\cite{pdperscomm}, with probabilistic methods we improve this bound in the case when $c$ is large compared to $k$. \begin{theorem}\label{thm:random} $\osat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k;c)\le 48k^2c^{k^2}$. \end{theorem} \subsection*{Posets} In this paper we are also interested in saturation problems on partially ordered sets (posets). Dilworth's theorem~\cite{Dilworth} answers a Ramsey-type problem about posets, since it implies that a poset of size $(k-1)(l-1)+1$ contains either a chain of length $k$ or an antichain of length~$l$. A natural saturation and semisaturation version of this problem can be posed. Let $\mbox{\ensuremath{\mathcal P}}\xspace$ denote the set of all finite posets. Given $P = (S, \leq_P)$, $P'=(S',\leq_{P'})\in \mbox{\ensuremath{\mathcal P}}\xspace$, we say $P'$ \emph{extends} $P$ if $S\subsetneq S'$ and for all $x,y\in S$, $x\leq_{P} y$ if and only if $x\leq_{P'} y$. A poset $P\in \mbox{\ensuremath{\mathcal P}}\xspace$ is $(k,l)$-semisaturated if every poset $P'\in \mbox{\ensuremath{\mathcal P}}\xspace$ extending $P$ contains a $k$-chain or an $l$-antichain which is not completely contained in $P$. Similarly as before, $\osat_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,l)$ denotes the minimum size of such a semisaturated poset. If $P$ is additionally $k$-chain and $l$-antichain free, then we say that $P$ is $(k,l)$-saturated. We use $\sat_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,l)$ to denote the minimum size of such a saturated poset. We also define $\ram_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,l)$ as the maximum size of a $(k,l)$-saturated poset. Using this notation, Dilworth's theorem implies that \[\ram_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,l)=(k-1)(l-1).\] For the semisaturation number of posets we show the following. 
\begin{theorem}\label{thm:weakGeneralPoset} \[\osat_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,1)=0,\; \osat_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,2)=k-1.\] For $l\ge 3$, we have \[\osat_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,l)= \min(2k+l-5,k+3l-7).\] \end{theorem} For the saturation numbers of general posets, we prove the following theorem. \begin{theorem}\label{thm:satGeneralPoset} \[\sat_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,l)=(k-1)(l-1)=\ram_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,l).\] \end{theorem} When the Ramsey and the saturation numbers are the same, we gain further insight into the structure of the saturated objects. For example, every saturated object has the same size. Other examples of this kind include the intersecting families of subsets of an $n$-element set mentioned in the introduction and, as we will see, sequences without increasing and decreasing subsequences of given lengths. \subsection*{Monotone point sets and sequences} Another well-known Ramsey-type result is the Erdős-Szekeres theorem on monotone point sets. A point set in general position is said to be monotone increasing (resp. decreasing) if, when ordered according to the $x$-coordinates, the $y$-coordinates of the points are monotone increasing (resp. decreasing). The theorem of Erdős and Szekeres~\cite{ES} states that a set of $(k-1)(l-1)+1$ points in general position contains either an increasing subsequence of length $k$ or a decreasing subsequence of length $l$. There is an equivalent formulation of this result in terms of sequences. It states that a sequence of $(k-1)(l-1)+1$ numbers must contain either an increasing subsequence of length $k$ or a decreasing subsequence of length $l$. We will work with both of these formulations. We can convert this problem into a saturation problem in the usual way. A sequence $S$ of distinct numbers (resp.
point set with distinct $x$- and $y$-coordinates) is called $(k,l)$-saturated if it does not contain an increasing subsequence (resp. subset) of length $k$ or a decreasing subsequence (resp. subset) of length $l$, but any sequence (resp. point set with distinct $x$- and $y$-coordinates) $S'$ that contains $S$ as a subsequence (resp. subset) has either an increasing subsequence (resp. subset) of length $k$ or a decreasing subsequence (resp. subset) of length $l$. The functions $\sat_{\mbox{\ensuremath{\mathcal S}}\xspace}(k,l)$ and $\osat_{\mbox{\ensuremath{\mathcal S}}\xspace}(k,l)$ are defined analogously to before. Moreover, we define $\ram_{\mbox{\ensuremath{\mathcal S}}\xspace}(k,l)$ to be the maximum size of a $(k,l)$-saturated sequence. With this notation the Erdős-Szekeres theorem states that \[\ram_{\mbox{\ensuremath{\mathcal S}}\xspace}(k,l)=(k-1)(l-1).\] For saturation numbers, we prove the following theorem. \begin{theorem}\label{thm:seqSat} \[\sat_{\mbox{\ensuremath{\mathcal S}}\xspace}(k,l) = (k-1)(l-1)=\ram_{\mbox{\ensuremath{\mathcal S}}\xspace}(k,l).\] \end{theorem} In other words, Theorem~\ref{thm:seqSat} says that if a sequence of distinct numbers does not contain an increasing subsequence of length $k$ or a decreasing subsequence of length $l$, then either we can extend the sequence without creating such a subsequence or the length of the sequence is $(k-1)(l-1)$. For semisaturation numbers we have the following theorem. \begin{theorem}\label{thm:weakSatSeq} \[\osat_{\mbox{\ensuremath{\mathcal S}}\xspace}(1,l) = \osat_{\mbox{\ensuremath{\mathcal S}}\xspace}(k,1) = 0.\] For $k, l\geq 2$, we have \[\osat_{\mbox{\ensuremath{\mathcal S}}\xspace}(k,l) = \min(2k+l-5, 2l+k-5).\] \end{theorem} \subsection*{Convex point sets} Finally, we investigate the saturation problem for convex point sets in the plane. If a set of $n$ points is in convex position, then we say that the points form a \emph{convex $n$-set}.
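The sequence form of Theorem~\ref{thm:seqSat} can be illustrated on the smallest nontrivial case $k=l=3$: the sequence $2,1,4,3$ has length $(k-1)(l-1)=4$ and is $(3,3)$-saturated. The Python sketch below (a hypothetical checker of ours, not code from the paper) verifies this; since only the relative order of the entries matters, it suffices to try one representative new value per gap, at every position.

```python
def longest_monotone(seq, cmp):
    """Longest subsequence of seq ordered by cmp, by a simple O(n^2) DP."""
    best = []
    for i, x in enumerate(seq):
        best.append(1 + max((best[j] for j in range(i) if cmp(seq[j], x)), default=0))
    return max(best, default=0)

def is_33_saturated(seq):
    """True if seq avoids monotone 3-subsequences but every insertion creates one."""
    inc = lambda a, b: a < b
    dec = lambda a, b: a > b
    if longest_monotone(seq, inc) >= 3 or longest_monotone(seq, dec) >= 3:
        return False
    # One candidate value per gap between consecutive ranks covers all insertions.
    vals = sorted(seq)
    candidates = ([vals[0] - 1]
                  + [(a + b) / 2 for a, b in zip(vals, vals[1:])]
                  + [vals[-1] + 1])
    return all(
        longest_monotone(seq[:pos] + [v] + seq[pos:], inc) >= 3
        or longest_monotone(seq[:pos] + [v] + seq[pos:], dec) >= 3
        for pos in range(len(seq) + 1)
        for v in candidates
    )

saturated_example = is_33_saturated([2, 1, 4, 3])
```

By contrast, the shorter sequence $2,1,4$ is not saturated: inserting $3$ at the end keeps both monotone lengths below $3$.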
A set of $k$ points in convex position is called a $k$-cup (resp. $k$-cap) if the points lie on the graph of a convex (resp. concave) function (possibly multivalued). Let $\ram_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(k,l)$ denote the size of the largest set of points in general position which contains neither a subset forming a $k$-cup nor a subset forming an $l$-cap. Similarly, let $\sat_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(k,l)$ denote the size of the smallest point set which contains neither a subset forming a $k$-cup nor a subset forming an $l$-cap, such that adding any new point yields a $k$-cup or $l$-cap. Finally, let $\osat_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(k,l)$ denote the size of the smallest point set such that adding any new point introduces a new $k$-cup or a new $l$-cap. In 1935, Erdős and Szekeres~\cite{ES} showed that \[\ram_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(k,l)={\binom{k+l-4} {k-2}}.\] While we were not able to obtain non-trivial bounds for the saturation problem, for the semisaturation problem we have the following result. \begin{theorem}\label{thm:cupcapSemiSat} We have \begin{align*} \osat_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(k,3)=\osat_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(3,k)&=k-1. \end{align*} For $k\geq4$, we have \begin{align*} \osat_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(k,4)=\osat_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(4,k)&=2k-2, \end{align*} and for $k\geq 5$ and $l \ge 5$, \[2k+2l-12 \le \osat_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(k,l)\le 2k+2l-10.\] \end{theorem} The original motivation for investigating point sets free of cups and caps was to give an upper bound, namely ${ \binom{2n-4} {n-2}}$, on the maximum number of points in the plane avoiding a convex $n$-gon. Erd\H{o}s and Szekeres provided a lower bound of size $2^{n-1}$, and after a number of subsequent improvements a nearly optimal upper bound of size $2^{n+o(n)}$ was provided by Suk~\cite{suk}.
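The longest cup and cap in a given point set can be found by a standard $O(n^3)$ dynamic programme over ordered pairs, which is convenient for experimenting with the quantities above. The following Python sketch (helper names are ours) assumes distinct $x$-coordinates; a triple making a left turn is convex (extends a cup), a right turn is concave (extends a cap).

```python
def longest_cup_and_cap(points):
    """Sizes of the longest cup and cap via an O(n^3) DP over ordered pairs."""
    pts = sorted(points)           # sort by x; distinct x-coordinates assumed
    n = len(pts)

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    # cup[j][k] / cap[j][k]: longest cup / cap ending with the pair (j, k).
    cup = [[2] * n for _ in range(n)]
    cap = [[2] * n for _ in range(n)]
    best_cup = best_cap = min(n, 2)
    for j in range(n):
        for k in range(j + 1, n):
            for i in range(j):
                turn = cross(pts[i], pts[j], pts[k])
                if turn > 0:       # left turn: the triple lies on a convex arc
                    cup[j][k] = max(cup[j][k], cup[i][j] + 1)
                elif turn < 0:     # right turn: the triple lies on a concave arc
                    cap[j][k] = max(cap[j][k], cap[i][j] + 1)
            best_cup = max(best_cup, cup[j][k])
            best_cap = max(best_cap, cap[j][k])
    return best_cup, best_cap
```

For example, five points on the parabola $y=x^2$ form a $5$-cup and only trivial $2$-caps, matching the definitions above.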
An intriguing problem is to obtain the analogous saturation result. \begin{problem}\label{convex_sat} What is the minimum possible size of a point set in the plane in general position which does not contain a convex $n$-set, but adding any extra point (in general position) creates one? \end{problem} We could not even determine whether the answer is polynomial in $n$. Note that if we drop the general position assumption the problem becomes trivial: one can simply take $n-1$ points on a line. For the respective semisaturation problem, let $\osat_{\mbox{\ensuremath{\mathcal C}}\xspace}(n)$ denote the minimum possible size of a point set in the plane in general position, such that adding any extra point to it (in general position) creates a new convex $n$-set. With this notation we prove the following theorem. \begin{theorem}\label{thm:convex} \[\osat_{\mbox{\ensuremath{\mathcal C}}\xspace}(n) = 2n-4.\] \end{theorem} Unlike the problem about cups and caps, this problem generalizes easily to higher dimensions. Let $\osat_{\mbox{\ensuremath{\mathcal C}}\xspace,d}(n)$ denote the minimum possible size of a point set in $\mathbb{R}^d$, such that it is in general position and adding one extra point to it (in general position) creates a new convex $n$-set. We obtain the following result. \begin{theorem}\label{thm:osat00} \[\osat_{\mbox{\ensuremath{\mathcal C}}\xspace,d}(n) \ge n-1 + \floor{\frac{n-2}{d}}.\] \end{theorem} \section{Graphs}\label{section:graph} \begin{proof}[Proof of Theorem \ref{theorem:satgraphs}] First we prove the upper bounds. For $c=2$, one sharp construction is when the blue (the first color) edges form the graph consisting of $l-1$ disjoint copies of $K_{k-1}$. Another construction is when the red (the second color) edges form the graph consisting of $k-1$ disjoint copies of $K_{l-1}$. It is easy to see that these two-edge-colored graphs are saturated.
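For the smallest case $k=l=3$, the saturation of the first construction (here $l-1=2$ disjoint blue copies of $K_{k-1}=K_2$ on $(k-1)(l-1)=4$ vertices) can be confirmed exhaustively. The Python sketch below (helper names are ours, not from the paper) checks that the base coloring has no monochromatic triangle, while every one of the $2^4$ ways of coloring the edges to a fifth vertex creates one.

```python
from itertools import combinations, product

BLUE, RED = 0, 1

# Base colouring of K_4: blue edges are the two disjoint K_2's {0,1} and {2,3}.
base = {frozenset(e): RED for e in combinations(range(4), 2)}
base[frozenset((0, 1))] = BLUE
base[frozenset((2, 3))] = BLUE

def mono_triangle(colouring, n):
    """True if some triangle of K_n receives a single colour."""
    return any(
        colouring[frozenset((a, b))]
        == colouring[frozenset((a, c))]
        == colouring[frozenset((b, c))]
        for a, b, c in combinations(range(n), 3)
    )

# The base colouring avoids monochromatic triangles ...
base_free = not mono_triangle(base, 4)

# ... yet every 2-colouring of the four edges to a new vertex forces one.
always_forced = all(
    mono_triangle({**base, **{frozenset((v, 4)): c for v, c in enumerate(colours)}}, 5)
    for colours in product((BLUE, RED), repeat=4)
)
```
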
It is also easy to generalize these constructions to $c>2$ colors: Start with a graph $G_0$ with a single vertex. Now one by one for each color $i$ with $1\leq i\leq c$, construct a colored graph $G_i$ by replacing each vertex $v_j$ of $G_{i-1}$ with a clique $S_j$ of size $k_i-1$ such that all edges in the clique are in color $i$. The edges between every pair of such cliques $S_{j_1}$ and $S_{j_2}$ are in the same color as the edge $v_{j_1}v_{j_2}$ in $G_{i-1}$. It is not hard to see that the graph $G_c$ obtained by this construction is saturated. Now we prove the lower bound for $c=2$. Take a minimal two-edge-colored semisaturated graph (with respect to a blue $k$-clique and red $l$-clique). Extract maximal complete blue subgraphs (cliques) greedily one by one until we partition all the vertices into cliques (for a more advanced treatment of such greedy partitions see~\cite{gyorikeszegh}). Assume that the first $i$ blue cliques have size at least $k-1$ and the rest have size at most $k-2$. If $i\ge l-1$, then $G$ has at least $(k-1)(l-1)$ vertices and we are done. Otherwise when $i<l-1$, let $p$ be a new vertex, which we add to $G$ and connect with red edges to the first $i$ cliques and blue edges to the rest of the vertices. It is easy to see that in the resulting graph there is neither a blue clique of size $k$ containing $p$ nor a red clique of size $l$ containing $p$. Hence $G$ is not semisaturated, which contradicts our assumption. Finally, we prove the lower bound for $c>2$. Again, take a minimal $c$-edge-colored semisaturated graph $G$. If there is no clique of size $k_1-1$ of the first color in $G$, then we can connect a new vertex $p$ to every vertex with the first color. It follows that $G$ is not semisaturated, giving us a contradiction. Otherwise there exists a clique of size $k_1-1$ with color $1$ in $G$. Take such a clique $S$ and connect $p$ to every vertex in $S$ with the second color. 
Now $G-S$ must again contain a clique of size $k_1 -1$ with color $1$, otherwise we can connect $p$ to the rest of the vertices in $G$ with color $1$. We can connect $p$ to the vertices of this clique as well with the second color if $k_2>2$. Repeating this argument, we keep finding additional disjoint cliques of size $k_1-1$ of the first color. When we have $k_2-2$ such cliques, we continue to pull out cliques of size $k_1-1$ of the first color and connect them to $p$ with color $3$. Continuing in this way we find altogether $(k_2-2)+(k_3-2)+\dots+(k_c-2)+1$ cliques of size $k_1-1$ of color $1$, showing that indeed the number of vertices is at least $(k_1-1)(k_2+k_3+\dots+k_c-2c+3)$. As nothing in this argument was specific to the first color, the same bound holds with any $k_i$ in place of $k_1$. \end{proof} In the case of two colors we have determined the exact value of $\osat_{\mbox{\ensuremath{\mathcal G}}\xspace}$. Using the following trivial observation we see that the next open case is when $c=3$ and $k_1=k$, $k_2=k_3=3$, for which Theorem~\ref{theorem:satgraphs} gives the lower bound $3(k-1)$ and upper bound $4(k-1)$ for both the saturation and semisaturation problems. \begin{obs}\label{obs:kis2} \[\sat_{\mbox{\ensuremath{\mathcal G}}\xspace}(2,k_1,\dots, k_c)=\sat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1,\dots, k_c),\] \[\osat_{\mbox{\ensuremath{\mathcal G}}\xspace}(2,k_1,\dots, k_c)=\osat_{\mbox{\ensuremath{\mathcal G}}\xspace}(k_1,\dots, k_c).\] \end{obs} We state an equivalent formulation of the $c=3$, $k_1=k$, $k_2=k_3=3$ case as an open problem. \begin{problem} Is it true that the vertices of every $3$-edge colored complete graph $G$ with $n=4(k-1)$ vertices can be partitioned into three parts, the first part avoiding a $K_{k-1}$ of the first color, the second part avoiding edges of the second color and the third part avoiding edges of the third color?
This is equivalent to the $\osat$ problem; in the $\sat$ variant we further assume that $G$ itself avoids $K_{k}$ of the first color and triangles of the second and third colors. \end{problem} We now give a probabilistic argument improving the upper bound in Theorem~\ref{theorem:satgraphs} in some cases. \begin{proof}[Proof of Theorem \ref{thm:random}] Consider a uniform random coloring of the edges of the complete graph $K_{nc}$ with $c$ colors, that is, each edge is assigned one of the $c$ colors uniformly at random and independently. We claim that the resulting edge-colored graph $G$ is semisaturated with positive probability if $n$ is large enough. To show that $G$ is semisaturated, it suffices to show that the subgraph of $G$ induced by any set of $n$ vertices contains a monochromatic $K_{k-1}$ of each color. Indeed, if we add an extra vertex $q$ to $G$ and color the edges incident to $q$ in any way to get the graph $G'$, then by the pigeonhole principle there will be $n$ edges incident to $q$ having the same color $d$. Since the endpoints of those $n$ edges (other than $q$) induce a subgraph in $G$ that contains a monochromatic $K_{k-1}$ of each color, we can then find a new $K_k$ in color $d$ in $G'$. Note that since we pick the color of each edge uniformly at random and independently, each color class can be considered as an instance of the Erd\H{o}s-R\'enyi random graph $G(nc,\frac{1}{c})$. So we need to bound the probability of $G(nc,\frac{1}{c})$ having $n$ vertices whose induced subgraph does not contain a copy of $K_{k-1}$. We first need many pairwise edge-disjoint copies of $K_k$ in $K_n$. For our purposes the following simple lemma is enough: \begin{lemma}\label{decomp} We can find $\frac{1}{16k^2}n^2$ pairwise edge-disjoint copies of $K_k$ in $K_n$ for any $n\ge 4k^2$. \end{lemma} \begin{proof} Lemma~\ref{decomp} can be proved in many ways. For example, it easily follows from the following construction.
Let $\{(i,j)|i\in [k], j\in[r]\}$ be the first $kr$ vertices of $K_n$, where $r$ is the largest prime such that $kr\le n$. From Bertrand's postulate we know that $r$ is at least $\floor {\frac{n}{2k}}$. Since $n>4k$ we have $\floor{\frac{n}{2k}}\ge \frac{n}{2k}-1\ge \frac{n}{4k}$. Consider the cliques whose vertex set has the following form: $\{(1,a+b),(2,a+2b),\dots, (k,a+bk) \}$ where $a,b\in [r]$ and the second coordinates are understood modulo $r$. It is easy to see that this gives us $\frac{1}{16k^2}n^2$ pairwise edge-disjoint copies of $K_k$. Indeed, suppose that two copies share an edge; then for some values $(a,b)\ne(a',b')$ and $i\ne j$ we would have $a+ib=a'+ib'$ and $a+jb=a'+jb'$ modulo $r$, implying $(i-j)(b-b')=0$ modulo $r$. As $1\le i,j\le k\le r$ (note that $n\ge 4k^2$ implies $k\le r$), this gives $b=b'$, which in turn implies $a=a'$, a contradiction. \end{proof} Now we can calculate a bound on the probability that $G(n,p)$ contains no $K_{k-1}$ (where $p=1/c$). Let $n\ge 4k^2$ be a parameter to be chosen later. First we fix $\frac{1}{16k^2}n^2$ pairwise edge-disjoint copies of $K_{k-1}$ using Lemma~\ref{decomp} (applied with $k-1$ in place of $k$). For each copy the probability that it is not in $G(n,p)$ is $1-p^{\binom{k-1}{2}}$. Since the cliques are edge-disjoint, the probability that no $K_{k-1}$ appears at all is at most $\left ( 1-p^{\binom{k-1}{2}}\right )^{\frac{n^2}{16k^2}}\le e^{-p^{\binom{k-1}{2}}\frac{n^2}{16k^2}}$. Returning to the original problem we see that there are $\binom{cn}{n}\le (ec)^n$ ways to choose $n$ vertices out of $nc$. The colors have a symmetric role in the problem. Hence by the union bound, the probability that we can find $n$ vertices and a color such that there is no $K_{k-1}$ of that color among those $n$ vertices is at most \[c(ec)^n e^{-p^{\binom{k-1}{2}}\frac{n^2}{16k^2}}.\] Picking $n=3\log(c)16k^2c^{\binom{k-1}{2}}$ we get \[c(ec)^n e^{-p^{\binom{k-1}{2}}\frac{n^2}{16k^2}}\le e^nc^{n+1}e^{-3\log(c)n}< 1.\] Therefore the probability of the bad cases is less than $1$.
So we can find a semisaturated graph on $cn=c\cdot 3\log(c)16k^2c^{\binom{k-1}{2}}\le 48k^2 c^{k^2}$ vertices. \end{proof} \section{Posets}\label{section:poset} \begin{proof}[Proof of Theorem \ref{thm:weakGeneralPoset}] If $l=1$, then obviously adding an element to the empty poset will introduce an antichain of size $1$. Thus $\osat_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,1)=0$. Consider the case when $l=2$. If a newly added element is incomparable with some element of the poset, then we find a new antichain of size two. So if $P$ is a poset which is not semisaturated for $l=2$, then we must be able to add an element comparable to all elements of $P$ without introducing a new chain of size $k$. This is possible if and only if $P$ does not contain a chain of size $k-1$. On the other hand, the smallest poset containing a chain of size $k-1$ is the poset containing only this chain on $k-1$ elements, and this poset is clearly semisaturated for $2$-antichains and $k$-chains. Thus $\osat_{\mbox{\ensuremath{\mathcal P}}\xspace}(k,2)=k-1$ (and this is the only semisaturating poset of this size). Now we may assume that $l\ge 3$. We show two semisaturated posets, one with $2k+l-5$ elements and one with $k+3l-7$ elements. For the first construction consider an antichain $A$ of size $l-1$ and then add two chains, $C_1$ and $C_2$, of length $k-2$ to the poset such that $C_1$ lies below the elements of $A$ and $C_2$ lies above them (see Figure~\ref{fig:theo5}). The resulting poset is semisaturated. Indeed, if we add $p$ such that it is not comparable to any element of $A$, then $A\cup\{p\}$ is an antichain of length $l$. If $p$ lies below an element $a$ in $A$, then $C_2\cup\{a,p\}$ is a chain of length $k$. Similarly, if $p$ lies above an element $a$ in $A$, then $C_1\cup\{a,p\}$ is a chain of length $k$. Therefore we cannot add $p$ without creating a chain of size $k$ or an antichain of size $l$.
\begin{figure}[!ht] \centering \begin{minipage}{0.4\textwidth} \input{posetconst4} \end{minipage} \begin{minipage}{0.4\textwidth} \input{posetconst3} \end{minipage} \caption{The two constructions for $k=6$, $l=5$.} \label{fig:theo5} \end{figure} The second construction starts with two disjoint antichains $A_1$ and $A_2$ of length $l-1$. Then we add a chain $C$ of length $k-3$ between them, that is, every element of $C$ is above every element of $A_1$ and below every element of $A_2$. Finally we add an antichain $B$ of $l-2$ elements whose elements are incomparable to all other elements. To see that this construction is semisaturated, suppose that we can add an element $p$ without creating a new chain of length $k$ or a new antichain of length $l$. Then $p$ cannot be above any element of $A_2$ or below any element of $A_1$. On the other hand $p$ must be comparable to some elements $a_1\in A_1$ and $a_2\in A_2$, otherwise we would get an antichain of length $l$. Consequently, $p$ is above $a_1$ and below $a_2$. We know that $C\cup \{a_1,a_2\}$ is a chain of length $k-1$, so there must be an element $c\in C$ such that $p$ and $c$ are incomparable, otherwise $C\cup\{a_1,a_2,p\}$ would be a chain of size $k$. Since $B\cup \{c\}$ is an antichain and $B \cup \{p,c\}$ cannot be an antichain, $p$ must be comparable to some element $q\in B$. If $p$ is above $q$, then $a_2$ is comparable to $q$ through $p$. If $p$ is below $q$, then $a_1$ is comparable to $q$ through $p$. But neither case is possible since $q$ is incomparable to both $a_1$ and $a_2$ in the original poset. To show that we need at least $\min(2k+l-5,k+3l-7)$ elements for semisaturation we start with the following observation. Let $P$ be a semisaturated poset, and let $L$ denote those elements that are the minimal elements of a chain of length $k-1$ in $P$. We claim that $|L|\ge l-1$.
To see this, observe that if we add an element $p$ below every element of $P\setminus L$ and incomparable to every element of $L$ (this is possible since $L$ is clearly a downset in the poset), then no chain of length $k$ is created. Since $P$ is semisaturated, it follows that $p$ must be in an antichain of length $l$ which lies in $L\cup \{p\}$. Thus, $|L|\ge l-1$ and $L$ contains an antichain $L'$ of size $l-1$. Similarly, we define $U$ to be the set of elements that are maximal elements of a chain of length $k-1$. In the same way we can see that $|U|\ge l-1$, and $U$ contains an antichain $U'$ of size $l-1$. If $U\cap L\ne \emptyset$, then there is a chain of length $2k-3$. Since this chain intersects $L'$ in at most one element, we have at least $2k-3+l-1-1=2k+l-5$ elements. If $U\cap L= \emptyset$ and $k\le 3$, then the number of elements is at least $|U|+|L|\ge 2l-2\ge 2k+l-5$, as required (using that $l\ge 3$). From now on we assume that $k\ge 4$ and $U\cap L=\emptyset$, and consider two cases. First suppose that every element of $U$ is comparable to every element of $L$. Then we can add $p$ to the poset such that it lies below every element of $U$, above every element of $L$ and incomparable to every other element. Suppose $p$ creates a new chain $C$ of length $k$. Clearly $C\subset U\cup L\cup \{p\}$. By symmetry we may assume that $|C\cap U|\ge \ceil{\frac{k-1}{2}}$. Let $u$ be the first element in $C$ above $p$. Since $u\in U$, there is a chain $C_2$ of size $k-1$ ending in $u$. Consequently, $(C\cap U) \cup C_2$ is a chain of length at least $k-1+\ceil{\frac{k-1}{2}}-1$. In total we have found $|U'\cup L'\cup ((C\cap U) \cup C_2)|$ elements.
Any chain intersects any antichain in at most one element; therefore \[|U'\cup L'\cup ((C\cap U) \cup C_2)|\ge l-1+l-1+k-1+\ceil{\frac{k-1}{2}}-1-2=2l+k+\ceil{\frac{k-1}{2}}-6.\] As $\min(k+3l-7,2k+l-5)$ is at most \[\floor{\frac{(2k+l-5)+(k+3l-7)}{2}}=\floor{2l+\frac{3}{2}k-6}=2l+k+\floor{\frac{k}{2}}-6=2l+k+\ceil{\frac{k-1}{2}}-6,\] we have at least $\min(k+3l-7,2k+l-5)$ elements. Suppose now that adding $p$ does not create a new chain of length $k$. Then, since $P$ is $(k,l)$-semisaturated, adding $p$ must create a new antichain of length $l$. Since $p$ is comparable to the elements of $U$ and $L$, we have found $l-1$ new elements that are not in $L\cup U$. Therefore we have three disjoint antichains of length $l-1$ ($L'$, $U'$ and this antichain containing $p$). We must also have a chain of length $k-1$ and this chain intersects each of the three antichains in at most one element. Therefore we have at least $3(l-1)+k-1-3=k+3l-7$ elements. The only remaining case is when $U\cap L= \emptyset$ and we can find $u\in U$ and $q\in L$ such that $u$ and $q$ are incomparable. This implies that the chains going up from $q$ and going down from $u$ are disjoint. So we have two disjoint chains of length $k-1$, which together with the antichains $L'$ and $U'$ give at least $2(k-1)+2(l-1)-4=2k+2l-8\ge 2k+l-5$ elements. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:satGeneralPoset}] Given a poset $P$ with fewer than $(k-1)(l-1)$ elements which contains no $k$-chain and no $l$-antichain, we need to show that we can always add an element $p$ to $P$ in such a way that the resulting poset still avoids $k$-chains and $l$-antichains. If the maximum size of an antichain in $P$ is at most $l-2$, then we can easily add a new element to $P$ incomparable with all elements of $P$, and thus still avoid $k$-chains and $l$-antichains. Suppose now that the maximum size of an antichain is $l-1$. By Dilworth's theorem we can decompose $P$ into $l-1$ chains.
Since $P$ is $k$-chain free, all of these chains have size at most $k-1$; moreover, since $P$ has fewer than $(k-1)(l-1)$ elements, at least one of them has size strictly less than $k-1$. Denote one such chain by $C$. For an element $c\in C$ denote by $D_c$ the subchain of $C$ consisting of $c$ and the elements that are below $c$ in $C$. Similarly, $U_c$ is the subchain of $C$ containing $c$ and the elements above $c$ in $C$. First suppose that there is no chain of size $k-1$ in $P$ whose bottom element is the bottom element $q'$ of $C$. Then we add a new element $p$ directly under $q'$ and incomparable with all the elements that are not above $q'$ (Figure~\ref{fig:theo6a}, left side). We claim that $P\cup \{p\}$ still avoids $k$-chains and $l$-antichains and so $P$ was not saturated, a contradiction. Indeed, as the poset can still be partitioned into $l-1$ chains (the former partition of $P$ with the difference that $C$ is extended with $p$ as a new bottom element under $q'$), it follows that there is no antichain of length $l$. Also, a chain of length $k$ must have $p$ as its bottom element, and then $q'$ as the element directly above $p$, but then this chain minus $p$ would be a chain of length $k-1$ in $P$ with bottom element $q'$, contradicting our assumption. The case when there is no chain of size $k-1$ in $P$ whose top element is the top element of $C$ is handled similarly. \begin{figure}[!ht] \centering\begin{minipage}{0.4\textwidth} \includegraphics{theorem6fig.pdf} \end{minipage} \begin{minipage}{0.4\textwidth} \includegraphics{theorem6fig3.pdf} \end{minipage} \caption{Adding $p$ to the bottom of $C$ and finding a long chain if $q$ is above $r$.} \label{fig:theo6a} \end{figure} Thus, we may assume that there is a largest element $q$ of $C$ for which there is a chain $C_q$ of size $k-1$ containing $q$ whose part below $q$ (including $q$) coincides with $D_q$.
Similarly, there is a smallest element $r$ of $C$ for which there is a chain $C_r$ of size $k-1$ containing $r$ whose part above $r$ (including $r$) coincides with $U_r$. We claim that $r$ is above $q$. Suppose on the contrary that $q$ is above $r$ or that they coincide. Then taking the part of $C_q$ above $q$, the part of $C$ between $q$ and $r$ and the part of $C_r$ below $r$ we get a chain $C'$ of length at least $k$, a contradiction. Indeed, the sum of $\abs{C}$ and $\abs{C'}$ is the same as the sum of $\abs{C_q}$ and $\abs{C_r}$, and so, as $C_q$ and $C_r$ have $k-1$ elements and $C$ has at most $k-2$ elements, it follows that $C'$ must have at least $k$ elements (see Figure~\ref{fig:theo6a}). Thus $q$ is not the top element of $C$ and there is an element $s$ of $C$ directly above it. Now add a new element $p$ directly above $q$ and below $s$ such that $p$ is incomparable to all elements that are not below $q$ or above $s$ (Figure~\ref{fig:theo6b}). We claim that $P\cup \{p\}$ still avoids $k$-chains and $l$-antichains and thus $P$ was not saturated, a contradiction. The poset $P\cup \{p\}$ can still be partitioned into $l-1$ chains by taking the previous partition except that $C$ is extended with $p$ put between $q$ and $s$. Thus, the resulting poset still avoids $l$-antichains. Suppose now that it contains a $k$-chain; this chain necessarily contains $p$, and then $q$ is directly below $p$ in the chain and $s$ is directly above $p$ in the chain. Deleting $p$ from this $k$-chain we get a $(k-1)$-chain $C'$ in $P$ in which $s$ is directly above $q$. Let $D'_q$ be the part of $C'$ below $q$ (including $q$) and $U'_{s}$ be the rest of $C'$, that is, the part of $C'$ above $s$ (including $s$). By the definition of $q$ there is a chain of size $k-1$ whose bottom part is $D_q$; thus $D'_q$ can have size at most that of $D_q$, since otherwise the top part of this chain extended with $D'_q$ would be a chain of size at least $k$.
Again by the maximality of $q$, the chain formed by $D_q$ and $U'_{s}$ can have size at most $k-2$: otherwise it would contain a chain of size $k-1$ whose part below $s$ (including $s$) coincides with $D_s$, and since $s$ lies above $q$ in $C$, this would contradict the choice of $q$. Thus the chain formed by $D'_q$ and $U'_{s}$ can also have size at most $k-2$, but this is exactly $C'$, which was of size $k-1$, a contradiction. \begin{figure}[!ht] \centering\begin{minipage}{0.4\textwidth} \includegraphics[width=5cm]{theorem6fig2.pdf} \end{minipage} \begin{minipage}{0.4\textwidth} \includegraphics[width=5cm]{theorem6fig2b.pdf} \end{minipage} \caption{Inserting $p$ into the poset.} \label{fig:theo6b} \end{figure} \end{proof} \section{Saturation of monotone point sets and sequences}\label{section:strongseq} Throughout this section, the term sequence will always refer to a sequence of distinct real numbers. We may, without loss of generality, assume that the numbers in the sequence are positive. \begin{definition} A sequence $S$ is $(k,l)$-saturated if $S$ contains no increasing subsequence of length $k$ and no decreasing subsequence of length $l$, but any sequence $S'$ containing $S$ as a proper subsequence contains either an increasing subsequence of length $k$ or a decreasing subsequence of length $l$. Let $\sat_{\mbox{\ensuremath{\mathcal S}}\xspace}(k,l)$ denote the minimum possible length of a $(k,l)$-saturated sequence. \end{definition} Let $\vec{a} = [a_1, a_2,\dots, a_m]$ be a $(k+1,l+1)$-saturated sequence. We define the function ${\gamma_{\vec{a}}:[m] \to [l] \times [k]}$ by $\gamma_{\vec{a}}(t)=(i,j)$, where $i$ is the length of the longest decreasing subsequence of $\vec{a}$ ending at $a_t$ and $j$ is the length of the longest increasing subsequence of $\vec{a}$ ending at $a_t$. Since for every $n' < n\le m$ we can extend either the longest increasing subsequence or the longest decreasing subsequence ending at $a_{n'}$ by appending $a_n$ to the end, we have the following observation.
\begin{obs}\label{obs:inc} If $n'<n$, then at least one coordinate of $\gamma_{\vec{a}}(n)$ is strictly larger than the corresponding coordinate of $\gamma_{\vec{a}}(n')$. \end{obs} Define an $l \times k$ matrix $R_{\vec{a}} = (r_{ij})$ corresponding to the sequence $\vec{a}$ by setting \[r_{ij} = \begin{cases} \gamma_{\vec{a}}^{-1}(i,j) & \textrm{if $(i,j) \in \image(\gamma_{\vec{a}})$,} \\ 0 & \textrm{otherwise.} \end{cases}\] Define an $l \times k$ matrix $V_{\vec{a}} = (v_{ij})$ corresponding to the sequence $\vec{a}$ by setting \[v_{ij}= \begin{cases} a_{\gamma_{\vec{a}}^{-1}(i,j)} & \textrm{if $(i,j) \in \image(\gamma_{\vec{a}})$,} \\ 0 & \textrm{otherwise.} \end{cases} \] Finally, let $W_{\vec{a}}=(w_{ij})$ be the $l \times k$ matrix such that the $i$-th row of $W_{\vec{a}}$ is the $(l+1 -i)$-th row of $V_{\vec{a}}$ for every $i \in [l]$. For example, consider the sequence $\vec{a}=[33,11,22,55,44]$ with $k=3$, $l=2$. In this case $R_{\vec{a}}=\begin{pmatrix} 1 & 0 & 4\\ 2 & 3 & 5 \end{pmatrix}$ and $V_{\vec{a}}=\begin{pmatrix} 33 & 0 & 55\\ 11 & 22 & 44 \end{pmatrix}$ and thus $W_{\vec{a}}=\begin{pmatrix}11 & 22 & 44\\ 33 & 0 & 55 \end{pmatrix}$. \begin{definition} We say that a matrix $(m_{ij})$ is \emph{partially increasing} if all its entries are nonnegative, its positive entries are distinct, and $m_{i_1j_1}\le m_{i_2j_2}$ whenever $i_1\le i_2$, $j_1\le j_2$ and both of these entries are nonzero. \end{definition} \begin{lemma}\label{lem:young_tableau} $R_{\vec{a}}$ and $W_{\vec{a}}$ are both partially increasing. \end{lemma} \begin{proof} Observation~\ref{obs:inc} implies that $R_{\vec{a}}$ is partially increasing. To show that $W_{\vec{a}}$ is partially increasing we need to prove that if we take $i_1\ge i_2$ and $j_1\le j_2$, then $v_{i_1,j_1}\le v_{i_2,j_2}$ whenever $v_{i_1,j_1}$ and $v_{i_2,j_2}$ are positive numbers. Assume on the contrary that $v_{i_1,j_1}> v_{i_2,j_2}>0$. Let $n, n'$ be the indices such that $v_{i_1,j_1}=a_n>a_{n'}=v_{i_2,j_2}$.
First, if $n<n'$, then a decreasing subsequence of length $i_1$ ending in $a_{n}$ can be extended with $a_{n'}$, and thus the longest decreasing subsequence ending in $a_{n'}$ has length at least $i_1+1$, which contradicts that $i_2 \leq i_1$. Second, if $n>n'$, then an increasing subsequence of length $j_2$ ending in $a_{n'}$ can be extended with $a_{n}$, and thus the longest increasing subsequence ending in $a_{n}$ has length at least $j_2+1$, which contradicts that $j_1 \leq j_2$. \end{proof} Let $\mathbb{S}_{k,l}$ be the set of extremal $(k+1, l+1)$-saturated sequences of length $kl$ whose entries are distinct integers in $[kl]$. Observe that when $\vec{a} \in \mathbb{S}_{k,l}$, $R_{\vec{a}}$ and $W_{\vec{a}}$ have all positive entries, and are increasing in both rows and columns by Lemma~\ref{lem:young_tableau}. Before moving on to the proof of Theorem~\ref{thm:seqSat}, we briefly discuss how our results relate to the classification of extremal sequences for the Erd\H{o}s-Szekeres theorem in terms of Young tableaux. As $R_{\vec{a}}$ and $W_{\vec{a}}$ have values from $[kl]$, they correspond to a pair of standard rectangular Young tableaux in $\mathbb{Y}_{l,k} \times \mathbb{Y}_{l,k}$ (with entries in $[kl]$). It was observed earlier by Knuth \cite[Exercise 5.1.4.9]{Knuth} (see also \cite[Example 7.23.19(b)]{Stanley}) that the set $\mathbb{S}_{k,l}$ is in bijection with the set of pairs of standard Young tableaux $\mathbb{Y}_{l,k} \times \mathbb{Y}_{l,k}$ (with entries in $[kl]$) via the Robinson-Schensted correspondence. Romik~\cite{romik} (see also~\cite{Czabarka-W}) gave an explicit bijection via the function $\phi(\vec{a}) = (R_{\vec{a}},W_{\vec{a}})$. Theorem~\ref{thm:seqSat} shows that all $(k+1,l+1)$-saturated sequences are in fact extremal, i.e., have length $kl$. Hence there is also a bijection between the set of all $(k+1,l+1)$-saturated sequences and the set of pairs of standard rectangular Young tableaux (with entries in $[kl]$).
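For small instances the matrices above can be computed mechanically. The following short script (an illustrative sketch; the function names are our own) recomputes $R_{\vec a}$, $V_{\vec a}$ and $W_{\vec a}$ for the worked example $\vec{a}=[33,11,22,55,44]$ with $k=3$, $l=2$.

```python
def gamma(seq):
    """For each position t of seq (distinct entries), return the pair (i, j):
    i is the length of the longest decreasing subsequence ending at seq[t],
    j is the length of the longest increasing subsequence ending at seq[t]."""
    dec = [1] * len(seq)
    inc = [1] * len(seq)
    for t in range(len(seq)):
        for s in range(t):
            if seq[s] > seq[t]:
                dec[t] = max(dec[t], dec[s] + 1)
            else:  # entries are distinct, so seq[s] < seq[t]
                inc[t] = max(inc[t], inc[s] + 1)
    return list(zip(dec, inc))

def matrices(seq, k, l):
    """Build the l x k matrices R, V, W of the text; 0 marks cells
    outside the image of gamma."""
    R = [[0] * k for _ in range(l)]
    V = [[0] * k for _ in range(l)]
    for t, (i, j) in enumerate(gamma(seq), start=1):
        R[i - 1][j - 1] = t            # 1-based position of the entry in seq
        V[i - 1][j - 1] = seq[t - 1]   # the entry itself
    W = V[::-1]                        # row i of W is row l+1-i of V
    return R, V, W

R, V, W = matrices([33, 11, 22, 55, 44], k=3, l=2)
```

The computed matrices agree with the three matrices displayed after the definition of $W_{\vec a}$.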
\begin{proof}[Proof of Theorem~\ref{thm:seqSat}] By the Erd\H{o}s-Szekeres theorem for sequences, every $(k+1,l+1)$-saturated sequence has length at most $\ram_{\mbox{\ensuremath{\mathcal S}}\xspace}(k+1,l+1)=kl$, so $\sat_{\mbox{\ensuremath{\mathcal S}}\xspace}(k+1,l+1) \leq kl$. Hence it suffices to show that $\sat_{\mbox{\ensuremath{\mathcal S}}\xspace}(k+1,l+1) \geq kl$. To this end, let $\vec{a} = [a_1, a_2,\dots, a_m]$ be an arbitrary sequence of length $m < kl$ containing no increasing subsequence of length $k+1$ and no decreasing subsequence of length $l+1$; we need to show that $\vec{a}$ is not saturated. To simplify our notation let $\gamma=\gamma_{\vec{a}}$, $R=R_{\vec{a}}$, $V=V_{\vec{a}}$ and $W=W_{\vec{a}}$. \begin{claim}\label{cl:completion} If a partially increasing matrix $M$ contains a $0$, then that $0$ can be replaced by a positive number so that the resulting matrix is still partially increasing. \end{claim} \begin{proof}[Proof of Claim~\ref{cl:completion}] Let $(i_0,j_0)$ be the position of a $0$ in $M$. If there is no nonzero entry in any position $(i,j)$ with $i_0 \le i$, $j_0 \le j$, then replace the $0$ with any number larger than all entries of the matrix. Otherwise let $t$ be the minimum of the nonzero entries $(M)_{ij}$ with $i\ge i_0$ and $j\ge j_0$. Change the value of $M$ at $(i_0,j_0)$ to $t-\epsilon$, where $\epsilon$ is smaller than the difference of any two nonzero values of $M$ and also smaller than $t$. It is easy to see that since $M$ is partially increasing, the new matrix obtained is also partially increasing. \end{proof} By repeatedly applying Claim~\ref{cl:completion}, if $r_{i,j}=0$ (and thus $v_{i,j}=w_{l+1-i,j}=0$), we can replace these $0$'s with positive numbers (not necessarily integers) such that $R$ and $W$ are still partially increasing. We then relabel the entries of $R$ (if necessary) with the integers $1,2,\dots,kl$ while respecting their order.
Call the resulting matrices $R'$ and $W'$, respectively; we get $V'$ from $W'$ by reversing the order of its rows, that is, the $i$-th row of $V'$ is the $(l+1-i)$-th row of $W'$. Continuing the example above we obtain that ${R'=\begin{pmatrix} 1 & 3 & 5\\ 2 & 4 & 6 \end{pmatrix}}$, $V'=\begin{pmatrix} 33 & 50 & 55\\ 11 & 22 & 44 \end{pmatrix}$ and $W'=\begin{pmatrix} 11 & 22 & 44\\ 33 & 50 & 55 \end{pmatrix}$. Now we can extend $\vec{a}$ as follows: for each position $(i,j)$ with $r_{ij}=0$, insert the new number $(V')_{i,j}$ so that it occupies the $(R')_{i,j}$-th position of the resulting sequence. Call the resulting sequence $\vec{b}$ (in our example $\vec{b}=[33,11,50,22,55,44]$). We see that $V'$ records the values in $\vec{b}$ and $R'$ records the positions of these values. We want to show that $\vec{b}$ contains no increasing subsequence of length $k+1$ and no decreasing subsequence of length $l+1$. Suppose $\vec{b}$ contains an increasing subsequence of length $k+1$. Then at least two elements of this subsequence must be in the same column of $V'$ by the pigeonhole principle. Since $V'$ is just $W'$ with its rows reversed, its columns are decreasing from top to bottom. Hence, of these two elements, the one in the lower row is smaller, and since they lie on an increasing subsequence, it must appear earlier in $\vec{b}$. On the other hand, the columns of $R'$ are increasing from top to bottom, so the element in the lower row must come later in $\vec{b}$, a contradiction. Similarly, $\vec{b}$ does not have a decreasing subsequence of length $l+1$. Hence $\vec{a}$ is not saturated and the proof is complete. \end{proof} \section{Semisaturation of monotone point sets and sequences}\label{section:seq} In this section we use the point set formulation of the problem; the term point set always refers to a set of points in general position (i.e., no two points share a common $x$ or $y$ coordinate). We start with the following trivial lemma. \begin{lemma} \label{lem:intersect} If $I$ is an increasing subset and $D$ is a decreasing subset of the point set $P$, then they intersect in at most one element.
\end{lemma} \begin{proof}[Proof of Theorem \ref{thm:weakSatSeq}] Fix some $n\in\mathbb{Z}^{+}$, and assume that $P$ is semisaturated with respect to monotone $n$-sequences. Then we know that any point not already contained in $P$ must combine with $n-1$ points from $P$ to form a monotone $n$-sequence. Note that any subsequence of a monotone sequence must itself be a monotone sequence; as such, our analysis will focus on monotone $(n-1)$-sequences in $P$, and consider which points in the plane can be added to a given $(n-1)$-sequence to produce a monotone $n$-sequence. We say an $(n-1)$-sequence {\em blocks} such points; thus, we can say that $P$ is semisaturated with respect to monotone $n$-sequences if and only if every point in the plane is either contained in $P$, or blocked by some monotone $(n-1)$-sequence from $P$. Consider some increasing $(k-1)$-sequence $\point[1],\dots,\point[k-1]$. The set of points blocked by the sequence is precisely the union $\cup_{i=1}^{k}\region[i]$ of regions given by \begin{minipage}{0.55\textwidth} \begin{alignat*}{5} &\region[1] &&=& (-\infty,x_1]&\times(-\infty,y_1];\\ &\region[i+1] &&=& [x_i,x_{i+1}]&\times[y_i,y_{i+1}],\text{ for $i=1,\dots,k-2$;}\\ &\region[k] &&=& [x_{k-1},\infty)&\times[y_{k-1},\infty). \end{alignat*} \end{minipage} \begin{minipage}{0.4\textwidth} \includegraphics{squenceproog.pdf} \end{minipage} \smallskip Decreasing sequences behave similarly. For our proof, we focus on points that are outside some fixed axis-parallel rectangle that contains all the points in $P$, and so are only interested in regions $\region[1]$ and $\region[k]$ above. More precisely, we pick bounding values $\lo{x}$, $\hi{x}$, $\lo{y}$, and $\hi{y}$ such that for each point $\coords\in P$ we have that $\lo{x}<x<\hi{x}$ and $\lo{y}<y<\hi{y}$.
We will focus on how points lying on the lines forming the boundary of this region are blocked; note that these inequalities guarantee that such points can {\em only} be blocked by being at one end or the other of an increasing $(k-1)$-sequence or decreasing $(l-1)$-sequence. To begin, consider points along the line $y=\lo{y}$; fix any such point $(x,\lo{y})$. Now, since $\lo{y}$ is strictly less than the $y$-coordinate of any point in $P$, we may conclude that if any $(n-1)$-sequence of points $\point[1],\dots,\point[n-1]$ from $P$ blocks $(x,\lo{y})$, it must be the case either that the sequence is decreasing (and $n=l$) and $x_{n-1}\le x$, or that the sequence is increasing (and $n=k$) and $x\le x_{1}$. Viewed from the opposite perspective, we can see that any decreasing $(l-1)$-sequence $\point[1],\dots,\point[l-1]$ in $P$ blocks the left-bounded interval $[x_{l-1},\infty)$ on the line $y=\lo{y}$. Symmetrically, any increasing $(k-1)$-sequence blocks the right-bounded interval $(-\infty,x_1]$. For the entire line $y=\lo{y}$ to be blocked, then, we need a left-bounded interval and a right-bounded interval that intersect each other; this equates to a decreasing $(l-1)$-sequence and an increasing $(k-1)$-sequence such that the former lies entirely to the left of the latter. Let $\lo{D}$ and $\lo{I}$ be a decreasing $(l-1)$-sequence and an increasing $(k-1)$-sequence from $P$, respectively, such that for all $\coords\in\lo{D}$ and all $\coordsP\in\lo{I}$, we have that $x \le x'$. The preceding argument guarantees the existence of such sequences, and a symmetric argument with respect to the line $y=\hi{y}$ gives us an increasing $(k-1)$-sequence $\hi{I}$ and a decreasing $(l-1)$-sequence $\hi{D}$ in $P$ such that for all $\coords\in\hi{I}$ and all $\coordsP\in\hi{D}$ we have that $x \le x'$. Our claim is that $\abs{\lo{D}\cup\lo{I}\cup\hi{D}\cup\hi{I}}\ge \min(2k+l-5,2l+k-5)$. We break our analysis into three cases.
\begin{itemize} \item[Case:] $\lo{D}\cap\hi{D}=\emptyset$. Recall that we have assumed that no two points in $P$ share either a common $x$-value or a common $y$-value; thus, Lemma~\ref{lem:intersect} tells us that $\abs{\lo{I}\cap\lo{D}}\le1$ and $\abs{\lo{I}\cap\hi{D}}\le1$. Thus, we get that \begin{align*} \abs{\lo{D}\cup\lo{I}\cup\hi{D}\cup\hi{I}} &\ge \abs{\lo{I}\cup\lo{D}\cup\hi{D}}\\ &\ge \abs{\lo{I}}+\abs{\lo{D}}+\abs{\hi{D}}-\abs{\lo{I}\cap\lo{D}}-\abs{\lo{I}\cap\hi{D}}-\abs{\lo{D}\cap\hi{D}}\\ &\ge k-1 + 2(l-1) - 2\\ &= 2l+k -5, \end{align*} exactly as desired. \item[Case:] $\lo{I}\cap\hi{I}=\emptyset$. This case is symmetric to the preceding one. Applying Lemma~\ref{lem:intersect} appropriately gives us that \begin{equation*} \abs{\lo{D}\cup\lo{I}\cup\hi{D}\cup\hi{I}} \ge \abs{\lo{D}\cup\lo{I}\cup\hi{I}} \ge 2k + l -5, \end{equation*} once again. \item[Case:] $\abs{\lo{D}\cap\hi{D}},\abs{\lo{I}\cap\hi{I}}>0$. Let $\coords\in\lo{D}\cap\hi{D}$. Now, by our definitions of $\hi{I}$ and $\hi{D}$, we must have that for all $\coordsP\in\hi{I}$, $x' \le x$ holds. Similarly, our definitions of $\lo{I}$ and $\lo{D}$ ensure that for all $\coordsP\in\lo{I}$, we have $x \le x'$. Consider combining this with our assumption that no two points in $P$ share either a common $x$-value or a common $y$-value. Say $\hi{I}$ consists of the point sequence $\point[1],\dots,\point[k-1]$, and $\lo{I}$ consists of $\point[1]',\dots,\point[k-1]'$. Then we must have that \begin{equation*} x_1 < x_2 < \dots < x_{k-1} \le x \le x_1' < x_2' <\dots < x_{k-1}'. \end{equation*} By assumption, however, we have that $\lo{I}\cap\hi{I}\neq\emptyset$; this can only hold, then, if $p_{k-1}=p_{1}'$. Thus, we have that $\lo{I}\cup\hi{I}$ is, in fact, an increasing $(2k-3)$-sequence. A symmetric argument implies that, similarly, $\lo{D}\cup\hi{D}$ is a decreasing $(2l-3)$-sequence. 
So applying Lemma~\ref{lem:intersect} gives us that \begin{align*} \abs{\lo{D}\cup\lo{I}\cup\hi{D}\cup\hi{I}} & \ge \abs{\lo{D}\cup\hi{D}} + \abs{\lo{I}\cup\hi{I}} -\abs{(\lo{D}\cup\hi{D})\cap(\lo{I}\cup\hi{I})} \\ & \ge (2l-3)+(2k-3) - 1 \ge \min(2k+l-5,2l+k-5), \end{align*} since $k$ and $l$ are at least $2$. \end{itemize} In every case, the above gives us the desired lower bound, namely that $\abs{P}\ge \min(2k+l-5,2l+k-5)$. The upper bound follows from the following simple construction (see Figure~\ref{fig:seq}). \begin{figure}[!ht] \centering \includegraphics{seqconst.pdf} \caption{Construction for semisaturated sequences ($k=6,l=4$).} \label{fig:seq} \end{figure} \begin{construction} We present a construction of size $2k+l-5$; a construction of size $2l+k-5$ is attained by taking this construction for $k'=l$ and $l'=k$, of size $2k'+l'-5=k+2l-5$, and then reversing the order of the $x$-coordinates. Take an increasing $(k-2)$-sequence $p_1,p_2,\dots,p_{k-2}$ and let $p_{k-2}=(a,b)$. Take another increasing $(k-2)$-sequence $p_1',p_2',\dots,p_{k-2}'$, let $p'_1 = (a',b')$ and assume $a<a'$ and $b<b'$. Consider the rectangle defined by the vertices $(a,b),(a,b'),(a',b),(a',b')$. Let $0<\epsilon< \min\left((a'-a)/4,(b'-b)/4\right)$ and consider the rectangle with corners $(a+\epsilon,b+\epsilon),(a+\epsilon,b'-\epsilon),(a'-\epsilon,b+\epsilon),(a'-\epsilon,b'-\epsilon)$. Take a decreasing $(l-1)$-sequence $q_1,q_2,\dots,q_{l-1}$ with $q_1 = (a+\epsilon,b'-\epsilon)$ and $q_{l-1} = (a'-\epsilon,b+\epsilon)$. \qedhere \end{construction} \end{proof} \section{Convex point sets}\label{section:conv} Given a point set $S \subseteq \mathbb{R}^d$, we use $\conv(S)$ to denote the {\em convex hull} of $S$, which is the smallest convex set in $\mathbb{R}^d$ that contains $S$, and we use $\interior(S)$ to denote the interior of $S$. First we prove the semisaturation result about cups and caps.
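Before the proof, it may help to be able to measure cups and caps in small examples. The brute-force dynamic program below (our own illustration, not part of the argument; it assumes no two points share an $x$-coordinate) returns the sizes of the largest cup and the largest cap in a point set.

```python
def max_cup_cap(points):
    """Return (largest cup size, largest cap size) of a planar point set.

    A cup is a subset that, ordered by x-coordinate, is convex from below
    (every three consecutive points make a left turn); a cap makes right
    turns.  O(n^3) DP: cup[(j, k)] is the longest cup ending in points j, k.
    """
    pts = sorted(points)
    n = len(pts)
    if n < 2:
        return n, n
    cup = {(i, j): 2 for i in range(n) for j in range(i + 1, n)}
    cap = {(i, j): 2 for i in range(n) for j in range(i + 1, n)}
    for k in range(n):          # pairs (i, j) with j < k are already final
        for j in range(k):
            for i in range(j):
                o, a, b = pts[i], pts[j], pts[k]
                cross = (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
                if cross > 0:    # left turn: points i, j, k lie on a cup
                    cup[(j, k)] = max(cup[(j, k)], cup[(i, j)] + 1)
                elif cross < 0:  # right turn: points i, j, k lie on a cap
                    cap[(j, k)] = max(cap[(j, k)], cap[(i, j)] + 1)
    return max(cup.values()), max(cap.values())
```

For instance, five points on the parabola $y=x^2$ form a $5$-cup and contain no $3$-cap.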
\begin{proof}[Proof of Theorem \ref{thm:cupcapSemiSat}] For $k\le 2$ or $l\le 2$ the problem is trivial: any two points form a 2-cup and also a 2-cap, hence $\osat_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(2,l)=\osat_{\mbox{\ensuremath{\mathcal C}}\xspace\C}(k,2)=1$. From now on let us assume that $k\ge 3$ and $l\ge 3$. Let $P$ be a point set that is semisaturated for $3$-cups and $l$-caps. Let $L$ be the set of lines determined by the points of $P$. There is an unbounded region $R$ in the plane, bounded by parts of the lines, that lies below every line of $L$. If we add a point $p$ in $R$ it must create a $3$-cup or an $l$-cap. We can choose $p$ inside $R$ to have a smaller $x$-coordinate than any element of $P$, hence we can ensure that $p$ is not part of any $3$-cup. Therefore $p$ must be in an $l$-cap and $P$ must have at least $l-1$ elements. On the other hand, a point set forming an $(l-1)$-cap is semisaturated for $k=3$. Indeed, if we add a point and it does not create a $3$-cup, then every triple containing the new point forms a $3$-cap; since any three of the original points also form a cap, the whole extended set is an $l$-cap. The $l=3$ case can be handled similarly. Now we will assume that $k\ge 5$ and $l\ge 5$; the case when $k=4$ or $l=4$ will be settled later. We can define $L$ and $R$ as above. If we add $p$ anywhere in $R$ it must create a $k$-cup or an $l$-cap. Since $k\ge 4$ and $p$ lies below the lines of $L$, $p$ can only create an $l$-cap, and furthermore $p$ is either the first element of this cap or the last one. We can choose $p$ inside $R$ to have a smaller $x$-coordinate than any element of $P$ (see Figure \ref{fig:cupcap}). This ensures that $p$ is the first element in the $l$-cap it has created. Now we move $p$ continuously inside $R$, all the while increasing its $x$-coordinate, until it has a bigger $x$-coordinate than any element of $P$.
At this point $p$ cannot be the first element of the $l$-cap it creates; thus during this movement there must be a last moment where $p$ is the first element of some $l$-cap containing it. Clearly the change happens as $p$ passes below an element $p_{below}$ of $P$. Let $x_{below}$ denote the $x$-coordinate of this point. \begin{figure}[!ht] \centering \includegraphics[width=10cm]{cupcap1.pdf} \caption{If $p$ is to the right of $x_{below}$, then it is not the first element of any $4$-cap.} \label{fig:cupcap} \end{figure} If we put $p$ in $R$ slightly to the right of $x_{below}$, then it must extend some $(l-1)$-cap $A_1$ whose points lie to the left of $x_{below}$, except maybe for $p_{below}$. Similarly, if we put it slightly to the left of $x_{below}$, then it must extend some $(l-1)$-cap $A_2$ that lies to the right of $x_{below}$. In the same way we can define $p_{above}$, $x_{above}$ and two $(k-1)$-cups $U_1$ and $U_2$ lying to the left and to the right of $x_{above}$, respectively. In total we have found $|A_1\cup A_2 \cup U_1 \cup U_2|$ elements. Clearly $|A_1\cap A_2|\le 1$ and $|U_1\cap U_2|\le 1$. Since any cup intersects any cap in at most two points, we have $|A_i\cap U_j|\le 2$. Also, either $x_{below}< x_{above}$, $x_{below}> x_{above}$ or $x_{below}= x_{above}$. In the first case $A_1\cap U_2=\emptyset$ and in the second case $A_2\cap U_1=\emptyset$, giving us \[|A_1\cup A_2 \cup U_1 \cup U_2|\ge 2k-2+2l-2-|A_1\cap A_2|-|U_1\cap U_2|-3\cdot 2\ge 2k+2l-12.\] If $x_{below}= x_{above}$, then $p_{below}=p_{above}$ (as no two points share an $x$-coordinate), so $|A_1\cap U_2|\le 1$ and $|A_2\cap U_1|\le 1$, and we have \[|A_1\cup A_2 \cup U_1 \cup U_2|\ge 2k-2+2l-2-|A_1\cap A_2|-|U_1\cap U_2|-2\cdot 2-1-1\ge 2k+2l-12.\] For the upper bound we give a construction. First consider the point set shown in Figure~\ref{fig:cupcapconst}.
\begin{figure}[!ht] \centering \includegraphics[width=7cm]{cupcap4.pdf} \includegraphics[width=7cm]{cupcap5.pdf} \caption{Cup-cap semisaturation for $k=l=5$ and for $k=8,l=7$.} \label{fig:cupcapconst} \end{figure} This point set consists of 10 points, and it is semisaturated for 5-cups and 5-caps. We show this by dividing the plane into regions and, for each region, exhibiting four points of the point set that form either a 5-cup or a 5-cap with any point of the region. Consider the regions in Figure~\ref{fig:cupcapregions}. \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{cupcap41.pdf} \includegraphics[width=0.3\textwidth]{cupcap42.pdf} \includegraphics[width=0.3\textwidth]{cupcap43.pdf} \includegraphics[width=0.3\textwidth]{cupcap44.pdf} \includegraphics[width=0.3\textwidth]{cupcap45.pdf} \includegraphics[width=0.3\textwidth]{cupcap46.pdf} \includegraphics[width=0.3\textwidth]{cupcap47.pdf} \includegraphics[width=0.3\textwidth]{cupcap48.pdf} \caption{The regions blocked by the 4-cups and 4-caps.} \label{fig:cupcapregions} \end{figure} In the first seven subfigures we have drawn a region and indicated which four points of the point set block that region. In the eighth subfigure we have drawn all the regions. As we can see, these regions cover half of the plane. Since the point set is centrally symmetric, this is enough. Hence for $k=l=5$ we have semisaturation with $2k+2l-10$ points. Now we will show that this construction can be extended for $k,l\ge 5$. In Figure \ref{fig:cupcapconst} we can see $A_1$, $A_2$, $U_1$ and $U_2$. To get a construction for $k,l$ we just extend $A_1$ to the left and $A_2$ to the right with $l-5$ elements, and $U_1$ to the left and $U_2$ to the right with $k-5$ elements. See Figure \ref{fig:cupcapconst} for an example. The resulting configuration will be semisaturated. Considering the same regions as in Figure \ref{fig:cupcapregions} will work.
If a region was blocked by a $4$-cup, that $4$-cup is now extended by $k-5$ elements, so we have a blocking $(k-1)$-cup. Similarly, if a region was blocked by a $4$-cap, that $4$-cap is now extended by $l-5$ elements, so we have a blocking $(l-1)$-cap. Therefore we have found a semisaturated set with $10+2(k-5)+2(l-5)=2k+2l-10$ points. In the case of $l=4$ and $k\ge 5$ the construction is quite similar. A possible configuration for the $k=5$ case is given in Figure \ref{fig:4case} and the blocked regions are given in Figure \ref{fig:4caseall}. For $k>5$ we can extend $U_1$ and $U_2$ just as we did in Figure \ref{fig:cupcapconst}. We leave the details to the interested reader. \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{4cupalap.pdf} \includegraphics[width=0.39\textwidth]{4cupalap2.pdf} \caption{Cup-cap semisaturation for $l=4, k = 5$ and for $l=4, k=7$.} \label{fig:4case} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.15\textwidth]{4cuplist1.pdf} \includegraphics[width=0.15\textwidth]{4cuplist2.pdf} \includegraphics[width=0.15\textwidth]{4cuplist3.pdf} \includegraphics[width=0.15\textwidth]{4cuplist4.pdf} \includegraphics[width=0.15\textwidth]{4cuplist5.pdf} \includegraphics[width=0.15\textwidth]{4cuplist6.pdf} \includegraphics[width=0.15\textwidth]{4cupall.pdf} \caption{The regions blocked by the 4-cups and 3-caps.} \label{fig:4caseall} \end{figure} Next we show that at least $2k-2$ points are required to obtain a semisaturated construction. Indeed, by the same reasoning as in the $k,l \ge 5$ case we can find two cups $U_1,U_2$ of size $k-1$ intersecting in at most one point and two caps $A_1,A_2$ of size $3$ intersecting in at most one point. If $U_1$ and $U_2$ are disjoint, then we have already found $2k-2$ points, so suppose they intersect in one point $q$. We know that either $A_1$ lies to the left of $q$ or $A_2$ lies to the right of $q$ (it is possible that $A_1$ or $A_2$ contains $q$).
In either case we must have at least one more point, since no three points of $U_1$ or of $U_2$ form a cap. \end{proof} Now we continue with the semisaturation of convex point sets. \begin{proof}[Proof of Theorem~\ref{thm:osat00}] Suppose to the contrary that there is a semisaturated set of points $S$ with $n-1 + \floor{\frac{n-2}{d}}-s$ points in $\mbox{\ensuremath{\mathbb R}}\xspace^d$, for some $s\ge 1$. Denote by $S_1,S_2,\dots,S_m$ the subsets of $S$ that are convex $(n-1)$-sets. If $\cap_{i=1}^m \interior(\conv(S_i)) \neq \varnothing$, then we may add any point in the intersection without yielding $n$ points in convex position. We can also add this point such that the resulting point set is in general position. The interior of a convex set is convex, hence by Helly's theorem it is sufficient to show that the intersection of any $(d+1)$ of the sets from $\{\interior(\conv(S_i))\}_{i\in [m]}$ is nonempty. Consider $d+1$ sets in $\{S_i\}_{i\in [m]}$: $S_{i_1},S_{i_2},\dots,S_{i_{d+1}}$. They each have size $n-1$ and are contained in the point set $S$ of size $n-1 + \floor{\frac{n-2}{d}}-s$, thus we have that \begin{align*} \abs{(S_{i_1}\cap S_{i_2}\cap \dots \cap S_{i_{d+1}})^c} &= \abs{S_{i_1}^c\cup S_{i_2}^c\cup \dots \cup S_{i_{d+1}}^c} \\& \leq \abs{S_{i_1}^c}+ \abs{S_{i_2}^c} + \dots + \abs{S_{i_{d+1}}^c} =(d+1)\left(\floor{\frac{n-2}{d}}-s\right). \end{align*} Therefore \begin{align*} \abs{S_{i_1}\cap S_{i_2}\cap \dots \cap S_{i_{d+1}}}& \ge \left(n-1+\floor{\frac{n-2}{d}}-s\right)-(d+1)\left(\floor{\frac{n-2}{d}}-s\right) \\&= n-1-d\left(\floor{\frac{n-2}{d}}-s\right)\ge1+ds\ge d+1. \end{align*} Since the original point set is in general position, these $d+1$ points in the intersection span a non-degenerate simplex. It follows that the interiors of $\conv(S_{i_1}),\dots, \conv(S_{i_{d+1}})$ intersect, as required.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:convex}] We will construct a set of points $S$ such that $S$ is semisaturated in the plane $\mathbb{R}^2$. Consider a convex polygon $Q =\conv(v_0,v_1,\ldots, v_{2n-5})$ with $2n-4$ sides such that the side $v_i v_{i+1}$ is parallel to the side $v_{n+i-2} v_{n+i-1}$ (where the indices are modulo $2n-4$). For ease of reference, we call the pair of sides $v_i v_{i+1}$, $v_{n+i-2} v_{n+i-1}$ \textit{opposite sides} of $Q$. Let $S=\{v_0,v_1,\dots,v_{2n-5}\}$ be the vertex set of $Q$. We claim that $S$ is semisaturated in $\mathbb{R}^2$. \begin{claim}\label{cl:inside} Let $P$ be any point contained in $Q$ such that $S\cup\{P\}$ is in general position. Then $S \cup \{P\}$ has a convex $n$-gon with $P$ as one of its vertices. \begin{proof} Let $P$ be an arbitrary point in the interior of $Q$. Consider the chord $v_0v_{n-2}$ (see Figure~\ref{fig:convexfig}). This divides $Q$ into two $(n-1)$-gons $\{ v_{0}, v_{1},\ldots, v_{n-2}\}$ and $\{ v_{n-2}, v_{n-1},\ldots, v_{2n-5}, v_{0}\}$. Since the points are in general position, $P$ must lie in the interior of one of these $(n-1)$-gons. Then $P$ and the points of the other $(n-1)$-gon form a convex $n$-set. \end{proof} \end{claim} \begin{claim}\label{cl:outside} Let $P$ be any point not contained in $Q$ such that $S\cup\{P\}$ is in general position. Then $S \cup \{P\}$ has a convex $n$-gon with $P$ as one of its vertices. \end{claim} We say that a point $A\in Q$ can be \emph{seen} from $P$ if $\overline{PA}\cap Q=\{A\}$. A side $v_iv_{i+1}$ can be seen from $P$ if all of its points can be seen from $P$. Clearly $P$ can see at most one of each pair of opposite sides. So there are at least $n-2$ sides that cannot be seen from $P$. Each of these sides will be a side of the convex hull $\conv(Q\cup\{P\})$. There will be two additional sides of this convex~hull incident to $P$, so $\conv(Q\cup\{P\})$ has at least $n$ sides.
Therefore we can choose $n-1$ points from $S$ such that they form a convex $n$-gon with $P$. \begin{figure}[!ht] \centering \input{convex} \caption{Finding a convex $n$-gon in the extended point set.} \label{fig:convexfig} \end{figure} Claim~\ref{cl:inside} and Claim~\ref{cl:outside} imply that $S$ is semisaturated, hence $\osat_{\mbox{\ensuremath{\mathcal C}}\xspace}(n) \leq 2n-4$. Now we prove the lower bound. Suppose $S$ is a semisaturated set of points in the plane. The set $S$ determines $\binom{|S|}{2}$ lines, which partition the plane into regions. Let $P_1$ be a point in one of the unbounded regions (and not between any pair of parallel lines), and let $P_2$ be a point in the opposite unbounded region; that is, $P_1$ and $P_2$ lie on opposite sides of each of the $\binom{|S|}{2}$ lines. Since $S$ is semisaturated, there are two sets $S_1,S_2\subset S$ such that $S_1\cup \{P_1\}$ and $S_2\cup \{P_2\}$ are convex $n$-gons. We claim that $|S_1\cap S_2|\le 2$. Suppose for contradiction that $v_1,v_2,v_3 \in S_1\cap S_2$, and assume that $v_1,v_2,v_3,P_1$ form a convex quadrilateral (in that order). Observe that $P_1$ is on the same side as $v_3$ with respect to the line $v_1v_2$, and on the same side as $v_1$ with respect to the line $v_2v_3$. On the other hand, $P_2$ is on the side opposite to $v_3$ with respect to the line $v_1v_2$, and on the side opposite to $v_1$ with respect to the line $v_2v_3$. Hence the points $v_1,v_2,v_3,P_2$ cannot form a convex quadrilateral (in any order): one of the sides of such a quadrilateral would have to be $v_1v_2$ or $v_2v_3$, but the line through $v_1$ and $v_2$ separates $P_2$ from $v_3$, and the line through $v_2$ and $v_3$ separates $P_2$ from $v_1$. This contradicts the assumption that $S_2 \cup \{P_2\}$ is in convex position.
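The polygon construction above can be checked by brute force for small $n$. The following Python sketch (not part of the proof; helper names are ours) builds a regular $(2n-4)$-gon, whose opposite sides are parallel, and verifies that every generic sampled point completes a convex $n$-gon with $n-1$ of the vertices:

```python
import itertools, math, random

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

def convex_position(pts):
    # by Caratheodory, a point fails to be a hull vertex iff it lies in a triangle of three others
    return not any(
        in_triangle(p, a, b, c)
        for p in pts
        for a, b, c in itertools.combinations([q for q in pts if q is not p], 3)
    )

n = 5
m = 2 * n - 4                    # a regular (2n-4)-gon has parallel opposite sides
S = [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m)) for k in range(m)]

random.seed(0)
for _ in range(200):
    P = (random.uniform(-2, 2), random.uniform(-2, 2))
    # some (n-1)-subset of S must form a convex n-gon together with P
    assert any(convex_position(list(sub) + [P])
               for sub in itertools.combinations(S, n - 1))
print(f"every sampled point creates a convex {n}-gon")
```

Random sampling almost surely keeps the extended point set in general position, matching the hypothesis of the two claims.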
\begin{figure}[!ht] \centering \includegraphics{convim.pdf} \caption{Convex $n$-gons containing $P_1$ and $P_2$ intersect in at most two points.} \label{fig:conv2} \end{figure} Hence $|S_1\cup S_2|=|S_1|+|S_2|-|S_1\cap S_2|\ge n-1+n-1-2=2n-4.$ \end{proof} \section{General treatment of saturation questions}\label{general} In this section we provide a general formulation for many of the problems we have considered in this paper. Given a $c$-edge-colored complete $s$-uniform hypergraph $H=(V,E)$, we say that a subset of vertices $S \subseteq V$ forms a monochromatic complete subhypergraph of $H$ in color $i$ if the ($s$-uniform) subhypergraph induced by $S$ has only hyperedges in color $i$. Many of the problems we considered have the following form. \begin{defi}\label{gengen} Given constants $c$ and $s$, let $\mbox{\ensuremath{\mathcal F}}\xspace_0$ be the family of complete $s$-uniform hypergraphs whose edges are colored with $c$ colors (numbered $1,2,\dots, c$). For a subfamily $\mbox{\ensuremath{\mathcal F}}\xspace$ of $\mbox{\ensuremath{\mathcal F}}\xspace_0$, a member $F$ of $\mbox{\ensuremath{\mathcal F}}\xspace$ is \emph{saturated} if, for every $i$, $F$ does not contain a monochromatic complete subhypergraph of size $k_i$ in color $i$, but every $F' \in \mbox{\ensuremath{\mathcal F}}\xspace$ that extends $F$ contains a monochromatic complete subhypergraph of size $k_i$ in color $i$ for some $i$. $F$ is \emph{semisaturated} if we omit the first condition, that is, if $F \in\mbox{\ensuremath{\mathcal F}}\xspace$ and every $F'\in \mbox{\ensuremath{\mathcal F}}\xspace$ that extends $F$ contains, for some $i$, a monochromatic complete subhypergraph of size $k_i$ in color $i$ that is not already present in $F$.
Let $\ram_{\mbox{\ensuremath{\mathcal F}}\xspace}(k_1,\dots k_c)$ denote the size (number of vertices) of the largest saturated $F\in \mbox{\ensuremath{\mathcal F}}\xspace$, and let $\sat_{\mbox{\ensuremath{\mathcal F}}\xspace}(k_1,\dots k_c)$ denote the size of the smallest saturated $F\in \mbox{\ensuremath{\mathcal F}}\xspace$. Finally, let $\osat_{\mbox{\ensuremath{\mathcal F}}\xspace}(k_1,\dots k_c)$ denote the size of the smallest semisaturated $F\in \mbox{\ensuremath{\mathcal F}}\xspace$. \end{defi} \begin{obs} For any $\mbox{\ensuremath{\mathcal F}}\xspace$ and positive integers $k_1, \ldots, k_c$, \[\osat_{\mbox{\ensuremath{\mathcal F}}\xspace}(k_1, \ldots, k_c)\le \sat_{\mbox{\ensuremath{\mathcal F}}\xspace}(k_1, \ldots, k_c) \le \ram_{\mbox{\ensuremath{\mathcal F}}\xspace}(k_1, \ldots, k_c).\] \end{obs} Note that whenever $\sat=\ram$ holds, all saturated members of $\mbox{\ensuremath{\mathcal F}}\xspace$ have the same size. Thus we gain further insight into the respective Ramsey-type problem as well. Moreover, when $c=2$, one can regard the problem as the first color class forming an (uncolored) hypergraph $H$: a complete subhypergraph in the first color is then a complete subhypergraph of $H$, while a complete subhypergraph in the second color is an independent set in $H$. Definition~\ref{gengen} is quite general. In this paper we have introduced saturation problems for graphs, posets, monotone point sets, and cups and caps. All of these fit into this formulation. First, we obtain the graph case by setting $s=2$ and $\mbox{\ensuremath{\mathcal F}}\xspace=\mbox{\ensuremath{\mathcal F}}\xspace_0$. We obtain the poset case by setting $c=s=2$ and letting $\mbox{\ensuremath{\mathcal F}}\xspace$ be the family of those $2$-edge-colored graphs that arise as the comparability graph of a poset.
We obtain the monotone point set case by setting $c=s=2$ and letting $\mbox{\ensuremath{\mathcal F}}\xspace$ be the family of those $2$-edge-colored graphs that we can obtain from the pairs of elements of a sequence by coloring the increasing pairs red and the decreasing pairs blue. Finally, we obtain the cup and cap case by setting $c=2$, $s=3$ and letting $\mbox{\ensuremath{\mathcal F}}\xspace$ be the family of those $2$-colored complete $3$-uniform hypergraphs that we can obtain by taking a point set in general position and coloring a triple red if it forms a cup and blue if it forms a cap (note that every triple forms a cup or a cap). The only problem we considered that does not fit into this formulation is the case of convex subsets of points. It is interesting that for both the $2$-colored graph case and the poset case we have $\sat(k,l)=(k-1)(l-1)$, yet we could not find any general argument that handles both of these cases at once. The relative behavior of $\sat$, $\osat$ and $\ram$ can also vary substantially depending on the setting. Indeed, for graphs $\osat=\sat$ yet $\ram$ is exponential, while for posets and monotone point sets, $\sat$ equals $\ram$ yet $\osat$ is smaller ($\osat$ behaves differently in the latter two cases). \section{Acknowledgements} We thank Tuan Tran for pointing out an inaccuracy in Lemma~\ref{decomp} in an earlier version of this manuscript.
https://arxiv.org/abs/2010.14043
The Teaching Dimension of Kernel Perceptron
Algorithmic machine teaching has been studied under the linear setting where exact teaching is possible. However, little is known for teaching nonlinear learners. Here, we establish the sample complexity of teaching, aka teaching dimension, for kernelized perceptrons for different families of feature maps. As a warm-up, we show that the teaching complexity is $\Theta(d)$ for the exact teaching of linear perceptrons in $\mathbb{R}^d$, and $\Theta(d^k)$ for kernel perceptron with a polynomial kernel of order $k$. Furthermore, under certain smooth assumptions on the data distribution, we establish a rigorous bound on the complexity for approximately teaching a Gaussian kernel perceptron. We provide numerical examples of the optimal (approximate) teaching set under several canonical settings for linear, polynomial and Gaussian kernel perceptrons.
\section{Acknowledgements} Yuxin Chen is supported by NSF 2040989 and a C3.ai DTI Research Award 049755. \section{Experimental Evaluation}\label{appendix: experimentals} In this section, we provide an algorithmic procedure for constructing the $\epsilon$-approximate teaching set, and quantitatively evaluate our theoretical results as presented in \thmref{thm: boundedclassifier} and \thmref{thm: gaussian_main_thm} (cf. \secref{subsec: bounded_error}). Results in this section are supplementary to \figref{fig:example-RBF}. For a qualitative evaluation of the \emph{$\epsilon$-approximate teaching set}, please refer to \figref{fig:Learned_RBF_45c}, which illustrates the learner's Gaussian kernel perceptron learned from the $\epsilon$-approximate teaching sets on different classification tasks. \subsection{Experimental Setup} Our experiments are carried out on 4 different datasets: the two-moon dataset (2 interleaving half-circles with noise), the two-circles dataset (a large circle containing a small circle, with noise) from sklearn\footnote{https://scikit-learn.org/stable/modules/classes.html\#module-sklearn.datasets}, the Banana dataset\footnote{https://www.scilab.org/tutorials/machine-learning-–-classification-svm} where the two classes are not perfectly separable, and the Iris dataset\footnote{https://archive.ics.uci.edu/ml/datasets/iris} with one of the three classes removed. For each dataset, the following steps are performed: \begin{enumerate} \item\label{enum: e1} For a given set of data, we, assuming the role of the teacher, find the optimal Gaussian (with $\sigma = 0.9$) separator $\boldsymbol{\theta}^*$ and plot the corresponding boundaries. We estimate the perceptron loss $\textbf{err}(f^*)$ of this separator by averaging the total perceptron loss over the size of the dataset.
\item\label{enum: e2} For some $s$, we use the degree-$s$ polynomial approximation of the Gaussian separator to determine the approximate polynomial boundaries and select $r-1$ points on the boundaries, where $r = \binom{2+s}{s} - 1$, such that their images in the polynomial feature space are linearly independent. We make a copy of these points and assign positive labels to one copy and negative labels to the other. In addition, we pick 2 more points arbitrarily, one on each side of the boundaries (i.e.\ with opposite labels). Thus $\mathcal{TS_{\theta^*}}$ of size $2r$ is constructed. \item\label{enum: e3} Following \assref{assumption: orthogonal}-\ref{assumption: bounded cone}, the Gaussian kernel perceptron learner (with the same $\sigma$ parameter value 0.9) uses only $\mathcal{TS_{\theta^*}}$ to learn a separator $\hat{\boldsymbol{\theta}}$. The perceptron loss $\textbf{err}(\hat{f})$ w.r.t.\ the original dataset is calculated by averaging the total perceptron loss over the number of points in the dataset. \item\label{enum: e4} Repeat Step 2 and Step 3 for $s = 2, 3,\cdots, 12$ and record the perceptron loss (i.e.\ the $\max$ error function as shown in \secref{sec.statement}) for the corresponding teaching set sizes $2r$ (where $r = \binom{2+s}{s} - 1$). Then we plot the error $\left|\textbf{err}(f^*) - \textbf{err}(\hat{f})\right|$ as a function of the teaching set size. \end{enumerate} The corresponding plots for Steps 1-4 are shown in columns (a)-(d) of \figref{fig:panel-RBF}, where for Step 2 (column (b)), the plots all correspond to $s=5$.
Given that the approximate polynomial separator has been found using the kernel and feature map approximation described in \eqnref{eqn: eqn17} and \eqnref{eqn:eqn18}, we are able to plot the corresponding boundaries, and by the same reasoning as in the case of teaching set generation for the polynomial learner, we need to locate points on the boundaries such that their images in the $r$-dimensional feature space are linearly independent. We achieve this by sampling points on the zero-contour line and row-reducing the matrix formed by the images of all such points. This way, $r-1$ qualified points can be efficiently located. In addition, as discussed in \secref{subsec: bounded_error}, the teaching points are selected within a radius of some small constant multiple of $\sqrt{R}$ consistently across the experiments. In this case, we have arbitrarily picked the constant to be 4. In \stepref{enum: e3}, when the learner learns the separator, we need to ensure that \assref{assumption: orthogonal}-\ref{assumption: bounded cone} are satisfied. This is made possible by adding the corresponding constraints to the learner's optimization procedure. Specifically, we need to enforce that 1) the norm of $\hat{\boldsymbol{\theta}}$ is not far from 1, and 2) $\beta$\footnote{We pick two points outside the orthogonal complement of $\mathbb{P}\boldsymbol{\theta}^*$, one with a positive label and another with a negative label. Thus, in place of $\beta_0$ (as used in \secref{subsec.gaussiankernel}) we use $\beta \in \ensuremath{\mathbb{R}}^2$ here.} and $\gamma$ are bounded absolutely as mentioned in \assref{assumption: bounded cone}. This is achieved by adjusting the specified bound higher or lower as the norm of the current-iteration $\hat{\boldsymbol{\theta}}$ varies during the optimization procedure. Eventually, we normalize $\hat{\boldsymbol{\theta}}$ and check that the final $\beta$ and $\gamma$ are indeed bounded (i.e.\ \assref{assumption: bounded cone} is satisfied).
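The rank-based selection of boundary points described above can be sketched as follows. This is an illustrative Python fragment (assuming numpy; the function names are ours, and random candidates stand in for samples of the zero-contour line):

```python
import itertools
from math import comb

import numpy as np

def poly_features(x, s):
    """All monomials of total degree <= s; a stand-in for the polynomial feature map."""
    x = np.asarray(x, dtype=float)
    return np.array([
        np.prod(x ** np.array(a))
        for k in range(s + 1)
        for a in itertools.product(range(k + 1), repeat=len(x)) if sum(a) == k
    ])

def select_independent(candidates, s, m):
    """Greedily keep candidates whose feature images increase the matrix rank, until m are found."""
    chosen, rows = [], []
    for x in candidates:
        rows.append(poly_features(x, s))
        if np.linalg.matrix_rank(np.array(rows)) == len(rows):
            chosen.append(x)
        else:
            rows.pop()          # image is linearly dependent on the current rows; discard
        if len(chosen) == m:
            break
    return chosen

# illustration: random candidates stand in for samples of the zero-contour line
rng = np.random.default_rng(0)
s = 2
r = comb(2 + s, s) - 1          # r = 5 for s = 2
picked = select_independent(rng.standard_normal((50, 2)), s, r - 1)
assert len(picked) == r - 1
```

Incrementally testing the rank plays the same role as the row reduction mentioned above: a candidate is kept exactly when its feature image is linearly independent of the images already collected.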
Finally, the perceptron loss calculated for each value of $s$ is based on 5 separate runs of \stepref{enum: e2}, while for each run, the learner's kernelized Gaussian perceptron learning algorithm is repeated 5 times. The learner's perceptron loss is then averaged over these 25 runs of the algorithm to mitigate numerical inaccuracies that may arise during the learner's constrained optimization process and possibly the teaching set generation process. \subsection{Results} We present the experiment results in \figref{fig:panel-RBF}. In the right-most plot of each row, the estimates of $\left|\textbf{err}(f^*) - \textbf{err}(\hat{f})\right|$ are plotted against the teaching set sizes $2r$ corresponding to $s=2,\cdots,12$ (as discussed in \secref{subsec: gaussian_kernel_approx}). As the shape of the curves in column (d) shows, our experimental results are consistent with the claim that the number of teaching examples needed for $\epsilon$-approximate teaching is upper-bounded by $d^{\bigO{\log^2 \frac{1}{\epsilon}}}$ for Gaussian kernel perceptrons.
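For reference, the teaching-set sizes traversed in these experiments follow directly from $r = \binom{2+s}{s} - 1$. The short Python sketch below tabulates them; the last column reports, purely for illustration, the smallest $\epsilon$ compatible with the truncation condition $\frac{1}{(s+1)!}R^{s+1}\le\epsilon$ of \secref{subsec: gaussian_kernel_approx} when $R = d = 2$:

```python
from math import comb, factorial

d = 2
print(" s    r   |TS|   eps (R = d)")
for s in range(2, 13):
    r = comb(d + s, s) - 1                  # r = binom(2+s, s) - 1, as in Step 2
    eps = d ** (s + 1) / factorial(s + 1)   # smallest eps with R^{s+1}/(s+1)! <= eps when R = d
    print(f"{s:2d}  {r:3d}  {2 * r:4d}   {eps:.2e}")
```

The factorial in the denominator dominates, so the admissible $\epsilon$ shrinks super-exponentially in $s$ while the teaching set size grows only polynomially, mirroring the shape of the curves in column (d).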
\begin{figure*}[t] \centering \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig1-1b.png} \label{fig:Teacher_RBf_41a} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig1-2b.png} \label{fig:Polynomial_approx_41b} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig1-3b.png} \label{fig:Learned_RBF_41c} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/plot1d.png} \label{fig:Epsilon_size_41d} \end{subfigure} \\ \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig2-1b.png} \label{fig:Teacher_RBf_42a} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig2-2b.png} \label{fig:Polynomial_approx_42b} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig2-3b.png} \label{fig:Learned_RBF_42c} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/plot2d.png} \label{fig:Epsilon_size_42d} \end{subfigure} \\ \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig3-1.png} \label{fig:Teacher_RBf_43a} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig3-2.png} \label{fig:Polynomial_approx_43b} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig3-3.png} \label{fig:Learned_RBF_43c} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/plot3d.png} \label{fig:Epsilon_size_43d} \end{subfigure} \\ \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig5-1.png} \caption{} \label{fig:Teacher_RBf_45a} \end{subfigure} 
\begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig5-2.png} \caption{} \label{fig:Polynomial_approx_45b} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig5-3.png} \caption{} \label{fig:Learned_RBF_45c} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\linewidth]{fig/plot4d.png} \caption{} \label{fig:Epsilon_size_45d} \end{subfigure} \caption{Constructing the $\epsilon$-approximate teaching set $\mathcal{TS}_{\boldsymbol{\theta}^*}$ for a Gaussian kernel perceptron learner. Each row corresponds to the numerical results on a different dataset as described in the beginning of \appref{appendix: experimentals}. For each row from left to right: (a) optimal Gaussian boundary and the data set; (b) Teacher identifies a (degree-5) polynomial approximation of the Gaussian decision boundary and finds the $\epsilon$-approximate teaching set $\mathcal{TS}_{\boldsymbol{\theta}^*}$ (marked by cyan plus markers and red dots); (c) Learner learns a Gaussian kernel perceptron from the optimal teaching set in the previous plot; (d) $\abs{\mathcal{TS}}\text{-}\epsilon$ plot for degree-2 to degree-12 polynomial approximation teaching results. The blue curve corresponds to $d^{\bigO{\log^2 \frac{1}{\epsilon}}} = 2^{\bigO{\log^2 \frac{1}{\epsilon}}}$ with $d=2$.} \label{fig:panel-RBF} \end{figure*} \section{Gaussian Kernel Perceptron}\label{appendix: gaussian perceptron} In this appendix, we provide the proofs of the key results \thmref{thm: boundedclassifier} and \thmref{thm: gaussian_main_thm}, stated in \secref{subsec: bounded_error}. The key to establishing these results is a constructive procedure for an approximate teaching set.
Under \assref{assumption: orthogonal} and \assref{assumption: bounded cone}, when the Gaussian learner optimizes \eqnref{eqn: bounded} w.r.t.\ the teaching set, any solution $\hat{\boldsymbol{\theta}} \in {\boldsymbol{\mathcal{A}}}_{opt}$ is $\epsilon$-close to the optimal classifier point-wise, which in turn bounds the error w.r.t.\ the data distribution ${\mathcal{P}}$ on the input space ${\mathcal{X}}$. We organize this appendix as follows: in \appref{appendixsub: solutionexists} we show that there exists a solution to \eqnref{eqn: bounded}; in \appref{appendixsub: proofofmainthm} we provide the proofs of our key results \thmref{thm: boundedclassifier} and \thmref{thm: gaussian_main_thm}. \paragraph{Truncating the Taylor features of Gaussian kernel.} In \secref{subsec: gaussian_kernel_approx}, we showed the construction of the projection $\mathbb{P}$ such that $\mathbb{P}\Phi$ forms a feature map for the kernel $\Tilde{{\mathcal{K}}}$. We denote the orthogonal complement projection of $\mathbb{P}$ by $\mathbb{P}^{\bot}$. Thus, we can write $\Phi({\mathbf{x}}) = \mathbb{P}\Phi({\mathbf{x}}) + \mathbb{P}^{\bot}\Phi({\mathbf{x}})$ for any ${\mathbf{x}} \in \mathbb{R}^d$. We discussed the choice of $R$ and $s$; the primary motivation for picking them in this way is to retain maximum information in the first $\binom{d+s}{s}$ coordinates of $\Phi(\cdot)$. This is in line with the observation that the eigenvalues associated with the canonical orthogonal basis~\cite{article} (the eigenvectors) of the Gaussian reproducing kernel Hilbert space ${\mathcal{H}}_{{\mathcal{K}}}$ decay along higher-indexed coordinates; thus the most significant eigenvalues correspond to the first $\binom{d+s}{s}$ coordinates.
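The effect of truncating the Taylor features can be illustrated numerically. Since ${\mathcal{K}}({\mathbf{x}},{\mathbf{x}}') = e^{-\inmod{{\mathbf{x}}}^2/2\sigma^2}\,e^{-\inmod{{\mathbf{x}}'}^2/2\sigma^2}\sum_{k\ge 0}\frac{({\mathbf{x}}\cdot{\mathbf{x}}'/\sigma^2)^k}{k!}$, dropping the terms of order greater than $s$ incurs an error controlled by the Taylor remainder of the exponential. The Python sketch below (purely illustrative, assuming numpy; it uses the standard Lagrange remainder bound rather than the specific bound of \lemref{lemma: approxbound}) checks this:

```python
import math
import numpy as np

def gauss_kernel(x, xp, sigma):
    return math.exp(-float(np.sum((x - xp) ** 2)) / (2 * sigma ** 2))

def truncated_kernel(x, xp, sigma, s):
    """Keep only the Taylor terms of order <= s in exp(x . xp / sigma^2)."""
    pref = math.exp(-float(np.sum(x ** 2) + np.sum(xp ** 2)) / (2 * sigma ** 2))
    dot = float(x @ xp) / sigma ** 2
    return pref * sum(dot ** k / math.factorial(k) for k in range(s + 1))

rng = np.random.default_rng(1)
sigma = 0.9
x, xp = rng.uniform(-1, 1, 2), rng.uniform(-1, 1, 2)
for s in (2, 5, 8):
    err = abs(gauss_kernel(x, xp, sigma) - truncated_kernel(x, xp, sigma, s))
    t = abs(float(x @ xp)) / sigma ** 2
    bound = math.exp(t) * t ** (s + 1) / math.factorial(s + 1)   # Lagrange remainder of exp
    assert err <= bound
    print(f"s = {s}: truncation error {err:.2e} (bound {bound:.2e})")
```

The factorial decay of the remainder is what lets a moderate truncation order $s$ already reproduce the kernel to high accuracy.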
Thus, if we can show that $\mathbb{P}\hat{\boldsymbol{\theta}}$ is $\epsilon$-approximately close to $\mathbb{P}\boldsymbol{\theta}^*$, where $\hat{\boldsymbol{\theta}} \in {\boldsymbol{\mathcal{A}}}_{opt}$ is a solution to \eqnref{eqn: bounded}, then $\hat{\boldsymbol{\theta}}$ is also $\epsilon$-approximately close to $\boldsymbol{\theta}^*$. \paragraph{What should be an optimal $R$ vs. choice of the index $s$?} In \secref{subsec: gaussian_kernel_approx}, we solved for $s$ such that $$\frac{1}{(s+1)!}\cdot \paren{R}^{s+1} \le \epsilon.$$ If ${\mathbf{x}} \in \mathcal{B}(\sqrt{2\sqrt{R}\sigma^2}, 0)$ then using \lemref{lemma: approxbound} we have \begin{equation} \inmod{\mathbb{P}^{\perp}\Phi({\mathbf{x}})}^2 \le \frac{1}{(s+1)!}\cdot \paren{\sqrt{R}}^{s+1} \le \frac{\epsilon}{\paren{\sqrt{R}}^{s+1}} \le \frac{\epsilon}{\paren{\sqrt{d}}^{s}} \label{eqn: largeeps} \end{equation} where the last inequality follows as $R := \max\left\{\frac{\log^2 \frac{1}{\epsilon}}{e^2},d\right\}$. We define $\epsilon_s := \frac{\epsilon}{\paren{\sqrt{d}}^{s}}$. Note that \eqnref{eqn: largeeps} holds for all ${\mathbf{x}} \in {\mathcal{X}}$, since $\frac{\inmod{{\mathbf{x}}}^2}{\sigma^2} \le 2\sqrt{R}$. The factor $\paren{\sqrt{d}}^{s}$ in the denominator of $\epsilon_s$ will be useful for absorbing any $\sqrt{r}$ factor, since $r = \binom{d+s}{s} = \bigO{d^{s}}$. \subsection{Construction of a Solution to \eqnref{eqn: bounded}}\label{appendixsub: solutionexists} In this subsection, we show that \eqnref{eqn: bounded} has a minimizer $\hat{\boldsymbol{\theta}} \in {\boldsymbol{\mathcal{A}}}_{opt}$ such that $\mathbf{p}(\hat{\boldsymbol{\theta}}) = 0$, where $\mathbf{p}(\cdot)$ is the objective value. Notice that for any $i$ the pair of teaching points $\curlybracket{({\mathbf{z}}_i, 1), ({\mathbf{z}}_i, -1)}$ is correctly classified only if $\hat{\boldsymbol{\theta}}\cdot\Phi({\mathbf{z}}_i) = 0$, and the point ${\mathbf{a}}$ is correctly classified only if $\hat{\boldsymbol{\theta}}\cdot\Phi({\mathbf{a}}) > 0$.
We define the set $\boldsymbol{\mathrm{B}} = \curlybracket{{\mathbf{b}}_1,{\mathbf{b}}_2,\cdots,{\mathbf{b}}_{r}}$ to represent $\{{\mathbf{z}}_i\}_{i=1}^{r-1}\cup \{{\mathbf{a}}\}$ in that order. We define the Gaussian kernel Gram matrix $\textLambda$ corresponding to $\boldsymbol{\mathrm{B}}$ as follows: \begin{equation} \textLambda[i,j] = {\mathcal{K}}({\mathbf{b}}_i,{\mathbf{b}}_j)\quad \forall i,j \in \bracket{r} \end{equation} Since $\{{\mathbf{z}}_i\}_{i=1}^{r-1}$ and ${\mathbf{a}}$ can be chosen from $\mathcal{B}(\sqrt{2\sqrt{R}\sigma^2},0)$ so that for any two points ${\mathbf{x}}, {\mathbf{x}}'$ in the teaching set $\frac{\inmod{{\mathbf{x}} - {\mathbf{x}}'}^2}{2\sigma^2} = \bigTheta{\log \frac{1}{\epsilon}}$, all the off-diagonal entries of $\textLambda$ can be bounded as $\bigTheta{\epsilon}$. In other words, the off-diagonal entries of $\textLambda$ are upper bounded in terms of the choice of $\epsilon$. We denote the concatenation of $\gamma$ and $\beta_0$ by $\ensuremath{\boldsymbol{\eta}}$, i.e.\ $\ensuremath{\boldsymbol{\eta}} := (\gamma^\top,\beta_0)^\top$. Consider the following matrix equation: \begin{equation} \textLambda\cdot \ensuremath{\boldsymbol{\eta}} = \parenb{\underbrace{0,\cdots,0}_{\textnormal{For}\; {\mathbf{z}}_i's},\underbrace{\vphantom{0,\cdots,0}1}_{\textnormal{For}\; {\mathbf{a}}}}^{\top}\label{eqn: satisfyobjective} \end{equation} Notice that any solution $\ensuremath{\boldsymbol{\eta}}$ to \eqnref{eqn: satisfyobjective} has zero objective value for \eqnref{eqn: bounded}. Since $\sum_{i=1}^r \paren{\textLambda[i,r]\cdot\ensuremath{\boldsymbol{\eta}}_i} = \hat{\boldsymbol{\theta}}\cdot\Phi({\mathbf{a}}) > 0$, we may scale the last component of \eqnref{eqn: satisfyobjective} to 1. First, we observe that \eqnref{eqn: satisfyobjective} has a solution because the Gaussian kernel Gram matrix $\textLambda$ of a finite set of distinct points is \emph{strictly positive definite}, implying that $\textLambda$ is invertible.
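The following Python sketch (purely illustrative; the point placement is hypothetical, assuming numpy) mirrors this construction: for well-separated points the off-diagonal entries of the Gram matrix are tiny, the system in \eqnref{eqn: satisfyobjective} has a unique solution, and the resulting $\beta_0$ is positive:

```python
import numpy as np

def gaussian_gram(points, sigma):
    """Gram matrix Lambda[i, j] = K(b_i, b_j) for the Gaussian kernel."""
    sq = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma ** 2))

rng = np.random.default_rng(2)
r, sigma = 6, 1.0
# well-separated points, so that all off-diagonal entries of Lambda are tiny
B = 6.0 * np.arange(r)[:, None] * np.ones((1, 2)) + rng.normal(0, 0.1, (r, 2))
L = gaussian_gram(B, sigma)
assert np.max(np.abs(L - np.diag(np.diag(L)))) < 1e-3   # off-diagonal entries are small

nu = np.zeros(r)
nu[-1] = 1.0                        # zero targets for the z_i's, 1 for the point a
eta0 = np.linalg.solve(L, nu)       # unique: the Gram matrix is strictly positive definite
beta0 = eta0 @ L @ eta0             # equals nu . eta0, i.e. the last entry of eta0
assert beta0 > 0
eta_bar = eta0 / np.sqrt(beta0)     # normalized coefficient vector
```

Because the Gram matrix is numerically close to the identity here, the solved coefficients concentrate on the last coordinate, in line with the minor-based bounds derived next.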
Thus, there is a unique solution $\ensuremath{\boldsymbol{\eta}}_0 \in \ensuremath{\mathbb{R}}^{r}$ such that: \begin{equation} \textLambda\cdot \ensuremath{\boldsymbol{\eta}}_0 = \nu^{\top}\nonumber \end{equation} where $\nu := (0,0,\cdots,1)$ as shown in \eqnref{eqn: satisfyobjective}. Also, $\ensuremath{\boldsymbol{\eta}}_0^{\top}\cdot\textLambda\cdot\ensuremath{\boldsymbol{\eta}}_0 = \beta_0 > 0$. Now, we need to ensure that $\ensuremath{\boldsymbol{\eta}}_0$ satisfies \assref{assumption: bounded cone}. To analyse the boundedness, we rewrite the above equation as: \begin{equation} \ensuremath{\boldsymbol{\eta}}_0 = \textLambda^{-1}\cdot \nu^{\top}\nonumber \end{equation} To evaluate the entries of $\ensuremath{\boldsymbol{\eta}}_0$, we only need to understand the last column of $\textLambda^{-1}$ (since $\textLambda^{-1}\cdot \nu^{\top}$ consists of entries from the last column of $\textLambda^{-1}$). Using the cofactor construction of the inverse, we note that $\textLambda^{-1}[i,r] = \frac{(-1)^{i+r}}{\det(\textLambda)}\cdot M_{(i,r)}$, where $M_{(i,r)}$ is the minor of $\textLambda$ corresponding to the entry indexed $(i, r)$ (the determinant of the submatrix of $\textLambda$ formed by removing the $i$th row and $r$th column). Recall that the determinant is an alternating form: for a square matrix $T$ of dimension $n$ with $T[i,j] = u_{i,j}$, it has the explicit sum $\sum_{\sigma \in S_n}\mathrm{sign}(\sigma)\cdot\prod_{i=1}^n u_{i,\sigma(i)}$. Since the off-diagonal entries of $\textLambda$ are bounded by $\epsilon$, we can bound the minors: $|M_{r,r}| \ge 1 - \bigO{\epsilon^2} + \cdots + (-1)^{r-1} \bigO{\epsilon^{r-1}}$, and for $i \neq r$, $|M_{i,r}| \le \bigO{\epsilon} + \bigO{\epsilon^2} + \cdots + \bigO{\epsilon^{r-1}}$. Since $\epsilon$ is sufficiently small, $|M_{r,r}|$ dominates $|M_{i,r}|$ for $i \neq r$.
But then $\ensuremath{\boldsymbol{\eta}}_0 = \frac{1}{\det(\textLambda)}\paren{(-1)^{1+r}M_{1, r}, (-1)^{2+r}M_{2, r}, \cdots, (-1)^{r+r}M_{r, r}}^{\top}$. When we normalize $\hat{\boldsymbol{\theta}}$, we get $\bar{\ensuremath{\boldsymbol{\eta}}}_0 = \ensuremath{\boldsymbol{\eta}}_0/\inmod{\hat{\boldsymbol{\theta}}}$. We note that $\inmod{\hat{\boldsymbol{\theta}}}^2 = \ensuremath{\boldsymbol{\eta}}_0^{\top}\cdot\textLambda\cdot\ensuremath{\boldsymbol{\eta}}_0 = \beta_0$, implying $\bar{\ensuremath{\boldsymbol{\eta}}}_0 = \ensuremath{\boldsymbol{\eta}}_0/\sqrt{\beta_0}$. Since $\beta_0 = \frac{1}{\det(\textLambda)}\cdot M_{r,r}$, the entries of $\bar{\ensuremath{\boldsymbol{\eta}}}_0$ satisfy \assref{assumption: bounded cone}. Thus, we have a solution to \eqnref{eqn: bounded} which satisfies \assref{assumption: bounded cone}. \subsection{Proof of \thmref{thm: boundedclassifier} and \thmref{thm: gaussian_main_thm}}\label{appendixsub: proofofmainthm} In this section, we establish our key results for the approximate teaching of a Gaussian kernel perceptron. Under \assref{assumption: orthogonal} and \assref{assumption: bounded cone}, we show that to teach a target model $\boldsymbol{\theta}^*$ $\epsilon$-approximately, we require at most $d^{\bigO{\log^2\frac{1}{\epsilon}}}$ labelled teaching points from ${\mathcal{X}}$. To obtain an $\epsilon$-approximate teaching set, we show that the teaching set $\mathcal{TS}_{\boldsymbol{\theta}^*}$ as constructed in \eqnref{eqn: teaching set} achieves $\epsilon$-closeness between $f^* = \boldsymbol{\theta}^*\cdot\Phi(\cdot)$ and $\hat{f} = \hat{\boldsymbol{\theta}}\cdot\Phi(\cdot)$, i.e.\ $\left|f^*({\mathbf{x}}) - \hat{f}({\mathbf{x}})\right| \le \epsilon$ point-wise. Before we move to the proofs of the key results, we state the following lemma, which bounds the norm of a vector whose inner products with all elements of a near-orthogonal basis are small, under the smoothness condition on the basis mentioned in \secref{subsec: bounded_error}.
\begin{lemma}\label{lemma: maximum norm} Consider the Euclidean space $\ensuremath{\mathbb{R}}^n$. Assume $\curlybracket{{\mathbf{v}}_i}_{i=1}^n$ forms a basis of unit-norm vectors. Additionally, for any $i \neq j$, $\left|{\mathbf{v}}_i\cdot {\mathbf{v}}_j\right| \le \cos{\theta_0}$ where $ \cos{\theta_0} \le \frac{1}{2n}$. Fix a small real scalar $\epsilon > 0$, and consider any vector ${\mathbf{p}} \in \ensuremath{\mathbb{R}}^n$ such that $|{\mathbf{p}} \cdot {\mathbf{v}}_i| \le \epsilon$ for all $i \in \bracket{n}$. Then the following bound on ${\mathbf{p}}$ holds: $$||{\mathbf{p}}||_2 \le \sqrt{2n}\cdot \epsilon.$$ \end{lemma} \begin{proof} We define $M = \ensuremath{\mathbb{R}}^n$ as the space in which ${\mathbf{p}}$ and the ${\mathbf{v}}_i$'s are embedded. Consider another copy of the space $N = \ensuremath{\mathbb{R}}^n$ with standard orthogonal basis $\curlybracket{e_1,\cdots,e_n}$. We define the map $\boldsymbol{\mathrm{W}}: M \longrightarrow N$ as follows: \begin{align*} \boldsymbol{\mathrm{W}} &: M \longrightarrow N\\ q &\mapsto ({\mathbf{v}}_1\cdot q,{\mathbf{v}}_2\cdot q,\cdots,{\mathbf{v}}_n\cdot q) \end{align*} Since $\curlybracket{{\mathbf{v}}_i}_{i=1}^n$ forms a basis, $\boldsymbol{\mathrm{W}}$ is invertible. To ease the analysis, we may assume $\epsilon = 1$ (by scaling symmetry). It is then clear that $w := \boldsymbol{\mathrm{W}}{\mathbf{p}}$ has all its entries bounded in absolute value by 1. We can write ${\mathbf{p}} = \boldsymbol{\mathrm{W}}^{-1}w$, hence $\inmod{{\mathbf{p}}}^2 = \paren{\boldsymbol{\mathrm{W}}^{-1}w}^{\top}\paren{\boldsymbol{\mathrm{W}}^{-1}w} = w^{\top}\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1}w$. Thus, it suffices to bound $w^{\top}\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1}w$ for $w \in \bracket{-1,1}^n$.
We note that $\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}$ is a symmetric $(n\times n)$ matrix with diagonal entries 1 and off-diagonal entries bounded in absolute value by $\cos{\theta_0}$. Using the convergence of the Neumann series $\sum_{k=0}^{\infty} \paren{\mathbb{Id}_{\paren{n\times n}} - \paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}}^k$, which holds since $n\cos{\theta_0} \le \frac{1}{2} < 1$, we have: \begin{align} \left|\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1} - \mathbb{Id}_{\paren{n\times n}}\right|_{\ell_{\infty}} &\le \left|\sum_{k=1}^{\infty} \paren{\mathbb{Id}_{\paren{n\times n}} - \paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}}^k\right|_{\ell_{\infty}} \label{eqn: neu1}\\ &\le \sum_{k=1}^{\infty}\left| \paren{\mathbb{Id}_{\paren{n\times n}} - \paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}}^k\right|_{\ell_{\infty}}\label{eqn: neu2}\\ &\le \sum_{k=1}^{\infty} n^{k-1}\cos^{k}{\theta_0}\label{eqn: neu3}\\ &= \frac{\cos{\theta_0}}{1-n\cos{\theta_0}}\label{eqn: neu4} \end{align} where $|B|_{\ell_{\infty}}$ refers to the maximum absolute value of any entry of $B$. \eqnref{eqn: neu1} follows using the Neumann series $\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1} = \sum_{k=0}^{\infty} \paren{\mathbb{Id}_{\paren{n\times n}} - \paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}}^k$. \eqnref{eqn: neu2} is a direct consequence of the triangle inequality. Since the entries of $\mathbb{Id}_{\paren{n\times n}} - \paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}$ are bounded in absolute value by $\cos{\theta_0}$, the entries of its $k$th power are bounded in absolute value by $n^{k-1}\cos^{k}{\theta_0}$, which gives \eqnref{eqn: neu3}.
Using a straight-forward geometric sum we get an upper bound on the maximum absolute value of any entry in $\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1} -\, \mathbb{Id}_{\paren{n\times n}}$ in \eqnref{eqn: neu4}. Now, we note that \begin{align} w^{\top}\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1}w &= w^{\top}\paren{\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1} -\, \mathbb{Id}_{\paren{n\times n}}}w + w^{\top}\paren{\mathbb{Id}_{\paren{n\times n}}}w\nonumber\\ &\le \sum_{i,j}w_{ij}\paren{\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1} -\, \mathbb{Id}_{\paren{n\times n}}}_{ij}w_{ij} + \inmod{w}^2\nonumber\\ &\le \sum_{i,j}\left|w_{ij}\paren{\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1} -\, \mathbb{Id}_{\paren{n\times n}}}_{ij}w_{ij}\right| + \inmod{w}^2\nonumber\\ &\le \sum_{i,j} \left|\paren{\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1} -\, \mathbb{Id}_{\paren{n\times n}}}_{ij}\right| + n\label{eqn: b1}\\ &\le \frac{n^2\cos{\theta_0}}{1-n\cos{\theta_0}} + n\label{eqn: b2}\\ & = \frac{n}{1-n\cos{\theta_0}}\label{eqn: b3} \end{align} In \eqnref{eqn: b1} we use $w \in \bracket{-1,1}^n$. \eqnref{eqn: b2} follows using \eqnref{eqn: neu4}. Since we have $\inmod{{\mathbf{p}}}_2^2 = w^{\top}\paren{\boldsymbol{\mathrm{W}}^{\top}\boldsymbol{\mathrm{W}}}^{-1}w$ and $\cos{\theta_0} \le \frac{1}{2n}$, thus using \eqnref{eqn: b3} $$\inmod{{\mathbf{p}}}_2 \le \sqrt{\frac{n}{1-n\cos{\theta_0}}} \le \sqrt{2n}.$$ Scaling the map $\boldsymbol{\mathrm{W}}$ to $\epsilon$ yields the stated claim. \end{proof} Under the \assref{assumption: orthogonal} and \assref{assumption: bounded cone}, and bounded norm of $\boldsymbol{\theta}^*$ and $\hat{\boldsymbol{\theta}}$, we would establish that $\mathcal{TS}_{\boldsymbol{\theta}^*}$ is a $d^{\bigO{\log^2 \frac{1}{\epsilon}}}$ size $\epsilon$-approximate teaching set for $\boldsymbol{\theta}^*$. 
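As a sanity check on \lemref{lemma: maximum norm}, the following sketch instantiates the bound for $n = 2$; the basis vectors, $\epsilon$, and all numeric values are illustrative choices rather than quantities from the analysis above.

```python
import math

# Illustrative instance of the lemma with n = 2: unit-norm basis vectors
# v1, v2 whose inner product c satisfies |v1 . v2| = c <= 1/(2n) = 0.25.
n = 2
c = 0.25
v1 = (1.0, 0.0)
v2 = (c, math.sqrt(1.0 - c * c))

# Take the extreme case w = (eps, eps), i.e., p . v1 = p . v2 = eps,
# and solve the 2x2 linear system by hand for p = (p_x, p_y).
eps = 0.1
p_x = eps
p_y = (eps - c * eps) / math.sqrt(1.0 - c * c)

# Premises of the lemma: |p . v_i| <= eps for both basis vectors.
dot1 = p_x * v1[0] + p_y * v1[1]
dot2 = p_x * v2[0] + p_y * v2[1]
assert abs(dot1) <= eps + 1e-12 and abs(dot2) <= eps + 1e-12

# Conclusion of the lemma: ||p||_2 <= sqrt(2n) * eps.
norm_p = math.hypot(p_x, p_y)
assert norm_p <= math.sqrt(2 * n) * eps
```

Here the computed norm is roughly $0.126$, comfortably below the bound $\sqrt{2n}\cdot\epsilon = 0.2$.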
Before establishing the main result, we prove \thmref{thm: boundedclassifier}. Using \eqnref{eqn: largeeps}, we note that: \begin{align*} \forall\,{\mathbf{x}} \in {\mathcal{X}}\quad &\inmod{\mathbb{P}^{\perp}\Phi({\mathbf{x}})} \le \sqrt{\epsilon_s} \implies \inmod{\mathbb{P}\Phi({\mathbf{x}})} \ge \sqrt{1-\epsilon_s}\\ \forall\,({\mathbf{z}},y) \in \mathcal{TS}_{\boldsymbol{\theta}^*}\quad &\inmod{\mathbb{P}^{\perp}\Phi({\mathbf{z}})} \le \sqrt{\epsilon_s} \implies \inmod{\mathbb{P}\Phi({\mathbf{z}})} \ge \sqrt{1-\epsilon_s} \end{align*} Now, we can further bound the norms of $\mathbb{P}^{\perp}\hat{\boldsymbol{\theta}}$ and $\mathbb{P}^{\perp}\boldsymbol{\theta}^*$ using the triangle inequality and the boundedness of $\curlybracket{\alpha_i}_{i=1}^l$ and $\bracket{\beta_0,\gamma}$ (as shown in \assref{assumption: bounded cone}): \begin{align*} \inmod{\mathbb{P}^{\perp}\boldsymbol{\theta}^*} = \inmod{\sum_{i=1}^l \alpha_i\cdot \mathbb{P}^{\perp}\Phi({\mathbf{a}}_i)} &\le \sum_{i=1}^l \inmod{\alpha_i\cdot \mathbb{P}^{\perp}\Phi({\mathbf{a}}_i)} \le \paren{\sum_{i=1}^l \left| \alpha_i\right|}\cdot \sqrt{\epsilon_s} = \boldsymbol{C}_{\epsilon}\cdot \sqrt{\epsilon_s}\\ \inmod{\mathbb{P}^{\perp}\hat{\boldsymbol{\theta}}} = \inmod{\beta_0\cdot \mathbb{P}^{\perp}\Phi({\mathbf{a}}) + \sum_{j=1}^{r-1}\gamma_j\cdot \mathbb{P}^{\perp}\Phi({\mathbf{z}}_j)} &\le \inmod{\beta_0\cdot \mathbb{P}^{\perp}\Phi({\mathbf{a}})} + \sum_{i=1}^{r-1} \inmod{\gamma_i\cdot \mathbb{P}^{\perp}\Phi({\mathbf{z}}_i)} \le \paren{\left|\beta_0\right| + \sum_{i=1}^{r-1} \left| \gamma_i\right|}\cdot \sqrt{\epsilon_s} = \boldsymbol{D}_{\epsilon}\cdot \sqrt{\epsilon_s} \end{align*} \begin{proof}[Proof of \thmref{thm: boundedclassifier}] In the following, we bound $\left|f^*({\mathbf{x}}) - \hat{f}({\mathbf{x}})\right|$ by a constant multiple of $\sqrt{\epsilon}$; rescaling $\epsilon$ then yields the claimed bound.
In order to bound the modulus, we split the difference using $\mathbb{P}$ and $\mathbb{P}^{\bot}$ and then analyze the corresponding terms. We can write any classifier $f$ as $f({\mathbf{x}}) = \boldsymbol{\theta}\cdot\Phi({\mathbf{x}}) = \mathbb{P}\boldsymbol{\theta}\cdot\mathbb{P}\Phi({\mathbf{x}}) + \mathbb{P}^{\bot}\boldsymbol{\theta}\cdot\mathbb{P}^{\bot}\Phi({\mathbf{x}})$. Thus, we have: \begin{align} \left|f^*({\mathbf{x}}) - \hat{f}({\mathbf{x}})\right| &= \left|\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{x}}) + \mathbb{P}^{\bot}\boldsymbol{\theta}^*\cdot\mathbb{P}^{\bot}\Phi({\mathbf{x}}) - \mathbb{P}\hat{\boldsymbol{\theta}}\cdot\mathbb{P}\Phi({\mathbf{x}}) - \mathbb{P}^{\bot}\hat{\boldsymbol{\theta}}\cdot\mathbb{P}^{\bot}\Phi({\mathbf{x}})\right|\nonumber\\ &\le \left|\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{x}})- \mathbb{P}\hat{\boldsymbol{\theta}}\cdot\mathbb{P}\Phi({\mathbf{x}})\right| + \left|\mathbb{P}^{\bot}\boldsymbol{\theta}^*\cdot\mathbb{P}^{\bot}\Phi({\mathbf{x}})-\mathbb{P}^{\bot}\hat{\boldsymbol{\theta}}\cdot\mathbb{P}^{\bot}\Phi({\mathbf{x}})\right|\label{eqn: split1}\\ &\le \left|\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{x}})-\mathbb{P}\hat{\boldsymbol{\theta}}\cdot\mathbb{P}\Phi({\mathbf{x}})\right| + \inmod{\mathbb{P}^{\bot}\boldsymbol{\theta}^*-\mathbb{P}^{\bot}\hat{\boldsymbol{\theta}}}\cdot\inmod{\mathbb{P}^{\bot}\Phi({\mathbf{x}})}\label{eqn: split2}\\ &\le \underbrace{\left|\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{x}})- \mathbb{P}\hat{\boldsymbol{\theta}}\cdot\mathbb{P}\Phi({\mathbf{x}})\right|}_{\bigstar} +\, \paren{\boldsymbol{C}_{\epsilon}+\boldsymbol{D}_{\epsilon}}\cdot \epsilon_s \label{eqn: split3} \end{align} \eqnref{eqn: split1} follows from the triangle inequality.
We can further bound $\left|\mathbb{P}^{\bot}\boldsymbol{\theta}^*\cdot\mathbb{P}^{\bot}\Phi({\mathbf{x}})-\mathbb{P}^{\bot}\hat{\boldsymbol{\theta}}\cdot\mathbb{P}^{\bot}\Phi({\mathbf{x}})\right|$ using the Cauchy-Schwarz inequality, which gives \eqnref{eqn: split2}. Using the observations $||\mathbb{P}^{\bot}\hat{\boldsymbol{\theta}}|| \le \boldsymbol{D}_{\epsilon}\cdot\sqrt{\epsilon_s}$ and $\inmod{\mathbb{P}^{\bot}\boldsymbol{\theta}^*} \le \boldsymbol{C}_{\epsilon}\cdot\sqrt{\epsilon_s}$, we can upper bound $\inmod{\mathbb{P}^{\bot}\boldsymbol{\theta}^*-\mathbb{P}^{\bot}\hat{\boldsymbol{\theta}}}$ by $\paren{\boldsymbol{C}_{\epsilon}+\boldsymbol{D}_{\epsilon}}\cdot \sqrt{\epsilon_s}$. Since ${\mathbf{x}} \in {\mathcal{X}}$, we have $\inmod{\mathbb{P}^{\bot}\Phi({\mathbf{x}})} \le \sqrt{\epsilon_s}$ (as shown in \eqnref{eqn: largeeps}), which gives \eqnref{eqn: split3}. Now, the key is to bound $(\bigstar)$ appropriately; the result then follows. We rewrite $\mathbb{P}\hat{\boldsymbol{\theta}}\cdot\mathbb{P}\Phi({\mathbf{x}})$ in terms of the basis formed by $\{\mathbb{P}\Phi({\mathbf{z}}_i)\}_{i=1}^{r-1} \cup \{\mathbb{P}\boldsymbol{\theta}^*\}$ (by \assref{assumption: orthogonal}, $\{\mathbb{P}\Phi({\mathbf{z}}_i)\}_{i=1}^{r-1}$ are linearly independent and orthogonal to $\mathbb{P}\boldsymbol{\theta}^*$). Using this basis, we can write $\mathbb{P}\hat{\boldsymbol{\theta}} = \sum_{i=1}^{r-1}c_i\cdot \mathbb{P}\Phi({\mathbf{z}}_i) + \lambda_r\cdot\mathbb{P}\boldsymbol{\theta}^*$ for some scalars $c_1,\cdots,c_{r-1},\lambda_r$. Alternatively, we can write $\mathbb{P}\hat{\boldsymbol{\theta}} = \beta_0\cdot \mathbb{P}\Phi({\mathbf{a}}) + \sum_{j=1}^{r-1}\gamma_j\cdot \mathbb{P}\Phi({\mathbf{z}}_j)$ where $\beta_0 > 0$ (as shown in \appref{appendixsub: solutionexists}). This implies $\lambda_r > 0$, because $\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{a}}) > 0$ (cf.\ \secref{subsec.gaussiankernel}).
We study the decomposition of $\mathbb{P}\hat{\boldsymbol{\theta}}$ in terms of the basis in order to understand the component of $\mathbb{P}\hat{\boldsymbol{\theta}}$ along $\mathbb{P}\boldsymbol{\theta}^*$. Since each $\mathbb{P}\Phi({\mathbf{z}}_i)$ is orthogonal to $\mathbb{P}\boldsymbol{\theta}^*$, the Pythagorean identity gives: \begin{equation} \inmod{\mathbb{P}\hat{\boldsymbol{\theta}}}^2 = \inmod{\sum_{i=1}^{r-1}c_i\cdot \mathbb{P}\Phi({\mathbf{z}}_i)}^2 + \inmod{\lambda_r\cdot\mathbb{P}\boldsymbol{\theta}^*}^2 \label{eqn:perpnorm} \end{equation} Since $\hat{\boldsymbol{\theta}}$ is a solution to \eqnref{eqn: bounded}, $\hat{\boldsymbol{\theta}}\cdot\Phi({\mathbf{z}}_i) = 0$ for any $i \in \bracket{r-1}$. We can write this equation in terms of projections as: \begin{equation} \forall i\quad \mathbb{P}\hat{\boldsymbol{\theta}}\cdot \mathbb{P}\Phi({\mathbf{z}}_i) + \mathbb{P}^{\bot}\hat{\boldsymbol{\theta}}\cdot \mathbb{P}^{\bot}\Phi({\mathbf{z}}_i) = 0\label{eqn: expandeqn} \end{equation} Using the Cauchy-Schwarz inequality on the product $|\mathbb{P}^{\bot}\hat{\boldsymbol{\theta}}\cdot \mathbb{P}^{\bot}\Phi({\mathbf{z}}_i)|$ we obtain: \begin{equation} |\mathbb{P}^{\bot}\hat{\boldsymbol{\theta}}\cdot \mathbb{P}^{\bot}\Phi({\mathbf{z}}_i)| \le ||\mathbb{P}^{\bot}\hat{\boldsymbol{\theta}}||\cdot ||\mathbb{P}^{\bot}\Phi({\mathbf{z}}_i)|| \le \boldsymbol{D}_{\epsilon}\cdot\epsilon_s \nonumber \end{equation} Plugging this into \eqnref{eqn: expandeqn}, we get the following bound on $|\mathbb{P}\hat{\boldsymbol{\theta}}\cdot \mathbb{P}\Phi({\mathbf{z}}_i)|$: \begin{equation} |\mathbb{P}\hat{\boldsymbol{\theta}}\cdot \mathbb{P}\Phi({\mathbf{z}}_i)| \le \boldsymbol{D}_{\epsilon}\cdot\epsilon_s \label{eqn: boundproj} \end{equation} We denote $V_{O} := \sum_{i=1}^{r-1}c_i\cdot \mathbb{P}\Phi({\mathbf{z}}_i)$. Notice that $V_{O}$ is the orthogonal projection of $\mathbb{P}\hat{\boldsymbol{\theta}}$ onto the subspace $\textbf{span}\langle\mathbb{P}\Phi({\mathbf{z}}_1),\cdots,\mathbb{P}\Phi({\mathbf{z}}_{r-1})\rangle$.
Thus, we can rewrite \eqnref{eqn: boundproj} further (using the orthogonality $\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{z}}_i) = 0$) as: \begin{equation} |\mathbb{P}\hat{\boldsymbol{\theta}}\cdot \mathbb{P}\Phi({\mathbf{z}}_i)| = |\paren{V_{O} + \lambda_r\cdot\mathbb{P}\boldsymbol{\theta}^*}\cdot \mathbb{P}\Phi({\mathbf{z}}_i)| = |V_{O}\cdot \mathbb{P}\Phi({\mathbf{z}}_i)| \le \boldsymbol{D}_{\epsilon}\cdot\epsilon_s \nonumber \end{equation} Notice that $||\mathbb{P}\Phi({\mathbf{z}}_i)|| \ge \sqrt{1-\epsilon_s}$. Hence, the component of $V_{O}$ along each $\mathbb{P}\Phi({\mathbf{z}}_i)$ is upper bounded by $\frac{\boldsymbol{D}_{\epsilon}\cdot\epsilon_s}{\sqrt{1-\epsilon_s}}$. Since $\curlybracket{\mathbb{P}\Phi({\mathbf{z}}_1),\cdots,\mathbb{P}\Phi({\mathbf{z}}_{r-1})}$ satisfies the conditions of \lemref{lemma: maximum norm} (the smoothness condition mentioned in \secref{subsec.gaussiankernel}), we can bound the norm of $V_{O}$ as follows: \begin{equation} ||V_{O}|| \le \sqrt{2(r-1)}\cdot \frac{\boldsymbol{D}_{\epsilon}\cdot\epsilon_s}{\sqrt{1-\epsilon_s}} \label{eqn: finalbound} \end{equation} Using \eqnref{eqn:perpnorm} and \eqnref{eqn: finalbound} we can lower bound the norm of $\lambda_r\cdot\mathbb{P}\boldsymbol{\theta}^*$ as follows: \begin{align} \inmod{\mathbb{P}\hat{\boldsymbol{\theta}}}^2 =& \inmod{\sum_{i=1}^{r-1}c_i\cdot \mathbb{P}\Phi({\mathbf{z}}_i)}^2 +\inmod{\lambda_r\cdot\mathbb{P}\boldsymbol{\theta}^*}^2 = \inmod{V_{O}}^2 +\inmod{\lambda_r\cdot\mathbb{P}\boldsymbol{\theta}^*}^2\nonumber\\ \implies& \inmod{\lambda_r\cdot\mathbb{P}\boldsymbol{\theta}^*}^2 \ge \paren{1-\boldsymbol{D}_{\epsilon}^2\cdot\epsilon_s} - 2\paren{r-1}\cdot \frac{\boldsymbol{D}_{\epsilon}^2\cdot\epsilon_s^2}{\paren{1-\epsilon_s}} \ge 1-2\boldsymbol{D}_{\epsilon}^2\cdot\epsilon_s \label{eqn: actualnorm} \end{align} This follows because $\inmod{\mathbb{P}\hat{\boldsymbol{\theta}}}^2 \ge \paren{1-\boldsymbol{D}_{\epsilon}^2\cdot\epsilon_s} $ as $\inmod{\hat{\boldsymbol{\theta}}} = \bigO{1}$ and $\sqrt{2(r-1)}\cdot
\frac{\epsilon}{(\sqrt{d})^s} \le \epsilon$. With these observations we can rewrite $\paren{\bigstar}$ as follows: \begin{align} \left|\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{x}})- \mathbb{P}\hat{\boldsymbol{\theta}}\cdot\mathbb{P}\Phi({\mathbf{x}})\right| &= \left|\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{x}})- \sum_{i=1}^{r-1}c_i\cdot \mathbb{P}\Phi({\mathbf{z}}_i)\cdot\mathbb{P}\Phi({\mathbf{x}}) - \lambda_r\cdot\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{x}})\right|\nonumber\\ &\le \left|\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{x}})- \lambda_r\cdot\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{x}})\right| + \left| \sum_{i=1}^{r-1}c_i\cdot \mathbb{P}\Phi({\mathbf{z}}_i)\cdot\mathbb{P}\Phi({\mathbf{x}})\right|\label{eqn: triangle}\\ &\le \sqrt{|\boldsymbol{C}_{\epsilon}^2 - 2\boldsymbol{D}_{\epsilon}^2|}\cdot\sqrt{\epsilon_s} + \inmod{\sum_{i=1}^{r-1}c_i\cdot \mathbb{P}\Phi({\mathbf{z}}_i)}\cdot \inmod{\mathbb{P}\Phi({\mathbf{x}})}\label{eqn: boundfinal1}\\ &\le \sqrt{|\boldsymbol{C}_{\epsilon}^2 - 2\boldsymbol{D}_{\epsilon}^2|}\cdot\sqrt{\epsilon_s} +\sqrt{2(r-1)}\cdot \frac{\boldsymbol{D}_{\epsilon}\cdot\epsilon_s}{\sqrt{1-\epsilon_s}}\label{eqn: boundfinal2}\\ & \le \frac{\sqrt{|\boldsymbol{C}_{\epsilon}^2 - 2\boldsymbol{D}_{\epsilon}^2|}\cdot\sqrt{\epsilon}}{(\sqrt{d})^{s/2}} + 2\boldsymbol{D}_{\epsilon}\cdot\epsilon\label{eqn: boundfinal3}\\ & \le 2\max\left\{\frac{\sqrt{|\boldsymbol{C}_{\epsilon}^2 - 2\boldsymbol{D}_{\epsilon}^2|}}{(\sqrt{d})^{s/2}},\, 2\boldsymbol{D}_{\epsilon}\cdot\sqrt{\epsilon}\right\}\cdot \sqrt{\epsilon}\label{eqn: boundfinal4} \end{align} \eqnref{eqn: triangle} is a direct implication of the triangle inequality.
In \eqnref{eqn: boundfinal1}, for the first term we note that $\lambda_r > 0$ (as argued above), that $\inmod{\mathbb{P}\boldsymbol{\theta}^*} \ge \sqrt{1-\boldsymbol{C}_{\epsilon}^2\cdot\epsilon_s}$, and use \eqnref{eqn: actualnorm}; for the second term we use the Cauchy-Schwarz inequality. \eqnref{eqn: boundfinal2} follows using \eqnref{eqn: finalbound} and the fact that $||\mathbb{P}\Phi({\mathbf{x}})||$ is bounded by 1. Unfolding the value of $\epsilon_s \le \frac{\epsilon}{(\sqrt{d})^s}$ gives \eqnref{eqn: boundfinal3}. Rewriting \eqnref{eqn: boundfinal3} as a bound in terms of $\sqrt{\epsilon}$ yields \eqnref{eqn: boundfinal4}. Now, using \eqnref{eqn: split3} and \eqnref{eqn: boundfinal4}, we can bound $\left|f^*({\mathbf{x}}) - \hat{f}({\mathbf{x}})\right|$ as follows: \begin{align*} \left|f^*({\mathbf{x}}) - \hat{f}({\mathbf{x}})\right| &\le 2\max\left\{\frac{\sqrt{|\boldsymbol{C}_{\epsilon}^2 - 2\boldsymbol{D}_{\epsilon}^2|}}{(\sqrt{d})^{s/2}},\, 2\boldsymbol{D}_{\epsilon}\cdot\sqrt{\epsilon}\right\}\cdot \sqrt{\epsilon} + \paren{\boldsymbol{C}_{\epsilon}+\boldsymbol{D}_{\epsilon}}\cdot \frac{\epsilon}{(\sqrt{d})^s}\\ &\le 3\max\left\{\frac{\sqrt{|\boldsymbol{C}_{\epsilon}^2 - 2\boldsymbol{D}_{\epsilon}^2|}}{(\sqrt{d})^{s/2}},\, 2\boldsymbol{D}_{\epsilon}\cdot\sqrt{\epsilon},\, \paren{\boldsymbol{C}_{\epsilon}+\boldsymbol{D}_{\epsilon}}\cdot \frac{\sqrt{\epsilon}}{(\sqrt{d})^s} \right \}\cdot \sqrt{\epsilon}\\ &\le 3\boldsymbol{C}'\cdot \sqrt{\epsilon} \end{align*} where $\boldsymbol{C}' := \max\left\{\frac{\sqrt{|\boldsymbol{C}_{\epsilon}^2 - 2\boldsymbol{D}_{\epsilon}^2|}}{(\sqrt{d})^{s/2}},\, 2\boldsymbol{D}_{\epsilon}\cdot\sqrt{\epsilon},\, \paren{\boldsymbol{C}_{\epsilon}+\boldsymbol{D}_{\epsilon}}\cdot \frac{\sqrt{\epsilon}}{(\sqrt{d})^s} \right \}$.
\\ Notice that all the terms in $\max\left\{\frac{\sqrt{|\boldsymbol{C}_{\epsilon}^2 - 2\boldsymbol{D}_{\epsilon}^2|}}{(\sqrt{d})^{s/2}},\, 2\boldsymbol{D}_{\epsilon}\cdot\sqrt{\epsilon},\, \paren{\boldsymbol{C}_{\epsilon}+\boldsymbol{D}_{\epsilon}}\cdot \frac{\sqrt{\epsilon}}{(\sqrt{d})^s} \right \}$ are smaller than 1 because of the boundedness of $\boldsymbol{C}_{\epsilon}$ and $\boldsymbol{D}_{\epsilon}$. Thus, we have shown a $3C'\cdot\sqrt{\epsilon}$ bound (where $C'$ is a constant smaller than 1) on the point-wise difference of $\hat{f}$ and $f^*$. Now, rescaling $\epsilon$, i.e., running the argument with $\epsilon^2/9$ in place of $\epsilon$, we get the desired bound. Hence, the main claim of \thmref{thm: boundedclassifier} is proven, i.e., $ \left|f^*({\mathbf{x}}) - \hat{f}({\mathbf{x}})\right| \le \epsilon$. \end{proof} Now, we complete the proof of the main result of \secref{subsec.gaussiankernel}, i.e., \thmref{thm: gaussian_main_thm}, which bounds the error incurred by the solution $\hat{\boldsymbol{\theta}} \in {\boldsymbol{\mathcal{A}}}_{opt}(\mathcal{TS}_{\boldsymbol{\theta}^*})$. The point-wise closeness of $f^*$ and $\hat{f}$ established in \thmref{thm: boundedclassifier} is key to bounding the error. We complete the proof as follows: \begin{proof}[Proof of \thmref{thm: gaussian_main_thm}] We show the error analysis when data-points are sampled from the data distribution ${\mathcal{P}}$.
\begin{align} \left|\textbf{err}(f^*) - \textbf{err}(\hat{f})\right| &= \left|\expover{({\mathbf{x}},y) \sim {\mathcal{P}}}{\max(-y\cdot f^*({\mathbf{x}}), 0)} - \expover{({\mathbf{x}},y) \sim {\mathcal{P}}}{\max(-y\cdot \hat{f}({\mathbf{x}}), 0)}\right|\label{eqn:final1}\\ &= \left|\expover{({\mathbf{x}},y) \sim {\mathcal{P}}}{\max(-y\cdot f^*({\mathbf{x}}), 0) - \max(-y\cdot \hat{f}({\mathbf{x}}), 0)}\right|\label{eqn:final2}\\ &\le \expover{({\mathbf{x}},y) \sim {\mathcal{P}}}{\left|f^*({\mathbf{x}}) - \hat{f}({\mathbf{x}})\right|}\label{eqn:final3}\\ &\le \epsilon \label{eqn:final4} \end{align} \eqnref{eqn:final1} follows from the definition of the $\textbf{err}(\cdot)$ function. Because of linearity of expectation, we get \eqnref{eqn:final2}. In \eqnref{eqn:final3}, we use the fact that the modulus of an expectation is bounded by the expectation of the modulus, together with the observation that $u \mapsto \max(u, 0)$ is 1-Lipschitz and $|y| = 1$, so that $\left|\max(-y\cdot f^*({\mathbf{x}}), 0) - \max(-y\cdot \hat{f}({\mathbf{x}}), 0)\right| \le \left|f^*({\mathbf{x}}) - \hat{f}({\mathbf{x}})\right|$. In \thmref{thm: boundedclassifier}, we showed that for any ${\mathbf{x}} \in {\mathcal{X}}$, $\left|f^*({\mathbf{x}}) - \hat{f}({\mathbf{x}})\right| \le \epsilon$. Thus, the main claim follows. \end{proof} \newpage \section{Linear Perceptrons}\label{appendix: linear perceptron} \section{Polynomial Kernel Perceptron}\label{appendix: polynomial perceptron} In this appendix, we provide the proof of the main result of \secref{subsec.poly}, i.e., \thmref{thm: poly_main_theorem}. We complete the proof by constructing a teaching set for exact teaching. Similar to the proof of \thmref{thm: linear perceptron main result}, the key idea is to find linearly independent polynomials on the orthogonal subspace defined by $\boldsymbol{\theta}^* \in {\mathcal{H}}_{{\mathcal{K}}}$. \assref{assumption: polyorthogonal} ensures that there are $r-1$ such linearly independent polynomials. The rest of the argument follows steps similar to those in the proof of \thmref{thm: linear perceptron main result}.
We assume that $\boldsymbol{\theta}^*$ is non-degenerate and has at least one point in $\mathbb{R}^d$ classified \emph{strictly} positive, and provide the proof below. \begin{proof}[Proof of \thmref{thm: poly_main_theorem}] First, we show the construction of a teaching set for a target model $\boldsymbol{\theta}^* \in {\mathcal{H}}_{{\mathcal{K}}}$. Denote by $\mathcal{V}^{\bot}_{\boldsymbol{\theta}^*} \subset {\mathcal{H}}_k$ ($\cong {\mathcal{H}}_{{\mathcal{K}}}$ by \propref{prop: polynomial space}, i.e., isomorphic as vector spaces) the orthogonal subspace of $\boldsymbol{\theta}^*$. Since $\boldsymbol{\theta}^*$ satisfies \assref{assumption: polyorthogonal}, there exists a set of $r-1$ linearly independent vectors (polynomials, because of \propref{prop: polynomial space}) of the form $\{\Phi({\mathbf{z}}_i)\}_{i=1}^{r-1}$ in $\mathcal{V}^{\bot}_{\boldsymbol{\theta}^*}$, where $\curlybracket{{\mathbf{z}}_i}_{i=1}^{r-1} \subset \mathbb{R}^d$. Note that $\boldsymbol{\theta}^*\cdot\Phi({\mathbf{z}}_i) = 0$. Now, pick ${\mathbf{a}} \in \mathbb{R}^d$ such that $\boldsymbol{\theta}^*\cdot\Phi({\mathbf{a}}) >0$ (using non-degeneracy). We note that $\curlybracket{({\mathbf{z}}_i,1)}_{i=1}^{r-1}\cup \curlybracket{({\mathbf{z}}_i,-1)}_{i=1}^{r-1}\cup \curlybracket{({\mathbf{a}},1)}$ forms a teaching set for the decision boundary corresponding to $\boldsymbol{\theta}^*$. Using similar ideas from the proof of \thmref{thm: linear perceptron main result}, we notice that any solution $\hat{\boldsymbol{\theta}}$ to \eqnref{eqn: objectkernel} satisfies $\hat{\boldsymbol{\theta}}\cdot\Phi({\mathbf{z}}_i) = 0$ for the labelled datapoints corresponding to $\Phi({\mathbf{z}}_i)$. Thus, $\hat{\boldsymbol{\theta}}$ has no component along any $\Phi({\mathbf{z}}_i)$. \eqnref{eqn: objectkernel} is minimized if $\hat{\boldsymbol{\theta}}\cdot\Phi({\mathbf{a}}) \ge 0$, implying $\hat{\boldsymbol{\theta}} = t\boldsymbol{\theta}^*$ for some scalar $t \ge 0$.
Thus, under \assref{assumption: polyorthogonal}, we have shown an upper bound of $\bigO{\binom{d+k-1}{k}}$ on the size of a teaching set for $\boldsymbol{\theta}^*$. \end{proof} \newpage \section{Conclusion} We have studied and extended the notion of teaching dimension for optimization-based perceptron learners. We have also studied a more general notion of approximate teaching which encompasses the notion of exact teaching. To the best of our knowledge, our exact teaching dimensions for the linear and polynomial perceptron learners are new; so are the upper bound on the approximate teaching dimension of the Gaussian perceptron learner and our analysis technique in general. There are many possible extensions to the present work. For example, one may extend our analysis by relaxing the assumptions imposed on the data distribution for polynomial and Gaussian perceptrons. This can potentially be achieved by analyzing the linear perceptron and finding ways to nullify subspaces beyond those spanned by orthogonal vectors. This could both extend the exact-teaching results for the polynomial perceptron learner to more general cases and yield a tighter bound on the approximate teaching dimension of the Gaussian perceptron learner. On the other hand, a natural extension of our work is to understand the approximate teaching complexity for other types of ERM learners, e.g., kernel SVM, kernel ridge regression, and kernel logistic regression. We believe the current work and its extensions will enrich our understanding of optimal and approximate teaching and enable novel applications.
\section{Introduction} Machine teaching studies the problem of finding an optimal training sequence to steer a learner towards a target concept \cite{DBLP:journals/corr/ZhuSingla18}. An important learning-theoretic complexity measure of machine teaching is the \emph{teaching dimension} \cite{goldman1995complexity}, which specifies the minimal number of training examples required in the worst case to teach a target concept. Over the past few decades, the notion of teaching dimension has been investigated under a variety of learner models and teaching protocols (e.g.,
\citet{cakmak2012algorithmic,singla2013actively,singla2014near,liu2017iterative,haug2018teaching,tschiatschek2019learner,DBLP:conf/icml/LiuDLLRS18,DBLP:conf/ijcai/KamalarubanDCS19,DBLP:conf/nips/Hunziker0AR0PYS19,DBLP:conf/ijcai/DevidzeMH0S20,DBLP:conf/icml/RakhshaRD0S20}). One of the most studied scenarios is the case of teaching a version-space learner \cite{goldman1995complexity,article:anthony95,zilles2008teaching,doliwa2014recursive,chen2018understanding,mansouri2019preference,pmlr-v98-kirkpatrick19a}. Upon receiving a sequence of training examples from the teacher, a version-space learner maintains a set of hypotheses that are consistent with the training examples, and outputs a \emph{random} hypothesis from this set. As a canonical example, consider teaching a 1-dimensional binary threshold function $f_{\theta^*}(x) = \id{x -\theta^*}$ for $x\in [0,1]$. For a learner with a finite (or countably infinite) version space, e.g., $\theta \in \{\frac{i}{n}\}_{i=0,\dots,n}$ where $n\in \mathbb{Z}^+$ (see \figref{fig:example.1d-vs}), a smallest training set is $\{\left(\frac{i}{n},0\right), \left(\frac{i+1}{n},1\right)\}$ where $\frac{i}{n} \leq \theta^* < \frac{i+1}{n}$; thus the teaching dimension is $2$. However, when the version space is continuous, the teaching dimension becomes $\infty$, because it is no longer possible for the learner to pick out a unique threshold $\theta^*$ with a finite training set. This is due to two key (limiting) modeling assumptions of the version-space learner: (1) all (consistent) hypotheses in the version space are treated equally, and (2) there exists a hypothesis in the version space that is consistent with all training examples. As one can see, these assumptions fail to capture the behavior of many modern learning algorithms, where the best hypotheses are often selected via \emph{optimizing} certain loss functions, and the data is not perfectly separable (i.e., not realizable w.r.t. the hypothesis/model class).
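The collapse of the finite version space under the two-example teaching set can be checked directly. The following is a minimal sketch with hypothetical values $n = 10$ and $\theta^* = 4/n$; it assumes the labeling convention $f_\theta(x) = 1$ iff $x > \theta$, under which the stated teaching set is consistent with $\theta^*$:

```python
# Finite version space {i/n : i = 0..n}; n and theta* are hypothetical.
# Assumed convention: f_theta(x) = 1 iff x > theta, so that the teaching
# set {(i/n, 0), ((i+1)/n, 1)} is consistent with theta* = i/n.
n = 10
version_space = [i / n for i in range(n + 1)]
theta_star = 4 / n

def f(theta, x):
    return 1 if x > theta else 0

teaching_set = [(4 / n, 0), (5 / n, 1)]   # {(i/n, 0), ((i+1)/n, 1)} with i = 4

# Hypotheses consistent with both examples: exactly {theta*}.
survivors = [th for th in version_space
             if all(f(th, x) == y for x, y in teaching_set)]
assert survivors == [theta_star]
```

Two examples already pin down a unique hypothesis here; with a continuous version space no finite set achieves this, which is what motivates the ERM view.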
To lift these modeling assumptions, a more realistic teaching scenario is to consider the learner as an \emph{empirical risk minimizer} (ERM). In fact, under the realizable setting, the version-space learner could be viewed as an ERM that optimizes the 0-1 loss---one that finds all hypotheses with zero training error. Recently, \citet{JMLR:v17:15-630} studied the teaching dimension of linear ERM, and established values of teaching dimension for several classes of linear (regularized) ERM learners, including support vector machine (SVM), logistic regression and ridge regression. As illustrated in \figref{fig:example.1d-hinge}, for the previous example it suffices to use $\{\left(\theta^*-\epsilon,0\right), \left(\theta^*+\epsilon,1\right)\}$ with any $\epsilon \leq \min(1-\theta^*, \theta^*)$ as the training set to teach $\theta^*$ as an optimizer of the SVM objective (i.e., $\ell_2$-regularized hinge loss); hence the teaching dimension is 2. In \figref{fig:example.1d-perceptron}, we consider teaching an ERM learner with perceptron loss, i.e., $\ell(f_\theta(x), y) = \max\left( -y\cdot (x-\theta), 0\right)$ (where $y \in \curlybracket{-1,1}$). If the teacher is allowed to construct \emph{any} training example with \emph{any} labeling\footnote{If the teacher is restricted to only provide consistent labels (i.e., the realizable setting), then the ERM with perceptron loss reduces to the version space learner, where the teaching dimension is $\infty$.}, then it is easy to verify that the minimal training set is $\{(\theta^*, -1), (\theta^*,1)\}$.
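For the perceptron-loss example, the claim can be verified by noting that the total loss of the two-point set is exactly $|\theta - \theta^*|$, uniquely minimized at $\theta^*$. A minimal numeric sketch (with a hypothetical $\theta^* = 0.6$ and a grid search standing in for the ERM):

```python
# Total perceptron loss of {(theta*, -1), (theta*, +1)} as a function of theta.
theta_star = 0.6   # hypothetical target threshold

def perceptron_loss(theta, x, y):
    return max(-y * (x - theta), 0.0)

def total_loss(theta):
    return sum(perceptron_loss(theta, x, y)
               for x, y in [(theta_star, -1), (theta_star, +1)])

# total_loss(theta) = max(theta* - theta, 0) + max(theta - theta*, 0)
#                   = |theta - theta*|, uniquely minimized at theta = theta*.
grid = [i / 1000 for i in range(1001)]
assert min(grid, key=total_loss) == theta_star
assert all(abs(total_loss(t) - abs(t - theta_star)) < 1e-12 for t in grid)
```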
\begin{figure}[t] \centering \begin{subfigure}[b]{.2\textwidth} \centering \includegraphics[trim={0, 0, 0, 10mm}, width=\linewidth]{fig/illu-threshold-finite.pdf} \vspace{-5mm} \caption{$\textsc{0/1}$ loss} \label{fig:example.1d-vs} \end{subfigure}\qquad \begin{subfigure}[b]{.2\textwidth} \centering \includegraphics[width=\linewidth]{fig/illu-threshold-svm.pdf} \vspace{-5mm} \caption{SVM (hinge loss)} \label{fig:example.1d-hinge} \end{subfigure}\\%\qquad \vspace{2mm} \begin{subfigure}[b]{.2\textwidth} \centering \includegraphics[width=\linewidth]{fig/illu-threshold-perceptron.pdf} \vspace{-5mm} \caption{Perceptron} \label{fig:example.1d-perceptron} \end{subfigure} \caption{Teaching a 1D threshold function to an ERM learner. Training instances are marked in grey. (a) Version-space learner with a finite hypothesis set. (b) SVM and training set $\{\left(\theta^*-\epsilon,0\right), \left(\theta^*+\epsilon,1\right)\}$. (c) ERM learner with perceptron loss and training set $\{(\theta^*, -1), (\theta^*,1)\}$. }\label{fig:illu-relaxed-general} \vspace{-3mm} \end{figure} While these results show promise at understanding optimal teaching for ERM learners, existing work \cite{JMLR:v17:15-630} has focused exclusively on the linear setting, with the goal of teaching the exact hypothesis (e.g., teaching the exact model parameters or the exact decision boundary for classification tasks). Aligned with these results, we establish an upper bound as shown in \secref{subsec.linear}. It remains a fundamental challenge to rigorously characterize the teaching complexity for nonlinear learners. Furthermore, in cases where exact teaching is not possible with a finite training set, the classical teaching dimension no longer captures the fine-grained complexity of the teaching tasks; hence one needs to relax the teaching goals and investigate new notions of teaching complexity. In this paper, we aim to address the above challenges.
We focus on kernel perceptron, a specific type of ERM learner that is less understood even under the linear setting. Following the convention in teaching ERM learners, we consider the \emph{constructive} setting, where the teacher can construct arbitrary teaching examples in the support of the data distribution. Our contributions are highlighted below, with main theoretical results summarized in Table~\ref{tab:results-overview}. \begin{itemize} \item We formally define approximate teaching of kernel perceptron, and propose a novel measure of teaching complexity, namely the \emph{$\epsilon$-approximate teaching dimension} ($\epsilon$-TD), which captures the complexity of teaching a ``relaxed'' target that is close to the target hypothesis in terms of the expected risk. Our relaxed notion of teaching dimension strictly generalizes the teaching dimension of \citet{JMLR:v17:15-630}, where it trades off the teaching complexity against the risk of the taught hypothesis, and hence is more practical in characterizing the complexity of a teaching task (\secref{sec.statement}).\newline \item We show that exact teaching is feasible for kernel perceptrons with finite dimensional feature maps, such as linear kernel and polynomial kernel. Specifically, for data points in $\mathbb{R}^d$, we establish a $\bigTheta{d}$ bound on the teaching dimension of linear perceptron. Under a mild condition on data distribution, we provide a tight bound of $\bigTheta{\binom{d+k-1}{k}}$ for polynomial perceptron of order $k$. We also exhibit optimal training sets that match these teaching dimensions (\secref{subsec.linear} and \secref{subsec.poly}).\newline \item We further show that for Gaussian kernelized perceptron, exact teaching is not possible with a finite set of hypotheses, and then establish a $d^{\bigO{\log^2 \frac{1}{\epsilon}}}$ bound on the $\epsilon$-approximate teaching dimension (\secref{subsec.gaussiankernel}). 
To the best of our knowledge, these results constitute the first known bounds on (approximately) teaching a non-linear ERM learner (\secref{sec.theoreticalresults}). \end{itemize} \begin{table}[t!] \centering \scalebox{.88}{ \begin{tabular}{cccc} \toprule & \textbf{linear} & \textbf{polynomial} & \textbf{Gaussian} \\ \midrule TD (exact) & $\bigTheta{d}$ & $\bigTheta{\binom{d+k-1}{k}}$ & $\infty$ \\ $\epsilon$-approximate TD & - & - & $d^{\bigO{\log^2 \frac{1}{\epsilon}}}$ \\ \textbf{Assumption } & - & \ref{assumption: polyorthogonal} & \ref{assumption: orthogonal}, \ref{assumption: bounded cone}\\ \bottomrule \end{tabular} } \caption{Teaching dimension for kernel perceptron}\label{tab:results-overview} \vspace{-3mm} \end{table} \section{Motivation for Assumptions}\label{appendix: motivation} In this appendix, we discuss the motivation and insights for the key \assumref{assumption: polyorthogonal}{assumption: orthogonal}{assumption: bounded cone} made in \secref{subsec.poly} and \secref{subsec.gaussiankernel}. This appendix is organized as follows: \appref{appsubsec.limitation} discusses \assref{assumption: polyorthogonal} and provides the proofs of \lemref{lemma: exact teaching} and \lemref{lemma: approximate teaching } in the context of the polynomial kernel (see \secref{subsec.poly}); \appref{appsubsec.approxassumtions} discusses \assref{assumption: orthogonal} and \assref{assumption: bounded cone} in the context of the Gaussian kernel perceptron (see \secref{subsec.gaussiankernel}). \paragraph{Reformulation of a model $\boldsymbol{\theta}$ as a polynomial form} As noted in \secref{sec.statement}, we consider the reproducing kernel Hilbert space~\cite{learnkernel} ${\mathcal{H}}_{{\mathcal{K}}}$, which is spanned by linear combinations of kernel functions of the form ${\mathcal{K}}({\mathbf{x}}, \cdot)$.
More concretely, \begin{equation} {\mathcal{H}}_{{\mathcal{K}}} = \condcurlybracket{\sum_{i=1}^m \alpha_i\cdot{\mathcal{K}}({\mathbf{x}}_i,\cdot)}{ m \in \mathbb{N},\, {\mathbf{x}}_i \in {\mathcal{X}},\, \alpha_i \in \ensuremath{\mathbb{R}}, i = 1,\cdots,m}\nonumber \end{equation} Thus, we can write any model $f_{\boldsymbol{\theta}} \in {\mathcal{H}}_{{\mathcal{K}}}$ (parametrized by $\boldsymbol{\theta}$) as $\sum_{i=1}^n \alpha_i\cdot{\mathcal{K}}({\mathbf{x}}_i,\cdot)$ for some $n \in \mathbb{N}$, $ {\mathbf{x}}_i \in {\mathcal{X}}$ for $i \in \bracket{n}$. This is interesting because if ${\mathcal{K}}(\cdot,\cdot)$ is a polynomial kernel of degree $k$, then \begin{equation} f_{\boldsymbol{\theta}}({\mathbf{x}}) = \sum_{i=1}^n \alpha_i\cdot{\mathcal{K}}({\mathbf{x}}_i,{\mathbf{x}}) = \sum_{i=1}^n \alpha_i\cdot \normg{{\mathbf{x}}_i}{{\mathbf{x}}}^k = \sum_{i=1}^n \alpha_i\cdot \paren{{\mathbf{x}}_{i1}{\mathbf{x}}_1+\cdots+{\mathbf{x}}_{id}{\mathbf{x}}_d}^k \label{eqn: poly form} \end{equation} where ${\mathbf{x}}_i = \paren{{\mathbf{x}}_{i1},\cdots,{\mathbf{x}}_{id}}$. Thus, $f_{\boldsymbol{\theta}}(\cdot)$ can be reformulated as a homogeneous polynomial of degree $k$ in $d$ variables. Notice that for the polynomial kernel in \secref{subsec.poly}, for a target model $\boldsymbol{\theta}^*$ we study the orthogonal projections of the form $\Phi({\mathbf{x}})$ for ${\mathbf{x}} \in {\mathcal{X}}$ such that $\boldsymbol{\theta}^*\cdot\Phi({\mathbf{x}}) = 0$. Alternatively, using \eqnref{eqn: poly form} we wish to solve the polynomial equation: \begin{equation} f_{\boldsymbol{\theta}^*}({\mathbf{x}}) = 0 \implies \sum_{i=1}^n \alpha_i\cdot \paren{{\mathbf{x}}_{i1}{\mathbf{x}}_1+\cdots+{\mathbf{x}}_{id}{\mathbf{x}}_d}^k = 0 \nonumber \end{equation} where we denote $\boldsymbol{\theta}^* := \sum_{i=1}^n \alpha_i\cdot\Phi({\mathbf{x}}_i)$.
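The identity $\Phi({\mathbf{x}})\cdot\Phi({\mathbf{x}}') = \normg{{\mathbf{x}}}{{\mathbf{x}}'}^k$ behind \eqnref{eqn: poly form} can be checked numerically with an explicit multinomial feature map. A sketch with hypothetical sizes $d = 3$, $k = 4$; the feature dimension is $\binom{d+k-1}{k}$, matching the count used in \secref{subsec.poly}:

```python
import math
from itertools import combinations_with_replacement

import numpy as np

# Degree-k monomials over d variables, one per multiset of indices.
d, k = 3, 4                                   # hypothetical sizes
monomials = list(combinations_with_replacement(range(d), k))
assert len(monomials) == math.comb(d + k - 1, k)

def phi(x):
    """Explicit feature map with phi(x).phi(x') = (x.x')^k (multinomial theorem)."""
    feats = []
    for mono in monomials:
        coef = math.factorial(k)              # multinomial coefficient k!/prod(lambda_i!)
        for i in range(d):
            coef //= math.factorial(mono.count(i))
        feats.append(math.sqrt(coef) * math.prod(x[i] for i in mono))
    return np.array(feats)

rng = np.random.default_rng(1)
x, xp = rng.normal(size=d), rng.normal(size=d)
assert np.isclose(phi(x) @ phi(xp), (x @ xp) ** k)
```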
For \assref{assumption: polyorthogonal} we wish to find $\binom{d+k-1}{k}$ linearly independent real solutions of the form ${\mathbf{x}}' \in \ensuremath{\mathbb{R}}^d$ to this equation. It is well known in the polynomial algebra literature that this equation need not admit that many linearly independent real solutions, i.e., the required assumption can fail. We construct one such model in the proof of \lemref{lemma: approximate teaching }. This reformulation can be extended to a sum of polynomial kernels of the form $\sum_{j=1}^s c_j\cdot{\normg{{\mathbf{x}}}{{\mathbf{x}}'}}^j$ where $c_j \ge 0$. In \assref{assumption: orthogonal} the reformulation reduces to a variant of the above polynomial equation, i.e., \begin{equation*} \sum_{i=1}^n\alpha_i\paren{\sum_{j=1}^s c_j\cdot \paren{{\mathbf{x}}_{i1}{\mathbf{x}}_1+\cdots+{\mathbf{x}}_{id}{\mathbf{x}}_d}^j} = 0 \end{equation*} So far, we have discussed a characterization of the notion of orthogonality for a target model $\boldsymbol{\theta}^*$ in the form of a polynomial equation. This characterization helps in understanding \assref{assumption: polyorthogonal} and \assref{assumption: orthogonal}. In \appref{appsubsec.limitation}, we argue that \assref{assumption: polyorthogonal} is a natural requirement for exact teaching of a target model. \subsection{ Limitation of Exact Teaching: Polynomial Kernel Perceptron}\label{appsubsec.limitation} In this subsection, we provide the proofs of \lemref{lemma: exact teaching} and \lemref{lemma: approximate teaching } as stated in \secref{subsec.motivation}. These results establish that in the realizable setting, \assref{assumption: polyorthogonal} is required for exact teaching (\lemref{lemma: exact teaching}); furthermore, there are pathological cases where violation of the assumption leads to models that cannot be taught even approximately (\lemref{lemma: approximate teaching }). \begin{proof}[Proof of \lemref{lemma: exact teaching}] We prove the result by contradiction.
Let $\mathcal{TS}_{\boldsymbol{\theta}^*}$ be a teaching set which \tt{exactly} teaches $\boldsymbol{\theta}^*$. $\mathrm{WLOG}$ we enumerate the teaching set as $\mathcal{TS}_{\boldsymbol{\theta}^*} = \curlybracket{\paren{{\mathbf{x}}_1,y_1},\cdots,\paren{{\mathbf{x}}_n,y_n}}$. For clarity, we restate \eqref{eqn: objectkernel}: \begin{equation} {\boldsymbol{\mathcal{A}}}_{opt}(\mathcal{TS}_{\boldsymbol{\theta}^*}):= \mathop{\rm arg\,min}_{\boldsymbol{\theta} \in {\mathcal{H}}_{{\mathcal{K}}}}\sum_{i=1}^n \max(-y_i\cdot \boldsymbol{\theta}\cdot\Phi({\mathbf{x}}_i), 0)\label{eq: polyobject} \end{equation} Denote by $\mathcal{V}^{\bot}_{\boldsymbol{\theta}^*} \subset {\mathcal{H}}_{{\mathcal{K}}}$ the orthogonal subspace of $\boldsymbol{\theta}^*$. We denote the objective value of \eqnref{eq: polyobject} by ${\mathbf{p}}(\boldsymbol{\theta}) := \sum_{i = 1}^{n} \max(-y_i\cdot\boldsymbol{\theta}\cdot\Phi({\mathbf{x}}_i),\: 0)$. We further define the \tt{effective direction of a teaching point} $\paren{{\mathbf{x}}_i,y_i} \in \mathcal{TS}_{\boldsymbol{\theta}^*}$ in the RKHS ${\mathcal{H}}_{{\mathcal{K}}}$ as $\boldsymbol{d}_i := -y_i\cdot\Phi({\mathbf{x}}_i)$. In the realizable setting, i.e., all teaching points are correctly classified, it is clear that $$-y_i\cdot \boldsymbol{\theta}^*\cdot\Phi({\mathbf{x}}_i) \le 0 \implies \boldsymbol{\theta}^*\cdot \boldsymbol{d}_i \le 0.$$ Since $\boldsymbol{\theta}^*$ violates \assref{assumption: polyorthogonal}, there exists a unit-normalized direction $\hat{\boldsymbol{d}} \in \mathcal{V}^{\bot}_{\boldsymbol{\theta}^*}$ which is not spanned by ${\mathcal{S}}_{0} \triangleq \condcurlybracket{\Phi({\mathbf{x}})}{ \Phi({\mathbf{x}}) \in \mathcal{V}^{\bot}_{\boldsymbol{\theta}^*}\,\, \textnormal{for some}\,\, {\mathbf{x}} \in {\mathcal{X}}}$; in particular, we may take $\hat{\boldsymbol{d}} \perp \mathbf{span}\left\langle{\mathcal{S}}_{0}\right\rangle$.
We now show that there exists a real $\lambda > 0$ such that \begin{equation} \paren{\boldsymbol{\theta}^* + \lambda\hat{\boldsymbol{d}}} \in {\boldsymbol{\mathcal{A}}}_{opt}(\mathcal{TS}_{\boldsymbol{\theta}^*}) \label{eqn: new theta} \end{equation} Notice that if $\boldsymbol{d}_i \in \mathcal{V}^{\bot}_{\boldsymbol{\theta}^*}$ for some $i$, then $(\boldsymbol{\theta}^* + \lambda\hat{\boldsymbol{d}})\cdot \boldsymbol{d}_i = \boldsymbol{\theta}^*\cdot \boldsymbol{d}_i \le 0$, since $\hat{\boldsymbol{d}} \perp \mathbf{span}\left\langle{\mathcal{S}}_{0}\right\rangle$ and $\boldsymbol{d}_i$ is a scalar multiple of $\Phi({\mathbf{x}}_i) \in {\mathcal{S}}_0$. Now, we consider the case when $\boldsymbol{d}_i \notin \mathcal{V}^{\bot}_{\boldsymbol{\theta}^*}$. We can expand $\boldsymbol{d}_i$ as follows: \begin{equation} \boldsymbol{d}_i = a_i\hat{\boldsymbol{d}}^{\perp} + b_i\hat{\boldsymbol{d}} \label{eqn: expand} \end{equation} where $a_i$ and $b_i$ are real scalars and $\hat{\boldsymbol{d}}^{\perp}$ is the normalized projection of $\boldsymbol{d}_i$ onto the orthogonal complement of $\hat{\boldsymbol{d}}$. These constructions are illustrated in \figref{fig:lemma1}. We now compute the following dot product: \begin{align} (\boldsymbol{\theta}^* + \lambda\hat{\boldsymbol{d}})\cdot \boldsymbol{d}_i &= \boldsymbol{\theta}^*\cdot \boldsymbol{d}_i + \lambda\hat{\boldsymbol{d}}\cdot (a_i\hat{\boldsymbol{d}}^{\perp} + b_i\hat{\boldsymbol{d}}) \label{eqn: part1}\\ &= \boldsymbol{\theta}^*\cdot\boldsymbol{d}_i + \lambda b_i \label{eqn: part2} \end{align} \eqnref{eqn: part1} follows from \eqnref{eqn: expand}, and \eqnref{eqn: part2} uses $\hat{\boldsymbol{d}} \perp \hat{\boldsymbol{d}}^{\perp}$ and $||\hat{\boldsymbol{d}}|| = 1$. If $b_i \le 0$ then $(\boldsymbol{\theta}^* + \lambda\hat{\boldsymbol{d}})\cdot \boldsymbol{d}_i \le 0$, as $\boldsymbol{\theta}^*\cdot \boldsymbol{d}_i < 0$ (strictly, since $\boldsymbol{d}_i \notin \mathcal{V}^{\bot}_{\boldsymbol{\theta}^*}$).
If $b_i > 0$, then to ensure $(\boldsymbol{\theta}^* + \lambda\hat{\boldsymbol{d}})\cdot \boldsymbol{d}_i \le 0$, using \eqnref{eqn: part2} we need $$\lambda \le \frac{-\boldsymbol{\theta}^*\cdot \boldsymbol{d}_i}{b_i}$$ Since $i$ was chosen arbitrarily, over all the effective directions $\boldsymbol{d}_i \notin \mathcal{V}^{\bot}_{\boldsymbol{\theta}^*}$ with $b_i > 0$ we pick the positive scalar $\lambda$ given by $$\lambda := \min_{i:\, b_i > 0}\frac{-\boldsymbol{\theta}^*\cdot \boldsymbol{d}_i}{b_i}$$ (if no such $i$ exists, any $\lambda > 0$ works). For this choice of $\lambda$, every teaching point satisfies $(\boldsymbol{\theta}^* + \lambda\hat{\boldsymbol{d}})\cdot \boldsymbol{d}_i \le 0$, so ${\mathbf{p}}(\boldsymbol{\theta}^* + \lambda\hat{\boldsymbol{d}}) = 0$ and hence $\boldsymbol{\theta}^* + \lambda\hat{\boldsymbol{d}} \in {\boldsymbol{\mathcal{A}}}_{opt}(\mathcal{TS}_{\boldsymbol{\theta}^*})$. Since $\boldsymbol{\theta}^* + \lambda\hat{\boldsymbol{d}}$ is not a scalar multiple of $\boldsymbol{\theta}^*$, the set $\mathcal{TS}_{\boldsymbol{\theta}^*}$ as stated above cannot teach $\boldsymbol{\theta}^*$ exactly. Hence, if $\boldsymbol{\theta}^*$ violates \assref{assumption: polyorthogonal} then we cannot teach it exactly in the realizable setting. \end{proof} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.30\textwidth} \centering \includegraphics[width=\linewidth]{fig/lemma1.png} \caption{} \label{fig:lemma1} \end{subfigure} \qquad \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=\linewidth]{fig/lemma2_plot1.png} \caption{} \label{fig:lemma2} \end{subfigure} \caption{Illustrations for the proofs of \lemref{lemma: exact teaching} and \lemref{lemma: approximate teaching }. (a) For \lemref{lemma: exact teaching}, consider $\boldsymbol{\theta}^*$ as shown. $\mathbf{span}\left\langle{\mathcal{S}}_{0}\right\rangle$ only covers the direction indicated by the dashed blue arrow. $\Tilde{\boldsymbol{\theta}}$ correctly labels not only teaching examples on $\mathcal{V}^{\bot}_{\boldsymbol{\theta}^*}$ and within $\mathbf{span}\left\langle{\mathcal{S}}_{0}\right\rangle$, but also those not on $\mathcal{V}^{\bot}_{\boldsymbol{\theta}^*}$, e.g.
along effective directions $d_1$, $d_2$, $d_3$; (b) To visualize the proof idea of \lemref{lemma: approximate teaching }, we demonstrate an example in $\ensuremath{\mathbb{R}}^2$ with a feature space of dimension 3 (where $k = 2$). We consider a model $\boldsymbol{\theta}^* = \frac{1}{\sqrt{2}}\cdot\Phi((1,0))+\frac{1}{\sqrt{2}}\cdot\Phi((0,1))$. Since $k$ is even, each point $\textbf{x}$ in $\mathbb{R}^2$ corresponds to a non-negative value of $f_{\boldsymbol{\theta}^*}(\textbf{x})$. This function is plotted as the blue surface in (b) along the $z$-axis. The yellow surface represents a threshold of $\epsilon$ along the $z$-axis; thus any point above it has value more than $\epsilon$. A red $\delta$-norm ring on the $\ensuremath{\mathbb{R}}^2$-plane denotes the constraint on the norm of a teaching point ($||\Phi({\mathbf{x}})|| = \delta \implies ||{\mathbf{x}}|| = \delta^{1/4}$). If we are constrained to select only points outside of the red $\delta$-norm ring, then the plot illustrates a situation where no point outside the ring satisfies $f_{\boldsymbol{\theta}^*}(\textbf{x}) = \boldsymbol{\theta}^*\cdot\Phi({\mathbf{x}}) < \epsilon$.} \label{fig:lemma1-2} \end{figure*} Next, we prove \lemref{lemma: approximate teaching } by constructing a model $\boldsymbol{\theta}^*$ that violates \assref{assumption: polyorthogonal} and showing that it cannot be taught $\epsilon$-approximately for arbitrarily small $\epsilon > 0$. The proof is illustrated in \figref{fig:lemma2}. \begin{proof}[Proof of \lemref{lemma: approximate teaching }] Assume $\boldsymbol{\theta}^*$ is a target model which violates \assref{assumption: polyorthogonal}. If $\boldsymbol{\theta}^*$ can be taught \tt{approximately} for arbitrarily small $\epsilon > 0$ then $\exists$ $\Tilde{\boldsymbol{\theta}}^*$ which can be taught \tt{exactly} (i.e.
satisfies \assref{assumption: polyorthogonal}) such that $$\boldsymbol{\theta}^*\cdot \Tilde{\boldsymbol{\theta}}^* \ge 1-\cos{a_{\epsilon}},\,\, \textnormal{where}\,\, \cos{a_{\epsilon}} = \epsilon$$ if $\boldsymbol{\theta}^*$ and $\Tilde{\boldsymbol{\theta}}^*$ are unit normalized. This implies that if $\Phi({\mathbf{x}}) \in \mathcal{V}_{\Tilde{\boldsymbol{\theta}}^*}^{\perp} \subset {\mathcal{H}}_{{\mathcal{K}}}$ (the orthogonal complement of $\Tilde{\boldsymbol{\theta}}^*$) with $||\Phi({\mathbf{x}})|| \le 1$, then the following holds: \begin{equation} |\boldsymbol{\theta}^*\cdot \Phi({\mathbf{x}})| \le \epsilon \label{eqn: almost orth} \end{equation} Alternatively, we can think of $\Phi({\mathbf{x}})$ as being almost orthogonal to $\boldsymbol{\theta}^*$. We now construct, for even $k$, a target model which not only violates \assref{assumption: polyorthogonal} but also cannot be taught \tt{approximately}, i.e., for which \eqnref{eqn: almost orth} fails. The idea is to find $\boldsymbol{\theta}^*$ which has no almost-orthogonal projections in ${\mathcal{H}}_{{\mathcal{K}}}$ with norm lower-bounded by $\delta$. Consider the following construction for a target model $\boldsymbol{\theta}^* \in {\mathcal{H}}_{{\mathcal{K}}}$: \begin{equation} \boldsymbol{\theta}^* = \sum_{i=1}^d \frac{1}{\sqrt{d}}\cdot \Phi({\mathbf{e}}_i),\quad ||\boldsymbol{\theta}^*|| = 1 \label{eqn: thetaconstruction1} \end{equation} where the $\{{\mathbf{e}}_i\}$'s form the standard basis in $\ensuremath{\mathbb{R}}^d$. Notice that for any ${\mathbf{x}} \in {\mathcal{X}}$, \begin{equation} \boldsymbol{\theta}^*\cdot \Phi({\mathbf{x}}) = \sum_{i=1}^d \frac{1}{\sqrt{d}}\cdot \Phi({\mathbf{e}}_i)\cdot\Phi({\mathbf{x}}) = \sum_{i=1}^d \frac{1}{\sqrt{d}}\cdot {\mathbf{x}}_i^k. \label{eqn: thetaconstruction2} \end{equation} The RHS of the above equation is zero only when ${\mathbf{x}}_i = 0$ for all $i$, since $k$ is even and hence every term is non-negative.
Thus, the only projection orthogonal to $\boldsymbol{\theta}^*$ is the zero projection in ${\mathcal{H}}_{{\mathcal{K}}}$, so $\boldsymbol{\theta}^*$ violates \assref{assumption: polyorthogonal}. Now, we show that $\boldsymbol{\theta}^*$ as constructed in \eqnref{eqn: thetaconstruction1} cannot be taught approximately for arbitrarily small $\epsilon > 0$. If ${\mathbf{x}} \in {\mathcal{X}}$ is such that $||\Phi({\mathbf{x}})|| \ge \delta$, then $\sum_{i=1}^d {\mathbf{x}}_i^2 \ge \delta^{\frac{2}{k}}$, and using H\"{o}lder's inequality: \begin{align} \sum_{i=1}^d {\mathbf{x}}_i^2 \le \sum_{i=1}^d 1\cdot {\mathbf{x}}_i^2 \le d^{\frac{k-2}{k}}\paren{\sum_{i=1}^d \paren{{\mathbf{x}}_i^2}^{\frac{k}{2}}}^{\frac{2}{k}} \nonumber \end{align} Combining these two bounds with \eqnref{eqn: thetaconstruction1}-\eqnref{eqn: thetaconstruction2}, we obtain \begin{equation} \paren{\sum_{i=1}^d {\mathbf{x}}_i^k} \ge \frac{\delta}{d^{2(k-2)}} \implies \boldsymbol{\theta}^*\cdot\Phi({\mathbf{x}}) \ge \frac{\delta}{d^{2k-\frac{7}{2}}} \nonumber \end{equation} Thus, if $\epsilon < \frac{\delta}{d^{2k-\frac{7}{2}}}$ then $\boldsymbol{\theta}^*\cdot\Phi({\mathbf{x}}) > \epsilon$. This implies that $\Phi({\mathbf{x}})$ cannot be chosen almost orthogonal to $\boldsymbol{\theta}^*$, violating \eqnref{eqn: almost orth}. Hence, there is no $\Tilde{\boldsymbol{\theta}}^*$ arbitrarily close to $\boldsymbol{\theta}^*$ which can be taught exactly. Thus, the construction of $\boldsymbol{\theta}^*$ in \eqnref{eqn: thetaconstruction1} violates \assref{assumption: polyorthogonal} and cannot be taught approximately for arbitrarily small $\epsilon > 0$. \end{proof} \paragraph{Is the assumption of lower bound $\delta$ restrictive?} We now argue that the lower bound on the norm of the teaching points in \lemref{lemma: approximate teaching } is needed only for the analysis presented above. Consider the target model $\boldsymbol{\theta}^*$ constructed in \eqnref{eqn: thetaconstruction1}.
Suppose there exists $\Tilde{\boldsymbol{\theta}}^*$ which can be taught exactly using teaching points of arbitrarily small norm (i.e., the lower bound of $\delta$ is violated) such that $$\boldsymbol{\theta}^*\cdot \Tilde{\boldsymbol{\theta}}^* \ge 1-\cos{a_{\epsilon}},\,\, \textnormal{where}\,\, \cos{a_{\epsilon}} = \epsilon$$ for arbitrarily small $\epsilon > 0$. Call the teaching set $\mathcal{TS}_{\Tilde{\boldsymbol{\theta}}^*}$. Even if we unit-normalize all the teaching points (call the normalized set $\mathcal{TS}^{unit}_{\Tilde{\boldsymbol{\theta}}^*}$), \eqnref{eqn: objectkernel} is still satisfied, and in that case \eqnref{eqn: almost orth} is violated for any $({\mathbf{x}}_i,y_i) \in \mathcal{TS}^{unit}_{\Tilde{\boldsymbol{\theta}}^*}$. Hence, dropping the lower bound on the norm of the teaching points does not invalidate the claim of \lemref{lemma: approximate teaching }. \subsection{Approximate Teaching: \assref{assumption: orthogonal} and \assref{assumption: bounded cone}}\label{appsubsec.approxassumtions} As noted in \secref{subsec.gaussiankernel}, the teaching dimension of a Gaussian kernel perceptron learner is $\infty$. This calls for studying this non-linear kernel in the setting of approximate teaching. Inspired by the discussion in the previous subsection, we argue that the underlying assumptions, \assref{assumption: orthogonal} and \assref{assumption: bounded cone}, are fairly mild, and that they suffice to establish the strong results stated in \thmref{thm: boundedclassifier} and \thmref{thm: gaussian_main_thm} (cf. \secref{subsec.gaussiankernel}). This appendix subsection is divided into two paragraphs corresponding to the assumptions as follows: \paragraph{Existence of orthogonal linearly independent projections: \assref{assumption: orthogonal}.} Notice that the projected polynomial space, or the approximated kernel $\Tilde{{\mathcal{K}}}$, is a sum of polynomial kernels.
We rewrite \eqnref{eqn: eqn17} for clarity: \begin{equation*} \Tilde{{\mathcal{K}}}({\mathbf{x}}, {\mathbf{x}}') = \mathbi{e}^{-\frac{||{\mathbf{x}}||^2}{2\sigma^2}}\mathbi{e}^{-\frac{||{\mathbf{x}}'||^2}{2\sigma^2}}\sum_{k=0}^{s}\frac{1}{k!}\paren{\frac{\normg{{\mathbf{x}}}{{\mathbf{x}}'}}{\sigma^2}}^k \end{equation*} Substituting $z = \frac{\normg{{\mathbf{x}}}{{\mathbf{x}}'}}{\sigma^2}$, we can write \begin{equation*} \Tilde{{\mathcal{K}}}({\mathbf{x}}, {\mathbf{x}}') = \mathbi{e}^{-\frac{||{\mathbf{x}}||^2}{2\sigma^2}}\mathbi{e}^{-\frac{||{\mathbf{x}}'||^2}{2\sigma^2}}\sum_{k=0}^{s}\frac{1}{k!}\cdot z^k \end{equation*} Since all the coefficients of the polynomial $\sum_{k=0}^{s}\frac{1}{k!}\cdot z^k$ are positive, if $s$ is even then this polynomial is strictly positive for every real $z$, and hence $\Tilde{{\mathcal{K}}}({\mathbf{x}}, {\mathbf{x}}') > 0$. Thus, if $\boldsymbol{\theta}^* = \sum_{i=1}^l \alpha_i\cdot{\mathcal{K}}({\mathbf{a}}_i, \cdot)$ for some $\{{\mathbf{a}}_i\}_{i=1}^l \subset {\mathcal{X}}$ such that the $\alpha_i$'s are positive, then \assref{assumption: orthogonal} would be violated. Hence, there is a class of target models for which the assumption is violated. It is straightforward to note that \lemref{lemma: exact teaching} can be extended to a sum of polynomial kernels. A similar extension of \lemref{lemma: approximate teaching } when the highest degree is even can also be established. These results follow by noting that the space of homogeneous polynomials of degree $k$ in $d$ variables is isomorphic \cite{article} to the space of polynomials of degree $k$ in $(d-1)$ variables. Since the Hilbert space induced by $\Tilde{{\mathcal{K}}}$ is built from a sum of polynomial kernels, the extended results hold. This implies that there could be pathological cases where $\mathbb{P}\boldsymbol{\theta}^*$ could not be learnt approximately in $\Tilde{{\mathcal{K}}}$.
However, most of the information of a model, in terms of the eigenvalues of the orthogonal basis of the Gaussian kernel, is contained in the leading indices, i.e., $\forall k \le s,\quad \Phi_{k,\boldsymbol{\lambda}}({\mathbf{x}}) = \mathbi{e}^{-\frac{||{\mathbf{x}}||^2}{2\sigma^2}}\cdot \frac{\sqrt{{\mathcal{C}}^k_{\boldsymbol{\lambda}}}}{\sqrt{k!}\sigma^k}\cdot{\mathbf{x}}^{\boldsymbol{\lambda}} $ where $\sum_{i=1}^d\boldsymbol{\lambda}_i = k$; this is discussed in \appref{appendix: gaussian perceptron}. Since the fixed Hilbert space induced by $\Tilde{{\mathcal{K}}}$ is spanned by these truncated projections, \assref{assumption: orthogonal} gives a characterization of approximately teachable models. It remains open whether there is a more unified characterization that incorporates approximately teachable models beyond \assref{assumption: orthogonal}. \paragraph{Boundedness of weights: \assref{assumption: bounded cone}.} This assumption is fairly natural in the sense that in \thmref{thm: boundedclassifier} we are bounding (approximating) the error values of the function $f^*$ point-wise (using $\hat{f}$) for a fixed $\epsilon$. If for some $\hat{\boldsymbol{\theta}} \in \mathcal{A}_{opt}$, $\hat{f}$ ($= \hat{\boldsymbol{\theta}}\cdot\Phi(\cdot)$) is unboundedly sensitive to some teaching (training) point, then bounding the error becomes stringent. Further, we show in \appref{appendixsub: solutionexists} that there exists a unique solution, up to a positive constant scaling, to \eqnref{eqn: bounded} which satisfies the assumption. The weights $\{\alpha_i\}_{i=1}^l$ and $[\beta_0,\gamma]$ can be thought of as Lagrange multipliers for the Gaussian kernel perceptron. Boundedness of the multipliers is a well-studied problem in the mathematical programming and optimization literature \cite{gauvin,nooshin,dutta,locallipschitz}.
Interestingly, \citet{luksan} demonstrated the importance of the boundedness of the Lagrange multipliers for the study of interior point methods for non-linear programming. On the other hand, \assref{assumption: bounded cone} as a regularity condition provides new insights into solving problems where the task is to universally approximate the underlying functions, as discussed in the proof of \thmref{thm: boundedclassifier} in \appref{appendixsub: proofofmainthm}. \newpage \section{Teaching Dimension for Kernel Perceptron}\label{sec.theoreticalresults} \vspace{-1mm} In this section, we study the generic problem of teaching kernel perceptrons in three different settings:\,1) linear (in \secref{subsec.linear}); 2)\, polynomial (in \secref{subsec.poly}); and 3)\, Gaussian (in \secref{subsec.gaussiankernel}). Before establishing our main result for Gaussian kernelized perceptrons, we first introduce two important results for linear and polynomial perceptrons that are inherently connected to the Gaussian perceptron. Our proofs are inspired by ideas from linear algebra and projective geometry as detailed \iftoggle{longversion}{in \appref{appendix:table-of-contents}}{in the supplemental materials}. \vspace{-1mm} \subsection{Homogeneous Linear Perceptron}\label{subsec.linear} In this subsection, we study the problem of teaching a linear perceptron. First, we consider an optimization problem similar to \eqnref{eqn: objectmain}, as shown in \citet{JMLR:v17:15-630}:\vspace{-1mm} \begin{equation} {\boldsymbol{\mathcal{A}}}_{opt} := \mathop{\rm arg\,min}_{\boldsymbol{\theta} \in \mathbb{R}^d} \sum_{i = 1}^n \ell(\boldsymbol{\theta}\cdot{{\mathbf{x}}}_i, y_i) + \frac{\lambda}{2}||\boldsymbol{\theta}||^2_{A} \label{eqn: eqn1} \looseness -3 \end{equation} where $\ell(\cdot,\cdot)$ is a convex loss function, $A$ is a positive semi-definite matrix, $||\boldsymbol{\theta}||_{A}$ is defined as $\sqrt{\boldsymbol{\theta}^\top A \boldsymbol{\theta}}$, and $\lambda > 0$.
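As a concrete instance of objective \eqnref{eqn: eqn1}, the sketch below runs full-batch subgradient descent on the hinge loss with $A = I$ (an $\ell_2$-regularized SVM, one of the learners mentioned in the introduction). The dataset, seed, and step-size schedule are hypothetical, chosen only to make the sketch runnable:

```python
import numpy as np

# Objective (1) with hinge loss and A = I:
#   sum_i max(1 - y_i <theta, x_i>, 0) + (lam/2) ||theta||^2.
rng = np.random.default_rng(2)
labels = rng.choice([-1.0, 1.0], size=(40, 1))
X = rng.normal(size=(40, 2)) + 1.5 * labels      # two shifted clusters (hypothetical)
y = labels.ravel()
lam = 0.1

def objective(theta):
    return np.maximum(1.0 - y * (X @ theta), 0.0).sum() + 0.5 * lam * theta @ theta

theta = np.zeros(2)
for t in range(1, 2001):
    active = y * (X @ theta) < 1                 # margin violators
    subgrad = -(y[active, None] * X[active]).sum(axis=0) + lam * theta
    theta -= subgrad / (lam * t)                 # 1/(lam * t) step-size schedule
assert objective(theta) < objective(np.zeros(2)) # improved over theta = 0
```

Convexity of the objective is what the lower-bound argument below relies on; the solver itself is incidental.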
For a convex loss function $\ell(\cdot,\cdot)$, Theorem 1 of \citet{JMLR:v17:15-630} establishes a degree-of-freedom lower bound on the number of training items needed to obtain a unique solution $\boldsymbol{\theta}^*$. Since the loss function for the linear perceptron is convex, we immediately obtain a lower bound on the teaching dimension as follows: \begin{corollary}\label{cor: linear lower bound} If $A = 0$ and $\lambda = 1$, then \eqnref{eqn: objectmain} can be solved as \eqnref{eqn: eqn1}. Moreover, the teaching dimension for the decision boundary corresponding to a target model $\boldsymbol{\theta}^*$ is lower-bounded by $\bigOmega{d}$. \end{corollary} Now, we establish an upper bound on $TD({\boldsymbol{\mathcal{A}}}_{opt},\boldsymbol{\theta}^*)$ for exact teaching of the decision boundary of a target model $\boldsymbol{\theta}^*$. The key idea is to find a set of points which span the orthogonal subspace of $\boldsymbol{\theta}^*$, which we use to force any solution $\hat{\boldsymbol{\theta}} \in {\boldsymbol{\mathcal{A}}}_{opt}$ to have a component only along $\boldsymbol{\theta}^*$. Formally, we state the result with proof as follows: \begin{theorem}\label{thm: linear perceptron main result} Given any target model $\boldsymbol{\theta}^*$, for solving \eqnref{eqn: objectmain} the teaching dimension for the decision boundary corresponding to $\boldsymbol{\theta}^*$ is $\bigTheta{d}$. The following is a teaching set: \begin{align*} {{\mathbf{x}}}_i = {\mathbf{v}}_i,\quad y_i = 1\quad \forall\; i\; \in\; [d-1];\qquad\qquad\qquad\ \ \,\\ \quad {{\mathbf{x}}}_d = -\sum_{i=1}^{d-1} {\mathbf{v}}_i,\quad y_d = 1;\quad {{\mathbf{x}}}_{d+1} = \boldsymbol{\theta}^*,\quad y_{d+1} = 1 \end{align*} where $\{{\mathbf{v}}_i\}_{i=1}^{d}$ is an orthogonal basis for $\mathbb{R}^d$ with ${\mathbf{v}}_d = \boldsymbol{\theta}^*$. 
\vspace{-1mm} \end{theorem} \begin{proof} Using \corref{cor: linear lower bound}, the lower bound for solving \eqnref{eqn: objectmain} is immediate. Thus, if we show that the labeled training points above form a teaching set, we obtain a matching upper bound, which implies a tight bound of $\bigTheta{d}$ on the teaching dimension for finding the decision boundary. Denote the set of labeled data points as ${\mathcal{D}}$, and denote ${\mathbf{p}}(\boldsymbol{\theta}) := \sum_{i = 1}^{d+1} \max(-y_i\cdot\boldsymbol{\theta}\cdot{{\mathbf{x}}}_i,\: 0)$. Since $\{{\mathbf{v}}_i\}_{i=1}^{d}$ is an orthogonal basis with ${\mathbf{v}}_d = \boldsymbol{\theta}^*$, we have ${\mathbf{v}}_i\cdot \boldsymbol{\theta}^* = 0$ for all $i \in [d-1]$, and it is straightforward to verify that ${\mathbf{p}}(t\boldsymbol{\theta}^*) = 0$ for any positive scalar $t$. Note that if $\hat{\boldsymbol{\theta}}$ is a solution to \eqnref{eqn: objectmain} then: \vspace{-1mm} \begin{equation*} \hat{\boldsymbol{\theta}} \in \mathop{\rm arg\,min}_{\boldsymbol{\theta} \in \mathbb{R}^d} \sum_{i = 1}^{d+1} \max(-y_i\cdot\boldsymbol{\theta}\cdot{{\mathbf{x}}}_i,\: 0) \end{equation*} Moreover, ${\mathbf{p}}(\hat{\boldsymbol{\theta}}) = 0$ implies ${{\mathbf{x}}}_i\cdot \hat{\boldsymbol{\theta}} \ge 0$ for all $i \in [d]$; but since ${\mathbf{x}}_{d} = - \sum_{i=1}^{d-1} {{\mathbf{x}}}_i$, we get $\sum_{i=1}^{d-1} {{\mathbf{x}}}_i\cdot\hat{\boldsymbol{\theta}} \le 0$, so every non-negative term must vanish, i.e. ${{\mathbf{x}}}_i\cdot \hat{\boldsymbol{\theta}} = 0$ for all $i \in [d]$. Hence $\hat{\boldsymbol{\theta}}$ is orthogonal to $\{{\mathbf{v}}_i\}_{i=1}^{d-1}$, and the remaining constraint $\hat{\boldsymbol{\theta}}\cdot \boldsymbol{\theta}^* \ge 0$ forces $\hat{\boldsymbol{\theta}} = t\boldsymbol{\theta}^*$ for some positive constant $t$. Thus, ${\mathcal{D}}$ is a teaching set for the decision boundary of $\boldsymbol{\theta}^*$. This establishes the upper bound, and hence the theorem follows. 
\end{proof} \vspace{-3mm} \paragraph{Numerical example} To illustrate \thmref{thm: linear perceptron main result}, we provide a numerical example for teaching a linear perceptron in $\mathbb{R}^3$, with $\boldsymbol{\theta}^* = (-3,3,5)^\top$ (illustrated in \figref{fig:exp:exact-teaching:linear}). To construct the teaching set, we first obtain an orthogonal basis $\{(0.46, 0.86, -0.24)^\top, (0.76, -0.24, 0.6)^\top\}$ for the subspace orthogonal to $\boldsymbol{\theta}^*$, and add the vector $(-1.22, -0.62, -0.36)^\top$, which is the exact opposite of the sum of the first two. Finally, we add to $\mathcal{TS}$ an arbitrary vector which has a positive dot product with the normal vector, e.g. $(-0.46, 0.46, 0.76)^\top$. Labeling all examples positive, we obtain a $\mathcal{TS}$ of size $4$. \subsection{Homogeneous Polynomial Kernelized Perceptron}\label{subsec.poly} In this subsection, we study the problem of teaching a polynomial kernelized perceptron in the realizable setting. Similar to \secref{subsec.linear}, we establish a bound on the teaching dimension for exact teaching under a mild condition on the data distribution. We consider a homogeneous polynomial kernel ${\mathcal{K}}$ of degree $k$, where for any ${\mathbf{x}}, {\mathbf{x}}' \in \mathbb{R}^d$ \[{\mathcal{K}}({\mathbf{x}}, {\mathbf{x}}') = \paren{\langle{\mathbf{x}}, {\mathbf{x}}'\rangle}^k\] If $\Phi(\cdot)$ denotes the \textit{feature map} for the corresponding RKHS, then the dimension of the map is $\binom{d+k-1}{k}$, where each component is indexed by a multi-index $\boldsymbol{\lambda} \in \paren{\ensuremath{\mathbb{N}}\cup \curlybracket{0}}^{d}$ with $\sum_{i} \boldsymbol{\lambda}_i = k$ and is given by $\Phi_{\boldsymbol{\lambda}}({\mathbf{x}}) = \sqrt{ \frac{k!}{\prod_{i=1}^d\boldsymbol{\lambda}_i!}}{\mathbf{x}}^{\boldsymbol{\lambda}}$. Denote by ${\mathcal{H}}_{{\mathcal{K}}}$ the RKHS corresponding to the polynomial kernel ${\mathcal{K}}$. 
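As a sanity check (a numerical sketch, not part of the paper's formal development), the explicit feature map above can be verified directly: it has dimension $\binom{d+k-1}{k}$, and its inner products reproduce the kernel $\paren{\langle{\mathbf{x}}, {\mathbf{x}}'\rangle}^k$ by the multinomial theorem:

```python
import itertools
import math
import numpy as np

def poly_feature_map(x, k):
    """Explicit feature map for the homogeneous polynomial kernel (x·x')^k:
    one coordinate sqrt(k!/prod(lambda_i!)) * x^lambda per multi-index |lambda| = k."""
    d = len(x)
    feats = []
    for lam in itertools.product(range(k + 1), repeat=d):
        if sum(lam) == k:
            coef = math.sqrt(math.factorial(k) / math.prod(math.factorial(l) for l in lam))
            feats.append(coef * math.prod(xi ** li for xi, li in zip(x, lam)))
    return np.array(feats)

d, k = 3, 2
rng = np.random.default_rng(0)
x, xp = rng.standard_normal(d), rng.standard_normal(d)
phi_x, phi_xp = poly_feature_map(x, k), poly_feature_map(xp, k)

assert len(phi_x) == math.comb(d + k - 1, k)      # dimension = C(d+k-1, k) = 6
assert np.isclose(phi_x @ phi_xp, (x @ xp) ** k)  # <Phi(x), Phi(x')> = (x·x')^k
```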
We use ${\mathcal{H}}_k := {\mathcal{H}}_k(\mathbb{R}^d)$ to represent the linear space of homogeneous polynomials of degree $k$ over $\mathbb{R}^d$. We mention an important result which shows that the RKHS for the polynomial kernel is isomorphic to the space of homogeneous polynomials of degree $k$ in $d$ variables. \begin{proposition}[Chapter III.2, Proposition 6 \cite{article}]\label{prop: polynomial space} ${\mathcal{H}}_k = {\mathcal{H}}_{{\mathcal{K}}}$ as function spaces and inner product spaces. \end{proposition} The dimension $\dim \paren{{\mathcal{H}}_k(\mathbb{R}^d)}$ of the linear space of homogeneous polynomials of degree $k$ over $\mathbb{R}^d$ is $\binom{d+k-1}{k}$; denote $r := \binom{d+k-1}{k}$. Since ${\mathcal{H}}_{{\mathcal{K}}}$ is an $r$-dimensional vector space for the polynomial kernel ${\mathcal{K}}$, exact teaching carries an obvious lower bound of $\bigOmega{\binom{d+k-1}{k}}$ on the teaching dimension. Before establishing the main result of this subsection, we state a mild assumption on the target models we consider for exact teaching: \begin{assumption}[Existence of orthogonal polynomials]\label{assumption: polyorthogonal} For the target model $\boldsymbol{\theta}^* \in {\mathcal{H}}_{{\mathcal{K}}}$, we assume that there exist $(r-1)$ linearly independent polynomials on the orthogonal subspace of $\boldsymbol{\theta}^*$ in ${\mathcal{H}}_{{\mathcal{K}}}$ of the form $\left\{\Phi({\mathbf{z}}_i)\right\}_{i=1}^{r-1}$ where $\forall i\; {\mathbf{z}}_i \in {\mathcal{X}} $. \end{assumption} Similar to \thmref{thm: linear perceptron main result}, the key insight behind \assref{assumption: polyorthogonal} is to find independent polynomials on the orthogonal subspace defined by $\boldsymbol{\theta}^*$. We state the claim here; the proof is established \iftoggle{longversion}{in \appref{appendix: polynomial perceptron}}{in the supplemental materials}. 
\begin{theorem}\label{thm: poly_main_theorem} For all target models $\boldsymbol{\theta}^* \in {\mathcal{H}}_{{\mathcal{K}}}$ for which \assref{assumption: polyorthogonal} holds, for solving \eqnref{eqn: objectkernel}, the exact teaching dimension for the decision boundary corresponding to $\boldsymbol{\theta}^*$ is $\bigO{\binom{d+k-1}{k}}$. \end{theorem} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\linewidth]{fig/TS_lin3_2.png} \caption{Linear ($\mathcal{TS}$)}\label{fig:exp:exact-teaching:linear} \label{fig:example.polytope} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\linewidth]{fig/TS_poly2_4.png} \caption{Polynomial ($\mathcal{TS}$)}\label{fig:exp:exact-teaching:polynomial} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\linewidth]{fig/ts_poly2_3d_4.png} \caption{Polynomial (feature space)} \label{fig:exp:feature-space:polynomial} \end{subfigure} \caption{Numerical examples of exact teaching for linear and polynomial perceptrons. Cyan plus marks and red dots correspond to positive and negative teaching examples, respectively.} \label{fig:exp:exact-teahcing} \end{figure*} \vspace{-2mm} \paragraph{Numerical example} For constructing $\mathcal{TS}$ in the polynomial case, we follow a similar strategy in the higher-dimensional space that the original data is projected into. The only difference is that we need to ensure the teaching examples have pre-images in the original space. For that, we adopt a randomized algorithm that solves for $r-1$ boundary points in the original space (i.e. solves $\boldsymbol{\theta}^*\cdot \Phi(\mathbf{x}) = 0$), while checking that the images of these points are linearly independent. Also, instead of adding a vector in the opposite direction of these points combined, we simply repeat the $r-1$ points in the teaching set, assigning one copy of them positive labels and the other copy negative labels. 
Finally, we need one last vector (labeled positive) whose image has a positive component along $\boldsymbol{\theta}^*$, and we obtain a $\mathcal{TS}$ of size $2r-1$. \figref{fig:exp:exact-teaching:polynomial} and \figref{fig:exp:feature-space:polynomial} demonstrate the above constructive procedure on a numerical example with $d=2$, a homogeneous polynomial kernel of degree 2, and $\boldsymbol{\theta}^* = {(1, 4, 4)}^\top$. In \figref{fig:exp:exact-teaching:polynomial} we show the decision boundary (red lines) and the level sets (polynomial contours) of this quadratic perceptron, as well as the teaching set identified via the above algorithmic procedure. In \figref{fig:exp:feature-space:polynomial}, we visualize the decision boundary (grey plane) in the feature space (after applying the feature map). The blue surface corresponds to all the data points that have pre-images in the original space $\mathbb{R}^2$. \subsection{Limitations in Exact Teaching of Polynomial Kernel Perceptron}\label{subsec.motivation} In \secref{subsec.poly}, we imposed \assref{assumption: polyorthogonal} on the target models $\boldsymbol{\theta}^*$. It turns out that we cannot do better than this. More concretely, we need to impose this assumption for exact teaching of a polynomial kernel perceptron learner. Further, there are pathological cases where violation of the assumption leads to models which cannot even be approximately taught. Intuitively, solving \eqnref{eqn: objectkernel} in the paradigm of exact teaching reduces to nullifying the orthogonal subspace of $\boldsymbol{\theta}^*$, i.e. any component of a solution along that subspace is nullified. Since the information of the span of the subspace has to be encoded into the datapoints chosen for teaching, \assref{assumption: polyorthogonal} is a natural step to take. Interestingly, we show that this step is in fact necessary. 
In the realizable setting, in which all the teaching points are correctly classified, if we lift the assumption then exact teaching is not possible. We state the claim in the following lemma: \begin{lemma}\label{lemma: exact teaching} Consider a target model $\boldsymbol{\theta}^*$ that does not satisfy \assref{assumption: polyorthogonal}. Then, there does not exist a teaching set $\mathcal{TS}_{\boldsymbol{\theta}^*}$ which exactly teaches $\boldsymbol{\theta}^*$, i.e. for any $\mathcal{TS}_{\boldsymbol{\theta}^*}$ and any real $t > 0$ $${\boldsymbol{\mathcal{A}}}_{opt}\paren{\mathcal{TS}_{\boldsymbol{\theta}^*}} \neq \{t\boldsymbol{\theta}^*\}.$$ \end{lemma} \lemref{lemma: exact teaching} shows that for \tt{exact} teaching, $\boldsymbol{\theta}^*$ should satisfy \assref{assumption: polyorthogonal}. The natural question that arises is whether we can achieve arbitrarily $\epsilon$-close \tt{approximate} teaching for $\boldsymbol{\theta}^*$. In other words, we would like to find a $\Tilde{\boldsymbol{\theta}}^*$ that satisfies \assref{assumption: polyorthogonal} and is in an $\epsilon$-neighbourhood of $\boldsymbol{\theta}^*$. We show a negative result for this when $k$ is even. For this, we assume that the datapoints in the teaching set $\mathcal{TS}_{\Tilde{\boldsymbol{\theta}}^*}$ have norms lower-bounded by some $\delta > 0$, i.e. if $({{\mathbf{x}}}_i,y_i) \in \mathcal{TS}_{\Tilde{\boldsymbol{\theta}}^*}$ then $||\Phi({{\mathbf{x}}}_i)|| \ge \delta$. We require this additional assumption only for the purpose of analysis; it does not lead to pathological cases in which the constructed target model $\boldsymbol{\theta}^*$ admits approximate teaching. \begin{lemma}\label{lemma: approximate teaching } Let ${\mathcal{X}} \subseteq \mathbb{R}^d$ and ${\mathcal{H}}_{{\mathcal{K}}}$ be the reproducing kernel Hilbert space such that the kernel function ${\mathcal{K}}$ is of degree $k$. 
If $k$ is even, then there exists a target model $\boldsymbol{\theta}^*$ which violates \assref{assumption: polyorthogonal} and cannot be taught approximately. \end{lemma} The results are discussed in detail with proofs \iftoggle{longversion}{in \appref{appendix: motivation}}{in the supplemental materials}. \assref{assumption: polyorthogonal} and the stated lemmas provide insights into understanding the problem of teaching for non-linear kernel perceptrons. In the next section, we study the Gaussian kernel, and the ideas developed here will be useful in devising a teaching set in the paradigm of approximate teaching. \subsection{Gaussian Kernelized Perceptron}\label{subsec.gaussiankernel} In this subsection, we consider the Gaussian kernel. Under mild assumptions inspired by the analysis of the teaching dimension for exact teaching of linear and polynomial kernel perceptrons, we establish, as our main result, an upper bound on the $\epsilon$-approximate teaching dimension of Gaussian kernel perceptrons via the construction of an $\epsilon$-approximate teaching set. \paragraph{Preliminaries of Gaussian kernel} A Gaussian kernel ${\mathcal{K}}$ is a function of the form \begin{equation} {\mathcal{K}}({\mathbf{x}}, {\mathbf{x}}') = \mathbi{e}^{-\frac{||{\mathbf{x}}-{\mathbf{x}}'||^2}{2\sigma^2}} \label{eqn:eqn11} \end{equation} for any ${\mathbf{x}}, {\mathbf{x}}' \in \mathbb{R}^d$ and parameter $\sigma$. We first examine the feature map before finding an approximation to it. Notice: \[\mathbi{e}^{-\frac{||{\mathbf{x}}-{\mathbf{x}}'||^2}{2\sigma^2}} = \mathbi{e}^{-\frac{||{\mathbf{x}}||^2}{2\sigma^2}}\mathbi{e}^{-\frac{||{\mathbf{x}}'||^2}{2\sigma^2}}\mathbi{e}^{\frac{\normg{{\mathbf{x}}}{{\mathbf{x}}'}}{\sigma^2}} \] Consider the scalar term $z = \normg{{\mathbf{x}}}{{\mathbf{x}}'}/\sigma^2$. 
We can expand this term using the Taylor expansion of $\mathbi{e}^z$ near $z = 0$ as shown in \citet{Cotter2011ExplicitAO}, which gives $\mathbi{e}^{\frac{\normg{{\mathbf{x}}}{{\mathbf{x}}'}}{\sigma^2}} = \sum_{k=0}^{\infty}\frac{1}{k!}\paren{\frac{\normg{{\mathbf{x}}}{{\mathbf{x}}'}}{\sigma^2}}^k$. We can further expand this sum as \begin{align} \mathbi{e}^{\frac{\normg{{\mathbf{x}}}{{\mathbf{x}}'}}{\sigma^2}} &= \sum_{k=0}^{\infty}\frac{1}{k!}\paren{\frac{\normg{{\mathbf{x}}}{{\mathbf{x}}'}}{\sigma^2}}^k \nonumber\\ &= \sum_{k=0}^{\infty}\frac{1}{k!\sigma^{2k}}\bigparen{\sum_{l=1}^d {\mathbf{x}}_l\cdot{\mathbf{x}}'_l}^k \nonumber\\ &= \sum_{k=0}^{\infty}\frac{1}{k!\sigma^{2k}}\sum_{|\boldsymbol{\lambda}|=k} {\mathcal{C}}^k_{\boldsymbol{\lambda}}\cdot{\mathbf{x}}^{\boldsymbol{\lambda}}\cdot({\mathbf{x}}')^{\boldsymbol{\lambda}} \label{eqn:eqn12} \end{align} where ${\mathcal{C}}^k_{\boldsymbol{\lambda}} = \frac{k!}{\prod_{i=1}^d\boldsymbol{\lambda}_i!}$. Thus, \eqnref{eqn:eqn12} yields an explicit feature representation for the Gaussian kernel in \eqnref{eqn:eqn11}: the explicit feature map $\Phi(\cdot)$ has coordinates $\Phi_{k,\boldsymbol{\lambda}}({\mathbf{x}}) = \mathbi{e}^{-\frac{||{\mathbf{x}}||^2}{2\sigma^2}}\cdot \frac{\sqrt{{\mathcal{C}}^k_{\boldsymbol{\lambda}}}}{\sqrt{k!}\sigma^k}\cdot{\mathbf{x}}^{\boldsymbol{\lambda}}$. Theorem 1 of \citet{haquangminh} characterizes the RKHS of the Gaussian kernel and establishes that $\dim({\mathcal{H}}_{{\mathcal{K}}}) = \infty$. Thus, exact teaching of an arbitrary target classifier $f^*$ in this setting has an infinite lower bound on the teaching dimension. 
This calls for analyzing the teaching problem of a Gaussian kernel in the \tt{approximate} teaching setting. \paragraph{Definitions and notations for approximate teaching} For any classifier $f \in {\mathcal{H}}_{{\mathcal{K}}}$, we define $\textbf{err}(f) := \expctover{({\mathbf{x}},y) \sim {\mathcal{P}}({\mathbf{x}},y)}{\max(-y\cdot f({\mathbf{x}}),0)}$. Our goal is to find a classifier $f$ whose expected true loss $\textbf{err}(f)$ is as small as possible. In the realizable setting, we assume that there exists an optimal separator $f^*$ such that for any data instances sampled from the data distribution the labels are consistent, i.e. ${\mathcal{P}}(y\cdot f^*({\mathbf{x}}) \le 0) = 0$. In addition, we also experiment with the non-realizable setting. In the rest of the subsection, we study the relationship between the teaching complexity for an optimal Gaussian kernel perceptron for \eqnref{eqn: objectkernel} and $|\textbf{err}(f^*) - \textbf{err}(\hat{f})|$, where $f^*$ is the optimal separator and $\hat{f}$ is the solution to ${\boldsymbol{\mathcal{A}}}_{opt}(\mathcal{TS}_{\boldsymbol{\theta}^*})$ for the constructed teaching set $\mathcal{TS}_{\boldsymbol{\theta}^*}$. \subsubsection{Gaussian Kernel Approximation}\label{subsec: gaussian_kernel_approx} Now, we describe a finite-dimensional polynomial approximation $\Tilde{\Phi}$ to the Gaussian feature map $\Phi$ via projection, as shown in \citet{Cotter2011ExplicitAO}. Consider \begin{align*} \Tilde{\Phi}&: \mathbb{R}^d \longrightarrow \mathbb{R}^q\\ \Tilde{{\mathcal{K}}}({\mathbf{x}}, {\mathbf{x}}') &= \Tilde{\Phi}({\mathbf{x}})\cdot\Tilde{\Phi}({\mathbf{x}}') \end{align*} With these approximations, we consider classifiers of the form $\Tilde{f}({\mathbf{x}}) = \Tilde{\boldsymbol{\theta}}\cdot \Tilde{\Phi}({\mathbf{x}})$ with $\Tilde{\boldsymbol{\theta}} \in \mathbb{R}^q$. Now, assume that there is a projection map $\mathbb{P}$ such that $\Tilde{\Phi} = \mathbb{P}\Phi$. 
In \citet{Cotter2011ExplicitAO}, the authors use the following approximation to the Gaussian kernel: \begin{equation} \Tilde{{\mathcal{K}}}({\mathbf{x}}, {\mathbf{x}}') = \mathbi{e}^{-\frac{||{\mathbf{x}}||^2}{2\sigma^2}}\mathbi{e}^{-\frac{||{\mathbf{x}}'||^2}{2\sigma^2}}\sum_{k=0}^{s}\frac{1}{k!}\paren{\frac{\normg{{\mathbf{x}}}{{\mathbf{x}}'}}{\sigma^2}}^k \label{eqn: eqn17} \end{equation} This gives the following explicit feature representation for the approximated kernel: \begin{equation} \forall k \le s,\quad \Tilde{\Phi}_{k,\boldsymbol{\lambda}}({\mathbf{x}}) = \Phi_{k,\boldsymbol{\lambda}}({\mathbf{x}}) = \mathbi{e}^{-\frac{||{\mathbf{x}}||^2}{2\sigma^2}}\cdot \frac{\sqrt{{\mathcal{C}}^k_{\boldsymbol{\lambda}}}}{\sqrt{k!}\sigma^k}\cdot{\mathbf{x}}^{\boldsymbol{\lambda}} \label{eqn:eqn18} \end{equation} where $\Phi_{k,\boldsymbol{\lambda}}({\mathbf{x}})$ is the corresponding coordinate of the Gaussian feature map. Note that the feature map $\Tilde{\Phi}$ defined by the explicit features in \eqnref{eqn:eqn18} has dimension $\binom{d+s}{d}$. Thus, $\mathbb{P}\Phi = \Tilde{\Phi}$, where the first $\binom{d+s}{d}$ coordinates are retained. We denote the RKHS corresponding to $\Tilde{{\mathcal{K}}}$ as ${\mathcal{H}}_{\Tilde{{\mathcal{K}}}}$. A simple property of the approximated kernel map is stated in the following lemma, proven in \citet{Cotter2011ExplicitAO}. \begin{lemma}[\citet{Cotter2011ExplicitAO}]\label{lemma: approxbound} For the approximated map $\Tilde{{\mathcal{K}}}$, we have the following upper bound: \begin{equation} \left|{\mathcal{K}}({\mathbf{x}},{\mathbf{x}}') - \Tilde{{\mathcal{K}}}({\mathbf{x}},{\mathbf{x}}')\right| \le \frac{1}{(s+1)!}\paren{\frac{||{\mathbf{x}}||\cdot ||{\mathbf{x}}'||}{\sigma^2}}^{s+1} \label{eqn: approximatekernel} \end{equation} \end{lemma} Note that if $s$ is chosen large enough and the points ${\mathbf{x}}, {\mathbf{x}}'$ are bounded with respect to $\sigma^2$, then the RHS of \eqnref{eqn: approximatekernel} can be made smaller than any $\epsilon > 0$. 
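The truncation bound in \lemref{lemma: approxbound} can be checked numerically with a small sketch (the dimension, $\sigma$, and random points below are hypothetical):

```python
import math
import numpy as np

def gauss_kernel(x, xp, sigma):
    """Exact Gaussian kernel K(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    return math.exp(-np.linalg.norm(x - xp) ** 2 / (2 * sigma ** 2))

def truncated_kernel(x, xp, sigma, s):
    """Degree-s Taylor truncation of the Gaussian kernel (cf. Cotter et al.)."""
    z = (x @ xp) / sigma ** 2
    taylor = sum(z ** k / math.factorial(k) for k in range(s + 1))
    return (math.exp(-np.linalg.norm(x) ** 2 / (2 * sigma ** 2))
            * math.exp(-np.linalg.norm(xp) ** 2 / (2 * sigma ** 2)) * taylor)

rng = np.random.default_rng(1)
sigma = 1.5
x, xp = rng.standard_normal(4), rng.standard_normal(4)
for s in range(1, 8):
    err = abs(gauss_kernel(x, xp, sigma) - truncated_kernel(x, xp, sigma, s))
    bound = ((np.linalg.norm(x) * np.linalg.norm(xp) / sigma ** 2) ** (s + 1)
             / math.factorial(s + 1))
    # the lemma's bound (||x|| ||x'|| / sigma^2)^(s+1) / (s+1)! holds at every s
    assert err <= bound + 1e-12
```

The bound works because the prefactor $e^{-(\|{\mathbf{x}}\|^2 + \|{\mathbf{x}}'\|^2)/2\sigma^2}$ absorbs the $e^{\|{\mathbf{x}}\|\|{\mathbf{x}}'\|/\sigma^2}$ factor in the Taylor remainder.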
Since $\left|{\mathcal{K}}({\mathbf{x}},{\mathbf{x}}) - \Tilde{{\mathcal{K}}}({\mathbf{x}},{\mathbf{x}})\right| = \inmod{\mathbb{P}^{\bot}\Phi({\mathbf{x}})}^2$, for a Gaussian kernel the first $\binom{d+s}{s}$ coordinates, information-theoretically, carry almost all of the information. We analyze this observation under some mild assumptions on the data distribution to construct an $\epsilon$-approximate teaching set. As discussed in \iftoggle{longversion}{\appref{appendix: gaussian perceptron}}{ the supplemental materials}, we choose the value of $s$ assuming the datapoints come from a ball in $\mathbb{R}^d$ satisfying $\frac{\inmod{{\mathbf{x}}}^2}{\sigma^2} \le R$, where $R := \max\left\{\frac{\log^2 \frac{1}{\epsilon}}{e^2},d\right\}$. Thus, we wish to solve for the value of $s$ such that $\frac{1}{(s+1)!}\cdot \paren{R}^{s+1} \le \epsilon$. To approximate $s$ we use Stirling's approximation, which states that for all positive integers $n$, we have $$\sqrt{2\pi}n^{n+1/2}e^{-n} \le n! \le en^{n+1/2}e^{-n}.$$ Using the bound stated in \lemref{lemma: approxbound}, we fix the value of $s$ as $e^2\cdot R$. We assume that $R = \frac{\log^2 \frac{1}{\epsilon}}{e^2}$, since we wish to achieve an arbitrarily small $\epsilon$-approximate\footnote{When $R = d$ all the key results follow the same analysis.} teaching set. We define $r:= r(\boldsymbol{\theta}^*, \epsilon) = \binom{d+s}{s}$. \vspace{-1mm} \subsubsection{Bounding the Error}\label{subsec: bounded_error} \vspace{-1mm} In this subsection, we discuss our key results on approximate teaching of a Gaussian kernel perceptron learner under some mild assumptions on the target model $\boldsymbol{\theta}^*$. In order to show $\left|\textbf{err}(f^*) - \textbf{err}(\hat{f})\right| \le \epsilon$ via optimizing to a solution $\hat{\boldsymbol{\theta}}$ for \eqnref{eqn: objectkernel}, we establish a point-wise $\epsilon$-closeness between $f^*$ and $\hat{f}$. 
Specifically, we show that $\left|f^*({\mathbf{x}})-\hat{f}({\mathbf{x}})\right| \le \epsilon$ universally, which is similar in spirit to universal approximation theorems~\cite{Liang2017WhyDN, lu2020universal, Yarotsky2017ErrorBF} for neural networks. We prove that this universal approximation can be achieved with a teaching set of size $d^{\bigO{\log^2 \frac{1}{\epsilon}}}$. We assume that the input space ${\mathcal{X}}$ is bounded such that $\forall {\mathbf{x}} \in {\mathcal{X}}$\,\, $\frac{\norm{{\mathbf{x}}}{{\mathbf{x}}}}{\sigma^2} \le 2\sqrt{R}$. Since the motivation is to find classifiers which are close to the optimal one point-wise, we assume that the target model $\boldsymbol{\theta}^*$ has unit norm. As mentioned in \eqnref{eqn: kernelfunction}, we can write the target model $\boldsymbol{\theta}^* \in {\mathcal{H}}_{{\mathcal{K}}}$ as $\boldsymbol{\theta}^* = \sum_{i=1}^l \alpha_i\cdot{\mathcal{K}}({\mathbf{a}}_i, \cdot)$ for some $\{{\mathbf{a}}_i\}_{i=1}^l \subset {\mathcal{X}}$ and $\alpha_i \in \ensuremath{\mathbb{R}}$. The classifier corresponding to $\boldsymbol{\theta}^*$ is denoted by $f^*$. \eqnref{eqn: objectkernel} can be rewritten with respect to a teaching set ${\mathcal{D}} := \curlybracket{\paren{{\mathbf{x}}_i,\; y_i}}_{i=1}^n$ as: \begin{equation} {\boldsymbol{\mathcal{A}}}_{opt} := \mathop{\rm arg\,min}_{\beta \in \ensuremath{\mathbb{R}}^l} \sum_{i = 1}^n \max\bigparen{-y_i\cdot \sum_{j=1}^l \beta_j\cdot{\mathcal{K}}({\mathbf{a}}_j,{\mathbf{x}}_i),\; 0}\label{eqn: bounded1} \end{equation} Similar to \assref{assumption: polyorthogonal} (cf.\ \secref{subsec.poly}), to construct an approximate teaching set we assume that the target model $\boldsymbol{\theta}^*$ has the property that, for some truncated polynomial space ${\mathcal{H}}_{\Tilde{{\mathcal{K}}}}$ defined by the feature map $\Tilde{\Phi}$, there are linearly independent projections in the orthogonal complement of $\mathbb{P}\boldsymbol{\theta}^*$ in ${\mathcal{H}}_{\Tilde{{\mathcal{K}}}}$. 
More formally, we state the property as an assumption, discussed in detail \iftoggle{longversion}{in \appref{appendix: motivation}}{in the supplemental materials}. \begin{assumption}[Existence of orthogonal classifiers]\label{assumption: orthogonal} For the target model $\boldsymbol{\theta}^*$ and some $\epsilon > 0$, we assume that there exists $r= r(\boldsymbol{\theta}^*, \epsilon)$ such that $\mathbb{P}\boldsymbol{\theta}^*$ has $r-1$ linearly independent projections on the orthogonal subspace of $\mathbb{P}\boldsymbol{\theta}^*$ in ${\mathcal{H}}_{\Tilde{{\mathcal{K}}}}$ of the form $\{\Tilde{\Phi}({\mathbf{z}}_i)\}_{i=1}^{r-1}$ such that $\forall i\,\, {\mathbf{z}}_i \in {\mathcal{X}} $. \end{assumption} For the analysis of the key results, we impose a smoothness condition on the linearly independent projections $\{\Tilde{\Phi}({\mathbf{z}}_i)\}_{i=1}^{r-1}$: they are oriented away from one another by a factor of $\frac{1}{r-1}$. Concretely, for any $i \neq j$, $\left|\Tilde{\Phi}({\mathbf{z}}_i)\cdot\Tilde{\Phi}({\mathbf{z}}_j)\right| \le \frac{1}{2(r-1)}$. This smoothness condition is discussed in the supplemental. 
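Both conditions above can be probed numerically: project the candidate feature images onto the orthogonal complement of the target direction and check their rank, then check the pairwise inner products. The sketch below uses a toy $3$-dimensional feature space with hypothetical points (one reading of the assumption under stated simplifications):

```python
import numpy as np

def check_orthogonal_assumption(Z_feats, theta, tol=1e-8):
    """Check whether the feature images of candidate points, after projecting
    out the direction of theta, span an (r-1)-dimensional subspace of the
    orthogonal complement of theta (r = ambient feature dimension)."""
    theta = theta / np.linalg.norm(theta)
    proj = Z_feats - np.outer(Z_feats @ theta, theta)  # remove theta-component
    r = len(theta)
    return np.linalg.matrix_rank(proj, tol=tol) == r - 1

# toy check: theta along the third axis, candidate points on its boundary
theta = np.array([0.0, 0.0, 1.0])
Z = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])          # feature images with theta·z = 0
r = 3
assert check_orthogonal_assumption(Z, theta)

# smoothness condition: |z_i · z_j| <= 1/(2(r-1)) for i != j
assert all(abs(Z[i] @ Z[j]) <= 1 / (2 * (r - 1))
           for i in range(len(Z)) for j in range(len(Z)) if i != j)
```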
Now, we consider the following reformulation of the optimization problem in \eqnref{eqn: bounded1}: \begin{align} \hspace*{-3mm}{\boldsymbol{\mathcal{A}}}_{opt} := \mathop{\rm arg\,min}_{\beta_0 \in \ensuremath{\mathbb{R}},\, \gamma \in \ensuremath{\mathbb{R}}^{r-1}} \sum_{i = 1}^{2r-1} \max\paren{\ell(\beta_0, \gamma, {\mathbf{x}}_i, y_i),\; 0}\label{eqn: bounded} \end{align} where for any $i \in \bracket{2r-1}$ \vspace{-3mm}$$ \ell(\beta_0, \gamma, {\mathbf{x}}_i, y_i) = -y_i\cdot\bigparen{ \beta_0\cdot {\mathcal{K}}({\mathbf{a}},{\mathbf{x}}_i) + \sum_{j=1}^{r-1}\gamma_j\cdot {\mathcal{K}}({\mathbf{z}}_j, {\mathbf{x}}_i)} \nonumber$$ and with respect to the teaching set \begin{align} \mathcal{TS}_{\boldsymbol{\theta}^*} := \curlybracket{\parenb{{\mathbf{z}}_i, 1},\, \parenb{{\mathbf{z}}_i, -1}}_{i = 1}^{r-1} \cup \curlybracket{\parenb{{\mathbf{a}}, 1}}\label{eqn: teaching set} \end{align} where ${\mathbf{a}}$ is chosen such that $\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{a}}) > 0$\footnote{We assume $\boldsymbol{\theta}^*$ is non-degenerate in $\Tilde{{\mathcal{K}}}$ (as for polynomial kernels in \secref{subsec.poly}), i.e. there exist points ${\mathbf{a}} \in {\mathcal{X}}$ such that $\mathbb{P}\boldsymbol{\theta}^*\cdot\mathbb{P}\Phi({\mathbf{a}}) > 0$ (classified with label 1).} and $\Phi({\mathbf{a}})\cdot\Phi({\mathbf{z}}_i) \le Q\cdot \epsilon$ (where $Q$ is a constant). ${\mathbf{a}}$ can be chosen from the spherical ball $\mathcal{B}(\sqrt{2\sqrt{R}\sigma^2},0)$ in $\ensuremath{\mathbb{R}}^d$. We index the set $\mathcal{TS}_{\boldsymbol{\theta}^*}$ as $\curlybracket{\paren{{\mathbf{x}}_i, y_i}}_{i=1}^{2r-1}$. 
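The structure of $\mathcal{TS}_{\boldsymbol{\theta}^*}$ in \eqnref{eqn: teaching set} can be sketched directly: each boundary point ${\mathbf{z}}_i$ appears with both labels, plus the single anchor point ${\mathbf{a}}$. The points below are hypothetical placeholders:

```python
def build_teaching_set(Zs, a):
    """Assemble the teaching set of size 2r-1: each boundary point z_i
    appears once with label +1 and once with label -1, plus the anchor
    point a with label +1."""
    ts = []
    for z in Zs:
        ts.append((z, +1))
        ts.append((z, -1))
    ts.append((a, +1))
    return ts

Zs = [(1.0, 0.0), (0.0, 1.0)]           # hypothetical boundary points (r-1 = 2)
ts = build_teaching_set(Zs, (0.5, 0.5))  # hypothetical anchor point a

assert len(ts) == 2 * len(Zs) + 1        # |TS| = 2r - 1 = 5
```

Pairing each ${\mathbf{z}}_i$ with both labels forces any zero-loss solution to vanish on the corresponding kernel directions, mirroring the role of the opposite-sum vector in the linear construction.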
\eqnref{eqn: bounded} is optimized over $\hat{\boldsymbol{\theta}} = \beta_0\cdot {\mathcal{K}}({\mathbf{a}},\cdot) + \sum_{j=1}^{r-1}\gamma_j\cdot {\mathcal{K}}({\mathbf{z}}_j, \cdot)$ such that $\hat{\boldsymbol{\theta}}\cdot\Phi({\mathbf{a}}) > 0$ and $\{\Phi({\mathbf{z}}_i)\}_{i=1}^{r-1}$ satisfy \assref{assumption: orthogonal}, where $\inmod{\hat{\boldsymbol{\theta}}} = \bigO{1}$. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.28\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig1-1b.png} \caption{Optimal Gaussian boundary} \label{fig:Teacher_RBf} \end{subfigure} \qquad \begin{subfigure}[b]{0.28\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig1-2b.png} \caption{Polynomial approximation} \label{fig:Polynomial_approx} \end{subfigure} \qquad \begin{subfigure}[b]{0.28\textwidth} \centering \includegraphics[width=\linewidth]{fig/supp_fig1-3b.png} \caption{Taught Gaussian boundary} \label{fig:Learned_RBF} \end{subfigure} \caption{Approximate teaching for Gaussian kernel perceptron. (a) Teacher ``receives'' $\boldsymbol{\theta}^*$ by training from the complete data set; (b) Teacher identifies a polynomial approximation of the Gaussian decision boundary and generates the teaching set $\mathcal{TS}_{\boldsymbol{\theta}^*}$ (marked by red dots and cyan crosses); (c) Learner learns a Gaussian kernel perceptron from $\mathcal{TS}_{\boldsymbol{\theta}^*}$.} \label{fig:example-RBF} \end{figure*} Note that a solution to \eqnref{eqn: bounded} can have unbounded norm and can extend in arbitrary directions; thus, we make an assumption on the learner which is essential to bound the error of the optimal separator of \eqnref{eqn: bounded}. \begin{assumption}[Bounded Cone]\label{assumption: bounded cone} For the target model $\boldsymbol{\theta}^* = \sum_{i = 1}^l \alpha_i\cdot{\mathcal{K}}({\mathbf{a}}_i,\cdot)$, the learner optimizes to a solution $\hat{\boldsymbol{\theta}}$ for \eqnref{eqn: bounded} with bounded coefficients. 
Alternatively, the sums $\sum_{i=1}^l \left|\alpha_i\right|$ and $\left|\beta_0\right| + \sum_{j=1}^{r-1}\left|\gamma_j\right|$ are bounded, where $\hat{\boldsymbol{\theta}} \in {\mathcal{H}}_{{\mathcal{K}}}$ has the form $\hat{\boldsymbol{\theta}} = \beta_0\cdot {\mathcal{K}}({\mathbf{a}},\cdot) + \sum_{j=1}^{r-1}\gamma_j\cdot {\mathcal{K}}({\mathbf{z}}_j, \cdot)$. \end{assumption} This assumption is fairly natural in the sense that if $\hat{\boldsymbol{\theta}} \in {\boldsymbol{\mathcal{A}}}_{opt}$ as a classifier approximates $\boldsymbol{\theta}^*$ point-wise, then it should not be highly (or unboundedly) sensitive to the datapoints involved in the classifier. It is discussed in greater detail \iftoggle{longversion}{in \appref{appendix: motivation}}{in the supplemental materials}. We denote $\boldsymbol{C}_{\epsilon} := \sum_{i=1}^l |\alpha_i|$ and $\boldsymbol{D}_{\epsilon}:= |\beta_0| + \sum_{j=1}^{r-1}|\gamma_j|$. In \iftoggle{longversion}{ \appref{appendixsub: solutionexists}}{ the supplemental materials}, we show that there exists a unique solution (up to a positive scaling) to \eqnref{eqn: bounded} which satisfies \assref{assumption: bounded cone}. We show that $\mathcal{TS}_{\boldsymbol{\theta}^*}$ is an $\epsilon$-approximate teaching set, yielding a bound of $r = d^{\bigO{\log^2 \frac{1}{\epsilon}}}$ on the $\epsilon$-approximate teaching dimension. To achieve this, we first establish the $\epsilon$-\tt{closeness} of $\hat{f}$ (the classifier $\hat{f}({\mathbf{x}}) := \hat{\boldsymbol{\theta}}\cdot\Phi({\mathbf{x}})$ where $\hat{\boldsymbol{\theta}} \in {\boldsymbol{\mathcal{A}}}_{opt}$) to $f^*$. 
Formally, we state the result as follows: \begin{theorem}\label{thm: boundedclassifier} For any target $\boldsymbol{\theta}^* \in {\mathcal{H}}_{{\mathcal{K}}}$ that satisfies \assref{assumption: orthogonal}-\ref{assumption: bounded cone} and $\epsilon > 0$, the teaching set $\mathcal{TS}_{\boldsymbol{\theta}^*}$ constructed for \eqnref{eqn: bounded} satisfies $\left|f^*({\mathbf{x}}) - \hat{f}({\mathbf{x}})\right| \le \epsilon$ for any ${\mathbf{x}} \in {\mathcal{X}}$ and any $\hat{f} \in {\boldsymbol{\mathcal{A}}}_{opt}(\mathcal{TS}_{\boldsymbol{\theta}^*})$. \end{theorem} Using \thmref{thm: boundedclassifier}, we obtain the main result of this subsection, which gives a $d^{\bigO{\log^2 \frac{1}{\epsilon}}}$ bound on the $\epsilon$-approximate teaching dimension. We detail the proofs \iftoggle{longversion}{in \appref{appendix: gaussian perceptron}}{in the supplemental materials}: \begin{theorem}\label{thm: gaussian_main_thm} For any target $\boldsymbol{\theta}^* \in {\mathcal{H}}_{{\mathcal{K}}}$ that satisfies \assref{assumption: orthogonal}-\ref{assumption: bounded cone} and $\epsilon > 0$, the teaching set $\mathcal{TS}_{\boldsymbol{\theta}^*}$ constructed for \eqnref{eqn: bounded} is an $\epsilon$-approximate teaching set with $\epsilon$-$TD(\boldsymbol{\theta}^*,{\boldsymbol{\mathcal{A}}}_{opt}) = d^{\bigO{\log^2 \frac{1}{\epsilon}}}$, i.e. for any $\hat{f} \in {\boldsymbol{\mathcal{A}}}_{opt}(\mathcal{TS}_{\boldsymbol{\theta}^*})$, $$\left|\textbf{err}(f^*) - \textbf{err}(\hat{f})\right| \le \epsilon.$$ \end{theorem} \paragraph{Numerical example} \figref{fig:example-RBF} demonstrates the approximate teaching process for a Gaussian learner. We aim to teach the optimal model $\boldsymbol{\theta}^*$ (infinite-dimensional) trained on a pre-collected dataset with Gaussian parameter $\sigma = 0.9$, whose corresponding boundary is shown in \figref{fig:Teacher_RBf}. 
Now, for approximate teaching, the teacher calculates $\Tilde{\boldsymbol{\theta}}$ using the polynomially approximated kernel (i.e. $\Tilde{{\mathcal{K}}}$; in this case, $k=5$) in \eqnref{eqn: eqn17} and the corresponding feature map in \eqnref{eqn:eqn18}. To ensure \assref{assumption: orthogonal} is met while generating teaching examples for $\Tilde{\boldsymbol{\theta}}$, we employ the randomized algorithm (as was used in \secref{subsec.poly}) with the key idea of ensuring that the teaching examples on the boundary are orthogonal (and hence linearly independent) in the approximated polynomial feature space, i.e. $\Tilde{{\mathcal{K}}}({\mathbf{z}}_i, {\mathbf{z}}_j) = 0$ for $i \neq j$. Finally, the Gaussian learner receives $\mathcal{TS}_{\boldsymbol{\theta}^*}$ and learns the boundary shown in \figref{fig:Learned_RBF}. Note the slight difference between the boundaries in \figref{fig:Polynomial_approx} and in \figref{fig:Learned_RBF}, as the learner learns with a Gaussian kernel. \section{Problem Statement}\label{sec.statement} \paragraph{Basic definitions} We denote by ${\mathcal{X}}$ the input space and by ${\mathcal{Y}}:= \{-1,1\}$ the output space. A hypothesis is a function $h: {\mathcal{X}} \to {\mathcal{Y}}$. In this paper, we identify a hypothesis $h_{\boldsymbol{\theta}}$ with its model parameter $\boldsymbol{\theta}$. The hypothesis space ${\mathcal{H}}$ is a set of hypotheses. By a training point we mean a pair $\paren{{\mathbf{x}},y} \in {\mathcal{X}} \times {\mathcal{Y}}$. We assume that the training points are drawn from an unknown distribution ${\mathcal{P}}$ over ${\mathcal{X}} \times {\mathcal{Y}}$. A training set is a multiset ${\mathcal{D}}$ = $\curlybracket{\paren{{\mathbf{x}}_1,y_1},\cdots,\paren{{\mathbf{x}}_n,y_n}}$, where repeated pairs are allowed. Let $\mathbb{D}$ denote the set of all training sets of all sizes.
A learning algorithm ${\boldsymbol{\mathcal{A}}}: \mathbb{D} \to 2^{{\mathcal{H}}} $ takes in a training set ${\mathcal{D}} \in \mathbb{D}$ and outputs a subset of the hypothesis space ${\mathcal{H}}$. That is, ${\boldsymbol{\mathcal{A}}}$ doesn't necessarily return a unique hypothesis. \vspace{-1mm} \paragraph{Kernel perceptron} Consider a set of training points ${\mathcal{D}} := \curlybracket{\paren{{\mathbf{x}}_i, y_i}}_{i=1}^n$ where ${\mathbf{x}}_i \in \ensuremath{\mathbb{R}}^d$ and a hypothesis $\boldsymbol{\theta} \in \mathbb{R}^d$. A linear perceptron is defined as $f_{\boldsymbol{\theta}}({\mathbf{x}}):= \sign(\boldsymbol{\theta}\cdot {\mathbf{x}})$ in the homogeneous setting. We consider the algorithm ${\boldsymbol{\mathcal{A}}}_{opt}$, which learns an optimal perceptron classifying ${\mathcal{D}}$, as defined below: \begin{equation} {\boldsymbol{\mathcal{A}}}_{opt}\paren{{\mathcal{D}}} := \mathop{\rm arg\,min}_{\boldsymbol{\theta} \in \ensuremath{\mathbb{R}}^d} \sum_{i = 1}^n \ell(f_{\boldsymbol{\theta}}({\mathbf{x}}_i),y_i), \label{eqn: objectmain} \end{equation} where the loss function is $\ell(f_{\boldsymbol{\theta}}({\mathbf{x}}),y) := \max(-y\cdot f_{\boldsymbol{\theta}}({\mathbf{x}}), 0)$. Similarly, we consider the non-linear setting via kernel-based hypotheses for perceptrons that are defined with respect to a kernel operator ${\mathcal{K}}: {\mathcal{X}} \times {\mathcal{X}} \to \ensuremath{\mathbb{R}}$ which adheres to Mercer’s positive definite conditions \cite{vapnik1998statistical}. A kernel-based hypothesis has the form \begin{equation} f({\mathbf{x}}) = \sum_{i=1}^k{\alpha_i}\cdot{\mathcal{K}}({\mathbf{x}}_i,{\mathbf{x}}) \label{eqn: kernelfunction} \end{equation} where ${\mathbf{x}}_i \in {\mathcal{X}}$ for all $i$ and the $\alpha_i$ are real. In order to simplify the derivation of the algorithms and their analysis, we associate a \tt{reproducing kernel Hilbert space} (RKHS) with ${\mathcal{K}}$ in the standard way common to all kernel methods.
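As a concrete illustration of a kernel-based hypothesis of the form \eqnref{eqn: kernelfunction}, the following minimal Python sketch evaluates $f({\mathbf{x}})=\sum_i \alpha_i\,{\mathcal{K}}({\mathbf{x}}_i,{\mathbf{x}})$ and the induced classifier $\sign(f({\mathbf{x}}))$. The RBF kernel with $\sigma=0.9$ matches the numerical example above, but the particular anchor points and weights are illustrative assumptions, not taken from the paper:

```python
import math

def rbf_kernel(x, z, sigma=0.9):
    """Gaussian (RBF) kernel K(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq / (2.0 * sigma ** 2))

def kernel_hypothesis(anchors, alphas, kernel):
    """Return f(x) = sum_i alpha_i * K(x_i, x), the form of the kernel-based hypothesis."""
    def f(x):
        return sum(a * kernel(xi, x) for a, xi in zip(alphas, anchors))
    return f

# Illustrative anchor points and real weights (hypothetical values).
anchors = [(0.0, 0.0), (1.0, 1.0)]
alphas = [1.0, -1.0]
f = kernel_hypothesis(anchors, alphas, rbf_kernel)

# The induced classifier is sign(f(x)); here f is antisymmetric between the
# two anchors, so their midpoint lies exactly on the decision boundary.
assert abs(f((0.5, 0.5))) < 1e-12
assert f((0.0, 0.0)) > 0 > f((1.0, 1.0))
```

The same closure works unchanged for any Mercer kernel, which is the point of phrasing the hypothesis class through ${\mathcal{K}}$ alone.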
Formally, let ${\mathcal{H}}_{{\mathcal{K}}}$ be the closure of the set of all hypotheses of the form given in \eqnref{eqn: kernelfunction}. A non-linear kernel perceptron corresponding to ${\mathcal{K}}$ optimizes \eqnref{eqn: objectmain} as follows: \begin{equation} {\boldsymbol{\mathcal{A}}}_{opt}({\mathcal{D}}):= \mathop{\rm arg\,min}_{\boldsymbol{\theta} \in {\mathcal{H}}_{{\mathcal{K}}}}\sum_{i=1}^n \ell(f_{\boldsymbol{\theta}}({\mathbf{x}}_i),y_i)\label{eqn: objectkernel} \end{equation} where $f_{\boldsymbol{\theta}}(\cdot) = \sum_{i=1}^l \alpha_i\cdot {\mathcal{K}}({\mathbf{a}}_i,\cdot)$ for some $\{{\mathbf{a}}_i\}_{i=1}^l \subset {\mathcal{X}}$ and real $\alpha_i$. Alternatively, we also write $f_{\boldsymbol{\theta}}(\cdot) = \boldsymbol{\theta}\cdot\Phi(\cdot)$, where $\Phi : {\mathcal{X}} \rightarrow {\mathcal{H}}_{{\mathcal{K}}}$ is the feature map associated with the kernel function ${\mathcal{K}}$. The kernel of a reproducing kernel Hilbert space decomposes as ${\mathcal{K}}({\mathbf{x}},{\mathbf{x}}') = \normg{\Phi({\mathbf{x}})}{\Phi({\mathbf{x}}')}$ \cite{learnkernel} for any ${\mathbf{x}}, {\mathbf{x}}' \in {\mathcal{X}}$. Thus, we also identify $f_{\boldsymbol{\theta}}$ with $\sum_{i=1}^l \alpha_i\cdot \Phi({\mathbf{a}}_i)$. \paragraph{The teaching problem} We are interested in the problem of teaching a target hypothesis $\boldsymbol{\theta}^*$, where a helpful \tt{teacher} provides labelled data points $\mathcal{TS} \subseteq {\mathcal{X}}\times {\mathcal{Y}}$, also called a \tt{teaching set}. Assuming the constructive setting \cite{JMLR:v17:15-630}, to teach a kernel perceptron learner the teacher can construct a training set with any items in $\mathbb{R}^d$, i.e. for any $({\mathbf{x}}', y') \in \mathcal{TS}$ we have ${\mathbf{x}}' \in \mathbb{R}^d$ and $y' \in \curlybracket{-1,1}$. Importantly, for the purpose of teaching we do not \tt{assume} that the elements of $\mathcal{TS}$ are drawn \tt{i.i.d.} from a distribution.
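The decomposition ${\mathcal{K}}({\mathbf{x}},{\mathbf{x}}')=\normg{\Phi({\mathbf{x}})}{\Phi({\mathbf{x}}')}$ can be checked concretely whenever the feature map is explicit and finite-dimensional. A minimal sketch, using the homogeneous quadratic kernel $({\mathbf{x}}\cdot{\mathbf{x}}')^2$ on $\mathbb{R}^2$ with $\Phi({\mathbf{x}})=(x_1^2,\sqrt{2}\,x_1x_2,x_2^2)$ (a standard textbook instance chosen for illustration, not a construction from this paper):

```python
import math
import random

def poly_kernel(x, z):
    """Homogeneous quadratic kernel K(x, z) = (x . z)^2 on R^2."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    """Explicit feature map into R^3 with K(x, z) = <phi(x), phi(z)>."""
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Verify K(x, z) = <phi(x), phi(z)> on random inputs:
# (x1 z1 + x2 z2)^2 = x1^2 z1^2 + 2 x1 x2 z1 z2 + x2^2 z2^2.
random.seed(0)
for _ in range(100):
    x = (random.uniform(-2, 2), random.uniform(-2, 2))
    z = (random.uniform(-2, 2), random.uniform(-2, 2))
    assert abs(poly_kernel(x, z) - dot(phi(x), phi(z))) < 1e-9
```

For the Gaussian kernel the analogous $\Phi$ is infinite-dimensional, which is why the parameter $\boldsymbol{\theta}\in{\mathcal{H}}_{\mathcal{K}}$ is only manipulated through kernel evaluations.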
We define the teaching dimension for the \tt{exact} parameter $\boldsymbol{\theta}^*$ of a kernel perceptron as $TD(\boldsymbol{\theta}^*, {\boldsymbol{\mathcal{A}}}_{opt})$, which is the size of the smallest teaching set $\mathcal{TS}$ such that ${\boldsymbol{\mathcal{A}}}_{opt}\paren{\mathcal{TS}} = \{\boldsymbol{\theta}^*\}$. We refer to teaching the exact parameters of a target hypothesis $\boldsymbol{\theta}^*$ as \tt{exact teaching}. Since a perceptron is agnostic to the norm of its parameter, we study the problem of teaching a target classifier \tt{decision boundary}, where ${\boldsymbol{\mathcal{A}}}_{opt}\paren{\mathcal{TS}} = \{t\boldsymbol{\theta}^*\}$ for some real $t > 0$. Thus, $$TD(\{t\boldsymbol{\theta}^*\}, {\boldsymbol{\mathcal{A}}}_{opt}) = \min_{\text{real}\,p > 0} TD(p\boldsymbol{\theta}^*, {\boldsymbol{\mathcal{A}}}_{opt}).$$ Since it can be stringent to construct a teaching set for a decision boundary (see \secref{subsec.gaussiankernel}), exact teaching is not always feasible. We introduce and study \tt{approximate teaching}, which is formally defined as follows: \begin{definition}[$\epsilon$-approximate teaching set] Consider a kernel perceptron learner, with a kernel ${\mathcal{K}}: {\mathcal{X}} \times {\mathcal{X}} \to \ensuremath{\mathbb{R}}$ and the corresponding RKHS feature map $\Phi(\cdot)$.
For a target model $\boldsymbol{\theta}^* \in {\mathcal{H}}_{{\mathcal{K}}}$ and $\epsilon > 0$, we say $\mathcal{TS} \subseteq {\mathcal{X}}\times {\mathcal{Y}}$ is an $\epsilon$-approximate teaching set wrt ${\mathcal{P}}$ if any kernel perceptron $\hat{\boldsymbol{\theta}} \in {\boldsymbol{\mathcal{A}}}_{opt}(\mathcal{TS})$ satisfies \looseness -1 \begin{equation} \left|\expct{\max(-y\cdot f^*({\mathbf{x}}), 0)} - \expct{\max(-y\cdot \hat{f}({\mathbf{x}}), 0)}\right| \le \epsilon \end{equation} where the expectations are over $({\mathbf{x}},y)\sim {\mathcal{P}}$, $f^*({\mathbf{x}}) = \boldsymbol{\theta}^*\cdot \Phi({\mathbf{x}})$ and $\hat{f}({\mathbf{x}}) = \hat{\boldsymbol{\theta}}\cdot \Phi({\mathbf{x}})$. \end{definition} Naturally, we define the approximate teaching dimension as: \begin{definition}[$\epsilon$-approximate teaching dimension]\label{def:approxteaching} Consider a kernel perceptron learner, with a kernel ${\mathcal{K}}: {\mathcal{X}} \times {\mathcal{X}} \to \ensuremath{\mathbb{R}}$ and the corresponding RKHS feature map $\Phi(\cdot)$. For a target model $\boldsymbol{\theta}^* \in {\mathcal{H}}_{{\mathcal{K}}}$ and $\epsilon > 0$, we define $\epsilon$-$TD(\boldsymbol{\theta}^*,{\boldsymbol{\mathcal{A}}}_{opt})$ as the size of the smallest teaching set for $\epsilon$-approximate teaching of $\boldsymbol{\theta}^*$ wrt ${\mathcal{P}}$. \end{definition} According to \defref{def:approxteaching}, exact teaching corresponds to constructing a $0$-approximate teaching set for a target classifier (e.g., the decision boundary of a kernel perceptron). We study linear and polynomial kernelized perceptrons in the exact teaching setting.
Under some mild assumptions on the smoothness of the data distribution, we establish a bound on the $\epsilon$-approximate teaching dimension of the Gaussian kernelized perceptron. \section{List of Appendices}\label{appendix:table-of-contents} First, we provide the proofs of our theoretical results in full detail in the subsequent sections. We follow these with the experimental evaluation section. The appendices are summarized as follows: \begin{itemize} \item \appref{appendix: polynomial perceptron} provides the proof of \thmref{thm: poly_main_theorem} \item \appref{appendix: motivation} provides the motivations and key insights into \assumref{assumption: polyorthogonal}{assumption: orthogonal}{assumption: bounded cone} \item \appref{appendix: gaussian perceptron} provides relevant results and proofs of \thmref{thm: boundedclassifier} and \thmref{thm: gaussian_main_thm} (Approximate Teaching Set for Gaussian Learner) \item \appref{appendix: experimentals} provides the experimental evaluations of the theoretical results on various datasets \end{itemize}
% https://arxiv.org/abs/1810.11439
\title{Hardy-Littlewood-Sobolev and Stein-Weiss inequalities on homogeneous Lie groups}
\begin{abstract}
In this note we prove the Stein-Weiss inequality on general homogeneous Lie groups. The obtained results extend previously known inequalities. Special properties of homogeneous norms play a key role in our proofs. Also, we give a simple proof of the Hardy-Littlewood-Sobolev inequality on general homogeneous Lie groups.
\end{abstract}
\section{Introduction} Historically, in \cite{HL28}, Hardy and Littlewood considered the one dimensional fractional integral operator on $(0,\infty)$ given by \begin{equation}\label{1Doper} T_{\lambda}u(x)=\int_{0}^{\infty}\frac{u(y)}{|x-y|^{\lambda}}dy,\,\,\,\,0<\lambda<1, \end{equation} and proved the following theorem: \begin{thm}\label{1DHLS28} Let $1<p<q<\infty$ and $u\in L^{p}(0,\infty)$ with $\frac{1}{q}=\frac{1}{p}+\lambda-1$. Then \begin{equation} \|T_{\lambda}u\|_{L^{q}(0,\infty)}\leq C \|u\|_{L^{p}(0,\infty)}, \end{equation} where $C$ is a positive constant independent of $u$. \end{thm} The $N$-dimensional analogue of \eqref{1Doper} can be written by the formula: \begin{equation}\label{NDoper} I_{\lambda}u(x)=\int_{\mathbb{R}^{N}}\frac{u(y)}{|x-y|^{\lambda}}dy,\,\,\,\,0<\lambda<N. \end{equation} The $N$-dimensional case of Theorem \ref{1DHLS28} was extended by Sobolev in \cite{Sob38}: \begin{thm}\label{THM:HLS} Let $1<p<q<\infty$ and $u\in L^{p}(\mathbb{R}^{N})$ with $\frac{1}{q}=\frac{1}{p}+\frac{\lambda}{N}-1$. Then \begin{equation} \|I_{\lambda}u\|_{L^{q}(\mathbb{R}^{N})}\leq C \|u\|_{L^{p}(\mathbb{R}^{N})}, \end{equation} where $C$ is a positive constant independent of $u$. \end{thm} Later, in \cite{StWe58} Stein and Weiss obtained the following two-weight extension of the Hardy-Littlewood-Sobolev inequality, which is known as the Stein-Weiss inequality. \begin{thm}\label{Classiacal_Stein-Weiss_inequality} Let $0<\lambda<N$, $1<p<\infty$, $\alpha<\frac{N(p-1)}{p}$, $\beta<\frac{N}{q}$, $\alpha+\beta\geq0$ and $\frac{1}{q}=\frac{1}{p}+\frac{\lambda+\alpha+\beta}{N}-1$. If $1<p\leq q<\infty$, then \begin{equation} \||x|^{-\beta}I_{\lambda}u\|_{L^{q}(\mathbb{R}^{N})}\leq C \||x|^{\alpha}u\|_{L^{p}(\mathbb{R}^{N})}, \end{equation} where $C$ is a positive constant independent of $u$. \end{thm} The Hardy-Littlewood-Sobolev inequality on the Heisenberg group was obtained by Folland and Stein in \cite{FS74}.
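The relation $\frac{1}{q}=\frac{1}{p}+\frac{\lambda}{N}-1$ in Theorem \ref{THM:HLS} is exactly the one forced by dilation invariance: under $u_t(x)=u(tx)$ one has $\|u_t\|_{L^p}=t^{-N/p}\|u\|_{L^p}$ and $I_{\lambda}u_t(x)=t^{\lambda-N}(I_{\lambda}u)(tx)$, so the powers of $t$ on both sides must match. This is a standard scaling observation, not spelled out in the text; the sketch below checks the bookkeeping with exact rational arithmetic (the sample triples $(N,p,\lambda)$ are arbitrary admissible choices):

```python
from fractions import Fraction as F

def forced_inv_q(N, p, lam):
    """The exponent 1/q forced by the scaling relation 1/q = 1/p + lam/N - 1."""
    return F(1, 1) / p + F(lam) / N - 1

# Under u_t(x) = u(t x): ||u_t||_p = t^(-N/p) ||u||_p and
# ||I_lam u_t||_q = t^(lam - N - N/q) ||I_lam u||_q, so the inequality in
# Theorem (HLS) can hold for all u only if lam - N - N/q = -N/p.
for (N, p, lam) in [(3, F(3, 2), F(3, 2)), (2, F(4, 3), 1), (5, F(2), F(7, 2))]:
    inv_q = forced_inv_q(N, p, lam)
    assert 0 < lam < N and 0 < inv_q < F(1, 1) / p   # 0 < lam < N and p < q
    assert lam - N - N * inv_q == -F(N) / p          # matching powers of t
```

The same computation with the extra weights $|x|^{-\beta}$, $|x|^{\alpha}$ yields the Stein-Weiss relation $\frac{1}{q}=\frac{1}{p}+\frac{\lambda+\alpha+\beta}{N}-1$.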
In \cite{GMS} the authors studied the Stein-Weiss inequality on Carnot groups. Note that in \cite{HLZ} the authors also proved an analogue of the Stein-Weiss inequality on the Heisenberg groups. In \cite{ZW} the author proved the Stein-Weiss inequality on product spaces. In \cite{JD} the author proved the Stein-Weiss inequality on the Euclidean half-space. In the works \cite{CF}, \cite{FM}, \cite{MW} and \cite{Per} the authors studied the regularity of fractional integrals on Euclidean spaces. In this note we first give a simple proof of the Hardy-Littlewood-Sobolev inequality on general homogeneous groups, recapturing the result of \cite[Theorem 4.1]{RY}, where much heavier machinery was used. In the proof we follow the method of Stein and Weiss; however, special properties of homogeneous norms of the homogeneous Lie groups play a key role in our calculations. Furthermore, in Theorem \ref{stein-weiss3} we establish the Stein-Weiss inequality on general homogeneous groups based on the integral Hardy inequalities established in \cite{RY}. Of course, the obtained result recovers the previously known results on Abelian (Euclidean), Heisenberg and Carnot groups, since the class of homogeneous Lie groups contains all of these and since we can work with an arbitrary homogeneous quasi-norm. Note that in this direction systematic studies of different functional inequalities on general homogeneous (Lie) groups were initiated by the paper \cite{RSAM}. We refer to this and other papers by the authors (e.g. \cite{RSY1}) for further discussions. We also note that the best constant in the Hardy-Littlewood-Sobolev inequality on the Heisenberg group is now known, see Frank and Lieb \cite{FL12} (in the Euclidean case this was done earlier by Lieb in \cite{Lie83}). The expression for the best constant depends on the particular quasi-norm used and may change for a different choice of the quasi-norm.
The main results of this paper are as follows: \begin{itemize} \item {\bf Hardy-Littlewood-Sobolev inequality}: Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$ and let $|\cdot|$ be an arbitrary homogeneous quasi-norm on $\mathbb{G}$. Let $1<p<q<\infty,\,\,0<\lambda<Q$, $\frac{1}{q}=\frac{1}{p}+\frac{\lambda}{Q}-1$. Then for all $u\in L^{p}(\mathbb{G})$ and $h\in L^{q'}(\mathbb{G})$ we have \begin{equation}\label{EQ:HLSi1} \left|\int_{\mathbb{G}}\int_{\mathbb{G}}\frac{u(y)h(x)}{|y^{-1} x|^{\lambda}}dxdy\right|\leq C\|u\|_{L^{p}(\mathbb{G})}\|h\|_{L^{q'}(\mathbb{G})}, \end{equation} where $C$ is a positive constant independent of $u$ and $h$. For the formulation similar to that of Theorem \ref{THM:HLS} see Theorem \ref{Trsob}. \item {\bf Stein-Weiss inequality}: Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$ and let $|\cdot|$ be an arbitrary homogeneous quasi-norm on $\mathbb{G}$. Let $0<\lambda<Q$, $1<p\leq q<\infty$, $\alpha<\frac{Q}{p'}$, $\beta<\frac{Q}{q}$, $\alpha+\beta\geq0$, $\frac{1}{q}=\frac{1}{p}+\frac{\alpha+\beta+\lambda}{Q}-1$, where $\frac{1}{p}+\frac{1}{p'}=1$ and $\frac{1}{q}+\frac{1}{q'}=1$. Then we have \begin{equation}\label{EQ:HLSi2} \left|\int_{\mathbb{G}}\int_{\mathbb{G}}\frac{u(y)h(x)}{|x|^{\beta}|y^{-1}x|^{\lambda}|y|^{\alpha}}dxdy\right|\leq C\|u\|_{L^{p}(\mathbb{G})}\|h\|_{L^{q'}(\mathbb{G})}, \end{equation} where $C$ is a positive constant independent of $u$ and $h$. For the formulation similar to that of Theorem \ref{Classiacal_Stein-Weiss_inequality} see Theorem \ref{stein-weiss3}. Although \eqref{EQ:HLSi1} is clearly contained in \eqref{EQ:HLSi2}, we still keep them as separate statements since the Hardy-Littlewood-Sobolev inequality \eqref{EQ:HLSi1} allows for a simple proof which is much more transparent than that of the Stein-Weiss inequality \eqref{EQ:HLSi2}. The present proof is also much simpler than the original proof of \eqref{EQ:HLSi1} in \cite{RY}.
\end{itemize} Finally, let us note that the heavier machinery developed in \cite{RY} also yielded a differential version of the Stein-Weiss inequality (which may also be called {Stein-Weiss-Sobolev inequality}), however, in a more special case of graded groups as follows (see \cite[Theorem 5.12]{RY} for details and the proof): \begin{itemize} \item {\bf Differential Stein-Weiss (or Stein-Weiss-Sobolev) inequality}: Let $\mathbb{G}$ be a graded Lie group of homogeneous dimension $Q$ and let $|\cdot|$ be an arbitrary homogeneous quasi-norm. Let $1<p,q<\infty$, $0\leq a<Q/p$ and $0\leq b<Q/q$. Let $0<\lambda<Q$, $0\leq \alpha <a+Q/p'$ and $0\leq \beta\leq b$ be such that $(Q-ap)/(pQ)+(Q-q(b-\beta))/(qQ)+(\alpha+\lambda)/Q=2$ and $\alpha+\lambda\leq Q$, where $1/p+1/p'=1$. Then there exists a positive constant $C=C(Q,\lambda, p, \alpha, \beta, a, b)$ such that \begin{equation}\label{HLS_ineq1_grad} \left|\int_{\mathbb{G}}\int_{\mathbb{G}}\frac{\overline{f(x)}g(y)}{|x|^{\alpha}|y^{-1}x|^{\lambda}|y|^{\beta}}dxdy\right|\leq C\|f\|_{\dot{L}^{p}_{a}(\mathbb{G})}\|g\|_{\dot{L}^{q}_{b}(\mathbb{G})} \end{equation} holds for all $f\in \dot{L}^{p}_{a}(\mathbb{G})$ and $g\in \dot{L}^{q}_{b}(\mathbb{G})$, where $\dot{L}^{p}_{a}(\mathbb{G})$ stands for a homogeneous Sobolev space of order $a$ over $L^p$ on the graded Lie group $\mathbb{G}.$ \end{itemize} \section{Stein-Weiss inequality on homogeneous group} \label{SEC:2} Let us recall that a Lie group (on $\mathbb{R}^{N}$) $\mathbb{G}$ with the dilation $$D_{\lambda}(x):=(\lambda^{\nu_{1}}x_{1},\ldots,\lambda^{\nu_{N}}x_{N}),\; \nu_{1},\ldots, \nu_{N}>0,\; D_{\lambda}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N},$$ which is an automorphism of the group $\mathbb{G}$ for each $\lambda>0,$ is called a {\em homogeneous (Lie) group}.
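A concrete instance of such anisotropic dilations is the Heisenberg group $\mathbb{H}^1$, i.e. $\mathbb{R}^3$ with dilation weights $\nu=(1,1,2)$, equipped with the Korányi quasi-norm $|x|=\big((x_1^2+x_2^2)^2+x_3^2\big)^{1/4}$. The sketch below numerically checks the two defining properties of a homogeneous quasi-norm, $|D_\lambda x|=\lambda|x|$ and $|x|=|x^{-1}|$ (the choice of this particular group and quasi-norm is purely for illustration):

```python
import random

def dilate(lam, x):
    """Anisotropic dilation D_lam on H^1 with weights (1, 1, 2)."""
    return (lam * x[0], lam * x[1], lam ** 2 * x[2])

def inverse(x):
    """Group inverse on H^1: (a, b, c)^{-1} = (-a, -b, -c)."""
    return (-x[0], -x[1], -x[2])

def koranyi(x):
    """Koranyi quasi-norm |x| = ((x1^2 + x2^2)^2 + x3^2)^(1/4)."""
    return ((x[0] ** 2 + x[1] ** 2) ** 2 + x[2] ** 2) ** 0.25

random.seed(1)
for _ in range(100):
    x = tuple(random.uniform(-3, 3) for _ in range(3))
    lam = random.uniform(0.1, 5.0)
    assert abs(koranyi(dilate(lam, x)) - lam * koranyi(x)) < 1e-9  # |D_lam x| = lam |x|
    assert abs(koranyi(inverse(x)) - koranyi(x)) < 1e-12           # |x| = |x^{-1}|
```

Note that the Euclidean norm of $\mathbb{R}^3$ would fail the first check here, since the third coordinate scales like $\lambda^2$: homogeneity must be measured against the group dilations, not the vector space ones.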
For simplicity, throughout this paper we use the notation $\lambda x$ for the dilation $D_{\lambda}.$ The homogeneous dimension of the homogeneous group $\mathbb{G}$ is denoted by $Q:=\nu_{1}+\ldots+\nu_{N}.$ Also, in this note we denote a homogeneous quasi-norm on $\mathbb{G}$ by $|x|$, which is a continuous non-negative function \begin{equation} \mathbb{G}\ni x\mapsto |x|\in[0,\infty), \end{equation} with the properties \begin{itemize} \item[i)] $|x|=|x^{-1}|$ for all $x\in\mathbb{G}$, \item[ii)] $|\lambda x|=\lambda |x|$ for all $x\in \mathbb{G}$ and $\lambda>0$, \item[iii)] $|x|=0$ iff $x=0$. \end{itemize} Moreover, the following polarisation formula on homogeneous Lie groups will be used in our proofs: there is a (unique) positive Borel measure $\sigma$ on the unit quasi-sphere $ \mathfrak{S}:=\{x\in \mathbb{G}:\,|x|=1\}, $ so that for every $f\in L^{1}(\mathbb{G})$ we have \begin{equation}\label{EQ:polar} \int_{\mathbb{G}}f(x)dx=\int_{0}^{\infty} \int_{\mathfrak{S}}f(ry)r^{Q-1}d\sigma(y)dr. \end{equation} The quasi-ball centred at $x \in \mathbb{G}$ with radius $R > 0$ can be defined by \begin{equation} B(x,R) := \{y \in\mathbb {G} : |x^{-1} y|< R\}. \end{equation} We refer to \cite{FS1} for the original appearance of such groups, and to \cite{FR} for a recent comprehensive treatment. Let us consider the integral operator \begin{equation} I_{\lambda,|\cdot|}u(x)=\int_{\mathbb{G}}\frac{u(y)}{|y^{-1} x|^{\lambda}}dy,\,\,\,0<\lambda<Q. \end{equation} Note that when $Q>\alpha>0$ and $\lambda=Q-\alpha$ we get the Riesz potential $I_{\lambda,|\cdot|}=I_{Q-\alpha,|\cdot|}$. First we give a short proof of a version of the Hardy-Littlewood-Sobolev inequality on $\mathbb{G}$. \begin{thm}\label{Trsob} Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$ and let $|\cdot|$ be an arbitrary homogeneous quasi-norm on $\mathbb{G}$. Let $1<p<q<\infty,\,\,0<\lambda<Q$, $\frac{1}{q}=\frac{1}{p}+\frac{\lambda}{Q}-1$, and $u\in L^{p}(\mathbb{G})$. 
Then we have \begin{equation}\label{rieszsobolev} \|I_{\lambda,|\cdot|}u\|_{L^{q}(\mathbb{G})}\leq C \|u\|_{L^{p}(\mathbb{G})}, \end{equation} where $C$ is a positive constant independent of $u.$ \end{thm} \begin{rem} With the assumptions of Theorem \ref{Trsob} and $h\in L^{q'}(\mathbb{G}),$ we have the following Hardy-Littlewood-Sobolev inequality \begin{equation} \left|\int_{\mathbb{G}}\int_{\mathbb{G}}\frac{u(y)h(x)}{|y^{-1} x|^{\lambda}}dxdy\right|\leq C\|u\|_{L^{p}(\mathbb{G})}\|h\|_{L^{q'}(\mathbb{G})}, \end{equation} where $C$ is a positive constant independent of $u$ and $h$. This gives \eqref{EQ:HLSi1}. \end{rem} \begin{proof}[Proof of Theorem \ref{Trsob}] As in the Euclidean case we will show that there is a constant $C>0$ such that \begin{equation}\label{needtoprove} m\{x:|K*u(x)|>\zeta\}\leq C\frac{\|u\|_{L^{p}(\mathbb{G})}^{q}}{\zeta^{q}}, \end{equation} where $m$ is the Haar measure on $\mathbb{G}$, $K(x)=|x|^{-\lambda}$ and $I_{\lambda,|\cdot|}u(x)=K*u(x)$, where $*$ denotes the convolution on $\mathbb{G}$. This implies inequality \eqref{rieszsobolev} via the Marcinkiewicz interpolation theorem. Let $K(x)=K_{1}(x)+K_{2}(x)$, where \begin{equation}\label{2.7} K_{1}(x):= \begin{cases} K(x),\,\,\,\text{if}\,\,\,|x|\leq\mu, \\ 0,\,\,\,\text{if}\,\,\,|x|>\mu, \end{cases} \text{and}\,\,\,\, K_{2}(x):= \begin{cases} K(x),\,\,\,\text{if}\,\,\,|x|>\mu, \\ 0,\,\,\,\text{if}\,\,\,|x|\leq\mu. \end{cases} \end{equation} Here $\mu$ is a positive constant. We have $I_{\lambda,|\cdot|}u(x)=K*u(x)=K_{1}*u(x)+K_{2}*u(x)$, so \begin{equation}\label{ocenkamery} m\{x:|K*u(x)|>2\zeta\}\leq m\{x:|K_{1}*u(x)|>\zeta\}+m\{x:|K_{2}*u(x)|>\zeta\}. \end{equation} It is enough to prove inequality \eqref{needtoprove} with $2\zeta$ instead of $\zeta$ on the left-hand side.
Without loss of generality we can assume $\|u\|_{L^{p}(\mathbb{G})}=1$ and by using Chebyshev's and Minkowski's inequalities, we get \begin{multline}\label{2.9} m\{x:|K_{1}*u(x)|>\zeta\}\leq\frac{\int_{|K_{1}*u|>\zeta}|K_{1}*u|^{p}dx}{\zeta^{p}}\\ \leq\frac{\|K_{1}*u\|^{p}_{L^{p}(\mathbb{G})}}{\zeta^{p}}\leq\frac{\|K_{1}\|^{p}_{L^{1}(\mathbb{G})}\|u\|^{p}_{L^{p}(\mathbb{G})}}{\zeta^{p}}=\frac{\|K_{1}\|^{p}_{L^{1}(\mathbb{G})}}{\zeta^{p}}. \end{multline} By using \eqref{EQ:polar} and \eqref{2.7}, we compute \begin{multline} \|K_{1}\|_{L^{1}(\mathbb{G})}=\int_{0<|x|\leq\mu}|x|^{-\lambda}dx=\int_{0}^{\mu}r^{Q-1}r^{-\lambda}dr\int_{\mathfrak{S}}|y|^{-\lambda}d\sigma(y)\\ =|\mathfrak{S}| \int_{0}^{\mu}r^{Q-\lambda-1}dr= \frac{|\mathfrak {S}|}{Q-\lambda} \mu^{Q-\lambda}, \end{multline} where $|\mathfrak{S}|$ is the $Q - 1$ dimensional surface measure of the unit quasi-sphere $\mathfrak{S}$. By using this in \eqref{2.9}, we obtain \begin{equation} m\{x:|K_{1}*u(x)|>\zeta\}\leq \left(\frac{|\mathfrak {S}|}{Q-\lambda}\right)^{p} \frac{\mu^{(Q-\lambda)p}}{\zeta^{p}}. \end{equation} Similarly by using Young's inequality, \eqref{EQ:polar} and the assumptions, we get \begin{multline} \|K_{2}*u\|_{L^{\infty}(\mathbb{G})}\leq\|K_{2}\|_{L^{p'}(\mathbb{G})}\|u\|_{L^{p}(\mathbb{G})}=\left(\int_{\mu}^{\infty}r^{-\lambda p'}r^{Q-1}dr\int_{\mathfrak{S}}|y|^{-\lambda p'}d\sigma(y)\right)^{\frac{1}{p'}}\\ =|\mathfrak{S}|^{\frac{1}{p'}}\left(\int_{\mu}^{\infty}r^{Q-\lambda p'-1}dr\right)^{\frac{1}{p'}}=|\mathfrak{S}|^{\frac{1}{p'}}\left(\frac{\mu^{Q-\lambda p'}}{\lambda p'-Q}\right)^{\frac{1}{p'}}\\ =\left(\frac{|\mathfrak{S}|}{\lambda p'-Q}\right)^{\frac{1}{p'}}\mu^{-\frac{Q}{q}}, \end{multline} since from the assumptions, we get $\frac{Q-\lambda p'}{p'}=\frac{Q}{p'}-\lambda=Q(1-\frac{1}{p}-\frac{\lambda}{Q})=-\frac{Q}{q}$.
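The closed forms above for $\|K_{1}\|_{L^{1}(\mathbb{G})}$ and $\|K_{2}\|_{L^{p'}(\mathbb{G})}$ can be sanity-checked in the Euclidean model case $\mathbb{G}=\mathbb{R}^{2}$ (so $Q=2$ and $|\mathfrak{S}|=2\pi$), where the polar formula \eqref{EQ:polar} is the classical one. A minimal sketch using composite Simpson quadrature; the grid sizes, sample exponents and the tail cutoff are arbitrary numerical choices:

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

Q, S = 2, 2 * math.pi  # homogeneous dimension and |S| for G = R^2
mu = 1.7

# ||K_1||_{L^1} = |S| mu^(Q-lam)/(Q-lam); here lam = 0.5 < Q.
lam = 0.5
k1_numeric = S * simpson(lambda r: r ** (Q - 1 - lam), 0.0, mu)
k1_closed = S / (Q - lam) * mu ** (Q - lam)
assert abs(k1_numeric - k1_closed) < 1e-4

# ||K_2||_{L^{p'}}^{p'} = |S| mu^(Q - lam p')/(lam p' - Q), which needs lam p' > Q;
# here p = p' = 2 and lam = 1.5.  Compare on the truncated range [mu, R]:
lam, pp, R = 1.5, 2, 100.0
k2_numeric = S * simpson(lambda r: r ** (Q - 1 - lam * pp), mu, R)
k2_truncated = S * (mu ** (Q - lam * pp) - R ** (Q - lam * pp)) / (lam * pp - Q)
assert abs(k2_numeric - k2_truncated) < 1e-6
# Letting R -> infinity recovers the closed form |S| mu^(Q - lam p')/(lam p' - Q).
```

The two conditions $\lambda<Q$ and $\lambda p'>Q$ are exactly what make the near-origin and far-field radial integrals converge, which is the reason for splitting $K=K_1+K_2$.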
Moreover, if $\left(\frac{|\mathfrak{S}|}{\lambda p'-Q}\right)^{\frac{1}{p'}}\mu^{-\frac{Q}{q}}=\zeta$, then $\mu=\left(\frac{|\mathfrak{S}|}{\lambda p'-Q}\right)^{\frac{q}{Qp'}} \zeta^{-\frac{q}{Q}}$, so we have $\|K_{2}*u\|_{L^{\infty}(\mathbb{G})}\leq\zeta$. Thus, we have $m\{x:|K_{2}*u|>\zeta\}=0.$ Combining these facts with \eqref{ocenkamery}, $\|u\|_{L^{p}(\mathbb{G})}=1$ and the assumptions we establish \begin{multline} m\{x:|K*u|>2\zeta\} \leq \left(\frac{|\mathfrak {S}|}{Q-\lambda}\right)^{p}\frac{\mu^{(Q-\lambda)p}}{\zeta^{p}}\\ =\left(\frac{|\mathfrak {S}|}{Q-\lambda}\right)^{p}\left(\frac{|\mathfrak{S}|}{\lambda p'-Q}\right)^{\frac{q(Q-\lambda)p}{Qp'}}\zeta^{\frac{-(Q-\lambda)pq}{Q}-p} \leq C\zeta^{\frac{-(Q-\lambda)pq}{Q}-p}=C\zeta^{(\frac{\lambda}{Q}-1)pq-p}\\ =C\zeta^{(\frac{1}{q}-\frac{1}{p})pq-p}=C\zeta^{p-q-p}=C\frac{\|u\|^{q}_{L^{p}(\mathbb{G})}}{\zeta^{q}}. \end{multline} For completeness, let us recall two well-known ingredients. \begin{defn}[\cite{steinbook}]\label{defin} Let $ 1\leq p\leq\infty$, $1\leq q<\infty$ and let $V:L^{p}(\mathbb{G})\rightarrow L^{q}(\mathbb{G})$ be an operator. Then $V$ is called an operator of $\textit{weak type}$ $(p,q)$ if \begin{equation} m\{x:|Vu|>\zeta\}\leq C\left(\frac{\|u\|_{L^{p}(\mathbb{G})}}{\zeta}\right)^{q},\,\,\,\,\zeta>0, \end{equation} where $C$ is a positive constant independent of $u$. \end{defn} Let us also recall the classical Marcinkiewicz interpolation theorem: \begin{thm} Let $V$ be a sublinear operator of weak type $(p_{k}, q_{k})$ with $1 \leq p_{k} \leq q_k< \infty$, $k = 0, 1$ and $q_0 < q_1$.
Then $V$ is bounded from $L^{p}(\mathbb{G})$ to $L^{q}(\mathbb{G})$ with \begin{equation} \frac{1}{p}=\frac{1-\gamma}{p_{0}}+\frac{\gamma}{p_{1}},\,\,\,\frac{1}{q}=\frac{1-\gamma}{q_{0}}+\frac{\gamma}{q_{1}}, \end{equation} for any $0<\gamma<1$, namely, \begin{equation} \|Vu\|_{L^{q}(\mathbb{G})}\leq C \|u\|_{L^{p}(\mathbb{G})}, \end{equation} for any $u\in L^{p}(\mathbb{G})$, where $C$ is a positive constant. \end{thm} From the assumptions, $\frac{1}{q}=\frac{1}{p}+\frac{\lambda}{Q}-1<\frac{1}{p}$, so $q>p$. According to Definition \ref{defin}, the estimate \eqref{needtoprove} shows that $I_{\lambda,|\cdot|}$ is of weak type $(p,q)$ for every pair $(p,q)$ satisfying the assumptions, so applying the Marcinkiewicz interpolation theorem between two such pairs proves \eqref{rieszsobolev}. The proof of Theorem \ref{Trsob} is complete. \end{proof} The following statements will be useful to prove the homogeneous group version of the Stein-Weiss inequality (\cite[Theorem B*]{StWe58}). The next proposition is well-known, see e.g. {\cite[Theorem 3.1.39 and Proposition 3.1.35]{FR}} and historical references therein. \begin{prop}\label{prop_quasi_norm} Let $\mathbb{G}$ be a homogeneous Lie group. Then there exists a homogeneous quasi-norm on $\mathbb{G}$ which is a norm, that is, a homogeneous quasi-norm $|\cdot|$ which satisfies the triangle inequality \begin{equation} |x y|\leq |x| + |y|, \,\,\,\forall x, y \in \mathbb{G}. \end{equation} Furthermore, all homogeneous quasi-norms on $\mathbb{G}$ are equivalent. \end{prop} The next theorem is the integral version of Hardy inequalities on general homogeneous groups that will be instrumental in our proof. \begin{thm}[\cite{RY}]\label{integral_hardy} Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$ and let $1 < p \leq q < \infty$. Let $W(x)$ and $U(x)$ be positive functions on $\mathbb{G}$.
Then we have the following properties: (1) The inequality \begin{equation}\label{5.2} \left(\int_{\mathbb{G}}\left(\int_{B(0,|x|)}f(z)dz\right)^{q}W(x)dx\right)^{\frac{1}{q}}\leq C_{1} \left(\int_{\mathbb{G}}f^{p}(x)U(x)dx\right)^{\frac{1}{p}} \end{equation} holds for all $f\geq0$ a.e. on $\mathbb{G}$ if and only if \begin{equation}\label{5.2.1} A_{1}:=\sup_{R>0}\left(\int_{\mathbb{G}\setminus B(0,R)}W(x)dx\right)^{\frac{1}{q}}\left(\int_{B(0,R)}U^{1-p'}(x)dx\right)^{\frac{1}{p'}}<\infty. \end{equation} (2) The inequality \begin{equation}\label{5.4} \left(\int_{\mathbb{G}}\left(\int_{\mathbb{G}\setminus B(0,|x|)}f(z)dz\right)^{q}W(x)dx\right)^{\frac{1}{q}}\leq C_{2} \left(\int_{\mathbb{G}}f^{p}(x)U(x)dx\right)^{\frac{1}{p}}, \end{equation} holds for all $f \geq 0$ if and only if \begin{equation}\label{5.4.1} A_{2}:=\sup_{R>0}\left(\int_{B(0,R)}W(x)dx\right)^{\frac{1}{q}}\left(\int_{\mathbb{G}\setminus B(0,R)}U^{1-p'}(x)dx\right)^{\frac{1}{p'}}<\infty. \end{equation} (3) If $\{C_i\}^{2}_{i=1}$ are the smallest constants for which \eqref{5.2} and \eqref{5.4} hold, then \begin{equation} A_{i} \leq C_{i} \leq (p')^{\frac{1}{p'}}p^{\frac{1}{q}} A_{i}, \,\,\,i = 1, 2. \end{equation} \end{thm} Now we formulate the Stein-Weiss inequality on $\mathbb{G}$. \begin{thm}\label{stein-weiss3} Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$ and let $|\cdot|$ be an arbitrary homogeneous quasi-norm on $\mathbb{G}$. Let $0<\lambda<Q$, $1<p<\infty$, $\alpha<\frac{Q}{p'}$, $\beta<\frac{Q}{q}$, $\alpha+\beta\geq0$, $\frac{1}{q}=\frac{1}{p}+\frac{\alpha+\beta+\lambda}{Q}-1$, where $\frac{1}{p}+\frac{1}{p'}=1$ and $\frac{1}{q}+\frac{1}{q'}=1$. Then for $1<p\leq q<\infty$, we have \begin{equation}\label{stein-weiss} \||x|^{-\beta}I_{\lambda,|\cdot|}u\|_{L^{q}(\mathbb{G})}\leq C \||x|^{\alpha}u\|_{L^{p}(\mathbb{G})}, \end{equation} where $C$ is a positive constant independent of $u$.
\end{thm} In inequality \eqref{stein-weiss} with $\alpha=0$ we get the weighted Hardy-Littlewood-Sobolev inequality established in \cite[Theorem 4.1]{RY}. Thus, by setting $\alpha=\beta=0$ we get the Hardy-Littlewood-Sobolev inequality on the homogeneous Lie groups. In the Abelian (Euclidean) case ${\mathbb G}=(\mathbb R^{N},+)$, we have $Q=N$ and $|\cdot|$ can be any homogeneous quasi-norm on $\mathbb R^{N}$, so with the usual Euclidean distance, i.e. $|\cdot|=\|\cdot\|_{E}$, Theorem \ref{stein-weiss3} gives the classical result of Stein and Weiss (Theorem \ref{Classiacal_Stein-Weiss_inequality}). \begin{proof}[Proof of Theorem \ref{stein-weiss3}] We decompose \begin{equation} \||x|^{-\beta}I_{\lambda,|\cdot|}u\|^{q}_{L^{q}(\mathbb{G})}=\int_{\mathbb{G}}\left(\int_{\mathbb{G}}\frac{u(y)}{|x|^{\beta}|y^{-1} x|^{\lambda}}dy\right)^{q}dx=I_{1}+I_{2}+I_{3}, \end{equation} where \begin{equation} I_{1}=\int_{\mathbb{G}}\left(\int_{B\left(0,\frac{|x|}{2}\right)}\frac{u(y)}{|x|^{\beta}|y^{-1} x|^{\lambda}}dy\right)^{q}dx, \end{equation} \begin{equation} I_{2}=\int_{\mathbb{G}}\left(\int_{B(0,2|x|)\setminus B\left(0,\frac{|x|}{2}\right)}\frac{u(y)}{|x|^{\beta}|y^{-1} x|^{\lambda}}dy\right)^{q}dx, \end{equation} and \begin{equation} I_{3}=\int_{\mathbb{G}}\left(\int_{\mathbb{G}\setminus B(0,2|x|)}\frac{u(y)}{|x|^{\beta}|y^{-1} x|^{\lambda}}dy\right)^{q}dx. \end{equation} From now on, in view of Proposition \ref{prop_quasi_norm} we can assume that our quasi-norm is actually a norm; moreover, replacing $u$ by $|u|$ if necessary, we may assume $u\geq0$. \textbf{Step 1.} Let us consider $I_{1}$.
By using Proposition \ref{prop_quasi_norm} and the properties of the quasi-norm with $|y|\leq\frac{|x|}{2}$, we get $$|x|=|x^{-1}|=|x^{-1}y y^{-1}|$$ $$\leq |x^{-1} y|+|y^{-1}|=|y^{-1} x|+|y|$$ $$\leq |y^{-1} x|+\frac{|x|}{2}.$$ Then for any $\lambda>0$, we have $$2^{\lambda}|x|^{-\lambda}\geq |y^{-1} x|^{-\lambda}.$$ Therefore, we get \begin{multline} I_{1}=\int_{\mathbb{G}}\left(\int_{B\left(0,\frac{|x|}{2}\right)}\frac{u(y)}{|x|^{\beta}|y^{-1} x|^{\lambda}}dy\right)^{q}dx\leq 2^{\lambda q}\int_{\mathbb{G}}\left(\int_{B\left(0,\frac{|x|}{2}\right)}\frac{u(y)}{|x|^{\beta+\lambda}}dy\right)^{q}dx\\ =2^{\lambda q}\int_{\mathbb{G}}\left(\int_{B\left(0,\frac{|x|}{2}\right)}u(y)dy\right)^{q}|x|^{-(\beta+\lambda)q}dx. \end{multline} If condition \eqref{5.2.1} in Theorem \ref{integral_hardy} with $W(x)=|x|^{-(\beta+\lambda)q}$ and $U(y)=|y|^{\alpha p}$ in \eqref{5.2} is satisfied, then we have \begin{equation} I_{1}\leq2^{\lambda q}\int_{\mathbb{G}}\left(\int_{B(0,\frac{|x|}{2})}u(y)dy\right)^{q}|x|^{-(\beta+\lambda)q}dx \leq C_{1}\||x|^{\alpha}u\|^{q}_{L^{p}(\mathbb{G})}. \end{equation} Let us verify condition \eqref{5.2.1}. From the assumption we have $\alpha<\frac{Q}{p'}$, so $$\frac{1}{q}=\frac{1}{p}+\frac{\alpha+\beta+\lambda}{Q}-1<\frac{1}{p}+\frac{\frac{Q}{p'}+\beta+\lambda}{Q}-1=\frac{1}{p}+\frac{1}{p'}+\frac{\beta+\lambda}{Q}-1=\frac{\beta+\lambda}{Q},$$ that is, $Q-(\beta+\lambda)q<0$, and by using the polar decomposition \eqref{EQ:polar}: \begin{multline} \left(\int_{\mathbb{G}\setminus B(0,R)}W(x)dx\right)^{\frac{1}{q}}=\left(\int_{\mathbb{G}\setminus B(0,R)}|x|^{-(\beta+\lambda)q}dx\right)^{\frac{1}{q}}\\ =\left(\int_{R}^{\infty}\int_{\mathfrak{S}}r^{Q-1}r^{-(\beta+\lambda)q}drd\sigma(y)\right)^{\frac{1}{q}} =\left(|\mathfrak{S}|\int_{R}^{\infty}r^{Q-1-(\beta+\lambda)q}dr\right)^{\frac{1}{q}}\leq C R^{\frac{Q-(\beta+\lambda)q}{q}}.
\end{multline} Since $\alpha<\frac{Q}{p'}$, i.e. $Q>\alpha p'$, we have $$\alpha p(1-p')+Q>\alpha p(1-p')+\alpha p'=\alpha p+\alpha p'(1-p)=\alpha p -\alpha p=0,$$ where in the last step we used $p+p'=pp'$, so that $p'(1-p)=-p$. So, $\alpha p(1-p')+Q>0$. Then, let us consider \begin{multline} \left(\int_{ B(0,R)}U^{1-p'}(x)dx\right)^{\frac{1}{p'}}=\left(\int_{ B(0,R)}|x|^{(1-p')\alpha p}dx\right)^{\frac{1}{p'}}\\=\left(\int^{ R}_{0}\int_{\mathfrak{S}}r^{(1-p')\alpha p}r^{Q-1}drd\sigma(y)\right)^{\frac{1}{p'}} \leq C\left(|\mathfrak{S}|\int^{ R}_{0}r^{(1-p')\alpha p+Q-1}dr\right)^{\frac{1}{p'}}\\ \leq C R^{\frac{(1-p')\alpha p+Q}{p'}}=CR^{\frac{Q-\alpha p'}{p'}}. \end{multline} Moreover, the assumptions imply $$A_{1}=\sup_{R>0}\left(\int_{\mathbb{G}\setminus B(0,R)}W(x)dx\right)^{\frac{1}{q}}\left(\int_{ B(0,R)}U^{1-p'}(x)dx\right)^{\frac{1}{p'}}\leq C\sup_{R>0} R^{\frac{Q-(\beta+\lambda)q}{q}+\frac{Q-\alpha p'}{p'}}$$ $$=C\sup_{R>0} R^{Q(\frac{1}{q}-\frac{1}{p}-\frac{\alpha+\beta+\lambda}{Q}+1)}=C<\infty,$$ where $C=C(\alpha,\beta,p,\lambda)$ is a positive constant. Then by using \eqref{5.2}, we obtain \begin{equation} I_{1}\leq C\int_{\mathbb{G}}\left(\int_{B\left(0,\frac{|x|}{2}\right)}u(y)dy\right)^{q}|x|^{-(\beta+\lambda)q}dx \leq C_{1}\||x|^{\alpha}u\|^{q}_{L^{p}(\mathbb{G})}. \end{equation} \textbf{Step 2.} We now treat $I_{3}$ in the same way as $I_{1}$. 
From $2|x|\leq |y|$, we calculate $$|y|=|y^{-1}|=|y^{-1} x x^{-1}|\leq |y^{-1} x|+|x|$$ $$\leq |y^{-1} x|+\frac{|y|}{2},$$ that is, $$\frac{|y|}{2}\leq |y^{-1} x|.$$ If condition \eqref{5.4.1} with $W(x)=|x|^{-\beta q}$ and $U(y)=|y|^{(\alpha+\lambda)p}$ is satisfied, then we have \begin{multline} I_{3}=\int_{\mathbb{G}}\left(\int_{\mathbb{G}\setminus B(0,2|x|)}\frac{u(y)}{|x|^{\beta}|y^{-1}x|^{\lambda}}dy\right)^{q}dx\leq C\int_{\mathbb{G}}\left(\int_{\mathbb{G}\setminus B(0,2|x|)}\frac{u(y)}{|x|^{\beta}|y|^{\lambda}}dy\right)^{q}dx\\ =C\int_{\mathbb{G}}\left(\int_{\mathbb{G}\setminus B(0,2|x|)}u(y)|y|^{-\lambda}dy\right)^{q}|x|^{-\beta q}dx\leq C\||x|^{\alpha}u\|^{q}_{L^{p}(\mathbb{G})}. \end{multline} Now let us check condition \eqref{5.4.1}. We have \begin{multline} \left(\int_{ B(0,R)}W(x)dx\right)^{\frac{1}{q}}=\left(\int_{ B(0,R)}|x|^{-\beta q}dx\right)^{\frac{1}{q}}\\=\left(\int_{0}^{R}\int_{\mathfrak{S}}r^{-\beta q}r^{Q-1}drd\sigma(y)\right)^{\frac{1}{q}} \leq C R^{\frac{Q-\beta q}{q}}, \end{multline} where $Q-\beta q>0$ since $\beta<\frac{Q}{q}$, and \begin{multline} \left(\int_{\mathbb{G}\setminus B(0,R)}U^{1-p'}(x)dx\right)^{\frac{1}{p'}}=\left(\int_{\mathbb{G}\setminus B(0,R)}|x|^{(\alpha+\lambda)(1-p')p}dx\right)^{\frac{1}{p'}}\\=\left(\int_{R}^{\infty}\int_{\mathfrak{S}}r^{Q-1}r^{(\alpha+\lambda)(1-p')p}drd\sigma(y)\right)^{\frac{1}{p'}} \leq C R^{\frac{Q-p'(\alpha+\lambda)}{p'}}, \end{multline} where, since $\beta<\frac{Q}{q}$ and $\frac{1}{q}=\frac{1}{p}+\frac{\alpha+\beta+\lambda}{Q}-1$, we have $\alpha+\lambda>\frac{Q}{p'}$, that is, $Q-p'(\alpha+\lambda)<0$. Combining these facts we have \begin{multline} A_{2}:=\sup_{R>0}\left(\int_{ B(0,R)}W(x)dx\right)^{\frac{1}{q}}\left(\int_{\mathbb{G}\setminus B(0,R)}U^{1-p'}(x)dx\right)^{\frac{1}{p'}}\leq C\sup_{R>0} R^{\frac{Q-p'(\alpha+\lambda)}{p'}+\frac{Q-\beta q}{q}}\\ =C\sup_{R>0} R^{\frac{Q}{p'}-(\alpha+\beta+\lambda)+\frac{Q}{q}}=C\sup_{R>0} R^{Q(\frac{1}{p'}-\frac{\alpha+\beta+\lambda}{Q}+\frac{1}{q})}=C<\infty, \end{multline} where $C=C(\alpha,\beta,p,\lambda)$ is a positive constant. 
Then we establish \begin{equation} I_{3}=\int_{\mathbb{G}}\left(\int_{\mathbb{G}\setminus B(0,2|x|)}\frac{u(y)}{|x|^{\beta}|y^{-1} x|^{\lambda}}dy\right)^{q}dx\leq C\||x|^{\alpha}u\|^{q}_{L^{p}(\mathbb{G})}. \end{equation} \textbf{Step 3.} Let us estimate $I_{2}$ now. \textbf{Case 1:} $p<q$. From $\frac{|x|}{2}<|y|<2|x|$, we obtain $$\frac{|y^{-1} x|}{2}\leq \frac{|x|+|y|}{2}= \frac{|x|}{2}+\frac{|y|}{2}<\frac{3}{2}|y|,$$ that is, $$|y^{-1} x|<3|y|.$$ For all $\alpha+\beta\geq0$, we have $$|y^{-1} x|^{\alpha+\beta}< 3^{\alpha+\beta}|y|^{\alpha+\beta}= 3^{\alpha+\beta}|y|^{\alpha}|y|^{\beta}\leq 3^{\alpha+\beta}2^{|\beta|}|x|^{\beta}|y|^{\alpha}.$$ Therefore, \begin{multline*} I_{2}=\int_{\mathbb{G}}\left(\int_{B(0,2|x|)\setminus B\left(0,\frac{|x|}{2}\right)}\frac{u(y)}{|x|^{\beta}|y^{-1} x|^{\lambda}}dy\right)^{q}dx\\ \leq C\int_{\mathbb{G}}\left(\int_{B(0,2|x|)\setminus B\left(0,\frac{|x|}{2}\right)}\frac{|y|^{\alpha}u(y)}{|y^{-1} x|^{\alpha+\beta+\lambda}}dy\right)^{q}dx \\ \leq C \int_{\mathbb{G}}\left(\int_{\mathbb{G}}\frac{|y|^{\alpha}u(y)}{|y^{-1}x|^{\alpha+\beta+\lambda}}dy\right)^{q}dx =C\|I_{\lambda+\alpha+\beta,|\cdot|}\tilde{u}\|^{q}_{L^{q}(\mathbb{G})}, \end{multline*} where $\tilde{u}(x)=|x|^{\alpha}u(x)$. By assumption $\frac{1}{q}-\frac{1}{p}=\frac{\lambda+\alpha+\beta}{Q}-1<0$, so that $Q>\lambda+\alpha+\beta$, and by using Theorem \ref{Trsob} with $p<q$ we establish \begin{equation} I_{2}\leq C\|I_{\lambda+\alpha+\beta,|\cdot|}\tilde{u}\|^{q}_{L^{q}(\mathbb{G})}\leq C\|\tilde{u}\|_{L^{p}(\mathbb{G})}^{q}=C\||x|^{\alpha}u\|_{L^{p}(\mathbb{G})}^{q}. \end{equation} \textbf{Case 2:} $p=q$. We decompose $I_{2}$ as \begin{equation} I_{2}=\sum_{k\in \mathbb{Z}}\int_{2^{k}\leq |x| \leq 2^{k+1}}\left(\int_{B(0,2|x|)\setminus B\left(0,\frac{|x|}{2}\right)}\frac{u(y)}{|x|^{\beta}|y^{-1} x|^{\lambda}}dy\right)^{p}dx. 
\end{equation} From $|x| \leq 2|y| \leq 4 |x|$ and $2^{k} \leq |x| \leq 2^{k+1}$, we have $2^{k-1} \leq |y| \leq 2^{k+2}$ and $0 \leq |y^{-1} x| \leq 3|x| \leq 3 \cdot 2^{k+1}$. By using Young's inequality with $\frac{1}{p}+\frac{1}{r}=1+\frac{1}{q}$ (in our case $p=q$, hence $r=1$), we calculate \begin{align*} I_{2}&=\sum_{k\in \mathbb{Z}}\int_{2^{k}\leq |x| \leq 2^{k+1}}\left(\int_{B(0,2|x|)\setminus B\left(0,\frac{|x|}{2}\right)}\frac{u(y)}{|x|^{\beta}|y^{-1} x|^{\lambda}}dy\right)^{p}dx\\& =\sum_{k\in \mathbb{Z}}\int_{2^{k}\leq |x| \leq 2^{k+1}}\left(\int_{B(0,2|x|)\setminus B\left(0,\frac{|x|}{2}\right)}\frac{u(y)}{|y^{-1} x|^{\lambda}}dy\right)^{p}\frac{dx}{|x|^{\beta p}}\\& \leq \sum_{k\in \mathbb{Z}} 2^{-\beta p k}\|u\cdot\chi_{\{2^{k-1}\leq |y|\leq 2^{k+2}\}}*|x|^{-\lambda}\|^{p}_{L^{p}(\mathbb{G})}\\& \leq \sum_{k\in \mathbb{Z}} 2^{-\beta p k} \||x|^{-\lambda}\cdot\chi_{\{|x|\leq 3\cdot2^{k+1}\}}\|^{p}_{L^{1}(\mathbb{G})}\|u\cdot\chi_{\{2^{k-1}\leq |y|\leq 2^{k+2}\}}\|^{p}_{L^{p}(\mathbb{G})}\\& \leq C \sum_{k\in \mathbb{Z}} 2^{(Q-\lambda-\beta)kp}\|u\cdot\chi_{\{2^{k-1}\leq |y|\leq 2^{k+2}\}}\|^{p}_{L^{p}(\mathbb{G})} =C \sum_{k\in \mathbb{Z}} 2^{\alpha kp}\|u\cdot\chi_{\{2^{k-1}\leq |y|\leq 2^{k+2}\}}\|^{p}_{L^{p}(\mathbb{G})}\\& =C \sum_{k\in \mathbb{Z}} \|2^{\alpha (k-1)}u\cdot\chi_{\{2^{k-1} \leq |y|\leq 2^{k+2}\}}\|^{p}_{L^{p}(\mathbb{G})} \leq C\sum_{k\in \mathbb{Z}}\||y|^{\alpha}u\cdot\chi_{\{2^{k-1}\leq |y|\leq 2^{k+2}\}}\|^{p}_{L^{p}(\mathbb{G})}\\& \leq C\||x|^{\alpha}u\|^{p}_{L^{p}(\mathbb{G})}, \end{align*} where we used that $Q-\lambda-\beta=\alpha$ by the assumption $\frac{1}{q}=\frac{1}{p}+\frac{\alpha+\beta+\lambda}{Q}-1$ with $p=q$, and that each point of $\mathbb{G}$ belongs to at most four of the sets $\{2^{k-1}\leq |y|\leq 2^{k+2}\}$. Theorem \ref{stein-weiss3} is proved. \end{proof} \begin{rem} Under the assumptions of Theorem \ref{stein-weiss3} and for $h\in L^{q'}(\mathbb{G})$, we have the following Stein-Weiss inequality \begin{equation} \left|\int_{\mathbb{G}}\int_{\mathbb{G}}\frac{u(y)h(x)}{|x|^{\beta}|y^{-1}x|^{\lambda}|y|^{\alpha}}dydx\right|\leq C\|u\|_{L^{p}(\mathbb{G})}\|h\|_{L^{q'}(\mathbb{G})}, \end{equation} where $C$ is a positive constant independent of $u$ and $h$. 
This gives \eqref{EQ:HLSi2}. \end{rem}
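Let us also record the standard scaling (dilation) argument showing that the condition $\frac{1}{q}=\frac{1}{p}+\frac{\alpha+\beta+\lambda}{Q}-1$ is necessary for an inequality of the form \eqref{stein-weiss}; it uses only the homogeneity $|\delta_{t}x|=t|x|$ of the quasi-norm and the relation $d(\delta_{t}x)=t^{Q}dx$.
\begin{rem}
For $t>0$ let $u_{t}(x):=u(\delta_{t}x)$. The change of variables $z=\delta_{t}y$ gives
\begin{equation}
I_{\lambda,|\cdot|}u_{t}(x)=\int_{\mathbb{G}}\frac{u(\delta_{t}y)}{|y^{-1}x|^{\lambda}}dy=t^{\lambda-Q}\,(I_{\lambda,|\cdot|}u)(\delta_{t}x),
\end{equation}
and hence, substituting $x\mapsto\delta_{t}x$ in the norms,
\begin{equation}
\||x|^{-\beta}I_{\lambda,|\cdot|}u_{t}\|_{L^{q}(\mathbb{G})}=t^{\lambda+\beta-Q-\frac{Q}{q}}\||x|^{-\beta}I_{\lambda,|\cdot|}u\|_{L^{q}(\mathbb{G})},\quad
\||x|^{\alpha}u_{t}\|_{L^{p}(\mathbb{G})}=t^{-\alpha-\frac{Q}{p}}\||x|^{\alpha}u\|_{L^{p}(\mathbb{G})}.
\end{equation}
Both sides of \eqref{stein-weiss} thus scale with the same power of $t$ if and only if $\lambda+\beta-Q-\frac{Q}{q}=-\alpha-\frac{Q}{p}$, that is, $\frac{1}{q}=\frac{1}{p}+\frac{\alpha+\beta+\lambda}{Q}-1$.
\end{rem}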
https://arxiv.org/abs/math/9805076
An Introduction to Total Least Squares
The method of ``Total Least Squares'' is proposed as a more natural way (than ordinary least squares) to approximate the data if both the matrix and the right-hand side are contaminated by ``errors''. In this tutorial note, we give an elementary unified view of ordinary and total least squares problems and their solution. As the geometry underlying the problem setting greatly contributes to the understanding of the solution, we introduce least squares problems and their generalization via interpretations in both column space and (the dual) row space, and we use both approaches to clarify the solution. After a study of the least squares approximation for simple regression, we introduce the notion of approximation in the sense of ``Total Least Squares (TLS)'' for this problem and deduce its solution in a natural way. Next we consider ordinary and total least squares approximations for multiple regression problems, and we study the solution of a general overdetermined system of equations in TLS-sense. In a final section we consider generalizations with multiple right-hand sides and with ``frozen'' columns. We remark that a TLS-approximation need not exist in general; however, the line (or hyperplane) of best approximation in TLS-sense for a regression problem always exists.
\section{Introduction\label{par1}} \setcounter{equation}{0} This (tutorial) paper grew out of the need to motivate the usual formulation of a ``Total Least Squares problem'' and to explain the way it is solved using the ``Singular Value Decomposition''. Although it is an important generalization of (ordinary) least squares and not more difficult to understand, it has hardly been treated in numerical textbooks up to now. In the well-known book of Golub \& Van Loan \cite{gvl} and in \cite{vanhuffel}, the problem is formulated as follows: \par\noindent \beq{problem} \matrix{ \mbox{\sl Given a matrix $A\inI\!\!R^{m\times n} $ with $m>n$ and a vector $\vek b\inI\!\!R^m$,}\cr \mbox{\sl find residuals $E\inI\!\!R^{m\times n} $ and $\vek r\inI\!\!R^m$ that minimize }\cr \mbox{\sl the Frobenius norm $\|(\,E\,|\,\vek r\,)\|_F$ subject to the condition $\vek b+\vek r\in Im(A+E)$. }} \end{equation} \par\noindent It is proposed as a more natural way to approximate the data if both $A$ and $\vek b$ are contaminated by ``errors''. In our opinion, it has not been made sufficiently clear why this is indeed a natural generalization of the standard least squares problem and why it makes sense to study it. On the other hand, the classroom note of Y. Nievergelt \cite{niever} gives a very nice introduction, but it tells only half of the story in that it considers (multiple) regression only. \par In this note, we shall give a unified view of ordinary and total least squares problems and their solution. As the geometry underlying the problem setting greatly contributes to the understanding of the solution, we shall introduce least squares problems and their generalization via interpretations in both column space and (the dual) row space and we shall use both approaches to clarify the solution. 
After a study of the least squares approximation for simple regression in section \ref{par2}, we introduce the notion of approximation in the sense of ``Total Least Squares (TLS)'' for this problem in section \ref{par3}. In the next section we consider ordinary and total least squares approximations for multiple regression problems and in section \ref{par5} we study the solution of a general overdetermined system of equations in TLS-sense. In a final section we consider generalizations with multiple right-hand sides and with ``frozen'' columns. We remark that a TLS-approximation need not exist in general; however, the line (or hyperplane) of best approximation in TLS-sense for a regression problem always exists. \par As numerical algorithms such as the QR-factorization and the Singular Value Decomposition (SVD) are relatively well-known and nicely implemented in a package like MATLAB, we shall not consider numerical algorithms for computing the solutions efficiently. \section{Primal vs.~dual approach} \setcounter{equation}{0} To make clear how both column- and row-space arguments can be used to derive the solution of a least squares problem, we consider least squares in one dimension: \par\noindent \centerline{\sl Given $m$ points $\{x_i~|~ i=1,\,\cdots\,,\,m\}$, find $z\inI\!\!R$ that minimizes the quadratic functional} \beq{intro1} f(z):=\sum_{i=1}^m\,(x_i-z)^2 \,. \end{equation} The function $z\mapsto f(z)$ is a parabola. When we shift its center to the average $\overline x:={1\over m}\sum_{i=1}^m\,x_i$\,, \beq{intro1a} f(z)=\sum_{i=1}^m\,(x_i-z)^2 =\sum_{i=1}^m\,\{\,(x_i-\overline x)^2 +2(x_i-\overline x)(\overline x - z)+(\overline x-z)^2\,\}\,, \end{equation} we see that the sum of double products vanishes. Hence, the average $\overline x$ is the unique minimizer. \par\noindent In the dual approach we consider the data as one point $\vek x\inI\!\!R^m$. 
The functional $f(z)$ then measures the square of the Euclidean distance to the point $z\vek e$, \beq{intro2} f(z)=\|\,\vek x - z\vek e\,\|_2^2\,,~~~\mbox{where}~~~ \vek x :=\left(\matrix{~x_1~\cr x_2 \cr \vdots \cr x_m}\right) ~~\mbox{and}~~ \vek e :=\left(\matrix{~1~\cr 1 \cr \vdots \cr 1}\right)\,. \end{equation} \hrule \begin{figure}[htb] \begin{center} \begin{picture}(280,115) \setlength{\unitlength}{.1mm} \put(0,0){\line(2,1){800}} \thicklines \put(200,100){\vector(1,0){500}} \put(202,101){\vector(2,1){400}} \put(600,300){\vector(1,-2){100}} \put(200,60){$\scriptstyle O$} \put(700,60){$\scriptstyle \vek x$} \put(810,400){$\scriptstyle span\{\vek e\}$} \put(560,320){$\scriptstyle z\,\vek e$} \put(650,200){$\scriptstyle \vek x -z\,\vek e$} \end{picture} \end{center} \caption{Vector $\vek x$, its orthogonal projection on $span\{\vek e\}$ and the residual vector $\vek x -z\,\vek e$ in the dual approach.\label{fig1a}} \end{figure} \hrule \par\noindent From fig. \ref{fig1a}, which shows the plane in $I\!\!R^m$ spanned by $\vek x$ and $\vek e$, we find the orthogonal projection of $\vek x$ on $span\{\vek e\}$ as minimizer, \beq{intro3} \overline x={\vek x^T\,\vek e \over \vek e^T\,\vek e }= {1\over m}\sum_{i=1}^m\,x_i\,. \end{equation} We see that both the primal and the dual approach provide the solution in different ways. In the primal approach we use the fact that linear terms vanish by a shift towards the average. In the dual approach we use an orthogonality argument. \section{Simple regression\label{par2}} \setcounter{equation}{0} In the plane $I\!\!R^2$ we are given $m$ data points (abscissae and ordinates) \beq{sregres0} \{(x_i\,,\,y_i)\inI\!\!R^2~|~ i=1,\,\cdots\,,\,m\} \end{equation} that should satisfy the linear (affine) relation $y(x)=a+bx$; find the parameters $a$ and $b$ that provide a ``best fit'', minimizing the sum of squares of the residuals \beq{sregres1} f(a\,,\,b):=\sum_{i=1}^m\,(y_i-a-b\,x_i)^2 \,. 
\end{equation} We can interpret this as searching the line $\ell:=\{(x,y)\inI\!\!R^2~|~ y=a +b\,x\}$ ``nearest'' to the datapoints, minimizing vertical distances and making the tacit assumption that {\sl model errors in the data-model $y=a+bx$ are confined to the observed $y$-coordinates}, as depicted in fig. \ref{fig1}. \ifnum\themachine=0 \begin{figure}[htb] \hbox{\hskip 37mm \vbox{\vskip 165pt{\special{illustration tlsfig1a.eps scaled 500}}\vskip -25pt}} \caption{\label{fig1} Simple linear regression; {\rm distances are measured along the $y$-axis.}} \end{figure} \else \begin{figure}[htb] \vskip -20pt \hskip43mm\psfig{figure=tlsfig1a.eps,width=210pt,height=160pt} \vskip-15pt \caption{\label{fig1} Simple linear regression; {\rm distances are measured along the $y$-axis.}} \end{figure} \fi \par\noindent Analogously to (\ref{intro1a}) using the centroid $\overline{ \vek z}:=(\overline x\,,\,\overline y)^T=( \,{1\over m}\sum_{i=1}^m\,x_i \,,\,{1\over m}\sum_{i=1}^m\,y_i\,)^T $ we rewrite $f$ and find, as before, that the double products vanish, \beq{sregres2} \begin{array}{r c l} \displaystyle f(a\,,\,b):=\sum_{i=1}^m\,(y_i-a-b\,x_i)^2 &=&\displaystyle \sum_{i=1}^m\,\Big(y_i-\overline y -b\,(x_i-\overline x){\Big )}^2 +m\,(\overline y - a-b\,\overline x)^2 \cr ~&\ge&\displaystyle \sum_{i=1}^m\,\Big(y_i-\overline y - b\,(x_i-\overline x)\Big)^2\,,~~~~\forall ~a,\,b\,, \cr \end{array} \end{equation} with equality if $\overline y = a+b\,\overline x$. This implies that the centroid is located on the line: $\overline{\vek z}\in\ell$. Eliminating $a$ it remains to minimize a function of $b$ alone, which is a parabola. Hence the minimizer of (\ref{sregres1}) is \beq{sregres3} b={\sum_{i=1}^m\,(\overline x-x_i)(\overline y-y_i)\over \sum_{i=1}^m\,(\overline x-x_i)^2} ~~~~\mbox{and}~~~~a=\overline y -b\,\overline x\,. 
\end{equation} \par\noindent {\bf In the dual} approach in $I\!\!R^m$ we interpret $x_i$ and $y_i$ as components of vectors $\vek x$ and $\vek y\in I\!\!R^m$\,, \beq{sregres3a} \vek x :=\left(\matrix{~x_1~\cr x_2 \cr \vdots \cr x_m}\right) ~~~~\vek y :=\left(\matrix{~y_1~\cr y_2 \cr \vdots \cr y_m}\right) ~~~~ \vek e :=\left(\matrix{~1~\cr 1 \cr \vdots \cr 1}\right)~~~~\mbox{and}~~~~ A:=\left(~\vek e~|~\vek x\,\right)\inI\!\!R^{m\times 2}\,. \end{equation} In this setting the functional $f$ measures the square of the distance from $\vek y$ to a linear combination of $\vek e$ and $\vek x$, \beq{sregres4} f(a,\,b)=\|\,\vek y - a\,\vek e-b\,\vek x\,\|_2^2= \|\,\vek y - A\, {a\choose b} \, \|_2^2\,. \end{equation} As in (\ref{intro3}) it is minimized by the orthogonal projection of $\vek y$ on the span of $\vek x$ and $\vek e$ \beq{sregres5} f~\mbox{minimal}~~~~\iff ~~~~\vek y - A\, {a\choose b} ~\perp~ \mbox{Im}(A)\,. \end{equation} If the rank of $A$ is maximal, the solution can be computed, see \cite{gvl}, from the {\sl Normal Equations} or better by an {\sl Orthogonal Factorization} \beq{sregres6} A^TA {a\choose b}=A^T \vek y~~~~{\rm or~better}~~~~ A=QR~~~ {\rm and} ~~~ R {a\choose b}=Q^T\vek y\,. \end{equation} Otherwise we can use the {\sl Singular Value Decomposition} \beq{sregres8} A=U\,\Sigma\,V^T~~~~ {\rm and} ~~~~ {a\choose b}=V\,\Sigma^\dagger\,U^T\vek y\,. \end{equation} \section{Total Least Squares for simple regression\label{par3}} \setcounter{equation}{0} In (\ref{sregres1}) and fig. \ref{fig1} we considered the problem of locating a line nearest to a collection of points, where the distance is measured along the $y$-axis. It looks ``more natural'' to use the (shorter) true Euclidean distance instead, as drawn in fig. \ref{fig2}, which yields the line of {\it Total Least Squares}. 
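For readers who want to experiment, the closed-form solution (\ref{sregres3}) of the simple regression problem translates directly into code. The following is a minimal pure-Python sketch on hypothetical, noise-free data (chosen on the exact line $y=1+2x$, so the exact coefficients are recovered); it is an illustration, not part of the mathematical development.

```python
# Simple linear regression via the closed form (sregres3):
#   b = sum((xbar - x_i)(ybar - y_i)) / sum((xbar - x_i)^2),  a = ybar - b*xbar
# Hypothetical noise-free data on the line y = 1 + 2x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0 + 2.0 * x for x in xs]

m = len(xs)
xbar = sum(xs) / m
ybar = sum(ys) / m

b = sum((xbar - x) * (ybar - y) for x, y in zip(xs, ys)) \
    / sum((xbar - x) ** 2 for x in xs)
a = ybar - b * xbar            # the centroid lies on the fitted line

print(a, b)                    # 1.0 2.0
```

With noisy ordinates the same two formulas apply unchanged; only the computed $a$ and $b$ move away from the exact values.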
\ifnum\themachine=0 \begin{figure}[htb] \hbox{\hskip 37mm \vbox{\vskip 160pt{\special{illustration tlsfig1c.eps scaled 500}}\vskip -25pt}} \caption{\label{fig2}Line of {\it Total Least Squares}: {\rm Model errors are distributed over the $x$- and $y$-coordinates.}} \end{figure} \else \begin{figure}[htb] \vskip -20pt \hskip43mm\psfig{figure=tlsfig1c.eps,width=210pt,height=160pt} \vskip-15pt \caption{\label{fig2}Line of {\it Total Least Squares}: {\rm Model errors are distributed over the $x$- and $y$-coordinates.}} \end{figure} \fi \par\noindent So we consider the {\sl Total Least Squares} problem of finding the line $\ell$ that minimizes the sum of squares of {\bf true} distances: \beq{tlstwo1} f(\ell)~:=~\sum_{i=1}^m\,dist(\,(x_i\,,\,y_i)\,,\, \ell\,)^2 \end{equation} Instead of asking for a line $y=a+b\,x$, we use the more symmetric form \beq{tlstwo1a} \ell=\{(x,y)\inI\!\!R^2~|~ a+r_1x+r_2y=0\}=\vek w+\vek r^\perp,~~~{\rm with}~~~ \|\vek r\|^2=r_1^2+r_2^2=1, \end{equation} where $\vek w$ is an arbitrary point on the line $\ell$, i.e. $a+r_1w_1+r_2w_2=0$. With this parametrization of $\ell$ we accept the possibility that $r_2$ may become zero, and hence that the line cannot be recast in the form $y=\alpha +\beta x$. In the description $\ell=\vek w+\vek r^\perp$, where $\vek r$ is of unit length, the distance from a point $\vek z$ to $\ell$ is given by, see fig. \ref{fig3}, \beq{tlstwo2} ~~~~~dist(\vek z,\ell)={|\vek r^T(\vek z -\vek w)|} ~~~{\rm where}~~~ \ell=\vek w +\vek r^\perp=\{\vek z \in I\!\!R^2 ~|~ \vek r^T(\vek z - \vek w)=0\,\} ~~ {\rm and} ~~\|\vek r\|=1\,. 
\end{equation} \begin{figure}[htb] \hrule \vspace*{15pt} \begin{center}\setlength{\unitlength}{1pt} \begin{picture}(280,140)(-130,0) \setlength{\unitlength}{.1mm} \put(500,0){\line(-4,3){650}} \thicklines \put(400,75){\vector(-3,4){313}} \put(0,0){\vector(3,4){150}} \put(0,375){\vector(3,4){87}} \put(400,75){\vector(-4,3){400}} \put(0,-35){$\scriptstyle\bf O$} \put(150,170){$\scriptstyle\vek r$} \put(410,75){$\scriptstyle\vek w$} \put(-420,420){$\scriptstyle\ell=\{\vek z \in I\!\!R^2~\vert~(\vek z - \vek w,\vek r)=0\}$} \put(87,500){$\scriptstyle\vek x$} \put(250,290){$\scriptstyle\vek x-\vek w$} \put(40,400){$\scriptstyle dist(\vek x,\ell)= {|\vek r^T(\vek x -\vek w)|}$} \end{picture} \end{center} \caption{\label{fig3}The line $\ell$ in the plane is given as the line through the vector $\vek w$ orthogonal to the vector $\vek r$ of unit length. For a given vector $\vek x$ the difference vector $\vek x -\vek w$ is drawn together with its projection along the line $\ell$ and its orthogonal complement.} \vspace*{10pt} \hrule \end{figure} \par\noindent Hence the TLS problem is to find $\vek r$ and $\vek w$ that minimize the functional \beq{tlstwo3} I(\vek r,\vek w):=\sum_{i=1}^m~ \left(\vek r^T(\vek z_i -\vek w)\right)^2= \sum_{i=1}^m ~\left(r_1\,(x_i-w_1)+r_2\,(y_i-w_2)\right)^2 \end{equation} where $$ \vek z_i={x_i\choose y_i}~~~~\mbox{and}~~~~ \vek r={r_1\choose r_2}\,,~~~~\|\vek r\|^2=r_1^2+r_2^2=1\,. 
$$ Making the shift to the centroid, as in (\ref{sregres2}) and (\ref{intro1a}), we find again, that the sum of double products vanishes, \beq{tlstwo4} \begin{array}{l c l} I(\vek r,\vek w)&=&\displaystyle \sum_{i=1}^m ~\left(\vek r^T (\vek z_i-\vek w)\vruimte{1.1em}{.6em}\right)^2 \vruimte{0pt}{12pt} \\ &=&\displaystyle \sum_{i=1}^m ~\left(\vek r^T (\vek z_i-\overline{\vek z})\right)^2\, +\sum_{i=1}^m ~2\,\vek r^T (\vek z_i-\overline{\vek z})\, \vek r^T (\overline{\vek z}-\vek w) +\,m(\vek r^T (\overline{\vek z}-\vek w))^2\vruimte{1.9em}{1.6em} \\ &=&\displaystyle I(\vek r,\overline{\vek z}) +\,m(\vek r^T (\overline{\vek z}-\vek w))^2~\ge~ I(\vek r,\overline{\vek z}) \,.\\ \end{array} \end{equation} Clearly, the centroid $\vek{\overline z}:=(\overline x,\overline y)^T$ minimizes the functional $\vek w\mapsto I(\vek r,\,\vek w\,)$ for every $\vek r\inI\!\!R^2$. This implies, that the minimizing line $\ell=\vek{\overline z}+\vek r^\perp$ passes through the centroid (as did the line of simple regression) and that we are left with the reduced minimization problem: \\{\sl Find the vector $\vek r$ with $\|\vek r\|_2=1$ minimizing} \beq{tlstwo5} I(\vek r,\,\vek{\overline z})= \sum_{i=1}^m ~\left(r_1\,(x_i-\overline x)+r_2\,(y_i-\overline y) \vruimte{1.1em}{.2em}\right)^2= \|B\vek r\|_2^2=\vek r^T\,B^T\,B\,\vek r\,, \end{equation} where $B\in I\!\!R^{m\times 2}$ is the matrix \beq{tlstwo5a} B:=\left(\,\vek x-\overline x\,\vek e~|~\vek y-\overline y\,\vek e\,\right)= \left(\matrix{x_1-\overline x & y_{1}-\overline y\cr x_2-\overline x & y_{2}-\overline y\cr \vdots & \vdots\cr x_m-\overline x & y_m-\overline y}\right) \,. \end{equation} The problem of minimizing $\|\,B\,\vek r~\|_2^2$ subject to $\|\,\vek r\,\|_2=1$ is solved by the Singular Value Decomposition of $B$, $$ B=U\,\Sigma\,V^T~~~~\mbox{with}~~~~\Sigma= \left(\matrix{~\sigma_1~&~0~\cr~0~&~\sigma_2~}\right)~~~ \mbox{and}~~~\sigma_1\ge\sigma_2\,. 
$$ The solution vector $\vek r$ of (\ref{tlstwo5}) is the right singular vector of $B$ corresponding to the smaller singular value of $B$\,. So we conclude: \begin{list}{\alph{enumi}.}{\leftmargin15pt \usecounter{enumi}} \item The solution always exists and is given by the line through the centroid orthogonal to the subdominant singular vector of $B$. \item As $r_2$ can be zero, the solution need not be expressible in the form $y=\alpha +\beta x$. \item The solution is unique iff $\sigma_1\ne\sigma_2\,.$ \item The shift (\ref{tlstwo4}) to the centroid $\overline{\vek z}\in\ell$ is the key in finding the solution, as shown in \cite{niever}. \end{list} \ifnum\themachine=0 \begin{figure}[htb] \hbox{\hskip 37mm \vbox{\vskip 160pt{\special{illustration tlsfig1c.eps scaled 500}}\vskip -15pt}} \caption{\label{fig2a}Components $(f_i,g_i)$ are the best approximations of $(x_i,y_i)$ on the line $a+r_1x+r_2y=0$\,.} \vskip-10pt \end{figure} \else \begin{figure}[htb] \vskip -20pt \hskip43mm\psfig{figure=tlsfig1c.eps,width=210pt,height=160pt} \vskip-15pt \caption{\label{fig2a}Components $(f_i,g_i)$ are the best approximations of $(x_i,y_i)$ on the line $a+r_1x+r_2y=0$\,.} \end{figure} \fi \par\noindent {\bf In the dual formulation} we consider the vectors $\vek x$, $\vek y$ and $\vek e$ as in (\ref{sregres3a}) and we describe the line $\ell$ as in (\ref{tlstwo1a}) by $\ell:=\{(\xi,\eta)~|~ a +r_1 \xi+r_2 \eta=0\}$. For $i=1\,\cdots\,m$ we denote by $(f_i,g_i)$ the point on $\ell$ nearest to $(x_i,y_i)$, see fig.~\ref{fig2a}, and by $(\overline f,\overline g):={1\over m}\sum_{i=1}^m (f_i,g_i)$ we denote their average. We define the vectors of first and second components $\vek f$, $\vek g\in I\!\!R^m$, $$ \vek f:=(f_1\,,\,f_2\,,\,\cdots\,,\,f_m)^T~~~ {\rm and} ~~~ \vek g:=(g_1\,,\,g_2\,,\,\cdots\,,\,g_m)^T. $$ These vectors clearly satisfy the relation $a\,\vek e+r_1\,\vek f+r_2\, \vek g=0$. 
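Before rephrasing this constrained problem in matrix form, the primal recipe (shift to the centroid, then take the right singular vector of $B$ belonging to the smaller singular value) can be sketched numerically. The following pure-Python example uses hypothetical data lying exactly on $y=2x+1$; the subdominant singular direction is obtained from the $2\times2$ matrix $B^TB$ by hand, so no linear algebra library is needed.

```python
import math

# TLS line for simple regression: shift the data to the centroid, then take
# an eigenvector of B^T B for its smallest eigenvalue (the subdominant right
# singular vector of B).  Hypothetical data on the exact line y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]

m = len(xs)
xbar, ybar = sum(xs) / m, sum(ys) / m        # centroid (lies on the TLS line)

# Entries of the 2x2 matrix B^T B, where B = (x - xbar*e | y - ybar*e)
sxx = sum((x - xbar) ** 2 for x in xs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
syy = sum((y - ybar) ** 2 for y in ys)

# Smallest eigenvalue of the symmetric matrix [[sxx, sxy], [sxy, syy]]
tr, det = sxx + syy, sxx * syy - sxy * sxy
lam = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0

# A corresponding eigenvector (r1, r2), normalized to unit length
if abs(sxy) > 1e-12:
    r1, r2 = sxy, lam - sxx
else:                                        # B^T B is already diagonal
    r1, r2 = (1.0, 0.0) if sxx <= syy else (0.0, 1.0)
nrm = math.hypot(r1, r2)
r1, r2 = r1 / nrm, r2 / nrm

a = -(r1 * xbar + r2 * ybar)                 # line: a + r1*x + r2*y = 0
slope, intercept = -r1 / r2, -a / r2         # only meaningful when r2 != 0
print(round(slope, 6), round(intercept, 6))
```

For these data the script recovers the exact slope $2$ and intercept $1$; with perturbed data the same code returns the minimizing direction $\vek r=\vek v_2$, and in practice one would call a library SVD routine on $B$ instead of the hand-rolled $2\times2$ eigensolver.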
So we can rephrase the minimization problem (\ref{tlstwo1}) as the quest for vectors $\vek f$ and $\vek g$ that minimize the sum of squares of distances \beq{tlstwo6} \begin{array}{r c l} I(a,\vek r):=\sum_{i=1}^m (x_i-f_i)^2 &+&\sum_{i=1}^m (y_i-g_i)^2~=~ \|\,\vek x-\vek f\,\|^2_2+\|\,\vek y-\vek g\,\|^2_2\cr & &\mbox{subject to}~~~~ a\,\vek e+r_1\,\vek f+r_2 \,\vek g=0\,,~~~r_1^2+r_2^2=1.\vruimte{1.9em}{0em} \end{array} \end{equation} Decomposing the vectors in their components in $span\{\vek e\}$ and in the orthogonal complement $\vek e^\perp$ we obtain \beq{tlstwo6a} I(a,\vek r)=\|\,\vek x-\vek f-(\overline x-\overline f)\vek e\,\|^2_2+ \|\,\vek y-\vek g-(\overline y-\overline g)\vek e\,\|^2_2+ m(\overline x-\overline f)^2+m(\overline y-\overline g)^2\,. \end{equation} The contributions from the parts in $span\{\vek e\}$ are minimized by the choice $\overline f=\overline x$ and $\overline g=\overline y$ and the subsidiary condition implies $a+r_1\overline x +r_2\overline y=0$ for that choice. Choosing $\widetilde \vek f:=\vek f-\overline x\,\vek e$ and $\widetilde\vek g :=\vek g-\overline y\,\vek e$ we are left with the problem to minimize in $\vek e^\perp$ the functional: \beq{tlstwo6b} \|\,\vek x-\overline x\,\vek e - \widetilde\vek f\,\|^2_2+ \|\,\vek y-\overline y\,\vek e -\widetilde\vek g\,\|^2_2~~~~\mbox{subject to}~~~~ r_1\,\widetilde\vek f+r_2\,\widetilde\vek g=\vek 0\,. \end{equation} It is not necessary to impose the condition $\widetilde \vek f\,,\,\widetilde \vek g\in\vek e^\perp$, since it is automatically satisfied by the minimizer, because $\vek x-\overline x\,\vek e$ and $\vek y-\overline y\,\vek e$ satisfy this condition. In matrix notation with $B:=\left(\,\vek x -\overline x\,\vek e~|~\vek y - \overline y\,\vek e\,\right)$ and $E:=\left(\,\widetilde \vek f~|~\widetilde \vek g\,\right)$ this minimization problem takes the form \beq{tlstwo7} \mbox{minimize}~~~~~\|\,B-E\,\|^2_F~~~~~~\mbox{subject to}~~~~rank(E)=1\,. 
\end{equation} From the Singular Value Decomposition of $B$, $$B=\sigma_1\,\vek u_1\,\vek v_1^T+\sigma_2\,\vek u_2\,\vek v_2^T ~~~~\mbox{we find} ~~~~E=\sigma_1\,\vek u_1\, \vek v_1^T\,,~~~~\mbox{provided}~~~~\sigma_1>\sigma_2\,. $$ Hence the total least squares solution is (as before) given by $$ E\,\vek v_2=\vek 0~~~~\mbox{implying}~~~~\vek r =\vek v_2\,. $$ There is a difference in flavour between both approaches. Whereas the primal formulation (\ref{tlstwo5}) directly produces the minimizing vector, the dual approach (\ref{tlstwo7}) takes a roundabout route. The latter provides a minimizing {\sl matrix} $E$; the parameters of the line are found only afterwards as the coefficients in the linear combination of the columns of $E$ that equals zero. \section{Multiple regression\label{par4}} \setcounter{equation}{0} The extension of ordinary and total least squares to multiple regression is almost straightforward. As most ideas in 2D-regression easily carry over, we can be brief about it. We are given the cloud of $m$ datapoints in $I\!\!R^n$ (each point consisting of an ``abscissa'' in $I\!\!R^{n-1}$ and an ordinate in $I\!\!R$), \beq{mregres0a} \{\vek z_i:=(x_1^{(i)},\cdots,x_{n-1}^{(i)},y_i)^T\,\in\,I\!\!R^n ~|~ i=1,\,\cdots\,,\,m\}\,, \end{equation} that should satisfy the linear (affine) model $y(x_1\,\cdots\,x_{n-1})= c_0 + c_1 x_1 + c_2 x_2+\cdots+c_{n-1} x_{n-1}\,.$ In ordinary least squares the parameters are determined by minimizing the functional $J$, \beq{mregres1} J(\vek c):=\sum_{i=1}^m~(y_i-c_0-c_1x_1^{(i)}- \cdots-c_{n-1}x_{n-1}^{(i)})^2\,, ~~~\vek c:=(\,c_{0}\,,\,\cdots\,,\, c_{n-1}\,)^T\,, \end{equation} and we can interpret this as the search for the best fitting hyperplane in $I\!\!R^n$\,, \beq{mregres0} \{ (x_1\,,\,\cdots\,,\,x_{n-1}\,,\,y)^T\inI\!\!R^n\,| \,y=c_0 + c_1 x_1 + c_2 x_2+\cdots+c_{n-1} x_{n-1}\,\}. 
\end{equation} As in (\ref{sregres2}), the double products vanish by a shift of the center to the centroid, implying $$ J(\vek c)\ge \sum_{i=1}^m~\,\left(y_i-\overline y- c_1(x_1^{(i)}-\overline x_{1})- \cdots-c_{n-1}(x_{n-1}^{(i)}-\overline x_{n-1})\,\right)^2 $$ with equality if $\overline y=c_0+c_1\,\overline x_1+\cdots+c_{n-1}\,\overline x_{n-1}$. Hence, the centroid is in the hyperplane. However, more than one unknown parameter is left and the easy argument of (\ref{sregres3}) cannot be applied directly. On the other hand, the dual approach (in ``column space'') (\ref{sregres3a}-\ref{sregres5}) is straightforward and provides the solution easily. Defining vectors $\vek x_k$ and $\vek y\inI\!\!R^m$ and the matrix $A\inI\!\!R^{m\times n}$, $$ \vek x_k :=\left(\matrix{~x_k^{(1)}~\cr x_k^{(2)} \cr \vdots \cr x_k^{(m)}}\right),~~~ {\vek y}:=\left(\matrix{y_{1}\cr y_{2}\cr\vdots\cr y_m\cr}\right),~~~ {\rm and} ~~~ A:=\left(\vek e\,|\,\vek x_1\,|\,\cdots\,|\,\vek x_{n-1}\right) =\left(\matrix{1 & x_1^{(1)}&\cdots&x_{n-1}^{(1)}\cr 1 & x_1^{(2)}&\cdots&x_{n-1}^{(2)}\cr \vdots & \vdots&~&\vdots\cr 1 & x_1^{(m)}&\cdots&x_{n-1}^{(m)}}\right) $$ the functional (\ref{mregres1}) takes the form: \beq{mregres2} J(\vek c)=\|\,\vek y-c_0\vek e-\cdots-c_{n-1}\vek x_{n-1}\,\|^2= \|\vek y-A\vek c\|_2^2\,. \end{equation} As in (\ref{intro3}) and (\ref{sregres5}) it is minimized by the orthogonal projection of $\vek y$ on the span of $\vek x_1\,\cdots\,\vek x_{n-1}$ and $\vek e$, i.e. on $Im(A)$, \beq{mregres3} J~\mbox{minimal}~~~~\iff ~~~~\vek y - A\, \vek c ~\perp~ \mbox{Im}(A)\,. \end{equation} As before, if the rank of $A$ is maximal, the solution can be computed from the {\sl Normal Equations} or better by an {\sl Orthogonal Factorization}, see \cite{gvl}, \beq{mregres4} A^TA \vek c =A^T \vek y~~~~{\rm or~better}~~~~ A=QR~~~ {\rm and} ~~~ R \vek c=Q^T\vek y\,. 
\end{equation} Otherwise we can use the {\sl Singular Value Decomposition} \beq{mregres5} A=U\,\Sigma\,V^T~~~~ {\rm and} ~~~~ \vek c=V\,\Sigma^\dagger\,U^T\vek y\,. \end{equation} \par\noindent {\bf The total least squares approximation} minimizes the sum of squares of {\it true} distances. We do not attribute a special position to the $y$-coordinate and describe the hyperplane in $I\!\!R^n$, as in (\ref{tlstwo1a}), by $\vek w + \vek r^\perp$. The functional to minimize is: \beq{mregres6} I(\vek r,\vek w):=\sum_{i=1}^m~ \left(\vek r^T(\vek z_i -\vek w)\right)^2= \sum_{i=1}^m \left(\vek r^T (\vek z_i -\overline{\vek z})\right)^2 +m(\vek r^T (\overline{\vek z}-\vek w))^2 \end{equation} subject to $\|\vek r\|=1\,.$ Since the double products in the second right-hand side cancel, the centroid (again) is in the hyperplane and it minimizes (\ref{mregres6}) for all $\vek r$. We are left with the reduced minimization problem, to find $\vek r$ with $\|\vek r\|_2=1$ minimizing \beq{mregres7} I(\vek r,\,\vek{\overline z})= \|B\vek r\|_2^2 \,,~~~~\mbox{with}~~~~ B:=\left(\matrix{x_1^{(1)}-\overline x_1 &\cdots& x_{n-1}^{(1)}-\overline x_{n-1} & y_{1}-\overline y\cr x_1^{(2)}-\overline x_1 &\cdots& x_{n-1}^{(2)}-\overline x_{n-1} & y_{2}-\overline y\cr \vdots &\ &\vdots& \vdots\cr x_1^{(m)}-\overline x_1 &\cdots& x_{n-1}^{(m)}-\overline x_{n-1} & y_m-\overline y}\right) \,. \end{equation} The solution vector $\vek r$ is the right singular vector of $B$ corresponding to the smallest singular value of $B$. We conclude: \begin{list}{\alph{enumi}.}{\leftmargin15pt \usecounter{enumi}} \item A solution always exists; it is given by the hyperplane through the centroid and orthogonal to the right singular vector belonging to the smallest singular value of matrix $B$. It is not expressible in the form (\ref{mregres0}) if $r_n=0$. 
\item The solution is unique iff $\sigma_{n-1}>\sigma_n\,.$ \item The shift of (\ref{mregres6}) to the centroid $\overline{\vek z}$ is the key to finding the solution. \end{list} \par\noindent {\bf In the dual approach} we again consider the hyperplane (\ref{mregres0}), but now the $y$-coordinate has no special position in the defining equation, \beq{mregres8a} \{ (x_1\,,\,\cdots\,,\,x_{n-1}\,,\,y)^T\inI\!\!R^n\,|\, c_0 + c_1 x_1 + c_2 x_2+\cdots+c_{n-1} x_{n-1}+c_n y=0\,\}\,; \end{equation} instead of $c_n = -1$ we require $\sum_{i=1}^n c_i^2=1$. We choose (for each $i$) the point $(f_1^{(i)}\,,\,\cdots\,,\,f_{n-1}^{(i)}\,,\,g_i)^T$ on this hyperplane nearest to the datapoint $\vek z_i$, $(i=1\cdots m)$. The first, second, etc.\ coordinates of these points form in $I\!\!R^m$ the vectors $\vek f_k$ ($k=1\,\cdots\, n-1$) and $\vek g$, $$ \vek f_k=(f_k^{(1)}\,,\,f_k^{(2)}\,,\,\cdots\,,\,f_k^{(m)})^T~~~ {\rm and} ~~~ \vek g=(g_1\,,\,g_2\,,\,\cdots\,,\,g_m)^T\,, $$ which clearly satisfy the relation $c_0\vek e+c_1\vek f_1+\cdots+c_{n-1}\vek f_{n-1}+c_n\vek g=0$\,. The minimization of the sum of squares of distances from the datapoints $\vek z_i$ to the hyperplane can now be reformulated as the problem of finding vectors $\vek f_k$ ($k=1\,\cdots\, n-1$) and $\vek g$ in $I\!\!R^m$ that minimize the functional \beq{mregres8} \|\,\vek y-\vek g\,\|^2_2\,+\,\sum_{k=1}^{n-1}\, \|\,\vek x_k-\vek f_k\,\|^2_2 ~~~~\mbox{ subject to}~~ c_0\,\vek e+c_1\,\vek f_1+\cdots+c_{n-1}\vek f_{n-1}+c_n\,\vek g=0\,, \end{equation} where $\sum_{k=1}^{n}\,c_k^2=1\,.$ As in (\ref{tlstwo6a} -- \ref{tlstwo6b}) we may restrict this minimization problem to $\vek e^\perp$ and eliminate the unknown $c_0=-c_n\overline y-\sum_{k=1}^{n-1}\,c_k\overline x_k$ by orthogonalization w.r.t. $\vek e$; essentially this amounts to the same as the shift to the centroid in the primal approach in $I\!\!R^n$.
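Both routes lead to the same computation: shift the data to the centroid, then take the right singular vector of the centered data matrix belonging to the smallest singular value. The following NumPy sketch illustrates this recipe; the function name and the test data are ours, not from the text.

```python
import numpy as np

def tls_hyperplane(Z):
    """Fit a hyperplane to the m rows of Z (points in R^n) in TLS-sense:
    shift to the centroid, then take the right singular vector of the
    centered matrix belonging to the smallest singular value as the
    unit normal r.  The hyperplane is { z : r^T (z - zbar) = 0 }."""
    zbar = Z.mean(axis=0)              # the centroid lies on the hyperplane
    _, _, Vt = np.linalg.svd(Z - zbar) # singular values in decreasing order
    r = Vt[-1]                         # unit normal (smallest singular value)
    return zbar, r
```

For points lying exactly on a line in the plane, the smallest singular value is zero and the recovered normal annihilates every centered data point.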
So we find the restricted problem of finding vectors $\vek f_k$ ($k=1\,\cdots\, n-1$) and $\vek g$ that minimize $$ \|\,\vek y-\overline y\,\vek e -\vek g\,\|^2_2\,+ \,\sum_{k=1}^{n-1}\,\|\,\vek x_k-\overline x_k\,\vek e -\vek f_k\,\|^2_2 ~~~~\mbox{subject to}~~~~ c_1\,\vek f_1+\cdots+c_{n-1}\vek f_{n-1}+c_n\,\vek g=0\,. $$ Without imposing it, the minimizing vectors are orthogonal to $\vek e$ automatically, as in (\ref{tlstwo6b}). Defining the matrices $B$ and $E$, $$ B:=\left(\,\vek x_1 -\overline x_1\,\vek e~|~\cdots~|~ \vek x_{n-1} -\overline x_{n-1}\,\vek e~|~ \vek y -\overline y\,\vek e\,\right)~~~ {\rm and} ~~~E:=\left(\,\vek f_1~|~\cdots~|~\vek f_{n-1}~|~\vek g\,\right) $$ we can reformulate the problem as: \beq{mregres9} \mbox{minimize}~~~~~\|\,B-E\,\|^2_F~~~~~~\mbox{subject to}~~~~rank(E)=n-1\,. \end{equation} In this form it is easily solved by the SVD. If $B=\sum_{i=1}^n\,\sigma_i\,\vek u_i\,\vek v_i^T$, then $E=\sum_{i=1}^{n-1}\,\sigma_i\,\vek u_i\,\vek v_i^T$ is a minimizer of (\ref{mregres9}), which is unique, if $\sigma_{n-1}>\sigma_n\,.$ The coefficients $c_1\,,\,\cdots\,,\,c_n$ determining the hyperplane are the coordinates of the right singular vector $\vek v_n$ as before: $$ E\,\vek v_n=\vek 0\,,~~~~\Longrightarrow~~~~ \left(\matrix{c_1\cr\vdots\cr c_n}\right)=\vek v_n\,. $$ \section{General Least Squares\label{par5}} \setcounter{equation}{0} For a given matrix $A\inI\!\!R^{m\times n}$ with $m>n$ and right-hand side $\vek b\inI\!\!R^m$ we consider the problem to find the minimizer $\vek c\inI\!\!R^n$ of the functional \beq{gls1} J(\vek c)~:=~\|\,A\,\vek c\,-\,\vek b\,\|_2^2~~~~~ \mbox{with}~~~\vek c:=\left(\matrix{~c_1~\cr\vdots\cr c_n}\right)\,. 
\end{equation} where $$A:=\left(\matrix{~a_{1,1}~&~\cdots~&~a_{1,n}~\cr \vdots& &\vdots\cr a_{m,1}~&~\cdots~&~a_{m,n}}\right)\,\inI\!\!R^{m\times n} ~~~~~ {\rm and} ~~~~~ \vek b:=\left(\matrix{~b_1~\cr\vdots\cr b_m}\right)\,\inI\!\!R^m~~~(m\ge n)\,. $$ The difference from (\ref{mregres2}) is that $A$ need not contain a column consisting of all ones. The solution is obtained by a column space argument as in (\ref{mregres3}), namely that $J(\vek c)$ is minimal iff $\vek b - A\, \vek c$ is orthogonal to $\mbox{Im}(A)$, and it may be computed by normal equations, QR-factorization or SVD. \par What is interesting for the TLS generalization is the interpretation of (\ref{gls1}) in row space. We have introduced the TLS approximation in sections \ref{par3} and \ref{par4} as the one that minimizes the sum of squares of the {\em true} distances of $m$ points to a hyperplane, whereas ordinary least squares measures the distances along the $y$-axis. We can interpret (\ref{gls1}) in this sense. The rows of the extended matrix $(A\,|\,-\vek b)$ define a cloud of $m$ points in $I\!\!R^{n+1}\,,$ \beq{gls2} ~~~~~\vek z_k :=(\,a_{k,1}\,,\,\cdots\,,\, a_{k,n}\,,\,-b_k\,)^T\inI\!\!R^{n+1}\,, ~~~\mbox{such that}~~~ \left(\,\vek z_1~|~\cdots~|~\vek z_m\,\right)= \left(\,A~|~ -\vek b\,\right)^T, \end{equation} to which we try to fit a linear function $b(x_1\cdots x_n)=c_1x_1+\cdots+c_nx_n$. In other words, we look for an $n$-dimensional {\em subspace} $\vek{\widehat c}^\perp$ in $I\!\!R^{n+1}$ (and not a hyperplane in $I\!\!R^n$ as in the regression problem) that is nearest to the datapoints (\ref{gls2}), minimizing \beq{gls3} ~~~J(\vek c)=\|\,\left(\,A~|~ -\vek b\,\right)\, \left(\matrix{~\vek c~\cr 1}\right)\,\|^2_2~=~ \sum_{k=1}^m~(\,\vek z_k^T\, \vek{\widehat c}\,)^2~~~{\rm where}~~~ \vek{\widehat c}:=\left(\matrix{\vek c\cr 1}\right)= \left(\matrix{c_1\cr \vdots \cr c_n\cr 1}\right)\inI\!\!R^{n+1}\,.
\end{equation} In this sum of squares the quantity $\vek z_k^T\, \vek{\widehat c}$ measures the distance from $\vek z_k$ to $\vek{\widehat c}^\perp$ along the $n\!+\!1$-st coordinate axis. \par\noindent {\bf The Total Least Squares} approximation for the cloud of points (\ref{gls2}) minimizes the sum of squares of {\bf true} distances to the subspace $\vek{\widehat c}^\perp$. As the true distance from $\vek z_k$ to the subspace is given by $\vek z_k^T\, \vek{\widehat c}/\|\vek{\widehat c}\|_2$\,, see (\ref{tlstwo2}), the TLS-approximation minimizes the functional: \beq{gls4} I(\vek c):= \sum_{k=1}^m~{\left(\,\vek z_k^T\, \vek{\widehat c}\,\right)^2\over \vek{\widehat c}^T\, \vek{\widehat c}}~=~ {\|\left(\,A\,| -\vek b\,\right)\, \vek{\widehat c}\,\|^2 \over \vek{\widehat c}^T\, \vek{\widehat c}} ~~~~{\rm where}~~~~ \vek{\widehat c}:=\left(\matrix{\vek c\cr 1}\right)\,. \end{equation} The functional $\vek r \mapsto \|(\,A\,| -\vek b\,)\,\vek r\|^2$ subject to $\|\vek r\|=1$ is minimal if $\vek r$ is the right singular vector corresponding to the smallest singular value of the matrix $(\,A\,| -\vek b\,)$. Renormalizing the last component to $1$, {\em if possible}, provides the solution to the TLS problem for the overdetermined system of equations $A\vek x=\vek b$. If the $n\!+\!1$-st component of this right singular vector is zero, no solution exists to the TLS-problem. The solution is unique if $\sigma_n>\sigma_{n+1}$. \par\noindent {\bf Interpretation of TLS in Column Space:} To each point $\vek z_k$ ($k=1\cdots m$) in the cloud (\ref{gls2}) \beq{gls5} \vek z_k=\left(\matrix{~a_{k,1}~\cr\vdots~\cr a_{k,n}\cr -b_k}~\right) ~~~\mbox{corresponds its best approximation} ~~~\vek w_k:= \left(\matrix{~f_{k,1}~\cr\vdots\cr f_{k,n}\cr -g_k}~\right)~ \in~\vek{\widehat c}^\perp\,. \end{equation} The TLS-approximation minimizes the sum of squares of the distances between the (given) points $\vek z_k$ and the points $\vek w_k$ in the subspace $\vek{\widehat c}^\perp$.
We can write this sum of squares as the Frobenius norm of a matrix if we consider the components $f_{k,j}$ as the elements of a matrix $F\inI\!\!R^{m\times n}$, and the components $g_k$ as the components of a vector $\vek g\inI\!\!R^m$. Hence, TLS minimizes \beq{gls6} \sum_{k=1}^m\,\| \vek z_k - \vek w_k\|^2=\|A-F\|^2_F+\|\vek b-\vek g\|^2= \|(A\,|-\vek b)-(F\,| -\vek g)\|^2_F\,. \end{equation} Since the rows of the matrix $E:=(F\,| -\vek g)\inI\!\!R^{m\times (n+1)}$ are orthogonal to $\vek{\widehat c}$, the rank of $E$ is $n$ at most. In other words, TLS minimizes \beq{gls7} \|\,(\,A\,|-\vek b\,) - E\,\|_F^2 ~~~~~\mbox{subject to}~~~~~ E \in I\!\!R^{m\times (n+1)}~~~ {\rm and} ~~~rank(E)\le n\,. \end{equation} We may interpret this as the quest for the solution of the solvable linear system $F\vek c=\vek g$ ``nearest'' to the (unsolvable) system $A\vek x=\vek b$, where ``solvable'' means: $\vek g \in {\rm Im}(F)\,$. \par The minimization problem (\ref{gls7}) is solved by the SVD. If $(\,A\,| -\vek b\,)=\sum_{i=1}^{n+1}\,\sigma_i\vek u_i\vek v_i^T\,,$ then $E=\sum_{i=1}^{n}\,\sigma_i\vek u_i\vek v_i^T$, and the required solution of the TLS-problem is the null-vector $\vek v_{n+1}$ of $E$, i.e.\ the right singular vector $\vek v_{n+1}$ of $(\,A\,|-\vek b\,)$ corresponding to the smallest singular value $\sigma_{n+1}$\,, provided its $n\!+\!1$-st component is non-zero. As stated at the end of section \ref{par3}, the formulation (\ref{gls7}) is a roundabout route compared with the equivalent formulation (\ref{gls4}), in that it asks for a minimizing system of equations instead of for the solution $\vek{\widehat c}$ itself. We conclude that, in general, a best approximation of the overdetermined system $A\vek x=\vek b$ in TLS-sense need not exist: unlike in a regression problem, we are not satisfied with the subspace itself, but want the equation $b=c_1x_1+\cdots+c_nx_n$ for the subspace to be explicit w.r.t.\ $b$. Furthermore, the solution is not necessarily unique.
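Numerically, the recipe is a few lines. The sketch below is our own NumPy illustration, not part of the original text: it forms $(A\,|\,-\vek b)$, takes the right singular vector for the smallest singular value, and renormalizes its last component, returning `None` when that component vanishes.

```python
import numpy as np

def tls_solve(A, b):
    """TLS solution of the overdetermined system A x = b: the right
    singular vector of (A | -b) belonging to the smallest singular
    value, renormalized so that its last component equals 1.
    Returns None when that component vanishes (no TLS solution)."""
    C = np.column_stack([A, -b])
    _, _, Vt = np.linalg.svd(C)      # singular values in decreasing order
    v = Vt[-1]                       # smallest singular direction of (A | -b)
    if np.isclose(v[-1], 0.0):
        return None                  # (n+1)-st component zero: no solution
    return v[:-1] / v[-1]
```

For a consistent system the smallest singular value is zero and the TLS solution coincides with the exact one; for the degenerate system of Example 2 below the function returns `None`.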
We shall illustrate this by two examples. \par\noindent {\bf Example 1:} Consider the cloud of 4 points in $I\!\!R^2$: $$ (1,1)\,,~~(-1,1)\,,~~(1,-1)\,,~~ {\rm and} ~~(-1,-1)\,. $$ The LS-approximation is the horizontal line $\{(x,y)~|~ y=0\}$. For the TLS-approximation we compute the SVD of the matrix $B$, $$ B:=\left(\matrix{~1~&~1~\cr~1&-1~\cr-1~&~1\cr-1~&-1~}\right)= \left(\matrix{~ {1 \over 2} ~&~ {1 \over 2} ~& \sqrt{ {1 \over 2} }&0\cr ~ {1 \over 2} &- {1 \over 2} ~ & 0& \sqrt{ {1 \over 2} }\cr - {1 \over 2} ~ &~ {1 \over 2} & 0& \sqrt{ {1 \over 2} } \cr - {1 \over 2} ~&- {1 \over 2} &\sqrt{ {1 \over 2} }&0 ~ }\right) ~\left(\matrix{~2~&~0~\cr0&2\cr0&0\cr0&0}\right)~ \left(\matrix{~1~&~0~\cr0&1}\right)\,. $$ As both singular values are equal, there is no uniqueness; every line through the origin provides a solution, as shown in fig.\ \ref{fig4}. The sum of squares of distances from the points to a line with slope $\tan\varphi$ is independent of the slope. \medskip \hrule \begin{figure}[htb] \begin{center} \begin{picture}(300,100)(-150,-55) \setlength{\unitlength}{.32mm} \put(-100,-50){\line(2,1){200}} \put(-100,0){\line(1,0){200}} \put(0,0){\circle*{3}} \put(50,50){\circle*{3}}\put(27,49){$\scriptscriptstyle (1,1)$} \put(50,-50){\circle*{3}}\put(22,-53){$\scriptscriptstyle (1,-1)$} \put(-50,50){\circle*{3}}\put(-78,49){$\scriptscriptstyle (-1,1)$} \put(-50,-50){\circle*{3}}\put(-83,-53){$\scriptscriptstyle (-1,-1)$} \put(50,50){\line(1,-2){10}} \put(50,-50){\line(-1,2){30}} \put(50,-50){\line(0,1){100}} \put(-25,-7){$\scriptstyle\varphi$} \put(53,10){$\scriptstyle\tan\,\varphi$} \put(57,40){$\scriptstyle\xi$} \put(27,-25){$\scriptstyle\eta$} \end{picture} \end{center}\vskip-10pt \caption{\label{fig4} Example 1: $\xi^2+\eta^2=(1+\tan\varphi)^2\cos^2 \varphi+(1-\tan\varphi)^2\cos^2\varphi=2$, independent of $\varphi$\,.} \end{figure} \hrule \par\noindent {\bf Example 2:} Solve the following problem in LS-sense and TLS-sense: $$ \left(\matrix{~1~&~0~\cr 0&0\cr0&0}\right)~{x
\choose y}= \left(\matrix{~1~\cr1\cr 1}\right) $$ The normal equations for the LS-approximation are: $$ \left(\matrix{~1~&~0~\cr 0&0}\right)~{x \choose y}= \left(\matrix{~1~\cr 0}\right)~~~~~\Longrightarrow ~~~~x=1~~~ {\rm and} ~~~y~\mbox{undetermined}\,. $$ The SVD for the TLS-problem is: $$ B~=~\left(\matrix{~1~&0&~1~\cr0&0&1\cr0&0&1}\right)~=~ \left(\matrix{ \scriptscriptstyle{1\over\sqrt 2} & \scriptscriptstyle{1\over\sqrt 2} & 0 \cr \scriptscriptstyle{1\over 2} & \scriptscriptstyle{-{1\over 2}} & \scriptscriptstyle{-{1\over\sqrt 2}} \cr \scriptscriptstyle{1\over 2} & \scriptscriptstyle{-{1\over 2}} & \scriptscriptstyle{1\over\sqrt 2} }\right) \left(\matrix{ \scriptscriptstyle{\sqrt{2+\sqrt 2}} & 0 & 0 \cr0&\scriptscriptstyle{\sqrt{2-\sqrt 2}}&0\cr0&0&0}\right) \left(\matrix{ \scriptscriptstyle{1\over 2}\scriptscriptstyle{\sqrt{2-\sqrt 2}} & 0 & \scriptscriptstyle{1\over 2}\scriptscriptstyle{\sqrt{2+\sqrt 2}} \cr \scriptscriptstyle{1\over 2}\scriptscriptstyle{\sqrt{2+\sqrt 2}} & 0 & \scriptscriptstyle{-{1\over 2}}\scriptscriptstyle{\sqrt{2-\sqrt 2}} \cr0&1&0}\right)\,. $$ The smallest singular value is $0$. However, the $3^{\rm rd}$ component of the corresponding right singular vector $(0\,,\,1\,,\,0)^T$ is $0$ as well, so that no TLS-solution exists! \section{Generalizations: (a) Multiple RHS\label{par7}} \setcounter{equation}{0} In ordinary least squares there is no difference between the treatment of one and of multiple right-hand sides (RHS). In Total Least Squares the column space of the matrix is bent towards the RHS.
If there are given several RHS's, we can treat each of them separately and compute the SVD of an extended matrix for each RHS. In a different approach we can try to bend the matrix to all RHS's collectively. So we consider the problem: given $A\inI\!\!R^{m\times n}$ ($m\ge n+p$) and $B\inI\!\!R^{m\times p}$ find $X\inI\!\!R^{n\times p}$ that solves the overdetermined system of equations $A\,X=B$ in TLS-sense. By analogy to (\ref{gls6}) we have to find the solution $X$ of a solvable matrix equation $F\,X=G$ (i.e. ${\rm Im}(G)\subset{\rm Im}(F)$\,) nearest to $A\,X=B$; we have to minimize \beq{gentls1} ~~~\|\,A-F\,\|_F^2~+~\|\,B-G\,\|_F^2 ~~~\mbox{subject to}~~~F \in I\!\!R^{m\times n}\,,~G \in I\!\!R^{m\times p}~ {\rm and} ~F\,X=G\,. \end{equation} Otherwise stated, find an approximation $E=(\,F~|~ G\,) \in I\!\!R^{m\times(n+p)}$ to $(\, A ~|~ B \,)$, such that \beq{gentls2} \|\,(\,A~|~ B\,)-E\,\|_F^2~~~~\mbox{is minimal subject to }~~~~rank(E)=n\,. \end{equation} The solution of (\ref{gentls2}) is constructed by making the SVD of $(\,A~|~ B\,)$: \beq{gentls3} \def\stapelup#1#2{\matrix{{\scriptscriptstyle #2}\cr #1}} \def\stapel#1#2{\matrix{#1\cr{\scriptscriptstyle #2}}} (\, A ~|~ B \,)=U\,\Sigma\,V^T= \left(\, \stapelup{U_1\vruimte{1em}{1em} }{(m\times n)} ~\vrule~\stapelup{ U_2\vruimte{1em}{1em} }{(m\times p)}\,\right)~ \left(\matrix{~\stapelup{\Sigma_{1}}{ (n\times n)}&~0~\cr \vruimte{1em}{0em}~&~\cr 0&\stapel{\Sigma_{2}}{ (p\times p)}}\right)~ \left(\matrix{~\stapelup{V_{1,1}}{(n\times n)}&~\stapelup{V_{1,2}} { (n\times p)}~\cr \vruimte{1em}{0em}~&~\cr \stapel{V_{2,1}}{ (p\times n)}&\stapel{V_{2,2}} { (p\times p)}}\right)^T\,. 
\end{equation} \par\noindent {\bf Theorem.} If we assume: \begin{list}{\alph{enumi}.}{\leftmargin15pt \usecounter{enumi}} \item $rank(V_{2,2})=p$\,, \item $\Sigma=diag(\sigma_1\,,\,\cdots\,,\,\sigma_n\,,\,\sigma_{n+1} \,,\,\cdots\,,\,\sigma_{n+p}\,)$ with $\sigma_j\ge\sigma_{j+1}$ and $\sigma_n\ne\sigma_{n+1}$\,, \end{list} then the TLS problem (\ref{gentls2}) has the unique solution $X=-V_{1,2}\,V_{2,2}^{-1}\,.$ \medskip\par\noindent {\bf Proof}: From (\ref{gentls3}) and the assumption $\sigma_n>\sigma_{n+1}$ it follows that the best rank-$n$ approximation\footnote{see \cite{gvl} theorem 2.5.2} of $(A~|~ B)$ in the Frobenius norm is given by $E$, \beq{gentls4} E:=\left(\, U_1 ~|~ U_2\,\right)~ \left(\matrix{~\Sigma_{1}~&~0~\cr 0&0}\right)~ \left(\matrix{~V_{1,1}~&~V_{1,2}~\cr V_{2,1}&V_{2,2}}\right)^T~ =U_1\,\Sigma_1\,\left(\, V_{1,1}^T ~|~ V_{2,1}^T\,\right)=(F~|~ G)\,, \end{equation} where $F:=U_1\,\Sigma_1\, V_{1,1}^T$ and $G:=U_1\,\Sigma_1\,V_{2,1}^T\,$. The orthogonality of the columns of $V$ implies $$ {V_{1,1}\choose V_{2,1}}^T\,{V_{1,2}\choose V_{2,2}}=(0) ~~~\mbox{and hence}~~~E\,\displaystyle{V_{1,2}\choose V_{2,2}}= F\,V_{1,2}+G\, V_{2,2}=(0)\,. $$ Under the assumption $rank(V_{2,2})=p$ we may conclude that $X:=-V_{1,2}\,V_{2,2}^{-1}$ solves the approximate equation $FX=G$\,. \hfill\square{6pt}{} \par\noindent {\bf (b) Fixed columns:} In section \ref{par2} we have introduced the simple (bivariate) regression problem and we have shown that it is solved in LS-sense by the LS-solution of the overdetermined system of equations $A{a\choose b}=(\vek e ~|~\vek x){a\choose b}=\vek y$ (cf. \ref{sregres4}). However, as explained in section \ref{par5}, the TLS-solution of this overdetermined system of equations is derived from the SVD of the matrix $(\vek e~|~\vek x~|~\vek y)\inI\!\!R^{m\times 3}$.
This differs from the TLS-solution of the regression problem, which is derived from the SVD of $B:=(\vek x -\overline x\vek e~|~\vek y-\overline y\vek e)\inI\!\!R^{m\times 2}$, cf. eq.~(\ref{tlstwo5a}). The reason for this difference is that the formulation of the regression problem as an overdetermined set of equations $A{a\choose b}=\vek y$ has lost its geometric interpretation as a line $y=a+bx$ in the $(x,y)$-plane. In the LS-solution this makes no difference, since all uncertainty is put in the $\vek y$-column. However, TLS for $A{a\choose b}=\vek y$ puts uncertainty in all three columns $\vek e$, $\vek x$ and $\vek y$, although in the regression problem there is no reason to postulate uncertainty in the ``constant term''. The TLS-solution of the regression problem can be regained from $A{a\choose b}=\vek y$ if we ``freeze'' the first column of $A$ and put uncertainty in the columns $\vek x$ and $\vek y$ only, as in eq.~(\ref{tlstwo6}). The solution is obtained by orthogonalization w.r.t. the frozen column $\vek e$. \par This motivates the study of the TLS-problem for $A\,X=B$ with frozen columns, see \cite{ghs}, in which uncertainty is postulated in only a part of the columns of $A$ (LS is the special case in which all columns of the matrix are frozen!). So we assume that the matrix $A$ is partitioned into a frozen part $A_1\inI\!\!R^{m\times j}$ and a part $A_2\inI\!\!R^{m\times k}$ containing some uncertainty, with $j+k=n$. Given a right-hand side $B\inI\!\!R^{m\times p}$ with $m\ge j+k+p$\,, we seek matrices $X_1\inI\!\!R^{j\times p}$ and $X_2\inI\!\!R^{k\times p}$ such that \beq{gentls5} A_1\,X_1+A_2\,X_2~=~B~~~~~~\mbox{in TLS-sense w.r.t.}~~ A_2~~ {\rm and} ~~B~~~\mbox{keeping $A_1$ fixed.} \end{equation} More precisely: minimize, among all $C\inI\!\!R^{m\times k}$ and $D\inI\!\!R^{m\times p}$, \beq{gentls6} \|\,A_2-C\,\|_F^2~+~\|\,B-D\,\|_F^2 ~~~~~~\mbox{subject to}~~~~~A_1\,X_1+C\,X_2 =D\,, \end{equation} or, otherwise stated, subject to the condition $rank(A_1~|~ C~|~ D)=j+k=n$.
\par\noindent Guided by the idea of (\ref{tlstwo6}), where we orthogonalized w.r.t.~the frozen column, we find the \\{\bf solution}: \beq{gentls7} \begin{array}{l} \mbox{a. Orthogonalize columns of $A_2$ and $B$ w.r.t. columns of $A_1$}\hspace{5em}\cr \mbox{b. Solve TLS-problem in the orthogonal complement ${\rm Im}(A_1)^\perp$\,.} \end{array} \end{equation} \par\noindent {\bf Proof:} If $A_1$ is of full column rank ($rank(A_1)=j$), we make the QR-factorization $$A_1=U\,{R_1\choose 0}~~~~ \mbox{ with}~~~~ U\inI\!\!R^{m\times m}\mbox{ orthogonal}~~~ {\rm and} ~~~R_1\inI\!\!R^{j\times j}\,. $$ Because the Frobenius norm is orthogonally invariant, the functional (\ref{gentls6}) is equal to \beq{gentls8a} \|\,U^T A_2-U^T C\,\|_F^2~+~\|\,U^T B-U^T D\,\|_F^2\,. \end{equation} Partitioning the matrices in parts consisting of the topmost $j$ rows and the remaining $m-j$ rows respectively, \beq{gentls8b} {A_{12}\choose A_{22}}:=U^T\,A_2,~~~ {B_1\choose B_2}:=U^T\,B ,~~~{C_1\choose C_2}:=U^T\,C, ~~~{D_1\choose D_2}:=U^T\,D \,, \end{equation} we can rewrite the functional as \beq{gentls8} \|\,A_{12}-C_1\,\|_F^2~+~\|\,B_1-D_1\,\|_F^2~+~\|\,A_{22}-C_2\,\|_F^2 ~+~\|\,B_2-D_2\,\|_F^2\,. \end{equation} It has to be minimized subject to the equations $R_1\,X_1+C_1\,X_2=D_1$ and $C_2\,X_2=D_2\,$. If $X_2$ is known, and if we choose $A_{12}=C_1$ and $B_1=D_1$, the first two terms in (\ref{gentls8}) vanish and $X_1$ can be solved from the equation $R_1\,X_1+C_1\,X_2=D_1$. Hence it suffices to minimize \beq{gentls9} \|\,A_{22}-C_2\,\|_F^2~+~\|\,B_2-D_2\,\|_F^2~~~~ \mbox{subject to}~~~~C_2\,X_2=D_2\,. \end{equation} This is solved as eq.~(\ref{gentls2}) by the SVD of $(C_2~|~ D_2)$. \par If $A$ is not of full column rank ($rank(A_1)=r<j$), we use the SVD of $A_1$: $$A_1=U\,\left(\matrix{\Sigma_1 &0\cr0&0}\right)\, {V^T_1\choose V^T_2}~~~~ \mbox{ with}~~~~ U\inI\!\!R^{m\times m},~~~ \Sigma_1\inI\!\!R^{r\times r},~~~V_1\inI\!\!R^{j\times r}, ~~~V_2\inI\!\!R^{j\times(j- r)}\,. 
$$ With the same partitioning as in (\ref{gentls8b}), but now with the $r$ topmost rows in the upper parts and the remaining $m-r$ rows in the lower parts, we arrive at the minimization of (\ref{gentls8}) subject to the conditions \beq{gentls10} \Sigma_1\,V_1^T\,X_1+C_1\,X_2=D_1~~~~ {\rm and} ~~~~ C_2\,X_2=D_2\,. \end{equation} Choosing $A_{12}=C_1$ and $B_1=D_1$ and solving $X_2$ from (\ref{gentls9}) we can solve $V_1^T\,X_1$ from (\ref{gentls10}). This makes the first two terms in (\ref{gentls8}) zero, such that the problem again is reduced to the form (\ref{gentls2}). As in standard LS-problems in which the matrix is not of full column rank, the part $X_1$ is not uniquely defined; we may add to it any linear combination of the columns of $V_2$\,.\hfill\square{6pt}{}
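Both generalizations reduce to small SVD computations. The following NumPy sketch is our own illustration (function names and test data are not from the text, and $A_1$ is assumed to have full column rank): `tls_multi` implements the theorem's formula $X=-V_{1,2}\,V_{2,2}^{-1}$, and `tls_frozen` follows the two-step recipe (\ref{gentls7}).

```python
import numpy as np

def tls_multi(A, B):
    """Multiple right-hand sides: TLS solution of A X = B via the SVD
    of (A | B), using the block formula X = -V12 V22^{-1}."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, B]))
    V = Vt.T
    V12, V22 = V[:n, n:], V[n:, n:]
    return -V12 @ np.linalg.inv(V22)

def tls_frozen(A1, A2, B):
    """Frozen columns: solve A1 X1 + A2 X2 = B in TLS-sense w.r.t. A2
    and B, keeping A1 exact (A1 of full column rank).
    Step a: orthogonalize A2 and B against Im(A1) via a full QR;
    step b: plain TLS in the complement, then back-substitute for X1."""
    j = A1.shape[1]
    Q, R = np.linalg.qr(A1, mode='complete')  # Q plays the role of U
    A2t, Bt = Q.T @ A2, Q.T @ B               # split into Im(A1) and its complement
    X2 = tls_multi(A2t[j:], Bt[j:])           # TLS problem in Im(A1)^perp
    X1 = np.linalg.solve(R[:j], Bt[:j] - A2t[:j] @ X2)  # R1 X1 + C1 X2 = D1
    return X1, X2
```

For consistent data the smallest singular value of the extended matrix is zero and both routines recover the exact coefficients.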
https://arxiv.org/abs/math/9805076
An Introduction to Total Least Squares
https://arxiv.org/abs/0809.4621
Explicit constructions of infinite families of MSTD sets
We explicitly construct infinite families of MSTD (more sums than differences) sets. There are enough of these sets to prove that there exists a constant C such that at least C / r^4 of the 2^r subsets of {1,...,r} are MSTD sets; thus our family is significantly denser than previous constructions (whose densities are at most f(r)/2^{r/2} for some polynomial f(r)). We conclude by generalizing our method to compare linear forms epsilon_1 A + ... + epsilon_n A with epsilon_i in {-1,1}.
\section{Introduction} Given a finite set of integers $A$, we define its sumset $A+A$ and difference set $A-A$ by \bea A + A & \ = \ & \{a_i + a_j: a_i, a_j \in A\} \nonumber\\ A - A & \ = \ & \{a_i - a_j: a_i, a_j \in A\}, \eea and let $|X|$ denote the cardinality of $X$. If $|A+A| > |A-A|$, then, following Nathanson, we call $A$ an MSTD (more sums than differences) set. As addition is commutative while subtraction is not, we expect that for a `generic' set $A$ we have $|A-A| > |A+A|$, as a typical pair $(x,y)$ contributes one sum and two differences; thus we expect MSTD sets to be rare. Martin and O'Bryant \cite{MO} proved that, in some sense, this intuition is wrong. They considered the uniform model\footnote{This means each of the $2^n$ subsets of $\{1,\dots,n\}$ is equally likely to be chosen or, equivalently, that the probability that any $k \in \{1,\dots,n\}$ is in $A$ is just $1/2$.\label{footnote:unifmodel}} for choosing a subset $A$ of $\{1,\dots,n\}$, and showed that there is a positive probability that a random subset $A$ is an MSTD set (though, not surprisingly, the probability is quite small). However, the answer is very different for other ways of choosing subsets randomly, and if we decrease slightly the probability that an element is chosen then our intuition is correct. Specifically, consider the binomial model with parameter $p(n)$, with $\lim_{n\to\infty} p(n) = 0$ and $n^{-1} = o(p(n))$ (so $p(n)$ does not tend to zero so rapidly that the sets are too sparse).\footnote{This model means that the probability that $k \in \{1,\dots,n\}$ is in $A$ is $p(n)$.} Hegarty and Miller \cite{HM} recently proved that, in the limit as $n \to \infty$, the percentage of subsets of $\{1,\dots,n\}$ that are MSTD sets tends to zero in this model. Though MSTD sets are rare, they do exist (and, in the uniform model, are somewhat abundant by the work of Martin and O'Bryant). Examples go back to the 1960s.
Conway is said to have discovered $\{0, 2, 3, 4, 7, 11, 12, 14\}$, while Marica \cite{Ma} gave $\{0, 1, 2, 4, 7, 8, 12, 14, 15\}$ in 1969 and Freiman and Pigarev \cite{FP} found $\{0, 1, 2, 4, 5$, $9, 12, 13$, $14, 16, 17$, $21, 24, 25, 26, 28, 29\}$ in 1973. Recent work includes infinite families constructed by Hegarty \cite{He} and Nathanson \cite{Na2}, as well as existence proofs by Ruzsa \cite{Ru1, Ru2, Ru3}. Most of the previous constructions\footnote{An alternate method constructs an infinite family from a given MSTD set $A$ by considering $A_t = \{\sum_{i=1}^t a_i m^{i-1}: a_i \in A\}$. For $m$ sufficiently large, these will be MSTD sets; this is called the base expansion method. Note, however, that these will be very sparse. See \cite{He} for more details.} of infinite families of MSTD sets start with a symmetric set which is then `perturbed' slightly through the careful addition of a few elements that increase the number of sums more than the number of differences; see \cite{He,Na2} for a description of some previous constructions and methods. In many cases, these symmetric sets are arithmetic progressions; such sets are natural starting points because if $A$ is an arithmetic progression, then $|A+A| = |A-A|$.\footnote{As $|A+A|$ and $|A-A|$ are not changed by mapping each $x \in A$ to $\alpha x + \beta$ for any fixed $\alpha$ and $\beta$, we may assume our arithmetic progression is just $\{0,\dots,n\}$, and thus the cardinality of each set is $2n+1$.} In this work we present a new method which takes an MSTD set satisfying certain conditions and constructs an infinite family of MSTD sets. While these families are not dense enough to prove a positive percentage of subsets of $\{1, \dots, r\}$ are MSTD sets, we are able to elementarily show that the percentage is at least $C / r^4$ for some constant $C$. 
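Checking whether a given set is an MSTD set is a one-line computation. The following sketch (ours, not from the paper) verifies Conway's example mentioned above:

```python
def sumset(A):
    """All pairwise sums a_i + a_j of a finite set of integers."""
    return {a + b for a in A for b in A}

def diffset(A):
    """All pairwise differences a_i - a_j."""
    return {a - b for a in A for b in A}

def is_mstd(A):
    """True iff A has more sums than differences."""
    return len(sumset(A)) > len(diffset(A))

conway = {0, 2, 3, 4, 7, 11, 12, 14}   # |A+A| = 26 > 25 = |A-A|
```

By contrast, an arithmetic progression such as $\{0,\dots,n\}$ gives $|A+A| = |A-A| = 2n+1$, so it is not an MSTD set.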
Thus our families are far denser than those in \cite{He,Na2}; trivial counting\footnote{For example, consider the following construction of MSTD sets from \cite{Na2}: let $m, d, k \in \mathbb{N}$ with $m \ge 4$, $1 \le d \le m-1$, $d \neq m/2$, $k \ge 3$ if $d < m/2$ else $k \ge 4$. Set $B = [0,m-1] \backslash \{d\}$, $L = \{m-d, 2m-d, \dots , km-d\}$, $a^\ast = (k + 1)m-2d$ and $A = B \cup L \cup (a^\ast - B) \cup \{m\}$. Then $A$ is an MSTD set. The width of such a set is of the order $km$. Thus, if we look at all triples $(m,d,k)$ with $km \le r$ satisfying the above conditions, these generate on the order of at most $\sum_{k \le r} \sum_{m \le r/k} \sum_{d \le m} 1 \ll r^2$, and there are of the order $2^r$ possible subsets of $\{0,\dots,r\}$; thus this construction generates a negligible number of MSTD sets. Though we write $f(r)/2^{r/2}$ to bound the percentage from other methods, a more careful analysis shows it is significantly less; we prefer this easier bound as it is already significantly less than our method. See for example Theorem 2 of \cite{He} for a denser example.} shows all of their infinite families give at most $f(r)2^{r/2}$ of the subsets of $\{1,\dots, r\}$ (for some polynomial $f(r)$) are MSTD sets, implying a percentage of at most $f(r) / 2^{r/2}$.\\ We first introduce some notation. The first is a common convention, while the second codifies a property which we've found facilitates the construction of MSTD sets. 
\\ \bi \item We let $[a,b]$ denote all integers from $a$ to $b$; thus $[a,b] = \{n \in \mathbb{Z}: a \le n \le b\}$.\\ \item We say a set of integers $A$ has the property $P_n$ (or is a $P_n$-set) if both its sumset and its difference set contain all but the first and last $n$ possible elements (and of course it may or may not contain some of these fringe elements).\footnote{It is not hard to show that for fixed $0<\alpha\le1$ a random set drawn from $[1,n]$ in the uniform model is a $P_{\lfloor \alpha n\rfloor}$-set with probability approaching $1$ as $n\to\infty$.\label{footnote:beingpn}} Explicitly, let $a=\min{A}$ and $b=\max{A}$. Then $A$ is a $P_n$-set if \bea\label{eq:beingPnsetsum} [2a+n,\ 2b-n] \ \subset\ A+A \eea and \bea\label{eq:beingPnsetdiff} [-(b-a)+n,\ (b-a)-n]\ \subset\ A-A.\eea \ \\ \ \ei We can now state our construction and main result. \begin{thm}\label{thm:mainconstruction} Let $A=L\cup R$ be a $P_n$, MSTD set where $L\subset[1,n]$, $R\subset[n+1,2n]$, and $1,2n\in A$;\footnote{Requiring $1, 2n \in A$ is quite mild; we do this so that we know the first and last elements of $A$.} see Remark \ref{rek:thmisnontrivial} for an example of such an $A$. Fix a $k \ge n$ and let $m$ be arbitrary. Let $M$ be any subset of $[n+k+1, n+k+m]$ with the property that it does not have a run of more than $k$ missing elements (i.e., for all $\ell \in [n+k+1,n+m+1]$ there is a $j \in [\ell, \ell+k-1]$ such that $j\in M$). Assume further that $n+k+1 \not\in M$ and set $A(M;k)=L \cup O_1 \cup M \cup O_2 \cup R'$, where $O_1=[n+1,n+k]$, $O_2=[n+k+m+1,n+2k+m]$ (thus the $O_i$'s are just sets of $k$ consecutive integers), and $R'=R+2k+m$. Then \ben \item $A(M;k)$ is an MSTD set, and thus we obtain an infinite family of distinct MSTD sets as $M$ varies; \item there is a constant $C > 0$ such that as $r\to\infty$ the percentage of subsets of $\{1,\dots,r\}$ that are in this family (and thus are MSTD sets) is at least $C / r^4$. 
\een \end{thm} \begin{rek}\label{rek:thmisnontrivial} In order to show that our theorem is not trivial, we must of course exhibit at least one $P_n$, MSTD set $A$ satisfying all our requirements (else our family is empty!). We may take the set\footnote{This $A$ is trivially modified from \cite{Ma} by adding 1 to each element, as we start our sets with 1 while other authors start with 0. We chose this set as our example as it has several additional nice properties that were needed in earlier versions of our construction which required us to assume slightly more about $A$.} $A = \{1, 2, 3, 5, 8, 9, 13, 15, 16\}$; it is an MSTD set as \bea A+A & \ = \ & \{2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,\nonumber\\ & & \ \ \ \ \ 22,23,24,25,26,28,29,30,31,32\} \nonumber\\ A-A & \ = \ & \{-15,-14,-13,-12,-11,-10,-8,-7,-6,-5,-4,-3,-2,-1,\nonumber\\ & & \ \ \ \ \ 0,1,2,3,4,5,6,7,8,10,11,12,13,14,15\} \eea (so $|A+A| = 30 >29 = |A-A|$). $A$ is also a $P_n$-set, as \eqref{eq:beingPnsetsum} is satisfied since $[10,24] \subset A+A$ and \eqref{eq:beingPnsetdiff} is satisfied since $[-7,7] \subset A-A$. For the uniform model, a subset of $[1,2n]$ is a $P_n$-set with high probability as $n\to\infty$, and thus examples of this nature are plentiful. For example, of the $1748$ MSTD sets with minimum $1$ and maximum $24$, $1008$ are $P_n$-sets. \end{rek} Unlike other estimates on the percentage of MSTD sets, our arguments are not probabilistic, and rely on explicitly constructing large families of MSTD sets. Our arguments share some similarities with the methods in \cite{He} (see for example Case I of Theorem 8) and \cite{MO}. There the fringe elements of the set were also chosen first. A random set was then added in the middle, and the authors argued that with high probability the resulting set is an MSTD set. 
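The counts and fringe inclusions claimed in the remark are easy to confirm mechanically; the following sketch (ours) implements the $P_n$ test of \eqref{eq:beingPnsetsum}-\eqref{eq:beingPnsetdiff}:

```python
def sumset(A):
    return {a + b for a in A for b in A}

def diffset(A):
    return {a - b for a in A for b in A}

def is_P_n(A, n):
    """Check the P_n property: A+A contains [2a+n, 2b-n] and A-A
    contains [-(b-a)+n, (b-a)-n], where a = min(A) and b = max(A)."""
    a, b = min(A), max(A)
    S, D = sumset(A), diffset(A)
    return (set(range(2 * a + n, 2 * b - n + 1)) <= S
            and set(range(-(b - a) + n, (b - a) - n + 1)) <= D)

A = {1, 2, 3, 5, 8, 9, 13, 15, 16}     # the MSTD, P_8 set of the remark
```

Here $|A+A| = 30 > 29 = |A-A|$ and, with $n = 8$, the sumset covers $[10,24]$ and the difference set covers $[-7,7]$, exactly as listed above.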
We can almost add a random set in the middle; the reason we do not obtain a positive percentage is that we have the restriction that there can be no consecutive block of size $k$ of numbers in the middle that are not chosen to be in $A(M;k)$. This is easily satisfied by requiring us to choose at least one number in consecutive blocks of size $k/2$, and this is what leads to the loss of a positive percentage\footnote{Without this requirement, we could take any $M$, and thus a positive percentage would work, specifically at least $2^{-(2k+2n)}$.} (though we do obtain sets that are known to be MSTD sets, and not just highly likely to be MSTD sets). The paper is organized as follows. We describe our construction in \S\ref{sec:infinitefamilies}, and prove our claimed lower bounds for the percentage of sets that are MSTD sets in \S\ref{sec:lowerboundspercentage}. We then generalize our construction in \S\ref{sec:generalizingconstr} and explore when there are infinite families of sets satisfying \be \left|\gep_1 A + \cdots + \gep_n A\right| \ > \ \left|\widetilde{\gep}_1 A + \cdots + \widetilde{\gep}_n A\right|, \ \ \ \gep_i, \widetilde{\gep}_i \in \{-1,1\}. \ee We end with some concluding remarks and suggestions for future research in \S\ref{sec:concremfutureresearch}. \section{Construction of infinite families of MSTD sets}\label{sec:infinitefamilies} Let $A\subset [1,2n]$. We can write this set as $A=L\cup R$ where $L\subset[1,n]$ and $R\subset[n+1,2n]$. We have \begin{equation} A+A \ = \ [L+L] \cup [L+R] \cup [R+R] \end{equation} where $L+L\subset[2,2n]$, $L+R \subset[n+2,3n]$ and $R+R\subset [2n+2,4n]$, and \begin{equation} A-A \ = \ [L-R]\cup [L-L] \cup [R-R] \cup [R-L] \end{equation} where $L-R\subset[-2n+1,-1]$, $L-L \subset[-(n-1),n-1]$, $R-R\subset [-(n-1),n-1]$ and $R-L\subset [1,2n-1]$. A typical subset $A$ of $\{1,\dots,2n\}$ (chosen from the uniform model, see Footnote \ref{footnote:unifmodel}) will be a $P_n$-set (see Footnote \ref{footnote:beingpn}).
It is thus the interaction of the ``fringe'' elements that largely determines whether a given set is an MSTD set. Our construction begins with a set $A$ that is both an MSTD set and a $P_n$-set. We construct a family of $P_n$, MSTD sets by inserting elements into the middle in such a way that the new set is a $P_n$-set, and the number of added sums is equal to the number of added differences. Thus the new set is also an MSTD set. In creating MSTD sets, it is very useful to know that we have a $P_n$-set. The reason is that we have all but the ``fringe'' possible sums and differences, and are thus reduced to studying the extreme sums and differences. The following lemma shows that if $A$ is a $P_n$, MSTD set and a certain extension of $A$ is a $P_n$-set, then this extension is also an MSTD set. The difficult step in our construction is determining a large class of extensions which lead to $P_n$-sets; we will do this in Lemma \ref{lem:mainconstr}. \begin{lem}\label{lem:stability} Let $A=L\cup R$ be a $P_n$-set where $L\subset[1,n]$ and $R\subset[n+1,2n]$. Form $A'=L \cup M \cup R'$ where $M\subset [n+1,n+m]$ and $R'=R+m$. If $A'$ is a $P_n$-set then $|A'+A'|-|A+A|=|A'-A'|-|A-A|=2m$ (i.e., the number of added sums is equal to the number of added differences). In particular, if $A$ is an MSTD set then so is $A'$. \end{lem} \begin{proof} We first count the number of added sums. In the interval $[2,n+1]$ both $A+A$ and $A'+A'$ are identical, as any sum can come only from terms in $L+L$. Similarly, we can pair the sums of $A+A$ in the region $[3n+1,4n]$ with the sums of $A'+A'$ in the region $[3n+2m+1,4n+2m]$, as these can come only from $R+R$ and $(R+m)+(R+m)$ respectively. Since we have accounted for the $n$ smallest and largest terms in both $A+A$ and $A'+A'$, and as both are $P_n$-sets, the number of added sums is just $(3n+2m+1)-(3n+1)=2m$. 
Similarly, differences in the interval $[1-2n, -n]$ that come from $L-R$ can be paired with the corresponding terms from $L-(R+m)$, and differences in the interval $[n, 2n-1]$ from $R-L$ can be paired with differences coming from $(R+m)-L$. Thus the size of the middle grows from the interval $[-n+1,n-1]$ to the interval $[-n-m+1,n+m-1]$. Hence we have added $(2(n+m)-1)-(2n-1)=2m$ differences. Thus $|A'+A'|-|A+A|=|A'-A'|-|A-A|=2m$ as desired. \end{proof} The above lemma is not surprising, as in it we assume $A'$ is a $P_n$-set; the difficulty in our construction is showing that our new set $A(M;k)$ is also a $P_n$-set for suitably chosen $M$. This requirement forces us to introduce the sets $O_i$ (which are blocks of $k$ consecutive integers), as well as requiring $M$ to have at least one of every $k$ consecutive integers. We are now ready to prove the first part of Theorem \ref{thm:mainconstruction} by constructing an infinite family of distinct $P_n$, MSTD sets. We take a $P_n$, MSTD set and insert a set in such a way that it remains a $P_n$-set; thus by Lemma \ref{lem:stability} we see that this new set is an MSTD set. \begin{lem}\label{lem:mainconstr} Let $A=L\cup R$ be a $P_n$-set where $L\subset[1,n]$, $R\subset[n+1,2n]$, and $1,2n\in A$. Fix a $k \ge n$ and let $m$ be arbitrary. Choose any $M\subset[n+k+1,n+k+m]$ with the property that $M$ does not have a run of $k$ or more missing elements, and form $A(M;k)=L \cup O_1 \cup M \cup O_2 \cup R'$ where $O_1=[n+1,n+k]$, $O_2=[n+k+m+1,n+2k+m]$, and $R'=R+2k+m$. Then $A(M;k)$ is a $P_n$-set. \end{lem} \begin{proof} For notational convenience, denote $A(M;k)$ by $A'$. Note $A'+A' \subset [2, 4n+4k+2m]$. We begin by showing that there are no missing sums from $n+2$ to $3n+4k+2m$; proving an analogous statement for $A'-A'$ shows $A'$ is a $P_n$-set.
By symmetry\footnote{Apply the arguments below to the set $2n+2k+m-A'$, noting that $1, 2n+2k+m \in A'$.} we only have to show that there are no missing sums in $[n+2, 2n+2k+m]$. We consider various ranges in turn. We observe that $[n+2, n+k+1]\subset A'+A'$ because we have $1\in L$ and these sums result from $1+O_1$. Additionally, $O_1 + O_1=[2n+2, 2n+2k]\subset A'+A'$. Since $n\le k$ we have $n+k+1\ge 2n+1$; hence these two regions are contiguous and thus $[n+2, 2n+2k]\subset A'+A'$. Now consider $O_1 + M$. Since $M$ does not have a run of $k$ or more missing elements, the worst case scenario (in terms of getting the required sums) is that the smallest element of $M$ is $n+2k$ and that the largest element is $n+m+1$ (and, of course, at least one out of every $k$ consecutive integers is still in $M$). If this is the case then we still have $O_1+M \supset [(n+1)+(n+2k), (n+k) + (n+m+1)]=[2n+2k+1, 2n+k+m+1]$. We had already shown that $A'+A'$ has all sums up to $2n+2k$; this extends the sumset to all sums up to $2n+k+m+1$. All that remains is to show we have all sums in $[2n+k+m+2, 2n+2k+m]$. This follows immediately from $O_1+O_2=[2n+k+m+2,2n+3k+m]\subset A'+A'$. This extends our sumset to include all sums up to $2n+3k+m$, which is well past our halfway mark of $2n+2k+m$. Thus we have shown that $A'+A' \supset [n+2, 3n+4k+2m]$.\\ We now do a similar calculation for the difference set, which is contained in $[-(2n+2k+m)+1, (2n+2k+m)-1]$. As we have already analyzed the sumset, all that remains to prove $A'$ is a $P_n$-set is to show that $A'-A' \supset [-n-2k-m+1,n+2k+m-1]$. As all difference sets\footnote{Unless, of course, $A$ is the empty set!} are symmetric about and contain $0$, it suffices to show the positive elements are present, i.e., that $A'-A' \supset [1,n+2k+m-1]$. We easily see $[1,k-1] \subset A'-A'$ as $[0,k-1] \subset O_1 - O_1$. Now consider $M - O_1$.
Again the worst case scenario (for getting the required differences) is that the least element of $M$ is $n+2k$ and the greatest is $n+m+1$. With this in mind we see that $M - O_1 \supset [(n+2k)-(n+k) , (n+m+1)-(n+1)]=[k,m]$. Now $O_2 - O_1 \supset [(n+k+m+1)-(n+k), (n+2k+m)-(n+1)]=[m+1,2k+m-1]$, and we therefore have all differences up to $2k+m-1$. Since $2n \in A$ we have $2n+2k+m \in A'$. Consider $(2n+2k+m) - O_1 = [n+k+m,n+2k+m-1]$. Since $k\ge n$ we see that $n+k+m \le 2k+m$; this implies that we have all differences up to $n+2k+m-1$ (this is because we already have all differences up to $2k+m-1$, and $n+k+m$ is either less than $2k+m-1$, or at most one larger). \end{proof} \begin{proof}[Proof of Theorem \ref{thm:mainconstruction}(1)] The proof of the first part of Theorem \ref{thm:mainconstruction} follows immediately. By Lemma \ref{lem:mainconstr} our new sets $A(M;k)$ are $P_n$-sets, and by Lemma \ref{lem:stability} they are also MSTD. All that remains is to show that the sets are distinct; this is done by requiring that $n+k+1$ is not in our set (for a fixed $k$, these sets have elements $n+1, \dots, n+k$ but not $n+k+1$; thus different $k$ yield distinct sets). \end{proof} \section{Lower bounds for the percentage of MSTDs}\label{sec:lowerboundspercentage} To finish the proof of Theorem \ref{thm:mainconstruction}, for a fixed $n$ we need to count how many sets $\widetilde{M}$ of the form $O_1 \cup M \cup O_2$ (see Theorem \ref{thm:mainconstruction} for a description of these sets) of width $r = 2k+m$ can be inserted into a $P_n$, MSTD set $A$ of width $2n$. As $O_1$ and $O_2$ are just intervals of $k$ consecutive integers, the flexibility in choosing them comes solely from the freedom to choose their length $k$ (so long as $k \ge n$). There is far more freedom to choose $M$. There are two issues we must address. First, we must determine how many ways there are to fill the elements of $M$ such that there are no runs of $k$ missing elements.
Second, we must show that the sets generated by this method are distinct. We saw in the proof of Theorem \ref{thm:mainconstruction}(1) that the latter is easily handled by giving $A(M;k)$ (through our choice of $M$) slightly more structure. Assume that the element $n+k+1$ is \emph{not} in $M$ (and thus not in $A$). Then for a fixed width $r=2k+m$ each value of $k$ gives rise to necessarily distinct sets, since the set contains $[n+1, n+k]$ but not $n+k+1$. In our arguments below, we assume our initial $P_n$, MSTD set $A$ is fixed; we could easily increase the number of generated MSTD sets by varying $A$ over certain MSTD sets of size $2n$. We choose not to do this as $n$ is fixed, and thus varying over such $A$ will only change the percentages by a constant independent of $k$ and $m$. Fix $n$ and let $r$ tend to infinity. We count how many $\widetilde{M}$'s there are of width $r$ such that in $M$ there is at least one element chosen in any consecutive block of $k$ integers. One way to ensure this is to divide $M$ into consecutive, non-overlapping blocks of size $k/2$, and choose at least one element in each block. There are $2^{k/2}$ subsets of a block of size $k/2$, and all but one have at least one element. Thus there are $2^{k/2} - 1 = 2^{k/2} (1 - 2^{-k/2})$ valid choices for each block of size $k/2$. As the width of $M$ is $r-2k$, there are $\lceil \frac{r-2k}{k/2}\rceil \le \frac{r}{k/2}-3$ blocks (the last block may have length less than $k/2$, in which case any configuration will suffice to ensure there is not a consecutive string of $k$ omitted elements in $M$ because there will be at least one element chosen in the previous block). We see that the number of valid $M$'s of width $r-2k$ is at least $2^{r-2k} \left(1 - 2^{-k/2}\right)^{\frac{r}{k/2}-3}$. As $O_1$ and $O_2$ are two sets of $k$ consecutive $1$'s, there is only one way to choose either. 
We therefore see that, for a fixed $k$, of the $2^r = 2^{m+2k}$ possible subsets of $r$ consecutive integers, at least $2^{r-2k} \left(1 - 2^{-k/2}\right)^{\frac{r}{k/2}-3}$ are permissible to insert into $A$. To ensure that all of the sets are distinct, we require $n+k+1 \not\in M$; the effect of this is to eliminate one degree of freedom in choosing an element in the first block of $M$, and this will only change the proportionality constants in the percentage calculation (and \emph{not} the $r$ or $k$ dependencies). Thus if we vary $k$ from $n$ to $r/4$ (we could go a little higher, but once $k$ is as large as a constant times $r$ the number of generated sets of width $r$ is negligible) we have at least some fixed constant times $2^r \sum_{k=n}^{r/4} \frac{1}{2^{2k}} \left(1 - 2^{-k/2}\right)^{\frac{r}{k/2}-3}$ MSTD sets; equivalently, the percentage of sets $O_1 \cup M \cup O_2$ with $O_i$ of width $k \in \{n,\dots, r/4\}$ and $M$ of width $r-2k$ that we may add is at least this divided by $2^r$, or some universal constant times \be\label{eq:keycardsum} \sum_{k=n}^{r/4} \frac1{2^{2k}} \left(1 - \frac1{2^{k/2}}\right)^{\frac{r}{k/2}} \ee (as $k \ge n$ and $n$ is fixed, we may remove the $-3$ in the exponent by changing the universal constant). We now determine the asymptotic behavior of this sum. More generally, we can consider sums of the form \be S(a,b,c;r) \ = \ \sum_{k=n}^{r/4} \frac1{2^{ak}} \left(1 - \frac1{2^{bk}}\right)^{r/ck}. \ee For our purposes we take $a=2$ and $b=c=1/2$; we consider this more general sum so that any improvements in our method can readily be translated into improvements in counting MSTD sets. While we know (from the work of Martin and O'Bryant \cite{MO}) that a positive percentage of such subsets are MSTD sets, our analysis of this sum yields slightly weaker results.
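Before bounding this sum rigorously, it is instructive to evaluate it numerically. The sketch below (Python; the starting index $n=4$ is an arbitrary illustrative choice of ours) computes $S(2,1/2,1/2;r)$ for several values of $r$; the product $r^4 S$ grows slowly with $r$, consistent with the logarithmic factors in the bounds proved below.

```python
# Numerically probe S(a,b,c;r) = sum_{k=n}^{r/4} 2^(-ak) (1 - 2^(-bk))^(r/(ck))
# for a = 2, b = c = 1/2 (the MSTD case); n = 4 is an arbitrary small choice.
def S(a, b, c, r, n=4):
    return sum(2.0 ** (-a * k) * (1.0 - 2.0 ** (-b * k)) ** (r / (c * k))
               for k in range(n, r // 4 + 1))

for r in (100, 200, 400, 800):
    s = S(2, 0.5, 0.5, r)
    # The bounds predict 1/r^4 << S << (log r)^{4+eps}/r^4,
    # so r^4 * S should grow slower than any power of r.
    print(r, s, r ** 4 * s)
```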
The approach in \cite{MO} is probabilistic, obtained by fixing the fringes of our subsets to ensure certain sums and differences are in (or not in) the sum- and difference sets. While our approach also fixes the fringes, we have far more possible fringe choices than in \cite{MO} (though we do not exploit this). While we cannot prove a positive percentage of subsets are MSTD sets, our arguments are far more elementary. The proof of Theorem \ref{thm:mainconstruction}(2) is clearly reduced to proving the following lemma, and then setting $a = 2$ and $b=c=1/2$. \begin{lem}\label{lem:lowerupperbounds} Let \be S(a,b,c;r) \ = \ \sum_{k=n}^{r/4} \frac1{2^{ak}} \left(1 - \frac1{2^{bk}}\right)^{r/ck}. \ee Then for any $\epsilon > 0$ we have \be \frac{1}{r^{a/b}} \ \ll \ S(a,b,c;r) \ \ll \ \frac{(\log r)^{2a+\epsilon}}{r^{a/b}}. \ee \end{lem} \begin{proof} We repeatedly use the fact that $(1 - 1/x)^x$ is an increasing function of $x$. We first prove the lower bound. For $k \ge (\log_2 r)/b$ and $r$ large, we have \be \left(1 - \frac1{2^{bk}}\right)^{r/ck} \ = \ \left(1 - \frac1{2^{bk}}\right)^{2^{bk} \frac{r}{ck 2^{bk}}} \ \ge \ \left(1 - \frac{1}{r}\right)^{r \cdot \frac{b}{c \log_2 r}} \ \ge \ \foh \ee (in fact, for $r$ large the last bound is almost exactly 1). Thus we trivially have \bea S(a,b,c;r) & \ \ge \ & \sum_{k = (\log_2 r)/b}^{r/4} \frac1{2^{ak}} \cdot \foh \ \gg \ \frac1{r^{a/b}}. \eea For the upper bound, we divide the $k$-sum into two ranges: (1) $bn \le bk \le \log_2 r - \log_2 (\log r)^\delta$; (2) $\log_2 r - \log_2(\log r)^\delta \le bk \le br/4$. In the first range, we have \begin{eqnarray} \left(1 - \frac1{2^{bk}}\right)^{r/ck} & \ \le \ & \left(1 - \frac{(\log r)^\delta}{r}\right)^{r/ck}\nonumber\\ & \ \ll \ & \exp\left(-\frac{b(\log r)^\delta}{c \log_2 r}\right) \nonumber\\ & \ \le \ & \exp\left(-\frac{b\log 2}{c} \cdot (\log r)^{\delta -1}\right).
\end{eqnarray} If $\delta > 2$ then this factor is dominated by $r^{-\frac{b\log 2}{c} \cdot (\log r)^{\delta - 2}} \ll r^{-A}$ for any $A$, provided $r$ is sufficiently large. Thus there is negligible contribution from $k$ in range (1) if we take $\delta = 2 + \epsilon/a$ for any $\epsilon > 0$. For $k$ in the second range, we trivially bound the factors $\left(1 - 1/2^{bk}\right)^{r/ck}$ by 1. We are left with \bea \sum_{k \ge \frac{\log_2 r}b - \frac{\log_2(\log r)^\delta}{b}} \ \frac1{2^{ak}} \cdot 1 \ \le \ \frac{(\log r)^{a \delta}}{r^{a/b}} \sum_{\ell = 0}^\infty \frac1{2^{a\ell}} \ \ll \ \frac{(\log r)^{a \delta}}{r^{a/b}}. \eea Combining the bounds for the two ranges with $\delta = 2 + \epsilon/a$ completes the proof. \end{proof} \begin{rek} The upper and lower bounds in Lemma \ref{lem:lowerupperbounds} are quite close, differing by a few powers of $\log r$. The true value will be at least $\left(\frac{\log r}{r}\right)^{a/b}$; we sketch the proof in Appendix \ref{sec:sizeSabcm}. \end{rek} \begin{rek} We could attempt to increase our lower bound for the percentage of subsets that are MSTD sets by summing $r$ from $R_0$ to $R$ (as we have fixed $r$ above, we are only counting MSTD sets of width $2n+r$ where $1$ and $2n+r$ are in the set). Unfortunately, at best we can change the universal constant; our bound will still be of the order $1/R^4$. To see this, note the number of such MSTD sets is at least a constant times $\sum_{r=R_0}^R 2^r / r^4$ (to get the percentage, we divide this by $2^R$). If $r \le R/2$ then there are exponentially few sets. If $r \ge R/2$ then $r^{-4} \in [1/R^4, 16/R^4]$. Thus the percentage of such subsets is still only at least of order $1/R^4$.
\end{rek} \section{Generalizing our construction}\label{sec:generalizingconstr} Instead of searching for $A$ such that $|A+A| > |A-A|$, we now consider the more general problem\footnote{We do not consider the most general problem of comparing arbitrary combinations of $A$, contenting ourselves to this special case; see \cite{HM} for some thoughts about such generalizations.} of when \be \left|\gep_1 A + \cdots + \gep_n A\right| \ > \ \left|\widetilde{\gep}_1 A + \cdots + \widetilde{\gep}_n A\right|, \ \ \ \gep_i, \widetilde{\gep}_i \in \{-1,1\}. \ee Consider the generalized sumset \be f_{j_1,\ j_2}(A)\ =\ A+A+\cdots+A-A-A-\cdots-A,\ee where there are $j_1$ pluses\footnote{By a slight abuse of notation, we say there are two sums in $A+A-A$, as is clear when we write it as $\epsilon_1 A + \epsilon_2 A + \epsilon_3 A$.} and $j_2$ minuses, and set $j=j_1 + j_2$. Our notion of a $P_n$-set generalizes, and we find that if there exists one set $A$ with $|f_{j_1,\ j_2}(A)| > |f_{j_1',\ j_2'}(A)|$, then we can construct infinitely many such $A$. Note without loss of generality that we may assume $j_1 \ge j_2$.\footnote{This follows as we are only interested in $|f_{j_1,\ j_2}(A)|$, which equals $|f_{j_2,\ j_1}(A)|$. This is because $B$ and $-B$ have the same cardinality, and thus (for example) we see $A+A-A$ and $-(A-A-A)$ have the same cardinality.} \begin{defi}[$P_n^j$-set.] Let $A \subset [1, k]$ with $1, k \in A$. We say $A$ is a $P^j_n$-set if, for every choice of $j_1, j_2 \ge 0$ with $j_1+j_2=j$, the set $f_{j_1,\ j_2}(A)$ contains all but the first $n$ and last $n$ possible elements. \end{defi} \begin{rek} Note that a $P_n^2$-set is the same as what we called a $P_n$-set earlier. \end{rek} We expect the following generalization of Theorem \ref{thm:mainconstruction} to hold.
\begin{conj}\label{conj:genconstr} For any $f_{j_1,\ j_2}$ and $f_{j_1',\ j_2'}$, if there exists a finite set of integers $A$ which is (1) a $P^j_n$-set; (2) $A \subset [1, 2n]$ and $1, 2n \in A$; and (3) $|f_{j_1,\ j_2}(A)|>|f_{j_1',\ j_2'}(A)|$, then there exists an infinite family of such sets. \end{conj} The difficulty in proving the above conjecture is that we need to find a set $A$ satisfying $|f_{j_1,\ j_2}(A)|>|f_{j_1',\ j_2'}(A)|$; once we find such a set, we can mirror the construction from Theorem \ref{thm:mainconstruction}. Currently we can only find such $A$ for $j \in \{2,3\}$: \begin{thm}\label{thm:genconstr} Conjecture \ref{conj:genconstr} is true for $j \in \{2,3\}$. \end{thm} As the proof is similar to that of Theorem \ref{thm:mainconstruction}, we just highlight the changes. We prove the lemmas below in greater generality than we need for our theorem as this generality is needed to attack Conjecture \ref{conj:genconstr}. The first step is an analogue of Lemma \ref{lem:stability}, the second is proving that a $P_n^2$-set is also a $P_n^j$-set, and the third is constructing sets $A$ (when $j = 3$) to start the construction. \begin{lem}\label{lem:gen1} Let $A=L \cup R$ be a $P_n^j$-set, where $L \subset [1, n], R \subset [n+1, 2n]$. Form $A' =L\cup M\cup R',$ where $M \subset [n+1, n+m]$ and $R' = R+m$. If $A'$ is a $P_n^j$-set, then $|f_{j_1,\ j_2}(A')| -|f_{j_1,\ j_2}(A)| = |f_{j_1',\ j_2'}(A')| -|f_{j_1',\ j_2'}(A)|.$ Thus if $|f_{j_1,\ j_2}(A)|>|f_{j_1',\ j_2'}(A)|$, the same is true for $A'$. \end{lem} \begin{proof} Since $A\subset [1, 2n]$ and is a $P^j_n$-set, we know $f(A) \subset [j_1-2nj_2, 2nj_1-j_2]$ and $[j_1 -2nj_2+n, 2nj_1-j_2 -n] \subset f(A)$. Note any elements in $f(A) \cap [j_1 -2nj_2, j_1-2nj_2+n-1]$ can only come from $L+L+L+ \cdots+L-R-R-R-\cdots-R$. As $A' \subset [1, 2n+m]$, $f(A') \subset [j_1-(2n+m)j_2 , (2n+m)j_1-j_2]$ and $[j_1 -(2n+m)j_2+n, (2n+m)j_1-j_2 -n] \subset f(A')$.
Any elements in $f(A') \cap [j_1 -(2n+m)j_2, j_1-(2n+m)j_2+n-1]$ can come only from $L+L+L+ \cdots+L-R'-R'-R'-\cdots-R'$, which is simply a translation of $L+L+L+ \cdots+L-R-R-R-\cdots-R$. A similar argument works for the right fringe of $f_{j_1,\ j_2}(A')$. Thus $|f(A')| = |f(A)| +jm$ (this is because the potential width of $f_{j_1,\ j_2}(A')$ is $jm$ more than that of $f_{j_1,\ j_2}(A)$, and the two fringes of these sets are in a 1-1 correspondence). Since $|f_{j_1,\ j_2}(A')| - |f_{j_1,\ j_2}(A)|$ depends only on $j=j_1+j_2$, the same equality holds for any pair of forms with $j$ coefficients, and the lemma is proven. \end{proof} \begin{lem}\label{lem:gen2} For $j \ge 3$, any $P_n^2$-set $A \subset [1, k]$ with $k \ge 2n$ is also a $P^j_n$-set. \end{lem} \begin{proof} Let $A$ be a $P_n^2$-set, where $A \subset [1, k]$, $1, k \in A$, and $k \geq 2n$. Then $A+A \supset [n+2, 2k-n]$ (as $A$ is a $P_n^2$-set). Let $f_{j_1,\ j_2}$ be a form with $j \ge 3$, and thus either $j_1$ or $j_2$ is at least 2; without loss of generality we assume $j_1 \ge 2$. There is a form $f_{j_1-2,\ j_2}$ such that $f_{j_1-2,\ j_2}(A) +A+A = f_{j_1,\ j_2}(A)$. The proof follows by showing $f_{j_1-2,\ j_2}(\{1, k\})+A+A$ contains all necessary elements, namely $[j_1-kj_2 +n, j_1k - j_2 -n]$. (By $f_{j_1-2,\ j_2}(\{1, k\})$ we mean all numbers of the form $\gep_1 a_1 + \cdots + \gep_{j-2} a_{j-2}$, with the $\gep_i$ the coefficients of the form $f_{j_1-2,j_2}$ and $a_i \in \{1,k\}$.) We have \begin{equation} f_{j_1-2,\ j_2}(\{1, k\})\ \supset \ \{j_1 - 2 -i +k(i-j_2) \;|\; 0\leq i \leq j -2\}. \end{equation} To see this, we first consider $i \le j_1-2$. For such $i$, for the positive summands choose $1$ a total of $j_1-2-i$ times and $k$ a total of $i$ times, while for the negative summands we choose $k$ each of the $j_2$ times.
If now $j_1 - 2 < i \le j-2$, for the positive summands we choose $k$ a total of $i-j_2$ times (which is permissible as this is at most $j_1-2$) and we choose 1 the remaining $j_1-2-(i-j_2)$ times, while for the negative summands we choose $1$ all $j_2$ times. This leads to a sum of $k\cdot (i-j_2) + 1\cdot(j_1-2+j_2-i)-1\cdot j_2$, which equals $j_1-2-i+ k(i-j_2)$ as claimed. Unfortunately, this argument fails if $i=j_1-1$ and $j_1=j_2$, as we would then be choosing $k$ from the positive summands negative one times.\footnote{This is the only bad case we need consider, as we know $j_1 \ge j_2$, and the only problem arises when $i-j_2 < 0$.} We are thus left with showing that we may obtain the sum $-1-k$ in this special case. As $j_1=j_2$, we just choose $1$ for the $j_1-2$ positive summands and $-1$ for all but one of the $j_2$ negative summands (where we choose one to be $k$). As $A$ is a $P_n^2$-set, $A+A \supset [n+2, 2k-n]$. Thus \begin{eqnarray} \bigcup_{i=0}^{j-2} [L_i, U_i] & \ \subset \ & f_{j_1-2,\ j_2}(\{1, k\})+A+A, \end{eqnarray} where \begin{eqnarray} L_i & \ = \ & j_1 - 2 -i +k(i-j_2)+ n+2 \nonumber\\ U_i &=& j_1 - 2 -i +k(i-j_2) +2k-n. \end{eqnarray} We see that $L_0 = j_1-kj_2 +n$ and $U_{j-2} = j_1k - j_2 -n$, our two desired endpoints. The proof is completed by showing that the intervals $[L_i,U_i]$ cover the desired interval, with no gaps between consecutive intervals. Since $2n \leq k$, we have: \begin{eqnarray} L_i -1 & \ = \ & j_1 -i +k(i-j_2)+ n -1\nonumber\\ &=& (j_1-i+ki-j_2k -1) +n\nonumber\\ & \le & (j_1-i+ki-j_2k -1) +k- n \nonumber\\ &=& j_1 - 2 -(i-1) +k((i-1)-j_2) +2k-n \nonumber\\ &\le & U_{i-1}. \end{eqnarray} Thus there are no gaps between the intervals $[L_{i-1}, U_{i-1}], \; [L_i, U_i]$ and they therefore cover the necessary range. \end{proof} \begin{rek} Note that the above lemma is false if the size of $n$ is unrestricted. To take an extreme example, let $A = \{1, 10\}$ and $n = 9$.
Then $A$ is a $P_n^2$-set ($11 \in A+A, \; 0 \in A-A$) but $A$ is not a $P^3_n$-set. \end{rek} \begin{proof}[Proof of Theorem \ref{thm:genconstr}] Lemmas \ref{lem:gen1} and \ref{lem:gen2} imply that the sets described in Lemma \ref{lem:mainconstr} also work in our generalized case. The counting argument of \S\ref{sec:lowerboundspercentage} requires no modification. Thus the theorem is proved \emph{provided} we can find an $A$ to start the process. The following set was obtained by taking elements in $\{2,\dots,49\}$ to be in $A$ with probability\footnote{Note the probability is 1/3 and not 1/2.} $1/3$ (and, of course, requiring $1, 50 \in A$); it took about 300000 sets to find the first one satisfying our conditions: \be A \ = \ \{1, 2, 5, 6, 16, 19, 22, 26, 32, 34, 35, 39, 43, 48, 49, 50\}. \ee To be a $P_{25}^3$-set we need to have $A+A+A \supset [n + 3, 6 n - n] = [28, 125]$ and $A+A-A \supset [-n + 2, 3 n - 1] = [-23, 74]$. A simple calculation shows $A+A+A = [3,150]$, all possible elements, while $A+A-A = [-48, 99] \backslash \{-34\}$ (i.e., every possible element but -34). Thus $A$ is a $P_{25}^3$-set satisfying $|A+A+A| > |A+A-A|$, and thus we have the example we need to prove Theorem \ref{thm:genconstr}. \end{proof} \begin{rek} We could also have taken \be A \ = \ \{1, 2, 3, 4, 8, 12, 18, 22, 23, 25, 26, 29, 30, 31, 32, 34, 45, 46, 49, 50\}, \ee which has the same $A+A+A$ and $A+A-A$. \end{rek} \section{Concluding remarks and future research}\label{sec:concremfutureresearch} One avenue of future research is to complete the proof of Conjecture \ref{conj:genconstr} and give an elementary example of an infinite family of sets satisfying $|f_{j_1,\ j_2}(A)| > |f_{j_1',\ j_2'}(A)|$. We have reason to believe the correct model is to look for $P_n^j$-sets by choosing the numbers $\{2,\dots,2n-1\}$ to be in $A$ with probability $1/j$ (and, of course, requiring $1, 2n \in A$). 
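The explicit set $A$ found in the proof of Theorem \ref{thm:genconstr} can be re-checked mechanically; the following Python sketch (ours, for illustration) recomputes $A+A+A$ and $A+A-A$.

```python
# Re-check the explicit P_25^3-set from the proof of Theorem thm:genconstr.
A = {1, 2, 5, 6, 16, 19, 22, 26, 32, 34, 35, 39, 43, 48, 49, 50}

AA   = {x + y for x in A for y in A}
AAA  = {s + z for s in AA for z in A}      # A + A + A
AAmA = {s - z for s in AA for z in A}      # A + A - A

assert AAA == set(range(3, 151))                # all of [3, 150]
assert AAmA == set(range(-48, 100)) - {-34}     # [-48, 99] minus {-34}
print(len(AAA), len(AAmA))                      # 148 147
```

In particular $|A+A+A| = 148 > 147 = |A+A-A|$, as needed.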
Unfortunately the density of such sets appears to decrease rapidly with $n$, and to date straightforward computer searches have been unsuccessful when $j=4$. As we shall see below, perhaps a better algorithm would incorporate choosing elements near the fringes (i.e., near $1$ and $2n$) with a different probability than $1/j$. \\ We also observed earlier (Footnote \ref{footnote:beingpn}) that for a constant $0<\alpha \le 1$, a set randomly chosen from $[1,2n]$ is a $P_{\lfloor \alpha n \rfloor}$-set with probability approaching $1$ as $n\to\infty$. MSTD sets are of course not random, but it seems logical to suppose that this pattern continues. \begin{conj}\label{conj:MSTDsP_n} Fix a constant $0<\alpha\le 1/2$. Then as $n\to\infty$ the probability that a randomly chosen MSTD set in $[1,2n]$ containing $1$ and $2n$ is a $P_{\lfloor \alpha n \rfloor}$-set goes to $1$. \end{conj} In our construction and that of \cite{MO}, a collection of MSTD sets is formed by fixing the fringe elements and letting the middle vary. The intuition behind both is that the fringe elements matter most and the middle elements least. Motivated by this it is interesting to look at all MSTD sets in $[1,n]$ and ask with what frequency a given element is in these sets. That is, what is \be \gamma(k;n) \ = \ \frac{\#\{A: k \in A\ {\rm and}\ A\ {\rm is\ an\ MSTD\ set}\}}{\#\{A: \ A\ {\rm is\ an\ MSTD\ set}\}} \ee as $n\to\infty$? We can get a sense of what these probabilities might be from Figure \ref{fig:MSTDfreq}. \begin{figure} \begin{center} \scalebox{1.75}{\includegraphics{freqn100kvaries.eps}} \caption{\label{fig:MSTDfreq}\textbf{Estimation of $\gamma(k,100)$ as $k$ varies from $1$ to $100$ from a random sample of 4458 MSTD sets.}} \end{center}\end{figure} Note that, as the graph suggests, $\gamma$ is symmetric about $\frac{n+1}2$, i.e. $\gamma(k,n)=\gamma(n+1-k,n)$. 
This follows from the fact that the cardinalities of the sumset and difference set are unaffected by sending $x \to \alpha x + \beta$ for any $\alpha \neq 0$ and any $\beta$. Thus for each MSTD set $A$ we get a distinct MSTD set $n+1-A$, showing that our function $\gamma$ is symmetric. These sets are distinct, since if $A=n+1-A$ then $A$ would be sum-difference balanced, contradicting the fact that $A$ is an MSTD set.\footnote{The following proof is standard (see, for instance, \cite{Na2}). If $A = n+1-A$ then \be |A+A| \ = \ |A + (n+1-A)| \ = \ |n+1+(A-A)| \ = \ |A-A|.\ee} From \cite{MO} we know that a positive percentage of sets are MSTD sets. By the central limit theorem we then get that the average size of an MSTD set chosen from $[1,n]$ is about $n/2$. This tells us that on average $\gamma(k,n)$ is about $1/2$. The graph above suggests that the frequency goes to $1/2$ in the center. This leads us to the following conjecture: \begin{conj}\label{conj:50conjecture} Fix a constant $0<\alpha<1/2$. Then $\lim_{n\rightarrow\infty}{\gamma(k,n)}=1/2$ for $\lfloor \alpha n \rfloor \le k \le n - \lfloor \alpha n \rfloor$. \end{conj} \begin{rek} More generally, we could ask which non-decreasing functions $f(n)$ have $f(n)\rightarrow \infty$, $n-f(n)\rightarrow \infty$ and $\lim_{n\rightarrow\infty}{\gamma(k,n)}=1/2$ for all $k$ such that $\lfloor f(n) \rfloor \le k \le n - \lfloor f(n) \rfloor$. \end{rek}
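For very small $n$ the function $\gamma(k,n)$ can be computed exactly by exhausting all subsets, complementing the sampled data of Figure \ref{fig:MSTDfreq}. The sketch below (Python; the choice $n=15$ is ours, and the classical set $\{1,3,4,5,8,12,13,15\}$ shows MSTD sets already fit in $[1,15]$) also verifies the symmetry $\gamma(k,n)=\gamma(n+1-k,n)$ exactly.

```python
from itertools import combinations

# Compute gamma(k; n) exactly for a small n by enumerating all subsets of
# [1, n].  Larger n quickly becomes infeasible by brute force.
def is_mstd(A):
    plus  = {x + y for x in A for y in A}
    minus = {x - y for x in A for y in A}
    return len(plus) > len(minus)

n = 15
elements = range(1, n + 1)
mstd = [set(A) for size in range(1, n + 1)
        for A in combinations(elements, size) if is_mstd(A)]

gamma = {k: sum(k in A for A in mstd) / len(mstd) for k in elements}

# The reflection A -> n + 1 - A preserves the MSTD property, so the
# frequencies are exactly symmetric: gamma(k, n) = gamma(n + 1 - k, n).
assert all(gamma[k] == gamma[n + 1 - k] for k in elements)
print(len(mstd))
```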
https://arxiv.org/abs/1402.5826
Depth and Stanley Depth of the Canonical Form of a factor of monomial ideals
In this paper we show that the depth and the Stanley depth of a factor of two monomial ideals are invariant under taking a so-called canonical form. It follows easily that the Stanley Conjecture holds for the factor if and only if it holds for its canonical form. In particular, we construct an algorithm which simplifies the depth computation and, using the canonical form, we massively reduce the run time of the sdepth computation.
\section{Introduction} \vskip 1cm Let $K$ be a field and $S=K[x_1,\ldots,x_n]$ be the polynomial ring over $K$ in $n$ variables. A Stanley decomposition of a graded $S-$module $M$ is a finite family $$\mathcal{D} = (S_i, u_i)_{i \in I}$$ in which $u_i$ are homogeneous elements of $M$ and $S_i$ are graded $K-$algebra retracts of $S$ for all $i \in I$ such that $S_i \cap \Ann (u_i) = 0$ and $$M = \DSum{i \in I}{} S_iu_i$$ as a graded $K-$vector space. The Stanley depth of $\mathcal{D}$, denoted by $\sdepth (\mathcal{D})$, is defined as $\sdepth (\mathcal{D}) := \min_{i \in I} \dim S_i$. The Stanley depth of $M$ is defined as $$ \sdepth\ (M) :=\max\{\sdepth \ ({\mathcal D})\ |\ {\mathcal D}\; \text{is a Stanley decomposition of}\; M \}.$$ Another definition of sdepth using partitions is given in \cite{HVZ}. Stanley's Conjecture \cite{S} states that $\sdepth (M) \geq \depth\ (M)$. Let $J \subsetneq I \subset S$ be two monomial ideals in $S$. In \cite{IKF}, Ichim et al. studied the sdepth and depth of the factor $\nicefrac{I}{J}$ under polarization and reduced Stanley's Conjecture to the case when the ideals are monomial squarefree. This is possibly the best result of recent years concerning Stanley depth. It is worth mentioning that this result is not very useful for computing sdepth since it introduces a lot of new variables. In the squarefree case there are not many known results about the Stanley conjecture (see for example \cite{PP}). Another result of \cite{IKF} which helps in the sdepth computation is the following proposition, which extends \cite[Lemma 1.1]{Ci}, \cite[Lemma 2.1]{IQ}. \begin{proposition}\label{main} \cite[Proposition 5.1]{IKF} Let $k\in {\mathbb N}$ and $I''$, $J''$ be the monomial ideals obtained from $I$, $J$ in the following way: Each generator whose degree in $x_n$ is at least $k$ is multiplied by $x_n$ and all other generators are taken unchanged. Then $\sdepth_S\nicefrac{I}{J} = \sdepth_S\nicefrac{I''}{J''}$.
\end{proposition} Inspired by this proposition, we introduce a canonical form of a factor $\nicefrac{I}{J}$ of monomial ideals (see Definition \ref{def: canonical quot}) and prove easily that sdepth is invariant under taking the canonical form (see Theorem \ref{prop:sdepth}). This leads us to study the depth case as well (see Theorem \ref{prop:depth}). Theorem \ref{m} says that Stanley's Conjecture holds for a factor of monomial ideals if and only if it holds for its canonical form. As a side result, in the depth (respectively sdepth) computation algorithm for $\nicefrac{I}{J}$, one can first compute the canonical form and run the algorithm on this new, much simpler module (see the Appendix). In Example \ref{ex: timings} we conclude that the $\depth$ and $\sdepth$ algorithms are faster when considering the canonical form: using $\textsc{CoCoA}$ \cite{Co}, $\textsc{Singular}$ \cite{Sing} and Rinaldo's $\sdepth$ computation algorithm \cite{Rina}, we see a small decrease in the timing in the $\depth$ case, while in the $\sdepth$ case the run time is massively reduced. We hope that our algorithm, together with the one from \cite{AP}, will be used often in problems concerning monomial ideals. We owe thanks to Y.-H. Shen, who noticed our results in a previous arXiv version and pointed us to the papers of Okazaki and Yanagawa \cite{OY} and \cite{Y}, which are strongly connected with our topic. Indeed, Proposition \ref{main} and Corollary \ref{prop:5.1 for depth } follow from \cite[Theorem 5.2]{OY} (see also \cite[Sections 2, 3]{OY}). However, our proofs of Lemma \ref{lemma:depth is constant} and Corollary \ref{prop:5.1 for depth } are completely different from those appearing in the quoted papers, and we keep them for the sake of completeness. \vskip 1cm \section{The canonical form of a factor of monomial ideals} \vskip 1cm Let $R=K[x_1,\ldots,x_{n-1}]$ be the polynomial $K$-algebra over a field $K$ and $S := R[x_n]$.
Consider two monomial ideals $J \subsetneq I\subset S$ and denote by $G(I)$, respectively $G(J)$, the minimal (monomial) system of generators of $I$, respectively $J$. \begin{definition} \label{def: canonical} {\em The power $x_n^r$ {\em enters in a monomial} $u$ if $x_n^r|u$ but $x_n^{r+1}\nmid u$. We say that $I$ is {\em of type} $(k_1,\ldots,k_s)$ {\em with respect to} $x_n$ if $x_n^{k_i}$, $i\in [s]$, are all the powers of $x_n$ which enter in a monomial of $G(I)$ and $1\leq k_1<\ldots<k_s$. $I$ is {\em in the canonical form with respect to} $x_n$ if $I$ is of type $(1,\ldots,s)$ for some $s\in {\mathbb N}$. We simply say that $I$ is {\em in the canonical form} if it is in the canonical form with respect to all variables $x_1, \ldots, x_n$. } \end{definition} \begin{remark} \label{rem:canonical form}{\em Suppose that $I$ is of type $(k_1,\ldots,k_s)$ with respect to $x_n$. It is easy to get the {\em canonical form} $I'$ of $I$ {\em with respect to} $x_n$: replace $x_n^{k_i}$ by $x_n^i$ whenever $x_n^{k_i}$ enters in a generator of $G(I)$. Applying this procedure recursively to the other variables, we get the {\em canonical form} of $I$, that is, the canonical form with respect to all variables. Note that a squarefree monomial ideal is of type $(1)$ with respect to each $x_i$ and it is in the canonical form with respect to $x_i$, so in this case $I'=I$. } \end{remark} \begin{definition} \label{def: canonical quot} {\em Let $J \subsetneq I \subset S$ be two monomial ideals. We say that $\nicefrac{I}{J}$ is {\em of type} $(k_1, \ldots, k_s)$ {\em with respect to } $x_n$ if $x_n^{k_i}$, $i \in [s]$, are all the powers of $x_n$ which enter in a monomial of $G(I) \cup G(J)$ and $1 \leq k_1 < \ldots < k_s $. All the terminology presented in Definition \ref{def: canonical} extends automatically to the factor case. Thus we may speak about the {\em canonical form} $\overline{\nicefrac{I}{J}}$ of $\nicefrac{I}{J}$.
} \end{definition} \begin{remark} \label{rem:canonical quot} {\em In order to compute the canonical form with respect to $x_n$ of the $(k_1, \ldots, k_s)-$type factor $\nicefrac{I}{J}$, one replaces $x_n^{k_i}$ by $x_n^i$ whenever $x_n^{k_i}$ enters a generator of $G(I) \cup G(J)$. } \end{remark} \begin{example}\label{ex:canonical} {\em We present some examples where we compute the canonical form of a monomial ideal, respectively of a factor of two monomial ideals. \\ \begin{enumerate} \item Consider $S = \mathbb Q [x,y]$ and the monomial ideal $I = (x^4,x^3y^7)$. Then the canonical form of $I$ is $I' = (x^2, xy)$. \item Consider $S = \mathbb Q [x,y,z]$, $I = (x^{10}y^5, x^4yz^7,z^7y^3)$ and \\ $J = (x^{10}y^{20}z^2, x^3y^4z^{13}, x^9y^2z^7)$. The canonical form of $\nicefrac{I}{J}$ is $\overline{\nicefrac{I}{J}} = \fracs{(x^4y^5, x^2yz^2, y^3z^2)}{(x^4y^6z, xy^4z^3, x^3y^2z^2)}$. \end{enumerate}} \end{example} The canonical form of a factor of monomial ideals $\nicefrac{I}{J}$ is usually not the factor of the canonical forms of $I$ and $J$, as the following example shows. \begin{example}{\em Let $S = \mathbb Q [x, y]$, $I = (x^4, y^{10}, x^2y^7)$ and $J = (x^{20}, y^{30})$. The canonical form of $I$ is $I' = (x^2, y^2, xy)$ and the canonical form of $J$ is $J' = (x,y)$. Then $J'\not\subset I'$. But the canonical form of the factor $\nicefrac{I}{J}$ is $\overline{\nicefrac{I}{J}} = \fracs{(x^2,y^2,xy)}{(x^3, y^3)}$. } \end{example} Using Proposition \ref{main}, we see that the Stanley depth of a factor of monomial ideals does not change when passing to its canonical form. \begin{theorem}\label{prop:sdepth} Let $I$, $J$ be monomial ideals in $S$ and $\overline{\nicefrac{I}{J}}$ the canonical form of $\nicefrac{I}{J}$. Then $$\sdepth_S \nicefrac{I}{J}=\sdepth_S \overline{\nicefrac{I}{J}}.$$ \end{theorem} The proof proceeds by applying inductively the following lemma.
\begin{lemma}\label{lemma: sdepth} Suppose that $\nicefrac{I}{J}$ is of type $(k_1,\ldots,k_s)$ with respect to $x_n$ and $k_j+1<k_{j+1}$ for some $0\leq j<s$ (we set $k_0=0$). Let $G(I')$ (resp. $G(J')$) be the set of monomials obtained from $G(I)$ (resp. $G(J)$) by substituting $x_n^{k_i}$ by $x_n^{k_i-1}$ for $i>j$ whenever $x_n^{k_i}$ enters in a monomial of $G(I)$ (resp. $G(J)$). Let $I'$ and $J'$ be the ideals generated by $G(I')$ and $G(J')$. Then $$\sdepth_S \nicefrac{I}{J}=\sdepth_S \nicefrac{I'}{J'}.$$ \end{lemma} The proof of Lemma \ref{lemma: sdepth} follows from the proof of \cite[Proposition 5.1]{IKF} (see Proposition \ref{main} here). Next we focus on $\depth \nicefrac{I}{J}$ and $\depth \overline{\nicefrac{I}{J}}$. The idea of the proof of the following lemma is taken from \cite[Section 2]{P}. \begin{lemma}\label{lemma:depth is constant} Let $I_0 \subset I_1 \subset \ldots \subset I_e \subset R$ and $U_0 \subset U_1 \subset \ldots \subset U_e \subset R$ be graded ideals of $R$, and let $J, V \subset S$ be graded ideals of $S$, such that $U_i \subset I_i$ for $0 \leq i \leq e$, $I_e \subset J$, $V \subset J$ and $U_e \subset V$. Consider $T_k = \Sum{i=0}{e}x_n^i I_i S + x_n^k J$ and $W_k = \Sum{i=0}{e}x_n^iU_iS + x_n^kV$ for $k > e$. Then $\depth_S \fracs{T_k}{W_k}$ is constant for all $k>e$. \end{lemma} \begin{proof} Consider the following linear subspaces of $S$: $I := \Sum{i=0}{e}x_n^i I_i$ and $U := \Sum{i=0}{e}x_n^i U_i$. Note that $I$ and $U$ are not ideals in $S$. If $I = U$, then the claim follows easily from the chain of isomorphisms $\fracs{T_k}{W_k} \iso \fracs{x_n^kJ}{x_n^kJ \cap (I+x_n^kV)S} \iso \fracs{x_n^kJ}{x_n^k(I+V)S} \iso \fracs{J}{(I+V)S}$ for all $k>e$, and hence $\depth_S \fracs{T_k}{W_k}$ is constant for all $k>e$. Assume now that $I \neq U$ and consider the following exact sequence $$0 \to \fracs{J}{V} \xto{\cdot x_n^k} \fracs{T_k}{W_k} \to \fracs{T_k}{W_k + x_n^kJ} \to 0,$$ where we denote the last term by $H_k$.
Note that $H_k \iso \fracs{IS}{IS \cap (U+x_n^kJ)S}$ and $IS \cap (U + x_n^kJ)S = US + x_n^kIS$. Since $x_n^kH_k=0$, $H_k$ is a $\nicefrac{S}{(x_n^k)}-$module. Then $\depth_S H_k = \depth_{\nicefrac{S}{(x_n^k)}}H_k=\depth_R H_k$ because the graded maximal ideal $m$ of $R$ generates a zero dimensional ideal in $\nicefrac{S}{(x_n^k)}$. But $H_k$ over $R$ is isomorphic to $\fracs{\oplus_{i=0}^{k-1} I_i}{\oplus_{i=0}^{k-1} U_i} \iso \bigoplus_{i=0}^{k-1} \fracs{I_i}{U_i}$, where $I_i=I_e$ and $U_i=U_e$ for $e<i<k$. It follows that $t:=\depth_S H_k = \min_i\left\{\depth_{R} \fracs{I_i}{U_i}\right\}$. If $\depth_S \fracs{J}{V} = 0$, then the Depth Lemma gives us $\depth_S \fracs{T_k}{W_k} = 0$ for all $k>e$ and hence we are done. Therefore we may suppose that $\depth_S \fracs{J}{V} > 0$. Note that $t>0$ implies $\depth_S \fracs{T_k}{W_k} > 0$ by the Depth Lemma, since otherwise $\depth_S \fracs{T_k}{W_k} = \depth_{S} \fracs{J}{V} = 0$, which is false. Next we split the proof into two cases. $\circ$ Case $t = 0$. Let ${\mathcal F}=\big\{i\in \{0,\ldots,e\}\ \big|\ \depth_R\nicefrac{I_i}{U_i}=0\big\}$ and let $L_i\subset I_i$ be the graded ideal containing $U_i$ such that $\nicefrac{L_i}{U_i} \iso H_m^0(\nicefrac{I_{i}}{U_{i}})$. If $i\in {\mathcal F}$ and there exists $u\in ( L_i\cap V)\setminus U_i$, then $(m^s,x_n^k)x_n^i u\subset W_k$ for some $s\in {\mathbb N}$, that is, $\depth_S \fracs{T_k}{W_k} = 0$ for all $k > e$. Now consider the case when $L_i\cap V=U_i$ for all $i\in {\mathcal F}$. If $i\in {\mathcal F}$, then note that $L_i\subset L_j$ for $i< j\leq e$. Set $V'=V+L_eS$, $U'=U+ \Sum{i\in {\mathcal F}}{}x_n^i L_i$ and $W'_k := U'S+x_n^kV'=U'S+x_n^kV$ because $x_n^kL_eS\subset U'S$. Consider the following exact sequence $$0 \to \fracs{W'_k}{W_k} \to \fracs{T_k}{W_k} \to \fracs{T_k}{W_k'} \to 0.$$ For the last term we have $H_m^0(\nicefrac{I_j}{U'_j})=0$ for $0\leq j\leq e$, and so the new $t$ is positive, which is our next case.
Thus $\depth_S \fracs{T_k}{W'_k}>0$ and it is constant for $k>e$. The first term is isomorphic to $\fracs{U'S}{U'S\cap W_k}$. But $U'S\cap W_k=US+(U'S\cap x_n^kV)$ since $US\subset U'S$. Since $U'S\cap (x_n^kS)=x_n^k (U_e+L_e)S$ and $U_e\subset V$, it follows that $U'S\cap x_n^kV=x_n^kUS+(x_n^kL_eS\cap x_n^kVS)=x_n^kUS$. Consequently, the first term of the above exact sequence is isomorphic to $\fracs{U'S}{US}$. Note that the annihilator of the element induced by some $u\in L_e\setminus V$ in $\nicefrac{U'S}{US}$ contains a power of $m$, and so $\depth_S \fracs{U'S}{US}\leq 1$. This inequality is an equality since $x_n$ is regular on $\nicefrac{U'S}{US}$. By the Depth Lemma we get $\depth_S \fracs{T_k}{W_k}=1$ for all $k>e$. $\circ$ Case $t>0$. If $\depth_{S} \fracs{J}{V}\leq t= \depth_S H_k$, then the Depth Lemma gives again the claim, i.e., $\depth_S \fracs{T_k}{W_k} = \depth_S \fracs{J}{V}$ for all $k>e$. Assume that $\depth_{S} \fracs{J}{V} >t$. We apply induction on $t$, the initial step $t = 0$ being done in the first case. Since $\depth_S \fracs{J}{V}>t>0$, we have $\depth_S \fracs{J}{V}\geq 2$, and so we may find a homogeneous polynomial $f \in m$ that is regular on $\fracs{J}{V}$. Moreover, we may choose $f$ to be regular also on all $\fracs{I_i}{U_i}$, $i \leq e$. Then $f$ is regular on $\fracs{T_k}{W_k}$. Set $V'' := V + f J$, $U''_i := U_i + f I_i$ for all $i \leq e$, and $W''_k := \Sum{i=0}{e}x_n^iU''_iS + x_n^kV''$. By Nakayama's Lemma we get $U'' \neq U$, and therefore $\depth_{R} \fracs{I}{U''} = t-1$; by the induction hypothesis it follows that $\depth_S \fracs{T_k}{W_k} = 1 + \depth_S \fracs{T_k}{W''_k}$ is constant for all $k > e$. Finally, note that we may pass from the first case to the second one and conversely. In this way $U$ increases at each step, so by Noetherianity we arrive after finitely many steps at the case $I=U$, which was settled at the beginning.
\hfill\ \end{proof} The next corollary is in fact \cite[Proposition 5.1]{IKF} (see Proposition \ref{main}) stated for depth. It follows easily from Lemma \ref{lemma:depth is constant}, but also from \cite[Proposition 5.2]{OY} (see also \cite[Sections 2, 3]{Y}). \begin{corollary}\label{prop:5.1 for depth } Let $e \in \mathbb N$ and let $I$ and $J$ be monomial ideals in $S := K[x_1, \ldots, x_n]$. Let $I'$ and $J'$ be the monomial ideals obtained from $I$ and $J$ in the following way: each generator whose degree in $x_n$ is at least $e$ is multiplied by $x_n$, and all the other generators are left unchanged. Then $$\depth_S \nicefrac{I}{J} = \depth_S \nicefrac{I'}{J'}.$$ \end{corollary} This leads us to the analogue of Theorem \ref{prop:sdepth} for depth. \begin{theorem} \label{prop:depth} Let $I$ and $J$ be two monomial ideals in $S$ and $\overline{\nicefrac{I}{J}}$ the canonical form of $\nicefrac{I}{J}$. Then $$\depth_S \nicefrac{I}{J} = \depth_S \overline{\nicefrac{I}{J}}.$$ \end{theorem} \begin{proof} Assume that $\nicefrac{I}{J}$ is of type $(k_1,\ldots,k_s)$ with respect to $x_n$; then obviously $\overline{\nicefrac{I}{J}}$ is of type $(1,2,\ldots,s)$ with respect to $x_n$. Starting with $\overline{\nicefrac{I}{J}}$, we apply Corollary \ref{prop:5.1 for depth } until we obtain a factor $\nicefrac{I'_1}{J'_1}$ of type $(k_1,k_1+1,\ldots,k_1+s-1)$ having the same depth as $\overline{\nicefrac{I}{J}}$. We repeat the process until we get $\nicefrac{I'_s}{J'_s}$ of type $(k_1, k_2, \ldots, k_s)$ with respect to $x_n$ with unchanged depth. We then iterate over the remaining variables, and in the end the claim follows.\hfill \ \end{proof} Theorem \ref{prop:sdepth} and Theorem \ref{prop:depth} give the following theorem. \begin{theorem}\label{m} The Stanley conjecture holds for a factor of monomial ideals $\nicefrac{I}{J}$ if and only if it holds for its canonical form $\overline{\nicefrac{I}{J}}$.
\end{theorem} By Theorems \ref{prop:sdepth} and \ref{prop:depth}, instead of computing the $\depth$ or the $\sdepth$ of $\nicefrac{I}{J}$, $J \subsetneq I \subset S$, we can compute them for the simpler module $\overline{\nicefrac{I}{J}}$. \begin{example} \label{ex: timings} {\em We present the timings for the depth and sdepth computation algorithms with and without extracting the canonical form. $\textsc{Singular}$ \cite{Sing} was used in the depth computations, while $\textsc{CoCoA}$ \cite{Co} and Rinaldo's algorithm \cite{Rina} were used for the Stanley depth computations. \begin{enumerate} \item Consider the ideals from Example \ref{ex:canonical}(2). Timing for the $\sdepth \nicefrac{I}{J}$ computation: 22s. Timing for the $\sdepth \overline{\nicefrac{I}{J}}$ computation: 74 ms. \item Consider $R = \mathbb Q[x,y,z]$ and $I = (x^{100}yz,x^{50}yz^{50},x^{50}y^{50}z)$. Then the canonical form is $I' = (x^2yz,xyz^2,xy^2z)$. Timing for the $\sdepth I$ computation: 13m 3s. Timing for the $\sdepth I'$ computation: 21 ms. Notice that the difference in timings is very large; therefore, using the canonical form in the $\sdepth$ computation is a very important optimization step. On the other hand, the $\depth$ computation is immediate in both cases. In the last example below, the timing difference in the depth computation can be seen. \item Consider $R = \mathbb{Q}[x,y,z,t,v,a_1,\ldots,a_5]$,\\ $I = (v^4x^{12}z^{73},v^{87}t^{21}y^{13},x^{43}y^{18}z^{72}t^{28},vxy,vyz,vzt,vtx,a_1^{7000}, a_2^{413})$, \\$J = (v^5x^{13}z^{74},v^{88}t^{22}y^{14},x^{44}y^{19}z^{73}t^{29},v^2x^2y^2,v^2y^2z^2,v^2z^2t^2,v^2t^2x^2)$. Timing for the $\depth \nicefrac{I}{J}$ computation: 16m 11s. Timing for the $\depth \overline{\nicefrac{I}{J}}$ computation: 11m. \end{enumerate} } \end{example} \vskip 1cm \section{Appendix} \vskip 1cm We sketch the simple idea of the algorithm which computes the canonical form of a monomial ideal $I$.
This can easily be extended to compute the canonical form of $\nicefrac{I}{J}$ by simply applying it to $G(I) \cup G(J)$ and afterwards extracting the generators corresponding to $I$ and $J$. This was used in Example \ref{ex: timings}. The algorithm is based on Remark \ref{rem:canonical quot}: for each variable $x_i$ we build the list \verb"gp" in which we save the pairs $(g,p)$, where $p$ is chosen such that $x_i^p$ enters the $g$-th generator of the monomial ideal $I$. This list is sorted by the powers $p$, as in the following example. \begin{example} {\em Consider the ideal $I := (x^{13}, x^{4}y^{7}, y^7z^{10}) \subset \mathbb{Q} [x, y, z]$. Then for each variable we obtain a different \verb"gp", as shown below: \begin{itemize} \item[$\circ$] For the first variable $x$, \verb"gp" is equal to \begin{tabular}{ |c | c | c |c| } \hline 2 & 4 & 1 & 13 \\ \hline \end{tabular}. Therefore $I$ is of type $(4,13)$ with respect to $x$. Hence, in order to obtain the canonical form with respect to $x$, one has to divide the second generator by $x^{4-1} = x^3$ and the first generator by $x^{13-2} = x^{11}$. After these computations we get $I_1 = (x^2, xy^7, y^7z^{10})$. Note that $I_1$ is in the canonical form w.r.t. $x$. \item[$\circ$] For the second variable $y$, \verb"gp" is equal to \begin{tabular}{ |c | c | c |c| } \hline 3 & 7 & 2 & 7 \\ \hline \end{tabular}. Similarly as above, one has to divide the second and the third generator by $y^6$, and hence we get $I_2 = (x^2, xy, yz^{10})$. Again, $I_2$ is in the canonical form w.r.t. $y$ and $x$. \item[$\circ$] For the last variable $z$, \verb"gp" is equal to \begin{tabular}{ |c | c | } \hline 3 & 10 \\ \hline \end{tabular}. We divide the third generator of $I_2$ by $z^9$ and we get our final result $I' = (x^2, xy, yz)$, which is in the canonical form with respect to all variables.
\end{itemize} } \end{example} Based on the above idea, we construct two procedures: \verb"putIn" and \verb"canonical" $-$ the first one constructs the sorted list \verb"gp", and the second one computes the canonical form of a monomial ideal. The proof of correctness and termination is trivial. The procedures are written in the $\textsc{Singular}$ language. \begin{verbatim}
proc putIn(intvec v, int power, int nrgen)
{
  // v stores pairs (nrgen, power) sorted increasingly by power;
  // insert the new pair while keeping the list sorted
  if(size(v) == 1)        // v is still empty: store the first pair
  {
    v[1] = nrgen; v[2] = power;
    return(v);
  }
  int i,j;
  if(power <= v[2])       // smallest power: shift all pairs to the right
  {
    for(j = size(v)+2; j >= 3; j--)
    {
      v[j] = v[j-2];
    }
    v[1] = nrgen; v[2] = power;
    return(v);
  }
  if(power >= v[size(v)]) // largest power: append the pair at the end
  {
    v[size(v)+1] = nrgen;
    v[size(v)+1] = power; // size(v) has grown by one, so this appends power
    return(v);
  }
  // general case: shift the larger pairs and insert the new one
  for(j = size(v)+2; (j >= 4) && (power < v[j-2]); j = j-2)
  {
    v[j] = v[j-2]; v[j-1] = v[j-3];
  }
  v[j] = power; v[j-1] = nrgen;
  return(v);
}

proc canonical(ideal I)
{
  int i,j,k;
  intvec gp;
  ideal m;
  intvec v;
  v = 0:nvars(basering);
  for(i = 1; i <= nvars(basering); i++)
  {
    gp = 0;
    v[i] = 1;  // deg(., v) now returns the degree in the variable x_i
    // collect the pairs (generator index, power of x_i), sorted by power
    for(j = 1; j <= size(I); j++)
    {
      if(deg(I[j],v) >= 1)
      {
        gp = putIn(gp, deg(I[j],v), j);
      }
    }
    k = 0;
    if(size(gp) == 2)     // a single power of x_i occurs: replace it by x_i
    {
      I[gp[1]] = I[gp[1]]/(var(i)^(gp[2]-1));
    }
    else
    {
      for(j = 1; j <= size(gp)-2;)
      {
        k++;              // the k-th smallest power is replaced by x_i^k
        I[gp[j]] = I[gp[j]]/(var(i)^(gp[j+1]-k));
        j = j+2;
        // generators with the same power of x_i get the same new power k
        while((j <= size(gp)-2) && (gp[j-1] == gp[j+1]))
        {
          I[gp[j]] = I[gp[j]]/(var(i)^(gp[j+1]-k));
          j = j+2;
        }
      }
      if(j == size(gp)-1) // handle the last pair
      {
        if(gp[j-1] == gp[j+1])
        {
          I[gp[j]] = I[gp[j]]/(var(i)^(gp[j+1]-k));
        }
        else
        {
          k++;
          I[gp[j]] = I[gp[j]]/(var(i)^(gp[j+1]-k));
        }
      }
    }
    v[i] = 0;
  }
  return(I);
}
\end{verbatim}
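For readers without access to $\textsc{Singular}$, the same relabeling can be sketched in a few lines of Python. This is an illustrative sketch, not the procedure above: a monomial is represented by its exponent vector (so $x^4y^7$ in $\mathbb{Q}[x,y]$ is \verb"(4, 7)"), and for each variable the distinct positive exponents $k_1<\ldots<k_s$ occurring among the generators are replaced by $1<\ldots<s$; for a factor $\nicefrac{I}{J}$ one applies it to the combined list $G(I)\cup G(J)$.

```python
def canonical_form(gens):
    """Canonical form of a monomial ideal given by exponent vectors.

    For each variable, the distinct positive exponents k_1 < ... < k_s
    occurring among the generators are replaced by 1 < ... < s.
    """
    gens = [list(g) for g in gens]
    for i in range(len(gens[0])):
        powers = sorted({g[i] for g in gens if g[i] > 0})
        relabel = {k: j + 1 for j, k in enumerate(powers)}
        for g in gens:
            if g[i] > 0:
                g[i] = relabel[g[i]]
    return [tuple(g) for g in gens]

# The ideal I = (x^13, x^4*y^7, y^7*z^10) from the example above:
print(canonical_form([(13, 0, 0), (4, 7, 0), (0, 7, 10)]))
# -> [(2, 0, 0), (1, 1, 0), (0, 1, 1)], i.e. I' = (x^2, x*y, y*z)
```

The sketch processes the variables independently, mirroring the observation that the relabeling with respect to one variable does not disturb the canonical form with respect to the others.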
{ "timestamp": "2014-04-08T02:03:14", "yymm": "1402", "arxiv_id": "1402.5826", "language": "en", "url": "https://arxiv.org/abs/1402.5826", "subjects": "Commutative Algebra (math.AC)", "title": "Depth and Stanley Depth of the Canonical Form of a factor of monomial ideals" }
https://arxiv.org/abs/1305.6104
On node distributions for interpolation and spectral methods
A scaled Chebyshev node distribution is studied in this paper. It is proved that the node distribution is optimal for interpolation in $C_M^{s+1}[-1,1]$, the set of $(s+1)$-times differentiable functions whose $(s+1)$-th derivatives are bounded by a constant $M>0$. Node distributions for computing spectral differentiation matrices are proposed and studied. Numerical experiments show that the proposed node distributions yield results with higher accuracy than the most commonly used Chebyshev-Gauss-Lobatto node distribution.
\section{Introduction} Choosing nodes is important in interpolating a function and in solving differential or integral equations by pseudospectral methods. Given a sufficiently smooth function, if the nodes are not suitably chosen, then the interpolation polynomials may fail to converge to the function as the number of nodes tends to infinity. A well-known example is Runge's phenomenon: if one uses equi-spaced nodes to interpolate Runge's function $f(x)=\frac{1}{1+25x^2}$ over the interval $[-1,1]$, then the errors of Lagrange polynomial interpolation blow up to infinity as the number of nodes increases (see, e.g., \cite{FB}). Let $f$ be a continuous function on $[-1,1]$, let $\bm{c} := (c_i)_{i=0}^{s}$, $c_i\in [-1,1]$, and let $L_{\bm{c}}(f)(x)$ be the Lagrange interpolation polynomial of $f$ over the nodes $(c_i)_{i=0}^{s}$. It is well-known from interpolation theory that \begin{equation} \max_{x\in [-1,1]}|L_{\bm{c}}(f)(x)-f(x)| \le (1+\Lambda(\bm{c}))\max_{x\in [-1,1]}|P^*(x) - f(x)|, \end{equation} where $P^*(x)$ is the best polynomial approximation of degree $s$ and $\Lambda(\bm{c})$ is the Lebesgue constant corresponding to the node distribution $\bm{c}=(c_i)_{i=0}^{s}$. The Lebesgue constant $\Lambda(\bm{c})$ indicates how far the Lagrange interpolation polynomial $L_{\bm{c}}(f)(x)$ is from the best polynomial approximation of degree $s$. Lebesgue constants have been studied extensively in the literature (see, e.g., \cite{Brutman}, \cite{FB}, \cite{Henry}, \cite{Rack}, \cite{Smith}, \cite{Trefethen}, \cite{Hes98}, and references therein). It is of interest to find a node distribution for which the Lebesgue constant is minimal among all node distributions with the same number of nodes. Such a node distribution, if it exists, is called an optimal node distribution. It is known that for a given number of nodes, the optimal node distribution may not be unique.
If one requires these nodes to include the boundary points, then the optimal node distribution is unique (cf. \cite{Henry}). However, finding such node distributions is not an easy task. The most commonly used node distribution in practice is the Chebyshev-Gauss-Lobatto nodes, the extrema of the Chebyshev polynomial $T_{s}$ of the first kind over $[-1,1]$, i.e., \begin{equation} \label{eq5} c_i = \cos(\frac{i\pi}{s}),\qquad i=0,...,s. \end{equation} This node distribution is also referred to as Chebyshev points. In \cite{Hes98} the Lebesgue constant of this node distribution was studied. It was proved that the Lebesgue constant for the Chebyshev-Gauss-Lobatto nodes in \eqref{eq5} satisfies the estimate (see, e.g., \cite{FB}, \cite{Hes98}) \begin{equation} \label{eq42} \Lambda^{CGL}(n) = \frac{2}{\pi}\bigg(\ln n + \gamma + \ln\frac{8}{\pi}\bigg) + O\big(\frac{1}{n^2}\big), \end{equation} where $\gamma\approx 0.577215$ is the Euler-Mascheroni constant and $n$ is the number of nodes. Although the Chebyshev-Gauss-Lobatto node distribution works well in practice, it is not optimal in the sense of minimizing the Lebesgue constant among all node distributions with the same number of nodes. It is well-known that for each function $f$ there is an optimal node distribution for interpolating that function, and this optimal node distribution varies from function to function. When $f$ is known, there are algorithms for finding an optimal node distribution for interpolating $f$. However, these algorithms are not efficient in practice, and in many cases they are not applicable since the function to be interpolated is not known. This is the case, for instance, when $f$ is a solution to a differential or an integral equation.
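The estimate \eqref{eq42} is easy to check numerically. The following Python sketch estimates the Lebesgue constant of the Chebyshev-Gauss-Lobatto nodes by densely sampling the Lebesgue function in barycentric form; the sample count and the comparison tolerance are ad hoc choices, and whether $n$ counts the nodes or the polynomial degree $s$ affects the formula only at order $1/n$.

```python
import math

def lebesgue_constant_cgl(s, samples=5000):
    """Estimate the Lebesgue constant of the s+1 Chebyshev-Gauss-Lobatto
    nodes c_i = cos(i*pi/s) by dense sampling of the Lebesgue function."""
    nodes = [math.cos(i * math.pi / s) for i in range(s + 1)]
    # barycentric weights for these nodes: (-1)^i, halved at the two endpoints
    w = [(-1.0) ** i for i in range(s + 1)]
    w[0] *= 0.5
    w[s] *= 0.5
    lam = 1.0
    for k in range(samples):
        theta = math.pi * (k + 0.5) / samples  # never hits a node exactly
        x = math.cos(theta)
        terms = [w[i] / (x - nodes[i]) for i in range(s + 1)]
        # Lebesgue function: sum of |l_i(x)| in barycentric form
        lam = max(lam, sum(abs(t) for t in terms) / abs(sum(terms)))
    return lam

s = 20
gamma = 0.57721566490153286  # Euler-Mascheroni constant
estimate = lebesgue_constant_cgl(s)
asymptotic = 2.0 / math.pi * (math.log(s) + gamma + math.log(8.0 / math.pi))
print(estimate, asymptotic)  # the two values agree to roughly two digits
```

The barycentric form $\ell_i(x) = \big(w_i/(x-c_i)\big)\big/\sum_m w_m/(x-c_m)$ avoids forming the node polynomial explicitly and is numerically stable for this node family.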
It was proved that the optimal Lebesgue constant satisfies the following estimate (see, e.g., \cite{Vertesi}) \begin{equation} \label{eq43} \Lambda^{min}(n) = \frac{2}{\pi}\bigg(\ln n + \gamma + \ln\frac{4}{\pi}\bigg) + O(\frac{1}{(\ln n)^{1/3}}). \end{equation} From equations \eqref{eq42} and \eqref{eq43} one can see that the Lebesgue constant of the Chebyshev-Gauss-Lobatto nodes is very close to the optimal one. In \cite{Gunttner} the Lebesgue constant for a scaled Chebyshev node distribution was studied. These nodes are obtained by scaling the zeros of the Chebyshev polynomial $T_{s+1}(x)$. In particular, the scaled Chebyshev nodes $(c_i)_{i=0}^s$ in \cite{Gunttner} are defined as follows \begin{equation} \label{eq6} c_i = \cos\bigg(\frac{2i+1}{2(s+1)}\pi\bigg)\bigg[\cos\bigg(\frac{1}{2(s+1)}\pi\bigg)\bigg]^{-1},\qquad i=0,...,s. \end{equation} The Lebesgue constant of the scaled Chebyshev node distribution satisfies the following estimate (see, e.g., \cite{Gunttner}, \cite{Smith}) \begin{equation} \Lambda^{sC}(n) = \frac{2}{\pi}\bigg(\ln n + \gamma + \ln\frac{8}{\pi} - \frac{2}{3}\bigg) + O\big(\frac{1}{\ln n}\big). \end{equation} Note that $\ln\frac{8}{\pi} - \frac{2}{3} \approx \ln\frac{4}{\pi}+0.0265$, while $\ln\frac{8}{\pi} \approx \ln\frac{4}{\pi} + 0.6931$. Thus, for ``large'' $n$, the Lebesgue constants of the scaled Chebyshev points are closer to the optimal Lebesgue constants than those of the Chebyshev-Gauss-Lobatto points. The scaled Chebyshev nodes are often mentioned as the optimal choice in practice for interpolation (cf. \cite{Henry}). However, to the author's knowledge, there is no justification for the optimality of this choice in any sense. In practice, both Chebyshev-Gauss-Lobatto nodes and scaled Chebyshev nodes are used for interpolation and pseudospectral methods.
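For concreteness, the scaled Chebyshev nodes \eqref{eq6} are easy to generate, and one can check numerically that they contain the endpoints $\pm 1$ and that their node polynomial $\prod_i(x-c_i)$ has a smaller sup-norm on $[-1,1]$ than the one built on the Chebyshev-Gauss-Lobatto nodes with the same number of nodes. The following Python sketch does this by dense sampling; the grid size is an ad hoc choice.

```python
import math

def scaled_chebyshev_nodes(s):
    """Scaled Chebyshev nodes: zeros of T_{s+1} divided by cos(pi/(2(s+1))),
    so that the extreme nodes land exactly on +1 and -1."""
    kappa = math.cos(math.pi / (2 * (s + 1)))
    return [math.cos((2 * i + 1) * math.pi / (2 * (s + 1))) / kappa
            for i in range(s + 1)]

def sup_node_polynomial(nodes, samples=100001):
    """max over [-1,1] of |prod_i (x - c_i)|, estimated on a uniform grid."""
    sup = 0.0
    for k in range(samples):
        x = -1.0 + 2.0 * k / (samples - 1)
        p = 1.0
        for c in nodes:
            p *= x - c
        sup = max(sup, abs(p))
    return sup

s = 8
scaled = scaled_chebyshev_nodes(s)                       # runs from 1 down to -1
cgl = [math.cos(i * math.pi / s) for i in range(s + 1)]  # also contains +-1
print(sup_node_polynomial(scaled) < sup_node_polynomial(cgl))  # -> True
```

Both families contain the endpoints, so both are admissible for the boundary-constrained minimization problem discussed in the next section; the comparison illustrates that the scaled nodes are the better of the two in the sup-norm sense.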
In this paper, we study node distributions for interpolation and pseudospectral methods over the class of functions $$ C_M^{s+1}[-1,1]:=\{f\in C^{s+1}[-1,1]: \max_{x\in[-1,1]}|f^{(s+1)}(x)|\le M\}. $$ It turns out that the scaled Chebyshev nodes are optimal for interpolation over $C_M^{s+1}[-1,1]$. We also construct node distributions for computing differentiation matrices over $C_M^{s+1}[-1,1]$. Numerical experiments with the new node distributions in Section \ref{sec4} show that these nodes yield better results than the Chebyshev-Gauss-Lobatto points. The paper is organized as follows. In Section \ref{sec2} we study node distributions for interpolation and prove that the scaled Chebyshev nodes are optimal for interpolation over $C_M^{s+1}[-1,1]$. In Section \ref{sec3} node distributions for calculating differentiation matrices are proposed and justified. In Section \ref{sec4} numerical experiments are carried out with the new node distributions. \section{Interpolation} \label{sec2} Let $L_{\bm{c}}(f)$ denote the Lagrange interpolation polynomial of a sufficiently smooth function $f$ over the nodes $\bm{c}=(c_i)_{i=0}^s$, $c_i\in [-1,1]$. The error of Lagrange interpolation is given by the formula (see, e.g., \cite{Kress}) \begin{equation} \label{eq1} L_{\bm{c}}(f)(x)-f(x) = \frac{f^{(s+1)}(\xi(x))}{(s+1)!}\prod_{i=0}^s (x-c_i),\qquad \xi(x)\in[-1,1]. \end{equation} We are interested in finding a node distribution $\bm{c}$ for which the interpolation error $\|L_{\bm{c}}(f)-f\|_\infty$ is as small as possible. Here, $\|g\|_\infty$ denotes the sup-norm of $g$ over the interval $[-1,1]$, i.e., $\|g\|_\infty:=\sup_{x\in[-1,1]}|g(x)|$. Note that the element $\xi(x)$ in \eqref{eq1} depends on $x$ and on $(c_i)_{i=0}^s$ in a nontrivial manner.
Therefore, to minimize $\|L_{\bm{c}}(f)-f\|_\infty$ one often tries to find a distribution of $(c_i)_{i=0}^s$, $c_i\in [-1,1]$, such that \begin{equation} \label{eq2} \max_{-1\le x\le 1}\bigg|\prod_{i=0}^s (x-c_i)\bigg|\to \min. \end{equation} It is well-known that the zeros of $T_{s+1}(x)$, the Chebyshev polynomial of degree $s+1$ of the first kind over $[-1,1]$, are the solution to \eqref{eq2}. These zeros are given by the formula \begin{equation} \label{eq3} c_i = \cos\bigg(\frac{2i+1}{2(s+1)}\pi\bigg),\qquad i=0,...,s. \end{equation} In practice one often wants to have the boundary points among the interpolation nodes, i.e., $c_0=-1$ and $c_s = 1$. Let \begin{equation} \mathcal{C}:=\{\bm{c}=(c_i)_{i=0}^s: -1=c_0<c_1<...<c_{s-1}<c_s = 1\}. \end{equation} The following question arises: for which set of points $(c_i)_{i=0}^{s}\in \mathcal{C}$ do we have \begin{equation} \label{eq4} \max_{-1\le x\le 1}\bigg|\prod_{i=0}^{s} (x-c_i)\bigg|\to \min_{\bm{c}\in \mathcal{C}}\, ? \end{equation} The answer is given in the following result: \begin{theorem} \label{theorem1} Let $(\bar{c}_i)_{i=0}^s\in \mathcal{C}$ be a solution to \eqref{eq4}, i.e., \begin{equation} \label{eq8} \max_{-1\le x\le 1}\bigg|\prod_{i=0}^{s}(x-\bar{c}_i)\bigg| = \min_{\bm{c}\in \mathcal{C}} \max_{-1\le x\le 1}\bigg|\prod_{i=0}^{s}(x-c_i)\bigg|. \end{equation} Then this solution is unique, and $(\bar{c}_i)_{i=0}^s$ are given by \begin{equation} \label{eq9} \bar{c}_i = \cos\bigg(\frac{2i+1}{2(s+1)}\pi\bigg)\bigg[\cos\bigg(\frac{\pi}{2(s+1)}\bigg)\bigg]^{-1},\qquad i=0,...,s. \end{equation} \end{theorem} \begin{proof} Let $$ P(x) := \prod_{i=0}^s (x-\bar{c}_i), $$ where $\bar{c}_i$, $i=0,...,s$, are defined by \eqref{eq9}.
Then \begin{equation} \begin{split} P\bigg(\frac{x}{\cos(\frac{\pi}{2(s+1)})}\bigg) &= \bigg[\frac{1}{\cos(\frac{\pi}{2(s+1)})}\bigg]^{s+1}\prod_{i=0}^s \bigg(x-\cos\big(\frac{2i+1}{2(s+1)}\pi\big)\bigg) \\ &= \frac{1}{2^s}\bigg[\frac{1}{\cos(\frac{\pi}{2(s+1)})}\bigg]^{s+1}T_{s+1}(x), \end{split} \end{equation} where $T_{s+1}(x)$ is the Chebyshev polynomial of the first kind over $[-1,1]$ of degree $s+1$; here we used that $T_{s+1}$ has leading coefficient $2^s$. Therefore, \begin{equation} \label{eq7} P(x) = \frac{1}{2^s}\bigg[\frac{1}{\cos(\frac{\pi}{2(s+1)})}\bigg]^{s+1}T_{s+1}\bigg(x\cos(\frac{\pi}{2(s+1)})\bigg). \end{equation} Note that $\big(\cos(\frac{i\pi}{s+1})\big)_{i=1}^{s}$ are all the critical points of the Chebyshev polynomial $T_{s+1}(x)$ and $|T_{s+1}(x)|\le 1$, $\forall x\in [-1,1]$. This and equation \eqref{eq7} imply that all the critical points of $P(x)$ are \begin{equation} \label{eq13} d_i = \frac{\cos(\frac{i\pi}{s+1})}{\cos(\frac{\pi}{2(s+1)})},\qquad i=1,...,s, \end{equation} and we have \begin{equation} \label{au4eq1} P(d_i) = \frac{1}{2^s}\bigg[\frac{1}{\cos(\frac{\pi}{2(s+1)})}\bigg]^{s+1} (-1)^i,\qquad i=1,...,s. \end{equation} Therefore, \begin{equation} \label{eq10} \min_{(c_i)_{i=0}^s\in\mathcal{C}}\max_{-1\le x\le 1}\bigg|\prod_{i=0}^{s}(x-c_i)\bigg| \le \max_{x\in[-1,1]}|P(x)| = \frac{1}{2^s}\bigg[\frac{1}{\cos(\frac{\pi}{2(s+1)})}\bigg]^{s+1}. \end{equation} Let $(\tilde{c}_i)_{i=0}^s$ be a solution to \eqref{eq4}. Let us prove that $\tilde{c}_i=\bar{c}_i$, where $\bar{c}_i$, $i=0,...,s$, are defined by \eqref{eq9}. Let \begin{equation} \label{eq14} Q(x) := \prod_{i=0}^s(x-\tilde{c}_i),\qquad (\tilde{c}_i)_{i=0}^s \in \mathcal{C}, \end{equation} and \begin{equation} \label{eq11} R(x) := Q(x) - P(x). \end{equation} Since $P(x)$ and $Q(x)$ are monic polynomials of degree $s+1$, one concludes from \eqref{eq11} that $R(x)$ is a polynomial of degree at most $s$. Since $(\tilde{c}_i)_{i=0}^s$ is a solution to \eqref{eq4} and \eqref{eq10} holds, one gets \begin{equation} \label{eq12} |Q(x)| \le \frac{1}{2^s}\bigg[\frac{1}{\cos(\frac{\pi}{2(s+1)})}\bigg]^{s+1} ,\qquad \forall x\in [-1,1]. \end{equation} From \eqref{au4eq1}, \eqref{eq11}, and \eqref{eq12}, one obtains \begin{equation} R(d_i)(-1)^i \ge 0,\qquad i=1,...,s. \end{equation} Thus, the polynomial $R(x)$ has at least $s-1$ zeros on the interval $[d_s,d_{1}]\subset (-1,1)$. Since $\bar{c}_0=\tilde{c}_0=-1$ and $\bar{c}_s=\tilde{c}_s=1$, it is clear that $-1$ and $1$ are zeros of $Q(x)$ and $P(x)$. Thus, $-1$ and $1$ are also zeros of $R(x)$. Therefore, $R(x)$ has a total of $s+1$ zeros on the interval $[-1,1]$. This and the fact that $R(x)$ is a polynomial of degree at most $s$ imply that $R(x)\equiv 0$. Thus, $Q(x)\equiv P(x)$ and, therefore, $\tilde{c}_i=\bar{c}_i$, $i=0,...,s$. Theorem \ref{theorem1} is proved. \end{proof} \begin{remark}{\rm From the proof of Theorem \ref{theorem1}, one gets \begin{equation} \label{au5e2} \min_{\bm{c}\in \mathcal{C}} \max_{-1\le x\le 1}\bigg|\prod_{i=0}^{s}(x-c_i)\bigg| = \max_{-1\le x\le 1}\bigg|\prod_{i=0}^{s}(x-\bar{c}_i)\bigg|= \frac{1}{2^s}\bigg[\frac{1}{\cos(\frac{\pi}{2(s+1)})}\bigg]^{s+1}. \end{equation} } \end{remark} Let \begin{equation} \label{eq15} C^{s+1}_M:=\big\{f\in C^{s+1}[-1,1]: \max_{x\in [-1,1]}|f^{(s+1)}(x)|\le M \big\},\quad M>0. \end{equation} Let $L_{\bm{c}}(f)$ denote the Lagrange interpolation polynomial of $f$ over the nodes $\bm{c}:=(c_i)_{i=0}^s$. We are interested in solving the following problem \begin{equation} \label{eq40} \min_{\bm{c}\in\mathcal{C}}\sup_{f\in C^{s+1}_M} \|f-L_{\bm{c}}(f)\|_\infty. \end{equation} We have the following result: \begin{theorem} \label{theorem2} Let $\bar{\bm{c}}:=(\bar{c}_i)_{i=0}^s$, where $(\bar{c}_i)_{i=0}^s$ are defined by \eqref{eq6}. Then $\bar{\bm{c}}$ is the solution to problem \eqref{eq40}. \end{theorem} \begin{proof} Let $\bm{c} = (c_i)_{i=0}^s\in \mathcal{C}$ be an arbitrary node distribution over $[-1,1]$. The error of Lagrange interpolation is given by the formula (see, e.g., \cite{Kress}) \begin{equation} \label{eq28.0} f(x) - L_{\bm{c}}(f)(x) = \frac{f^{(s+1)}(\xi(x))}{(s+1)!}\prod_{i=0}^s (x-c_i),\qquad \xi(x) \in [-1,1]. \end{equation} From equations \eqref{eq28.0} and \eqref{au5e2} one gets \begin{equation} \label{eq17} \begin{split} \min_{\bm{c}\in\mathcal{C}}\sup_{f\in C^{s+1}_M} \|f-L_{\bm{c}}(f)\|_\infty &\le \sup_{f\in C^{s+1}_M} \|f-L_{\bar{\bm{c}}}(f)\|_\infty \\ &\le \frac{M}{(s+1)!}\max_{x\in [-1,1]}\bigg|\prod_{i=0}^s (x-\bar{c}_i)\bigg| \\ &= \frac{M}{2^s(s+1)!}\bigg[\frac{1}{\cos(\frac{\pi}{2(s+1)})}\bigg]^{s+1}. \end{split} \end{equation} Let $P_{0}(x)$ be a polynomial of degree $s+1$ such that $P_0^{(s+1)}(x) \equiv M$. Using formula \eqref{eq28.0} for $f(x)=P_{0}(x)$, one gets \begin{equation} \label{jl31eq1} P_0(x) - L_{\bm{c}}(P_0)(x) = \frac{P_0^{(s+1)}(\xi(x))}{(s+1)!}\prod_{i=0}^s (x-c_i) = \frac{M}{(s+1)!}\prod_{i=0}^s (x-c_i). \end{equation} From equations \eqref{jl31eq1} and \eqref{au5e2} we have \begin{equation} \label{eq16} \begin{split} \min_{\bm{c}\in\mathcal{C}}\sup_{f\in C^{s+1}_M} \|f-L_{\bm{c}}(f)\|_\infty &\ge \min_{\bm{c}\in\mathcal{C}} \|P_0-L_{\bm{c}}(P_0)\|_\infty\\ & = \frac{M}{(s+1)!}\min_{\bm{c}\in\mathcal{C}}\max_{x\in[-1,1]}\bigg|\prod_{i=0}^s (x - c_i)\bigg| \\ & = \frac{M}{2^s(s+1)!}\bigg[\frac{1}{\cos(\frac{\pi}{2(s+1)})}\bigg]^{s+1}. \end{split} \end{equation} From equations \eqref{eq16} and \eqref{eq17} we conclude that \begin{equation} \min_{\bm{c}\in\mathcal{C}}\sup_{f\in C^{s+1}_M} \|f-L_{\bm{c}}(f)\|_\infty = \frac{M}{2^s(s+1)!}\bigg[\frac{1}{\cos(\frac{\pi}{2(s+1)})}\bigg]^{s+1}, \end{equation} and $\bar{\bm{c}}:=(\bar{c}_i)_{i=0}^s$, where $(\bar{c}_i)_{i=0}^s$ are defined by \eqref{eq6}, is the solution to \eqref{eq40}. Theorem \ref{theorem2} is proved.
\end{proof} \begin{remark} Since the solution to \eqref{eq4} is unique, it follows from the proof of Theorem \ref{theorem2} that the solution to \eqref{eq40} is unique. Theorem \ref{theorem2} says that the node distribution from equation \eqref{eq6} is optimal in the sense of \eqref{eq40}. Namely, the node distribution defined by \eqref{eq6} is optimal for interpolation over the set of functions $C_M^{s+1}[-1,1]$. \end{remark} \section{Spectral differentiation matrices} \label{sec3} In many problems one is interested in finding the first derivative $f'$ of a function $f\in C^1[-1,1]$ based on values of $f$ at $(c_i)_{i=0}^s$, $c_i\in [-1,1]$. One approach is to use $(L_{\bm{c}}(f))'$ as an approximation to $f'$ where $L_{\bm{c}}(f)$ is the Lagrange interpolation polynomial of the function $f$ over the nodes $(c_i)_{i=0}^s$. Thus, the following problem arises \begin{equation} \label{eq28} \min_{\bm{c}\in \mathcal{C}}\max_{x\in [-1,1]}|(L_{\bm{c}}(f))'(x) - f'(x)|,\qquad f\in C^1[-1,1]. \end{equation} Unfortunately, a solution $\bm{c}$ to \eqref{eq28}, if it exists, depends on $f$ in general and is not easy to find even when $f$ belongs to the class of functions $C^{s+1}_M[-1,1]$. Let $f\in C^{s+1}[-1,1]$ and $\bm{c}=(c_i)_{i=0}^s$ be a node distribution over $[-1,1]$. Let $\lambda_k$ satisfy \begin{equation} \label{eq18.0} f'(c_k) - (L_{\bm{c}}(f))'(c_k) = \lambda_k \frac{d}{dx}\prod_{i=0}^s (x-c_i)\bigg|_{x=c_k},\qquad 0\le k\le s. \end{equation} It is clear that $(c_i)_{i=0}^s$ are $s+1$ zeros of the function (cf. \eqref{eq1}) \begin{equation} \label{eq29} R_k(x) := f(x) - L_{\bm{c}}(f)(x) - \lambda_k \prod_{i=0}^s (x-c_i). \end{equation} According to Rolle's Theorem the function $R_k'(x)$ has at least $s$ zeros $(\eta_{ki})_{i=1}^s$ on the interval $[c_0,c_s]$ with $\eta_{ki}\not = c_j$, $j=0,...,s$. Therefore, $R_k'(x)$ has at least $s+1$ zeros on the interval $[-1,1]$ which are $(\eta_{ki})_{i=1}^s$ and $c_k$ (see \eqref{eq18.0}).
Thus, by Rolle's Theorem, there exists $\zeta_k\in (-1,1)$ such that $R_k^{(s+1)}(\zeta_k) = 0$. This and \eqref{eq29} imply \begin{equation} 0 = R_k^{(s+1)}(\zeta_k) = f^{(s+1)}(\zeta_k) - \lambda_k (s+1)!. \end{equation} Therefore, $\lambda_k = f^{(s+1)}(\zeta_k)/(s+1)!$ and we get from \eqref{eq18.0} the following relations \begin{equation} \label{eq18} f'(c_k) - (L_{\bm{c}}(f))'(c_k) = \frac{f^{(s+1)}(\zeta_k)}{(s+1)!} \frac{d}{dx}\prod_{i=0}^s (x-c_i)\bigg|_{x=c_k},\qquad k=0,...,s. \end{equation} Note that if $f\in C^{s+2}[-1,1]$, then equation \eqref{eq18} can also be obtained by differentiating equation \eqref{eq1} with respect to $x$ and assigning $x=c_k$. Fix $\bar{x}\in [-1,1]$ and $\bar{x}\not=c_i$, $i=0,...,s$. Let \begin{equation} R(x) := f(x) - L_{\bm{c}}(f)(x) - \frac{f^{(s+1)}(\xi(\bar{x}))}{(s+1)!} \prod_{i=0}^s (x-c_i). \end{equation} Then $R(x)$ has $s+2$ zeros which are $(c_i)_{i=0}^s$ and $\bar{x}$ (cf. \eqref{eq1}). Thus, by Rolle's Theorem, the function $R'(x)$ has at least $s+1$ zeros on $[-1,1]$. Let $(\eta_i)_{i=0}^s$ be zeros of $R'(x)$. Then one gets \begin{equation} \label{eq30} f'(\eta_k) - (L_{\bm{c}}(f))'(\eta_k) = \frac{f^{(s+1)}(\xi_{\bar{x}})}{(s+1)!} \frac{d}{dx}\prod_{i=0}^s (x-c_i)\bigg|_{x=\eta_k},\qquad k=0,...,s. \end{equation} From \eqref{eq18} and \eqref{eq30} one may ask whether or not there exists a constant $C>0$ such that \begin{equation} \label{jl30e1} |f'(x) - (L_{\bm{c}}(f))'(x)| \le C \bigg|\frac{d}{dx}\prod_{i=0}^s (x-c_i)\bigg|,\quad x\in [-1,1],\quad f\in C^{s+1}_M. \end{equation} Unfortunately, the answer to this question is negative. It is because zeros of the right side of \eqref{jl30e1} are, in general, not zeros of the left side of \eqref{jl30e1}. In particular, if $\xi$ is a zero of the right side of \eqref{jl30e1} but is not a zero of the left side of \eqref{jl30e1}, then equation \eqref{jl30e1} does not hold for any $C>0$ when $x=\xi$. 
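The failure of \eqref{jl30e1} is easy to observe numerically. The following sketch (our own illustration, not from the paper; it assumes Chebyshev-Gauss-Lobatto nodes, $f(x)=e^x$, and a fine uniform sampling grid in place of a true supremum) shows that the ratio of the left side of \eqref{jl30e1} to the right side spikes near zeros of $\frac{d}{dx}\prod_{i=0}^s(x-c_i)$, so no uniform constant $C$ can work:

```python
import numpy as np

# Sample the derivative error of Lagrange interpolation against the envelope
# |w'(x)|, where w(x) = prod_i (x - c_i).  Near zeros of w' the ratio
# error/|w'| is much larger than its typical size.
# Assumptions (ours): s = 6, Chebyshev-Gauss-Lobatto nodes, f(x) = e^x.
s = 6
c = np.cos(np.pi * np.arange(s + 1) / s)   # Chebyshev-Gauss-Lobatto nodes
f = df = np.exp                            # f and its derivative coincide

coef = np.polyfit(c, f(c), s)              # interpolating polynomial, degree s
x = np.linspace(-0.999, 0.999, 20001)
err = np.abs(df(x) - np.polyval(np.polyder(coef), x))   # |f' - (L_c f)'|

w = np.poly(c)                             # monic polynomial with zeros c_i
dw = np.abs(np.polyval(np.polyder(w), x))  # |w'(x)|

ratio = err / np.maximum(dw, 1e-300)
blowup = ratio.max() / np.median(ratio)    # large: the ratio is unbounded
```

A large value of \texttt{blowup} indicates that the ratio is far from bounded, consistent with the discussion above.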
To minimize the interpolation error $\|f' - (L_{\bm{c}}(f))'\|_\infty$, taking into account formulae \eqref{eq18} and \eqref{eq30}, we consider the following problem \begin{equation} \label{eq21} \max_{-1\le x\le 1} \bigg|\frac{d}{dx}\prod_{i=0}^s (x-c_i)\bigg| \longrightarrow \min_{\bm{c}\in \mathcal{C}}. \end{equation} From the theory of Chebyshev polynomials one concludes that the solution to problem \eqref{eq21} is a node distribution $(c_i)_{i=0}^s$ such that \begin{equation} \label{eq22} \frac{d}{dx}\prod_{i=0}^s (x-c_i) = \frac{(s+1)T_{s}(x)}{2^{s-1}}. \end{equation} Thus, we want to find $(c_i)_{i=0}^s$ so that \begin{equation} \label{eq31} \prod_{i=0}^s (x-c_i) = \int_0^x \frac{(s+1)T_{s}(\xi)}{2^{s-1}} d\xi + C, \end{equation} where $C$ is a suitable constant. To find $(c_i)_{i=0}^s$ satisfying \eqref{eq31} we need the following lemma: \begin{lemma} \label{lemma1} Let $T_{s}$ be the Chebyshev polynomial of degree $s$ over the interval $[-1,1]$. Then \begin{equation} \label{eq23} \int T_{s}(x)dx = \frac{1}{2}\bigg(\frac{T_{s+1}(x)}{s+1} - \frac{T_{s-1}(x)}{s-1}\bigg) + C,\qquad C=const. \end{equation} \end{lemma} \begin{proof} One has \begin{equation} \label{eq24} \begin{split} \int T_{s}(x)dx &= \int \cos(s \arccos(x))dx = \int \cos(s v)d\cos v,\qquad v:=\arccos x\\ & = -\int \cos(sv)\sin v dv \\ & = -\frac{1}{2}\int [\sin((s+1)v) - \sin((s-1)v)] dv\\ & = \frac{1}{2}\bigg[\frac{\cos((s+1)v)}{s+1} - \frac{\cos((s-1)v)}{s-1}\bigg] + C\\ & = \frac{1}{2}\bigg[\frac{T_{s+1}(x)}{s+1} - \frac{T_{s-1}(x)}{s-1}\bigg]+C. \end{split} \end{equation} Lemma \ref{lemma1} is proved. \end{proof} From \eqref{eq31} and \eqref{eq23} we need to find $(c_i)_{i=0}^s$ so that \begin{equation} \label{eq25} \prod_{i=0}^s (x-c_i) =\frac{s+1}{2^{s-1}} \bigg[ \frac{1}{2}\bigg(\frac{T_{s+1}(x)}{s+1} - \frac{T_{s-1}(x)}{s-1}\bigg) + C\bigg], \end{equation} where $C$ is a constant. 
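The identity of Lemma \ref{lemma1} can be checked mechanically with the Chebyshev class of NumPy's polynomial module; the following sketch (ours, not from the paper) verifies that the derivative of the right side of \eqref{eq23} is $T_s$ for several degrees:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Check that d/dx [ (1/2)(T_{s+1}/(s+1) - T_{s-1}/(s-1)) ] = T_s for s >= 2,
# which is equivalent to the antiderivative formula of the lemma.
for s in range(2, 12):
    F = 0.5 * (Chebyshev.basis(s + 1) / (s + 1) - Chebyshev.basis(s - 1) / (s - 1))
    residual = (F.deriv() - Chebyshev.basis(s)).coef
    assert np.allclose(residual, 0.0, atol=1e-12)
```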
However, it is not clear if there is a constant $C$ so that there exists $(c_i)_{i=0}^s$, $c_i\in [-1,1]$, satisfying equation \eqref{eq25}. Consider the case when $s$ is odd. Let \begin{equation} \label{eq26} P_{s+1}(x):= \frac{s+1}{2^{s-1}} \bigg[ \frac{1}{2}\bigg(\frac{T_{s+1}(x)}{s+1} - \frac{T_{s-1}(x)}{s-1}\bigg) + \frac{1}{s^2-1}\bigg]. \end{equation} We have the following result: \begin{theorem} \label{thm3.2xx} Let $s>0$ be an odd integer. The polynomial $P_{s+1}(x)$ defined in \eqref{eq26} has $s+1$ distinct zeros $(c_i)_{i=0}^s$ on the interval $[-1,1]$, $-1=c_0<c_1<...<c_s=1$. These zeros are symmetric about 0. \end{theorem} \begin{proof} When $s$ is odd, the polynomials $T_{s+1}(x)$ and $T_{s-1}(x)$ are even functions on $[-1,1]$. Thus, $P_{s+1}(x)$ is an even function and its zeros are symmetric about 0. Since $s+1$ and $s-1$ are even, one has $T_{s+1}(\pm 1) = T_{s-1}(\pm 1) = 1$. Thus, \begin{equation} \label{eq36} P_{s+1}(1) = P_{s+1}(-1) = 0. \end{equation} From \eqref{eq23} and \eqref{eq26} one gets $P'_{s+1}(x) = \frac{s+1}{2^{s-1}}T_s(x)$. Thus, $P'_{s+1}(x)$ and $T_s(x)$ share the same zeros which are $(x_i)_{i=0}^{s-1}$, $x_i=\cos(\frac{2i + 1}{2s}\pi)$. Since $T_{s+1}(x) + T_{s-1}(x) = 2xT_s(x)$ and $(x_i)_{i=0}^{s-1}$ are zeros of $T_s(x)$, one gets $T_{s+1}(x_i) + T_{s-1}(x_i) = 2x_iT_s(x_i)=0$. Thus, $T_{s+1}(x_i) = - T_{s-1}(x_i)$, $i=0,...,s-1$, and from \eqref{eq26} one gets \begin{equation} \label{eq33} \begin{split} P_{s+1}(x_i) &= \frac{s+1}{2^{s-1}} \bigg[ \frac{1}{2}\bigg(\frac{T_{s+1}(x_i)}{s+1} - \frac{T_{s-1}(x_i)}{s-1}\bigg) + \frac{1}{s^2-1}\bigg]\\ &= \frac{s+1}{2^{s-1}} \bigg[ \frac{T_{s+1}(x_i)}{2}\bigg(\frac{1}{s+1} + \frac{1}{s-1}\bigg) + \frac{1}{s^2-1}\bigg]\\ & = \frac{s+1}{2^{s-1}(s^2 - 1)} (sT_{s+1}(x_i)+1),\qquad i=0,...,s-1.
\end{split} \end{equation} From the relation $\arccos(x_i) = \frac{2i+1}{2s}\pi$, one gets \begin{equation} \label{eq34} \begin{split} T_{s+1}(x_i) &= \cos((s+1)\arccos(x_i)) \\ &= \cos\bigg(\frac{2i+1}{2}\pi + \frac{2i+1}{2s}\pi\bigg) = (-1)^{i+1}\sin\bigg(\frac{2i+1}{2s}\pi\bigg). \end{split} \end{equation} Note that $$ x - \sin(\frac{\pi x}{2}) < 0,\quad \forall x\in(0,1). $$ Thus, \begin{equation} \label{eq35} \sin\bigg(\frac{2i+1}{2s}\pi\bigg) \ge \sin\bigg(\frac{\pi}{2s}\bigg) > \frac{1}{s},\qquad i=0,...,s-1,\quad s>1. \end{equation} From \eqref{eq33}--\eqref{eq35} one obtains \begin{equation} \label{eq37} \begin{split} (-1)^{i+1} P_{s+1}(x_i) &= (-1)^{i+1} \frac{s}{(s-1)2^{s-1}}\bigg(T_{s+1}(x_i) + \frac{1}{s}\bigg)\\ &= \frac{s}{(s-1)2^{s-1}}\bigg( \sin\bigg(\frac{2i+1}{2s}\pi\bigg) + \frac{(-1)^{i+1}}{s}\bigg)\\ &> \frac{s}{(s-1)2^{s-1}}\bigg(\frac{1}{s} + \frac{(-1)^{i+1}}{s}\bigg) \ge 0,\qquad i=0,...,s-1. \end{split} \end{equation} Thus, $P_{s+1}(x)$ has at least $s-1$ zeros on the interval $[x_{s-1},x_0]\subset (-1,1)$. This and \eqref{eq36} imply that $P_{s+1}(x)$ has $s+1$ zeros on the interval $[-1,1]$. Theorem \ref{thm3.2xx} is proved. \end{proof} Consider the case when $s$ is an even integer. \begin{theorem} \label{theorem3} Let $s>0$ be an even integer. For any constant $C$ the polynomial \begin{equation} \label{eq25.x} g_{s+1}(x) = \frac{s+1}{2^{s-1}} \bigg[ \frac{1}{2}\bigg(\frac{T_{s+1}(x)}{s+1} - \frac{T_{s-1}(x)}{s-1}\bigg) + C\bigg], \end{equation} has at most $s$ zeros on the interval $[-1,1]$. \end{theorem} \begin{proof} From Lemma \ref{lemma1} and \eqref{eq25.x} one gets $g'_{s+1}(x) = \frac{s+1}{2^{s-1}}T_s(x)$. Thus, zeros of $g'_{s+1}(x)$ are zeros of $T_s(x)$ which are $(x_i)_{i=0}^{s-1}$, $x_i = \cos(\frac{2i+1}{2s}\pi)$, $i=0,...,s-1$. Since $g'_{s+1}(x)$ does not change sign on the intervals $[x_{i+1},x_i]$, $i=0,...,s-2$, the function $g_{s+1}(x)$ has at most $s-1$ zeros on $[x_{s-1},x_0]$ for any given $C$.
One has \begin{equation} g'_{s+1}(x) = \frac{s+1}{2^{s-1}}T_s(x) \ge 0,\qquad x\in [-1,x_{s-1}]\cup [x_0,1]. \end{equation} Thus \begin{equation} \min_{x\in[-1,x_{s-1}]} g_{s+1}(x) \ge \frac{s+1}{2^{s-1}}\bigg(\frac{1}{s^2-1} + C\bigg) > \frac{s+1}{2^{s-1}}\bigg(-\frac{1}{s^2-1} + C\bigg)\ge \max_{x\in [x_0,1]}g_{s+1}(x). \end{equation} Thus, for any given $C$ there exists at most one zero of $g_{s+1}(x)$ on $[-1,x_{s-1}]\cup [x_0,1]$. Therefore, the function $g_{s+1}(x)$ has at most $s$ zeros on the interval $[-1,1]$. \end{proof} \begin{remark} Theorem \ref{theorem3} says that when $s$ is even, there does not exist a constant $C$ and $(c_i)_{i=0}^s$, $c_i\in [-1,1]$, so that equation \eqref{eq25} holds. Thus, when $s$ is even, there does not exist a node distribution $(c_i)_{i=0}^s$, $-1=c_0<c_1<...<c_s=1$, so that equation \eqref{eq22} holds. \end{remark} Let us propose two possible node distributions $(c_i)_{i=0}^s$ for computing differentiation matrices when $s$ is even. Let \begin{equation} \label{eq41} \tilde{Q}_{s+1}(x) := \frac{s+1}{2^{s}} \bigg(\frac{T_{s+1}(x)}{s+1} - \frac{T_{s-1}(x)}{s-1} + \frac{2x}{s^2-1}\bigg). \end{equation} As Theorem \ref{thm3.4} below shows, $\tilde{Q}_{s+1}(x)$ has $s+1$ zeros $(c_i)_{i=0}^s$ on the interval $[-1,1]$, $-1=c_0<c_1<...<c_s=1$, so one can use this node distribution for the computation of differentiation matrices. \begin{theorem} \label{thm3.4} Let $s$ be an even integer. The polynomial $\tilde{Q}_{s+1}(x)$ has $s+1$ zeros $(c_i)_{i=0}^{s}$ on the interval $[-1,1]$, $-1=c_0<c_1<...<c_s = 1$. \end{theorem} \begin{proof} When $s$ is even, we have $T_{s+1}(\pm 1)=\pm 1$, and $T_{s-1}(\pm 1)=\pm 1$. Thus, one gets \begin{equation} \label{eq38} \tilde{Q}_{s+1}(-1) = \tilde{Q}_{s+1}(1) = 0. \end{equation} Let $x_i=\cos(\frac{2i+1}{2s}\pi)$, $i=0,...,s-1$. Then $(x_i)_{i=0}^{s-1}$ are zeros of $T_{s}(x)$. By similar arguments as in Theorem \ref{thm3.2xx} (cf.
\eqref{eq37}) one gets \begin{equation} \begin{split} (-1)^{i+1} \tilde{Q}_{s+1}(x_i) &= (-1)^{i+1} \frac{s}{(s-1)2^{s-1}}\bigg(T_{s+1}(x_i) + \frac{x_i}{s}\bigg)\\ &= \frac{s}{(s-1)2^{s-1}}\bigg( \sin\bigg(\frac{2i+1}{2s}\pi\bigg) + \frac{(-1)^{i+1}x_i}{s}\bigg)\\ &> \frac{s}{(s-1)2^{s-1}}\bigg( \frac{1}{s} - \frac{|x_i|}{s}\bigg) > 0,\qquad i=0,...,s-1. \end{split} \end{equation} Thus, $\tilde{Q}_{s+1}(x)$ has at least $s-1$ zeros on the interval $[x_{s-1},x_0]$. Taking into account \eqref{eq38}, one concludes that $\tilde{Q}_{s+1}(x)$ has $s+1$ zeros on the interval $[-1,1]$. Theorem \ref{thm3.4} is proved. \end{proof} If $(c_i)_{i=0}^s$ are chosen as zeros of $\tilde{Q}_{s+1}(x)$, then we have \begin{equation} \frac{d}{dx} \prod_{i=0}^s(x-c_i) = \frac{s+1}{2^{s-1}}\bigg(T_s(x) + \frac{1}{(s-1)(s+1)}\bigg). \end{equation} Thus, for this choice of $(c_i)_{i=0}^s$ equation \eqref{eq22} is not satisfied. Let us discuss another possible choice for $(c_i)_{i=0}^s$ when $s$ is even. Consider the following polynomial \begin{equation} Q_{s+1}(x) := \frac{s+1}{2^{s}} \bigg(\frac{T_{s+1}(x)}{s+1} - \frac{T_{s-1}(x)}{s-1}\bigg). \end{equation} Since $s$ is even, the degrees $s-1$ and $s+1$ are odd, so $T_{s-1}(x)$ and $T_{s+1}(x)$ are odd functions. Thus, $Q_{s+1}(x)$ is an odd function. Therefore, zeros of $Q_{s+1}(x)$ are symmetric about $0$. By similar arguments as in Theorem~\ref{thm3.2xx} one can show that $Q_{s+1}(x)$ has $s+1$ zeros. Note that not all these $s+1$ zeros are in $[-1,1]$. Let $d_0<d_1<...<d_s$ be zeros of $Q_{s+1}(x)$ and let $c_i := \frac{d_i}{d_s}$. Then \begin{equation} \label{eq44} -1=c_0<c_1<...<c_s = 1,\quad c_i = \frac{d_i}{d_s},\qquad i=0,...,s. \end{equation} Note that if $(c_i)_{i=0}^s$ are chosen by \eqref{eq44}, then equation \eqref{eq22} does not hold. In fact, for this choice of $(c_i)_{i=0}^s$ one has \begin{equation} \frac{d}{dx} \prod_{i=0}^s (x-c_i) = \bigg(\frac{1}{d_s}\bigg)^{s} \frac{s+1}{2^{s-1}} T_{s}(d_s x).
\end{equation} \section{Numerical experiments} \label{sec4} \subsection{Interpolation} In this section we will carry out numerical experiments to compare the Lebesgue constants of the following node distributions: 1. Chebyshev-Gauss-Lobatto points \begin{equation} c_i = \cos(\frac{i}{s}\pi),\qquad i=0,...,s. \end{equation} 2. Scaled Chebyshev points \begin{equation} c_i = \frac{\cos(\frac{2i+1}{2s+2}\pi)}{\cos(\frac{1}{2s+2}\pi)},\qquad i=0,...,s. \end{equation} 3. Equidistant nodes \begin{equation} c_i = - 1 + \frac{2i}{s},\qquad i=0,...,s. \end{equation} The Lebesgue constant $\Lambda(\bm{c})$ can be computed by the formula (see, e.g., \cite{FB}) \begin{equation} 1 + \Lambda(\bm{c}) = \max_{x\in [-1,1]} F_{\bm{c}}(x),\qquad F_{\bm{c}}(x):= \sum_{k=0}^s |F_{k}(x)|, \end{equation} where \begin{equation} F_{k}(x) = \prod_{j=0 \atop{j\not=k}}^s (x-c_j)\bigg/ \prod_{j=0\atop{j\not=k}}^s (c_k-c_j),\qquad k=0,...,s. \end{equation} In all experiments, we denote by CGL the numerical solutions obtained by using the Chebyshev-Gauss-Lobatto node distribution. Figure \ref{fig1} plots the function $F_{\bm{c}}(x)$ based on equidistant nodes, Chebyshev-Gauss-Lobatto nodes, and the scaled Chebyshev nodes studied in this paper. From Figure \ref{fig1} one can see that the scaled Chebyshev node distribution yields a function $F_{\bm{c}}(x)$ with minimal sup-norm among the three node distributions. \begin{figure}[!h!t!b!] \centerline{ \begin{tabular}{c} \mbox{\includegraphics[scale=0.74]{figure1.eps}} \end{tabular} } \caption{\it Plots of $F_{\bm{c}}(x)$ when $s=4$ and $s=6$. } \label{fig1} \end{figure} Table \ref{table1} below presents Lebesgue constants for the three node distributions for various $s$. From Table \ref{table1} one concludes that the scaled Chebyshev nodes yield the smallest Lebesgue constants among the three node distributions. One can also see that the Lebesgue constant $\Lambda(\bm{c})$ of the equidistant node distribution increases very fast as $s$ increases.
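The computation of the Lebesgue constants reported in Table \ref{table1} can be sketched as follows (assumptions are ours: the convention $\Lambda(\bm{c})=\max_x F_{\bm{c}}(x)-1$ used above, a fine uniform grid in place of the exact maximum over $[-1,1]$, and $s=8$):

```python
import numpy as np

# F_c(x) = sum_k |F_k(x)| evaluated on a fine grid; Lambda(c) = max F_c - 1,
# following the convention of the formula above.
def lebesgue(c, npts=4001):
    x = np.linspace(-1.0, 1.0, npts)
    F = np.zeros_like(x)
    for k in range(len(c)):
        others = np.delete(c, k)
        num = np.prod(x[:, None] - others[None, :], axis=1)
        F += np.abs(num / np.prod(c[k] - others))
    return F.max() - 1.0

s = 8
i = np.arange(s + 1)
cgl = np.cos(i * np.pi / s)                                      # CGL nodes
scaled = np.cos((2 * i + 1) * np.pi / (2 * s + 2)) / np.cos(np.pi / (2 * s + 2))
equi = -1.0 + 2.0 * i / s                                        # equidistant
vals = [lebesgue(scaled), lebesgue(cgl), lebesgue(equi)]
```

The ordering \texttt{vals[0] < vals[1] < vals[2]} reproduces the ordering of the three rows of Table \ref{table1}.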
\begin{table}[ht] \centering \small \begin{tabular}{|c@{\hspace{2mm}} @{\hspace{2mm}}|c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}|} \hline Node distribution & $s= 6$ & $s= 8$ &$s= 10$ &$s=12$ &$s=14$ &$s=16$ &$s=18$\\ \hline Equi-spaced &3.6 &9.9 &28.9 &88.3 &282.2 &933.5 &3170.1\\ \hline Chebyshev-Gauss-Lobatto &1.1 &1.3 &1.4 &1.5 &1.6 &1.7 &1.8\\ \hline Scaled Chebyshev &0.8 &0.9 &1.1 &1.2 &1.3 &1.3 &1.4\\ \hline \end{tabular} \caption{Lebesgue constants} \label{table1} \end{table} Figure \ref{fig2} plots absolute values of errors of Lagrange interpolation $|f(x) - L_{\bm{c}}(f)(x)|$ for equidistant nodes, Chebyshev-Gauss-Lobatto nodes, and the scaled Chebyshev nodes when $f(x)=e^{x}$ (left) and $f(x) = \cos(x)$ (right). From Figure \ref{fig2} one concludes that the scaled Chebyshev node distribution is the best among the three node distributions in this experiment. \begin{figure}[!h!t!b!] \centerline{ \begin{tabular}{c} \mbox{\includegraphics[scale=0.74]{figure2.eps}} \end{tabular} } \caption{\it Plots of interpolating errors for $f(x)=e^{x}$ (left) and $f(x) = \cos(x)$ (right). } \label{fig2} \end{figure} \subsection{Numerical differentiation} The Lagrange interpolation polynomial $L_{\bm{c}}(f)$ of $f$ over the nodes $(c_i)_{i=0}^s$ is given by \begin{equation} \label{eq39} L_{\bm{c}}(f)(x) = \sum_{i=0}^s f(c_i)\ell_{\bm{c},i}(x),\quad \ell_{\bm{c},i}(x) = \prod_{j=0\atop{j\not=i}}^s (x-c_j)/ \prod_{j=0\atop{j\not=i}}^s (c_i-c_j). \end{equation} Therefore, \begin{equation} (L_{\bm{c}}(f))'(x) = \sum_{i=0}^s f(c_i)\ell'_{\bm{c},i}(x). \end{equation} This implies \begin{equation} (L_{\bm{c}}(f))'(c_k) = \sum_{i=0}^s f(c_i)\ell'_{\bm{c},i}(c_k),\qquad k=0,...,s.
\end{equation} These equations can be rewritten as \begin{equation} \label{au01e1} \begin{pmatrix} (L_{\bm{c}}(f))'(c_0)\\ (L_{\bm{c}}(f))'(c_1)\\ \vdots \\ (L_{\bm{c}}(f))'(c_s) \end{pmatrix} = \begin{pmatrix} d_{00} &d_{01} &\cdots &d_{0s}\\ d_{10} &d_{11} &\cdots &d_{1s}\\ \vdots &\vdots &\ddots &\vdots\\ d_{s0} &d_{s1} &\cdots &d_{ss} \end{pmatrix} \begin{pmatrix} f(c_0)\\ f(c_1)\\ \vdots\\ f(c_s) \end{pmatrix}, \qquad d_{ij} := \ell'_{\bm{c},j}(c_i). \end{equation} The matrix $D=(d_{ij})_{i,j=0}^s$ is called a differentiation matrix. The derivatives $f'(c_i)$, $i=0,...,s$, are approximated by $(L_{\bm{c}}(f))'(c_i)$ which are computed by \eqref{au01e1}. Let us derive formulae for computing the differentiation matrix $D=(d_{ij})_{i,j=0}^s$. From \eqref{eq39}, one gets \begin{equation} \ell'_{\bm{c},i}(x) = \ell_{\bm{c},i}(x) \sum_{j=0\atop{j\not=i}}^s \frac{1}{x-c_j},\qquad x\not = c_j. \end{equation} Thus, \begin{gather} d_{ji}=\ell'_{\bm{c},i}(c_j) = \prod_{k=0\atop{k\not=i,j}}^s (c_j-c_k)\bigg/ \prod_{k=0\atop{k\not=i}}^s (c_i-c_k),\quad i\not=j,\quad i,j=0,...,s,\\ d_{ii}=\ell'_{\bm{c},i}(c_i) = \sum_{k=0\atop{k\not=i}}^s \frac{1}{c_i-c_k},\qquad i=0,...,s. \end{gather} One can find similar formulae in \cite{Trefethen}. When $(c_i)_{i=0}^s$ are Chebyshev-Gauss-Lobatto points, the differentiation matrix $D = (d_{ij})_{i,j=0}^s$ is given by (see, e.g., \cite{Trefethen}) \begin{gather} d_{00}= \frac{2s^2 + 1}{6},\qquad d_{ss} = -\frac{2s^2 + 1}{6},\\ d_{jj} = \frac{-c_j}{2(1-c_j^2)},\qquad j=1,...,s-1,\\ d_{ij} = \frac{a_i}{a_j} \frac{(-1)^{i+j}}{c_i - c_j},\qquad i\not=j,\quad i,j=0,...,s, \end{gather} where \begin{equation} a_i = \bigg \{\begin{matrix} 2,& i = 0\quad \text{or}\quad i=s,\\ 1,& \text{otherwise}. \end{matrix} \end{equation} Let us do some numerical experiments with the computation of the first derivative of a function $f$ using different types of node distributions.
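Before turning to these experiments, the entry formulas above can be sanity-checked in code. The following sketch (the helper \texttt{diff\_matrix} is ours, not code from the paper) assembles $D$ from barycentric weights, which is algebraically equivalent to the product formulas above, and verifies that $D$ differentiates $x^3$ exactly on Chebyshev-Gauss-Lobatto nodes:

```python
import numpy as np

# D = (d_ij) with d_ij = l'_j(c_i).  Using barycentric weights
# w_k = 1/prod_{l != k}(c_k - c_l):
#   d_ij = (w_j / w_i) / (c_i - c_j)      for i != j,
#   d_ii = sum_{k != i} 1 / (c_i - c_k).
def diff_matrix(c):
    n = len(c)
    w = np.array([1.0 / np.prod(c[k] - np.delete(c, k)) for k in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (c[i] - c[j])
        D[i, i] = np.sum(1.0 / (c[i] - np.delete(c, i)))
    return D

s = 5
c = np.cos(np.arange(s + 1) * np.pi / s)   # Chebyshev-Gauss-Lobatto nodes
D = diff_matrix(c)
# spectral differentiation is exact on polynomials of degree <= s
err = np.max(np.abs(D @ c**3 - 3 * c**2))
```

For these nodes the corner entries of $D$ agree with the explicit values $d_{00}=(2s^2+1)/6$ and $d_{ss}=-(2s^2+1)/6$ quoted above.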
These node distributions are Chebyshev-Gauss-Lobatto points, the equi-spaced distribution, the scaled Chebyshev points, and the node distributions developed in Section \ref{sec3}. In our experiments, the node distribution from Theorem \ref{thm3.2xx} is denoted by ND1 and the node distribution from Theorem \ref{thm3.4} is denoted by ND2. Figure \ref{figure3} plots the errors $|f'(c_i)- (L_{\bm{c}}(f))'(c_i)|$ for the four node distributions for the function $f(x)=e^{x}$. From Figure \ref{figure3} one can see that the node distribution ND1 studied in this paper yields the best results in the sup-norm. The approximations of $(f'(c_i))_{i=0}^s$ with equidistant nodes are very good when $c_i$ is close to $0$ but not when $c_i$ is close to the boundary points $-1$ and $1$. The accuracy of numerical solutions from all node distributions in this experiment is high even with ten nodes. \begin{figure}[!h!t!b!] \centerline{ \begin{tabular}{c} \mbox{\includegraphics[scale=0.74]{figure3.eps}} \end{tabular} } \caption{\it Plots of $|f'(c_i)- (L_{\bm{c}}(f))'(c_i)|$, $i=0,...,s$, for $f(x)=e^{x}$. } \label{figure3} \end{figure} Figure \ref{figure4} plots the errors $|f'(c_i)- (L_{\bm{c}}(f))'(c_i)|$ for the four node distributions for the function $f(x)=e^{x^2}$. From Figure \ref{figure4} one can see that the result obtained from the node distribution ND1 is the best in the sup-norm. Again, the numerical approximations to $f'(c_i)$, $i=0,...,s$, with equidistant nodes are very good when $c_i$ is close to $0$ but not when $c_i$ is close to the boundary points $-1$ and $1$. The accuracy of numerical solutions in this experiment is not very high, since the derivatives of $e^{x^2}$ grow much faster than those of $e^x$ on $[-1,1]$. \begin{figure}[!h!t!b!] \centerline{ \begin{tabular}{c} \mbox{\includegraphics[scale=0.74]{figure4.eps}} \end{tabular} } \caption{\it Plots of $|f'(c_i)- (L_{\bm{c}}(f))'(c_i)|$, $i=0,...,s$, for $f(x)=e^{x^2}$.
} \label{figure4} \end{figure} Figures \ref{figure5} and \ref{figure6} plot numerical results for the four node distributions: Chebyshev-Gauss-Lobatto node distribution, the scaled Chebyshev node distribution, the equi-spaced nodes, and the node distribution ND2. Figure \ref{figure5} plots the numerical errors for computing $f'(c_i)$, $i=0,...,s$, for $f(x) = e^x$, on $[-1,1]$. It is clear from Figure \ref{figure5} that the ND2 node distribution yields the best result and the equi-spaced node distribution yields the worst result. From Figure \ref{figure5} we conclude that Chebyshev-Gauss-Lobatto nodes work better than the scaled Chebyshev nodes in this experiment. \begin{figure}[!h!t!b!] \centerline{ \begin{tabular}{c} \mbox{\includegraphics[scale=0.74]{figure5.eps}} \end{tabular} } \caption{\it Plots of $|f'(c_i)- (L_{\bm{c}}(f))'(c_i)|$, $i=0,...,s$, for $f(x)=e^{x}$. } \label{figure5} \end{figure} Figure \ref{figure6} plots the results for $f=e^{x^2}$. Again, it follows from Figure \ref{figure6} that the ND2 yields the best numerical result. It is clear from Figure \ref{figure6} that Chebyshev-Gauss-Lobatto nodes work better than the scaled Chebyshev nodes. The equi-spaced node distribution is the worst among these node distributions. \begin{figure}[!h!t!b!] \centerline{ \begin{tabular}{c} \mbox{\includegraphics[scale=0.74]{figure6.eps}} \end{tabular} } \caption{\it Plots of $|f'(c_i)- (L_{\bm{c}}(f))'(c_i)|$, $i=0,...,s$, for $f(x)=e^{x^2}$. } \label{figure6} \end{figure} \subsection{Solving a Volterra equation of the first kind} Let us do a numerical experiment with solving the following equation \begin{equation} \label{eqj30} \int_0^t K(t,\xi)u(\xi) d\xi = f(t),\qquad 0\le t\le 1. \end{equation} To solve equation \eqref{eqj30} we approximate $u(\xi)$ by its Lagrange interpolation polynomial $P(\xi)$ over the nodes $(c_i)_{i=0}^s$ and solve for $(u(c_i))_{i=0}^s$ from equation \eqref{eqj30}. 
In particular, we have \begin{equation} u(\xi) \approx P(\xi)=\sum_{j=0}^s \ell_j(\xi)u(c_j),\qquad \xi \in [0,1]. \end{equation} From equation \eqref{eqj30} one gets \begin{equation} \label{eqjl25} \begin{split} f(c_i) &\approx \int_0^{c_i} K(c_i,\xi)P(\xi)d\xi = \sum_{j=0}^s u(c_j) \int_0^{c_i}\ell_j(\xi)K(c_i,\xi) d\xi,\quad i=0,...,s. \end{split} \end{equation} Equation \eqref{eqjl25} can be written as \begin{equation} \label{au6e1} A_su_s \approx f_s, \end{equation} where \begin{gather} u_s = (u(c_0), u(c_1),...,u(c_s))^T,\quad f_s = (f(c_0), f(c_1),...,f(c_s))^T,\\ A_s = (a_{ij})_{i,j=0}^s,\qquad a_{ij} = \int_0^{c_i} \ell_j(\xi) K(c_i,\xi)d\xi\label{eqjl26.1}. \end{gather} Taking into account \eqref{au6e1}, we solve for $\tilde{u}_s$ from the linear algebraic system \begin{equation} A_s\tilde{u}_s = f_s, \end{equation} and take $\tilde{u}_s$ as an approximation to $u_s=(u(c_0), u(c_1),...,u(c_s))^T$. In our experiments we choose $K(t,\xi) = e^{t-\xi}$ and $u(t)=\cos(\pi t),\, t\in [0,1]$. We compare four node distributions: Chebyshev-Gauss-Lobatto points, the scaled Chebyshev points, and the two node distributions ND1 and ND2 developed in Section \ref{sec3}. The elements $a_{ij}$, $i,j=0,...,s$, in equation \eqref{eqjl26.1} are computed by means of quadrature formulas. In fact, we used the function {\it quad} in MATLAB to compute these coefficients. Figure \ref{figure7} plots the results obtained by using Chebyshev-Gauss-Lobatto points, the scaled Chebyshev points, and the node distribution ND1 developed in Section \ref{sec3} for the case when $s=9$. From Figure \ref{figure7} we can see that the node distribution ND1 yields the best result in the sup-norm. The scaled Chebyshev node distribution yields the worst result in sup-norm in this experiment. \begin{figure}[!h!t!b!] \centerline{ \begin{tabular}{c} \mbox{\includegraphics[scale=0.68]{volt2.eps}} \end{tabular} } \caption{\it Plots of absolute values of errors for $u(\xi)=\cos(\pi \xi)$.
} \label{figure7} \end{figure} Figure \ref{figure8} plots the results obtained by using Chebyshev-Gauss-Lobatto points, the scaled Chebyshev points, and the node distribution ND2 developed in Section \ref{sec3} for the case when $s=10$. We can see from Figure \ref{figure8} that the result obtained by using the node distribution ND2 is the best in the sup-norm. Again, the result obtained by using the scaled Chebyshev node distribution is the worst in sup-norm in this experiment. \begin{figure}[!h!t!b!] \centerline{ \begin{tabular}{c} \mbox{\includegraphics[scale=0.68]{volt1.eps}} \end{tabular} } \caption{\it Plots of absolute values of errors for $u(\xi)=\cos(\pi \xi)$. } \label{figure8} \end{figure}
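The collocation scheme of this subsection can be sketched as follows. The assumptions here are ours, not steps stated in the paper: a manufactured solution $u(t)=\cos(\pi t)$ with kernel $K(t,\xi)=e^{t-\xi}$, scaled Chebyshev nodes mapped to $[0,1]$, a composite trapezoid rule in place of MATLAB's {\it quad}, and a replacement of the degenerate collocation row at $c_0=0$ (where both sides of \eqref{eqj30} vanish) by the condition $u(0)=f'(0)/K(0,0)$, obtained by differentiating \eqref{eqj30} at $t=0$:

```python
import numpy as np

K = lambda t, xi: np.exp(t - xi)          # kernel (our test choice)
u_exact = lambda t: np.cos(np.pi * t)     # manufactured solution

s = 9
i = np.arange(s + 1)
ch = np.cos((2 * i + 1) * np.pi / (2 * s + 2)) / np.cos(np.pi / (2 * s + 2))
c = np.sort((1.0 + ch) / 2.0)             # scaled Chebyshev nodes on [0,1]

def lagrange(j, xi):
    # Lagrange basis polynomial l_j evaluated at the points xi
    others = np.delete(c, j)
    return np.prod(xi[:, None] - others[None, :], axis=1) / np.prod(c[j] - others)

def trap(g, a, b, n=2000):
    # composite trapezoid rule, a stand-in for an adaptive quadrature
    x = np.linspace(a, b, n + 1)
    y = g(x)
    return (b - a) / n * (y.sum() - 0.5 * (y[0] + y[-1]))

A = np.zeros((s + 1, s + 1))
f = np.zeros(s + 1)
for m in range(1, s + 1):                 # skip the degenerate row at c_0 = 0
    for j in range(s + 1):
        A[m, j] = trap(lambda xi: lagrange(j, xi) * K(c[m], xi), 0.0, c[m])
    f[m] = trap(lambda xi: K(c[m], xi) * u_exact(xi), 0.0, c[m])
A[0, 0] = 1.0                             # row 0: u(c_0) = f'(0)/K(0,0)
f[0] = u_exact(0.0)

u_num = np.linalg.solve(A, f)
err = np.max(np.abs(u_num - u_exact(c)))  # nodal error of the collocation
```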
https://arxiv.org/abs/math/0603630
Sharp bounds for eigenvalues of triangles
We prove that the first eigenvalue of the Dirichlet Laplacian for a triangle in the plane is bounded above by $\pi^2 L^2\over 9A^2$, where $L$ is the perimeter and $A$ is the area of this triangle. We show that the \mbox{constant 9} is optimal and that the optimal constant for the lower bound of the same form is 16. This gives a positive answer to a conjecture made by P. Freitas.
\section{Introduction} The purpose of this paper is to prove the following theorem. \begin{thm}\label{main} Let $T$ be a triangle in a plane of area $A$ and perimeter $L$. Then the first eigenvalue $\lambda_T$ of the Dirichlet Laplacian on $T$ satisfies \begin{gather} {\pi^2 L^2\over 16A^2}\leq\lambda_T\leq {\pi^2 L^2\over 9A^2}. \end{gather} The constants $9$ and $16$ are optimal. \end{thm} The lower bound was proved in a more general context in \cite{M}. In Section 6 we show that for ``tall'' isosceles triangles there is an asymptotic equality in the lower bound. Hence it is impossible to decrease the constant $16$. The upper bound was recently stated as a conjecture in \cite{F} and numerical evidence for its validity is given in \cite{AF}. Bounds of this form but with different constants have been the subject of many papers in the literature. The first eigenvalue of any doubly-connected domain is bounded above by the same fraction but with the constant $4$, see \cite{Po} and remarks in \cite{O}. There is also a sharper upper bound due to Freitas (\cite{F}) which is not of this form but it seems that in the worst case (``tall'' isosceles triangle) it gives the constant $6$, in the best (equilateral) $9$. It is worth noting that the constant $9$ cannot be improved since equilateral triangles give equality in the upper bound of Theorem \ref{main}. The spectral properties of a Dirichlet Laplacian on an arbitrary planar domain are important both in physics and in mathematics. Unfortunately, it is almost impossible to find the exact spectrum even for some simple classes of domains. Except for rectangles, balls and annuli, not much can be said in general. In the case of triangles the full spectrum is known only for equilateral and right triangles with smallest angles $\pi/4$ or $\pi/6$. For more information about these we refer the reader to \cite{Mc}. For all other triangles, the best we can hope to do is to give bounds for the eigenvalues, such as those given above.
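The equality case in the upper bound can be checked numerically. The following sketch (ours, not from the paper) verifies by central finite differences that the first eigenfunction of the equilateral triangle, written out explicitly in Section 2, has eigenvalue $16\pi^2/3$, and that this value coincides with $\pi^2 L^2/(9A^2)$ for the triangle with vertices $(0,0)$, $(1,0)$, $(1/2,\sqrt3/2)$:

```python
import numpy as np

# First eigenfunction of the equilateral triangle (formula from Section 2)
f = lambda x, y: (np.sin(4 * np.pi * y / np.sqrt(3))
                  - np.sin(2 * np.pi * (x + y / np.sqrt(3)))
                  + np.sin(2 * np.pi * (x - y / np.sqrt(3))))

L, A = 3.0, np.sqrt(3) / 4                # perimeter and area, side length 1
bound = np.pi**2 * L**2 / (9 * A**2)      # the upper bound of the theorem

# -Laplacian(f)/f at interior points should equal the bound (= 16 pi^2 / 3)
h = 1e-5
for (x, y) in [(0.5, 0.3), (0.4, 0.2), (0.55, 0.25)]:
    lap = (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
           - 4 * f(x, y)) / h**2
    assert abs(-lap / f(x, y) - bound) < 1e-3
assert np.isclose(bound, 16 * np.pi**2 / 3)
```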
Even though Theorem \ref{main} gives sharp bounds in the sense that the constants are the best possible given the form of the bound, there is certainly room for improvements. In fact, sharper lower bounds are already known, see \cite{F}. One of these bounds is good for both equilateral and ``tall'' triangles. It gives the constant $9$ for the first and $16$ for the second. The upper bound good in both cases is still unknown. To the best of our knowledge it is also not clear what the correct bound is for the isosceles triangle with base almost equal to half of the perimeter, but we think it should be $16$. By comparing our numerical results with the numerical studies contained in \cite{AF}, Section 5.1, we conjecture that \begin{con} Let $T$ be a triangle in a plane of area $A$ and perimeter $L$. Then the first eigenvalue $\lambda_T$ of the Dirichlet Laplacian on $T$ satisfies \begin{gather} {\pi^2L^2\over16A^2}+{7\sqrt3\pi^2\over12A}\leq\lambda_T\leq {\pi^2L^2\over12A^2}+{\sqrt3\pi^2\over3A}. \end{gather} \end{con} Here both bounds are of the form $$E_3(L,A,\theta)={4\pi^2\over \sqrt3A}+\theta {L^2-12\sqrt3A \over A^2}$$ considered in \cite{AF}. The lower bound, with $\theta=\pi^2/16$, is the best bound we can expect given this particular form. Indeed, this is the only bound which is sharper than the lower bound of Theorem \ref{main} and which might be true for ``tall'' triangles. The upper bound from our main result is also of this form, but with $\theta=\pi^2/9$. Hence the conjectured upper bound is sharper ($\theta=\pi^2/12$), and it is the best in the sense that the bound with $\theta=\pi^2/13$ is not valid. But, since only the constant $16$ can give a good upper bound for ``tall'' triangles, it is not possible to find a bound of the form $E_3$ which is good for both equilateral and ``tall'' triangles. Our proof of the upper bound from Theorem \ref{main} contains two main parts. The first deals with ``almost equilateral'' triangles.
That is, with the triangles for which the longest side is comparable to the shortest side. For these our strategy is to find a suitable test function $\psi$. That is, we try to find a function which is $0$ on the boundary of the triangle $T$ and apply the Rayleigh quotient to get the upper bound for $\lambda_T$. We get \begin{gather} \lambda_T\leq {\int_T |\nabla\psi|^2\over \int_T \psi^2}. \end{gather} This part of the proof is contained in Sections 2 to 5. Included in Section 2 are also some preliminary results. The second part of the proof, contained in Section 6, deals with ``tall'' triangles. These can be approximated by circular sectors for which the eigenvalues can be found explicitly. \section{Eigenfunctions and notation} An arbitrary triangle $T'$ can be rotated and rescaled to obtain a triangle $T$ with vertices $(0,0)$, $(1,0)$ and $(a,b)$. This, together with the fact that the bound in the main theorem is invariant under translations, rotations and scaling, allows us to restrict our attention to triangles with such vertices. We can also assume that the side contained in the $x$-axis is the shortest. Hence we have that $$a^2+b^2\geq1\,\,\, \text{and}\,\,\, a\leq1/2$$ for our triangles. We will denote the lengths of the other two sides by $M$ and $N$, with $N$ denoting the longer. We start with the first eigenfunction of an equilateral triangle, and we will proceed as in \cite{F}. Such a function is given by \begin{gather} f(x,y)=\sin\left(4\pi y\over \sqrt3\right)-\sin\left[2\pi\left( x+{y\over\sqrt3} \right)\right]+ \sin\left[2\pi\left( x-{y\over\sqrt3} \right) \right]. \end{gather} We can compose $f$ with a linear transformation to obtain a function $\phi$ which is equal to $0$ on the boundary of $T$. Namely, consider \begin{gather} \begin{split} \phi(x,y)&=f\left(x-{a-1/2\over b}y,{\sqrt3\over 2b}y\right)\\&= \sin\left(2\pi y\over b\right)-\sin\left[2\pi\left( x+{(1-a)y\over b} \right)\right]+ \sin\left[2\pi\left( x-{a y\over b} \right) \right].
\end{split} \end{gather} This function was used in \cite{F} to obtain the upper bound from the Rayleigh quotient. Since the function $f$ is the first eigenfunction of the Dirichlet Laplacian on an equilateral triangle and its eigenvalue gives equality in the main bound, it is reasonable to expect that by taking any linear transformation we can only decrease the constant $9$ in Theorem \ref{main}. Hence we want to find another eigenfunction of some other triangle. We will use the eigenfunctions of the equilateral triangle to find a test function for the right triangle with angles $\pi/3$ and $\pi/6$. In the recent paper \cite{Mc} the author constructs two families of eigenfunctions of the equilateral triangle. The antisymmetric mode has the property that it is $0$ on the altitude. Thus, such a function is also an eigenfunction for the right triangle. We can then take the antisymmetric eigenfunction corresponding to the smallest eigenvalue as our test function. A calculation leads to the following function \begin{gather} \begin{split} g(x,y)&=\sin\left( \sqrt3\pi y \right)\sin\left( \pi x/3 \right) \\&+ \sin\left( \sqrt3\pi y/3 \right)\sin\left( 5\pi x/3 \right) \\&+ \sin\left( 2\sqrt3\pi y/3 \right)\sin\left( 4\pi x/3 \right). \end{split} \end{gather} This function, as can easily be checked, is in fact an eigenfunction of the Dirichlet Laplacian on the triangle with vertices $(0,0)$, $(1,0)$ and $(0,\sqrt3)$. The corresponding eigenvalue gives a better bound than the one in Theorem \ref{main}; the constant is about $9.6$. Therefore, a linear transformation of this function should give a correct bound at least in a neighborhood of the point $(0,\sqrt3)$.
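The claims about $g$ can be checked numerically: each summand of $g$ is a product of sines and hence an eigenfunction of $-\Delta$, all three with the common eigenvalue $28\pi^2/9$, and $g$ vanishes on all three sides of the right triangle. A short Python sketch (the constant $\approx 9.6$ is recomputed at the end):

```python
import math

pi, r3 = math.pi, math.sqrt(3)

def g(x, y):
    s = math.sin
    return (s(r3*pi*y) * s(pi*x/3)
            + s(r3*pi*y/3) * s(5*pi*x/3)
            + s(2*r3*pi*y/3) * s(4*pi*x/3))

# -Laplacian of sin(q*x)*sin(r*y) is (q^2 + r^2) times the same product;
# all three summands of g share the eigenvalue 28*pi^2/9.
lams = [(pi/3)**2 + (r3*pi)**2,
        (5*pi/3)**2 + (r3*pi/3)**2,
        (4*pi/3)**2 + (2*r3*pi/3)**2]
assert all(abs(l - 28*pi**2/9) < 1e-9 for l in lams)

# g vanishes on the three sides of the triangle (0,0), (1,0), (0,sqrt(3))
for t in [i / 20 for i in range(21)]:
    assert abs(g(0.0, t * r3)) < 1e-9     # vertical leg
    assert abs(g(t, 0.0)) < 1e-9          # horizontal leg
    assert abs(g(1 - t, t * r3)) < 1e-9   # hypotenuse

# constant c in  lambda = pi^2 L^2 / (c A^2)  for this triangle
L, A = 3 + r3, r3 / 2
c = pi**2 * L**2 / ((28 * pi**2 / 9) * A**2)
assert abs(c - 9.6) < 0.1                 # c is about 9.6, as stated
```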
By applying a suitable linear transformation we get the second test function \begin{gather} \begin{split} \varphi_1(x,y)&=g\left( x-{ay\over b},{\sqrt3 y\over b} \right) \\&= \sin\left( 3\pi y\over b \right)\sin\left[\frac\pi3\left( x-{ay\over b} \right) \right] \\&+ \sin\left(\pi y\over b \right)\sin\left[ \frac{5\pi}3\left(x-{ay\over b} \right) \right] \\&+ \sin\left( 2\pi y\over b \right)\sin\left[ \frac{4\pi}3\left(x-{ay\over b} \right) \right]. \end{split} \end{gather} Similarly, we can obtain the last two test functions. One will be a linear transformation of the eigenfunction of the triangle with vertices $(0,0)$, $(1,0)$ and $(1,\sqrt3)$. The other a linear transformation of the eigenfunction of the triangle with the vertices $(0,0)$, $(1,0)$ and $(0,1/\sqrt3)$. We get: \begin{gather} \begin{split} \varphi_2(x,y)&= \sin\left( 3\pi y\over b \right)\sin\left[\frac\pi3\left( 1-x+{(a-1)y\over b} \right) \right] \\&+ \sin\left(\pi y\over b \right)\sin\left[ \frac{5\pi}3\left(1-x+{(a-1)y\over b} \right) \right] \\&+ \sin\left( 2\pi y\over b \right)\sin\left[ \frac{4\pi}3\left(1-x+{(a-1)y\over b} \right) \right]. \\ \varphi_3(x,y)&= \sin\left( 5\pi y\over 3b \right)\sin\left[\pi\left( x-{ay\over b} \right) \right] \\&+ \sin\left(4\pi y\over 3b \right)\sin\left[2\pi\left(x-{ay\over b} \right) \right] \\&+ \sin\left( \pi y\over 3b \right)\sin\left[3\pi\left(x-{ay\over b} \right) \right]. \end{split} \end{gather} Now we can take a linear combination of these test functions. That is, we consider \begin{equation} \psi(x,y)=\alpha\varphi_1(x,y)+\beta\varphi_2(x,y)+\gamma\varphi_3(x,y)+\varepsilon\phi(x,y), \end{equation} and we can calculate the Rayleigh quotient for this function. After optimizing over all possible values of $\alpha$, $\beta$, $\gamma$ and $\varepsilon$, this will give an appropriate bound for the first eigenvalue. 
To prove Theorem \ref{main} we have to check that \begin{gather} \lambda_T\leq {\int_T|\nabla\psi|^2\over\int_T\psi^2}\leq {\pi^2 L^2\over 9A^2}, \end{gather} for some $\alpha$, $\beta$, $\gamma$ and $\varepsilon$ (possibly depending on $T$). The last inequality is equivalent to \begin{equation}\label{in1} 9A^2\int_T|\nabla\psi|^2\leq \pi^2 L^2\int_T\psi^2. \end{equation} Since the function $\psi$ is given explicitly and is a trigonometric function, it is possible to find the exact values of these integrals, but the calculations are very cumbersome. For this reason we will do the long calculations in Mathe\-matica. However, we wish to emphasize the fact that all the calculations are done symbolically. By our assumptions we have $L=1+\sqrt{a^2+b^2}+\sqrt{(a-1)^2+b^2}$ and $A=b/2$. As a result of running Mathematica we get that to prove the inequality (\ref{in1}) we have to find $\alpha$, $\beta$, $\gamma$ and $\varepsilon$ such that the inequality \begin{small} \begin{gather}\label{horrible} \begin{split} &0\geq8041366333\times \\&\;\;\Big\{ \left( -1594323 - 1792090a + 531441(a^2+b^2) + 201600\left( 3 + a^2 + b^2 \right) {\pi }^2 \right)\alpha^2 \\&\;\;\;+ \left( -2854972 + 729208a + 531441(a^2+b^2) + 201600\left( 3 + (a-1)^2 + b^2 \right) {\pi }^2 \right)\beta^2 \\&\;\;\;+ \left( 531441 - 1792090a - 1594323(a^2+b^2) + 201600\left( 1 + 3a^2 + 3b^2 \right) {\pi }^2 \right) {\gamma }^2\Big\} \\&\;\;+ 5558192409369600\left( 1 -a + (a^2 + b^2) \right) \pi^2\varepsilon^2 \\&\;\;+67672797192\times \\&\;\;\;\Big\{ \left( 729{\sqrt{3}} \left( 454 -128a + 339(a^2+b^2) \right) + 24640\left( 4 -8a + 9(a^2+b^2) \right) \pi \right) \gamma \epsilon \\&\;\;\;\;+ \left( 729{\sqrt{3}} \left( 665 - 780a + 454(a^2+b^2) \right) + 24640\left( 5 + 4(a^2+b^2) \right) \pi \right)\beta \epsilon \\&\;\;\;\;+ \left( 729{\sqrt{3}} \left( 339 -128a + 454(a^2+b^2) \right) + 24640\left( 9 -8a + 4(a^2+b^2) \right) \pi \right) \alpha\epsilon\Big\} \\&\;\;+\left( 1990033124626008a +
2553294638054160{\sqrt{3}} \left( 3 - 2a + 3(a^2+b^2) \right) \pi \right) \alpha \gamma \\&\;\;+1151172000\left( 35341051 - 26756686a + 32479596(a^2+b^2) \right) \beta \gamma \\&\;\;+ 189\Big\{ 819452341268271 - 73323642839420a + 73323642839420(a^2+b^2) \\&\;\;\;\;\;\;\;\;\;\;\;\;\;\;- 79935610875120{\sqrt{3}}\pi + 24336134222400{\sqrt{3}} \left(-a + a^2 + b^2 \right) \pi \Big\} \alpha \beta \\&\;\; - 9{\left( 1 + {\sqrt{{\left( -1 + a \right) }^2 + b^2}} + {\sqrt{a^2 + b^2}} \right) }^2 \Big\{ 444001222376712{\sqrt{3}}\left( \alpha + \gamma \right) \epsilon \\&\;\;\;\;\;\;\;\;- 1629547920\pi \left( {\sqrt{3}}\alpha \left( 4251\beta - 99484\gamma \right) - 113696\left( \alpha + \beta + \gamma \right) \epsilon \right) \\&\;\;\;\;\;\;\;\;+ 51464744531200{\pi }^2 \left( {\alpha }^2 + {\beta }^2 + {\gamma }^2 + 2{\epsilon }^2 \right) \\&\;\;\;\;\;\;\;\;+ 3\beta \left( 346474423262177\alpha + 85272\left( 3297684500\gamma + 1735627257{\sqrt{3}}\epsilon \right) \right) \Big\} \end{split} \end{gather} \end{small} is valid. This expression clearly shows that it would be very difficult to do the calculations by hand. Notice that this expression depends only on $b^2$ and $a$. Also note that the ``building'' blocks of the expression are exactly the lengths of the sides of the triangle $T$. Hence we make the substitution $M^2=a^2+b^2$ and $N^2=(a-1)^2+b^2$. As a result we get a polynomial of degree $2$ in $M$ and $N$, where $N\geq M\geq 1$. For further simplification (and to improve our chances of finding the appropriate coefficients) we divide all triangles into 4 classes. Each class will be handled in a separate section whose number corresponds to the case: \begin{enumerate} \item[3)] The triangles with $N\geq2$ and $M\leq15$, \item[4)] $1\leq N\leq2$ and $(N+1)/2\leq M\leq 2$, \item[5)] $1\leq N\leq2$ and $1\leq M\leq (N+1)/2$, \item[6)] $M\geq 15$. \end{enumerate} The method used to handle the last case will be totally different from those used in the previous cases.
\section{Case: $N\geq2$ and $M\leq15$} Let us take $\varepsilon=\beta=0$, $\alpha=1$ and $\gamma=-1/6$. This simplifies (\ref{horrible}) to \begin{gather} \begin{split} 0\geq &P(M,N)= -90851035780 - 16374894040M^2 + 33929984593N^2 \\&- 272432160{\sqrt{3}}\left( 10 - 8M + 10M^2 - 8N - 8MN + 3N^2 \right) \pi \\&+ 28828800 \left( 689 - 148M + 199M^2 - 148N - 148MN - 74N^2 \right) {\pi }^2. \end{split} \end{gather} To show this inequality we first find all the critical points of the right-hand side and later check the values on the boundary. Both $\partial_M P$ and $\partial_N P$ are linear with respect to $M$ and $N$; therefore we have exactly one critical point, with $N\approx-42.2$. Hence it is enough to check the inequality on the boundary. The boundary conditions are given by $M=N$, $M=15$, $N=2$ and $M=N-1$. For each of these $P$ is a quadratic polynomial and we just have to check that the roots are outside of the bounds for $M$ or $N$ and that the inequality is true at the endpoints. We have \begin{itemize} \item $P(M,M)=0$ for $M\approx1.6$ and $M\approx15.15$; $P(2,2)<0$ and $P(15,15)<0$, \item $P(15,N)=0$ for $N\approx14.97$ and $N\approx42.5$; $P(15,15)<0$ and $P(15,16)<0$, \item $P(M,2)=0$ for $M\approx0.96$ and $M\approx2.61$; $P(1,2)<0$ and $P(2,2)<0$, \item $P(N-1,N)=0$ for $N\approx1.97$ and $N\approx20.56$; $P(1,2)<0$ and $P(15,16)<0$. \end{itemize} This shows that the desired inequality is true on the boundary and therefore everywhere. \section{Case: $1\leq N\leq 2$ and $(N+1)/2\leq M\leq 2$} In the next two sections we will have to deal with cases for which an equilateral triangle ($N=M=1$) is one of the possible triangles. We take $\varepsilon=1$, since only the eigenfunction of the equilateral triangle can give the constant $9$ in Theorem \ref{main}. We also need all the other coefficients to vanish near the equilateral triangle. Let us take $\gamma=0$ and $\alpha=\beta$. We just have to choose the common value of $\alpha$ and $\beta$.
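The claims of Section 3 can be spot-checked numerically: the polynomial $P(M,N)$ defined there is negative at the quoted endpoints and, in fact, on a grid covering the whole region $N\geq 2$, $M\leq 15$, $N-1\leq M\leq N$. A minimal Python sketch:

```python
import math

def P(M, N):
    # the polynomial of Section 3 (epsilon = beta = 0, alpha = 1, gamma = -1/6)
    pi, r3 = math.pi, math.sqrt(3)
    return (-90851035780 - 16374894040 * M**2 + 33929984593 * N**2
            - 272432160 * r3 * (10 - 8*M + 10*M**2 - 8*N - 8*M*N
                                + 3*N**2) * pi
            + 28828800 * (689 - 148*M + 199*M**2 - 148*N - 148*M*N
                          - 74*N**2) * pi**2)

# endpoint values quoted in the text
for M, N in [(1, 2), (2, 2), (15, 15), (15, 16)]:
    assert P(M, N) < 0

# grid over the region N >= 2, M <= 15, N - 1 <= M <= N
for i in range(20, 161):
    N = i / 10
    lo, hi = max(1.0, N - 1), min(N, 15.0)
    for j in range(11):
        M = lo + (hi - lo) * j / 10
        assert P(M, N) < 0
```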
Due to the nature of the already very complicated calculations we cannot afford to pick a very complicated coefficient, thus we take $\alpha=(N+M-2)/2$. This choice has one additional advantage. In this case we are working with the eigenfunctions of the following triangles: one equilateral and two right triangles with shortest side $(0,0)-(1,0)$. Hence we have a symmetry about $a=1/2$, or in terms of $M$ and $N$, about $M=N$. Therefore it is natural to introduce the rotated coordinates $U=(M+N)/2-1$ and $V=(N-M)/2$. Note also that $\alpha=\beta=U$. This also moves the equilateral triangle to the origin. After applying these transformations the inequality (\ref{horrible}) becomes \begin{small} \begin{gather} \begin{split} 0\geq&P(U,V)=U^2\Bigl( 3293385188722144 - 451048860827136{\sqrt{3}} - 952832984463360\pi \\&\;- 874782993324240{\sqrt{3}}\pi + 463182700780800{\pi }^2 - 4817666363010084U \\&\;+ 916192998555120{\sqrt{3}}U + 710514087323040{\sqrt{3}}\pi U - 330844786272000{\pi }^2U \\&\;-1072431834636645U^2 + 346350633108480{\sqrt{3}}\pi U^2 - 33084478627200{\pi }^2U^2 \Bigr) \\+& 9V^2\Bigl( 44112638169600{\pi }^2 + 355514206276944{\sqrt{3}}U + 105870331607040\pi U \\&\;+ 177818984344461U^2 + 36504201333600{\sqrt{3}}\pi U^2 + 25732372265600{\pi }^2U^2 \Bigr) \end{split} \end{gather} \end{small} This is a polynomial of degree $4$ in $U$ and of degree $2$ in $V$. Hence we expect to be able to solve $\partial_V P(U,V)=0$. (In fact $\partial_V P$ is equal to $V$ times an irreducible quadratic polynomial in $U$.) Therefore we have exactly one solution $V=0$, or $N=M$. But this lies on the boundary of the region, so we only have to check the boundary values. This time the boundary conditions are: $M=N$, $M=(N+1)/2$ and $N=2$. After changing variables to $U$ and $V$ these become $V=0$, $U=3V$ and $U+V=1$, respectively. Each time we get a polynomial of degree $4$. Thus we proceed as in the previous section.
\begin{itemize} \item $P(U,0)=0$ for $U=0$ (double root), $U\approx5.65$ and $U\approx-0.24$; $P(0,0)=0$ and $P(1,0)<0$, \item $P(3V,V)=0$ for $V=0$ (double root), $V\approx0.55$ and $V\approx-0.04$; $P(0,0)=0$ and $P(3/4,1/4)<0$, \item $P(1-V,V)=0$ for $V\approx-0.52$ and $V\approx0.29$ ($2$ complex roots); $P(1,0)<0$ and $P(3/4,1/4)<0$. \end{itemize} Hence the inequality is true on the boundary and so also inside of the region. \section{Case: $1\leq N\leq 2$ and $1\leq M\leq (N+1)/2$} Here we take $\varepsilon=1$, $\beta=0$ and $\alpha=\gamma=(N+M-2)/\sqrt2$. Even though the symmetry described in the previous section does not exist here, we will still use the same rotated coordinates $U=(M+N)/2-1$ and $V=(N-M)/2$. This time the inequality (\ref{horrible}) becomes: \begin{small} \begin{gather} \begin{split} 0\geq&P(U,V)=32133332U^2\Bigl( -1898955433 - 549628092{\sqrt{6}} + 103783680{\sqrt{2}}\pi \\&\;\;\;- 22702680{\sqrt{3}}\pi + 345945600{\pi }^2 - 1063944882U + 222614730{\sqrt{6}}U \\&\;\;\;+ 259459200{\sqrt{2}}\pi U- 136216080{\sqrt{3}}\pi U + 115315200{\pi }^2U - 531972441U^2 \\&\;\;\;+ 113513400{\sqrt{3}}\pi U^2+ 172972800{\pi }^2U^2 \Bigr) \\&- 64266664\Bigl( 824442138{\sqrt{6}} - 155675520{\sqrt{2}}\pi - 2201993543U \\&\;\;\;+ 158918760{\sqrt{3}}\pi U + 403603200{\pi }^2U \Bigr) \left( U + U^2 \right) V \\&+ 3759599844\Bigl( 1478400{\pi }^2 + 10405746{\sqrt{6}}U + 5765760{\sqrt{2}}\pi U - 4546773U^2 \\&\;\;\;+ 4074840{\sqrt{3}}\pi U^2 + 3449600{\pi }^2U^2 \Bigr) V^2 \end{split} \end{gather} \end{small} Note that this is still a polynomial of degree $2$ in $V$ and therefore we proceed as in the previous section. Unfortunately this time the only solution of $\partial_V P=0$ is $V$ equal to a rational function of $U$ with an irreducible denominator of degree $2$. Hence, by plugging this into $\partial_U P=0$ we get a rational equation with squared irreducible polynomial of degree $2$ in the denominator. 
Thus, this equation is equivalent to the numerator being $0$. Fortunately the numerator is a solvable polynomial of degree $7$ with $4$ complex roots and $3$ real roots ($0$, $\approx-0.18$ and $\approx1.8$). Here we have the following bounds: $U=3V$ (equivalent to $M=(N+1)/2$), $U+V=1$ ($N=2$), and $U=V$ ($M=1$). So this is a triangle with vertices $(0,0)$, $(3/4,1/4)$ and $(1/2,1/2)$. Hence none of the critical points lies inside this region. Therefore we have to check the boundary values, and as before this means we have to find the roots of certain polynomials of degree $4$ and the values at the endpoints. \begin{itemize} \item $P(V,V)=0$ for $V=0$ (double root), $V\approx-0.27$ and $V\approx0.64$; $P(0,0)=0$ and $P(1/2,1/2)<0$, \item $P(3V,V)=0$ for $V=0$ (double root), $V\approx-0.06$ and $V\approx0.51$; $P(0,0)=0$ and $P(3/4,1/4)<0$, \item $P(U,1-U)=0$ for $U\approx0.48$ and $U\approx0.79$ ($2$ complex roots); $P(1/2,1/2)<0$ and $P(3/4,1/4)<0$. \end{itemize} Thus the inequality is true. \section{Case: $M\geq 15$} For this case we will use a method different from those used in all the other cases. Since we are dealing with triangles for which two sides are long and almost equal, we will estimate the eigenvalue by the eigenvalue of a circular sector contained in the triangle $T$. Let us denote the angle between the sides of length $N$ and $M$ by $\gamma$. First we take the isosceles triangle with angle $\gamma$ between the sides of length $M$. We can certainly put this triangle inside the triangle $T$. Since the shortest side of this isosceles triangle has length no larger than $1$, the altitude $h$ satisfies $h\geq\sqrt{M^2-1/4}$. Let us denote a circular sector with angle $\alpha$ and radius $r$ by $S(\alpha,r)$. It is known (see \cite{PS}) that the first eigenvalue of the sector $S(\alpha,r)$ is $j^2_{\pi/\alpha}r^{-2}$, where $j_{\nu}$ is the first zero of the Bessel function $J_{\nu}(x)$ of order $\nu$. It is clear that we can put a sector $S(\gamma,h)$ inside the triangle $T$.
Hence, by domain monotonicity we have \begin{gather} \lambda_T\leq \lambda_{S(\gamma,h)}={j^2_{\pi/\gamma}\over h^2}. \end{gather} We need to prove that \begin{gather} {j^2_{\pi/\gamma}\over h^2}\leq {\pi^2 L_T^2\over 9A_T^2}. \end{gather} We have $L_T=1+M+N\geq 2N$ and $A_T= \sin(\gamma) NM/2\leq \gamma NM/2$, together with $h^2\geq M^2-1/4$. Therefore, it is enough to prove that \begin{gather} {9j^2_{\pi/\gamma}(\gamma NM/2)^2\over \pi^2(M^2-1/4) (2N)^2}\leq 1, \end{gather} or that \begin{gather}\label{est} {9j^2_{\pi/\gamma}\gamma^2 M^2\over 16\pi^2(M^2-1/4)}\leq 1. \end{gather} To find the bound for $j_{\nu}$ we will use the estimate obtained in \cite{QW} \begin{gather} j_{\nu}\leq \nu-{a_1\over \sqrt[3]{2}}\nu^{1/3}+{3a_1^2\sqrt[3]{2}\over 20}\nu^{-1/3}, \end{gather} where $a_1\approx-2.338$ is the first negative zero of the Airy function. Hence we have \begin{gather} {j_{\nu}\over \nu}\leq 1+2\nu^{-2/3}+2\nu^{-4/3}. \end{gather} Therefore \begin{gather} {9j^2_{\pi/\gamma}\gamma^2 M^2\over 16\pi^2(M^2-1/4)}\leq \left(1+2\left(\frac\gamma\pi\right)^{2/3}+2\left(\frac\gamma\pi\right)^{4/3}\right)^2 {9M^2\over 16(M^2-1/4)}. \end{gather} This last expression is increasing in $\gamma$, as can easily be verified by differentiating. Given $M$, the angle $\gamma$ is maximized for the isosceles triangle, hence $\gamma\leq 2\sin^{-1}(1/2M)$. In order to arrive at (\ref{est}), it is enough to show \begin{gather} \left(1+2\left(\frac{2\sin^{-1}(1/2M)}\pi\right)^{2/3}+2\left(\frac{2\sin^{-1}(1/2M)} \pi\right)^{4/3}\right)^2\!\! {9M^2\over 16(M^2-1/4)}\leq \! 1. \end{gather} It is easy to check that the function on the left side is decreasing in $M$, and that for $M=15$ the inequality is true. Hence it is true for any triangle with $M\geq15$. Note also that if $M\longrightarrow\infty$, then the whole expression tends to $9/16$. This shows that the constant $16$ in the lower bound in Theorem \ref{main} is optimal.
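The final inequality above is elementary enough to check numerically; a short Python sketch evaluating its left-hand side:

```python
import math

def F(M):
    """Left-hand side of the last displayed inequality of Section 6."""
    g = 2 * math.asin(1 / (2 * M)) / math.pi      # gamma_max / pi
    bracket = 1 + 2 * g**(2/3) + 2 * g**(4/3)
    return bracket**2 * 9 * M**2 / (16 * (M**2 - 0.25))

assert F(15) < 1                                  # the inequality at M = 15
vals = [F(M) for M in (15, 20, 50, 200, 1000)]    # F is decreasing in M ...
assert all(a > b for a, b in zip(vals, vals[1:]))
assert abs(F(10**6) - 9 / 16) < 1e-3              # ... and tends to 9/16
```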
\section{Script in Mathematica} Here we give the script written in Mathematica to handle all the cumbersome calculations included in Sections 2 to 5. It is important to note that all the calculations are done symbolically. Only the exact values of the roots of all the polynomials are at the end converted to numerical form. \begin{small} \begin{verbatim} (* Section 2 *)
(* right triangle with vertices (0,0), (1,0) and (0,Sqrt[3]) *)
g[x_,y_]=Sin [Sqrt[3]\[Pi] y]Sin[\[Pi] x/3] + \
Sin[\[Pi] y/Sqrt[3]]Sin[5\[Pi] x/3] + \
Sin[2\[Pi] y/Sqrt[3]]Sin[4\[Pi] x/3];
(* other right triangles *)
g2[x_,y_]=g[1-x,y];
g3[x_,y_]=g[Sqrt[3]y,Sqrt[3]x];
(* test functions obtained from right triangles *)
\[CurlyPhi]1=g[x-(a y /b),Sqrt[3]y/b];
\[CurlyPhi]2=g2[x-((a-1) y /b),Sqrt[3]y/b];
\[CurlyPhi]3=g3[x-(a y /b),y/(Sqrt[3]b)];
(* equilateral triangle after linear transformation *)
\[Phi]:=Sin[2\[Pi]y/b]-Sin[2\[Pi](x+(1-a)y/b)]+Sin[2\[Pi](x-a y/b)];
(* final test function *)
\[Psi]=\[Alpha] \[CurlyPhi]1 + \[Beta] \[CurlyPhi]2 + \
\[Gamma] \[CurlyPhi]3 + \[Epsilon] \[Phi];
grad=Simplify[Integrate[D[\[Psi],x]^2+D[\[Psi],y]^2,{y,0,b}, \
{x,a y/b, (a-1) y/b+1}]];
int=Simplify[ Integrate[\[Psi]^2,{y,0,b},{x,a y/b , (a-1)y/b +1}]];
(* we have to prove that this is <= 0 *)
in=9b^2grad-4\[Pi]^2(1+Sqrt[a^2+b^2]+Sqrt[(a-1)^2+b^2])^2int;
(* change from (a, b) to (M, N) and cancel b *)
in2=Simplify[in/b /. b^2 -> M^2 - a^2 /. a -> (M^2 - N^2 + 1)/2, \
(N > 0) && (M > 0)];
(* inequality (2.9) *)
Simplify[308788467187200in/b]
(* Section 3 *)
W=in2/. \[Epsilon] -> 0 /. \[Gamma] -> -1/6 /. \[Beta] -> 0 /. \
\[Alpha] -> 1;
(* Inequality (3.1) *)
Apart[1383782400W]
(* Critical point *)
Reduce[(D[W, M] == 0) && (D[W, N] == 0), {M, N}] // N
(* Boundary : roots and endpoints *)
Reduce[W == 0 /. N -> 2] // N
Reduce[W == 0 /. M -> N - 1] // N
Reduce[W == 0 /. M -> N] // N
Reduce[W == 0 /. M -> 15] // N
W /. M -> {1, 2} /. N -> 2 // N
W /. M -> 15 /. N -> {15, 16} // N
(* Section 4 *)
W=in2/.
\[Epsilon] -> 1 /. \[Gamma] -> 0 /. \[Beta] -> \[Alpha] /.\
\[Alpha] -> (N + M - 2)/2;
pol = W/. M -> U - V /. N -> U + V /. U -> U + 1;
(* inequality (4.1) *)
Apart[22056319084800pol, V]
(* Critical point *)
Reduce[D[pol, V] == 0, V] // N
(* Boundary : roots and endpoints *)
Reduce[pol == 0 /. V -> 0] // N
Reduce[pol == 0 /. U -> 1 - V] // N
Reduce[pol == 0 /. U -> 3V] // N
pol /. V -> 0 /. U -> {0, 1} // N
pol /. V -> 1/4 /. U -> 3/4 // N
(* Section 5 *)
W=in2/. \[Epsilon] -> 1 /. \[Beta] -> 0 /. \[Gamma] -> \[Alpha] /.\
\[Alpha] -> (M + N - 2)/Sqrt[2];
pol = W /. M -> U - V /. N -> U + V /. U -> U + 1;
(* inequality (5.1) *)
Apart[9609600pol, V]
(* Critical points *)
Vs = Solve[D[pol, V] == 0, V];
Reduce[D[pol, V] == 0, V, Reals]
(* denominator with complex roots only *)
Reduce[Denominator[Together[D[pol, U] /. Vs]] == 0] // N
(* polynomial of degree 7 in U *)
Reduce[Numerator[Together[ D[pol, U] /. Vs]] == 0] // N
(* Boundary : roots and endpoints *)
Reduce[pol == 0 /. U -> 3V] // N
Reduce[pol == 0 /. U -> V] // N
Reduce[pol == 0 /. V -> 1 - U] // N
pol /. U -> 1 - V /. V -> {1/4, 1/2} // N
pol /. U -> 0 /. V -> 0 // N
\end{verbatim} \end{small} \section*{Acknowledgements} The author wants to thank his thesis advisor, Professor Rodrigo Ba\~nuelos, for the support and guidance on this paper, which is part of the author's Ph.D.\ thesis. \bibliographystyle{amsplain}
% Source: https://arxiv.org/abs/2009.12450
\title{Tinkering with Lattices: A New Take on the Erd\H{o}s Distance Problem}
\begin{abstract}
The Erd\H{o}s distance problem concerns the least number of distinct distances that can be determined by $N$ points in the plane. The integer lattice with $N$ points is known as \textit{near-optimal}, as it spans $\Theta(N/\sqrt{\log(N)})$ distinct distances, the lower bound for a set of $N$ points (Erd\H{o}s, 1946). The only previous non-asymptotic work related to the Erd\H{o}s distance problem has been for $N \leq 13$. We take a new non-asymptotic approach to this problem in a model case, studying the distance distribution---that is, the plot of frequencies of each distance---of the $N\times N$ integer lattice. In order to fully characterize this distribution, we adapt previous number-theoretic results from Fermat and Erd\H{o}s in order to relate the frequency of a given distance on the lattice to the sum-of-squares formula. We study the distance distributions of all the lattice's possible subsets; although this is a restricted case, the structure of the integer lattice allows for the existence of subsets which can be chosen so that their distance distributions have certain properties, such as emulating the distribution of randomly distributed sets of points for certain small subsets, or emulating that of the larger lattice itself. We define an error which compares the distance distribution of a subset with that of the full lattice. The structure of the integer lattice allows us to take subsets with certain geometric properties in order to maximize error; we show these geometric constructions explicitly. Further, we calculate explicit upper bounds for the error when the number of points in the subset is $4$, $5$, $9$ or $\left \lceil N^2/2\right\rceil$ and prove a lower bound in cases with a small number of points.
\end{abstract}
\section{Introduction} In 1946, Paul Erd\H{o}s \cite{Er1} proposed the now-famous Erd\H{o}s distinct distances problem: given $N$ points in a plane, what is the minimum number of distinct distances, $f(N)$, they can determine? He accompanied this question with the first bounds on $f(N)$, \begin{equation}\sqrt{N-\frac{3}{4}}-\frac{1}{2} \ \leq \ f(N) \ \leq \ \frac{cN}{\sqrt{\log N}},\end{equation} and further conjectured that the upper bound was tight---to this day, nobody has found evidence to contradict this conjecture. However, since 1946, incidence theory and algebraic geometry have provided a series of improvements on the original lower bound, culminating with Guth and Katz's seminal result in 2015 \cite{GK}, which proved a lower bound of $\Omega(N/\log N)$. Since Erd\H{o}s's original upper bound, coming from an estimate for the number of distinct distances on the $\sqrt{N} \times \sqrt{N}$ integer lattice \cite{EF}, has not been improved on to this day, any set with $O(N/\sqrt{\log N})$ distinct distances is known as \emph{near-optimal}. Erd\H{o}s further conjectured in 1986 \cite{Er2} that any near-optimal set has a lattice structure, although the truth of this conjecture remains an open problem for large values of $N$. In addition to the Erd\H{o}s distinct distances problem, a significant amount of work has been published on related problems which analyze aspects of distributions of distinct distances on planar point sets. The unit distance problem, for instance, focuses on the number of times a single given distance---often, the unit distance---can appear in a planar set of $N$ points. However, most of the work done on these subjects has been asymptotic; previous non-asymptotic work has only been conducted on $N \leq 13$ \cite{BMP}. In this paper, we take a novel approach and examine the whole distance distribution for the lattice and its subsets in a non-asymptotic setting. 
Although working in $\mathbb{Z}^2$ is a simplification, our work is complicated by considering the whole distance distribution---namely, taking into account the frequency with which each distance appears on the lattice---rather than working asymptotically with only the number of distinct distances. In particular, we first examine the distance distribution for the lattice, characterizing its behavior and applying number-theoretic methods to determine an upper bound for the frequency of its most common distance, showing the asymptotic bound in Equation~\eqref{upperbound}. We then turn to the distance distributions of subsets of the lattice and compare them to the distance distribution of the lattice itself. Although the sets are subsets of a highly regular set, the behavior of the distance distributions for these sets can vary widely. Some subsets have distance distributions that closely mimic that of the full lattice, while others have distance distributions that are similar to that of a random set. We devise in Section~\ref{errorsection} an error that measures how similar or different a subset's distance distribution is from that of the lattice itself. For the upper bounds, we find specific configurations of $p$ points that maximize the error and then calculate the error of these configurations, demonstrated through the calculations in Examples~\ref{ex:p=4} through~\ref{ex:checkerboard}. For the lower bounds, we find a concrete bound in the case of small subsets of the lattice, giving us the bound in Equation~\eqref{errorboundsmallsubset}. In the case of larger subsets, we take a more theoretical approach and construct theoretical optimal distance distributions, ones that cannot necessarily be realized by an actual subset of the lattice. We then describe the error given by these optimal distance distributions and prove a lower bound on error for some values of $p$.
Thus, in this paper we seek to highlight preliminary results on this new perspective on the Erd\H{o}s distinct distances problem. We begin with the following section which introduces some definitions and proves upper bounds for the frequency of the most common distance on the lattice. \section{Introducing Distance Distributions} \begin{figure} \centering \includegraphics[scale=.4]{distance_distribution.png} \caption{The distance distribution for the $200 \times 200$ lattice.} \label{fig:distance_distribution} \end{figure} Throughout this section, we characterize the distance distribution of the $N\times N$ integer lattice, as seen in Figure \ref{fig:distance_distribution}. For our characterization, we begin with some notational definitions and a lemma. \begin{definition} For a fixed $N$, denote by $\mathcal{L}_N\subset \mathbb{Z}^2$ the $N\times N$ integer lattice, where \begin{equation} \mathcal{L}_N=\{ (x,y)\in \mathbb{Z}^2\ \mid \ 0\leq x \leq N-1,\; 0\leq y\leq N-1 \} .\end{equation} \end{definition} \begin{definition}\label{def:D_n} For a fixed $N$, denote by $\mathcal{D}_N$ the set of distinct distances which appear at least once on the $N\times N$ lattice. \end{definition} \begin{definition} For any $d \ \geq \ 1$, denote by $L_{\sqrt{d}}$ the number of times that the distance $\sqrt{d}$ appears on $\mathcal{L}_N$. \end{definition} \begin{definition} For any $d\geq 1$, denote by $S_{\sqrt{d}}$ the number of times that the distance $\sqrt{d}$ appears in a given subset $S\subseteq \mathcal{L}_N$. \end{definition} \begin{lemma}\label{distancecount} For any $d\geq 1$, \begin{equation}\label{equationforLd} L_{\sqrt{d}} \ = \ \sum_{\substack{a^2+b^2=d \\ a\geq 1,\; b\geq 0}}^{} 2\left( N-a \right) \left( N-b \right). \end{equation} \end{lemma} \begin{proof} First, note that each occurrence of the distance $\sqrt{d}$ comes from two points $(w,x)$, $(y,z)$ satisfying $\sqrt{(w-y)^2 + (x-z)^2}=\sqrt{d}$. 
Let $a,b$ be an ordered pair with $a^2+b^2 = d$ and suppose both $a,b>0$. Notice that for any point $(w,x)$ on the integer lattice, $\left( w+a,x+b \right)$ is in $\mathcal{L}_N$ if and only if $0\leq w\leq N-a-1$ and $0\leq x\leq N-b-1$; hence there are $(N-a)(N-b)$ such points $(w,x)$. The same logic can be repeated for $(w+a,x-b)$ to conclude that there are precisely $2(N-a)(N-b)$ pairs of points separated by an $x$-distance of $a$ and a $y$-distance of $b$. In the case where $a>b=0$, we need only count the pairs $(x,y) \in \mathcal{L}_N$ for which $(x+a,y)\in \mathcal{L}_N$, together with those for which $(x,y+a)\in \mathcal{L}_N$; there are $N(N-a)$ of each kind, so $2N(N-a)$ in total. As this accounts for any possible occurrence of $\sqrt{d}$ on $\mathcal{L}_{N}$, we are done. As a final check, we note that the well-known identity \begin{equation} \sum_{a=1}^{N-1}\sum_{b=0}^{N-1}2(N-a)(N-b) \ = \ \frac{N^2\left( N^2-1 \right) }{2} \ = \ \binom{N^2}{2} \end{equation} confirms that we have counted all $\binom{N^2}{2}$ distances on the lattice. \end{proof} As seen in Figure \ref{fig:distance_distribution}, the frequencies of distances are arranged in distinct curves. Clearly, the curve which $L_{\sqrt{d}}$ falls on is closely tied to the distinct ways $d$ can be written as the sum of two squares; if $d$ has $m$ representations as the sum of two squares, then $L_{\sqrt{d}}$ falls on the $m$th highest curve. In fact, this is a subject which has been studied in some detail, which we summarize below. \begin{definition} For any $d \in \mathbb{Z}$, let $r_2(d)$ be the number of ordered pairs $\left( a,b \right) \in \mathbb{Z}^2$ such that $a^2+b^2=d$. \end{definition} We state the following classical result due to Fermat; for a survey of the many possible proofs of this theorem, see \cite{Co}.
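Before stating it, we note that Lemma \ref{distancecount} is easy to confirm by brute force on a small lattice; a minimal Python sketch (the cutoff $d\leq 2(N-1)^2$ covers every squared distance that can occur):

```python
import itertools
import math
from collections import Counter

def brute_counts(N):
    """Unordered point pairs of the N x N lattice, binned by squared distance."""
    pts = list(itertools.product(range(N), repeat=2))
    cnt = Counter()
    for (w, x), (y, z) in itertools.combinations(pts, 2):
        cnt[(w - y)**2 + (x - z)**2] += 1
    return cnt

def L_formula(d, N):
    """L_{sqrt(d)} via the lemma: sum of 2(N-a)(N-b) over a^2+b^2=d, a>=1, b>=0."""
    total = 0
    for a in range(1, min(math.isqrt(d), N - 1) + 1):
        b2 = d - a * a
        b = math.isqrt(b2)
        if b * b == b2 and b <= N - 1:
            total += 2 * (N - a) * (N - b)
    return total

N = 6
cnt = brute_counts(N)
assert sum(cnt.values()) == (N * N) * (N * N - 1) // 2  # all C(N^2, 2) pairs
for d in range(1, 2 * (N - 1)**2 + 1):
    assert cnt.get(d, 0) == L_formula(d, N)
```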
\begin{theorem}[Fermat, 1640]\label{r2n} If $d$ is a positive integer with prime factorization $d=2^{f}p_1^{g_1}\cdots p_{m} ^{g_{m}}q_1 ^{h_1}\cdots q_{n} ^{h_{n}}$, where for any $i$, $p_{i}\equiv 1\pmod{4}$ and $q_{i}\equiv 3\pmod{4}$, then \begin{equation} r_2(d)=\begin{cases} 4\left( g_1+1 \right) \cdots \left( g_{m}+1 \right) & \; {\rm all} \ {\rm the } \; h_{i}\; {\rm are} \ {\rm even,} \; \\ 0 & \; {\rm otherwise.} \; \\ \end{cases} \end{equation} \end{theorem} For some motivation towards this result, recall that the equation $a^2+b^2=p$, where $a,b>0$ and $p$ is prime, has a unique solution (up to order) in the case where $p\equiv 1 \pmod{4}$ and no solution otherwise. The proof of the theorem relies on building inductively upon this result. The coefficient $4$ simply accounts for the sign options $\pm a, \pm b$. \begin{rek} The pairs $(a,b)\in \mathbb{Z}^2$ counted by $r_2$ may have negative entries; this differs from our original condition on ordered pairs $(a,b)$ in Lemma \ref{distancecount}, which required $a\geq 1,\; b\geq 0$. This is intentional: the two quantities are computed through different methods and are used for different purposes. In particular, because of the way these pairs are counted, we see that for a distance $\sqrt{d}$ on the $m$-th curve, $r_2(d)=4m$. \end{rek} \begin{definition} For any $k\geq 1$, let $\sqrt{ n_{k} }$ be the smallest distance on the $k$-th curve, i.e., $n_{k}$ is the smallest positive integer such that $r_2(n_{k})=4k$. \end{definition} We use these results to find the most common distance on the integer lattice, which proves to be useful later. Recalling the original lattice distance distribution seen in Figure \ref{fig:distance_distribution}, we note that the most common distance on each individual curve is generally the leftmost, i.e., the smallest. 
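Both the counting formula of Lemma \ref{distancecount} and the relation $r_2(d)=4m$ are easy to verify computationally; the following Python sketch is a minimal brute-force check (the lattice size $N=8$ is an arbitrary choice of ours).

```python
from collections import Counter
from itertools import combinations

N = 8  # arbitrary small lattice size for the check

# Brute-force distance distribution: count squared distances d = a^2 + b^2
# over all pairs of points of the N x N lattice.
points = [(x, y) for x in range(N) for y in range(N)]
brute = Counter((p[0] - q[0])**2 + (p[1] - q[1])**2
                for p, q in combinations(points, 2))

def L(d, N):
    # Closed form of the lemma: sum of 2(N-a)(N-b) over a^2+b^2=d, a >= 1, b >= 0.
    return sum(2 * (N - a) * (N - b)
               for a in range(1, N) for b in range(N)
               if a * a + b * b == d)

assert all(L(d, N) == count for d, count in brute.items())

def r2(d):
    # r_2(d): number of ordered pairs (a, b) in Z^2 with a^2 + b^2 = d.
    r = int(d**0.5) + 1
    return sum(1 for a in range(-r, r + 1) for b in range(-r, r + 1)
               if a * a + b * b == d)

# A distance sqrt(d) on the m-th curve has m representations with a >= 1,
# b >= 0, and r_2(d) = 4m; e.g., d = 5 has (1,2) and (2,1), so m = 2.
m = sum(1 for a in range(1, 6) for b in range(6) if a * a + b * b == 5)
assert r2(5) == 4 * m
```

The brute-force counter and the closed form agree on every distance of the small lattice, which is exactly the content of Lemma \ref{distancecount}.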
For our estimates, then, we hope to take the smallest distance on the $k$-th curve---defined above as $n_k$---as an approximation of the most common distance on each curve. This proves to be a reasonable assumption: indeed, suppose two integers $d_1<d_2$ are such that $r_2(d_1)=r_2(d_2)=4k$, yet $\sqrt{d_2}$ is the more common distance on some $N\times N$ integer lattice. Using Lemma \ref{distancecount}, this implies we may find at least two ordered pairs of integers $(a_1,b_1),(a_2,b_2)$ satisfying $a_i^2+b_i^2=d_i$ for each $i$, for which $2(N-a_1)(N-b_1) <2(N-a_2)(N-b_2) $, i.e., $a_2b_2-a_2-b_2>a_1b_1-a_1-b_1$. As $d_i \geq 2a_ib_i$ by the inequality of arithmetic and geometric means, with $d_i\leq 2\sqrt{2}a_ib_i$, we thus have a generous bound of $d_2 \leq \sqrt{2} d_1$. In our search for the most common distance on the lattice, we thus narrow our focus to the set of integers $n_1,n_2,\ldots$, and their asymptotic frequencies on the $N\times N$ lattice. By our previous argument, these $n_k$ values are within a constant factor of $\sqrt{2}$ of the true most common distance on the $k$-th curve. With these guiding principles, we now attempt to bound the size of the integers in this sequence. \begin{lemma} Let $p_1<p_2<\cdots$ be all the primes satisfying $p_{i}\equiv 1\pmod{4}$, listed in increasing order (so that $p_1=5,\ p_2=13,\ p_3=17$, and so on). Suppose $k=q_1^{a_1}\cdots q_{m}^{a_{m}}$, where $q_1,\ldots , q_{m}$ are any $m$ distinct primes and $q_1>q_2>\cdots >q_m$. Then an upper bound for $n_k$ is given by \begin{equation} \left( \underbrace{p_1\cdots p_{a_1}}_{a_1\; {\rm primes} \; } \right) ^{q_1-1}\!\left( \underbrace{p_{a_1+1}\cdots p_{a_1+a_2}}_{a_2\; {\rm primes} \; } \right) ^{q_2-1}\!\!\!\! \!\! \!\! \cdots \left( \underbrace{ p_{a_1+\cdots+a_{m-1}+1}\cdots p_{a_1+\cdots+a_{m}}}_{a_{m}\; {\rm primes} \; } \right) ^{q_{m}-1}. \end{equation} \end{lemma} \begin{proof} Let $n_k'$ be the bound presented in the lemma. 
As $n_k$ is the minimum value for which $r_2(n_k)=4k$, it suffices to show that $n_k'$ has this property in order to conclude that it is an upper bound. By a direct application of Theorem \ref{r2n}, \begin{equation} r_2(n_k') \ = \ 4((q_1-1)+1)^{a_1}\cdots((q_m-1)+1)^{a_m} \ = \ 4k. \end{equation}\end{proof} Note that the sequence $n_1', n_2',\ldots$ of bounds presented by this lemma is not strictly increasing. Nevertheless, the bounds in Lemma \ref{enserio} below are attained for infinitely many values of $k$. For $k=2^{m}$, we see $n_{k}'=p_1\cdots p_{m}=n_k$, since $r_2(2^{f}p_1^{g_1}\cdots p_{m} ^{g_{m}}q_1 ^{h_1}\cdots q_{n} ^{h_{n}})=4(g_1+1)\cdots (g_m+1)=4\cdot 2^m$ forces $g_1=\cdots=g_m=1$. Additionally, for $k$ prime, $r_2(n)=4(g_1+1)\cdots (g_m+1)=4k$ forces $n$ to have a sole prime divisor, with exponent $k-1$. It can be calculated that the proposed bound $n_k'$ matches the actual value of $n_k$ for most small values of $k$; in fact, we conjecture that the two are equal for the vast majority of $k$, although the closest result we may achieve is to show that the two sequences are asymptotic. The following lemma assists in finding explicit bounds for $n_k$. \begin{lemma}\label{enserio} Let $p_1<p_2<\cdots$ be all the primes satisfying $p_{i}\equiv 1\pmod{4}$, listed in increasing order. For any $k\geq 1$, \begin{equation}\label{boundingnk} \prod_{i=1}^{\left\lfloor\log_2(k) \right\rfloor }p_{i} \ \ll \ n_{k} \ \leq \ 5^{k-1} .\end{equation} \end{lemma} \begin{proof} Firstly, as $r_2(5^{k-1})= 4(k-1+1)=4k$, we see immediately that $n_k \leq 5^{k-1}$. We proceed to show the lower bound. Write $k=q_1^{a_1}\cdots q_{m}^{a_{m}}$, where $q_1,\ldots , q_{m}$ are $m$ distinct primes. Note that \begin{equation} a_1 + \cdots + a_m \ \leq \ \left\lfloor\log_2(k) \right\rfloor, \end{equation} with equality if $k=2^l$ for some $l \in \mathbb{N}$. 
Suppose $d$ is such that $r_2(d)=4k$. We wish to find the minimum possible value for $d$. We may assume that $d$ is not divisible by $2$ or by any prime congruent to $3 \pmod 4$, as such factors would strictly increase its size without affecting the value of $r_2(d)$, as per Theorem \ref{r2n}. Hence, we may write $d=s_1 ^{g_1}\cdots s_t ^{g_t}$, for primes $s_i\equiv 1\pmod {4}$. Assuming without loss of generality that $g_1 \geq \cdots\geq g_t$, we see that the value of $d$ can be strictly decreased without affecting $r_2\left( d \right) =4\left( g_1+1 \right) \cdots \left( g_{t}+1 \right) $ by setting $s_1=p_1,\ s_2=p_2,\ldots, s_{t}=p_{t}$, as this assigns the highest exponents to the lowest possible prime values which are $1 \pmod {4}$. Hence we see $d$ is of the form \begin{equation} d \ = \ p_1^{g_1}\cdots p_t ^{g_t} \ \geq \ \left( p_1\cdots p_t \right)^{g_t} \ \approx \ (t\log t)^{tg_t}. \end{equation} As $(g_1+1)\cdots (g_t+1)=k$, where $g_1\geq \cdots \geq g_t$, we see that $g_1^t \gg k$. Hence we may take $t\approx \log_{g_1+1}(k)$, and see that the resulting value of $d$ is smallest when $t$ is large and the exponents are small. As the minimal possible value of $g_1$ is $1$, which gives us $t\approx \log_2(k)$, we may take the asymptotic lower bound \begin{equation}\prod_{i=1}^{\log_2(k)} p_i.\end{equation} \end{proof} We wish to find a more explicit lower bound for $n_{k}$, which by Lemma \ref{enserio} is equivalent to estimating the product of the first $t$ primes $p_1<\cdots <p_{t}$ which are congruent to $1\pmod{4}$. While this quantity is not well-studied, we may adapt existing work on bounding the product of the first $t$ primes $q_1,\ldots , q_{t}$, taken without restriction on their class $\pmod{4}$, denoted $q_{t}\#$. 
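Before turning to that estimate, the bounds of Lemma \ref{enserio} can be checked numerically for small $k$; a minimal Python sketch (a brute-force search, so only very small $k$ are feasible):

```python
def r2(d):
    # r_2(d): number of ordered pairs (a, b) in Z^2 with a^2 + b^2 = d.
    r = int(d**0.5) + 1
    return sum(1 for a in range(-r, r + 1) for b in range(-r, r + 1)
               if a * a + b * b == d)

def n(k):
    # n_k: smallest positive integer d with r_2(d) = 4k.
    d = 1
    while r2(d) != 4 * k:
        d += 1
    return d

# Upper bound n_k <= 5^(k-1), with equality for prime k; for k = 2^m,
# n_k is the product of the first m primes congruent to 1 mod 4.
for k in range(1, 5):
    assert n(k) <= 5**(k - 1)
assert n(2) == 5 and n(3) == 5**2 and n(4) == 5 * 13
```

The checks confirm the two attainment cases discussed above: $k=3$ (prime) meets the upper bound $5^{k-1}=25$, while $k=4=2^2$ gives the primorial value $5\cdot 13$.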
The best general estimate is \begin{equation} q_{t}\#=\prod_{i=1}^{t}q_{i}=e^{\left( 1+{o}(1) \right) t \log t} .\end{equation} We can then approximate our quantity as \begin{equation}\label{estimate} \prod_{i=1}^{t}p_{i} \ = \ \left( \prod_{i=1}^{2t}q_{i} \right) ^{1/2} \ = \ e^{\frac{1}{2}\left( 1+o(1) \right)2t\log 2t } \ = \ e^{\left( 1+o(1) \right)t\log 2t } .\end{equation} Since the primes are equidistributed between the two residue classes modulo $4$, the approximation \eqref{estimate} converges towards the true quantity. Applying this estimate with $t=\log_2(2k)$, we obtain a general lower bound for $n_{k}$, namely \begin{equation} n_{k} \ \gg \ e^{\frac{1}{2}\left( 1+o(1) \right) \log_2(2k)\log \log_2(2k)} \ = \ (2k)^{\frac{1}{2}(1+o(1))\log_2(\log_2(2k))} .\end{equation} Now, given that a distance $\sqrt{d}$ is on the $k$-th curve of the distance distribution, we can maximize each summand in Lemma \ref{distancecount} to find the upper bound \begin{equation} \label{eq:upperbound} L_{\sqrt{d}} \ \leq \ 2k N \left( N-\sqrt{d} \right) .\end{equation} As (\ref{eq:upperbound}) assumes that all $k$ pairs of integers $(a,b)$ with $a^2+b ^2=d$ are identically $(\sqrt{d},0)$, we know that for $k>1$, (\ref{eq:upperbound}) is indeed a strict upper bound for this quantity. Putting these pieces together, we can determine \begin{equation}\label{upperbound} L_{\sqrt{n_{k}}} \ \ll \ 2k N\left( N- (2k)^{\frac{1}{4}(1+o(1))\log_2(\log_2(2k))} \right) .\end{equation} In particular, for large $N$, maximizing this quantity over $k$ gives a strict upper bound on the frequency of the most common distance on the $N\times N$ lattice; we denote this maximal frequency by $F_{N}$. Thus, we reach an asymptotic bound for the frequency of the most common distance in the lattice, which proves useful in the following section. \section{Error Estimates}\label{errorsection} After studying the distance distribution for the lattice, the logical next step is to study the behavior of distance distributions for subsets of the lattice. 
Although we know that the lattice is a near-optimal set, as previously discussed, the behavior of the distance distributions of its subsets can vary widely. Thus, we examine how similar or different the distance distributions of subsets of the lattice can be from that of the full lattice, an analogue of the Erd\H{o}s distinct distances problem for subsets of the lattice. We now define our method of calculating the difference between the integer lattice's distance distribution and that of one of its subsets. Recall that $L_{\sqrt{d}}$ is the frequency of a distance $\sqrt{d}$ in the lattice and $S_{\sqrt{d}}$ is its frequency in a particular subset $S$. We note that the $N \times N$ lattice has $N^2(N^2-1)/2 \approx N^4/2$ total distances and a subset with $p$ points has $p(p-1)/2 \approx p^2/2$ total distances. Thus we scale up each $S_{\sqrt{d}}$ by $N^4/p^2$ to have a distance distribution with about the same total number of frequencies as that of the lattice. Then, we average $\left|(N^4/p^2)S_{\sqrt{d}}-L_{\sqrt{d}}\right|$ over all $d \in \mathcal{D}_N$. More explicitly, we have the formula in the following definition. \begin{definition} Let $\varepsilon$ be the error between the distance distribution of the integer lattice and one of its subsets. Then, \begin{equation} \varepsilon \ = \ \frac{1}{|\mathcal{D}_N |} \sum_{d\in\mathcal{D}_N}\left|\frac{N^4}{p^2}S_{\sqrt{d}}-L_{\sqrt{d}}\right| . \end{equation} \end{definition} In many of our calculations, instead of working with the actual distances, we work with individual pairs $(a,b)$ for which $\sqrt{a^2+b^2} \in \mathcal{D}_N$. This gives exact values of the contribution to the error for distances on the first and second curves; in all other cases, this is a simplification. Thus we introduce some new notation. \begin{definition} Let $L_{a,b}$ denote the number of unordered pairs $(a_1, b_1)$, $(a_2, b_2) \in \mathcal{L}_N$ such that $|a_1-a_2|=a$ and $|b_1-b_2|=b$ or $|a_1-a_2|=b$ and $|b_1-b_2|=a$. 
\end{definition} \begin{definition} Let $S_{a,b}$ denote the number of unordered pairs $(a_1, b_1)$, $(a_2, b_2)$ in a given subset $S\subset \mathcal{L}_N$ such that $|a_1-a_2|=a$ and $|b_1-b_2|=b$ or $|a_1-a_2|=b$ and $|b_1-b_2|=a$. \end{definition} \begin{definition} Let $\varepsilon_{a,b}=|(N^4/p^2)S_{a,b}-L_{a,b}|$. \end{definition} It may be shown that counting repeat distances as distinct either strictly increases error or has no effect on it. If $\sqrt{a^2+b^2}=\sqrt{c^2+d^2}$ for $\{a,b\}\neq\{c,d\}$, we then see that the total contribution to error for this distance is \begin{equation}\left|\frac{N^4}{p^2}S_{a,b} - L_{a,b} + \frac{N^4}{p^2}S_{c,d} - L_{c,d}\right|, \end{equation} whereas counting them as distinct gives a total contribution to error \begin{equation}\left|\frac{N^4}{p^2}S_{a,b} - L_{a,b}\right| + \left|\frac{N^4}{p^2}S_{c,d} - L_{c,d}\right|.\end{equation} Counting each pair as distinct thus gives an upper bound for total error. Furthermore, these two quantities are identical if and only if $\left(N^4/p^2\right)S_{a,b} - L_{a,b}$ and $\left(N^4/p^2\right)S_{c,d} - L_{c,d}$ have the same sign, which we suspect is true of most subsets which maximize error. Recall from the previous calculations we made on the frequency of distances on the lattice that when $b=0$ or $a=b$, $L_{a,b}=2(N-a)(N-b)$. Otherwise, for $b>0$ and $a>b$, we have $L_{a,b}=4(N-a)(N-b)$. For later error calculations, we need the average values of $2(N-a)(N-b)$ and $4(N-a)(N-b)$ and the fraction of $L_{a,b}$ that are of the form $2(N-a)(N-b)$ and the fraction of $L_{a,b}$ that are of the form $4(N-a)(N-b)$. We provide these values in the following two lemmas, which follow by direct computation. 
\begin{lemma} \label{lem:avg_val} The average value of $2(N-a)(N-b)$ over $a=b$ or $b=0$ is \begin{equation}\left(2(N-1)\right)^{-1}\left(\sum_{a=1}^{N-1} 2(N-a)^2+ \sum_{a=1}^{N-1} 2N(N-a)\right)=\frac{N(5N-1)}{6}\end{equation} and the average value of $4(N-a)(N-b)$ over $b>0$ and $a>b$ is \begin{equation}\left(\frac{1}{2}(N-1)(N-2)\right)^{-1}\left(\sum_{b=1}^{N-2} \sum_{a=b+1}^{N-1} 4(N-a)(N-b)\right)= \frac{N(3N-1)}{3}.\end{equation} \end{lemma} \begin{lemma}\label{lem:frac} The fraction of $L_{a,b}$ that are of the form $2(N-a)(N-b)$ is $4 / (N+2)$ and the fraction of $L_{a,b}$ that are of the form $4(N-a)(N-b)$ is $(N-2) / (N+2)$. \end{lemma} Similarly to the original Erd\H{o}s distance problem, we are interested in finding upper and lower bounds for the behavior we are studying. For the upper bounds, we find patterns of configurations that maximize the error. We then calculate the error for these specific configurations. For the lower bounds, we construct a theoretical optimal distance distribution and calculate a lower bound on its error. \section{Bounds} We begin this section with explicit calculations of upper bounds on error for configurations of $p$ points. A configuration of $p$ points that maximizes error needs to have as different a distance distribution as possible from the original full lattice when scaled up. Thus, the ratio of each distance's frequency to the total number of distances, including repeated distances, must be as different as possible from that of the full lattice. Specifically, we want to have a subset that has many distances that were infrequent in the full lattice and a minimal number of distances that were very frequent in the full lattice. For certain values of $p$, we know the error-maximizing configuration from brute-force calculations. See Figures \ref{fig:p=4}, \ref{fig:p=5}, \ref{fig:p=9}, \ref{fig:p=4(N-1)}, \ref{fig:p=4(N-1)+4(N-3)}, and \ref{fig:p=ceiling[N.2]}. 
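For concreteness, the error $\varepsilon$ of a given configuration can be computed directly from the definition; a short Python sketch (brute force over distinct distances, so only small $N$ is practical, and the helper names are ours):

```python
from collections import Counter
from itertools import combinations

def squared_distances(pts):
    # Multiset of squared distances d = a^2 + b^2 over all point pairs.
    return Counter((p[0] - q[0])**2 + (p[1] - q[1])**2
                   for p, q in combinations(pts, 2))

def error(S, N):
    # epsilon = (1/|D_N|) * sum over d in D_N of |(N^4/p^2) S_d - L_d|.
    lattice = [(x, y) for x in range(N) for y in range(N)]
    L, Sc = squared_distances(lattice), squared_distances(S)
    scale = N**4 / len(S)**2
    return sum(abs(scale * Sc[d] - L[d]) for d in L) / len(L)

# The full lattice has error 0; the four corners realize the p = 4 configuration.
N = 9
corners = [(0, 0), (0, N - 1), (N - 1, 0), (N - 1, N - 1)]
assert error([(x, y) for x in range(N) for y in range(N)], N) == 0
print(error(corners, N))
```

Such a routine is also a useful sanity check against the closed-form estimates derived in the examples below.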
\begin{figure}[!ht] \centering \begin{floatrow} \ffigbox[\FBwidth]{\caption{$p=4$.}\label{fig:p=4}}{% \includegraphics[scale=.2]{p=4.png} } \ffigbox[\FBwidth]{\caption{$p=5$.}\label{fig:p=5}}{% \includegraphics[scale=.2]{p=5} } \end{floatrow} \centering \begin{floatrow} \ffigbox[\FBwidth]{\caption{$p=9$.}\label{fig:p=9}}{% \includegraphics[scale=.2]{p=9.png} } \ffigbox[\FBwidth]{\caption{$p=4(N-1)$.}\label{fig:p=4(N-1)}}{% \includegraphics[scale=.2]{1fringe.png} } \end{floatrow} \centering \begin{floatrow} \ffigbox[\FBwidth]{\caption{$p=4(N-1)+4(N-3)$.}\label{fig:p=4(N-1)+4(N-3)}}{% \includegraphics[scale=.2]{2fringe.png} } \ffigbox[\FBwidth]{\caption{$p=\left \lceil\frac{N^2}{2} \right \rceil$.}\label{fig:p=ceiling[N.2]}}{% \includegraphics[scale=.2]{checkerboard.png} } \end{floatrow} \end{figure} We briefly describe these subsets. For $p=4,$ the maximal error subset is all four corners of the lattice. To transition to $p=5$, the middle point is added in. For $p=9$, the maximal subset is a $3 \times 3$ lattice stretched to the size of the $N \times N$ lattice. To transition from $p=5$ to $p=9$, the points that are in $p=9$, but not in $p=5$ are added in one by one. We then know for $p=4(N-1)$, the maximal error subset is the perimeter of the lattice, with no other points. For $p=\sum_{i=1}^m 4(N-(2i-1))$, the maximal error subset is the filled-in perimeter with a depth of $m$ points. For example, Figure \ref{fig:p=4(N-1)+4(N-3)} is a filled-in perimeter with a depth of 2 points. To transition from $p=\sum_{i=1}^m 4(N-(2i-1))$ to $p=\sum_{i=1}^{m+1} 4(N-(2i-1))$, the points only present in the latter configuration are filled-in one by one. The final maximal error configuration we found is for $p=\left \lceil N^2/2 \right \rceil$, for which every other point is filled-in to make a configuration we refer to as a checkerboard lattice. 
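The configurations just described are straightforward to generate; a small Python sketch (the helper names are ours), together with checks that the point counts match the values of $p$ in the text:

```python
def perimeter(N, m=1):
    # Filled-in perimeter of depth m: all points within m rings of the boundary.
    return [(x, y) for x in range(N) for y in range(N)
            if min(x, y, N - 1 - x, N - 1 - y) < m]

def checkerboard(N):
    # Every other point of the N x N lattice.
    return [(x, y) for x in range(N) for y in range(N) if (x + y) % 2 == 0]

N, m = 10, 2
assert len(perimeter(N, m)) == sum(4 * (N - (2 * i - 1)) for i in range(1, m + 1))
assert len(checkerboard(N)) == (N * N + 1) // 2  # ceil(N^2 / 2)
```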
One can note that depending on the value of $N$, $\left \lceil N^2/2 \right \rceil$ is less than $p=\sum_{i=1}^{m} 4(N-(2i-1))$ for different values of $m$. As a result, there is a transition between these two types of configurations and a transition back. In the following examples, we calculate error estimates for some of these configurations. For these calculations, we use the simplification of working with $\sqrt{a^2+b^2}$ where $0 \leq b \leq N-1$ and $b \leq a \leq N-1$, excluding $a=b=0$, instead of distinct distances. \begin{example} \label{ex:p=4} We begin with $p=4$, where the maximal error configuration is a point in each corner of the lattice. We first note that the scaling constant is $N^4/p^2=N^4/16$. We also have just two distinct distances, \begin{equation}\sqrt{(N-1)^2+(N-1)^2} \ = \ \sqrt{2}(N-1),\end{equation} which has $L_{N-1,N-1}=2$ and $S_{N-1,N-1}=2$, and \begin{equation}\sqrt{(N-1)^2+0^2} \ = \ N-1,\end{equation} which has $L_{N-1,0}=2N$ and $S_{N-1,0}=4$. To find $\varepsilon_{N-1,N-1}$ and $\varepsilon_{N-1,0}$, we have to scale $S_{N-1,N-1}$ and $S_{N-1,0}$ by $N^4/16$ and subtract $L_{N-1,N-1}$ and $L_{N-1,0}$, respectively. This gives us $\varepsilon_{N-1, N-1}=N^4/8-2$ and $\varepsilon_{N-1, 0}=N^4/4-2N.$ Thus the total error contribution from these two distances is $3N^4/8-2-2N$. Both $L_{N-1,N-1}$ and $L_{N-1,0}$ are of the form $2(N-a)(N-b)$, so we have to update the average value of $2(N-a)(N-b)$ from Lemma \ref{lem:avg_val} and the fraction of the time $L_{a,b}$ is of the form $2(N-a)(N-b)$ from Lemma \ref{lem:frac} to exclude $L_{N-1,N-1}$ and $L_{N-1,0}$. The new average is $(5N^2+4N+3)/6$ and the new fraction is $(4N-8)/(N^2+N-2)$. The fraction of $L_{a,b}$ that are $L_{N-1,N-1}$ and $L_{N-1,0}$ is $4/(N^2+N-2)$. We note that for $(a,b)$ not equal to $(N-1,N-1)$ or $ (N-1,0)$, $S_{a,b}=0$. Thus, the average error contribution for these distances is their average frequency in the lattice. 
We then put everything together to get our error estimate: \begin{align} \varepsilon & \ \leq \ \frac{4}{N^2+N-2}\left(\frac{3N^4}{8}-2-2N\right)+\frac{4N-8}{N^2+N-2}\left(\frac{5N^2+4N+3}{6}\right)\nonumber\\ &\ \ \ \ +\frac{N-2}{N+2}\left(\frac{N(3N-1)}{3}\right)\nonumber \\ & \ = \ \frac{5N^2}{2}-\frac{5N}{2}-\frac{15}{2(N-1)}-\frac{16}{N+2}+\frac{13}{2}. \end{align} This error estimate is an overestimate of the error when $N$ is small because we are looking at $\sqrt{a^2+b^2}$, instead of distinct distances. However, the only distances affected by this way of estimating the error are the two distances present in the configuration, $\sqrt{2}(N-1)$ and $N-1$. As $N \rightarrow \infty$, the fraction of the total distances that these distances represent goes to zero. Thus, this error estimate converges to the actual error. \end{example} \begin{example} \label{ex:p=5} Similarly, we can calculate the error when $p=5$. Recall, for this value of $p$, that the subset configuration that maximizes error is the four corners and the middle point of the lattice (we take $N$ odd, so that the middle point exists). This configuration has 3 distinct distances: \begin{equation}\sqrt{(N-1)^2+0^2} \ = \ N-1,\end{equation} for which we have $S_{N-1, 0}=4$ and $L_{N-1, 0}=2N$, \begin{equation}\sqrt{(N-1)^2+(N-1)^2} \ = \ \sqrt{2}(N-1),\end{equation} for which we have $S_{N-1, N-1}=2$ and $L_{N-1, N-1}=2$, and \begin{equation}\sqrt{((N-1)/2)^2+((N-1)/2)^2} \ = \ \sqrt{2}(N-1)/2,\end{equation} for which we have $S_{(N-1)/2, (N-1)/2}=4$ and $L_{(N-1)/2, (N-1)/2}=(N+1)^2/2$. To calculate $\varepsilon_{N-1, 0},$ $\varepsilon_{N-1, N-1}$, and $\varepsilon_{(N-1)/2, (N-1)/2}$, we need to multiply $S_{a,b}$ by $N^4/25$ and subtract $L_{a,b}$. Thus, $\varepsilon_{N-1, 0}=4N^4/25-2N$, $\varepsilon_{N-1, N-1}=2N^4/25-2$ and $\varepsilon_{(N-1)/2, (N-1)/2}=4N^4/25-(N+1)^2/2$. 
We know that $L_{N-1, 0},$ $L_{N-1, N-1}$, and $L_{(N-1)/2, (N-1)/2}$ are of the form $2(N-a)(N-b),$ so we need to edit the average value of $2(N-a)(N-b)$ from Lemma \ref{lem:avg_val} and the fraction of the time $L_{a,b}$ is of the form $2(N-a)(N-b)$ from Lemma \ref{lem:frac} to no longer include the three distances listed above. The new average value is \begin{equation}\frac{10N^3-15N^2-16N-15}{6(2N-5)}\end{equation} and the fraction of the time $L_{a,b}$ is of the form $2(N-a)(N-b)$ is \begin{equation}\frac{6}{N+2}-\frac{2}{N-1}.\end{equation} The fraction of the total distances that are the three distances listed above is \begin{equation}\frac{2}{N-1}-\frac{2}{N+2}.\end{equation} We then put this all together to calculate the error: \begin{align} \varepsilon & \ \leq \ \left(\frac{2}{N-1}-\frac{2}{N+2}\right) \left[\left(\frac{4N^4}{25}-2N\right)+\left(\frac{2N^4}{25}-2\right)+\left(\frac{4N^4}{25}-\frac{(N+1)^2}{2}\right)\right]\nonumber\\ &\ \ \ +\left(\frac{6}{N+2}-\frac{2}{N-1}\right)\left[\frac{10N^3-15N^2-16N-15}{6(2N-5)}\right] +\frac{N-2}{N+2}\left[\frac{N(3N-1)}{3}\right]\nonumber \\ & \ = \ \frac{17N^2}{5}-\frac{17N}{5}-\frac{76}{5(N-1)}-\frac{104}{5(N+2)}+\frac{26}{5}.\end{align} Similarly to $p=4$, this error estimate overestimates the error when $N$ is small because we are looking at $\sqrt{a^2+b^2}$, instead of distinct distances $\sqrt{d}$. However, as $N \rightarrow \infty$, this error estimate converges to the actual error. \end{example} \begin{example}\label{ex:p=9} We can then examine what happens when $p=9$. The $9$-point configuration that maximizes error is a $3 \times 3$ lattice that has been stretched to the size of the $N \times N$ lattice. 
We first note that $S_{a,b}>0$ if and only if $a,b\equiv 0 \pmod{(N-1)/2}.$ Thus, the fraction of $S_{a,b}$ such that $S_{a,b} \neq 0$ for $b\neq 0$ and $a >b$ is \begin{equation}\frac{4}{(N-1)^2}\end{equation} and the fraction of $S_{a,b}$ such that $S_{a,b}\neq 0$ for $b=0$ or $a=b$ is \begin{equation}\frac{2}{N-1}.\end{equation} The scaling constant is $N^4/81$. We estimate the error from above by assuming that if $S_{a,b}\neq 0$, then $S_{a,b}=L_{a,b}.$ This assumption does not increase the error estimate by an unreasonable amount because, for large enough $N$, the fraction of total distances that are represented in this configuration is very low. We can then use our previous averages of $L_{a,b}$ to calculate error: \begin{align} \varepsilon & \ \leq \ \frac{4}{N+2}\Bigg[\frac{2}{N-1}\left( \frac{N^4}{81}\left(\frac{N(5N-1)}{6}\right)-\frac{N(5N-1)}{6}\right)\nonumber \\ &\ \ \ \ +\left(1-\frac{2}{N-1}\right)\left(\frac{N(5N-1)}{6}\right) \Bigg] \nonumber \\ &\ \ \ \ +\frac{N-2}{N+2}\Bigg[\frac{4}{(N-1)^2}\left(\frac{N^4}{81}\left(\frac{N(3N-1)}{3}\right)-\frac{N(3N-1)}{3}\right)\nonumber\\ &\ \ \ \ +\left(1-\frac{4}{(N-1)^2}\right)\left(\frac{N(3N-1)}{3}\right) \Bigg] \nonumber \\ & \ = \ \frac{32N^4}{243}-\frac{52N^3}{243}+\frac{4N^2}{9}-\frac{220N}{243}-\frac{23044}{2187(N-1)}-\frac{14000}{2187(N+2)}\nonumber \\ &\ \ \ \ -\frac{6200}{729(N-1)^2}+\frac{112}{27(N-1)^3}+\frac{32}{9(N-1)^4}+\frac{428}{243}.\end{align} \end{example} \begin{example}\label{ex:checkerboard} Finally, we examine the error for the configuration for $p=\left\lceil N^2/2\right \rceil$. This configuration is the \textquotedblleft checkerboard lattice," a subset that is missing every other point from the full lattice and resembles a checkerboard, as seen in Figure \ref{fig:p=ceiling[N.2]}. To provide some intuition, the checkerboard lattice is a reasonable configuration for maximizing error because it very strongly prioritizes distances where $b=0$ or $a=b$. 
These distances have $L_{a,b}=2(N-a)(N-b)$, which tend to be smaller than other frequencies where $L_{a,b}=4(N-a)(N-b)$. Thus, this configuration has many frequencies which are not very common in the full lattice. In a checkerboard, $\sqrt{a^2+b^2}$ only appears as a distance if either $a$ and $b$ are both odd or both even. We note that this causes about $1/2$ of $S_{a,b}$ to be zero for $a> b$ and $b \neq 0$. Additionally, we note that this causes about $1/4$ of $S_{a,b}$ to be zero for $a= b$ or $b = 0$. We use the simplifying assumption that $S_{a,b}=L_{a,b}$ if $S_{a,b}\neq 0$. This ultimately increases our error estimate. We then have the following error estimate. \begin{align} \varepsilon & \ \leq \ \frac{4}{N+2}\left[\frac{3}{4}\left(4\left(\frac{N(5N-1)}{6}\right)-\frac{N(5N-1)}{6} \right)+\frac{1}{4} \left(\frac{N(5N-1)}{6}\right) \right]\nonumber\\ &\ \ \ \ + \frac{N-2}{N+2}\left[ \frac{1}{2}\left(4\left(\frac{N(3N-1)}{3}\right) -\frac{N(3N-1)}{3}\right)+\frac{1}{2} \left(\frac{N(3N-1)}{3}\right) \right]\nonumber\\ & \ = \ 2N^2-\frac{N}{3}-\frac{2}{3(N+2)}+\frac{1}{3}.\end{align} In Appendix \ref{checkerboardcalculations}, we more precisely calculate the frequency of the distances on the checkerboard lattice. These calculations can be used for a closer estimate. \end{example} We now turn our attention to finding lower bounds for configurations of $p$ points. To minimize error, we want to preserve the same ratio of total distances, including repeated distances, to frequency that appeared in the $N \times N$ lattice for each unique distance. Thus, to calculate a lower bound, we create an \textquotedblleft optimal" distribution of frequencies for $p$ points. This optimal distribution cannot always be achieved, as not every distance distribution is realizable by an actual configuration. To come up with the optimal distribution, we scale each $L_{a,b}$ by $p^2/N^4$ and round this number to the nearest integer. 
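The rounding construction just described takes only a few lines of code; a Python sketch (brute force over distinct distances, with helper names of our choosing):

```python
from collections import Counter
from itertools import combinations

def lattice_counts(N):
    # L_d for every squared distance d realized on the N x N lattice.
    pts = [(x, y) for x in range(N) for y in range(N)]
    return Counter((p[0] - q[0])**2 + (p[1] - q[1])**2
                   for p, q in combinations(pts, 2))

def optimal_error(N, p):
    # Scale each L_d down by p^2/N^4, round to the nearest integer,
    # rescale by N^4/p^2, and average |rescaled - L_d| over D_N.
    L = lattice_counts(N)
    down, up = p**2 / N**4, N**4 / p**2
    return sum(abs(round(L[d] * down) * up - L[d]) for d in L) / len(L)

# p = N^2 reproduces the lattice distribution exactly, so the error is 0.
assert optimal_error(6, 36) == 0
```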
We then find the error for this optimal distribution in the same way as before. Code can easily be written to find the optimal distance distribution and calculate the error. For example, Figure \ref{fig:lower_bound_theoretical} demonstrates what this error is for a $100 \times 100$ lattice. Note that this figure does not include all possible values of $p$; rather, it includes enough values to capture the behavior of the error. \begin{figure} \centering \includegraphics[scale=.35]{computer_lower_bound.png} \caption{Computer generated data for the error of the optimal distance distribution in the $100 \times 100$ lattice.} \label{fig:lower_bound_theoretical} \end{figure} We begin by providing intuition for the behavior of the second part of the graph, the decreasing curve for large $p$. Once the $L_{a,b}$'s are scaled down by $p^2/N^4$, rounded, and scaled up by $N^4/p^2$, they are a multiple of $N^4/p^2$. That means that the largest $\varepsilon_{a,b}$ can be is $N^4/2p^2$ and the smallest $\varepsilon_{a,b}$ can be is $0$. One might expect the average contribution to error to be $N^4/4p^2.$ However, for small $p$, many frequencies in the $N \times N$ lattice are much closer to $0$ than $N^4/2p^2$, as $N^4/2p^2$ is quite large. This explains the shape of the graph; however, it does not provide a concrete value for the error. On the other hand, for the smaller values of $p$, we can give a more precise description. Notice that the graph in Figure \ref{fig:lower_bound_theoretical} begins with a short horizontal line. More rigorously, we have \begin{equation}\label{errorboundsmallsubset} \varepsilon \ \geq \ \binom{N^2}{2}\Big/|\mathcal{D}_N| \ \text{ if } \ p \ \leq \ \frac{N^2}{\sqrt{2F_N}}. \end{equation} Here, we note that $\binom{N^2}{2}/|\mathcal{D}_N|$ is the precise value of the error for the empty subset of the lattice, i.e., the average distance frequency of the lattice itself. 
The idea is that the distance distribution of a small enough subset of the lattice, after rescaling, has a greater error than the empty subset, as its few distances are very overrepresented. As we use a scaling factor of $N^{4}/p^2$, we notice that if $p$ is such that \begin{equation} \frac{N^{4}}{p^2} \ > \ 2L_{\sqrt{d}} \end{equation} for every distance $\sqrt{d}$ on the integer lattice, which the condition $p\leq N^2/\sqrt{2F_N}$ guarantees, then the error of any subset of size $p$ is strictly greater than that of the empty subset: for any $\sqrt{d}$ with $S_{\sqrt{d}}\neq 0$, \begin{equation} \left| \frac{N^{4}}{p^2}S_{\sqrt{d}}-L_{\sqrt{d}} \right| \ \geq \ \frac{N^{4}}{p^2}-L_{\sqrt{d}} \ \geq \ 2F_N-F_N \ = \ F_N \ \geq \ L_{\sqrt{d}} ,\end{equation} while for $S_{\sqrt{d}}=0$ the corresponding term is exactly $L_{\sqrt{d}}$. Here, as previously defined, $F_N$ denotes the highest frequency on the $N\times N$ lattice. \section{Future Work} There are several ways to improve and extend our work. We have already partially characterized the subsets that maximize error and how these subsets transition from one configuration to another. We know we have a checkerboard configuration when $p=\left\lceil N^2/2 \right\rceil$ and a filled-in perimeter when $p=\sum_{i=1}^{m} 4(N-(2i-1))$ for different values of $m$. However, the transition between these two configurations has yet to be characterized. Additionally, we hope to improve our lower bound work. Work can be done to find a characterization of the sets that minimize error. Furthermore, we hope to refine our lower bound formula by finding a rigorous lower bound for the values of $p$ where the $N^4/8p^2$ estimate holds. Finally, as Erd\H{o}s conjectured that all near-optimal sets have lattice structure, it is natural to extend our results to other lattice structures, for which we expect the error to behave similarly. \section{Introduction} In 1946, Paul Erd\H{o}s \cite{Er1} proposed the now-famous Erd\H{o}s distinct distances problem: given $N$ points in a plane, what is the minimum number of distinct distances, $f(N)$, they can determine? 
He accompanied this question with the first bounds on $f(N)$, \begin{equation}\sqrt{N-\frac{3}{4}}-\frac{1}{2} \ \leq \ f(N) \ \leq \ \frac{cN}{\sqrt{\log N}},\end{equation} and further conjectured that the upper bound was tight---to this day, nobody has found evidence to contradict this conjecture. However, since 1946, incidence theory and algebraic geometry have provided a series of improvements on the original lower bound, culminating with Guth and Katz's seminal result in 2015 \cite{GK}, which proved a lower bound of $\Omega(N/\log N)$. Since Erd\H{o}s's original upper bound, coming from an estimate for the number of distinct distances on the $\sqrt{N} \times \sqrt{N}$ integer lattice \cite{EF}, has not been improved on to this day, any set with $O(N/\sqrt{\log N})$ distinct distances is known as \emph{near-optimal}. Erd\H{o}s further conjectured in 1986 \cite{Er2} that any near-optimal set has a lattice structure, although the truth of this conjecture remains an open problem for large values of $N$. In addition to the Erd\H{o}s distinct distances problem, a significant amount of work has been published on related problems which analyze aspects of distributions of distinct distances on planar point sets. The unit distance problem, for instance, focuses on the number of times a single given distance---often, the unit distance---can appear in a planar set of $N$ points. However, most of the work done on these subjects has been asymptotic; previous non-asymptotic work has only been conducted on $N \leq 13$ \cite{BMP}. In this paper, we take a novel approach and examine the whole distance distribution for the lattice and its subsets in a non-asymptotic setting. Although working in $\mathbb{Z}^2$ is a simplification, our work is complicated by considering the whole distance distribution---namely, taking into account the frequency with which each distance appears on the lattice---rather than working asymptotically with only the number of distinct distances. 
In particular, we first examine the distance distribution for the lattice, characterizing its behavior and applying number-theoretic methods to determine an upper bound for the frequency of its most common distance, showing the asymptotic bound in Equation~\eqref{upperbound}. We then turn to the distance distributions of subsets of the lattice and compare them to the distance distribution of the lattice itself. Although the sets are subsets of a highly regular set, the behavior of the distance distributions for these sets can vary widely. Some subsets have distance distributions that highly mimic that of the full lattice, while others have distance distributions that are similar to that of a random set. We devise in Section~\ref{errorsection} an error that measures how similar or different a subset's distance distribution is from that of the lattice itself. For the upper bounds, we find specific configurations of $p$ points that maximize the error and then calculate the error of these configurations, demonstrated through the calculations in Examples~\ref{ex:p=4} through~\ref{ex:checkerboard}. For the lower bounds, we find a concrete bound in the case of small subsets of the lattice, giving us the bound in Equation~\eqref{errorboundsmallsubset}. In the case of larger subsets, we take a more theoretical approach and construct theoretical optimal distance distributions, ones that cannot necessarily be realized by an actual subset of the lattice. We then describe the error given by these optimal distance distributions and prove a lower bound on error for some values of $p$. Thus, in this paper we seek to highlight preliminary results on this new perspective on the Erd\H{o}s distinct distances problem. We begin with the following section, which introduces some definitions and proves upper bounds for the frequency of the most common distance on the lattice.
\section{Introducing Distance Distributions} \begin{figure} \centering \includegraphics[scale=.4]{distance_distribution.png} \caption{The distance distribution for the $200 \times 200$ lattice.} \label{fig:distance_distribution} \end{figure} Throughout this section, we characterize the distance distribution of the $N\times N$ integer lattice, as seen in Figure \ref{fig:distance_distribution}. For our characterization, we begin with some notational definitions and a lemma. \begin{definition} For a fixed $N$, denote by $\mathcal{L}_N\subset \mathbb{Z}^2$ the $N\times N$ integer lattice, where \begin{equation} \mathcal{L}_N=\{ (x,y)\in \mathbb{Z}^2\ \mid \ 0\leq x \leq N-1,\; 0\leq y\leq N-1 \} .\end{equation} \end{definition} \begin{definition}\label{def:D_n} For a fixed $N$, denote by $\mathcal{D}_N$ the set of distinct distances which appear at least once on the $N\times N$ lattice. \end{definition} \begin{definition} For any $d \ \geq \ 1$, denote by $L_{\sqrt{d}}$ the number of times that the distance $\sqrt{d}$ appears on $\mathcal{L}_N$. \end{definition} \begin{definition} For any $d\geq 1$, denote by $S_{\sqrt{d}}$ the number of times that the distance $\sqrt{d}$ appears in a given subset $S\subseteq \mathcal{L}_N$. \end{definition} \begin{lemma}\label{distancecount} For any $d\geq 1$, \begin{equation}\label{equationforLd} L_{\sqrt{d}} \ = \ \sum_{\substack{a^2+b^2=d \\ a\geq 1,\; b\geq 0}}^{} 2\left( N-a \right) \left( N-b \right). \end{equation} \end{lemma} \begin{proof} First, note that each occurrence of the distance $\sqrt{d}$ comes from two points $(w,x)$, $(y,z)$ satisfying $\sqrt{(w-y)^2 + (x-z)^2}=\sqrt{d}$. Let $a,b$ be an ordered pair with $a^2+b^2 = d$ and suppose both $a,b>0$. Notice that for any point $(w,x)$ on the integer lattice, $\left( w+a,x+b \right) $ is in $\mathcal{L}_N$ if and only if $0\leq w\leq N-a-1$ and $0\leq x\leq N-b-1$; hence there are $(N-a)(N-b)$ such points $(w,x)$. 
The same logic can be repeated for $(w+a,x-b)$ to conclude that there are precisely $2(N-a)(N-b)$ pairs of points separated by an $x$-distance of $a$ and a $y$-distance of $b$. In the case where $a>b=0$, by our assumption that $a>0$, we need only find the number of pairs $(x,y) \in \mathcal{L}_N$ for which $(x+a,y)\in \mathcal{L}_N$ or $(x,y+a)\in \mathcal{L}_N$; there are $N(N-a)$ pairs of each type, for a total of $2N(N-a)$ such pairs. As this accounts for any possible occurrence of $\sqrt{d}$ on $\mathcal{L}_{N}$, we are done. As a final check, we note that the well-known identity \begin{equation} \sum_{a=1}^{N-1}\sum_{b=0}^{N-1}2(N-a)(N-b) \ = \ \frac{N^2\left( N^2-1 \right) }{2} \ = \ \binom{N^2}{2} \end{equation} confirms that we have counted all $\binom{N^2}{2}$ distances on the lattice. \end{proof} As seen in Figure \ref{fig:distance_distribution}, the frequencies of distances are arranged in distinct curves. The curve on which $L_{\sqrt{d}}$ falls is closely tied to the distinct ways $d$ can be written as the sum of two squares; if $d$ has $m$ representations as the sum of two squares, then $L_{\sqrt{d}}$ falls on the $m$th highest curve. This subject has been studied in some detail, and we summarize the relevant results below. \begin{definition} For any $d \geq 1$, let $r_2(d)$ be the number of ordered pairs $\left( a,b \right) \in \mathbb{Z}^2$ such that $a^2+b^2=d$. \end{definition} We state the following classical result due to Fermat; for a survey of the many possible proofs of this theorem, see \cite{Co}.
\begin{theorem}[Fermat, 1640]\label{r2n} If $d$ is a positive integer with prime factorization $d=2^{f}p_1^{g_1}\cdots p_{m} ^{g_{m}}q_1 ^{h_1}\cdots q_{n} ^{h_{n}}$, where for any $i$, $p_{i}\equiv 1\pmod{4}$ and $q_{i}\equiv 3\pmod{4}$, then \begin{equation} r_2(d)=\begin{cases} 4\left( g_1+1 \right) \cdots \left( g_{m}+1 \right) & \text{if all the } h_{i} \text{ are even,} \\ 0 & \text{otherwise.} \end{cases} \end{equation} \end{theorem} For some motivation towards this result, recall that the equation $a^2+b^2=p$, where $a,b>0$ and $p$ is prime, has a unique solution in the case where $p\equiv 1 \pmod{4}$ and no solution otherwise. The proof of the theorem relies on building inductively upon this result. The coefficient $4$ simply accounts for the sign options $\pm a, \pm b$. \begin{rek} The pairs $(a,b)\in \mathbb{Z}^2$ which are counted here may be negative; this differs from our original condition for ordered pairs $(a,b)$ in Lemma \ref{distancecount}, which required $a\geq 1,\; b\geq 0$. This is intentional, as the two quantities are calculated through different methods and are used for different purposes. In particular, because of the way these pairs are counted, we see that for a distance $\sqrt{d}$ on the $m$-th curve, $r_2(d)=4m$. \end{rek} \begin{definition} For any $k\geq 1$, let $\sqrt{ n_{k} }$ be the smallest distance on the $k$-th curve, i.e., $n_{k}$ is the smallest positive integer such that $r_2(n_{k})=4k$. \end{definition} We use these results to find the most common distance on the integer lattice, which proves to be useful later. Recalling the original lattice distance distribution seen in Figure \ref{fig:distance_distribution}, we note that the most common distance on each individual curve is generally the leftmost, i.e., the smallest.
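Theorem~\ref{r2n} is straightforward to check numerically. The short Python sketch below is our own illustration, not part of the original argument: it implements the factorization formula and compares it against a brute-force count of the lattice points on the circle of radius $\sqrt{d}$.

```python
# Sketch: verify the sum-of-two-squares formula of Theorem r2n
# against a brute-force count (illustration only).
from math import isqrt

def r2_brute(d):
    """Count ordered pairs (a, b) in Z^2 with a^2 + b^2 = d."""
    s = isqrt(d)
    return sum(1 for a in range(-s, s + 1)
                 for b in range(-s, s + 1) if a * a + b * b == d)

def r2_formula(d):
    """4 * prod(g_i + 1) over primes = 1 (mod 4); 0 if some prime
    = 3 (mod 4) divides d to an odd power; factors of 2 are ignored."""
    result = 4
    p = 2
    while p * p <= d:
        if d % p == 0:
            e = 0
            while d % p == 0:
                d //= p
                e += 1
            if p % 4 == 1:
                result *= e + 1
            elif p % 4 == 3 and e % 2 == 1:
                return 0
        p += 1
    if d > 1:            # one remaining prime factor, exponent 1
        if d % 4 == 1:
            result *= 2
        elif d % 4 == 3:
            return 0
    return result

assert all(r2_brute(d) == r2_formula(d) for d in range(1, 500))
```

For instance, $r_2(25)=12$, so $\sqrt{25}$ lies on the third curve.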
For our estimates, then, we hope to take the smallest distance on the $k$-th curve---defined above as $n_k$---as an approximation of the most common distance on each curve. This proves to be a reasonable assumption: indeed, suppose two integers $d_1<d_2$ are such that $r_2(d_1)=r_2(d_2)=4k$, yet $\sqrt{d_2}$ is the more common distance on some $N\times N$ integer lattice. Using Lemma \ref{distancecount}, this implies we may find at least two ordered pairs of integers $(a_1,b_1),(a_2,b_2)$ satisfying $a_i^2+b_i^2=d_i$ for each $i$, for which $2(N-a_1)(N-b_1) <2(N-a_2)(N-b_2) $, i.e., $a_2b_2-a_2-b_2>a_1b_1-a_1-b_1$. As $d_i \geq 2a_ib_i$ by the inequality of arithmetic and geometric means, with $d_i\leq 2\sqrt{2}a_ib_i$, we thus have a generous bound of $d_2 \leq \sqrt{2} d_1$. In our search for the most common distance on the lattice, we thus narrow our focus to the set of integers $n_1,n_2,\ldots$, and their asymptotic frequencies on the $N\times N$ lattice. By our previous argument, these $n_k$ values are at least within a factor of $\sqrt{2}$ of the true most common distance on the $k$-th curve. With these guiding principles, we now attempt to bound the size of the integers in this sequence. \begin{lemma} Let $p_1<p_2<\cdots$ be all the primes satisfying $p_{i}\equiv 1\pmod{4}$, listed in increasing order (so that $p_1=5,\ p_2=13,\ p_3=17$, and so on). Suppose $k=q_1^{a_1}\cdots q_{m}^{a_{m}}$, where $q_1,\ldots , q_{m}$ are any $m$ distinct primes and $q_1>q_2>\cdots >q_m$. Then an upper bound for $n_k$ is given by \begin{equation} \left( \underbrace{p_1\cdots p_{a_1}}_{a_1\; {\rm primes} \; } \right) ^{q_1-1}\!\left( \underbrace{p_{a_1+1}\cdots p_{a_1+a_2}}_{a_2\; {\rm primes} \; } \right) ^{q_2-1}\!\!\!\! \!\! \!\! \cdots \left( \underbrace{ p_{a_1+\cdots+a_{m-1}+1}\cdots p_{a_1+\cdots+a_{m}}}_{a_{m}\; {\rm primes} \; } \right) ^{q_{m}-1}. \end{equation} \end{lemma} \begin{proof} Let $n_k'$ be the bound presented in the lemma.
As $n_k$ is the minimum value for which $r_2(n_k)=4k$, it suffices to show that $n_k'$ has this property, in order to determine that it is an upper bound. By a direct application of Theorem \ref{r2n}, \begin{equation} r_2(n_k') \ = \ 4((q_1-1)+1)^{a_1}\cdots((q_m-1)+1)^{a_m} \ = \ 4k. \end{equation}\end{proof} Note that the sequence $n_1', n_2',\ldots$ of bounds presented by this lemma is not strictly increasing. Nevertheless, these bounds are attained for infinitely many values of $k$. In particular, for $k=2^{m}$, we see $n_{k}'=p_1\cdots p_{m}=n_k$, as $r_2(2^{f}p_1^{g_1}\cdots p_{m} ^{g_{m}}q_1 ^{h_1}\cdots q_{n} ^{h_{n}})=4(g_1+1)\cdots (g_m+1)=4\cdot 2^m$ implies $g_1=\cdots=g_m=1$; additionally, for $k$ prime, $r_2(n)=r_2(2^{f}p_1^{g_1}\cdots p_{m} ^{g_{m}}q_1 ^{h_1}\cdots q_{n} ^{h_{n}})=4(g_1+1)\cdots (g_m+1)=4\cdot k$ implies $n$ has a sole prime divisor congruent to $1\pmod{4}$, with exponent $k-1$. It can be calculated that the proposed $n_k'$ bound matches the actual value of $n_k$ for most small values of $k$; in fact, we conjecture that the two are equivalent for the vast majority of $k$, although the closest result we may achieve is to show that the two sequences are asymptotically equivalent. The following lemma assists in finding explicit bounds for $n_k$. \begin{lemma}\label{enserio} Let $p_1<p_2<\cdots$ be all the primes satisfying $p_{i}\equiv 1\pmod{4}$, listed in increasing order. For any $k\geq 1$, \begin{equation}\label{boundingnk} \prod_{i=1}^{\left\lfloor\log_2(k) \right\rfloor }p_{i} \ \ll \ n_{k} \ \leq \ 5^{k-1} .\end{equation} \end{lemma} \begin{proof} Firstly, as $r_2(5^{k-1})= 4(k-1+1)=4k$, we see immediately that $n_k \leq 5^{k-1}$. We proceed to show the lower bound. Write $k=q_1^{a_1}\cdots q_{m}^{a_{m}}$, where $q_1,\ldots , q_{m}$ are $m$ distinct primes. Note that \begin{equation} \left\lfloor\log_2(k) \right\rfloor \ \geq \ a_1 + \cdots + a_m, \end{equation} with equality, for instance, when $k=2^l$ for some $l \in \mathbb{N}$.
Suppose $d$ is such that $r_2(d)=4k$. We wish to find the minimum possible value for $d$. We may assume that $d$ is not divisible by $2$ or by any prime congruent to $3 \pmod 4$, as such factors would strictly increase its size without increasing the value of $r_2(d)$, as per Theorem \ref{r2n}. Hence, we may write $d=s_1 ^{g_1}\cdots s_t ^{g_t}$, for primes $s_i\equiv 1\pmod {4}$. Assuming without loss of generality that $g_1 \geq \cdots\geq g_t$, we see that the value of $d$ can be strictly decreased without affecting $r_2\left( d \right) =4\left( g_1+1 \right) \cdots \left( g_{t}+1 \right) $ by replacing $s_1=p_1,\ s_2=p_2,\ \ldots,\ s_{t}=p_{t}$, as this will assign the highest exponents to the lowest possible prime values which are $1 \pmod {4}$. Hence we see $d$ is of the form \begin{equation} d \ = \ p_1^{g_1}\cdots p_t ^{g_t} \ \approx \ p_t^{tg_1} \ \approx \ (t\log t)^{tg_1}. \end{equation} As $(g_1+1)\cdots (g_t+1)=k$, where $g_1\geq \cdots \geq g_t$, we see that $(g_1+1)^{t} \geq k$. Hence we may take $t\approx \log_{g_1+1}(k)$, and see that $d$ is minimized with large $t$ and small exponents. As the minimal possible value of each exponent is $1$, which gives us $t\approx \log_2(k)$, we may take the asymptotic lower bound \begin{equation}\prod_{i=1}^{\log_2(k)} p_i.\end{equation} \end{proof} We wish to find a more explicit lower bound for $n_{k}$, which by Lemma \ref{enserio} is equivalent to estimating the product of the first $t$ primes $p_1<\cdots <p_{t}$ which are congruent to $1\pmod{4}$. While this quantity is not well-studied, we may adapt existing work on bounding the product of the first $t$ primes $q_1,\ldots , q_{t}$ of any class $\pmod{4}$, denoted $q_{t}\#$.
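For small $k$, the quantities above are easy to test directly. The following sketch (our own illustration, with a deliberately naive brute force) computes $n_k$ and checks the upper bound $n_k \leq 5^{k-1}$ of Lemma~\ref{enserio}; it also exhibits the non-monotonicity of the sequence noted earlier.

```python
# Sketch: compute n_k, the smallest d with r_2(d) = 4k, by brute
# force, and check the upper bound n_k <= 5^(k-1) (illustration only).
from math import isqrt

def r2(d):
    """Ordered pairs (a, b) in Z^2 with a^2 + b^2 = d."""
    s = isqrt(d)
    return sum(1 for a in range(-s, s + 1)
                 for b in range(-s, s + 1) if a * a + b * b == d)

def n(k):
    d = 1
    while r2(d) != 4 * k:
        d += 1
    return d

vals = [n(k) for k in range(1, 7)]
print(vals)  # [1, 5, 25, 65, 625, 325]
assert all(vals[k - 1] <= 5 ** (k - 1) for k in range(1, 7))
# Note n_5 = 625 > n_6 = 325: the sequence is not monotone.
```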
The best general estimate is \begin{equation} q_{t}\#=\prod_{i=1}^{t}q_{i}=e^{\left( 1+{o}(1) \right) t \log t} .\end{equation} We can then approximate our quantity as \begin{equation}\label{estimate} \prod_{i=1}^{t}p_{i} \ \approx \ \left( \prod_{i=1}^{2t}q_{i} \right) ^{1/2} \ = \ e^{\frac{1}{2}\left( 1+o(1) \right)2t\log 2t } \ = \ e^{\left( 1+o(1) \right)t\log 2t } .\end{equation} By the prime number theorem for arithmetic progressions, the primes are equidistributed between the residue classes $1$ and $3 \pmod{4}$, so the approximation in (\ref{estimate}) is asymptotically accurate. Using this quantity, we have a general lower bound for $n_{k}$ using $t=\log_2(2k)$, namely \begin{equation} n_{k} \ \gg \ e^{\frac{1}{2}\left( 1+o(1) \right) \log_2(2k)\log \log_2(2k)} \ = \ (2k)^{\frac{1}{2}(1+o(1))\log_2(\log_2(2k))} .\end{equation} Now, given that a distance $\sqrt{d}$ is on the $k$-th curve of the distance distribution, we can maximize each summand in Lemma \ref{distancecount} to find the upper bound \begin{equation} \label{eq:upperbound} L_{\sqrt{d}} \ \leq \ 2k N \left( N-\sqrt{d} \right) .\end{equation} As (\ref{eq:upperbound}) assumes that all $k$ pairs of integers $(a,b)$ with $a^2+b^2=d$ are identically $(\sqrt{d},0)$, we know that for $k>1$, (\ref{eq:upperbound}) is indeed a strict upper bound for this quantity. Putting these pieces together, we can determine \begin{equation}\label{upperbound} L_{\sqrt{n_{k}}} \ \ll \ 2k N\left( N- (2k)^{\frac{1}{4}(1+o(1))\log_2(\log_2(2k))} \right) .\end{equation} In particular, for large $N$, maximizing this quantity in terms of $k$ gives us a strict upper bound for the frequency of the most common distance on the $N\times N$ lattice, which we denote $F_{N}$. Thus, we reach an asymptotic bound for the frequency of the most common distance in the lattice, which proves useful in the following section. \section{Error Estimates}\label{errorsection} After studying the distance distribution for the lattice, the logical next step is to study the behavior of distance distributions for subsets of the lattice.
Although we know that the lattice is a near-optimal set, as previously discussed, the behavior of the distance distributions of its subsets can vary widely. Thus, we examine how similar or different the distance distributions for subsets of the lattice can be from that of the full lattice, an analogue of the Erd\H{o}s distinct distances problem on subsets of the lattice. We now define our method of calculating the difference between the integer lattice's distance distribution and that of one of its subsets. Recall that $L_{\sqrt{d}}$ is the frequency of a distance $\sqrt{d}$ in the lattice and $S_{\sqrt{d}}$ is its frequency in a particular subset $S$. We note that the $N \times N$ lattice has $N^2(N^2-1)/2 \approx N^4/2$ total distances and a subset with $p$ points has $p(p-1)/2 \approx p^2/2$ total distances. Thus we scale up each $S_{\sqrt{d}}$ by $N^4/p^2$ to obtain a distance distribution with about the same total number of distances as that of the lattice. Then, we average $\left|(N^4/p^2)S_{\sqrt{d}}-L_{\sqrt{d}}\right|$ over all $d \in \mathcal{D}_N$. More explicitly, we have the formula in the following definition. \begin{definition} Let $\varepsilon$ be the error between the distance distribution of the integer lattice and one of its subsets. Then, \begin{equation} \varepsilon \ = \ \frac{1}{|\mathcal{D}_N |} \sum_{d\in\mathcal{D}_N}\left|\frac{N^4}{p^2}S_{\sqrt{d}}-L_{\sqrt{d}}\right| . \end{equation} \end{definition} In many of our calculations, instead of working with the actual distances, we work with individual pairs $(a,b)$ for which $\sqrt{a^2+b^2} \in \mathcal{D}_N$. This gives exact values of the contribution to the error for distances on the first and second curves; in all other cases, this is a simplification. Thus we introduce some new notation. \begin{definition} Let $L_{a,b}$ denote the number of unordered pairs $(a_1, b_1)$, $(a_2, b_2) \in \mathcal{L}_N$ such that $|a_1-a_2|=a$ and $|b_1-b_2|=b$ or $|a_1-a_2|=b$ and $|b_1-b_2|=a$.
\end{definition} \begin{definition} Let $S_{a,b}$ denote the number of unordered pairs $(a_1, b_1)$, $(a_2, b_2)$ in a given subset $S\subset \mathcal{L}_N$ such that $|a_1-a_2|=a$ and $|b_1-b_2|=b$ or $|a_1-a_2|=b$ and $|b_1-b_2|=a$. \end{definition} \begin{definition} Let $\varepsilon_{a,b}=|(N^4/p^2)S_{a,b}-L_{a,b}|$. \end{definition} It may be shown that counting repeat distances as distinct either strictly increases error or has no effect on it. If $\sqrt{a^2+b^2}=\sqrt{c^2+d^2}$ for $\{a,b\}\neq\{c,d\}$, we then see that the total contribution to error for this distance is \begin{equation}\left|\frac{N^4}{p^2}S_{a,b} - L_{a,b} + \frac{N^4}{p^2}S_{c,d} - L_{c,d}\right|, \end{equation} whereas counting them as distinct gives a total contribution to error \begin{equation}\left|\frac{N^4}{p^2}S_{a,b} - L_{a,b}\right| + \left|\frac{N^4}{p^2}S_{c,d} - L_{c,d}\right|.\end{equation} Counting each pair as distinct thus gives an upper bound for total error. Furthermore, these two quantities are identical if and only if $\left(N^4/p^2\right)S_{a,b} - L_{a,b}$ and $\left(N^4/p^2\right)S_{c,d} - L_{c,d}$ have the same sign, which we suspect is true of most subsets which maximize error. Recall from the previous calculations we made on the frequency of distances on the lattice that when $b=0$ or $a=b$, $L_{a,b}=2(N-a)(N-b)$. Otherwise, for $b>0$ and $a>b$, we have $L_{a,b}=4(N-a)(N-b)$. For later error calculations, we need the average values of $2(N-a)(N-b)$ and $4(N-a)(N-b)$, as well as the fraction of the $L_{a,b}$ that are of the form $2(N-a)(N-b)$ and the fraction that are of the form $4(N-a)(N-b)$. We provide these values in the following two lemmas, which follow from direct computation.
\begin{lemma} \label{lem:avg_val} The average value of $2(N-a)(N-b)$ over $a=b$ or $b=0$ is \begin{equation}\left(2(N-1)\right)^{-1}\left(\sum_{a=1}^N 2(N-a)^2+ \sum_{a=1}^N 2N(N-a)\right)=\frac{N(5N-1)}{6}\end{equation} and the average value of $4(N-a)(N-b)$ over $b>0$ and $a>b$ is \begin{equation}\left(\frac{1}{2}(N-1)(N-2)\right)^{-1}\left(\sum_{b=1}^{N-1} \sum_{a=b+1}^N 4(N-a)(N-b)\right)= \frac{N(3N-1)}{3}.\end{equation} \end{lemma} \begin{lemma}\label{lem:frac} The fraction of $L_{a,b}$ that are of the form $2(N-a)(N-b)$ is $4 / (N+2)$ and the fraction of $L_{a,b}$ that are of the form $4(N-a)(N-b)$ is $(N-2) / (N+2)$. \end{lemma} Similarly to the original Erd\H{o}s distance problem, we are interested in finding upper and lower bounds for the behavior we are studying. For the upper bounds, we find patterns of configurations that maximize the error. We then calculate the error for these specific configurations. For the lower bounds, we construct a theoretical optimal distance distribution and calculate a lower bound on its error. \section{Bounds} We begin this section with explicit calculations of upper bounds on error for configurations of $p$ points. A configuration of $p$ points that maximizes error needs to have as different a distance distribution as possible from the original full lattice when scaled up. Thus, the ratio of each distance's frequency to the total number of distances, including repeated distances, must be as different as possible from that of the full lattice. Specifically, we want to have a subset that has many distances that were infrequent in the full lattice and a minimal number of distances that were very frequent in the full lattice. For certain values of $p$, brute-force calculations identify the configuration that maximizes error. See Figures \ref{fig:p=4}, \ref{fig:p=5}, \ref{fig:p=9}, \ref{fig:p=4(N-1)}, \ref{fig:p=4(N-1)+4(N-3)}, and \ref{fig:p=ceiling[N.2]}.
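Such brute-force searches are feasible only for very small parameters. The sketch below (our own Python illustration; the function \texttt{error} follows the definition of $\varepsilon$, using squared distances as keys) exhaustively scores all $p$-point subsets of a tiny lattice and evaluates the corner configuration.

```python
# Sketch: exhaustively search all p-point subsets of a small lattice
# for the configuration maximizing the error (illustration only).
from itertools import combinations
from collections import Counter

def dist_counts(points):
    """Frequency of each squared distance over unordered pairs."""
    c = Counter()
    for (x1, y1), (x2, y2) in combinations(points, 2):
        c[(x1 - x2) ** 2 + (y1 - y2) ** 2] += 1
    return c

def error(subset, N):
    """The error epsilon between subset and the full N x N lattice."""
    lattice = [(x, y) for x in range(N) for y in range(N)]
    L = dist_counts(lattice)
    S = dist_counts(subset)
    scale = N ** 4 / len(subset) ** 2
    return sum(abs(scale * S[d] - L[d]) for d in L) / len(L)

N, p = 3, 4
lattice = [(x, y) for x in range(N) for y in range(N)]
best = max(combinations(lattice, p), key=lambda s: error(s, N))
corners = ((0, 0), (0, N - 1), (N - 1, 0), (N - 1, N - 1))
assert error(best, N) >= error(corners, N)
print(error(corners, N))  # 10.075 for N = 3
```

Exhaustive search grows combinatorially, so this approach is limited to very small $N$ and $p$.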
\begin{figure}[!ht] \centering \begin{floatrow} \ffigbox[\FBwidth]{\caption{$p=4$.}\label{fig:p=4}}{%
\includegraphics[scale=.2]{p=4.png} } \ffigbox[\FBwidth]{\caption{$p=5$.}\label{fig:p=5}}{%
\includegraphics[scale=.2]{p=5} } \end{floatrow} \centering \begin{floatrow} \ffigbox[\FBwidth]{\caption{$p=9$.}\label{fig:p=9}}{%
\includegraphics[scale=.2]{p=9.png} } \ffigbox[\FBwidth]{\caption{$p=4(N-1)$.}\label{fig:p=4(N-1)}}{%
\includegraphics[scale=.2]{1fringe.png} } \end{floatrow} \centering \begin{floatrow} \ffigbox[\FBwidth]{\caption{$p=4(N-1)+4(N-3)$.}\label{fig:p=4(N-1)+4(N-3)}}{%
\includegraphics[scale=.2]{2fringe.png} } \ffigbox[\FBwidth]{\caption{$p=\left \lceil\frac{N^2}{2} \right \rceil$.}\label{fig:p=ceiling[N.2]}}{%
\includegraphics[scale=.2]{checkerboard.png} } \end{floatrow} \end{figure} We briefly describe these subsets. For $p=4$, the maximal error subset consists of the four corners of the lattice. To transition to $p=5$, the middle point is added in. For $p=9$, the maximal subset is a $3 \times 3$ lattice stretched to the size of the $N \times N$ lattice. To transition from $p=5$ to $p=9$, the points that are in the $p=9$ configuration but not in the $p=5$ configuration are added in one by one. For $p=4(N-1)$, the maximal error subset is the perimeter of the lattice, with no other points. For $p=\sum_{i=1}^m 4(N-(2i-1))$, the maximal error subset is the filled-in perimeter with a depth of $m$ points. For example, Figure \ref{fig:p=4(N-1)+4(N-3)} is a filled-in perimeter with a depth of 2 points. To transition from $p=\sum_{i=1}^m 4(N-(2i-1))$ to $p=\sum_{i=1}^{m+1} 4(N-(2i-1))$, the points only present in the latter configuration are filled in one by one. The final maximal error configuration we found is for $p=\left \lceil N^2/2 \right \rceil$, for which every other point is filled in to make a configuration we refer to as a checkerboard lattice.
One can note that depending on the value of $N$, $\left \lceil N^2/2 \right \rceil$ is less than $p=\sum_{i=1}^{m} 4(N-(2i-1))$ for different values of $m$. As a result, there is a transition between these two types of configurations and a transition back. In the following examples, we calculate error estimates for some of these configurations. For these calculations, we use the simplification of working with $\sqrt{a^2+b^2}$ where $0 \leq b \leq N-1$ and $b \leq a \leq N-1$, excluding $a=b=0$, instead of distinct distances. \begin{example} \label{ex:p=4} We begin with $p=4$, where the maximal error configuration is a point in each corner of the lattice. We first note that the scaling constant is $N^4/p^2=N^4/16$. We also have just two distinct distances, \begin{equation}\sqrt{(N-1)^2+(N-1)^2} \ = \ \sqrt{2}(N-1),\end{equation} which has $L_{N-1,N-1}=2$ and $S_{N-1,N-1}=2$, and \begin{equation}\sqrt{(N-1)^2+0^2} \ = \ N-1,\end{equation} which has $L_{N-1,0}=2N$ and $S_{N-1,0}=4$. To find $\varepsilon_{N-1,N-1}$ and $\varepsilon_{N-1,0}$, we have to scale $S_{N-1,N-1}$ and $S_{N-1,0}$ by $N^4/16$ and subtract $L_{N-1,N-1}$ and $L_{N-1,0}$, respectively. This gives us $\varepsilon_{N-1, N-1}=N^4/8-2$ and $\varepsilon_{N-1, 0}=N^4/4-2N.$ Thus the total error contribution from these two distances is $3N^4/8-2-2N$. Both $L_{N-1,N-1}$ and $L_{N-1,0}$ are of the form $2(N-a)(N-b)$, so we have to update the average value of $2(N-a)(N-b)$ from Lemma \ref{lem:avg_val} and the fraction of the time $L_{a,b}$ is of the form $2(N-a)(N-b)$ from Lemma \ref{lem:frac} to exclude $L_{N-1,N-1}$ and $L_{N-1,0}$. The new average is $(5N^2+4N+3)/6$ and the new fraction is $(4N-8)/(N^2+N-2)$. The fraction of $L_{a,b}$ that are $L_{N-1,N-1}$ and $L_{N-1,0}$ is $4/(N^2+N-2)$. We note that for $(a,b)$ not equal to $(N-1,N-1)$ or $ (N-1,0)$, $S_{a,b}=0$. Thus, the average error contribution for these distances is their average frequency in the lattice. 
We then put everything together to get our error estimate. \begin{align} \varepsilon & \ \leq \ \frac{4}{N^2+N-2}\left(\frac{3N^4}{8}-2-2N\right)+\frac{4N-8}{N^2+N-2}\left(\frac{5N^2+4N+3}{6}\right)\nonumber\\ &\ \ \ \ +\frac{N-2}{N+2}\left(\frac{N(3N-1)}{3}\right)\nonumber \\ & \ = \ \frac{5N^2}{2}-\frac{5N}{2}-\frac{15}{2(N-1)}-\frac{16}{N+2}+\frac{13}{2}. \end{align} This error estimate is an overestimate of the error when $N$ is small because we work with the pairs $\sqrt{a^2+b^2}$ instead of distinct distances. However, the only distances affected by this way of estimating the error are the two distances realized by the subset, $\sqrt{2}(N-1)$ and $N-1$. As $N \rightarrow \infty$, the fraction of the total distances that these distances represent goes to zero. Thus, this error estimate converges to the actual error. \end{example} \begin{example} \label{ex:p=5} Similarly, we can calculate the error when $p=5$. Recall, for this value of $p$, that the subset configuration that maximizes error is the four corners and the middle point of the lattice. This configuration has 3 distinct distances: \begin{equation}\sqrt{(N-1)^2+0^2} \ = \ N-1,\end{equation} for which we have $S_{N-1, 0}=4$ and $L_{N-1, 0}=2N$, \begin{equation}\sqrt{(N-1)^2+(N-1)^2} \ = \ \sqrt{2}(N-1),\end{equation} for which we have $S_{N-1, N-1}=2$ and $L_{N-1, N-1}=2$, and \begin{equation}\sqrt{((N-1)/2)^2+((N-1)/2)^2} \ = \ \sqrt{2}(N-1)/2,\end{equation} for which we have $S_{(N-1)/2, 0}=4$ and $L_{(N-1)/2, 0}=N+1$. To calculate $\varepsilon_{N-1, 0},$ $\varepsilon_{N-1, N-1}$, and $\varepsilon_{(N-1)/2, 0}$, we need to multiply $S_{a,b}$ by $N^4/25$ and subtract $L_{a,b}$. Thus, $\varepsilon_{N-1, 0}=4N^4/25-2N$, $\varepsilon_{N-1, N-1}=2N^4/25-2$, and $\varepsilon_{(N-1)/2, 0}=4N^4/25-(N+1)$.
We know that $L_{N-1, 0},$ $L_{N-1, N-1}$, and $L_{(N-1)/2, 0}$ are of the form $2(N-a)(N-b),$ so we need to update the average value of $2(N-a)(N-b)$ from Lemma \ref{lem:avg_val} and the fraction of the time $L_{a,b}$ is of the form $2(N-a)(N-b)$ from Lemma \ref{lem:frac} to no longer include the three distances listed above. The new average value is \begin{equation}\frac{5N^3-6N^2-8N-9}{3(2N-5)}\end{equation} and the fraction of the time $L_{a,b}$ is of the form $2(N-a)(N-b)$ is \begin{equation}\frac{6}{N+2}-\frac{2}{N-1}.\end{equation} The fraction of the total distances that are the three distances listed above is \begin{equation}\frac{2}{N-1}-\frac{2}{N+2}.\end{equation} We then put this all together to calculate the error: \begin{align} \varepsilon & \ \leq \ \left(\frac{2}{N-1}-\frac{2}{N+2}\right) \left[\left(\frac{4N^4}{25}-2N\right)+\left(\frac{2N^4}{25}-2\right)+\left(\frac{4N^4}{25}-(N+1)\right)\right]\nonumber\\ &\ \ \ +\left(\frac{6}{N+2}-\frac{2}{N-1}\right)\left[\frac{5N^3-6N^2-8N-9}{3(2N-5)}\right] +\frac{N-2}{N+2}\left[\frac{N(3N-1)}{3}\right]\nonumber \\ & \ = \ \frac{17N^2}{5}-\frac{17N}{5}-\frac{6}{N-2}-\frac{56}{5(N-1)}-\frac{124}{5(N+2)}-\frac{31}{3(2N-5)}+\frac{113}{15}.\end{align} Similarly to $p=4$, this error estimate overestimates the error when $N$ is small because we work with the pairs $\sqrt{a^2+b^2}$ instead of distinct distances $\sqrt{d}$. However, as $N \rightarrow \infty$, this error estimate converges to the actual error. \end{example} \begin{example}\label{ex:p=9} We can then examine what happens when $p=9$. The 9-point configuration that maximizes error is a $3 \times 3$ lattice that has been stretched to the size of the $N \times N$ lattice.
We first note that $S_{a,b}>0$ if and only if $a,b\equiv 0 \pmod{(N-1)/2}.$ Thus, the fraction of $S_{a,b}$ such that $S_{a,b} \neq 0$ for $b\neq 0$ and $a >b$ is \begin{equation}\frac{4}{(N-1)^2}\end{equation} and the fraction of $S_{a,b}$ such that $S_{a,b}\neq 0$ for $b=0$ or $a=b$ is \begin{equation}\frac{2}{N-1}.\end{equation} The scaling constant is $N^4/81$. We estimate the error from above by assuming that if $S_{a,b}\neq 0$, then $S_{a,b}=L_{a,b}.$ This assumption does not increase the error estimate by an unreasonable amount because, for large enough $N$, the fraction of total distances that are represented in this configuration is very low. We can then use our previous averages of $L_{a,b}$ to calculate error: \begin{align} \varepsilon & \ \leq \ \frac{4}{N+2}\Bigg[\frac{2}{N-1}\left( \frac{N^4}{81}\left(\frac{N(5N-1)}{6}\right)-\frac{N(5N-1)}{6}\right)\nonumber \\ &\ \ \ \ +\left(1-\frac{2}{N-1}\right)\left(\frac{N(5N-1)}{6}\right) \Bigg] \nonumber \\ &\ \ \ \ +\frac{N-2}{N+2}\Bigg[\frac{4}{(N-1)^2}\left(\frac{N^4}{81}\left(\frac{N(3N-1)}{3}\right)-\frac{N(3N-1)}{3}\right)\nonumber\\ &\ \ \ \ +\left(1-\frac{4}{(N-1)^2}\right)\left(\frac{N(3N-1)}{3}\right) \Bigg] \nonumber \\ & \ = \ \frac{32N^4}{243}-\frac{52N^3}{243}+\frac{4N^2}{9}-\frac{220N}{243}-\frac{23044}{2187(N-1)}-\frac{14000}{2187(N+2)}\nonumber \\ &\ \ \ \ -\frac{6200}{729(N-1)^2}+\frac{112}{27(N-1)^3}+\frac{32}{9(N-1)^4}+\frac{428}{243}.\end{align} \end{example} \begin{example}\label{ex:checkerboard} Finally, we examine the error for the configuration for $p=\left\lceil N^2/2\right \rceil$. This configuration is the \textquotedblleft checkerboard lattice," a subset that is missing every other point from the full lattice and resembles a checkerboard, as seen in Figure \ref{fig:p=ceiling[N.2]}. To provide some intuition, the checkerboard lattice is a reasonable configuration for maximizing error because it very strongly prioritizes distances where $b=0$ or $a=b$. 
These distances have $L_{a,b}=2(N-a)(N-b)$, which tend to be smaller than the other frequencies, where $L_{a,b}=4(N-a)(N-b)$. Thus, this configuration has many frequencies which are not very common in the full lattice. In a checkerboard, $\sqrt{a^2+b^2}$ only appears as a distance if $a$ and $b$ are both odd or both even. We note that this causes about $1/2$ of the $S_{a,b}$ to be zero for $a> b$ and $b \neq 0$. Additionally, this causes about $1/4$ of the $S_{a,b}$ to be zero for $a= b$ or $b = 0$. We use the simplifying assumption that $S_{a,b}=L_{a,b}$ if $S_{a,b}\neq 0$. This ultimately increases our error estimate. We then have the following error estimate. \begin{align} \varepsilon & \ \leq \ \frac{4}{N+2}\left[\frac{3}{4}\left(4\left(\frac{N(5N-1)}{6}\right)-\frac{N(5N-1)}{6} \right)+\frac{1}{4} \left(\frac{N(5N-1)}{6}\right) \right]\nonumber\\ &\ \ \ \ + \frac{N-2}{N+2}\left[ \frac{1}{2}\left(4\left(\frac{N(3N-1)}{3}\right) -\frac{N(3N-1)}{3}\right)+\frac{1}{2} \left(\frac{N(3N-1)}{3}\right) \right]\nonumber\\ & \ = \ 2N^2-\frac{N}{3}-\frac{2}{3(N+2)}+\frac{1}{3}.\end{align} In Appendix \ref{checkerboardcalculations}, we more precisely calculate the frequency of the distances on the checkerboard lattice. These calculations can be used for a closer estimate. \end{example} We now turn our attention to finding lower bounds for configurations of $p$ points. To minimize error, we want to preserve, for each unique distance, the same ratio of frequency to total number of distances (including repeated distances) that appears in the $N \times N$ lattice. Thus, to calculate a lower bound, we create an \textquotedblleft optimal" distribution of frequencies for $p$ points. This optimal distribution cannot always be achieved, as not every distance distribution is realizable by an actual configuration. To come up with the optimal distribution, we scale each $L_{a,b}$ by $p^2/N^4$ and round this number to the nearest integer.
We then find the error for this optimal distribution in the same way as before. Code can easily be written to find the optimal distance distribution and calculate the error. For example, Figure \ref{fig:lower_bound_theoretical} demonstrates what this error is for a $100 \times 100$ lattice. Note that this figure does not include all possible values of $p$; rather, it includes enough values to capture the behavior of the error. \begin{figure} \centering \includegraphics[scale=.35]{computer_lower_bound.png} \caption{Computer-generated data for the error of the optimal distance distribution in the $100 \times 100$ lattice.} \label{fig:lower_bound_theoretical} \end{figure} We begin by providing intuition for the behavior of the second part of the graph, the decreasing curve for large $p$. Once the $L_{a,b}$'s are scaled down by $p^2/N^4$, rounded, and scaled up by $N^4/p^2$, they are a multiple of $N^4/p^2$. That means that the largest $\varepsilon_{a,b}$ can be is $N^4/2p^2$ and the smallest $\varepsilon_{a,b}$ can be is $0$. One might expect the average contribution to error to be $N^4/4p^2.$ However, for small $p$, many frequencies in the $N \times N$ lattice are much closer to $0$ than to $N^4/2p^2$, as $N^4/2p^2$ is quite large. This explains the shape of the graph; however, it does not provide a concrete value for the error. On the other hand, for the smaller values of $p$, we can give a more precise description. Notice that the graph in Figure \ref{fig:lower_bound_theoretical} begins with a short horizontal line. More rigorously, we have \begin{equation}\label{errorboundsmallsubset} \varepsilon \ \geq \ \frac{1}{|\mathcal{D}_N|}\binom{N^2}{2} \text{ if } p \ \leq \ \frac{N^2}{\sqrt{2F_N}}. \end{equation} Here, we note that $\binom{N^2}{2}$ is the sum of the distance frequencies of the lattice, so the right-hand side is precisely the error of the empty subset of the lattice.
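A minimal sketch of such code (ours, not the authors'; squared distances again serve as keys) is the following.

```python
# Sketch: error of the "optimal" (not necessarily realizable) distance
# distribution: scale each lattice frequency by p^2/N^4, round to the
# nearest integer, scale back up, and average the per-distance errors.
from itertools import combinations
from collections import Counter

def lattice_counts(N):
    pts = [(x, y) for x in range(N) for y in range(N)]
    c = Counter()
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        c[(x1 - x2) ** 2 + (y1 - y2) ** 2] += 1
    return c

def optimal_error(N, p):
    L = lattice_counts(N)
    scale = N ** 4 / p ** 2
    return sum(abs(scale * round(f / scale) - f) for f in L.values()) / len(L)

# For very small p every optimal frequency rounds to 0, so the error
# equals that of the empty subset (the average lattice frequency).
N = 10
L = lattice_counts(N)
assert optimal_error(N, 1) == sum(L.values()) / len(L)
```

Scanning \texttt{optimal\_error} over a range of $p$ reproduces the qualitative shape of Figure \ref{fig:lower_bound_theoretical}: an initial plateau for small $p$ followed by a decreasing curve.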
The idea is that the distance distribution of a small enough subset of the lattice, after rescaling, has a greater error than the empty subset, as its few distances are very overrepresented. As we use a scaling factor of $N^{4}/p^2$, we notice that if $p$ is such that \begin{equation} \frac{N^{4}}{p^2} \ > \ 2L_{\sqrt{d}} \end{equation} for every distance $\sqrt{d}$ on the integer lattice, then the error of any subset of size $p$ is strictly greater than that of the empty subset, as for any $\sqrt{d}$ that occurs in the subset (so that $S_{\sqrt{d}} \geq 1$), \begin{equation} \left| \frac{N^{4}}{p^2}S_{\sqrt{d}}-L_{\sqrt{d}} \right| \ \geq \ \left| \frac{N^{4}}{p^2} - L_{\sqrt{d}} \right| \ \geq \ F_N ,\end{equation} where, as previously defined, $F_N$ denotes the highest frequency on the $N\times N$ lattice. \section{Future Work} There are several ways to improve and extend our work. We have already given some characterizations of the subsets that maximize error and of how these subsets transition from one configuration to another. We know we have a checkerboard configuration when $p=\left\lceil N^2/2 \right\rceil$ and a filled-in perimeter when $p=\sum_{i=1}^{m} 4(N-(2i-1))$ for different values of $m$. However, the transition between these two configurations has yet to be characterized. Additionally, we hope to improve our lower bound work. Work can be done to find a characterization of the sets that minimize error. Furthermore, we hope to refine our lower bound formula by finding a rigorous lower bound for the range of values of $p$ on which the $N^4/8p^2$ estimate holds. Finally, as Erd\H{o}s conjectured that all near-optimal sets have lattice structure, it is natural to extend our results to other lattice structures, where we expect the error to behave similarly.
https://arxiv.org/abs/1612.06314
Computing cohomology of configuration spaces
We give a concrete method to explicitly compute the rational cohomology of the unordered configuration spaces of connected, oriented, closed, even-dimensional manifolds of finite type, which we have implemented in Sage [S+09]. As an application, we give a complete computation of the stable and unstable rational cohomology of unordered configuration spaces in some cases, including that of $\mathbb{CP}^3$ and a genus 1 Riemann surface, which is equivalently the homology of the elliptic braid group. In an appendix, we also give large tables of unstable and stable Betti numbers of unordered configuration spaces. From these, we empirically observe stability phenomena in the unstable cohomology of unordered configuration spaces of some manifolds, some of which we prove and some of which we state as conjectures.
\section{Introduction} \subsection{Cohomological stability of configuration spaces.} Given a sequence of topological spaces or groups, $\{X_n\}$, \emph{(rational) cohomological stability} is the property that, for each $i\geq 0$, $$H^i(X_n; \Q) = H^i(X_{n+1}; \Q)$$ for $n \geq f(i)$, where $f(i)$ is some function of $i$. We call $H^i(X_n; \Q)$ for $n \geq f(i)$ the \emph{stable cohomology groups} of $\{X_n\}$ and $n \geq f(i)$ the \emph{stable range}. Conversely, $H^i(X_n; \Q)$ for $n < f(i)$ are the \emph{unstable cohomology groups} of $\{X_n\}$, and $n < f(i)$ is the \emph{unstable range}. Let $X$ be a topological space and $n \in \N.$ The $n$th \emph{ordered configuration space} of $X$ (or the space of $n$ distinct labeled points in $X$) is $$\PConf^nX = \left(X^n - \bigcup_{1 \leq i < j \leq n} \Delta_{ij}\right)$$ where $\Delta_{ij} := \{(x_1, \hdots, x_n) \in X^n : x_i = x_j\}.$ The symmetric group $S_n$ acts on $\PConf^nX$ by permuting coordinates, and if we mod out by this action, we obtain the $n$th \emph{unordered configuration space} of $X$ (or the space of $n$ distinct unlabeled points in $X$) $$\Conf^nX := \PConf^nX / S_n.$$ In this paper, we are primarily interested in the unordered configuration spaces, $\Conf^nX$, for $X$ a manifold. Using a previously developed theoretical framework, this paper gives a concrete method to explicitly compute $H^i(\Conf^nX; \Q)$ for $X$ a connected, oriented, closed, even-dimensional manifold of finite type, which we have implemented in Sage \cite{sage}. In the appendix, we give several tables of output from our program from which we empirically observe many more phenomena in the cohomology of configuration spaces than just the well-known phenomenon of cohomological stability.
There has been recent work showing that the actual stable Betti numbers of some specific manifolds, for example closed surfaces, have structure, and exhibiting structure in the unstable Betti numbers as well \cite{Drummond-ColeKnudsen2016, Scheissl2016, MillerWilson2016}. As an application of our approach, we give examples of such structure, some conjectured and some proven. \subsection{Previous Work} One of the first stability results for configuration spaces was given by Arnol'd \cite{Arnol'd1969} in 1969, who proved that there are inclusions $$\Conf^n\R^2 \hookrightarrow \Conf^{n+1}\R^2$$ that induce isomorphisms $$H_i(\Conf^n\R^2;\Z) \rightarrow H_i(\Conf^{n+1}\R^2; \Z)$$ for $n$ sufficiently large (depending on $i$). McDuff \cite[Theorem 1.3]{McDuff1975} and Segal \cite[Proposition A.1]{Segal1979} generalized Arnol'd's result to open manifolds. McDuff defined a map (often referred to as the \emph{scanning map}) from $\Conf^nX$ to the space of degree-$n$ compactly-supported sections of the fiberwise one-point compactification of the tangent bundle of $X$ and used that to prove integral homological stability for open manifolds. Segal then gave explicit bounds for the stable range and a new proof of homological stability via an argument more similar to Arnol'd's. Integral homological stability is known not to hold in general for closed manifolds. For example, $H_1(\Conf^nS^2; \Z) = \Z/(2n-2)\Z$, which has a dependence on $n$ that cannot be avoided by simply taking $n$ to be very large. For many years, it was thought that integral homological stability for open manifolds was the best one could do until Church considered the problem via the lens of \emph{representation stability}.
By regarding $H^i(\PConf^n X; \Q)$ as a rational representation of $S_n$, Church \cite[Theorem 1]{Church2012} proved that the decomposition of $H^i(\PConf^nX; \Q)$ into irreducible representations remains the same in some sense for $n$ sufficiently large (depending on $i$) when $X$ is a connected, orientable manifold with the homotopy type of a finite CW complex. (For example, the trivial representation of $S_n$ can be thought of as the same as the trivial representation of $S_m$ even when $n \neq m.$) Then as a corollary (\cite[Corollary 3]{Church2012}), Church concludes that $H^i(\Conf^n X; \Q) = H^i(\Conf^{n+1}X; \Q)$ for $n > i$, i.e. that the \emph{rational} cohomology groups of $\Conf^n X$ satisfy cohomological stability. Shortly following Church's proof, Randal-Williams \cite[Theorem C]{Randal-Williams2013} recovered Church's stability result using more traditional topological methods and was able to give an improvement on the bound of the stable range in some cases. Using the method of factorization homology, Knudsen \cite[Theorem 1.3]{Knudsen2015} was able to generalize this to non-orientable manifolds. Combining these results yields the following stability theorem. \begin{theorem}[\cite{Arnol'd1969, McDuff1975, Segal1979, Church2012, Randal-Williams2013, Knudsen2015}] \label{stability} For $X$ a connected manifold with $H^*(X;\Q)$ finite-dimensional and $n\geq i+1$, we have that $$ H^i(\Conf^n X; \Q)=H^i(\Conf^{n+1} X;\Q).
$$ \end{theorem} \subsection{Stable and unstable Betti numbers of configuration spaces} Theorem \ref{stability} gives an important characterization of the rational cohomology of configuration spaces of manifolds, but a more fundamental question is ``given a manifold $X$, what are the Betti numbers of $\Conf^n(X)$?'' While there are several theoretical tools one can use to explicitly compute these Betti numbers (see \cite{CohenTaylor1978Springer, Totaro1996, FultonMacPherson1994, Kriz1994, FelixTanre2005, FelixThomas2000, FelixThomas2004, Knudsen2015, McDuff1975, BenderskyGitler1991, FadellNeuwirth1962}), surprisingly few have been computed. Using the theoretical framework of the Cohen--Taylor--Totaro--Kriz spectral sequence, we implemented an algorithm in Sage \cite{sage} to compute Betti numbers of unordered configuration spaces of connected, oriented, even-dimensional, closed manifolds of finite type (see the appendix for tables of our computations). As examples of our approach, in Section 4, we compute all the Betti numbers of $\Conf^n\C\P^1,$ $\Conf^n\C\P^2,$ and $\Conf^n\C\P^3$. Most of these were previously known, with the exception of the unstable Betti numbers of $\Conf^n\C\P^3.$ See Section 4 for remarks on methods used in previous work. One of the first phenomena we empirically observed was the structure of the stable Betti numbers of the unordered configuration space of the closed genus 1 Riemann surface, $\Sigma_1$. At the time, very few Betti numbers of configuration spaces had been computed, and we were uncertain as to whether or not we would observe any structure in these values. We remark that the cohomology of $\Conf^n \Sigma_1$ is equivalently the cohomology of the elliptic braid group, $B_n(\Sigma_1)$ \cite{Birman1969, Scott1970}.
\begin{proposition} \label{stableg1betti} The stable Betti numbers of $\Conf^n\Sigma_1$ are $$b_0 = 1, b_1 = 2, b_2 = 3, b_3 = 5, b_4 = 7, \hdots, b_i = 2i-1 \ (i \geq 2), \hdots.$$ \end{proposition} We give the full rational cohomology of $\Conf^n\Sigma_1$, i.e. also the unstable Betti numbers, in Section 4 (see Proposition \ref{P:g1SV}). Prior to our work, Napolitano \cite[Table 2]{Napolitano2003} had computed $H_i(\Conf^n\Sigma_1;\Z)$ for $1 \leq n \leq 6$ and $0 \leq i \leq 7$, and Kallel \cite[Corollary 1.7]{Kallel2008} computed $H_1(\Conf^n\Sigma_1; \Z)$ for $n \geq 3.$ Concurrently with our work, Scheissl \cite{Scheissl2016} independently computed the stable and unstable Betti numbers of $\Conf^n\Sigma_1$ also using the Cohen--Taylor--Totaro--Kriz spectral sequence, and Drummond-Cole and Knudsen \cite[Corollaries 4.5--4.7]{Drummond-ColeKnudsen2016} not only computed the stable and unstable Betti numbers of $\Conf^n\Sigma_1$, but did so for all surfaces of finite type via a method derived from factorization homology. For other related computations, see \cite{BrownWhite1981, BodigheimerCohen1988, BodigheimerCohenTaylor1989, Knudsen2015, Azam2015}. In the appendix, we provide tables of Betti numbers we've computed for unordered configuration spaces of the following manifolds: $\C\P^1 \times \C\P^1$, $\C\P^1 \times \C\P^1 \times \C\P^1$, $\C\P^1 \times \C\P^2$, $\P_{\C\P^2}(\O \oplus \O(1))$, $\C\P^3$, $\C\P^4$, $\C\P^5$, $\C\P^6$, $\Sigma_1$, $\Sigma_1 \times \C\P^1$, $\Sigma_2$, $\Sigma_3$, and $\Sigma_4$ (where $\Sigma_g$ is the closed genus $g$ Riemann surface). For $X = \C\P^1 \times \C\P^2$ and $Y = \P_{\C\P^2}(\O\oplus \O(1))$, Totaro \cite[Section 5]{Totaro1996} pointed out that $\Conf^3X$ and $\Conf^3Y$ have different rational cohomology. In \cite[Section 1.19]{VakilWood2015}, Vakil and Wood ask if this difference goes away in the stable range.
However, from tables \ref{Tab:cp1xcp2one} and \ref{Tab:cp1xcp2Tone}, we see that $\Conf^nX$ and $\Conf^nY$ do have significantly different rational cohomology; specifically, in the stable range, $H^{11}(\Conf^{15}X; \Q) \neq H^{11}(\Conf^{15}Y; \Q)$ and $H^{12}(\Conf^{15}X; \Q) \neq H^{12}(\Conf^{15}Y; \Q).$ For other computational results not discussed above, the interested reader should consult \cite{Cohen1973, CohenTaylor1978Springer, FultonMacPherson1994, BodigheimerCohenTaylor1989, FelixThomas2000, Napolitano2003, Azam2015, Knudsen2015, Fuks1970, BrownWhite1981, CohenTaylor1978BullAmer, Vainshtein1978, Kallel2008, Vershinin1999, FeichtnerZiegler2000, DominguezGonzalezLandweber2013, Sohail2010, BodigheimerCohen1988}. \subsection{Stable instability and vanishing} Traditional stability is the statement that $H^i(X_n;\Q) \approx H^i(X_{n+1};\Q)$ for $i$ fixed and $n$ sufficiently large. But one might reasonably ask about other relationships between the cohomology groups, i.e. stability where neither $i$ nor $n$ is fixed, but both grow in some other way. For example, Miller and Wilson \cite[Theorem 1.2]{MillerWilson2016} have recently proven stability phenomena following from maps of the form $H^i(\PConf^nX;\Q) \rightarrow H^{i+1}(\PConf^{n+2}X;\Q)$ for some $X$. Additionally, Church, Farb, and Putman \cite{ChurchFarbPutman2014} have made conjectures about the unstable cohomology of $\{X_n\} = \{\SL_n(\Z)\}, \{\Mod_n\}$, where $\Mod_n$ is the mapping class group of the genus $n$ closed Riemann surface. Specifically, if we let $D_n$ denote the virtual cohomological dimension of $X_n$, they conjecture: \begin{itemize} \item (Stable instability Conjecture) For $j\geq 0$, the group $H^{D_n-j}(X_n;\Q)$ does not depend on $n$ for $n$ sufficiently large, and \item (Vanishing Conjecture) For each $j\geq 0$, we have $H^{D_n-j}(X_n;\Q)=0$ for $n$ sufficiently large.
\end{itemize} \begin{figure}[h] \includegraphics[width=4.5in]{Mod_n.eps} \caption{Betti numbers of the mapping class group of genus $n$; $\dim H^i(\Mod_n;\Q)$ sits at the point $(i,n).$ Red lines illustrate traditional cohomological stability. Blue lines illustrate stable instability.} \label{fig:mod_n} \end{figure} In particular, they prove that in these two cases, the stable instability conjecture implies the vanishing conjecture. We observe similar phenomena occurring in the cohomology of unordered configuration spaces of manifolds. As an example, our approach gives a simple proof of the following theorem, which follows from a result first proved by Kallel \cite[Theorem 1.1]{Kallel2008} and which was also proven by Napolitano \cite[Theorem 3]{Napolitano2003} in the case when $X$ is any connected surface. \begin{theorem}[Stable Instability and Vanishing Theorem, \cite{Kallel2008}]\label{T:SIVC} Let $X$ be an oriented real manifold of dimension $D$. Then for $n\geq j+2$, we have $$ H^{nD-j}(\Conf^n X;\Q)=0. $$ \end{theorem} Said another way, for $i\geq (D-1)n+2$, we have $$ H^{i}(\Conf^n X;\Q)=0. $$ For example, when $X$ is an oriented, compact surface, we see that the only possibly non-zero unstable cohomology is $H^i(\Conf^n X;\Q)$ for $i=n,n+1$. In general, the vanishing in Theorem~\ref{T:SIVC} is sharp, as demonstrated by the following proposition. \begin{proposition}\label{P:gCD} Let $\Sigma_g$ be a closed Riemann surface of genus $g$ with $g\geq 1$. Then for $n\geq 3$, $$ H^{n+1}(\Conf^n \Sigma_g; \Q)\ne 0. $$ \end{proposition} In many cases, for example in the case of $\Sigma_1$, the only alternate stability that we observe stabilizes to zero. However, in some examples, we do see some \emph{non-vanishing stable instability}. Specifically, for some values of $i$ (depending on $n$), we have that $H^i(\Conf^n\C\P^3;\Q) \approx H^{i+2}(\Conf^{n+1}\C\P^3;\Q) \neq 0$ for $n$ sufficiently large. (See Figure \ref{fig:cp3} for an illustration of stable instability.)
\begin{proposition}[Stable instability for $\C\P^3$]\label{cp3stableinstability} For $n \geq \frac{i}{2},$ $$H^i(\Conf^n \C\P^3;\Q) = H^i(\Conf^{n+1}\C\P^3; \Q);$$ i.e. the stable range of $\Conf^n \C\P^3$ is $n \geq \frac{i}{2}.$ Furthermore, for all $n \geq 11$ and $i > 2n$, $$H^{i}(\Conf^n\C\P^3; \Q) = H^{i+2}(\Conf^{n+1}\C\P^3; \Q).$$ Specifically, $$H^{2n+j}(\Conf^n\C\P^3; \Q) = H^{2(n+1)+j}(\Conf^{n+1}\C\P^3;\Q) = \Q$$ for $n \geq 11$ and $j \in \{2,4,6,7,8,10,12\}$, $$H^{2n+j}(\Conf^n\C\P^3; \Q) = H^{2(n+1)+j}(\Conf^{n+1}\C\P^3;\Q) = \Q^2$$ for $n \geq 10$ and $j \in \{1,3,5\},$ and $$H^{2n+j}(\Conf^n\C\P^3; \Q) = H^{2(n+1)+j}(\Conf^{n+1}\C\P^3;\Q) = 0$$ for $n \geq 1$ and $j \in \N - \{1, \hdots, 8,10,12\}.$ \end{proposition} Our bound for the stable range of $\Conf^n \C\P^3$ improves slightly on the bound of $n \geq \frac{i}{2} + 1$ given by Church \cite[Proposition 4.1]{Church2012}. Our computations (see tables \ref{Tab:cp3one}--\ref{Tab:cp6five} and propositions \ref{cp1betti}--\ref{cp3betti}) have led us to make the following conjecture about the top non-vanishing cohomology of $\Conf^n\C\P^k$ for all $k \geq 1$, which we also believe to be an example of non-vanishing stable instability. 
\begin{conjecture}[Non-vanishing stable instability for $\C\P^k$] \label{stableinstability} Given a positive integer $k$, there exist positive integers $n_0$ and $c_s$ such that, for $n \geq \frac{i-c_s}{2\lceil k/2\rceil -2},$ $$H^i(\Conf^n\C\P^k;\Q) = H^i(\Conf^{n+1}\C\P^k;\Q),$$ and for all $n \geq n_0$ and $i > (2\lceil k/2\rceil -2)n + c_s,$ $$H^i(\Conf^n\C\P^k;\Q) \approx H^{i + 2\lceil k/2 \rceil -2}(\Conf^{n+1}\C\P^k; \Q).$$ Specifically, there exists $c_t$ such that $$H^{(2\lceil k/2 \rceil -2)n + c_t}(\Conf^n\C\P^k;\Q) = H^{(2\lceil k/2 \rceil -2)(n+1) + c_t}(\Conf^{n+1}\C\P^k;\Q) = \Q$$ for all $n \geq n_0$, and $$H^{(2\lceil k/2 \rceil -2)n + j}(\Conf^n\C\P^k;\Q) = H^{(2\lceil k/2 \rceil -2)(n+1) + j}(\Conf^{n+1}\C\P^k;\Q) = 0$$ for all $n \geq n_0$ and $j > c_t.$ \end{conjecture} \begin{figure}[h] \includegraphics[width=4.5in]{CP3.eps} \caption{Betti numbers of the unordered configuration space of $n$ points in $\C\P^3$; $\dim H^i(\Conf^n\C\P^3)$ sits at the point $(i,n).$ Red lines illustrate traditional cohomological stability. Blue lines illustrate stable instability.} \label{fig:cp3} \end{figure} For $k = 3,$ Conjecture \ref{stableinstability} follows from Proposition \ref{cp3stableinstability}. In Section 4, we will see that Conjecture~\ref{stableinstability} is also true for $k=1,2$, and in particular that $H^3(\Conf^n\C\P^1;\Q) = \Q$ is the top non-vanishing cohomology of $\Conf^n\C\P^1$ and that $H^{11}(\Conf^n\C\P^2; \Q) = \Q$ is the top non-vanishing cohomology of $\Conf^n\C\P^2.$ \section{The Cohen--Taylor--Totaro--Kriz spectral sequence} There are several theoretical tools that can be used to compute the cohomology of configuration spaces (for example, see \cite{CohenTaylor1978Springer, Totaro1996, FultonMacPherson1994, Kriz1994, FelixTanre2005, FelixThomas2000, FelixThomas2004, Knudsen2015, McDuff1975, BenderskyGitler1991, FadellNeuwirth1962}). 
But the one we use is a Leray spectral sequence for the inclusion $\PConf^nX \hookrightarrow X^n$ converging to $H^*(\PConf^nX;\Q)$ whose $S_n$-invariants converge to $H^*(\Conf^nX;\Q)$ when $X$ is a connected, oriented manifold of finite type. Cohen and Taylor \cite[Section 2]{CohenTaylor1978Springer} were the first to describe this spectral sequence (although they did not identify it as a Leray spectral sequence). They described the $E_2$-page and the first nontrivial differential for $X$ a connected, oriented manifold of finite type. Then Totaro \cite[Theorem 3]{Totaro1996} and Kriz \cite[Theorem 1.1]{Kriz1994} proved that the sequence collapses after the first nontrivial differential for $X$ a smooth, complex projective variety. F\'{e}lix and Thomas \cite[Theorem A]{FelixThomas2000} and Church \cite[Proposition 4.3]{Church2012v1} described the $S_n$-invariants of the $E_2$-page for $X$ even-dimensional. In the odd-dimensional case, Bodigheimer, Cohen, and Taylor \cite[Theorem C]{BodigheimerCohenTaylor1989} and F\'{e}lix and Tanr\'{e} \cite[Theorem 4]{FelixTanre2005} proved that $H^*(\Conf^nX; \Q) = \bigoplus_{i+j = n} \Sym^iH^{even}(X;\Q) \otimes \bigwedge^j H^{odd}(X;\Q).$ F\'{e}lix and Tanr\'{e} \cite[Theorem 3]{FelixTanre2005} and Knudsen \cite[Corollary 4.11]{Knudsen2015} proved that, in the even-dimensional case, the Cohen--Taylor--Totaro--Kriz spectral sequence only has one nontrivial differential for $X$ a connected manifold of finite type. We now give a description of what this spectral sequence looks like. Let $X$ be an even-dimensional manifold. Let $V_X$ be the bi-graded vector space $\tilde{H}^*(X,\Q)$, with second grading identically $0$. Let $W_X$ be the bi-graded vector space $H^*(X;\Q)$ in which an element of degree $g$ has grade $(g,1)$.
Let $R_X=\Q[V_X\oplus W_X]$ be the free graded-commutative algebra on $V_X\oplus W_X$, in which the grading for the graded-commutativity is given by the sum of the two grades in the bigrading of $V_X\oplus W_X$. We put a third grading, \emph{length}, on $R_X$ induced by giving elements of $V_X$ length $1$ and $W_X$ length $2$. \begin{theorem}[\cite{CohenTaylor1978Springer, FelixThomas2000, Church2012v1}]\label{T:E2} Let $X$ be an oriented, even-dimensional manifold. Let $\mathcal{E}_2(n)$ be the $S_n$-invariants of the $E_2$ page of the Leray spectral sequence for $\PConf^nX \hookrightarrow X^n$, with the rows that are not indexed by multiples of $\dim X-1$ removed. Then we have an isomorphism of bi-graded vector spaces $$ \mathcal{E}_2(n) \isom R_{X,\leq n}, $$ where $R_{X,\leq n}$ is the quotient of $R_{X}$ by the elements of length $>n$. \end{theorem} We differ from the standard literature by giving the elements of $W_X$ the grade $(\ast, 1)$ instead of $(\ast, \dim X -1)$. We make this choice so as to remove all the zero rows, i.e. those not indexed by multiples of $\dim X -1$, from the Cohen--Taylor--Totaro--Kriz spectral sequence. (Note that the parity of the grading is preserved since we are restricting to the case when $\dim X$ is even.) So, if $R_{X, \leq n}^{p,q}$ denotes the elements of $R_{X, \leq n}$ with bi-grade $(p,q)$, Theorem \ref{T:E2} gives a vector space isomorphism $\mathcal{E}_2^{p,q}(n) \rightarrow R_{X, \leq n}^{p,q}.$ In order to describe how to interpret the differential on $\mathcal{E}_2(n)$ in the $R_{X, \leq n}$ setting, we first need to develop some notation for the elements of $R_{X, \leq n}.$ Let $\{y_0 = 1, y_1, \hdots, y_m\}$ be a graded basis of $H^*(X;\Q).$ Then we denote the corresponding basis of $V_X$ by $\{Y_1, \hdots, Y_m\}$, where $y_i$ corresponds to $Y_i$, and we denote the corresponding basis of $W_X$ by $\{\Y_0 = \1, \Y_1, \hdots, \Y_m\}$, where $y_i$ corresponds to $\Y_i$.
This induces bases on $R_X$ and $R_{X, \leq n}.$ Namely, we see that $$B^{p,q}(n) = \left\{\begin{array}{c|c}Y^{\r}\Y^{\s} & \ell(Y^{\r}\Y^{\s}) \leq n, \sum_{i=1}^m (r_i + s_i)|y_i| = p, \sum_{i=0}^m s_i = q, \mbox{~and~}r_i, s_j \in \{0,1\}\\ ~ & \mbox{~for~} |y_i| \mbox{~odd,~} |y_j| \mbox{~even}\end{array}\right\}$$ is a basis for $R_{X, \leq n}^{p,q},$ where $Y^{\r}\Y^{\s} : = Y_1^{r_1}\cdots Y_m^{r_m}\Y_0^{s_0}\cdots \Y_m^{s_m}.$ \begin{theorem} [\cite{CohenTaylor1978Springer,FelixTanre2005}] Let $X$ be a connected, oriented, closed, even-dimensional manifold of finite type. Under the isomorphism of Theorem~\ref{T:E2}, the differential on $\mathcal{E}_2(n)$ corresponds to the differential $d$ on $R_X$ such that $d(V_X)=0$ and, for $h \in H^*(X;\Q)$ with corresponding element $\H \in W_X$, we have $$ d \H = \frac{1}{2}\sum_{i=0}^m (-1)^{|y_i^{\vee}|}H_i Y_i^{\vee} $$ where $\{y_0^{\vee}, \hdots, y_m^{\vee}\}$ is a dual basis of $\{y_0, \hdots, y_m\}$ with respect to the cup product pairing (i.e. $y_i \cdot y_i^{\vee} = y_m$ where $H^{\dim X}(X;\Q) = \Q \cdot y_m$), $H_i$ is the element of $V_X$ corresponding to $h \cdot y_i$, and $Y_i^{\vee}$ is the element of $V_X$ corresponding to $y_i^{\vee}$, with the convention that the element corresponding to a multiple of $y_0 = 1$ is that multiple of the unit of $R_X$. \end{theorem} Note that due to our convention of giving elements of $W_X$ the grade $(\ast, 1)$, our differentials now have different codomains, i.e. $d:E_2^{p,q}(n) \rightarrow E_2^{p+D, q-1}(n).$ \begin{theorem}[\cite{Totaro1996, Kriz1994, FelixTanre2005, Knudsen2015}]\label{T:1D} The spectral sequence $\mathcal{E}_2(n)$ has only one nontrivial differential. \end{theorem} Combining theorems \ref{T:E2}--\ref{T:1D} turns the task of computing the cohomology groups $H^i(\Conf^n X; \Q)$ into a linear algebra problem that one can program a computer to solve.
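Concretely, once the dimensions of the $E_2^{p,q}(n)$ and the matrices of the single nontrivial differential are in hand, the Betti numbers follow from exact rank computations over $\Q$: $\dim E_\infty^{p,q} = \dim E_2^{p,q} - \rank d^{p,q} - \rank d^{p-D,q+1}$ and $b_i = \sum_{p+(D-1)q=i} \dim E_\infty^{p,q}$. The following Python sketch shows only this final linear-algebra step; the construction of the $E_2$ page itself is not reproduced here, and any input data shown is hypothetical:

```python
from fractions import Fraction

def rank(mat):
    """Rank of a matrix over Q via Gaussian elimination with exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in mat]
    rk, rows, cols = 0, len(m), (len(m[0]) if m else 0)
    for c in range(cols):
        piv = next((r for r in range(rk, rows) if m[r][c] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(rows):
            if r != rk and m[r][c] != 0:
                f = m[r][c] / m[rk][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

def betti_numbers(dims, diffs, D):
    """dims[(p, q)] = dim E_2^{p,q}; diffs[(p, q)] = matrix of d: E_2^{p,q} -> E_2^{p+D, q-1}.
    With a single nontrivial differential,
    dim E_inf^{p,q} = dim E_2^{p,q} - rank d^{p,q} - rank d^{p-D, q+1},
    and b_i sums dim E_inf^{p,q} over p + (D-1)q = i."""
    b = {}
    for (p, q), dim in dims.items():
        r_out = rank(diffs[(p, q)]) if (p, q) in diffs else 0
        r_in = rank(diffs[(p - D, q + 1)]) if (p - D, q + 1) in diffs else 0
        i = p + (D - 1) * q
        b[i] = b.get(i, 0) + dim - r_out - r_in
    return b
```

A full implementation would generate the bases $B^{p,q}(n)$ and the matrices of $d$ from a graded basis of $H^*(X;\Q)$, which is what our Sage program automates.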
\subsection*{Notation} Throughout the rest of this paper, we will refer to $R_{X, \leq n}$ as $E_2(n),$ and $E_2^{p,q}(n)$ will refer to the elements of $R_{X, \leq n}$ with bi-grade $(p,q).$ We will sometimes just say $E_2$, the $E_2$-page, or $E_2^{p,q}$ if $n$ is fixed. For clarity, we shall sometimes write $d^{p,q}$ for $d: E_2^{p,q}(n) \rightarrow E_2^{p+D,q-1}(n).$ \section{An exact subcomplex} We now define a subcomplex of the cochain complex associated to the $E_2$-page involving the orientation class of $X$. Using a filtration on the cochain complex, we show that this subcomplex is exact. This allows us to quotient out by it and is the key fact that allows us to prove Theorem \ref{T:SIVC}. The ordered version of our subcomplex is defined in \cite[Proposition 5.5]{AshrafAzamBerceanu2014}, although it takes a bit of work to see this. \begin{proposition} \label{top} Let $X$ be a complex manifold of real dimension $D$, let $B(X) = \{y_0=1, y_1, \hdots, y_m\}$ be a graded basis of $H^*(X; \Q),$ and fix $n \geq 1.$ Let $C^i(n) = \bigoplus_{p+(D-1)q = i} E_2^{p,q}(n)$ be the cochain complex associated to the $E_2$-page of $\Conf^nX.$ Recall that $$B^{p,q}(n) = \left\{\begin{array}{c|c}Y^{\r}\Y^{\s} & \ell(Y^{\r}\Y^{\s}) \leq n, \sum_{i=1}^m (r_i + s_i)|y_i| = p, \sum_{i=0}^m s_i = q, \mbox{~and~}r_i, s_j \in \{0,1\}\\ ~ & \mbox{~for~} |y_i| \mbox{~odd,~} |y_j| \mbox{~even}\end{array}\right\}$$ is a basis for $E_2^{p,q}(n).$ Let $A^{p,q}(n) \subseteq E_2^{p,q}(n)$ denote the subspace $$A^{p,q}(n) := \vspan\{Y^{\r}\Y^{\s} \in B^{p,q}(n) | r_m \geq 2 \mbox{~or~} s_m \geq 1\},$$ and let $\mathbf{A}(n)$ be the corresponding subcomplex of $\mathbf{C}(n)$. We claim that $\mathbf{A}(n)$ is exact.
\end{proposition} \begin{proof} To prove the claim, we shall impose a filtration $\{\mathbf{A_k}(n)\}_{k \in \N}$ on $\mathbf{A}(n)$ and show, via induction, that each $\mathbf{A_k}(n)$ is an exact subcomplex of $\mathbf{C}(n).$ To this end, define $A_k^{p,q}(n) \subseteq E_2^{p,q}(n)$ to be the subspace spanned by $$B_k^{p,q}(n) : = \left \{ \begin{array}{c|c}Y^{\r}\Y^{\s} \in B^{p,q}(n) & \sum_{i=0}^{m-1} s_i \leq k, \mbox{~and~} r_m \geq 2 \mbox{~or~} s_m \geq 1 \end{array}\right\}.$$ To show that $\mathbf{A_k}(n)$ is indeed a subcomplex of $\mathbf{A}(n),$ consider our two ``typical" basis elements of $A_k^{p,q}(n)$: $Y^{\r}\Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}}$ with $r_m \geq 2$ and $Y^{\r}\Y_0^{s_0}\cdots \Y_{m-1}^{s_{m-1}}\Y_m^{s_m}$ with $s_m \geq 1.$ Now $$d(Y^{\r}\Y_0^{s_0}\Y_1^{s_1}\cdots \Y_{m-1}^{s_{m-1}}) = Y^{\r}\cdot d(\Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}})$$ so $d(Y^{\r}\Y_0^{s_0}\cdots \Y_{m-1}^{s_{m-1}}) \in \mathbf{A}_k(n).$ Furthermore, $Y^{\r}\Y_0^{s_0}\cdots \Y_{m-1}^{s_{m-1}} \in A_k^{p,q}(n)$ means that $q \leq k,$ and since $d:A^{p,q}(n) \rightarrow A^{p+D, q-1}(n),$ we see that $d(Y^{\r}\Y_0^{s_0}\Y_1^{s_1}\cdots \Y_{m-1}^{s_{m-1}}) \in A_k^{p+D, q-1}(n).$ Similarly, for $Y^{\r}\Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}}\Y_m^{s_m}$ with $s_m \geq 1$, \begin{eqnarray*} d(Y^{\r}\Y_0^{s_0}\cdots\Y_{m-1}^{s_{m-1}}\Y_m^{s_m}) & = & Y^{\r}d(\Y_0^{s_0}\cdots \Y_{m-1}^{s_{m-1}})\Y_m^{s_m} + (-1)^tY^{\r}\Y_0^{s_0}\cdots \Y_{m-1}^{s_{m-1}}\cdot d(\Y_m^{s_m})\\ & = & Y^{\r}\cdot d(\Y_0^{s_0}\cdots \Y_{m-1}^{s_{m-1}})\Y_m^{s_m}\\ &~& + (-1)^t s_m Y_1^{r_1} \cdots Y_{m-1}^{r_{m-1}}Y_m^{r_m+2}\Y_0^{s_0}\cdots \Y_{m-1}^{s_{m-1}}\Y_m^{s_m-1}, \end{eqnarray*} where $t = \sum_{i=0}^{m-1}s_i(|y_i| + 1).$ So $d(Y^{\r}\Y_0^{s_0}\cdots\Y_{m-1}^{s_{m-1}}\Y_m^{s_m}) \in \mathbf{A_k}(n)$ when $s_m\geq 1.$ Since $Y_1^{r_1} \cdots Y_m^{r_m}\Y_0^{s_0}\cdots \Y_m^{s_m} \in A_k^{p,q}(n)$ implies that $q \leq k+s_m,$ and $d: A^{p,q}(n) \rightarrow A^{p+D, q-1}(n),$ we see that 
$d(Y_1^{r_1} \cdots Y_m^{r_m}\Y_0^{s_0}\cdots \Y_{m-1}^{s_{m-1}}\Y_m^{s_m}) \in A_k^{p+D, q-1}(n).$ Thus $\mathbf{A_k}(n)$ is indeed a subcomplex of $\mathbf{A}(n)$. We first prove that $\mathbf{A_0}(n)$ is exact. Consider $A_0^{p,0}(n)$. Since $A_0^{p+D,-1}(n) = \{0\},$ we see that $\ker(d^{p,0}) = A_0^{p,0}(n).$ Let $Y^{\r} \in B_0^{p,0}(n).$ Then $$d(Y_1^{r_1}\cdots Y_{m-1}^{r_{m-1}}Y_m^{r_m-2}\Y_m) = Y^{\r}$$ and $Y_1^{r_1}\cdots Y_{m-1}^{r_{m-1}}Y_m^{r_m-2}\Y_m \in A_0^{p-D,1}(n)$ since $s_m = 1.$ Thus $\ker\left(d^{p,0}|_{A_0^{p,0}}\right) = \im\left(d^{p-D,1}|_{A_0^{p-D,1}}\right).$ Now suppose $q \geq 1$, and let $$\alpha = \sum_{Y^{\r}\Y_m^q \in B_0^{p,q}(n)} c_{\r}Y^{\r}\Y_m^q \in \ker(d^{p,q})$$ where $c_{\r} \in \Q.$ Then $$d(\alpha) = \sum_{Y^{\r}\Y_m^q \in B_0^{p,q}(n)} c_{\r}qY_1^{r_1}\cdots Y_{m-1}^{r_{m-1}}Y_m^{r_m+2}\Y_m^{q-1} = 0.$$ But the only way $d(\alpha) = 0$ is if $\alpha = 0.$ Thus $\ker\left(d^{p,q}|_{A_0^{p,q}}\right) = \{0\}$ for $q\geq 1$. It follows trivially that $\ker\left(d^{p,q}|_{A_0^{p,q}}\right) = \im\left(d^{p-D, q+1}|_{A_0^{p-D,q+1}}\right)$, and so $\mathbf{A_0}(n)$ is exact. As our inductive hypothesis, suppose that $\mathbf{A_{k-1}}(n)$ is exact. To show that $\mathbf{A_k}(n)$ is exact, it suffices to show that $\mathbf{A_k}(n)/\mathbf{A_{k-1}}(n) =: \mathbf{\tilde{A}_k}(n)$ is exact.
Note that $\tilde{A}_k^{p,q}(n) = \{0\}$ for $q < k.$ First suppose that $q = k.$ Then $$\tilde{B}_k^{p,k}(n) := B_k^{p,k}(n) - B_{k-1}^{p,k}(n) = \left\{Y^{\r}\Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}} \in B_k^{p,k}(n) \right\}.$$ Since $\tilde{A}_k^{p+D, k-1}(n) = \{0\},$ we see that $\ker\left(d^{p,k}|_{\tilde{A}_k^{p,k}(n)}\right) = \tilde{A}_k^{p,k}(n).$ Given $Y^{\r}\Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}} \in \tilde{A}_k^{p, k}(n),$ we see that $$d(Y_1^{r_1} \cdots Y_{m-1}^{r_{m-1}} Y_m^{r_m-2} \Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}}\Y_m) = Y^{\r} \Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}},$$ and $Y_1^{r_1} \cdots Y_{m-1}^{r_{m-1}}Y_m^{r_m-2} \Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}}\Y_m \in \tilde{A}_k^{p-D, k+1}(n)$ since $s_m = 1.$ Now consider $\tilde{A}_k^{p,q}(n)$ for $q \geq k+1.$ Then $$\tilde{B}_k^{p, q}(n) = B_k^{p, q}(n) - B_{k-1}^{p, q}(n) = \left \{Y^{\r}\Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}}\Y_m^{q-k} \in B_k^{p, q}(n) \right \}.$$ Let $$\alpha = \sum_{Y^{\r}\Y^{\s} \in \tilde{B}_k^{p, q}(n)} c_{\r, \s}Y^{\r}\Y_0^{s_0}\cdots \Y_{m-1}^{s_{m-1}}\Y_m^{q-k} \in \ker\left(d^{p,q}|_{\tilde{A}_k^{p, q}(n)}\right),$$ where $c_{\r,\s} \in \Q.$ Then \begin{eqnarray*} d(\alpha) & = & \sum_{Y^{\r}\Y^{\s} \in \tilde{B}_k^{p, q}(n)} c_{\r, \s}Y^{\r}\cdot d(\Y_0^{s_0}\cdots \Y_{m-1}^{s_{m-1}}\Y_m^{q-k})\\ & = & \sum_{Y^{\r}\Y^{\s} \in \tilde{B}_k^{p, q}(n)} \left( c_{\r, \s}Y^{\r}\cdot d(\Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}}) \cdot \Y_m^{q-k} + (-1)^t c_{\r, \s} Y^{\r} \Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}} \cdot d(\Y_m^{q-k})\right)\\ & = & \sum_{Y^{\r}\Y^{\s} \in \tilde{B}_k^{p, q}(n)} (-1)^t(q-k) c_{\r, \s} Y_1^{r_1} \cdots Y_{m-1}^{r_{m-1}} Y_m^{r_m+2} \Y_0^{s_0} \cdots \Y_{m-1}^{s_{m-1}}\Y_m^{q-k-1}\\ & = & 0 \end{eqnarray*} where $\displaystyle{ t = \sum_{i=0}^{m-1} s_i(|y_i|+1).}$ From this computation, we see that the only way $d(\alpha) = 0$ is if $\alpha = 0.$ Thus $\ker\left(d^{p,q}|_{\tilde{A}_k^{p,q}(n)}\right) = 0$, and so we trivially have that
$\ker\left(d^{p,q}|_{\tilde{A}_k^{p,q}(n)}\right) = \im\left(d^{p-D,q+1}|_{\tilde{A}_k^{p-D,q+1}(n)}\right).$ Thus $\mathbf{A_k}(n)$ is exact, and by induction it follows that $\mathbf{A}(n)$ is exact. \end{proof} From now on, we will always work in the $E_2$-page quotiented out by this exact subcomplex (i.e., we may assume $r_m \in \{0,1\}$ and $s_m = 0$), but by a slight abuse of notation, we will still refer to the $p,q$ entry as $E_2^{p,q}(n).$ Armed with Proposition \ref{top}, we may now prove Theorem \ref{T:SIVC}. \begin{proof}[Proof of Theorem \ref{T:SIVC}.] Fix $q \geq 0.$ Recall that given $Y^{\mathbf{r}}\Y^{\mathbf{s}} \in B^{p,q}(n),$ \begin{eqnarray*} \ell(Y^{\r}\Y^{\s}) & = & 2s_0 + \sum_{i=1}^m(r_i+2s_i)\\ & = & 2q + \sum_{i=1}^m r_i\\ & \leq & n. \end{eqnarray*} Thus $\sum_{i=1}^m r_i \leq n-2q.$ Additionally, from Proposition \ref{top}, we know that $r_m \in \{0,1\}$ and $s_m = 0.$ It follows that for a fixed $q$ and $n$, $Y^{\r}\Y^{\s}$ can have a $p$-value of at most $q(D-1) + (n-1-2q)(D-1) + D$, and so $B^{p,q}(n) = \emptyset$ for \begin{eqnarray} p & \geq & q(D-1) + (n-2q-1)(D-1) + D + 1\nonumber\\ & = & (n-q)(D-1) + 2.\label{star} \end{eqnarray} Recall that $$H^i(\Conf^nX;\Q) = \bigoplus_{p+(D-1)q = i} E_\infty^{p,q}(n).$$ From \eqref{star}, we have that, for a fixed $n$, $$E_\infty^{p,q}(n) = \{0\}$$ for $p+(D-1)q \geq n(D-1)+2.$ Thus it follows that $H^i(\Conf^nX;\Q) = 0$ for $i \geq n(D-1) + 2.$ \end{proof} The bound given by Theorem \ref{T:SIVC} is sharp. As we stated earlier in Proposition \ref{P:gCD}, $$H^{n+1}(\Conf^n\Sigma_g; \Q) \neq 0$$ for $n \geq 3.$ We now give the proof of Proposition \ref{P:gCD}. \begin{proof}[Proof of Proposition \ref{P:gCD}.]
Recall that $\{1, a_1, \cdots, a_g, b_1, \cdots, b_g, t\}$ is a graded basis of $H^*(X;\Q)$ with $a_ib_i = -b_ia_i = t$ for all $1 \leq i \leq g.$ From our description of $d$ given in Section 2, we see that $$d(\A^{\r}\B^{\s}) = -2\sum_{i=1}^g(r_iA_iT\A^{\r-\mathbf{e_i}}\B^{\s} + s_iB_iT\A^{\r}\B^{\s - \mathbf{e_i}})$$ where $\mathbf{e_i}$ is the vector whose $i$th component is $1$ and all other components are $0$. First suppose that $n$ is even. Then $n = 2k$ for some $k \in \N.$ We shall show that $E_\infty^{k+2,k-1}(n) \neq \{0\}$, and thus $$H^{n+1}(\Conf^nX;\Q) = H^{2k+1}(\Conf^nX;\Q) = \bigoplus_{p+q = 2k+1} E_\infty^{p,q}(n) \neq \{0\}.$$ Now $E_2^{k+2,k-1}(n)$ has basis $$B^{k+2,k-1}(n) = \left\{A_iT\A^{\r}\B^{\s}, B_iT\A^{\r}\B^{\s} : 1 \leq i \leq g, \sum_{i=1}^g(r_i+s_i) = k-1\right\}.$$ Thus $$\dim E_2^{k+2,k-1}(n) = 2g \cdot \binom{k+2g-2}{2g-1}.$$ Since $d(A_iT\A^{\r}\B^{\s}) = d(B_iT\A^{\r}\B^{\s}) = 0$ for $1 \leq i \leq g$, it follows that $\rank d^{k+2, k-1} = 0.$ Now $E_2^{k,k}(n)$ has basis $$B^{k,k}(n) = \left\{\A^{\r}\B^{\s} : \sum_{i=1}^g(r_i+s_i) = k \right\}.$$ Then $$\rank d^{k,k} \leq \dim E_2^{k,k}(n) = \binom{k+2g-1}{2g-1}.$$ It follows that \begin{eqnarray*} \dim E_\infty^{k+2,k-1}(n) & = & \dim E_2^{k+2,k-1} - \rank d^{k+2,k-1} - \rank d^{k,k}\\ & \geq & 2g \cdot \binom{k+2g-2}{2g-1} - \binom{k+2g-1}{2g-1}\\ & = & \frac{2g \cdot (k+2g-2)!}{(k-1)!(2g-1)!} - \frac{(k+2g-1)!}{k! \cdot (2g-1)!}\\ & = & (k-1)\cdot \binom{k+2g-2}{2g-2}\\ & > & 0, \end{eqnarray*} since $g \geq 1$ and $k \geq 2.$ Now suppose that $n$ is odd. 
Then $n = 2k+1$ for some $k \in \N.$ In a manner similar to the above, we shall show that $E_{\infty}^{k+2,k}(n) \neq \{0\}$, and it will thus follow that $$H^{n+1}(\Conf^nX;\Q) = H^{2k+2}(\Conf^nX;\Q) = \bigoplus_{p+q = 2k+2} E_\infty^{p,q}(n) \neq \{0\}.$$ We see that $E_2^{k+2,k}(n)$ has basis $$B^{k+2,k}(n) = \left\{T\A^{\r}\B^{\s} : \sum_{i=1}^g(r_i+s_i) = k \right\} \neq \emptyset.$$ Since $d(T\A^{\r}\B^{\s}) = 0$, we see that $\rank d^{k+2,k} = 0.$ Furthermore, $B^{k,k+1}(n) = \emptyset$, so $\rank d^{k,k+1} = 0.$ Thus $$\dim E_\infty^{k+2,k}(n) = \dim E_2^{k+2,k} - \rank d^{k+2,k} - \rank d^{k, k+1} > 0,$$ and so $H^{n+1}(\Conf^nX;\Q) \neq 0.$ \end{proof} \section{Proofs of unstable and stable values} In this section, we give concrete calculations for $H^*(\Conf^nX;\Q)$ for four spaces: $\C\P^1$, $\C\P^2$, $\C\P^3$, and $\Sigma_1$. This proves Propositions \ref{stableg1betti} and \ref{cp3stableinstability}. Recall that the $i$th Betti number $b_i(n)$ is given by $$b_i(n) = \sum_{p+(D-1)q = i} \dim(E_\infty^{p,q}(n)),$$ and $$\dim(E_\infty^{p,q}(n)) = \dim(E_2^{p,q}(n)) - \rank(d^{p,q}) - \rank(d^{p-D, q+1}).$$ Thus for the examples given below, we begin by computing bases for each $E_2^{p,q}(n)$, then we compute $d^{p,q}$ on each basis element and determine the rank of each $d^{p,q}$. From there, we add and subtract the relevant quantities to determine the stable and unstable values of the Betti numbers. \subsection{Case of $\C\P^1$} Recall that $\{1, x\}$ is a graded basis of $H^*(\C\P^1;\Q)$ with $|1| = 0$ and $|x| = 2.$ An arbitrary basis element of the $E_2$-page is of the form $X^{r}\1^{s}$ with $r, s \in \{0,1\}.$ Thus we see that $B^{p,q}(n) = \emptyset$ for $p \geq 3$ or $q \geq 2.$ Since only a finite number of $B^{p,q}(n)$ are nonempty, our program can compute all the Betti numbers of $\Conf^n\C\P^1,$ proving the following proposition.
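The finiteness argument above can be checked mechanically. The sketch below is a minimal illustration, not the authors' actual program; it hard-codes the genus-zero case of the differential described in Section 2 (namely $d(\1) = 2X$) together with the relation $X^2 = 0$ in $H^*(\C\P^1;\Q)$, enumerates the four possible basis elements $X^r\1^s$, discards those violating the length bound $r + 2s \leq n$, cancels the pair joined by the differential, and reads off the Betti numbers:

```python
# Betti numbers of Conf^n(CP^1) from the finite E_2-page described above.
# Basis: X^r * Xbar^s (Xbar is the class written \1 in the text), r, s in {0,1};
# bidegree (2r, s), length r + 2s.  Assumed differential: d(Xbar) = 2X, X^2 = 0,
# so d(X * Xbar) = 2 X^2 = 0.

def betti_conf_cp1(n):
    basis = [(r, s) for r in (0, 1) for s in (0, 1) if r + 2 * s <= n]
    # d sends (r, 1) to 2*(r + 1, 0); the image vanishes once r + 1 >= 2.
    image, killed = set(), set()
    for r, s in basis:
        if s == 1 and r + 1 < 2:
            image.add((r + 1, 0))
            killed.add((r, s))
    betti = {}
    for r, s in basis:
        if (r, s) in image or (r, s) in killed:
            continue
        i = 2 * r + s          # total degree p + (D-1)q with D = 2
        betti[i] = betti.get(i, 0) + 1
    return betti
```

For $n = 1$ this yields $b_0 = b_2 = 1$, for $n = 2$ only $b_0 = 1$, and for $n \geq 3$ it yields $b_0 = b_3 = 1$, in agreement with the proposition below.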
\begin{proposition} \label{cp1betti} The stable Betti numbers of $\Conf^n \C\P^1$ are $b_0 = b_3 = 1$ and $b_i = 0$ for $i \neq 0,3.$ Furthermore, the entire cohomology is given by $$ H^i(\Conf^n\C\P^1; \Q) = \begin{cases} \Q & \mbox{if } i = 0\\ \Q & \mbox{if } n = 1, i = 2\\ \Q & \mbox{if } n \geq 3, i =3\\ 0 & \mbox{else} \end{cases}. $$ \end{proposition} These numbers have been previously computed by many others. Cohen and Taylor \cite[Proposition 5.4]{CohenTaylor1978Springer} computed the stable Betti numbers of $\Conf^n\C\P^1.$ Sevryuk \cite[Theorem 2]{Sevryuk1984}, Salvatore \cite[Theorem 18]{Salvatore2004}, Randal-Williams \cite[Theorem 1.1]{Randal-Williams2013spheres}, and Ashraf, Azam, and Berceanu \cite[Lemma 6.1, Proposition 6.1]{AshrafAzamBerceanu2014} compute all the Betti numbers of $\Conf^n\C\P^1$ for $n \geq 1.$ Related computations are also done in \cite{BodigheimerCohenTaylor1989, FeichtnerZiegler2000, Napolitano2003}. \subsection{Case of $\C\P^2$} Recall that $\{1,x,x^2\}$ is a graded basis of $H^*(\C\P^2; \Q)$ with $|1| = 0$ and $|x| = 2.$ Set $x_i = x^i$ for $0 \leq i \leq 2.$ For $\r = (r_0,r_1) \in \Z^2$, we shall denote $\X^{\r} = \X_0^{r_0}\X_1^{r_1}.$ Since $|x_i|$ is even for $i = 0,1$ it follows that $r_i \in \{0,1\}$. Thus $B^{2p,q}(n) = \emptyset$ for $q \geq 3.$ From the description of $d$ given in Section 2, we see that \begin{eqnarray*} d(\X^{(1,0)}) & = & 2X_2 + X_1^2,\\ d(\X^{(0,1)}) & = & 2X_1X_2, \quad \mbox{and}\\ d(\X^{(1,1)}) & = & 2X_2\X^{(0,1)} + X_1^2\X^{(0,1)} - 2X_1X_2\X^{(1,0)}. \end{eqnarray*} For $n \geq p$, $$B^{2p,0}(n) = \{X_1^p, X_1^{p-2}X_2\}$$ where $\ell(X_1^p) = p$ and $\ell(X_1^{p-2}X_2) = p-1.$ The following tables record the values of $\dim E_2^{2p,0}(n)$ for $p \geq 2$ and $0 \leq p \leq 1$, respectively. 
\begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{2p,0}(n)$ \\ \hline $n \geq p$ & $2$\\ $n = p-1$ & $1$\\ $n \leq p-2$ & $0$ \end{tabular} \end{center} \begin{center} \label{Tab:cp2dim0small} \centering \begin{tabular}{c|c|c} range of $n$ & $\dim E_2^{0,0}(n)$ & $\dim E_2^{2,0}(n)$\\ \hline $n \geq 1$ & $1$ & $1$ \end{tabular} \end{center} Since $E_2^{2p,-1}(n) = \{0\}$ for all $p$ and $n$, we see that $\rank d^{2p,0} = 0$. For $n \geq p+2$, $$B^{2p,1}(n) = \{X_1^p\X^{(1,0)}, X_1^{p-2}X_2\X^{(1,0)}, X_1^{p-1}\X^{(0,1)}, X_1^{p-3}X_2\X^{(0,1)}\}$$ where $\ell(X_1^p\X^{(1,0)}) = p+2$, $\ell(X_1^{p-2}X_2\X^{(1,0)}) = \ell(X_1^{p-1}\X^{(0,1)}) = p+1,$ and $\ell(X_1^{p-3}X_2\X^{(0,1)}) = p.$ The following tables record the values of $\dim E_2^{2p,1}(n)$ for $p \geq 3$ and $0 \leq p \leq 2$, respectively. \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{2p,1}(n)$\\ \hline $n \geq p+2$ & $4$\\ $n = p+1$ & $3$\\ $n = p$ & $1$\\ $n \leq p-1$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c} range of $n$ & $\dim E_2^{0,1}(n)$ & $\dim E_2^{2,1}(n)$ & $\dim E_2^{4,1}(n)$\\ \hline $n \geq 4$ & $3$ & $2$ & $1$\\ $n = 3$ & $2$ & $2$ & $1$\\ $n = 2$ & $0$ & $1$ & $1$\\ $n = 1$ & $0$ & $0$ & $0$ \end{tabular} \end{center} In order to compute $\rank d^{2p,1}$, we first compute $d^{2p,1}$ of each of the basis elements of $B^{2p,1}(n).$ \begin{center} \begin{tabular}{l|l} basis vector & image under $d^{2p,1}$\\ \hline $X_1^p\X^{(1,0)}$ & $2X_1^pX_2 + X_1^{p+2}$\\ $X_1^{p-2}X_2\X^{(1,0)}$ & $X_1^pX_2$\\ $X_1^{p-1}\X^{(0,1)}$ & $2X_1^pX_2$\\ $X_1^{p-3}X_2\X^{(0,1)}$ & $0$ \end{tabular} \end{center} Thus counting the number of linearly independent vectors in the right column above, we obtain $\rank d^{2p,1}.$ The following tables record the values of $\rank d^{2p,1}$ for $p \geq 1$ and for $p = 0$, respectively.
\begin{center} \begin{tabular}{c|c} range of $n$ & $\rank d^{2p,1}$\\ \hline $n \geq p+2$ & $2$\\ $n = p+1$ & $1$\\ $n \leq p$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c} range of $n$ & $\rank d^{0,1}$\\ \hline $n \geq 2$ & $1$\\ $n = 1$ & $0$ \end{tabular} \end{center} For $n \geq p+3$, $$B^{2p,2}(n) = \left\{X_1^{p-1}\X^{(1,1)}, X_1^{p-3}X_2\X^{(1,1)}\right\}$$ where $\ell(X_1^{p-1}\X^{(1,1)}) = p+3$ and $\ell(X_1^{p-3}X_2\X^{(1,1)}) = p+2.$ The following tables record the values of $\dim E_2^{2p,2}(n)$ for $p \geq 3$ and for $0 \leq p \leq 2$, respectively. \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{2p,2}(n)$\\ \hline $n \geq p+3$ & $2$\\ $n = p+2$ & $1$\\ $n \leq p+1$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c} range of $n$ & $\dim E_2^{0,2}(n)$ & $\dim E_2^{2,2}(n)$ & $\dim E_2^{4,2}(n)$\\ \hline $n \geq 5$ & $0$ & $1$ & $1$\\ $n = 4$ & $0$ & $1$ & $0$\\ $n \leq 3$ & $0$ & $0$ & $0$ \end{tabular} \end{center} To compute $\rank d^{2p,2}$, we first compute $d^{2p,2}$ of each of the elements of $B^{2p,2}(n).$ \begin{center} \begin{tabular}{l|l} basis vector & image under $d^{2p,2}$\\ \hline $X_1^{p-1}\X^{(1,1)}$ & $2X_1^{p-1}X_2\X^{(0,1)} + X_1^{p+1}\X^{(0,1)} - 2X_1^pX_2\X^{(1,0)}$\\ $X_1^{p-3}X_2\X^{(1,1)}$ & $X_1^{p-1}X_2\X^{(0,1)}$ \end{tabular} \end{center} Thus counting the number of linearly independent vectors in the right column above, we obtain $\rank d^{2p,2}.$ The following tables record the values of $\rank d^{2p,2}$ for $p \geq 3$ and for $0 \leq p \leq 2$, respectively. 
\begin{center} \begin{tabular}{c|c} range of $n$ & $\rank d^{2p,2}$\\ \hline $n \geq p+3$ & $2$\\ $n = p+2$ & $1$\\ $n \leq p+1$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c} range of $n$ & $\rank d^{0,2}$ & $\rank d^{2,2}$ & $\rank d^{4,2}$\\ \hline $n \geq 5$ & $0$ & $1$ & $1$\\ $n = 4$ & $0$ & $1$ & $0$\\ $n \leq 3$ & $0$ & $0$ & $0$ \end{tabular} \end{center} Now that we know $\dim E_2^{2p,q}(n)$ and $\rank d^{2p,q}$ for all $p \geq 0$ and $q = 0,1,2$, we can compute $\dim E_\infty^{2p,q}(n).$ Recall that $$\dim E_\infty^{2p,q}(n) = \dim E_2^{2p,q}(n) - \rank d^{2p,q} - \rank d^{2p-4, q+1}.$$ Following the formula above, we see that there are only finitely many values of $p$ and $q$ for which $\dim E_\infty^{2p,q}(n) \neq 0.$ We record these values in the following table. \begin{center} \begin{tabular}{c|c|c|c|c|c|c} range of $n$ & $\dim E_\infty^{0,0}(n)$ & $\dim E_\infty^{2,0}(n)$ & $\dim E_\infty^{4,0}(n)$ & $\dim E_\infty^{4,1}(n)$ & $\dim E_\infty^{6,1}(n)$ & $\dim E_\infty^{8,1}(n)$\\ \hline $n \geq 4$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$\\ $n = 3$ & $1$ & $1$ & $1$ & $1$ & $1$ & $0$\\ $n \leq 2$ & $1$ & $1$ & $1$ & $0$ & $0$ & $0$ \end{tabular} \end{center} Recall that $$b_i(n) = \sum_{2p+3q = i} \dim E_\infty^{2p,q}(n).$$ We have now proven the following proposition. \begin{proposition} \label{cp2betti} The stable Betti numbers of $\Conf^n \C\P^2$ are $b_0 = b_2 = b_4 = b_7 = b_9 = b_{11} = 1$ and $b_i = 0$ for $i \neq 0,2,4,7,9,11$. Furthermore, the entire cohomology is given by $$ H^i(\Conf^n \C\P^2; \Q) = \begin{cases} \Q & \mbox{if } i = 0,2,4\\ \Q & \mbox{if } n \geq 3, i = 7,9\\ \Q & \mbox{if } n \geq 4, i =11\\ 0 & \mbox{else} \end{cases}. $$ \end{proposition} These numbers have been previously computed. 
F\'{e}lix and Tanr\'{e} \cite[Theorem 2]{FelixTanre2005} computed the algebra structure of $H^*(\Conf^n\C\P^2; \Z)$ for all $n \geq 1.$ The stable Betti numbers of $\Conf^n\C\P^2$ prove Conjecture G of \cite{VakilWood2015}; this was first verified by Kupers and Miller \cite[Theorem 1.1]{KupersMiller2014}, who computed the stable Betti numbers of $\Conf^n \C\P^2$ using McDuff's scanning map. Related computations are also done in \cite{Sohail2010, AshrafBerceanu2014}. \subsection{Case of $\C\P^3$} Recall that $H^\ast(\C\P^3; \Q) \simeq \Q[x]\slash (x^4)$ where $|1| = 0$ and $|x| = 2.$ Then if we set $x_i := x^i$ for $0 \leq i \leq 3,$ we see that $\{x_0, x_1, x_2, x_3\}$ is a graded basis of $H^\ast(\C\P^3; \Q).$ For $\r = (r_0, r_1, r_2) \in \Z^3,$ we shall denote $\X^\r = \X_0^{r_0}\X_1^{r_1}\X_2^{r_2}.$ Since $|x_i|$ is even for $i=0,1,2$, it follows that $r_i \in \{0,1\}$. Thus $B^{2p,q}(n) = \emptyset$ for $q \geq 4.$ From the description of $d$ given in Section 2, it follows that \begin{eqnarray*} d\left(\X^{(1,0,0)}\right) & = & 2X_1X_2 + 2X_3,\\ d\left(\X^{(0,1,0)}\right) & = & 2X_1X_3 + X_2^2,\\ d\left(\X^{(0,0,1)}\right) & = & 2X_2X_3,\\ d\left(\X^{(1,1,0)}\right) & = & 2X_1X_2\X^{(0,1,0)} + 2X_3\X^{(0,1,0)} - 2X_1X_3\X^{(1,0,0)} - X_2^2\X^{(1,0,0)},\\ d\left(\X^{(1,0,1)}\right) & = & 2X_1X_2\X^{(0,0,1)} + 2X_3\X^{(0,0,1)} - 2X_2X_3\X^{(1,0,0)},\\ d\left(\X^{(0,1,1)}\right) & = & 2X_1X_3\X^{(0,0,1)} + X_2^2\X^{(0,0,1)} - 2X_2X_3\X^{(0,1,0)}, \mbox{~and}\\ d\left(\X^{(1,1,1)}\right) & = & 2X_1X_2\X^{(0,1,1)} + 2X_3\X^{(0,1,1)} - 2X_1X_3\X^{(1,0,1)} - X_2^2\X^{(1,0,1)} + 2X_2X_3\X^{(1,1,0)}. \end{eqnarray*} For $q = 0$, the elements of $B^{2p,0}(n)$ have no $q$-part. So we have two types of basis elements, those with an $X_3$ and those without an $X_3:$ \[ B^{2p,0}(n) = \left\{\begin{array}{l|l} X_1^{p-2j}X_2^j, & \max\{p-n,0\} \leq j \leq \floor*{\frac{p}{2}},\\ X_1^{p-3-2k}X_2^kX_3 & \max\{p-n-2,0\} \leq k \leq \floor*{\frac{p-3}{2}} \end{array}\right\}.
\] The left-hand sides of the inequalities come from the length requirement on elements of $E_2(n)$, and the right-hand sides of the inequalities are due to the fact that the exponent of $X_1$ must be a nonnegative integer. Since we have an explicit description of $B^{2p,0}(n)$, computing $\dim E_2^{2p,0}(n)$ is just a matter of counting while taking into account the various cases presented by the inequalities, e.g. when $p - n \geq 0.$ The following tables record the values of $\dim E_2^{2p,0}(n)$ for $p \geq 4$ and for $0 \leq p \leq 3$, respectively. \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{2p,0}(n)$ \\ \hline $n \geq p$ & $p$\\ $p-2 \leq n \leq p-1$ & $n$\\ $\frac{p-2}{2} \leq n \leq p-3$ & $2n-p+2$\\ $n \leq \frac{p-3}{2}$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c|c} range of $n$ & $\dim E_2^{0,0}(n)$ & $\dim E_2^{2,0}(n)$ & $\dim E_2^{4,0}(n)$ & $\dim E_2^{6,0}(n)$ \\ \hline $n \geq 3$ & $1$ & $1$ & $2$ & $3$\\ $n = 2$ & $1$ & $1$ & $2$ & $2$\\ $n = 1$ & $1$ & $1$ & $1$ & $1$ \end{tabular} \end{center} Recall that for all $p$, $d^{2p,0}$ is the zero map, and so $\rank d^{2p,0} = 0$. There are three possible $q$-parts for elements of $B^{2p,1}(n)$: $\X^{(1,0,0)},\: \X^{(0,1,0)},$ and $\X^{(0,0,1)}$. Thus when we further separate basis vectors by whether or not they have $X_3$ as a factor, we obtain six types of basis vectors: \[ B^{2p,1}(n) = \left\{\begin{array}{l|l} X_1^{p-2j}X_2^j\X^{(1,0,0)}, & \max\{p-n+2,0\} \leq j \leq \floor*{\frac{p}{2}},\\ X_1^{p-3-2k}X_2^kX_3\X^{(1,0,0)}, & \max\{p-n,0\} \leq k \leq \floor*{\frac{p-3}{2}},\\ X_1^{p-1-2\ell}X_2^\ell\X^{(0,1,0)}, & \max\{p-n+1,0\} \leq \ell \leq \floor*{\frac{p-1}{2}},\\ X_1^{p-4-2r}X_2^rX_3\X^{(0,1,0)}, & \max\{p-n-1,0\} \leq r \leq \floor*{\frac{p-4}{2}},\\ X_1^{p-2-2s}X_2^s\X^{(0,0,1)}, & \max\{p-n,0\} \leq s \leq \floor*{\frac{p-2}{2}},\\ X_1^{p-5-2t}X_2^tX_3\X^{(0,0,1)} & \max\{p-n-2, 0\} \leq t \leq \floor*{\frac{p-5}{2}} \end{array}\right\}.
\] Again, the left-hand sides of the inequalities are due to the length requirement, and the right-hand sides come from the necessity that the exponent of $X_1$ be a nonnegative integer. As in the case of $E_2^{2p,0}(n),$ since we have an explicit description of $B^{2p,1}(n),$ computing $\dim E_2^{2p,1}(n)$ is just a matter of counting while taking into account the various cases presented by the inequalities. The following tables record the values of $\dim E_2^{2p,1}(n)$ for $p \geq 6$ and for $0 \leq p \leq 5$, respectively. \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{2p,1}(n)$ \\ \hline $n \geq p+2$ & $3p-3$\\ $n = p+1$ & $3p-4$\\ $n=p$ & $3p-6$\\ $n=p-1$ & $3p-10$\\ $\frac{p+2}{2} \leq n \leq p-2$ & $6n-3p-3$\\ $n = \frac{p+1}{2}$ & $1$\\ $n\leq \frac{p}{2}$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c|c|c|c} range of $n$ & $\dim E_2^{0,1}(n)$ & $\dim E_2^{2,1}(n)$ & $\dim E_2^{4,1}(n)$ & $\dim E_2^{6,1}(n)$ & $\dim E_2^{8,1}(n)$ & $\dim E_2^{10,1}(n)$\\ \hline $n \geq 7$ & $1$ & $2$ & $4$ & $6$ & $9$ & $12$\\ $n = 6$ & $1$ & $2$ & $4$ & $6$ & $9$ & $11$\\ $n = 5$ & $1$ & $2$ & $4$ & $6$ & $8$ & $9$\\ $n = 4$ & $1$ & $2$ & $4$ & $5$ & $6$ & $5$\\ $n = 3$ & $1$ & $2$ & $3$ & $3$ & $2$ & $1$\\ $n = 2$ & $1$ & $1$ & $1$ & $0$ & $0$ & $0$\\ $n = 1$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \end{tabular} \end{center} To compute the rank of $d^{2p,1}$, we first compute $d^{2p,1}$ of each of the basis vectors of $B^{2p,1}(n).$ \begin{center} \begin{tabular}{l|l} basis vector & image under $d^{2p,1}$\\ \hline $X_1^{p-2j}X_2^j\X^{(1,0,0)}$ & $2X_1^{p+1-2j}X_2^{j+1} + 2X_1^{p-2j}X_2^jX_3$\\ $X_1^{p-3-2k}X_2^kX_3\X^{(1,0,0)}$ & $2X_1^{p-2-2k}X_2^{k+1}X_3$\\ $X_1^{p-1-2\ell}X_2^{\ell}\X^{(0,1,0)}$ & $2X_1^{p-2\ell}X_2^{\ell}X_3 + X_1^{p-1-2\ell}X_2^{\ell+2}$\\ $X_1^{p-4-2r}X_2^rX_3\X^{(0,1,0)}$ & $X_1^{p-4-2r}X_2^{r+2}X_3$\\ $X_1^{p-2-2s}X_2^s\X^{(0,0,1)}$ & $2X_1^{p-2-2s}X_2^{s+1}X_3$\\ $X_1^{p-5-2t}X_2^tX_3\X^{(0,0,1)}$ & $0$
\end{tabular} \end{center} We then count the number of linearly independent vectors in the image of $d^{2p,1}$ to obtain $\rank d^{2p,1}.$ The following tables record the values of $\rank d^{2p,1}$ for $p \geq 2$ and for $p = 0,1$, respectively. \begin{center} \begin{tabular}{c|c} range of $n$ & $\rank d^{2p,1}$\\ \hline $n \geq p+2$ & $p+2$\\ $\frac{p+2}{2} \leq n \leq p+1$ & $2n-p-1$\\ $n \leq \frac{p+1}{2}$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c} range of $n$ & $\rank d^{0,1}$ & $\rank d^{2,1}$\\ \hline $n \geq 3$ & $1$ & $2$\\ $n = 2$ & $1$ & $1$\\ $n = 1$ & $0$ & $0$ \end{tabular} \end{center} For elements of $B^{2p,2}(n)$, there are three possible $q$-parts: $\X^{(1,1,0)},$ $\X^{(1,0,1)}$, and $\X^{(0,1,1)}$. Thus when we further distinguish basis vectors by whether or not they have $X_3$ as a factor, we have six types of basis vectors: \[ B^{2p,2}(n) = \left\{\begin{array}{l|l} X_1^{p-1-2j}X_2^j\X^{(1,1,0)}, & \max\{p-n+3,0\} \leq j \leq \floor*{\frac{p-1}{2}},\\ X_1^{p-4-2k}X_2^kX_3\X^{(1,1,0)}, & \max\{p-n+1,0\} \leq k \leq \floor*{\frac{p-4}{2}},\\ X_1^{p-2-2\ell}X_2^\ell\X^{(1,0,1)}, & \max\{p-n+2,0\} \leq \ell \leq \floor*{\frac{p-2}{2}},\\ X_1^{p-5-2r}X_2^rX_3\X^{(1,0,1)}, & \max\{p-n,0\} \leq r \leq \floor*{\frac{p-5}{2}},\\ X_1^{p-3-2s}X_2^s\X^{(0,1,1)}, & \max\{p-n+1,0\} \leq s \leq \floor*{\frac{p-3}{2}},\\ X_1^{p-6-2t}X_2^tX_3\X^{(0,1,1)} & \max\{p-n-1,0\} \leq t \leq \floor*{\frac{p-6}{2}} \end{array}\right\}. \] As we have done previously, we count the elements of $B^{2p,2}(n)$ to determine $\dim E_2^{2p,2}(n).$ The following tables record the values of $\dim E_2^{2p,2}(n)$ for $p \geq 7$ and for $0 \leq p \leq 6$, respectively. 
\begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{2p,2}(n)$\\ \hline $n \geq p+3$ & $3p-6$\\ $n = p+2$ & $3p-7$\\ $n = p+1$ & $3p-9$\\ $n = p$ & $3p-13$\\ $\frac{p+5}{2} \leq n \leq p-1$ & $6n-3p-12$\\ $n = \frac{p+4}{2}$ & $1$\\ $n \leq \frac{p+3}{2}$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c} range of $n$ & $\dim E_2^{0,2}$ & $\dim E_2^{2,2}$ & $\dim E_2^{4,2}$ & $\dim E_2^{6,2}$ & $\dim E_2^{8,2}$ & $\dim E_2^{10,2}$ & $\dim E_2^{12,2}$\\ \hline $n \geq 9$ & $0$ & $1$ & $2$ & $4$ & $6$ & $9$ & $12$\\ $n = 8$ & $0$ & $1$ & $2$ & $4$ & $6$ & $9$ & $11$\\ $n = 7$ & $0$ & $1$ & $2$ & $4$ & $6$ & $8$ & $9$\\ $n = 6$ & $0$ & $1$ & $2$ & $4$ & $5$ & $6$ & $5$\\ $n = 5$ & $0$ & $1$ & $2$ & $3$ & $3$ & $2$ & $1$\\ $n = 4$ & $0$ & $1$ & $1$ & $0$ & $0$ & $0$ & $0$\\ $n = 3$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \end{tabular} \end{center} To compute the rank of $d^{2p,2},$ we first compute $d^{2p,2}$ of each of the elements of $B^{2p,2}(n).$ \begin{center} \begin{tabular}{l|l} basis vector & image under $d^{2p,2}$\\ \hline $X_1^{p-1-2j}X_2^j\X^{(1,1,0)}$ & $~ 2X_1^{p-2j}X_2^{j+1}\X^{(0,1,0)} + 2X_1^{p-1-2j}X_2^jX_3\X^{(0,1,0)}$\\ ~ & $- 2X_1^{p-2j}X_2^jX_3\X^{(1,0,0)} - X_1^{p-1-2j}X_2^{j+2}\X^{(1,0,0)}$\\ $X_1^{p-4-2k}X_2^kX_3\X^{(1,1,0)}$ & $~ 2X_1^{p-3-2k}X_2^{k+1}X_3\X^{(0,1,0)} - X_1^{p-4-2k}X_2^{k+2}X_3\X^{(1,0,0)}$\\ $X_1^{p-2-2\ell}X_2^\ell\X^{(1,0,1)}$ & $~ 2X_1^{p-1-2\ell}X_2^{\ell+1}\X^{(0,0,1)} + 2X_1^{p-2-2\ell}X_2^\ell X_3\X^{(0,0,1)}$\\ ~ & $- 2X_1^{p-2-2\ell}X_2^{\ell+1}X_3\X^{(1,0,0)}$\\ $X_1^{p-5-2r}X_2^rX_3\X^{(1,0,1)}$ & $~ 2X_1^{p-4-2r}X_2^{r+1}X_3\X^{(0,0,1)}$\\ $X_1^{p-3-2s}X_2^s\X^{(0,1,1)}$ & $~ 2X_1^{p-2-2s}X_2^sX_3\X^{(0,0,1)} + X_1^{p-3-2s}X_2^{s+2}\X^{(0,0,1)}$\\ ~ & $- 2X_1^{p-3-2s}X_2^{s+1}X_3\X^{(0,1,0)}$\\ $X_1^{p-6-2t}X_2^tX_3\X^{(0,1,1)}$ & $~ X_1^{p-6-2t}X_2^{t+2}X_3\X^{(0,0,1)}$ \end{tabular} \end{center} We then count the number of linearly independent vectors in the image of 
$d^{2p,2}$ to obtain $\rank d^{2p,2}.$ The following tables record the values of $\rank d^{2p,2}$ for $p \geq 5$ and for $0 \leq p \leq 4$, respectively. \begin{center} \begin{tabular}{c|c} range of $n$ & $\rank d^{2p,2}$\\ \hline $n \geq p+3$ & $2p-1$\\ $n = p+2$ & $2p-2$\\ $n = p+1$ & $2p-4$\\ $\frac{p+5}{2} \leq n \leq p$ & $4n-2p-8$\\ $n = \frac{p+4}{2}$ & $1$\\ $n \leq \frac{p+3}{2}$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c|c|c} range of $n$ & $\rank d^{0,2}$ & $\rank d^{2,2}$ & $\rank d^{4,2}$ & $\rank d^{6,2}$ & $\rank d^{8,2}$\\ \hline $n \geq 10$ & $0$ & $1$ & $2$ & $4$ & $6$\\ $n = 9$ & $0$ & $1$ & $2$ & $4$ & $6$\\ $n = 8$ & $0$ & $1$ & $2$ & $4$ & $6$\\ $n = 7$ & $0$ & $1$ & $2$ & $4$ & $6$\\ $n = 6$ & $0$ & $1$ & $2$ & $4$ & $5$\\ $n = 5$ & $0$ & $1$ & $2$ & $3$ & $3$\\ $n = 4$ & $0$ & $1$ & $1$ & $0$ & $0$\\ $n\leq 3$ & $0$ & $0$ & $0$ & $0$ & $0$ \end{tabular} \end{center} There is one possible $q$-part for elements of $B^{2p,3}(n)$, namely $\X^{(1,1,1)}.$ Thus the basis vectors of $B^{2p,3}(n)$ are differentiated by whether or not they have an $X_3$: \[B^{2p,3}(n) = \left\{\begin{array}{l|l} X_1^{p-3-2j}X_2^j\X^{(1,1,1)}, & \max\{p-n+3,0\} \leq j \leq \floor*{\frac{p-3}{2}},\\ X_1^{p-6-2k}X_2^kX_3\X^{(1,1,1)} & \max\{p-n+1, 0\} \leq k \leq \floor*{\frac{p-6}{2}} \end{array}\right\}. \] To determine $\dim E_2^{2p,3}(n)$, we count the elements of $B^{2p,3}(n).$ The following tables record the values of $\dim E_2^{2p,3}(n)$ for $p \geq 6$ and for $0 \leq p \leq 5$, respectively. 
\begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{2p,3}(n)$\\ \hline $n \geq p+3$ & $p-3$\\ $n = p+2$ & $p-4$\\ $\frac{p+8}{2} \leq n \leq p+1$ & $2n-p-7$\\ $n \leq \frac{p+7}{2}$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c|c|c|c} range of $n$ & $\dim E_2^{0,3}(n)$ & $\dim E_2^{2,3}(n)$ & $\dim E_2^{4,3}(n)$ & $\dim E_2^{6,3}(n)$ & $\dim E_2^{8,3}(n)$ & $\dim E_2^{10,3}(n)$\\ \hline $n \geq 8$ & $0$ & $0$ & $0$ & $1$ & $1$ & $2$\\ $n = 7$ & $0$ & $0$ & $0$ & $1$ & $1$ & $1$\\ $n = 6$ & $0$ & $0$ & $0$ & $1$ & $0$ & $0$\\ $n \leq 5$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \end{tabular} \end{center} Next we compute $d^{2p,3}$ of each of the basis elements of $B^{2p,3}(n).$ \begin{center} \begin{tabular}{l|l} basis vector & image under $d^{2p,3}$\\ \hline $X_1^{p-3-2j}X_2^j\X^{(1,1,1)}$ & $~ 2X_1^{p-2-2j}X_2^{j+1}\X^{(0,1,1)} + 2X_1^{p-3-2j}X_2^jX_3\X^{(0,1,1)}$\\ ~ & $- 2X_1^{p-2-2j}X_2^jX_3\X^{(1,0,1)} - X_1^{p-3-2j}X_2^{j+2}\X^{(1,0,1)}$\\ ~ & $+ 2X_1^{p-3-2j}X_2^{j+1}X_3\X^{(1,1,0)}$\\ $X_1^{p-6-2k}X_2^kX_3\X^{(1,1,1)}$ & $~ 2X_1^{p-5-2k}X_2^{k+1}X_3\X^{(0,1,1)} - X_1^{p-6-2k}X_2^{k+2}X_3\X^{(1,0,1)}$ \end{tabular} \end{center} Note that $d^{2p,3}\left(B^{2p,3}(n)\right)$ is a linearly independent set, so $\rank d^{2p,3} = \dim E_2^{2p,3}(n)$ for all $p$. The following tables record the values of $\rank d^{2p,3}$ for $p \geq 6$ and for $0 \leq p \leq 5$, respectively. 
\begin{center} \begin{tabular}{c|c} range of $n$ & $\rank d^{2p,3}$\\ \hline $n \geq p+3$ & $p-3$\\ $n = p+2$ & $p-4$\\ $\frac{p+8}{2} \leq n \leq p+1$ & $2n-p-7$\\ $n \leq \frac{p+7}{2}$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c|c|c|c} range of $n$ & $\rank d^{0,3}$ & $\rank d^{2,3}$ & $\rank d^{4,3}$ & $\rank d^{6,3}$ & $\rank d^{8,3}$ & $\rank d^{10,3}$\\ \hline $n \geq 8$ & $0$ & $0$ & $0$ & $1$ & $1$ & $2$\\ $n = 7$ & $0$ & $0$ & $0$ & $1$ & $1$ & $1$\\ $n = 6$ & $0$ & $0$ & $0$ & $1$ & $0$ & $0$\\ $n \leq 5$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \end{tabular} \end{center} Now that we know $\dim E_2^{2p,q}(n)$ and $\rank d^{2p,q}$ for all $p \geq 0$ and $q = 0,1,2,3$, we can compute $\dim E_\infty^{2p,q}(n)$. Recall that $$\dim E_\infty^{2p,q}(n) = \dim E_2^{2p,q}(n) - \rank d^{2p,q} - \rank d^{2p-6,q+1}.$$ The following tables record the values of $\dim E_\infty^{2p,0}(n)$ for $p \geq 5$ and for $0 \leq p \leq 4$, respectively. \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_\infty^{2p,0}(n)$\\ \hline $n \geq p$ & $1$\\ $n \leq p-1$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c|c|c} range of $n$ & $\dim E_\infty^{0,0}(n)$ & $\dim E_\infty^{2,0}(n)$ & $ \dim E_\infty^{4,0}(n)$ & $\dim E_\infty^{6,0}(n)$ & $\dim E_\infty^{8,0}(n)$\\ \hline $n \geq 4$ & $1$ & $1$ & $2$ & $2$ & $2$\\ $n = 3$ & $1$ & $1$ & $2$ & $2$ & $1$\\ $n = 2$ & $1$ & $1$ & $2$ & $1$ & $1$\\ $n = 1$ & $1$ & $1$ & $1$ & $1$ & $0$ \end{tabular} \end{center} The following tables record the values of $\dim E_\infty^{2p,1}(n)$ for $p \geq 8$ and for $0 \leq p \leq 7$, respectively. 
\begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_\infty^{2p,1}(n)$ \\ \hline $n \geq p$ & $2$\\ $n = p-1$ & $1$\\ $n \leq p-2$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c|c} range of $n$ & $\dim E_\infty^{0,1}$ & $\dim E_\infty^{2,1}$ & $\dim E_\infty^{4,1}$ & $\dim E_\infty^{6,1}$ & $\dim E_\infty^{8,1}$ & $\dim E_\infty^{10,1}$ & $\dim E_\infty^{12,1}$ & $\dim E_\infty^{14,1}$\\ \hline $n \geq 7$ & $0$ & $0$ & $0$ & $1$ & $2$ & $3$ & $3$ & $3$\\ $n = 6$ & $0$ & $0$ & $0$ & $1$ & $2$ & $3$ & $3$ & $2$\\ $n = 5$ & $0$ & $0$ & $0$ & $1$ & $2$ & $3$ & $2$ & $1$\\ $n = 4$ & $0$ & $0$ & $0$ & $1$ & $2$ & $2$ & $1$ & $1$\\ $n = 3$ & $0$ & $0$ & $0$ & $1$ & $1$ & $1$ & $0$ & $0$\\ $n \leq 2$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \end{tabular} \end{center} The following table records the values of $\dim E_\infty^{2p,2}(n)$ for $p \geq 7.$ \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_\infty^{2p,2}(n)$\\ \hline $n \geq p-1$ & $1$\\ $n \leq p-2$ & $0$ \end{tabular} \end{center} For $0 \leq p \leq 6$ and $n \geq 1$, $\dim E_\infty^{2p,2}(n) = 0$. Since $\dim E_2^{2p,3}(n) = \rank d^{2p,3}$ and $d^{2p,4}$ is the zero map, it follows that $\dim E_\infty^{2p,3}(n) = 0$ for all $p$ and $n$. Recall that for $i$ even, $$b_i(n) = \dim E_\infty^{i,0}(n) + \dim E_\infty^{i-10,2}(n),$$ and for $i$ odd, $$b_i(n) = \dim E_\infty^{i-5,1}(n) + \dim E_\infty^{i-15,3}(n).$$ We have now proven the following proposition, as well as Proposition \ref{cp3stableinstability}. \begin{proposition} \label{cp3betti} The stable Betti numbers of $\Conf^n \C\P^3$ are $b_i = 2$ for $i \geq 23$.
Furthermore, the entire cohomology is given by $$ H^i(\Conf^n \C\P^3; \Q) = \begin{cases} \Q^3 & \mbox{if } n \geq \frac{i-5}{2}, i =15,17,19\\ \Q^2 & \mbox{if } n = \frac{i-7}{2}, i = 15,17,19\\ \Q^2 & \mbox{if } n \geq \frac{i}{2}, i\mbox{ even }, i \geq 24 \mbox{ or } i = 4,8\\ \Q^2 & \mbox{if } n\geq \frac{i-5}{2}, i \mbox{ odd }, i \geq 21 \mbox{ or } i = 13,21\\ \Q & \mbox{if } \frac{i-12}{2} \leq n \leq \frac{i-2}{2}, i \mbox{ even }, i \geq 24\\ \Q & \mbox{if } n \geq \frac{i}{2}, i \mbox{ even }, 10 \leq i \leq 22,\\ \Q & \mbox{if } i = 0,2\\ \Q & \mbox{if } n = 1, i=4\\ \Q & \mbox{if } n =1,2, i = 6\\ \Q & \mbox{if } n = 2,3, i =8\\ \Q & \mbox{if } n = \frac{i-7}{2}, i \geq 21 \mbox{ or } i =13\\ \Q & \mbox{if } n = \frac{i-9}{2}, i = 15,17,19\\ \Q & \mbox{if } n = \frac{i-11}{2}, i = 15,19\\ \Q & \mbox{if } n \geq 3, i = 11\\ 0 & \mbox{else} \end{cases}. $$ \end{proposition} The stable Betti numbers of $\Conf^n\C\P^3$ prove Conjecture H of \cite{VakilWood2015}. F\'{e}lix and Thomas \cite[Section 3.2]{FelixThomas2000} computed $H_i(\Conf^3\C\P^3; \Q)$ for $0 \leq i \leq 15,$ and later Ashraf and Berceanu \cite[Theorem 1.4]{AshrafBerceanu2014} computed the cohomology algebra $H^*(\Conf^3\C\P^3;\Q).$ But Kupers and Miller \cite[Theorem 1.1]{KupersMiller2014} were the first to verify Conjecture H by computing all the stable Betti numbers of $\Conf^n\C\P^3$ using McDuff's scanning map. \subsection{Genus 1 Riemann surface} Recall that $\{1,a,b,t\}$ is a graded basis of $H^*(\Sigma_1; \Q)$ where $|a| = |b| = 1,$ $|t| = 2$, and $ab = -ba = t.$ An arbitrary basis vector of $E_2^{p,q}(n)$ is of the form $A^iB^jT^k\1^\ell\A^r\B^s$ where $i+j+2k+r+s = p$, $\ell +r+s = q$, and $i+j+k+2\ell+2r+2s \leq n.$ But since $|a|=|b|=1$ is odd and $\1$ has odd degree $|1|+1 = 1$, it follows that $i,j,\ell \in \{0,1\}.$ Furthermore, since $t$ is the orientation class of $H^*(X; \Q)$, we have that $k \in \{0,1\}$.
Thus for a fixed $q$, it follows that $E_2^{p,q}(n) = \{0\}$ for $p \leq q-2$ or $p \geq q+5.$ From the description of $d$ given in Section 2, we have that \begin{eqnarray*} d\left(\1\A^j\B^{q-1-j}\right) & = & 2T\A^j\B^{q-1-j} - 2AB\A^j\B^{q-1-j} - 2jAT\1\A^{j-1}\B^{q-1-j}\\ &~& - 2(q-1-j)BT\1\A^j\B^{q-2-j} \quad \mbox{and}\\ d\left(\A^k\B^{q-k}\right) & = & -2kAT\A^{k-1}\B^{q-k} - 2(q-k)BT\A^k\B^{q-1-k}. \end{eqnarray*} For $q \geq 1$ and $n\geq 2q$, $$B^{q-1,q}(n) = \{\1\A^j\B^{q-j-1} : 0 \leq j \leq q-1\}.$$ Since $\ell(\1\A^j\B^{q-1-j}) = 2q$, we see that $B^{q-1,q}(n) = \emptyset$ for $n \leq 2q-1$. The following table records the values of $\dim E_2^{q-1,q}(n)$ for $q \geq 1.$ \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{q-1,q}(n)$\\ \hline $n \geq 2q$ & $q$\\ $n \leq 2q-1$ & $0$ \end{tabular} \end{center} To compute the rank of $d^{q-1,q}$, we compute the image of each basis vector of $B^{q-1,q}(n)$. \begin{center} \begin{tabular}{l|l} basis vector & image under $d^{q-1,q}$\\ \hline $\1\A^j\B^{q-1-j}$ & $2T\A^j\B^{q-1-j} - 2AB\A^j\B^{q-1-j} - 2jAT\1\A^{j-1}\B^{q-1-j}$\\ ~ & $- 2(q-1-j)BT\1\A^j\B^{q-2-j}$ \end{tabular} \end{center} We count the number of linearly independent vectors in the rightmost column above to obtain $\rank d^{q-1,q}.$ The following table records the values of $\rank d^{q-1,q}$ for $q \geq 1.$ \begin{center} \begin{tabular}{c|c} range of $n$ & $\rank d^{q-1,q}$ \\ \hline $n \geq 2q$ & $q$\\ $n \leq 2q-1$ & $0$ \end{tabular} \end{center} For $q \geq 1$ and $n \geq 2q+1$, $$B^{q,q}(n) = \{ A\1\A^j\B^{q-1-j}, B\1\A^j\B^{q-1-j}, \A^k\B^{q-k} : 0 \leq j \leq q-1, 0 \leq k \leq q\},$$ where $\ell(A\1\A^j\B^{q-1-j}) = \ell(B\1\A^j\B^{q-1-j}) = 2q+1$ and $\ell(\A^k\B^{q-k}) = 2q$.
The following table records the values of $\dim E_2^{q,q}(n)$ for $q \geq 1.$ \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{q,q}(n)$\\ \hline $n \geq 2q+1$ & $3q+1$\\ $n = 2q$ & $q+1$\\ $n \leq 2q-1$ & $0$ \end{tabular} \end{center} To compute the rank of $d^{q,q}$, we first compute the image of each element of $B^{q,q}(n)$. \begin{center} \begin{tabular}{l|l} basis vector & image under $d^{q,q}$\\ \hline $A\1\A^j\B^{q-1-j}$ & $2(q-1-j)ABT\1\A^j\B^{q-2-j} - 2AT\A^j\B^{q-1-j}$\\ $B\1\A^j\B^{q-1-j}$ & $-2jABT\1\A^{j-1}\B^{q-1-j} - 2BT\A^j\B^{q-1-j}$\\ $\A^k\B^{q-k}$ & $-2kAT\A^{k-1}\B^{q-k} - 2(q-k)BT\A^k\B^{q-1-k}$ \end{tabular} \end{center} Thus counting the number of linearly independent vectors in the image, we obtain $\rank d^{q,q}.$ The following table records the values of $\rank d^{q,q}$ for $q \geq 1.$ \begin{center} \begin{tabular}{c|c} range of $n$ & $\rank d^{q,q}$ \\ \hline $n \geq 2q+1$ & $2q$\\ $n = 2q$ & $q+1$\\ $n \leq 2q-1$ & $0$ \end{tabular} \end{center} For $q \geq 1$ and $n \geq 2q+2$, we have $$B^{q+1,q}(n) = \{AB\1\A^j\B^{q-1-j}, T\1\A^j\B^{q-1-j}, A\A^k\B^{q-k}, B\A^k\B^{q-k} : 0 \leq j \leq q-1, 0\leq k \leq q\},$$ where $\ell(AB\1\A^j\B^{q-1-j}) = 2q+2$ and $\ell(T\1\A^j\B^{q-1-j}) = \ell(A\A^k\B^{q-k}) = \ell(B\A^k\B^{q-k}) = 2q+1$. The following table records the values of $\dim E_2^{q+1,q}(n)$ for $q \geq 1.$ \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{q+1,q}(n)$\\ \hline $n \geq 2q+2$ & $4q+2$\\ $n = 2q+1$ & $3q+2$\\ $n \leq 2q$ & $0$ \end{tabular} \end{center} We compute the image under $d^{q+1,q}$ of each element of $B^{q+1,q}$.
\begin{center} \begin{tabular}{l|l} basis vector & image under $d^{q+1,q}$\\ \hline $AB\1\A^j\B^{q-1-j}$ & $2ABT\A^j\B^{q-1-j}$\\ $T\1\A^j\B^{q-1-j}$ & $-2ABT\A^j\B^{q-1-j}$\\ $A\A^k\B^{q-k}$ & $-2kABT\A^{k-1}\B^{q-k}$\\ $B\A^k\B^{q-k}$ & $2(q-k)ABT\A^k\B^{q-1-k}$ \end{tabular} \end{center} Thus counting the number of linearly independent vectors in the image, we obtain $\rank d^{q+1,q}.$ The following table records the values of $\rank d^{q+1,q}$ for $q \geq 1.$ \begin{center} \begin{tabular}{c|c} range of $n$ & $\rank d^{q+1,q}$\\ \hline $n \geq 2q+1$ & $q$\\ $n \leq 2q$ & $0$ \end{tabular} \end{center} For $q \geq 1$ and $n \geq 2q+2$, we have $$B^{q+2,q}(n) = \{AT\1\A^j\B^{q-1-j}, BT\1\A^j\B^{q-1-j}, AB\A^k\B^{q-k}, T\A^k\B^{q-k}: 0\leq j \leq q-1, 0 \leq k \leq q\},$$ where $\ell(AT\1\A^j\B^{q-1-j}) = \ell(BT\1\A^j\B^{q-1-j}) = \ell(AB\A^k\B^{q-k}) = 2q+2$ and $\ell(T\A^k\B^{q-k}) = 2q+1$. The following table records the values of $\dim E_2^{q+2,q}(n)$ for $q \geq 1.$ \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{q+2,q}(n)$\\ \hline $n \geq 2q+2$ & $4q+2$\\ $n = 2q+1$ & $q+1$\\ $n \leq 2q$ & $0$ \end{tabular} \end{center} Since $d^{q+2,q}: E_2^{q+2,q}(n) \rightarrow E_2^{q+4,q-1}(n) = \{0\}$, it follows that $\rank d^{q+2,q} = 0$ for all $q \geq 0$ and $n \geq 1.$ For $q \geq 1$ and $n \geq 2q+3$, we have $$B^{q+3,q}(n) = \{ABT\1\A^j\B^{q-1-j}, AT\A^k\B^{q-k}, BT\A^k\B^{q-k} : 0 \leq j \leq q-1, 0 \leq k \leq q\},$$ where $\ell(ABT\1\A^j\B^{q-1-j}) = 2q+3$ and $\ell(AT\A^k\B^{q-k}) = \ell(BT\A^k\B^{q-k}) = 2q+2$.
The following table records the values of $\dim E_2^{q+3,q}(n)$ for $q \geq 1.$ \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{q+3,q}(n)$\\ \hline $n \geq 2q+3$ & $3q+2$\\ $n = 2q+2$ & $2q+2$\\ $n \leq 2q+1$ & $0$ \end{tabular} \end{center} Since $d^{q+3,q}: E_2^{q+3,q}(n) \rightarrow E_2^{q+5,q-1}(n) = \{0\},$ we see that $\rank d^{q+3,q} = 0$ for all $q \geq 0$ and $n \geq 1.$ For $q \geq 1$ and $n \geq 2q+3$, we have $$B^{q+4,q}(n) = \{ABT\A^k\B^{q-k} : 0\leq k \leq q\},$$ where $\ell(ABT\A^k\B^{q-k}) = 2q+3$. The following table records the values of $\dim E_2^{q+4,q}(n)$ for $q \geq 1$. \begin{center} \begin{tabular}{c|c} range of $n$ & $\dim E_2^{q+4,q}(n)$ \\ \hline $n \geq 2q+3$ & $q+1$\\ $n \leq 2q+2$ & $0$ \end{tabular} \end{center} Since $d^{q+4,q}: E_2^{q+4,q}(n) \rightarrow E_2^{q+6,q-1}(n) = \{0\}$, it follows that $\rank d^{q+4,q} = 0$ for all $q \geq 0$ and $n \geq 1.$ We have yet to consider the $q=0$ case. For $q = 0$ and $n \geq 3$, we have \begin{eqnarray*} B^{0,0}(n) & = & \{1\},\\ B^{1,0}(n) & = & \{A,B\},\\ B^{2,0}(n) & = & \{AB, T\},\\ B^{3,0}(n) & = & \{AT, BT\},\quad \mbox{and}\\ B^{4,0}(n) & = & \{ABT\}, \end{eqnarray*} where $\ell(1) = \ell(A) = \ell(B) = \ell(T) = 1$, $\ell(AB) = \ell(AT) = \ell(BT) = 2$, and $\ell(ABT) = 3.$ From this we obtain the following table which records the values of $\dim E_2^{p,0}(n)$ for $0 \leq p \leq 4.$ \begin{center} \begin{tabular}{c|c|c|c|c|c} range of $n$ & $\dim E_2^{0,0}(n)$ & $\dim E_2^{1,0}(n)$ & $\dim E_2^{2,0}(n)$ & $\dim E_2^{3,0}(n)$ & $\dim E_2^{4,0}(n)$\\ \hline $n \geq 3$ & $1$ & $2$ & $2$ & $2$ & $1$\\ $n = 2$ & $1$ & $2$ & $2$ & $2$ & $0$\\ $n = 1$ & $1$ & $2$ & $1$ & $0$ & $0$ \end{tabular} \end{center} Since $E_2^{p,-1}(n) = \{0\}$, we have that $\rank d^{p,0} = 0$ for all $p \geq 0$ and $n \geq 1.$ Now that we know $\dim E_2^{p,q}(n)$ and $\rank d^{p,q}$, we can compute $\dim E_\infty^{p,q}(n)$ for all $q \geq 0$ and $q-1 \leq p \leq q+4.$ Recall that $$\dim E_\infty^{p,q}(n) =
\dim E_2^{p,q}(n) - \rank d^{p,q} - \rank d^{p-2, q+1}.$$ Since $\dim E_2^{q-1,q}(n) = \rank d^{q-1,q}$, we have that $\dim E_\infty^{q-1,q}(n) = 0$ for all $q \geq 0$ and $n \geq 1.$ Additionally, since $\dim E_2^{q+4,q}(n) = \rank d^{q+2, q+1}$, it follows that $\dim E_\infty^{q+4,q}(n) = 0$ for all $q \geq 0$ and $n \geq 1.$ The following tables record the values of $\dim E_\infty^{p,q}(n)$ for $q \geq 1$ and $q \leq p \leq q+3$ and for $q = 0$ and $0 \leq p \leq 4$, respectively. \begin{center} \begin{tabular}{c|c|c|c|c} range of $n$ & $\dim E_\infty^{q,q}(n)$ & $\dim E_\infty^{q+1,q}(n)$ & $\dim E_\infty^{q+2,q}(n)$ & $\dim E_\infty^{q+3,q}(n)$ \\ \hline $n \geq 2q+2$ & $q+1$ & $3q+2$ & $3q+1$ & $q$\\ $n = 2q+1$ & $q+1$ & $2q+2$ & $q+1$ & $0$\\ $n \leq 2q$ & $0$ & $0$ & $0$ & $0$ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|c|c|c|c|c} range of $n$ & $\dim E_\infty^{0,0}(n)$ & $\dim E_\infty^{1,0}(n)$ & $\dim E_\infty^{2,0}(n)$ & $\dim E_\infty^{3,0}(n)$ & $\dim E_\infty^{4,0}(n)$ \\ \hline $n \geq 1$ & $1$ & $2$ & $1$ & $0$ & $0$ \end{tabular} \end{center} Recall that $$b_i(n) = \sum_{p+q = i} \dim E_\infty^{p,q}(n).$$ Thus for $i$ even, $$b_i(n) = \dim E_\infty^{\frac{i}{2},\frac{i}{2}}(n) + \dim E_\infty^{\frac{i+2}{2}, \frac{i-2}{2}}(n),$$ and for $i$ odd, $$b_i(n) = \dim E_\infty^{\frac{i+1}{2}, \frac{i-1}{2}}(n) + \dim E_\infty^{\frac{i+3}{2}, \frac{i-3}{2}}(n).$$ We have now proven the following proposition, as well as Proposition \ref{stableg1betti}.
\begin{proposition}[Cohomology of the elliptic braid group] \label{P:g1SV} The stable Betti numbers of $\Conf^n \Sigma_1$ are $$b_0 =1, b_1 = 2, b_2 = 3, b_3 = 5, b_4 = 7, \hdots, b_i = 2i-1,\hdots.$$ Furthermore, the entire cohomology is given by $$ H^{i}(\Conf^n \Sigma_1; \Q)= \begin{cases} \Q^{2i-1} & \mbox{if } n \geq i+1, i \geq 2\\ \Q^{\frac{3i-4}{2}} & \mbox{if } n = i, i \mbox{ even}, i \geq 2\\ \Q^{\frac{3i-1}{2}} & \mbox{if } n = i, i \mbox{ odd}, i \geq 3\\ \Q^{\frac{i}{2}} & \mbox{if } n = i-1, i \mbox{ even}, i \geq 4\\ \Q^{\frac{i-3}{2}} & \mbox{if } n = i-1, i \mbox{ odd}, i \geq 3\\ \Q^{2} & \mbox{if } i = 1,\\ \Q & \mbox{if } i = 0,\\ 0 & \mbox{else} \end{cases}. $$ Since $\Conf^n \Sigma_1$ is a $K(\pi,1)$ for the elliptic braid group $B_n(\Sigma_1)$ \cite{Birman1969, Scott1970}, this gives $$ H^{i}(B_n(\Sigma_1); \Q)= H^i(\Conf^n\Sigma_1;\Q). $$ \end{proposition} Prior to our work, Napolitano \cite[Table 2]{Napolitano2003} had computed $H_i(\Conf^n\Sigma_1;\Z)$ for $1 \leq n \leq 6$ and $0 \leq i \leq 7$, and Kallel \cite[Corollary 1.7]{Kallel2008} computed $H_1(\Conf^n\Sigma_1; \Z)$ for $n \geq 3.$ Concurrently with our work, Scheissl \cite{Scheissl2016} independently computed the stable and unstable Betti numbers of $\Conf^n\Sigma_1$ also using the Cohen--Taylor--Totaro--Kriz spectral sequence, and Drummond-Cole and Knudsen \cite[Corollaries 4.5--4.7]{Drummond-ColeKnudsen2016} not only computed the stable and unstable Betti numbers of $\Conf^n\Sigma_1$, but did so for all surfaces of finite type via a method derived from factorization homology. For other related computations, see \cite{BrownWhite1981, BodigheimerCohen1988, BodigheimerCohenTaylor1989, Knudsen2015, Azam2015}.
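The case analysis in the proposition above is easy to mechanize. The following Python function is our own transcription of the case list (for illustration only; it is not part of the original computation) and evaluates $\dim H^i(\Conf^n\Sigma_1;\Q)$:

```python
def betti_conf_torus(i, n):
    """dim H^i(Conf^n Sigma_1; Q), following the case list in the proposition."""
    if i == 0:
        return 1
    if i == 1:
        return 2
    if n >= i + 1:                      # stable range: b_i = 2i - 1 for i >= 2
        return 2 * i - 1
    if n == i:                          # unstable, n = i
        return (3 * i - 4) // 2 if i % 2 == 0 else (3 * i - 1) // 2
    if n == i - 1 and i >= 3:           # unstable, n = i - 1
        return i // 2 if i % 2 == 0 else (i - 3) // 2
    return 0
```

For instance, in the stable range the function returns the Betti numbers $1, 2, 3, 5, 7, 9, \ldots$ listed in the proposition.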
https://arxiv.org/abs/2209.07698
Hitting a prime in 2.43 dice rolls (on average)
What is the number of rolls of fair 6-sided dice until the first time the total sum of all rolls is a prime? We compute the expectation and the variance of this random variable up to an additive error of less than 10^{-4}. This is a solution to a puzzle suggested by DasGupta (2017) in the Bulletin of the Institute of Mathematical Statistics, where the published solution is incomplete. The proof is simple, combining a basic dynamic programming algorithm with a quick Matlab computation and basic facts about the distribution of primes.
\section{Description of the problem} The following puzzle appears in the Bulletin of the Institute of Mathematical Statistics \citep{D2017}: Let $X_1,X_2,\ldots$ be independent uniform random variables on the integers $1,2,\ldots,6$, and define $S_n=X_1+\ldots+X_n$ for $n=1,2,\ldots$. Denote by $\tau$ the discrete time at which $S_n$ first hits the set of prime numbers $P$: \begin{equation*} \label{eq:aaa} \tau=\min\left\{n\geq 1: S_n \in P\right\}. \end{equation*} The contributing Editor \citep{D2017} provides a lower bound of $2.3479$ for the expectation $E(\tau)$ and mentions the following heuristic approximation for it: $E\left(\tau\right)\approx 7.6.$ He also adds that it is not known whether or not $\tau$ has finite variance. In this note we compute the value of $E(\tau)$ up to an additive error of less than $10^{-7}$, showing that it is much closer to the lower bound mentioned above than to $7.6$. We also show that the variance is finite and compute its value up to an additive error of less than $10^{-4}$. It will be clear from the discussion that it is not difficult to get better approximations of both quantities by increasing the amount of computation performed. The proof is very simple, and the note is written mainly to illustrate the power of combining a simple dynamic-programming algorithm with a quick computer-aided computation and basic facts about the distribution of primes in the study of problems of this type. Before describing the rigorous argument, we present, in Table 1 below, the outcomes of Monte-Carlo simulations of the process.
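Simulations of this kind are easy to reproduce. The following Python script is a minimal sketch of such a Monte-Carlo run (our own illustration; the rows of Table 1 come from the authors' runs, not from this script):

```python
import random

def is_prime(n):
    """Trial division; the running sums encountered here stay small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def sample_tau(rng):
    """One realization of tau: roll a fair die until the running sum is prime."""
    s, rolls = 0, 0
    while True:
        s += rng.randint(1, 6)
        rolls += 1
        if is_prime(s):
            return rolls

def monte_carlo(reps, seed=0):
    """Return the sample mean, sample variance, and maximum of tau over reps runs."""
    rng = random.Random(seed)
    xs = [sample_tau(rng) for _ in range(reps)]
    mean = sum(xs) / reps
    var = sum((x - mean) ** 2 for x in xs) / (reps - 1)
    return mean, var, max(xs)
```

Already with $10^5$ repetitions the estimates land close to the values in Table 1.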
\begin{table}[H] \label{eq:t} \caption{Monte-Carlo simulations} \begin{center} \begin{tabular}{llll} number of repetitions & $\mathrm{mean}(\tau)$ & $\mathrm{variance}(\tau)$ & $\max(\tau)$\\ \hline $10^6$ & 2.4316 & 6.2735 & 49\\ $2\cdot 10^6$ & 2.4274 & 6.2572 & 67\\ $3\cdot 10^6$ & 2.4305 & 6.2372 & 70\\ $5\cdot 10^6$ & 2.4287 & 6.2418 & 64\\ $10^7$ & 2.4286 & 6.2463 & 65\\ \end{tabular} \end{center} \end{table} We proceed with a rigorous computation of $E(\tau)$ and $Var(\tau)$ up to an additive error smaller than $1/10,000$. Not surprisingly, this computation shows that the simulations supply accurate values. Note first that \begin{equation} \label{eq:E} E\left(\tau\right)=\sum_{k\geq 1}P(\tau\geq k),\,\,\,\,\,\,\, E\left(\tau^2\right)=\sum_{k\geq 1}(2k-1)P(\tau\geq k). \end{equation} We next apply dynamic programming to compute $p(k)\equiv P(\tau\geq k)$ exactly for every $k$ up to $1000$, and then provide an upper bound for the sums of the remaining terms. \smallskip \noindent {\bf Dynamic-Programming Algorithm} \smallskip For each integer $k\geq 1$ and each non-prime $n$ satisfying $k\leq n\leq 6k$, let $p(k,n)$ denote the probability that $X_1+\ldots+X_k=n$ and that for every $i<k$, $X_1+\ldots+X_i$ is non-prime. Fix a parameter $K$ (in our computation we later take $K=1000$). By the definition of $p(k,n)$ and the law of total probability we have the following dynamic programming algorithm for computing $p(k,n)$ precisely for all $1 \leq k \leq K$ and $k \leq n \leq 6k$. \begin{enumerate} \item[1.] $p(1,1)=p(1,4)=p(1,6)=1/6.$ \item[2.] For $k=2,\ldots,K$ and any non-prime $n$ between $k$ and $6k$ \begin{equation*} p(k,n)=\frac{1}{6}\sum_{i}p(k-1,n-i), \end{equation*} where the sum ranges over all $i$ between $1$ and $6$ so that $n-i$ is non-prime.
\end{enumerate} Denote by $E_{K}$ and $Var_{K}$ the estimators (lower bounds) of $E(\tau)$ and $Var(\tau)$ based on the values of $p(k)$ for the first $K$ values, which we obtain as follows: \begin{align*} & E\left(\tau\right)=E_{K}+R_{K},\,\,\,\, E\left(\tau^2\right)=E^{(2)}_{K}+R^{(2)}_{K}, \end{align*} where $E_K=\sum_{k=1}^{K}p(k)$, $R_K=\sum_{k\geq K+1}p(k)$, $E^{(2)}_K=\sum_{k=1}^{K}(2k-1)p(k)$, $R^{(2)}_K=\sum_{k\geq K+1}(2k-1)p(k)$. We also have \begin{align*} & Var{(\tau)}=Var_K+RV_{K}, \end{align*} where $Var_K=E^{(2)}_{K}-(E_K)^2$, $RV_{K}=R^{(2)}_{K}-2E_K R_K-(R_K)^2$. Applying the dynamic-programming algorithm in Matlab, with an execution time of less than $5$ seconds, we obtain $$E_{1000}=2.428497913693504,\,\,\,\, Var_{1000}= 6.242778668279075.$$ \noindent {\bf Bounding the remainders} \smallskip It remains to show that each of the sums of the remaining terms is bounded by $10^{-4}$. To this end, we prove the following simple result by induction on $k$. \begin{pro} For every $k$ and every non-prime $n$, \begin{equation} \label{eq:cl1} p(k,n)<\frac{1}{3}\left(\frac{5}{6}\right)^{\pi(n)}, \end{equation} where $\pi(n)$ is the number of primes smaller than $n$. \end{pro} \begin{proof} Note first that \eqref{eq:cl1} holds for $k=1$, as $1/6=p(1,6)<(1/3) (5/6)^3$, $1/6=p(1,4)<(1/3)(5/6)^2$ and $1/6=p(1,1)<(1/3)(5/6)^{0}$, with room to spare. Assuming the inequality holds for $k-1$ (and every relevant $n$) we prove it for $k$. Suppose there are $q$ primes in the set $\left\{n-6,\ldots,n-1\right\}$, then $\pi(n-i)\geq \pi(n)-q$ for all non-prime $n-i$ in this set. Thus, by the induction hypothesis \begin{align*} & p(k,n) \leq \frac{1}{6}(6-q)\frac{1}{3} \left(\frac{5}{6}\right)^{\pi(n)-q} \leq \left(\frac{5}{6}\right)^q \frac{1}{3} \left(\frac{5}{6}\right)^{\pi(n)-q} =\frac{1}{3} \left(\frac{5}{6}\right)^{\pi(n)}. 
\end{align*} \end{proof} By the prime number theorem (cf., e.g., \cite{HW2008}), for every $n>1000$ $\pi(n) > 0.9 \frac{n}{\ln n}$ (again, with room to spare). Therefore, by the above estimate we get: \begin{cor} \label{eq:cor1} For every $k>1000$ and every non-prime $n (n\geq k)$, \begin{equation} \label{eq:cl2} p(k,n)<\frac{1}{3}\left(\frac{5}{6}\right)^{0.9 \frac{n}{\ln n}}. \end{equation} \end{cor} Using the above, we can now bound the sums of the remaining terms. \begin{align*} & R_{1000}\equiv\sum_{k>1000}P(\tau\geq k) =\sum_{k>1000} \sum_{\left\{n:\,\, k \leq n \leq 6k\right\}} p(k,n) \nonumber\\ & < \sum_{k >1000} \sum_{\left\{n:\,\, k \leq n \leq 6k\right\}} \frac{1}{3} \left(\frac{5}{6}\right)^{0.9 n/ \ln n} =\sum_{n\geq 1001}\sum_{k=\max(1001,n/6)}^{n}\frac{1}{3} \left(\frac{5}{6}\right)^{0.9 n/ \ln n}\nonumber \\ & < \sum_{n\geq 1000}(n-1000)\frac{1}{3} \left(\frac{5}{6}\right)^{0.9 n/ \ln n}, \end{align*} where the first inequality is obtained from Corollary \ref{eq:cor1}. Define $f(n)=(n-1000)\frac{1}{3} \left(\frac{5}{6}\right)^{0.9 n/ \ln n},$ where $n$ is an integer $\geq 1000$. It is easy to check that for $n \geq 1000$ the function $f(n)$ has a unique maximum at $n=1050$. (To see it, it suffices to compute $f(n)$ precisely for all $1000 \leq n \leq 1100$ and observe that for $n>1100$ $f(n)$ is far smaller than $f(1050)$.) It is also easy to check that for any $n\geq1050$ $f(n+13\ln n)/f(n)<1/2$. Therefore, \begin{align} \label{eq:RE} & R_{1000}<\sum_{n\geq 1000}(n-1000)\frac{1}{3} \left(\frac{5}{6}\right)^{0.9 n/ \ln n}=\sum_{n\geq 1000}f(n) \nonumber\\ &< 50f(1050)+2(13\ln 1050) f(1050)<7\cdot10^{-8}. 
\end{align} Similarly $$ R^{(2)}_{1000}\equiv\sum_{k>1000}(2k-1)P(\tau\geq k) =\sum_{k>1000} \sum_{\left\{n:\,\, k \leq n \leq 6k\right\}} (2k-1)p(k,n) $$ $$ <\sum_{k>1000} \sum_{\left\{n:\,\, k \leq n \leq 6k\right\}} (2k-1)\frac{1}{3}\left(\frac{5}{6}\right)^{0.9 n/ \ln n} \leq \sum_{n\geq 1001}\frac{1}{3} \left(\frac{5}{6}\right)^{0.9 n/ \ln n} \sum_{k=\max(1001,n/6)}^{n}(2k-1) $$ $$ \leq \sum_{n\geq 1000}\frac{1}{3} \left(\frac{5}{6}\right)^{0.9 n/ \ln n} (n^2-1000^2), $$ where the first inequality is obtained from Corollary \ref{eq:cor1}. Define $g(n)=(n^2-1000^2)\frac{1}{3} \left(\frac{5}{6}\right)^{0.9 n/ \ln n},$ where $n$ is an integer $\geq 1000$. For $n \geq 1000$ the function $g(n)$ has a unique maximum at $n=1051$, and for any $n\geq 1051$, $g(n+13\ln n)/g(n)<1/2$. Therefore, \begin{align} \nonumber & R^{(2)}_{1000}<\sum_{n\geq 1000}(n^2-1000^2)\frac{1}{3} \left(\frac{5}{6}\right)^{0.9 n/ \ln n}=\sum_{n\geq 1000}g(n)\\ & < 51g(1051)+2(13\ln 1051)g(1051)< 3.1\cdot 10^{-5}. \end{align} Therefore, the error $RV_{1000}$ in the variance estimation based on the first $1000$ values is below $1/10,000$. \iffalse \section*{Acknowledgements} The research of YM was supported by grant no. 2020063 from the United States--Israel Binational Science Foundation (BSF), Jerusalem, Israel. \fi
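The dynamic program described above can also be sketched in a few lines of Python (a float-based reimplementation for illustration; the values $E_{1000}$ and $Var_{1000}$ quoted earlier come from the authors' Matlab code):

```python
def truncated_moments(K=400):
    """E_K and Var_K: the sums for E(tau) and Var(tau) truncated after K terms.

    The dict p maps each non-prime value n to p(k, n) = P(S_k = n and
    S_1, ..., S_{k-1} are all non-prime), and is advanced one level of k
    at a time.
    """
    limit = 6 * K
    prime = [False, False] + [True] * (limit - 1)   # sieve of Eratosthenes
    for d in range(2, int(limit ** 0.5) + 1):
        if prime[d]:
            for m in range(d * d, limit + 1, d):
                prime[m] = False

    p = {n: 1.0 / 6.0 for n in (1, 4, 6)}           # k = 1: the non-prime faces
    E, E2 = 1.0, 1.0                                # k = 1 terms: P(tau >= 1) = 1
    for k in range(2, K + 1):
        surv = sum(p.values())                      # P(tau >= k) = sum_n p(k-1, n)
        E += surv
        E2 += (2 * k - 1) * surv
        nxt = {}
        for n, pr in p.items():
            for i in range(1, 7):
                if not prime[n + i]:                # walks hitting a prime are absorbed
                    nxt[n + i] = nxt.get(n + i, 0.0) + pr / 6.0
        p = nxt
    return E, E2 - E * E
```

Since the survival probabilities $p(k)$ decay extremely fast, already `truncated_moments(400)` agrees empirically with the Matlab values of $E_{1000}$ and $Var_{1000}$ above to well within the stated accuracy.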
https://arxiv.org/abs/math/0510367
A Weierstrass-type theorem for homogeneous polynomials
By the celebrated Weierstrass Theorem the set of algebraic polynomials is dense in the space of continuous functions on a compact set in R^d. In this paper we study the following question: does the density hold if we approximate only by homogeneous polynomials? Since the set of homogeneous polynomials is nonlinear this leads to a nontrivial problem. It is easy to see that: 1) density may hold only on star-like origin-symmetric surfaces; 2) at least 2 homogeneous polynomials are needed for approximation. The most interesting special case of a star-like surface is a convex surface. It has been conjectured by the second author that functions continuous on origin-symmetric convex surfaces in R^d can be approximated by a pair of homogeneous polynomials. This conjecture is not resolved yet but we make substantial progress towards its positive settlement. In particular, it is shown in the present paper that the above conjecture holds for 1) d=2, 2) convex surfaces in R^d with C^(1+epsilon) boundary.
\section{Introduction} The celebrated theorem of Weierstrass on the density of real algebraic polynomials in the space of real continuous functions on an interval $[a,b]$ is one of the main results in analysis. Its generalization for real multivariate polynomials was given by Picard; subsequently, the Stone-Weierstrass theorem led to the extension of these results for subalgebras in $C(K)$. In this paper we shall consider the question of density of \emph{homogeneous polynomials}. Homogeneous polynomials are a standard tool appearing in many areas of analysis, so the question of their density in the space of continuous functions is a natural problem. Clearly, the set of homogeneous polynomials is substantially smaller than the set of all algebraic polynomials. More importantly, this set is nonlinear, so its density cannot be handled via the Stone-Weierstrass theorem. Furthermore, due to the special structure of homogeneous polynomials some restrictions should be made on the sets where we want to approximate (they have to be star-like), and at least 2 polynomials are always needed for approximation (an even and an odd one). At the 5th International Conference on Functional Analysis and Approximation Theory (Maratea, Italy, 2004) the second author proposed the following conjecture. \begin{conjecture}\label{k0} Let $K\subset \mathbb R^{d}$ be a convex body which is centrally symmetric to the origin. Then for any function $f$ continuous on the boundary $ Bd(K) $ of $K$ and any $\epsilon >0$ there exist two homogeneous polynomials $h$ and $g$ such that $|f-h-g|\leq \epsilon$ on $ Bd(K) $. \end{conjecture} From now on we agree on the terminology that by ``centrally symmetric'' we mean ``centrally symmetric to the origin''. Subsequently in \cite{ksz} the authors verified the above Conjecture for \emph{crosspolytopes} in $\mathbb R^{d}$ and arbitrary convex polygons in $\mathbb R^{2}$.
In this paper we shall verify the Conjecture for those convex bodies in $\mathbb R^{d}$ whose boundary $ Bd(K) $ is $C^{1+\epsilon}$ for some $0<\epsilon\leq 1$ (Theorem \ref{k1}). Moreover, the Conjecture will be verified in its full generality for $d=2$ (Theorem \ref{k2}). It should be noted that parallel to our investigations P. Varj\'u \cite{varju} also proved the Conjecture for $d=2$. In addition, he gives in \cite{varju} an affirmative answer to the Conjecture for arbitrary centrally symmetric polytopes in $\mathbb R^{d}$, and for those convex bodies in $\mathbb R^{d}$ whose boundary is $C^{2}$ and has positive curvature. We also would like to point out that our method of verifying the Conjecture for $d=2$ is based on the potential theory and is different from the approach taken in \cite{varju} (which is also based on the potential theory). Likewise our method of treating $C^{1+\epsilon}$ convex bodies is different from the approach used in \cite{varju} for $C^{2}$ convex bodies with positive curvature. \section{Main Results} Let $K$ be a centrally symmetric convex body in $\mathbb R^{d}$. We may assume that $2\leq d$ and $\dim(K)=d$. The boundary of $K$ is $ Bd(K) $ which is given by the representation $$ Bd(K) :=\{{\bf u} r({\bf u}): {\bf u}\in S^{d-1}\}$$ where $r$ is a positive even real-valued function on $S^{d-1}$. Here $S^{d-1}$ stands for the unit sphere in $\mathbb R^{d}$. We shall say that $K$ is $C^{1+\epsilon}$, written $K \in C^{1+\epsilon}$, if the first partial derivatives of $r$ satisfy a Lip$\epsilon$ property on the unit sphere, $\epsilon>0$. Furthermore denote by $$H^{d}_{n}:= \Big\{\displaystyle\sum_{k_{1}+...+k_{d}=n}c_{\textbf{k}}\textbf{x}^{\textbf{k}}: c_{\textbf{k}}\in \mathbb R\Big\}$$ the space of real homogeneous polynomials of degree $n$ in $\mathbb R^{d}$. Our first main result is the following. \begin{theorem}\label{k1} Let $K\in C^{1+\epsilon}$ be a centrally symmetric convex body in $\mathbb R^{d}$, where $0<\epsilon \leq 1$. 
Then for every $f\in C( Bd(K) )$ there exist $h_{n}\in H^{d}_{n}+H^{d}_{n-1}, n\in \mathbb N$ such that $ h_{n} \rightarrow f$ uniformly on $ Bd(K) $ as $n\rightarrow \infty.$ \end{theorem} Thus Theorem \ref{k1} gives an affirmative answer to the Conjecture under the additional condition of $C^{1+\epsilon}$ smoothness of the convex surface. For $d=2$ we can verify the Conjecture in its full generality. Thus we shall prove the following. \begin{theorem}\label{k2} Let $K$ be a centrally symmetric convex body in $\mathbb R^{2}$. Then for every $f\in C( Bd(K) )$ there exist $h_{n}\in H_{n}^{2}+H_{n-1}^{2},n\in \mathbb N$ such that $h_{n}\rightarrow f$ uniformly on $ Bd(K) $ as $n\rightarrow \infty.$ \end{theorem} We shall see that Theorem \ref{k2} follows from \begin{theorem}\label{k10} Let $1/W(x)$ be a positive convex function on $\rr$ such that $|x|/W(-1/x)$ is also positive and convex. Let $g(x)$ be a continuous function which has the same limits at $-\infty$ and at $+\infty$. Then we can approximate $g(x)$ uniformly on $\rr$ by weighted polynomials $W(x)^n p_n(x)$, $n=0,2,4,...$, $\deg p_n \le n$. \end{theorem} \section{Proof of Theorem \ref{k1}} The proof of Theorem \ref{k1} will be based on several lemmas. The main auxiliary result is the next lemma which provides an estimate for the approximation of unity by even homogeneous polynomials. In what follows $||...||_{D}$ stands for the uniform norm on $D$. Our main lemma to prove Theorem \ref{k1} is the following. \begin{lemma}\label{k3} Let $\tau \in (0,1)$. Under the conditions of Theorem \ref{k1} there exist $h_{2n}\in H^{d}_{2n}, n\in \mathbb N$, such that $$||1-h_{2n}||_{ Bd(K) } = o(n^{-\tau\epsilon}).$$ \end{lemma} The next lemma provides a partition of unity which we shall need below. In what follows a cube in $\mathbb R^{d}$ is called regular if all its edges are parallel to the coordinate axes. We denote the set $\{0,1,2,...\}^d$ by $\mathbb Z^{d}_{+}$.
\begin{lemma}\label{k4} Given $0<h\leq 1$ there exist non-negative even functions $g_{\textbf{k}}\in C^{\infty}(\mathbb R^{d})$ such that their support consists of $2^{d}$ regular cubes with edge $h$, at most $2^{d}$ of supports of $g_{\textbf{k}}$'s have nonempty intersection, and $$\displaystyle\sum_{\textbf{k}\in\mathbb Z^{d}_{+}}g_{\bf k}(\textbf{x})=1, \quad \textbf{x}\in\mathbb R^{d}, \eqno(1)$$ $$|\partial^{m}g_{\textbf{k}}(\textbf{x})/\partial x_{j}^{m}|\leq c/h^{m},\quad {\bf x} \in \mathbb R^{d}, m\in \mathbb Z_{+}^{1}, 1\leq j\leq d, \eqno(2)$$ where $c>0$ depends only on $m \in \mathbb Z^{1}_{+}$ and $d$. \end{lemma} For the centrally symmetric convex body $K$ let $$|\textbf{x}|_{K}:= \inf\{a>0: \textbf{x}/a \in K \}$$ be its Minkowski functional and set $$\delta_{K}:= \sup\{|\textbf{x}|/|\textbf{x}|_{K}: \textbf{x}\in \mathbb R^{d}\} = \max\{|{\bf x} | : \ {\bf x} \in Bd(K) \}. $$ Moreover for $a\in Bd(K) $ denote by $L_{a}$ a supporting hyperplane at $a$. \begin{lemma}\label{k5} Let $\textbf{a}\in Bd(K) , h_{n}\in H^{d}_{2n}$ be such that for any $\textbf{x}\in L_{\textbf{a}}, |\textbf{x}-\textbf{a}|\leq 4\delta_{K}$ we have $|h_{n}(\textbf{x})|\leq 1$. Then whenever $\textbf{x}\in L_{\textbf{a}}$ satisfies $|\textbf{x}-\textbf{a}|>4\delta_{K}$ and $\textbf{x}/t \in K$ we have $$|h_{n}(\textbf{x}/t)|\leq (2/3)^{2n}. \eqno(3)$$ \end{lemma} \begin{lemma}\label{k6} Consider the functions $g_{\textbf{k}}$ from Lemma \ref{k4}. Then for at most $8^{d}/2h^{d}$ of them their support has nonempty intersection with $S^{d-1}$. \end{lemma} We shall verify first the technical Lemmas \ref{k4}-\ref{k6}, then the proof of Lemma \ref{k3} will be given. Finally it will be shown that Theorem \ref{k1} follows easily from Lemma \ref{k3}. \textbf{Proof of Lemma \ref{k4}.} The main step of the proof consists of verifying the lemma for $d=1$. 
Let $g\in C^{\infty}(\mathbb R)$ be an odd function on $\mathbb R$ such that $g=1$ for $x<-1/2$ and monotone decreasing from 1 to 0 on $(-1/2,0)$. Further, let $g^{*}(x)$ be an even function on $\mathbb R$ such that $g^{*}(x)$ equals 1 on [0,1], $g(x-3/2)/4+3/4$ on [1,2], and $g(x-5/2)/4+1/4$ on [2,3]. Then it is easy to see that $g^{*}\in C^{\infty}(\mathbb R)$, it equals 1 on $[-1,1]$, 0 for $|x|>3$ and is monotone decreasing on [1,3]. Moreover $$g^{*}(x) + g^{*}(x-4) = 1,\quad x\in [-1,5]. \eqno(4)$$ Set now $$g_{k}(x):= g^{*}(x-4k)+g^{*}(x+4k),\quad k\in \mathbb Z^{1}_{+}.$$ Then $g_{k}$'s are even functions which by (4) satisfy relation $$\displaystyle\sum_{k=0}^{\infty}g_{k}(x) = 1, \quad x\in \mathbb R.$$ In addition, the support of $g_{k}$ equals $\pm [-3+4k,3+4k]$ and at most 2 of $g_{k}$'s can be nonzero at any given $x\in \mathbb R$. Finally, for a fixed $0<h\leq1$, $\textbf{x}\in \mathbb R^{d}$ and $\textbf{k} = (k_1,...,k_d) \in \mathbb Z^{d}_{+}$ set $$g_{\textbf{k}}(\textbf{x}):=\displaystyle\prod_{j=1}^{d}g_{k_{j}}(6x_{j}/h).$$ It is easy to see that these functions give the needed partition of unity. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \textbf{Proof of Lemma \ref{k5}.} Clearly the conditions of the lemma yield that whenever $|{\bf x}-{\bf a} |>4\delta_{K}$ $$1/|\textbf{x}|_{K}\leq \delta_{K}/|\textbf{x}|\leq\delta_{K}/(|\textbf{x}-\textbf{a}|-|\textbf{a}|)\leq\delta_{K}/(|\textbf{x}-\textbf{a}|-\delta_{K})\leq4\delta_{K}/3|\textbf{x}-\textbf{a}|. \eqno(5)$$ It is well known that for any univariate polynomial $p$ of degree at most $n$ such that $|p|\leq 1$ in $[-a,a]$ it holds that $|p(x)|\leq (2|x|/a)^{n}$ whenever $|x|>a$. Therefore using (5) and the assumption imposed on $h_{n}$ we have $$|h_{n}(\textbf{x})|\leq (2|\textbf{x}-\textbf{a}|/4\delta_{K})^{2n}\leq (2|\textbf{x}|_{K}/3)^{2n}. \eqno (6)$$ Now it remains to note that from $\textbf{x}/t \in K$ it follows that $|\textbf{x}|_{K}\leq |t|$, and thus we obtain (3) from (6).
This completes the proof of the lemma. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \textbf{Proof of Lemma \ref{k6}.} Recall that the support of $g_{k}$'s consists of a pair of regular cubes with edge $h\leq 1$, so if $A_{k}:=\mathrm{supp}\, g_{k}$ has nonempty intersection with the unit sphere $S^{d-1}$ then $A_{k}\subset D$, where $D$ stands for the regular cube centered at 0 with edge 4. Let now $f_{k}$ be the characteristic function of $A_{k}$. Since at most $2^{d}$ of $A_{k}$'s have nonempty intersection it follows that $$\sum f_{k}(\textbf{x})\leq 2^{d},\quad \textbf{x}\in \mathbb R^{d}. \eqno (7)$$ Moreover, $m(A_{k})=2h^{d}$, where $m(\cdot)$ stands for the Lebesgue measure in $\mathbb R^{d}$. Using (7) we have that $$\sum \int_{D}f_{k}dm\leq 2^{d}m(D)=8^{d}. \eqno (8)$$ Since $$\int _{D}f_{k}dm=m(A_{k})=2h^{d}$$ whenever $A_{k}\subset D$ the statement of the lemma easily follows from (8). \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \textbf{Proof of Lemma \ref{k3}.} Denote by $g_{k}, 1\leq k\leq N$ those functions from Lemma \ref{k4} whose support $A_{k}$ has a nonempty intersection with $S^{d-1}$. Then by Lemma \ref{k6} $$N\leq 8^{d}/2h^{d}. \eqno(9)$$ Moreover, by (1) $$ \displaystyle\sum_{k=1}^{N}g_{k}=1 \ \hbox{ on } \ S^{d-1}.
\eqno(10)$$ Set $$B_{k}:=A_{k}\cap S^{d-1},\quad C_{k}:=\{\textbf{u}r(\textbf{u}): \textbf{u}\in B_{k}\}\subset Bd(K) , \quad 1\leq k\leq N.$$ For each $1\leq k\leq N$ choose a point $\textbf{u}_{k}\in B_{k}$ and set $\textbf{x}_{k}:=\textbf{u}_{k}r(\textbf{u}_{k})\in Bd(K) .$ Furthermore let $L_{k}$ be the supporting plane to $ Bd(K) $ at the point $\textbf{x}_{k}$ and set for $1\leq k\leq N, L_{k}^{*}:=L_{k}\cup (-L_{k})$ $$ D_{k}:=\{\textbf{x}\in L_{k}^{*}: \textbf{x}=t\textbf{u} \ \hbox{ for some } \ \textbf{u}\in B_{k}, t>0\}; $$ $$ f_{k}(\textbf{x}):=g_{k}(\textbf{u}), \quad \textbf{x}\in Bd(K) , \textbf{x}=\textbf{u}r(\textbf{u}),\quad \textbf{u}\in S^{d-1}$$ $$q_{k}(\textbf{x}):=g_{k}(\textbf{u}),\quad \textbf{x}\in L_{k}^{*}, \textbf{x}=t\textbf{u},\quad \textbf{u}\in S^{d-1},\quad t>0.$$ Clearly, $q_{k}\in C^{\infty}(L_{k}^{*} )$ is an even positive function which by property (2) can be extended to a regular centrally symmetric cube $I\supset K$ so that we have on $I$ $$|\partial^{m}q_{k}/\partial x_{j}^{m}|\leq c/h^{m},\quad 1\leq j\leq d,\quad 1\leq k\leq N. \eqno (11)$$ Here and in what follows we denote by $c$ (possibly distinct) positive constants depending only on $d, m$ and $K$. We can assume that $I$ is sufficiently large so that $$ I\supset G_{k}:=\{\textbf{x}\in L_{k}: |\textbf{x}-\textbf{x}_{k}|\leq 4\delta_{K}\}, \quad 1\leq k\leq N.$$ Then by the multivariate Jackson Theorem (see e.g. \cite{timan} ) applied to the even functions $q_{k}$ satisfying (11) for arbitrary $m\in \mathbb N$ (to be specified below), there exist even multivariate polynomials $p_{k}$ of total degree at most $2n$ such that $$||q_{k}-p_{k}||_{G_{k}^{*}}\leq c/(hn)^{m}\leq 1,\quad 1\leq k\leq N, \eqno(12)$$ where $G_{k}^{*}:=G_{k}\cup(-G_{k})$, $h:=n^{-\gamma}$ ($0<\gamma<1$ is specified below), and $n$ is sufficiently large. We claim now that without loss of generality it may be assumed that each $p_{k}$ is in $H_{2n}^{d}$. 
Indeed, since $G_{k}^{*}\subset L_{k}^{*}$ it follows that the homogeneous polynomial $h_{2}:=<\textbf{x},\textbf{w}>^{2}\in H_{2}^{d}$ is identically equal to 1 on $G_{k}^{*}$ (here $\textbf{w}$ is a properly normalized normal vector to $L_{k}$), so multiplying the even degree monomials of $p_{k}$ by even powers of $h_{2}$ we can replace $p_{k}$ by a homogeneous polynomial from $H_{2n}^{d}$ so that (12) holds. Thus we may assume that $p_{k}\in H_{2n}^{d}$ and relations (12) hold. In particular, (12) also yields that $$||p_{k}||_{G_{k}^{*}}\leq 2, \quad 1\leq k\leq N.\eqno(13)$$ Now consider an arbitrary $\textbf{x}\in Bd(K) \setminus C_{k}$. Then with some $t>1$ we have $t\textbf{x}\in L_{k}^{*}$ and $q_{k}(t\textbf{x})=0$. Hence if $t\textbf{x}\in G_{k}^{*}$ then by (12) it follows that $$|p_{k}(\textbf{x})|\leq |p_{k}(t\textbf{x})|\leq c/(hn)^{m}.$$ On the other hand if $t\textbf{x}\notin G_{k}^{*}$ then by (13) and Lemma \ref{k5} we obtain $$|p_{k}(\textbf{x})|\leq 2(2/3)^{2n}.$$ The last two estimates yield that for every $\textbf{x}\in Bd(K) \setminus C_{k}$ we have $$|p_{k}(\textbf{x})|\leq c((2/3)^{2n}+(hn)^{-m}), \quad 1\leq k\leq N.\eqno(14)$$ Now let us assume that ${\bf x} \in C_k$. Clearly, the $C^{1+\epsilon}$ property of $ Bd(K) $ yields that whenever $\textbf{x}\in Bd(K) , t\textbf{x}\in L_{k}^{*}, t>1$ we have for every $1\leq k\leq N$ $$(t-1)|\textbf{x}|=|\textbf{x}-t\textbf{x}|\leq c\min\{|\textbf{x}-\textbf{x}_{k}|,|\textbf{x}+\textbf{x}_{k}|\}^{1+\epsilon}. \eqno(15)$$ Obviously, for every $\textbf{u}\in B_{k}$ $$\min\{|\textbf{u}-\textbf{u}_{k}|,|\textbf{u}+\textbf{u}_{k}|\}\leq \sqrt{d}h.$$ This and (15) yields that for $\textbf{u}\in B_{k}, \textbf{x}=\textbf{u}r(\textbf{u})\in C_{k}, t\textbf{x}\in D_{k} (c>t>1)$ we have for $1<t<c, 0<h<c$ $$ t-1\leq ch^{1+\epsilon},\quad D_{k}\subset G_{k}^{*}, 0<h<h_{0}. 
\eqno(16)$$ Hence using (12), (13) and (16) we obtain for $0<h^{1+\epsilon} \le cn^{-1} , 1\leq k\leq N$ $$|f_{k}(\textbf{x})-p_{k}(\textbf{x})|=|q_{k}(t\textbf{x})-p_{k}(\textbf{x})|\leq |q_{k}-p_{k}|(t\textbf{x})+|p_{k}(t\textbf{x})-p_{k}(\textbf{x})|\leq$$ $$c/(hn)^{m}+|p_{k}(\textbf{x})|(t^{2n}-1)\leq c( (hn)^{-m}+nh^{1+\epsilon}),\quad \textbf{x}\in C_{k}. \eqno(17)$$ Denote for $\textbf{x}\in Bd(K) $ $$R(\textbf{x}):= \{k: \textbf{x}\in C_{k}\},\quad \#R(\textbf{x})\leq 2^{d}.$$ Then using the above relation together with (10), (17), (14) and (9) we obtain for every $\textbf{x}\in Bd(K) $ $$|1-\displaystyle\sum_{k=1}^{N}p_{k}(\textbf{x})|=|\displaystyle\sum_{k=1}^{N}(f_{k}-p_{k})(\textbf{x})|\leq \Big|\sum_{k\in R(\textbf{x})}(f_{k}-p_{k})(\textbf{x})\Big| + \Big|\sum_{k\notin R(\textbf{x})}(f_{k}-p_{k})(\textbf{x})\Big|$$ $$\leq c2^{d}(1/(hn)^{m}+nh^{1+\epsilon}) + N\max_{1\leq k\leq N}||p_{k}||_{ Bd(K) \setminus C_{k}}$$ $$\leq c( h^{-m-d}n^{-m} +h^{-d}(2/3)^{2n}+ nh^{1+\epsilon} ).\eqno(18)$$ Now it remains to choose appropriate values for $m$ and $h$. Choose $m \in \mathbb{N}$ so large that $$R:={m\epsilon -d \over 1+m+\epsilon+d} > \tau\epsilon \ \hbox{ and let } \ \gamma:= {1+m \over 1+m+\epsilon+d} . $$ Letting $h:=n^{-\gamma}$ we see that $h^{-m-d}n^{-m} = nh^{1+\epsilon} = n^{-R}$. (Hence the $h^{1+\epsilon} \le cn^{-1} $ condition is satisfied.) In addition, $h^{-d}(2/3)^{2n} = O(n^{-R})$, too. This completes the proof of Lemma \ref{k3}. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \textbf{Proof of Theorem \ref{k1}.} First we use the classical Weierstrass Theorem to approximate $f\in C( Bd(K) )$ by a polynomial $$p_{m}=\displaystyle\sum_{j=0}^{m}h_{j}^{*}, \quad h_{j}^{*}\in H_{j}^{d},\quad 0\leq j\leq m$$ of degree at most $m$ so that $$||f-p_{m}||_{ Bd(K) }\leq \delta$$ with any given $\delta>0$. Let $\tau \in (0,1)$ be arbitrary. According to Lemma \ref{k3} there exist $h_{n,j}\in H_{2n-2[j/2]}^{d}$ such that $||1-h_{n,j}||_{ Bd(K) }=O(n^{-\tau\epsilon}), 0\leq j\leq m$.
Clearly, $$h^{*}:=\displaystyle\sum_{j=0}^{m}h_{n,j}h_{j}^{*}\in H_{2n}^{d}+H_{2n+1}^{d}$$ and $$||f-h^{*}||_{ Bd(K) }\leq \delta +O(n^{-\tau\epsilon}).$$ \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \section{Proof of Theorem \ref{k2}} \begin{definitions} Let $L\subset\rr$ and let $f: \ L\to\rr \cup\{-\infty \} \cup\{+\infty \} $ be a function which is defined almost everywhere (a.e.) on $L$. We say that $f$ is {\it increasing} if $f(x)\le f(y)$ whenever $f$ is defined at $x$ and $y$ and $x\le y$. We say that $f$ is {\it increasing} almost everywhere if there exists $L^* \subset L$ such that $L\setminus L^*$ has Lebesgue measure zero, $f(x)$ is defined for all $x \in L^*$ and $f(x)\le f(y)$ whenever $x,y\in L^*, \ x\le y$. We say that $f$ is {\it convex} if $f$ is absolutely continuous and $f'(x)$ (which exists a.e.) is increasing a.e. on $L$. Let $\rrr:=\rr\cup\{\infty\}$ denote the one-point compactified real line (which is homeomorphic to the unit circle). \end{definitions} Let $W: \rrr\to\rr$ be a non-negative function. Define $Q: \ \rrr\to(-\infty,+\infty]$ by $$W(t)=\exp(-Q(t)).$$ In the rest of the paper we make the following assumptions on the weight $W(t)$, $t\in\rrr$: \begin{eqnarray}\label{42} {1\over W(t)} \hbox{ is positive and convex on } \rr \end{eqnarray} \begin{eqnarray}\label{43} {|t|\over W(-{1\over t})} \hbox{ is positive and convex on } \rr. \end{eqnarray} \begin{remark} Equivalently, instead of \rf{43} we may assume that \rf{41} below holds and $ \lim_{t\to+\infty} $ $t(tQ'(t)-1) \le \lim_{t\to-\infty} t(tQ'(t)-1).$ We also remark that \rf{42} implies that \rf{43} is satisfied on $(-\infty,0)$ and on $(0,+\infty)$. \end{remark} We mention the function $W(t)=(1+|t|^m)^{-1/m}$, $m\ge 1$, as an example which satisfies \rf{42} and \rf{43}. We say that a property is satisfied {\it inside} $\rr$ if it is satisfied on all compact subsets of $\rr$. Some consequences of \rf{42} and \rf{43} are as follows.
\begin{eqnarray}\label{41} \lim_{t\to\pm\infty} |t|W(t)=\rho\in(0,+\infty) \hbox{ exists. } \end{eqnarray} Since $\exp(Q(t))$ is convex, it is Lipschitz continuous inside $\rr$. So $\exp(Q(t))$ is absolutely continuous inside $\rr$, which implies that both $W(t)$ and $Q(t)$ are absolutely continuous inside $\rr$. $Q'(t)$ is bounded inside $\rr$ a.e. because by \rf{42} $\exp(Q(t))Q'(t)$ is increasing a.e. Below we collect some frequently used definitions and notations of the paper. \begin{definitions} Let $L\subset\rr$ and let $f: \ L\to\rr \cup\{-\infty \} \cup\{+\infty \} $. $f$ is H\"older continuous with H\"older index $0<\tau \le 1$ if for some constant $K$ we have $|f(x)-f(y)| \le K|x-y|^\tau,$ $x,y\in L$. In this case we write $f \in H^\tau(L).$ The $L^p$ norm of $f$ is denoted by $||f||_p$. When $p=\infty$ we will also use the $||f||_{L}$ notation. We say that an integral or limit exists if it exists as a real number. Let $x\in\rr$. If $f$ is integrable on $ L\setminus(x-\epsilon,x+\epsilon) $ for all $0<\epsilon$ then the Cauchy principal value integral is defined as $$ PV \int_L f(t)dt := \lim_{\epsilon\to 0^+} \int_{L\setminus(x-\epsilon,x+\epsilon)} f(t)dt ,$$ if the limit exists. It is known that $PV \int_L g(t)/(t-x)dt$ exists for almost every $x\in\rr$ if $g: \ L\to\rr$ is integrable. For $0<\iota$ and $a\in\rr$ we define $$a_\iota^+ := \max(a,\iota) \ \hbox { and } \ a_\iota^- := \max( -a,\iota).$$ For $a>b$ the interval $[a,b]$ is the empty set. We say that a property is satisfied {\it inside} $L$ if it is satisfied on all compact subsets of $L$. $o(1)$ denotes a quantity tending to zero. For example, we may write $10^x = 100+o(1)$ as $x\to 2$. Sometimes we also specify the domain (which may change with $\epsilon$) on which the equation should be considered. For example, $\sin(x) = o(1)$ for $x\in[\pi,\pi+\epsilon]$ when $\epsilon\to 0^+$. The equilibrium measure and its support $S_w$ are defined below.
Let $[a_\lambda,b_\lambda]$ denote the support $S_{W^\lambda}$ (see Lemma \ref{145}). For $ x\not\in(a_\lambda ,b_\lambda )$ let $V_\lambda (x):=0$, and for a.e. $x\in(a_\lambda ,b_\lambda )$ let \begin{eqnarray}\label{90} V_\lambda (x):={ PV \int_{a_\lambda }^{b_\lambda } { \lambda {\sqrt{(t-a_\lambda )(b_\lambda -t)} } Q'(t) \over t-x}dt \over \pi^2\sqrt{ (x-a_\lambda )(b_\lambda -x)} } +{1\over \pi\sqrt{(x-a_\lambda)(b_\lambda-x)}}. \end{eqnarray} Let $x\in[-1,1]$. Depending on the value of $c\in [-1,1]$ the following integrals may or may not be principal value integrals. $$ v_c(x):= - PV \int_{-1}^c {\lambda \sqrt{1-t^2} e^{-Q(t)} \over \pi^2 \sqrt{1-x^2} (t-x) }dt, $$ \begin{eqnarray*}\label{} h_c(x):= PV \int_c^1 {\lambda\sqrt{1-t^2} e^{-Q(t)} \over \pi^2 \sqrt{1-x^2} (t-x)}dt. \end{eqnarray*} (We should keep in mind that $v_c(x)$ and $h_c(x)$ also depend on $\lambda$.) Define \begin{eqnarray*}\label{} B(x) :=v_c(x)-h_c(x) =v_1(x) = -PV \int_{-1}^1 { \lambda \sqrt{1-t^2} e^{-Q(t)} \over \pi^2 \sqrt{1-x^2} (t-x) }dt, \quad x\in[-1,1]. \end{eqnarray*} $P_n(x)$ and $p_n(x)$ denote polynomials of degree at most $n$. \end{definitions} Functions with smooth integrals were introduced by Totik in \cite{totik}. \begin{definitions} We say that $f$ has smooth integral on $R\subset L$ if $f$ is non-negative a.e. on $R$ and \begin{eqnarray}\label{99} \int_I f =(1+o(1))\int_J f \end{eqnarray} where $I, \ J \subset R$ are any two adjacent intervals, both of which have length $\epsilon>0$, and $\epsilon\to 0$. The $o(1)$ term depends on $\epsilon$ and not on $I$ and $J$. We say that a family of functions $ {\cal F} $ has uniformly smooth integral on $R$ if every $f \in {\cal F}$ is non-negative a.e. on $R$ and \rf{99} holds, where the $o(1)$ term depends on $\epsilon$ only, and not on the choice of $f$, $I$ or $J$. \end{definitions} Clearly, if $f$ is continuous and has a positive lower bound on $R$ then $f$ has smooth integral on $R$.
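A minimal justification of the last remark (a sketch, under the additional assumption that $R$ is a compact interval and $f\ge m>0$ on $R$): if $I$ and $J$ are adjacent subintervals of $R$ of common length $\epsilon$, then by the mean value theorem for integrals there are $\xi_I\in I$ and $\xi_J\in J$ with $$\int_I f=\epsilon f(\xi_I), \qquad \int_J f=\epsilon f(\xi_J),$$ and since $|\xi_I-\xi_J|\le 2\epsilon$ and $f$ is uniformly continuous on $R$, $$ {\int_I f \over \int_J f} = {f(\xi_I)\over f(\xi_J)} = 1+{f(\xi_I)-f(\xi_J)\over f(\xi_J)} = 1+O\Big({\omega_f(2\epsilon)\over m}\Big) = 1+o(1), $$ where $\omega_f$ denotes the modulus of continuity of $f$ on $R$.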
Also, non-negative linear combinations of finitely many functions with smooth integrals on $R$ also have smooth integral on $R$. From the Fubini Theorem it follows that if $\nu$ is a finite positive Borel measure on $T\subset \rr$ and $\{ v_t(x): \ t\in T\}$ is a family of functions with uniformly smooth integral on $R$ for which $t \to v_t(x)$ is measurable for a.e. $x \in [a,b]$, then \begin{eqnarray*}\label{} v(x):=\int_T v_t(x) d\nu(t) \end{eqnarray*} also has smooth integral on $R$. Finally, if $f_n \to f$ uniformly a.e. on $R$, each $f_n$ has smooth integral on $R$ and $f$ has a positive lower bound a.e. on $R$, then $f$ has smooth integral on $R$. \begin{remark}\label{146} Since $\exp(-Q)$ is absolutely continuous inside $\rr$ and $(\exp(-Q))'=-\exp(-Q)Q'$ is bounded a.e. on $[-1,1]$, by the fundamental theorem of calculus we see that $\exp(-Q(t))\in H^1([-1,1])$. Moreover, $\sqrt{1-t}, \ \sqrt{1+t} \in H^{0.5}([-1,1])$, so $\sqrt{1-t}\sqrt{1+t}\exp(-Q(t)) \in H^{0.5}([-1,1])$, and hence $\sqrt{1-x^2}B(x)\in H^{0.5}([-1,1])$ by the Plemelj-Privalov Theorem (\cite{musk}, \textsection{19}). As a consequence, $v_c(x)$ and $h_c(x)$ exist for any $x \in [-1,1]\setminus \{c\}.$ \end{remark} \vskip 0.5 cm The following definitions and facts are well known in logarithmic potential theory (see \cite{st} and \cite{simeonov}). Let $w(x)\not\equiv 0$ be a non-negative continuous function on $\rrr$ such that \begin{eqnarray}\label{170} \lim_{x\to \infty} |x|w(x)=\alpha \in[0,+\infty) \ \hbox{ exists }. \end{eqnarray} When $\alpha=0$, $w$ belongs to the class of so-called ``admissible'' weights. We write $w(x)=\exp(-q(x))$ and call $q(x)$ the external field. If $\mu$ is a positive Borel unit measure (in short, a ``probability measure'') on $\rrr$, then its weighted energy is defined by \begin{eqnarray*}\label{} I_w(\mu) :=\int\int \log{1 \over |x-y|w(x)w(y)} d\mu(x)d\mu(y). \end{eqnarray*} The integrand is bounded from below (\cite{simeonov}, p.
3), so $I_w(\mu)$ is well defined and $-\infty <I_w(\mu)$. Whenever it makes sense, we define the (unweighted) logarithmic energy of $\mu$ as $I_1(\mu)$, where $1$ denotes the constant 1 function. There exists a unique probability measure $\mu_w$, called the equilibrium measure associated with $w$, which minimizes $I_w(\mu)$. Also, $$V_w := I_w(\mu_w) \quad \hbox{ is finite,}$$ and $\mu_w$ has finite logarithmic energy when $\alpha=0$. If the support of $\mu$ is compact, we define its potential as $$U^\mu(x) :=\int\log{1 \over |t-x|}d\mu(t).$$ This definition makes sense for a signed measure $\nu$, too, if $\int \Big|\log|t-x|\Big| d|\nu|(t)$ exists. Let $$S_w :={\rm supp} (\mu_w) \ \hbox{ denote the support of } \ \mu_w.$$ When $\alpha=0$, $S_w$ is a compact subset of $\rr$. In this case, with some constant $F_w$, we have $$U^{\mu_w}(x)+q(x) =F_w, \quad x\in S_w.$$ \vskip 0.5 cm Let $Bd(K)$ be the boundary of a two-dimensional convex region $K \subset \rr^2$ which is centrally symmetric about the origin $(0,0)$. For $t\in\rr$ let $(x(t),y(t))$ be either of the two points on $Bd(K) $ for which \begin{eqnarray}\label{177} {y(t)\over x(t)}=t. \end{eqnarray} Let $x(\infty):=0$ and choose the value $y(\infty)$ such that $(0, y(\infty) ) \in Bd(K) $. We define $y(\infty) / 0$ to be $\infty$, so \rf{177} also holds for $t=\infty$. Define \begin{eqnarray*}\label{ } W(t):=e^{-Q(t)}:=|x(t)|, \quad t\in\rrr. \end{eqnarray*} \begin{lemma} $W(t)$ satisfies properties \rf{42} and \rf{43}, and $S_W=\rrr$. \end{lemma} \noindent {\bf Proof.}\ $W$ is positive on $\rr$. We may assume that $x(t)>0, \ t\in \rr$. Let $t_1, t_3 \in\rr$ and $t_2:=\alpha t_1+(1-\alpha)t_3$, where $0<\alpha <1$. Let $(x_2,y_2)$ be the intersection of the line segments $\overline{ (x(t_1),y(t_1))(x(t_3),y(t_3)) }$ and $\overline{ (0,0)(x(t_2),y(t_2)) } $. Note that $1/x(t_2) \le 1/x_2$ and by elementary calculations $$ {1 \over x_2} = \alpha {1\over x(t_1)}+(1-\alpha ){1\over x(t_3)} , $$ so \rf{42} holds.
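The ``elementary calculations'' can be carried out as follows (a brief sketch; write $x_i:=x(t_i)$, so $y(t_i)=t_ix_i$, and assume $t_1\neq t_3$, the case $t_1=t_3$ being trivial). A point of the first segment has the form $\big(sx_1+(1-s)x_3,\ st_1x_1+(1-s)t_3x_3\big)$ with $s\in[0,1]$, and it lies on the ray $\{y/x=t_2\}$ with $t_2=\alpha t_1+(1-\alpha)t_3$ exactly when $$(t_1-t_3)\big((1-\alpha)sx_1-\alpha(1-s)x_3\big)=0, \quad \hbox{that is,} \quad s={\alpha x_3\over (1-\alpha)x_1+\alpha x_3}.$$ With this $s$, $$x_2=sx_1+(1-s)x_3={x_1x_3\over (1-\alpha)x_1+\alpha x_3}, \quad \hbox{hence} \quad {1\over x_2}={\alpha \over x_1}+{1-\alpha\over x_3}.$$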
The proof of \rf{43} is identical to the proof of \rf{42} once we notice that $y(-1/t)/x(-1/t) = -1/t$, and so $|t|/W(-1/t)=1/|y(-1/t)|$. $S_W=\rrr$ follows from Corollary 3 of \cite{bdd}, since \rf{42} implies that (2.2) in \cite{bdd} is increasing on $(0,2\pi)$ with the choice $c:=0$, and \rf{43} implies that (2.2) in \cite{bdd} is increasing on $(\pi,3\pi)$ with the choice $c:=\pi$. (Corollary 3 can be used since \rf{41} shows that $q(\theta ):=Q(-\cot({\theta / 2}))+\log|\sin({\theta / 2})|+\log 2$ is a continuous function on $[0,2\pi]$. Moreover, $q(\theta)$ is absolutely continuous inside $(0,2\pi)$, so it is absolutely continuous on $[0,2\pi]$.) \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \begin{lemma}\label{145} Let $1<\lambda $. Then $S_{W^\lambda }$ is a finite interval $[a_\lambda ,b_\lambda ]$, and $ \mu_{W^\lambda } $ is absolutely continuous with respect to the Lebesgue measure, with density $d\mu_{W^\lambda }(x) = V_\lambda (x) dx.$ \end{lemma} \noindent {\bf Proof.}\ Let $1<p$. Note that $\exp(\lambda Q(x))=(\exp(Q(x)))^{\lambda}$ is a convex function, being the composition of the increasing convex function $t\mapsto t^{\lambda}$ $(t>0)$ with the convex function $\exp(Q(x))$. So by \cite{b1}, Theorem 5, $S_{W^\lambda }$ is an interval $[a_\lambda ,b_\lambda ]$, which is finite since $\lim_{x\to\pm\infty} |x|W^\lambda (x)=0$. The density function $(d\mu_{W^\lambda }(x))/dx$ exists, since $(W^\lambda)'= -\exp(-\lambda Q)\lambda Q' \in L^p(\rr)$; see Theorem IV.2.2 of \cite{st}. The integral in \rf{90} is the Hilbert transform on $\rr$ of the function defined as $\lambda \sqrt{(t-a_\lambda )(b_\lambda -t)} Q'(t) $ on $(a_\lambda ,b_\lambda )$ and $0$ elsewhere. This function is in $L^p(\rr)$, so by the M. Riesz Theorem the integral is also in $L^p(\rr)$, hence $V_\lambda (x)$ exists for a.e. $x\in[a_\lambda ,b_\lambda ]$. Moreover, by the H\"older inequality ($1/a+1/b=1/c$ implies $||fg||_c\le ||f||_a||g||_b$) we see that $V_\lambda \in L_{1.9}(\rr)$, so $V_\lambda \in L_1(\rr)$, too.
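In more detail, the H\"older step may be sketched as follows (the exponents below are one possible choice). The factor $\big((x-a_\lambda)(b_\lambda-x)\big)^{-1/2}$ belongs to $L^{s}(a_\lambda,b_\lambda)$ for every $s<2$; take, say, $s:=1.95$ and let $p$ be defined by $$ {1\over p}+{1\over s}={1\over 1.9}. $$ Since the Hilbert transform above lies in $L^{p}(\rr)$, the product, i.e.\ the first term of \rf{90}, lies in $L_{1.9}$ by the stated form of the H\"older inequality; the second term of \rf{90} lies in $L_{1.9}$ directly, since $1.9<2$. Finally, $L_{1.9}(a_\lambda,b_\lambda)\subset L_{1}(a_\lambda,b_\lambda)$ because the interval is bounded.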
By the proof of Lemma 16 of \cite{b0}, the function $V_\lambda $ satisfies $\int V_\lambda(x)dx =1$ and \begin{eqnarray}\label{91} \int_{a_\lambda }^{b_\lambda } \log|t-x|V_\lambda (t)dt = \lambda Q(x)+C, \quad x\in (a_\lambda ,b_\lambda ). \end{eqnarray} The left-hand side is well defined since by the H\"older inequality \begin{eqnarray}\label{200} x \ \mapsto \ \int_{a_\lambda }^{b_\lambda } \Big|\log|t-x| \Big| |V_\lambda (t)|dt \quad \hbox{ is uniformly bounded on } \ [a_\lambda ,b_\lambda ]. \end{eqnarray} Consider the unit signed measure $\mu$ defined by $d\mu(x):=V_\lambda (x)dx$. By \rf{91} $U^\mu(x)+\lambda Q(x)=-C$, $x\in(a_\lambda ,b_\lambda )$. From this and from $ U^{\mu_{W^\lambda} }(x) + \lambda Q(x) = F_{W^\lambda} $, $x \in[a_\lambda ,b_\lambda ]$, we get $U^\mu(x) = U^{\mu_{W^\lambda} }(x) $, $x \in(a_\lambda ,b_\lambda )$. But \rf{200} shows that $U^{\mu^+}(x)$ and $U^{\mu^-}(x)$ are finite for all $x \in[a_\lambda ,b_\lambda ]$. So $U^{ \mu^+ }(x) = U^{ \mu_{W^\lambda} + \mu^- }(x)$, $x\in(a_\lambda ,b_\lambda )$. Here $\mu^+$ and $\mu_{W^\lambda} + \mu^-$ are positive measures which have the same mass. $\mu_{W^\lambda} $, $\mu^-$ (and $\mu^+$) all have finite logarithmic energy (see \rf{200}), hence $ \mu_{W^\lambda} + \mu^-$ has finite logarithmic energy, too. Applying Theorem II.3.2 of \cite{st} we get $U^{ \mu^+ }(z) = U^{ \mu_{W^\lambda} + \mu^- }(z) $ for all $z\in \mathbb C$. By the unicity theorem (\cite{st}, Theorem II.2.1) $\mu^+ = \mu_{W^\lambda} + \mu^- $. Hence $\mu=\mu_{W^\lambda} $ and our lemma is proved. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \begin{lemma}\label{30} For any interval $[a,b]$, if $1<\lambda$ and $\lambda$ is sufficiently close to $1$, then $[a,b]\subset(a_\lambda,b_\lambda)$ and $V_\lambda(x)$ has a positive lower bound a.e. on $[a,b]$. \end{lemma} \noindent {\bf Proof.}\ First we show that $\lim_{\lambda \to 1^+} a_\lambda =-\infty$ and $\lim_{\lambda \to 1^+} b_\lambda =+\infty$.
Fix $z\in\rr$ and let $\lambda _n \searrow 1$ be arbitrary. We show that $z\in ( a_{\lambda _n},b_{\lambda _n})$ for large $n$. If this were not the case then for a subsequence (indexed also by $\lambda _n$) we would have \begin{eqnarray}\label{20} [a_{\lambda _n},b_{\lambda _n}]\subset [z,+\infty). \end{eqnarray} (Or, for a subsequence, $[a_{\lambda _n},b_{\lambda _n}]\subset (-\infty,z],$ which can be handled similarly.) $\rrr$ is compact, so by Helly's Selection Theorem (\cite{st}, Theorem 0.1.3) we can find a subsequence of the equilibrium measures $\mu_{W^{\lambda _n}}$ (indexed also by $\lambda_n$) which weak-* converges to a probability measure $\mu$. This we denote by $\mu_{W^{\lambda _n}} \weak \mu$. For a fixed large $N>0$ we define the probability measure $$\nu_N := { \mu_W \Big|_{[-N,N]} \over ||\mu_W \Big|_{[-N,N]} || } .$$ We remark that $\mu_W(\{\infty\})=0$, which implies that \begin{eqnarray}\label{57} ||\mu_W \Big|_{[-N,N]} || \to 1 \ \hbox{ as } N\to +\infty. \end{eqnarray} By (\cite{simeonov}, p. 3) there exists $K \in \rr$ such that \begin{eqnarray}\label{56} K \le \log{ 1 \over |z-t| W(z)W(t) }, \quad z,t \in \rrr. \end{eqnarray} Now we show that \begin{eqnarray}\label{55} \int\int \log{ 1 \over |z-t| W^{\lambda_1}(z)W^{\lambda_1}(t) } d\nu_N(t)d\nu_N(z) \ \hbox{ is finite. } \end{eqnarray} By \rf{56} the double integral in \rf{55} is bounded from below. It equals \begin{eqnarray*} \int\int \log{ 1 \over |z-t|^{\lambda_1} W^{\lambda_1}(z)W^{\lambda_1}(t) } d\nu_N(t)d\nu_N(z) + \int\int \log |z-t|^{\lambda_1-1} d\nu_N(t)d\nu_N(z). \end{eqnarray*} Here the first double integral is finite because $V_W$ is finite (\cite{simeonov}, Theorem 1.2), and the second integral is bounded from above since $\nu_N$ has compact support. So \rf{55} is established. Choose $0<\tau$ such that $||\tau W(x)||_\infty\le 1$.
Now, $$ I_W(\mu) -\log(\tau^2)$$ $$ = \lim_{M\to+\infty}\int\int \min\Big(M,\log{1\over |z-t|(\tau W(z))(\tau W(t))}\Big)d\mu(t)d\mu(z) $$ $$ =\lim_{M\to+\infty}\lim_{n\to+\infty}\int\int \min\Big(M,\log{1\over |z-t|(\tau W(z))(\tau W(t))}\Big)d\mu_{ W^{\lambda _n} }(t)d\mu_{W^{\lambda _n}}(z)$$ $$ \le \lim_{n\to+\infty}\int\int \log{1\over |z-t|(\tau W(z))^{\lambda _n}(\tau W(t))^{\lambda _n}}d\mu_{W^{\lambda _n}}(t)d\mu_{W^{\lambda _n}}(z)$$ $$ \le \lim_{n\to+\infty} \int\int \log{1\over |z-t|(\tau W(z))^{\lambda _n}(\tau W(t))^{\lambda _n}}d\nu_N(t)d\nu_N(z) $$ \begin{eqnarray}\label{150} = \int\int \log{1\over |z-t| W(z) W(t)}d\nu_N(t)d\nu_N(z) -\log(\tau^2). \end{eqnarray} In the first equality above we used the monotone convergence theorem (see also \rf{56}). In the second equality we used $\mu_{W^{\lambda _n}} \times\mu_{W^{\lambda _n}} \weak \mu \times\mu$. In the second inequality it was used that $\mu_{W^{\lambda _n}}$ is the probability measure which minimizes the double integral of $-\log( |z-t| W^{\lambda _n}(z) W^{\lambda _n}(t))$. In the last equality we used the monotone convergence theorem again. (It can be used because of \rf{56}, and because the integral is finite even with the power $\lambda_1$ by \rf{55}.) Also, $$ \int\int \Big[ \log{1\over |z-t| W(z) W(t)} - K \Big] d\nu_N(t)d\nu_N(z) $$ \begin{eqnarray*} \le \int\int \Big[ \log{1\over |z-t| W(z) W(t)} - K \Big] {d \mu_W(t) \over ||\mu_W \Big|_{[-N,N]} || } {d \mu_W(z) \over ||\mu_W \Big|_{[-N,N]} || }. \end{eqnarray*} Combining this with \rf{150} we have $$I_W(\mu) \le K \Big[ 1 - {1 \over ||\mu_W \Big|_{[-N,N]} ||^2 } \Big] + {1 \over ||\mu_W \Big|_{[-N,N]} ||^2 } V_W. $$ Letting $N\to+\infty$ we obtain $I_W(\mu)\le V_W$. Therefore $\mu=\mu_W$. Thus $\mu_{W^{\lambda _n}} \weak \mu_W $, which contradicts \rf{20}, since $S_W=\rrr$. To prove the positive lower bound of $V_\lambda(x)$ a.e. on $[a,b]$, let $I:=[a-1,b+1]$.
Since $W^\lambda $ is an admissible weight, we can use \cite{st}, Theorem IV.4.9, to get \begin{eqnarray*}\label{} \mu_{W^\lambda }\Big|_{ S_{W^{\lambda ^2}} } \ge \Big(1-{1\over \lambda ^2}\Big) \omega_{S_{W^\lambda }}\Big|_{S_{W^{\lambda ^2}}}, \end{eqnarray*} where $\omega_{ S_{W^\lambda}}$ is the classical equilibrium measure of the set $S_{W^\lambda }$ (with no external field present). (We remark that $S_{W^{\lambda}} \supset S_{W^{\lambda ^2}}$.) It follows that if $\lambda $ is so close to $1$ that $S_{W^{\lambda ^2}} \supset I$ holds, then $[a,b]\subset(a_\lambda,b_\lambda)$ and $V_\lambda(x)$ has a positive lower bound a.e. on $[a,b]$. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} We will need Lemma 22 of \cite{b0}, which we formulate as follows: \begin{lemma}\label{50} Let $A<B<1$, $f\in L^1[A,1]$ and $f\in H^1[A,(B+1)/2]$. Define $v^*(x) := \int_c^1 f(t)/(t-x)dt$, where $c\in[A,B]$ and $x<c$. Then \begin{eqnarray*}\label{} v^*(x) = (f(c)+o(1)) \log{1 \over c-x}, \quad \hbox{ as } \quad x\to c^-. \end{eqnarray*} Here $o(1)$ depends on $c-x$ only. \end{lemma} \begin{lemma}\label{51} Let $-1<a<b<1$ and $0<\iota$ be fixed. Let $0<\epsilon < 1/10$ and $\delta :=\sqrt{\epsilon } - 2\epsilon $. Then for $x_1,x_2\in [a,b]\cap (c-\delta,c+\delta)^c$, $|x_1-x_2| \le \epsilon $, all the quotients \begin{eqnarray*}\label{} { v_c(x_1)_\iota ^+ \over v_c(x_2)_\iota ^+ }, \quad {v_c(x_1)_\iota ^- \over v_c(x_2)_\iota ^-}, \quad {h_c(x_1)_\iota ^+ \over h_c(x_2)_\iota ^+}, \quad {h_c(x_1)_\iota ^- \over h_c(x_2)_\iota ^-} \end{eqnarray*} are equal to $1+o(1)$ as $\epsilon \to 0^+$. Here the $o(1)$ term is independent of $x_1,x_2$ and $c$.
\end{lemma} \noindent {\bf Proof.}\ First we consider the case when $x_1, \ x_2 \le c-\delta.$ Note that for $x_1>x_2$ we have $1/(t-x_2)<1/(t-x_1)$, $t\in [c,1]$, whereas for $x_1 \le x_2$ we have $${1 \over t-x_2} \le \Big(1+{x_2-x_1 \over c-x_2} \Big){1\over t-x_1} = (1+o(1)){1 \over t-x_1}, \quad t\in[c,1].$$ Multiplying these inequalities by $\lambda \sqrt{1-t^2} \exp(-Q(t))/\pi^2$ and integrating on $[c,1]$ we obtain \begin{eqnarray}\label{66} {h_c(x_2) \over h_c(x_1) } = 1+o(1), \end{eqnarray} where $\sqrt{1-x_2^2}/\sqrt{1-x_1^2} = 1+o(1)$ was also used. By the same argument, if $x_1,\ x_2 \ge c+\delta$, we have $v_c(x_2)/v_c(x_1)=1+o(1)$, from which \begin{eqnarray}\label{67} {v_c(x_2)_\iota^+ \over v_c(x_1)_\iota^+} = 1+o(1). \end{eqnarray} Returning to the case of $x_1, \ x_2 \le c-\delta,$ from $v_c(x)=h_c(x)+B(x)$, from \rf{66} and from $B(x_2)=B(x_1)+o(1)$ we get $$|v_c(x_2)-v_c(x_1)| = |o(1)| (1+|v_c(x_1)-B(x_1)|)$$ \begin{eqnarray}\label{68} \le |o(1)|(|v_c(x_1)|+1+||B||_{[a,b]} ). \end{eqnarray} Assuming $|v_c(x_1)| \le 1$, we have \begin{eqnarray*}\label{} |v_c(x_2)_\iota^+ - v_c(x_1)_\iota^+| \le |v_c(x_2)-v_c(x_1)| \le |o(1)|, \end{eqnarray*} so \rf{67} holds again. Finally, if $|v_c(x_1)| \ge 1$, then from \rf{68} \begin{eqnarray*}\label{} \Big| {v_c(x_2) \over v_c(x_1)} -1 \Big| = |o(1)| \Big(1+ {1+||B||_{[a,b]} \over |v_c(x_1)|} \Big) = |o(1)|, \end{eqnarray*} from which \rf{67} again easily follows. The rest of the lemma is proved similarly. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \begin{lemma}\label{79} Let $-1<a<b<1$ and $0<\iota$ be fixed. Then the families of functions $ {\cal F}^+ :=\{ v_c(x)_\iota^+ : \ c\in [-1,1] \}$ and ${\cal F}^-:=\{ v_c(x)_\iota^- : \ c\in [-1,1] \}$ have uniformly smooth integrals on $[a,b]$. \end{lemma} \noindent {\bf Proof.}\ We consider ${\cal F}^+$ only (${\cal F}^-$ can be handled similarly). Let $c\in[-1,1]$.
Let $I:=[u-\epsilon ,u]$, $J:=[u,u+\epsilon ]$ be two adjacent intervals of $[a,b] $, where $0<\epsilon <1/10$. We have to show that \begin{eqnarray*}\label{} { \int_I v_c(t)_\iota ^+ dt \over \int_J v_c(t)_\iota ^+ dt } =1+o(1), \quad \hbox{ as } \quad \epsilon \to 0^+, \end{eqnarray*} where $o(1)$ is independent of $I,J$ and $c$. Let $\delta :=\sqrt{\epsilon }-2\epsilon \ (>\epsilon)$. {\it Case 1:} Assume $I\cup J \subset (c-\delta,c+\delta)^c$. From Lemma \ref{51} we have $v_c(t)_\iota ^+ = (1+o(1)) v_c(t+\epsilon )_\iota ^+,$ $t\in I$. Thus $ \int_I v_c(t)_\iota ^+ dt = (1+o(1)) \int_J v_c(t)_\iota ^+ dt.$ {\it Case 2:} Assume $(I\cup J) \cap (c-\delta ,c+\delta ) \not=\emptyset$. So $I\cup J\subset [ c-\sqrt{\epsilon }, c+\sqrt{\epsilon }].$ Let $\epsilon$ be so small that $c\in [(a-1)/2,(b+1)/2] $. (This can be done because of our assumption of Case 2.) Let $f(t):=\lambda\sqrt{1-t^2} \exp(-Q(t)) / \pi^2$. Applying Lemma \ref{50} (with $A:=(a-1)/2$, $B:=(b+1)/2$) we have $\sqrt{1-x^2}h_c(x)=(f(c)+o(1)) (-\log|c-x|)$ for $x\in [c- \sqrt{\epsilon} ,c)$ as $\epsilon\to 0^+$, which easily leads to \begin{eqnarray*}\label{} h_c(x)= ({f(c) \over \sqrt{1-c^2}}+o(1)) (-\log|c-x|) \hbox{ for } x\in [c- \sqrt{\epsilon} ,c) \hbox{ as } \epsilon\to 0^+. \end{eqnarray*} From here using $h_c(x)=v_c(x)-B(x)$ we get \begin{eqnarray}\label{63} v_c(x)= ( {f(c) \over \sqrt{1-c^2}} +o(1)) (-\log|c-x|) \hbox{ for } x\in [c-\sqrt{\epsilon} ,c) \hbox{ as } \epsilon\to 0^+. \end{eqnarray} Clearly, \rf{63} also holds for $x\in (c,c+\sqrt{\epsilon} ]$ (which can be seen by stating Lemma \ref{50} for $-1<A<B$ instead of $A<B<1$). $f(x)$ has a positive lower bound on $[(a-1)/2,(b+1)/2]$. So we can choose $\epsilon $ so small that the right hand side of \rf{63} is at least $\iota $ for all possible values of $c$ and $x$. 
Hence $v_c(x)=v_c(x)_\iota ^+$ and \begin{eqnarray*}\label{} { \int_I v_c(t)_\iota ^+ dt \over \int_J v_c(t)_\iota ^+ dt } = { ( {f(c) \over \sqrt{1-c^2}} +o(1)) \int_I \log{1\over |c-t|} dt \over ( {f(c) \over \sqrt{1-c^2}} +o(1)) \int_J \log{1\over |c-t|} dt } =(1+o(1))^2 =1+o(1), \end{eqnarray*} where we used that $\log(1/|x|)$ has smooth integral on $[-2,2] $ (\cite{b0}, Proposition 20). \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \begin{lemma}\label{78} Let $F(x)=G(x)-H(x)$, where $F(x), \ G(x), \ H(x)$ are a.e. non-negative functions defined on an interval, $G(x)$ and $H(x)$ have smooth integrals and $H(x)\le (1-\eta)G(x)$ a.e. with some $\eta\in(0,1)$. Assume also that $\int_I F=0$ implies $\int_I G=\int_I H =0$, when the interval $I$ is small enough. Then $F(x)$ has smooth integral. \end{lemma} \noindent {\bf Proof.}\ Let $I$ and $J$ be two adjacent intervals of equal length $\epsilon$, where $\epsilon$ is ``small enough''. Let $a:=\int_I G$, $A:=\int_J G$, $b:=\int_I H$, $B:=\int_J H$. By assumption \begin{eqnarray}\label{71} A=(1+o(1))a \quad \hbox{ and } \quad B=(1+o(1))b, \quad \hbox{ as } \epsilon\to 0^+ \end{eqnarray} and we have to show that $A-B=(1+o(1))(a-b).$ We may assume that $a-b\not= 0,$ otherwise $a=b=0$ by the assumption of the lemma and so $A=B=0.$ Integrating $H\le (1-\eta)G $ on $I$ we get $b\le (1-\eta)a,$ from which $(a+b)/(a-b)\le (1+(1-\eta))/(1-(1-\eta)).$ Thus, from \rf{71} \begin{eqnarray*}\label{} |(A-a)-(B-b)| \le |o(1)|(a+b)\le |o(1)|(a-b). \end{eqnarray*} \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} Following the proof of Lemma 24 of \cite{b0} we will prove the following lemma; we remark that the absolute continuity hypothesis of Lemma 24 in \cite{b0} is unnecessary. \begin{lemma}\label{140} Let $N(x)$ be a bounded, increasing, right-continuous function on $[-1,1]$ and let $f(x) \in L^1([-1,1])$ be non-negative.
Then \begin{eqnarray}\label{110} PV \int_{-1}^1 {f(t)N(t)\over t-x}dt = - N(1)f_{1}(x) + \int_{(-1,1]} f_t(x)dN(t), \quad a.e. \ x\in[-1,1], \end{eqnarray} where the integral on the right-hand side is a Lebesgue-Stieltjes integral and \begin{eqnarray*}\label{} f_c(x):= -PV \int_{-1}^c {f(t)\over t-x}dt, \quad a.e. \ x\in[-1,1]. \end{eqnarray*} \end{lemma} \noindent {\bf Proof.}\ Let us denote the left-hand side of \rf{110} by $F(x)$. Since $f(x)$ and $f(x)N(x)$ are in $L^1[-1,1]$ and $N(x)$ is increasing, there is a set of full measure in $(-1,1)$ where $f_1(x)$, $F(x)$ and $N'(x)$ all exist. Let $x$ be chosen from this set. It follows that $f_c(x)$ exists for all $c\in[-1,1]\setminus\{x\}$. Also, \begin{eqnarray}\label{178} F(x) = \lim_{\epsilon \to 0^+} \Big( \int_ {-1}^{x-\epsilon } {f(t)N(t)\over t-x}dt + \int_{x+\epsilon }^1 {f(t)N(t)\over t-x}dt \Big). \end{eqnarray} $t \to f_t(x)$ is a continuous increasing function on $[-1,x)$ and a continuous decreasing function on $(x,1]$, so in \rf{178} we can use integration by parts to get $$ \int_{-1}^{x-\epsilon} + \int_{x+\epsilon }^1 = -f_{x-\epsilon }(x)N(x-\epsilon ) + f_{-1}(x)N(-1) + \int_{(-1,x-\epsilon ]} f_t(x)dN(t) $$ \begin{eqnarray*}\label{} +f_{x+\epsilon }(x)N(x+\epsilon ) - f_{1}(x)N(1) + \int_{(x+\epsilon ,1]} f_t(x)dN(t). \end{eqnarray*} Above, $f_{-1}(x)=0$ and $$ f_{x+\epsilon }(x)N(x+\epsilon ) - f_{x-\epsilon }(x)N(x-\epsilon ) $$ \begin{eqnarray}\label{120} = [f_{x+\epsilon }(x) - f_{x-\epsilon }(x)]N(x+\epsilon ) +f_{x-\epsilon }(x) [N(x+\epsilon ) -N(x-\epsilon ) ]. \end{eqnarray} Note that \begin{eqnarray*}\label{} f_{x+\epsilon }(x) - f_{x-\epsilon }(x) = -PV \int_{x-\epsilon }^{x+\epsilon } {f(t) \over t-x}dt\to 0 \quad \hbox{ as } \epsilon\to 0^+, \end{eqnarray*} since $f_1(x)$ exists. Also, $0 \le f_{x-\epsilon }(x) \le c_1\log(1/\epsilon )$, $0<\epsilon<1$, which implies that the second term in \rf{120} tends to zero (since $N$ is differentiable at $x$).
Putting these together, we get that on one hand, \begin{eqnarray}\label{121} \lim_{\epsilon \to 0^+} \Big( \int_{(-1,x-\epsilon ]} f_t(x)dN(t) + \int_{(x+\epsilon,1]} f_t(x)dN(t) \Big) \end{eqnarray} exists and equals $F(x)+f_{1}(x)N(1)$, and on the other hand, \rf{121} equals \begin{eqnarray}\label{122} \int_{ (-1,1]\setminus\{x\} } f_t(x)dN(t) = \int_{(-1,1]} f_t(x)dN(t) \end{eqnarray} by the monotone convergence theorem (which can be used because $c \to f_c(x)$ is bounded from below on $[-1,1]$, as $f_1(x)$ is finite). The continuity of $N$ at $x$ allowed us to integrate over the whole $(-1,1]$ in \rf{122}. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \begin{lemma}\label{127} Let $[a,b]$ be arbitrary and let $1<\lambda $ be chosen to satisfy the conclusion of Lemma \ref{30}. Then $V_\lambda(x)$ has smooth integral on $[a,b]$. \end{lemma} \noindent {\bf Proof.}\ To keep the notation simple we will assume that $-1<a<b<1$, and $a_\lambda =-1$, $b_\lambda =1$, that is, the support of $\mu_{W^\lambda }$ is $[-1,1]$. This can be done without loss of generality. Define \begin{eqnarray*}\label{} v(t) := {\lambda \sqrt{1-t^2}e^{-Q(t)}\over \pi^2\sqrt{1-x^2}} \ \hbox{ and } \ M(t):=\lim_{s\to t^+} e^{Q(s)}Q'(s), \end{eqnarray*} where $v(t)$ also depends on the choice of $x$. Note that $M(t)$, $t\in[-1,1]$, is a bounded, increasing, right-continuous function which agrees with $\exp(Q(t))Q'(t)$ almost everywhere. Applying Lemma \ref{140} to $f(t):=v(t)$ and $N(t):=M(t)$, let us fix a value $x \in [a,b]$ for which both \rf{110} and $d\mu_{W^\lambda }(x) = V_\lambda (x) dx$ are satisfied. (These are satisfied almost everywhere.)
From \rf{90} and Lemma \ref{140} we have $$ V_\lambda (x) = {1 \over \pi\sqrt{1-x^2}} + PV \int_{-1}^1 {\lambda \sqrt{1-t^2}Q'(t) \over \pi^2\sqrt{1-x^2}(t-x)}dt $$ \begin{eqnarray*}\label{} = {1 \over \pi\sqrt{1-x^2}} + PV \int_{-1}^1 {v(t)M(t)\over t-x}dt = L(x) + \int_{(-1,1 ]} v_t(x)dM(t), \end{eqnarray*} where $L(x):= 1 / (\pi\sqrt{1-x^2}) - M(1)B(x) $. Let $0<\iota$. Since $L(x)$ is a continuous function on $[a,b]$ (see Remark \ref{146}), $L(x)_\iota^+$ and $L(x)_\iota^-$ have smooth integrals on $[a,b]$. Also, by Lemma \ref{79} ${\cal F}^+$ and ${\cal F}^-$ have uniformly smooth integrals on $[a,b]$, so both $$ V_\lambda (x)_{(\iota)}^{(+)} := L(x)_\iota^+ + \int_{(-1,1 ]} v_t(x)_\iota^+ dM(t) \quad \hbox{ and } $$ \begin{eqnarray*}\label{} V_\lambda (x)_{(\iota)}^{(-)} := L(x)_\iota^- + \int_{(-1,1 ]} v_t(x)_\iota^- dM(t) \end{eqnarray*} have smooth integrals on $[a,b]$. (These new functions are not to be confused with $V_\lambda (x)_{\iota}^{+}$ and $V_\lambda (x)_{\iota}^{-}$.) Set \begin{eqnarray*}\label{} V_\lambda (x)_{(\iota)} := V_\lambda (x)_{(\iota)}^{(+)} - V_\lambda (x)_{(\iota)}^{(-)}. \end{eqnarray*} Then, using $| z_\iota^+ - z_\iota^- - z| \le \iota, \ z\in\rr$, we get $$ | V_\lambda(x)_{(\iota)} - V_\lambda (x)| \le |L(x)_\iota^+ - L(x)_\iota^- - L(x)| + \int_{(-1,1]} |v_t(x)_\iota^+ - v_t(x)_\iota^- - v_t(x)| dM(t) $$ \begin{eqnarray}\label{80} \le \iota + \int_{(-1,1]} \iota dM(t) = \iota (1+M(1)-M(-1)). \end{eqnarray} So \begin{eqnarray}\label{105} V_\lambda(x)_{(\iota)} \to V_\lambda (x) \ \hbox{ uniformly a.e. on } [a,b] \hbox{ as } \iota \to 0^+. \end{eqnarray} And since \begin{eqnarray}\label{81} V_\lambda (x) \ \hbox{ has positive lower bound a.e. on } \ [a,b], \end{eqnarray} $V_\lambda (x)_{(\iota)}$ also has a positive lower bound a.e. on $[a,b]$, assuming $\iota$ is small enough. In addition, $v_t(x)\ge 0$ when $t\in[-1,x]$, whereas $v_t(x)\ge B(x)\ge -||B||_{[a,b]} $ when $t\in(x,1]$, so $V_\lambda (x)_{(\iota)}^{(-)}$ is bounded a.e.
on $[a,b]$. It follows that $V_\lambda (x)_{(\iota)}^{(-)} \le (1-\eta)V_\lambda(x)_{(\iota)}^{(+)}$ a.e. $x\in [a,b]$ for some $\eta\in(0,1)$. Applying Lemma \ref{78} we conclude that $V_\lambda (x)_{(\iota)}$ has smooth integral on $[a,b]$ (if $\iota$ is small enough). Therefore $V_\lambda (x)$ has smooth integral by \rf{105} and \rf{81}. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} Approximation by weighted polynomials with varying weights was introduced by Saff (\cite{saff}). In our proof we shall utilize the strong connection between weighted polynomials and homogeneous polynomials on the plane. It was proved by Kuijlaars (\cite{kuijlaars}, see also \cite{st}, Theorem VI.1.1) that when $\alpha=0$ in \rf{170} there exists a closed set $Z(w)\subset\rr$ with the property that a continuous function $f(x), \ x\in\rr$, is the uniform limit of weighted polynomials $w^nP_n$ $(n=0,1,2,...)$ on $\rr$ if and only if $f(x)$ vanishes on $Z(w)$. We formulate the following version of this theorem. \begin{lemma}\label{39} Assume that $\alpha>0$ in \rf{170}. Then there exists a closed set $ Z_\rrr(w) $ such that a continuous function $f(x), \ x\in\rrr$, is the uniform limit of weighted polynomials $ w^n p_n$ $(n=0,2,4,...)$ on $\rrr$ if and only if $f(x)$ vanishes on $Z_\rrr(w)$. \end{lemma} \noindent {\bf Proof.}\ Let $X:=\rrr$. Note that $w^n p_n$ is continuous on $\rrr$ when $n$ is even. (Naturally the value $(w^n p_n)(\infty)$ is defined to be $\lim_{x\to \pm \infty} (w^n p_n)(x)$.) Let ${\cal A}$ be the collection of continuous functions $f$ on $X$ such that $w^n p_n \to f$ $(n=0,2,4,...)$ uniformly on $X$ for some polynomials $p_n$. Define the set $ Z_\rrr(w) :=\{x\in X: \ f(x)=0 \hbox{ for all } f\in {\cal A} \}$, which is certainly closed. It is easy to see (as in \cite{st}, Theorem VI.1.1) that ${\cal A}$ is an algebra which is closed under uniform limits.
Also, it separates points in the sense that if $x_1, x_2 \in X\setminus Z_\rrr(w) $ are two distinct points, then there exists $f \in {\cal A} $ such that $f(x_1)\not= f(x_2)$. Indeed, let us assume that, say, $x_2$ is finite, and let $g \in {\cal A}$ be such that $g(x_1)\not=0$. Let $w^n p_n \to g$ $(n=0,2,4,...)$ uniformly on $X$. Then $w^{n+2}(x)[(x-x_2)^2 p_n(x)] \to w^2(x)(x-x_2)^2g(x)=:f(x)$ $(n=0,2,4,...)$ uniformly on $X$ because $||w^2(x)(x-x_2)^2||_\rrr < +\infty$. Thus $f \in{\cal A} $, and $f(x_1)\not=0=f(x_2)$ (which holds even if $x_1$ is infinite). Since ${\cal A}$ satisfies the properties above, by the Stone-Weierstrass Theorem \begin{eqnarray*} {\cal A} = \{f: \ f \hbox{ is continuous on } X \hbox{ and } f\equiv 0 \hbox{ on } Z_\rrr(w) \}. \end{eqnarray*} \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} We now restate Theorem \ref{k10} and prove it. \begin{theorem}\label{201} For a weight satisfying \rf{42} and \rf{43} we have $ Z_\rrr(W) =\emptyset.$ That is, any continuous function $g:\rrr\to\rr$ can be uniformly approximated by weighted polynomials $W^n p_n$ $(n=0,2,4,...)$ on $\rrr$. \end{theorem} \noindent {\bf Proof.}\ Let $x_0\in\rrr$. We show that $x_0\not\in Z_\rrr(W)$. First let us assume that $x_0$ is finite. Choose $J:=[a,b]$ such that $a<x_0<b$. Let $f(x)$ be a continuous function which is zero outside $J$ and $f(x_0)\not= 0$. Let $1<\lambda =u/v$ ($u,v\in\mathbb N^+$) be a rational number for which the conclusion of Lemma \ref{30} holds. Now we use a powerful theorem of Totik. Since $V_\lambda $ has a positive lower bound a.e. on $J$ and it has smooth integral on $J$ (see Lemma \ref{127}), by \cite{totik}, Theorem 1.2, $(a,b)\cap {Z}(W^\lambda )=\emptyset$. So we can find $P_n$ $(n=0,1,2,...)$ such that $(W^\lambda )^nP_n \to f$ uniformly on $\rrr$.
So for $n:=Nv$ we have \begin{eqnarray}\label{31} W^{Nu} p_{Nu} \to f, \quad N=0,1,2,..., \ \hbox{ uniformly on } \ \rrr, \end{eqnarray} where $p_{Nu}:=P_{Nv}$ and $\deg (p_{Nu})\le Nv \le Nu$. For each fixed $s\in\{0,...,u-1 \}$, if we approximate $f/W^s$ instead of $f$ in \rf{31}, it easily follows that there exist $p_k$ $(k=0,1,2,...)$ such that \begin{eqnarray}\label{38} W^{k} p_{k} \to f, \quad k=0,1,2,..., \ \hbox{ uniformly on } \ \rrr. \end{eqnarray} Using only $k=0,2,4,...$, we get $x_0\not\in Z_\rrr(W)$ by Lemma \ref{39}. Now let $x_0=\infty$. Define $$W_0(x):={1 \over |x|} W(-{1 \over x}).$$ Note that $1/W_0(x) \ (= |x|/W(-1/x) )$ and $|x|/W_0(-1/x) \ (= 1/W(x))$ are positive and convex functions because $W$ satisfies \rf{43} and \rf{42}. Let $g$ be a continuous function on $\rrr$. Define $-1/\infty$ to be $0$ and $-1/0$ to be $\infty$. (So $g(x)$ is continuous on $\rrr$ if and only if $g(-1/x)$ is continuous on $\rrr$.) Observe that for some $p_n$ we have \noindent $W^n(x)p_n(x) \to g(x)$ $(n=0,2,4,...)$ uniformly on $\rrr$, iff \noindent $W^n(-1/x)p_n(-1/x) \to g(-1/x)$ $(n=0,2,4,...)$ uniformly on $\rrr$, iff \noindent ${W_0}^n(x) q_n(x) \to g(-1/x)$ $(n=0,2,4,...)$ uniformly on $\rrr$, \noindent where $q_n(x):=x^n p_n(-1/x)$ are polynomials, $\deg q_n \le n$. Now let $f(x)$ be a continuous function on $\rrr$ which is zero in a neighborhood of $0$ but $f(\infty)\not=0$. By what we have already proved, there exist polynomials $q_n$ such that ${W_0}^n(x)q_n(x)$ $(n=0,2,4,...)$ tends to $f(-1/x)$ uniformly. Therefore we can approximate $f(x)$ uniformly by $W^n(x)p_n(x)$ $(n=0,2,4,...)$, where $p_n(x):=x^n q_n(-1/x)$. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \begin{lemma}\label{157} Let $f(x,y), \ (x,y) \in Bd(K) $, be a continuous function such that $f(x,y)=f(-x,-y)$ for all $(x,y)\in Bd(K) $.
Then homogeneous polynomials $$ h_n(x,y) := \sum_{k=0}^n a_k^{(n)} x^{n-k}y^k, \quad n=0,2,4,...$$ exist such that $h_n(x,y) \to f(x,y)$ $(n=0,2,4,...)$ uniformly on $ Bd(K) $. \end{lemma} \noindent {\bf Proof.}\ Recall the definition: $y(t)/x(t)=t, \ t\in\rrr,$ where $(x(t),y(t))\in Bd(K) $ and $W(t):= |x(t)|$. Define $$f(t):=f(x(t),y(t))=f(-x(t),-y(t)), \quad t\in\rrr.$$ Note that if $n$ is an even number (and $a_k^{(n)}$ are unknowns) then $$\sum_{k=0}^n a_k^{(n)} x^{n-k}(t) y^k(t) = x^n(t) \sum_{k=0}^n a_k^{(n)} \Big({y(t) \over x(t)} \Big)^k$$ \begin{eqnarray}\label{202} = |x(t)|^n \sum_{k=0}^n a_k^{(n)} t^k =W^n(t) p_n(t), \end{eqnarray} where $p_n(t):=\sum_{k=0}^n a_k^{(n)} t^k$, $\deg p_n \le n$. (When $t=\infty$, the left hand side of \rf{202} again equals $(W^n p_n)(\infty) := \lim_{t\to \pm \infty} W^n(t)p_n(t)$.) But by Theorem \ref{201} there exist $W^n(t)p_n(t)$ $(n=0,2,4,...)$ which tend to $f(t)$ uniformly on $\rrr$. This completes the proof, since for any $(x,y) \in Bd(K) $ there exists $t\in\rrr$ such that either $(x(t),y(t))=(x,y)$ or $(-x(t),-y(t))=(x,y)$. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt} \noindent {\bf Proof of Theorem \ref{k2}.} \noindent Define $f(x,y):=1, \ (x,y) \in Bd(K)$. By Lemma \ref{157} there exist $h_{2n} \in H_{2n}^2$, $n \in \mathbb{N}$, such that $||1-h_{2n}||_{ Bd(K) } \to 0$. From here Theorem \ref{k2} follows in the same way that Theorem \ref{k1} follows from Lemma \ref{k3}. \hfill \hspace{-6pt}\rule[-14pt]{6pt}{12pt}
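\medskip \noindent As a simple illustration of Theorem \ref{k2} (included here only as a sanity check, and not used elsewhere), suppose $Bd(K)$ is the ellipse $x^2/A^2 + y^2/B^2 = 1$. Then the constant function $1$ is represented exactly, not merely approximated, by the single homogeneous polynomial $$ h_{2n}(x,y) = \Big( {x^2 \over A^2} + {y^2 \over B^2} \Big)^{n}, \quad (x,y)\in Bd(K), $$ for every $n$, so in this degenerate case no approximation argument is needed.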
https://arxiv.org/abs/1604.00592
The isoperimetric problem in the plane with the sum of two Gaussian densities
We consider the isoperimetric problem for the sum of two Gaussian densities in the line and the plane. We prove that the double Gaussian isoperimetric regions in the line are rays and that if the double Gaussian isoperimetric regions in the plane are half-spaces, then they must be bounded by vertical lines.
\section{Introduction} Sudakov-Tsirelson and Borell proved independently (see \cite[18.2]{morgan}) that for $\mathbb{R}^n$ endowed with a Gaussian measure, half-spaces bounded by hyperplanes are isoperimetric, i.e., minimize weighted perimeter for given weighted volume. Ca\~{n}ete et al. \cite[Question 6]{canete}, in response to a question of Brancolini, conjectured that for $\mathbb{R}^n$ endowed with a finite sum of Gaussian measures centered on the $x$-axis, half-spaces bounded by vertical hyperplanes are isoperimetric. We consider the case of two such Gaussians in $\mathbb{R}^1$ or $\mathbb{R}^2$. Our Theorem \ref{2.18} proves that on the double Gaussian line, rays are isoperimetric. Section 4 provides evidence that on the double Gaussian plane, half-spaces are isoperimetric. \begin{DGL} Theorem \ref{2.18} states that the isoperimetric regions in the double Gaussian line are rays. We may assume that the two Gaussians have centers at $1$ and $-1$. For small variances, the theorem follows by comparison with the single Gaussian. For larger variances, additional quantitative and stability arguments are needed to rule out certain non-ray cases. \end{DGL} \begin{DGP} A conjecture, stated in this paper as Conjecture \ref{4.1}, of Ca\~{n}ete et al. \cite[Question 6]{canete} states that isoperimetric regions in the double Gaussian plane are half-planes bounded by vertical lines. We use variational arguments to show that horizontal and vertical lines are the only lines that are candidates, and that vertical lines always beat horizontal lines. \end{DGP} \section{First and Second Variations} Formulas \ref{2.3} and \ref{2.5} state standard first and second variation formulas, analogous to the first and second derivative conditions for local minima of twice-differentiable real functions. \begin{definition} \label{2.31} A \textit{density} $e^\psi$ on $\mathbb{R}^n$ is a positive, continuous function used to weight volume and hypersurface area. 
Given a density $e^\psi$, the (weighted) \textit{volume} of a region $R$ is given by $$\int_R e^\psi\, dV_0.$$ The (weighted) \textit{hypersurface area} of its boundary $\partial R$ is given by $$\int_{\partial R} e^\psi\, dA_0.$$ $R$ is called \textit{isoperimetric} if no other region of the same weighted volume has a boundary with smaller hypersurface area. \end{definition} We now assume that the density $e^\psi$ is smooth. The existence and regularity of isoperimetric regions for densities of finite total volume are standard. \begin{exisreg}[{see \cite[5.5, 9.1, 8.5]{morgan}}] \label{2.32} Suppose that $e^\psi$ is a density in the line or plane such that the line or plane has finite measure $A_0$. Then for any $0 < A < A_0$, an isoperimetric region $R$ of weighted volume $A$ exists and is a finite union of intervals bounded by finitely many points in the line or a finite union of regions with smooth boundaries in the plane. \end{exisreg} Let $e^\psi$ be a smooth density on $\mathbb{R}^{n+1}$. Let $R$ be a smooth region in $\mathbb{R}^{n+1}$. Let $\varphi_t$ be a smooth, one-parameter family of deformations on $\mathbb{R}^{n+1}$ such that $\varphi_0$ is the identity. For a given $x \in \partial R$, $\varphi_t(x)$ traces out a small path in $\mathbb{R}^{n+1}$ beginning at $x$, and $\varphi_t(\partial R)$ is a curve for each $t$. Therefore $\{\varphi_t\}$, where $|t| < \epsilon$, describes a perturbation of $\partial R$. Define $$V(t) = \int_{\varphi_t(R)} e^\psi dV_0 ,\,\, P(t) = \int_{\varphi_t(\partial R)} e^\psi dA_0.$$ \begin{firstvar}[{see \cite[Lemma 3.1]{rosal}}] \label{2.3} Suppose that $\mathbf{n}$ and $H$ are the inward unit normal and mean curvature of $\partial R$. Let $X$ be the vector field $d\varphi_t/dt$ and $u = \langle X, \mathbf{n} \rangle$.
Then we have that $$V'(0) = -\int_{\partial R} e^\psi u \, dA_0, \,\, P'(0) = -\int_{\partial R} (nH - \langle \nabla{\psi}, \mathbf{n} \rangle) e^\psi u \, dA_0.$$ \end{firstvar} Since any isoperimetric curve is a local minimum among all curves enclosing a certain volume $A$, it satisfies $P'(0) = 0$ for any $\varphi_t$ such that $V(t) = A$ for small $t.$ \begin{corollary} \label{2.4} If a curve $\partial R$ is isoperimetric, then $(nH - \langle \nabla{\psi}, \mathbf{n} \rangle)$ is constant on $\partial R$. \end{corollary} \begin{proof} If a curve $\partial R$ is isoperimetric, then it satisfies $P'(0) = 0$. By Formula \ref{2.3}, this occurs if and only if $(nH - \langle \nabla{\psi}, \mathbf{n} \rangle)$ is constant on $\partial R$. \end{proof} \begin{definition} \label{4.4} Let $C$ be a boundary in the line or plane with unit inward normal $\textbf{n}$ and let $\kappa$ denote the standard curvature. For a density $e^\psi$, we call $\kappa_\psi = \kappa - d\psi/d\textbf{n}$ the \textit{generalized curvature} of $C$. \end{definition} By Corollary \ref{2.4}, all isoperimetric curves have constant generalized curvature. In the real line, $n=0$, so that isoperimetric boundaries have $\langle \nabla \psi, \mathbf{n} \rangle$ constant. For the interval $[a,b]$, the generalized curvature evaluated at $b$ is equal to $\psi ' (b)$, while the generalized curvature evaluated at $a$ is equal to $-\psi ' (a)$.
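For instance (a worked special case, included only as an illustration), for a single standard Gaussian density we may take $\psi(x) = -x^2/2$ up to an additive constant, so that $\psi'(x) = -x$. The boundary $\{a,b\}$ of an interval $[a,b]$ then has generalized curvature $$\kappa_\psi(b) = \psi'(b) = -b, \qquad \kappa_\psi(a) = -\psi'(a) = a,$$ and constancy forces $a = -b$. Thus for a single Gaussian the only candidate two-point boundaries are those symmetric about the origin, in addition to single points bounding rays.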
\begin{secondvar}[{see \cite[Proposition 3.6]{rosal}}] \label{2.5} Let the real line be endowed with a smooth density $e^\psi$. If a one-dimensional boundary $l = \partial R$ satisfies $P'(0) = 0$ for any volume-preserving $\{\varphi_t\}$, then $$(P - \kappa_\psi V)''(0) = \int_l fu^2\Big(\frac{d^2\psi}{dx^2}\Big) \,da.$$ \end{secondvar} \begin{proof} This formula comes from Proposition 3.6 in \cite{rosal}, where the second variation is stated for arbitrary dimensions. Some terms from the general formula cancel in the one-dimensional case. \end{proof} \begin{corollary} \label{2.7} Let $S$ be a subset of the real line such that $\psi '' (x) \leq 0$ for all $x \in S$, with equality holding at no more than one point. If $B$ is an isoperimetric boundary contained in $S$, then $B$ is connected and thus a single point. \end{corollary} \begin{proof} If $B$ has at least two connected components, then since by Proposition \ref{2.32} $B$ consists of a finite union of points, there is a nontrivial volume-preserving flow on $B$ given by moving one component so as to increase the volume and the other so as to decrease it. By the Second Variation Formula \ref{2.5}, the second variation satisfies $$(P - \kappa_\psi V)''(0) = \int_B fu^2 \psi '' (x) \,da < 0.$$ This contradicts that $B$ is isoperimetric. \end{proof} \section{Isoperimetric Regions on the Double Gaussian Line} Theorem \ref{2.18} states that for the real line with density given by the sum of two Gaussians with the same variance $a^2$, isoperimetric regions are rays bounded by single points. This theorem is a necessary condition for Conjecture \ref{4.1}, which states that isoperimetric regions in the double Gaussian plane are half-planes bounded by vertical lines. Propositions \ref{2.8}, \ref{2.22}, and \ref{2.10} treat the cases $a^2 \geq 1$, $1 > a^2 > 1/2$, and $1/2 \geq a^2 > 0$.
Lemma \ref{2.9} shows that the assumption that the Gaussians have the same variance allows us to reduce the problem to ruling out a few non-interval but still symmetrical cases. When the Gaussians have different variances, the problem is harder and not treated by our results. Let $g_{c,a}$ denote the Gaussian density with mean $c$ and variance $a^2$, and let $$f_{c,a}(x) = \dfrac{1}{2}\left(\dfrac{e^{-(x-c)^2/2a^2} + e^{-(x+c)^2/2a^2}}{a\sqrt{2\pi}}\right) = \dfrac{1}{2}(g_{c,a}(x) + g_{-c,a}(x)).$$ Let $$f(x) = \frac{1}{2}(f_1(x) + f_2(x)) = \frac{1}{2}(g_{1,a}(x) + g_{-1,a}(x)).$$ In one dimension, the regions are unions of intervals and their boundaries are points. Since the total measure is finite, isoperimetric regions exist by Proposition \ref{2.32}. For a given weighted length $A$, we seek to find the set of points with the smallest total density which bounds a region of weighted length $A$. Since the complement of a region of weighted length $A$ has weighted length $1-A$, we can assume that our regions have weighted length $0 \leq A \leq 1/2$. The following proposition shows that it suffices to consider the density $f$. \begin{proposition} \label{2.1} Suppose that $B$ is an isoperimetric boundary enclosing a region $L$ of weighted length $A$ for the density $f_{1,a}(x)$. Then for any $b > 0$, $bB$ is an isoperimetric boundary enclosing the region $bL$ of weighted length $A$ for the density $f_{b,ab}(x).$ \end{proposition} \begin{proof} Let $g$ denote the standard Gaussian density. First, we show that for any boundary $P$ enclosing a region $Q$, the weighted length of $bQ$ for the density $f_{b,ab}(x)$ is the same as the weighted length of $Q$ for the density $f_{1,a}(x)$.
We have that $$|Q| = \int_Q f_{1,a}(x) dx = \frac{1}{2}\int_Q g_{1,a}(x) dx + \frac{1}{2}\int_Q g_{-1,a}(x) dx$$ $$= \frac{1}{2}\int_{(Q-1)/a} g(x) dx + \frac{1}{2}\int_{(Q+1)/a} g(x) dx = \frac{1}{2}\int_{(bQ-b)/(ab)} g(x) dx + \frac{1}{2}\int_{(bQ+b)/(ab)} g(x) dx $$ $$=\frac{1}{2}\int_{bQ} g_{b,ab}(x) dx + \frac{1}{2}\int_{bQ} g_{-b,ab}(x) dx = |bQ|,$$ where $|\cdot|$ denotes the weighted length in the appropriate density. Second, for any two boundaries $P_1$ and $P_2$, we have that $f_{b,ab}(bx) = \frac{1}{b}f_{1,a}(x)$ for $x \in P_i.$ Thus, $|P_1| \geq |P_2|$ in the density $f_{1,a}(x)$ exactly when $|bP_1| \geq |bP_2|$ in the density $f_{b,ab}(x)$. Therefore $|bL| = A$ in the density $f_{b,ab}(x)$, and if any other boundary $P$ enclosing a region $Q$ satisfies $|Q| = A$ in the density $f_{b,ab}(x)$, then since $B$ is isoperimetric, we have that $|B| \leq |P/b|$ in the density $f_{1,a}(x)$. Therefore $|bB| \leq |P|$ in the density $f_{b,ab}(x)$, so that $bB$ is isoperimetric. \end{proof} As a result of Proposition \ref{2.1}, it suffices to consider the density $$f = \frac{1}{2}(f_1 + f_2) = \frac{1}{2}(g_{1,a} + g_{-1,a}).$$ \begin{figure}[h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{f_plot.png} \caption{$f(x)$} \end{subfigure}% \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{p_plot.png} \caption{$\psi(x)$} \end{subfigure} \caption{Plots of $f$ and $\psi$. The purple curves are for $a^2 = 0.16,$ the blue curves for $a^2 = 1/2$, and the green curves for $a^2 = 1$.} \label{fg2.31} \end{figure} \begin{proposition} \label{2.2} Let $X$ be the disjoint union of two real lines $X_1$ and $X_2$, each with a standard Gaussian density scaled so that it has weighted length $1/2$.
For any given length $0 < A < 1/2$, the isoperimetric region in $X$ of length $A$ is a ray contained entirely in $X_1$ or $X_2.$ \end{proposition} \begin{proof} Let $B$ be an isoperimetric boundary and $B_i$ its intersection with $X_i$. If $B_i$ is nonempty, then it must be a single point, since the isoperimetric boundaries for the single Gaussian are always single points. Assume, in contradiction to the proposition, that for $i=1,2$, $B_i = \{b_i\}$ is a single point on the $i$-th Gaussian bounding a ray $L_i$ of weighted length $A_i$. Since $A_1 + A_2 < 1/2$, it is possible to put a point $b_1'$ on the first Gaussian at the same height as that of $b_2$, bounding a ray $L_1'$ disjoint from $L_1$ and with weighted length $A_2$. Consider the boundary $B' = \{b_1,b_1'\}$, which has the same weighted perimeter as that of $B$. There exists a single point on $X_1$ bounding a ray of weighted length $A$ and with weighted density smaller than $|B'| = |B|$. This contradicts the fact that $B$ is isoperimetric. \end{proof} \begin{proposition} \label{2.33} For the double Gaussian density $f$, the log derivative $\psi'$ is given by $$\psi ' (x) = a^{-2}(-x + \tanh\frac{x}{a^2}).$$ \end{proposition} \begin{proof} We have that $$\psi'(x) = \dfrac{\dfrac{-e^{-\frac{(-1+x)^2}{2a^2}}(-1+x)}{a^2} + \dfrac{-e^{-\frac{(1+x)^2}{2a^2}}(1+x)}{a^2}}{e^{-\frac{(-1+x)^2}{2a^2}} + e^{-\frac{(1+x)^2}{2a^2}}}.$$ By using the substitution $$\tanh(x/a^2) = (e^{x/a^2} - e^{-x/a^2})/(e^{x/a^2} + e^{-x/a^2}),$$ we get that $$\psi ' (x) = a^{-2}(-x + \tanh\frac{x}{a^2}).$$ \end{proof} \begin{proposition} \label{2.8} For the double Gaussian density $f$, if $a \geq 1$, isoperimetric boundaries are single points.
\end{proposition} \begin{proof} For any given $a$, we have that $$\psi ' (x) = a^{-2}(-x + \tanh\frac{x}{a^2}),$$ $$\psi '' (x) = a^{-4}(-a^2+\sech^2\frac{x}{a^2}),$$ and $$\psi ''' (x) = -2a^{-6}\sech^2\frac{x}{a^2}\tanh\frac{x}{a^2}.$$ \begin{figure}[h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{p1_plot.png} \caption{$\psi'(x)$} \end{subfigure}% \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{p2_plot.png} \caption{$\psi''(x)$} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{p3_plot.png} \caption{$\psi'''(x)$} \end{subfigure} \caption{Plots of $\psi',\psi'',$ and $\psi'''$. The purple curves are for $a^2 = 0.16,$ the blue curves for $a^2 = 1/2$, and the green curves for $a^2 = 1$.} \label{fg2.32} \end{figure} As shown in Figure \ref{fg2.32}, $\psi ''' (x)$ is positive for any $x< 0$ and negative for $x > 0$, so that $\psi '' (x)$ achieves its unique maximum at $x = 0$ for any given $a$. We have that $\psi ''(0) = (1-a^2)/a^4$, so that $\psi''(0)$ is greater than $0$ for $a < 1$ and less than or equal to $0$ for $a \geq 1$. If $a \geq 1$, by Corollary \ref{2.7}, isoperimetric boundaries are always connected. Since isoperimetric boundaries consist of finite unions of points, they must be single points. \end{proof} \begin{lemma} \label{2.9} Let $p$ and $q$ be two real functions with $p(0) = q(0)$. Suppose $p$ and $q$ satisfy \begin{enumerate} \item $p'(0) = q'(0) \geq 0$, \item $q''(0) \geq p''(0)$, \item $q''(0) \geq 0$, and \item $p''' < 0$ and $q''' > 0$ on $(0, \infty)$. \end{enumerate} For any $a, b> 0$, if $p(a) = q(b)$, then $q'(b) > p'(a)$. \end{lemma} \begin{figure}[h] \includegraphics[height=3in]{pic2.png} \caption{$q'$ is blue while $p'$ is purple. When the areas are equal as in the picture, $q'$ is higher.
} \label{fg2.3} \end{figure} \begin{proof} As in Figure \ref{fg2.3}, for all $x > 0$, by (2) and (4) $q''(x) > p''(x)$, and by (3) and (4) $q''(x) > 0$. If we choose $a'$ so that $q'(a') = p'(a)$, we will have that $a' < a$. Since by (4) $p'$ is concave and $q'$ is convex, \[ q(a') = \int_0^{a'} q'(t)\,dt \leq \frac{1}{2}\, a' q'(a')\] \[< \frac{1}{2}\, a p'(a) \leq \int_0^{a} p'(t)\,dt = p(a) = q(b).\] Therefore $b > a'$, so that $q'(b) > p'(a)$, as asserted. \end{proof} \begin{proposition} \label{2.12} Suppose that $[s,t]$ is an interval of $f$-weighted length $0 < A < 1/2$ with $-1 < s < t < 1$. Then there exists a union of rays $B = (-\infty, c] \cup [d,\infty)$ of $f_1$-weighted length $A$ such that $f_1(c) < f_1(t) < f(t)$ and $f_1(d) < f_2(s) < f(s).$ \end{proposition} \begin{figure}[h] \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{int1.png} \caption{Interval in the double Gaussian} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{int2.png} \caption{Two rays in the single Gaussian} \end{subfigure} \caption{The total areas are the same, but the heights in (B) are slightly lower.} \label{fg2.7} \end{figure} \begin{proof} Since $2+s = 1 + (s-(-1))$, we have that $f_1(2+s) = f_2(s)$. The union of rays $(-\infty,t] \cup [2+s,\infty)$ has greater $f_1$-weighted length than the $f$-weighted length of $[s,t]$. Therefore there exist $c < t$ and $d > 2+s$ such that $(-\infty,c] \cup [d,\infty)$ has $f_1$-weighted length $A$, and $$f_1(c) + f_1(d) < f_1(t) + f_2(s) < f(t) + f(s).$$ \end{proof} \begin{proposition} \label{2.13} If $[s,\infty)$ has $(1/2)f_1$-weighted length $0 < A \leq 1/4$, then there exists $t > s$ such that $[t, \infty)$ has $f$-weighted length $A$. \end{proposition} \begin{figure}[h] \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{ray1.png} \caption{Ray in the single Gaussian} \label{fig:sub1} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{ray2.png} \caption{Ray in the double Gaussian} \label{fig:sub2} \end{subfigure} \caption{The total areas are the same, but the height in (B) is slightly lower.} \label{fg2.9} \end{figure} \begin{proof} If $[s,\infty)$ has $(1/2)f_1$-weighted length $0 < A \leq 1/4$, then $s \geq 1.$ The interval $[s, \infty)$ then has $f$-weighted length greater than $A$. Therefore there exists $t > s$ such that $[t, \infty)$ has $f$-weighted length $A$. \end{proof} Now we begin analyzing the case where the variance satisfies $0 < a^2 < 1$. \begin{proposition} \label{3.36} If $a^2$ satisfies $0 < a^2 \leq 1$, then $\psi''(x) = 0$ exactly when $x = \pm a^2\arccosh (1/a)$.
\end{proposition} \begin{proof} This follows from the formula for $\psi''(x)$ given in Proposition \ref{2.8}. \end{proof} In what follows, for a variance $a^2$ we will use the quantity \[ c_a = a^2\arccosh (1/a). \] \begin{proposition} \label{2.14} Suppose that $0 < a^2 \leq 1$ and $B$ is an isoperimetric boundary with at least one point $s$ in $[0,c]$, where $c = c_a$, enclosing a region of weighted length $0 < A < 1/2$. Then the boundary $B$ is one of the following: \begin{enumerate} \item a single point $s$ enclosing the ray $[s, \infty)$, \item $\{s, t\}$ where $t > s$, enclosing the interval $[s,t]$, \item $\{s, t\}$ where $s > 0 > t$, enclosing the interval $[t,s]$, \item $\{s, -s, t\}$ enclosing $[-s,s] \cup (-\infty, t]$, $[-s,s] \cup [t,\infty)$, or $[s,t] \cup (-\infty, -s]$. \end{enumerate} The analogous claims apply if $s \in [-c,0]$. \end{proposition} \begin{figure}[h] \includegraphics[height=2in]{points2.png} \caption{On the graph of $\psi = \log f$, there are at most three points with $x > 0$ with the same value of $|\psi'(x)|$.} \label{fg2.10} \end{figure} \begin{proof} Since $B$ is isoperimetric, it can contain at most one point $x$ at which $\psi''(x) < 0.$ If it contained two such points, then by slightly shifting the two points we could create a new region with the same weighted length. By the second variation formula, the boundary of this region would have a smaller total density. Therefore $B$ can contain at most one point outside of $[-c,c]$. In addition, $B$ has constant generalized curvature, so that $|\psi'|$ is constant on $B$ (see Figure \ref{fg2.10}). Since $\psi''$ is positive on $[0, c)$ and negative on $(c, \infty)$, there exists one point $t > s > 0$ such that $\psi'(t) = \psi'(s)$ and one point $u > t > s > 0$ such that $-\psi'(u) = \psi'(s).$ Therefore $B$ is a subset of $\{s, t, u, -s, -t , -u\}.$ Suppose $B$ is not $(1)$. If $B$ contains no points outside of $[-c,c]$, then $B$ is $(3)$.
Suppose $B$ contains one point $y$ outside of $[-c,c]$. If $y > 0$, then the only possibilities are $(2)$ or $(4)$. If $y < 0$, then the only possibilities are $(3)$ or $(4)$. The regions enclosed follow from the fact that we assume $0 < A < 1/2$. \end{proof} \begin{proposition} \label{2.16} Suppose that $B$ is an isoperimetric boundary with at least one point $s \in [-c,c].$ If $B$ is of type \ref{2.14}\emph{(3)} and $0 < a^2 \leq 1/2$, then the region $R$ enclosed by $B$ has $f$-weighted length no more than $1/4$. \end{proposition} \begin{proof} We have that $$\dfrac{d}{dx}(x - \arccosh(x)) = 1 - \dfrac{1}{\sqrt{x-1}\sqrt{1+x}} > 0$$ for $x > \sqrt{2},$ so that $x - \arccosh(x)$ is increasing on $(\sqrt{2}, \infty)$. Since $\sqrt{2} - \arccosh(\sqrt{2}) - 0.5 > 0$, we have that $\arccosh(x) < x - 0.5$ on $[\sqrt{2}, \infty)$. Taking $x = 1/a \geq \sqrt{2}$ gives $$c = a^2\arccosh(1/a) < a - 0.5a^2 \leq 1/\sqrt{2} - 1/4 < 1/2.$$ Consider the function $$I(x) = \int_{x-1}^x f_1(u)\, du + \int_{x-1}^x f_2(u)\, du$$ which sends $x$ to the weighted length of $[x-1,x].$ Then $$I'(x) = f_1(x) - f_1(x-1) + f_2(x) - f_2(x-1) = f(x) - f(x-1) = f(x) - f(1-x),$$ using the evenness of $f$. For $0 < x < 1/2$ we have $f(x) < f(1-x)$, so that $I$ is decreasing on $[0,c]$. We have that $$I(0) = \int_{-1}^0 f_2(x) dx + \int_{-1}^0 f_1(x) dx = \int_{-1}^1 f_2(x) dx < \int_{-1}^\infty f_2(x) dx = 1/4,$$ using the symmetry $f_1(x) = f_2(-x)$. Therefore if we can show that $s - t \leq 1$, we will have that the $f$-weighted length of $[t,s]$ is at most $I(s) \leq I(0) < 1/4$ and be done. This follows immediately when $t = -s$, since $s \leq c < 1/2.$ When $t \neq -s$, we observe that $s-1$ is to the left of $-c$, so it suffices to show that $\psi'(s-1) \geq \psi'(t) = -\psi'(s)$. Thus we want to show that $$\psi'(s-1) + \psi'(s) = \psi'(s) - \psi'(1-s) = ([(1 - s) -s] + [\tanh(s/a^2) - \tanh((1-s)/a^2)])/a^2 \geq 0$$ on $[0,1/2]$.
This is equivalent to showing that $$\gamma(s) := ([(1 - s) -s] + [\tanh(s/a^2) - \tanh((1-s)/a^2)]) \geq 0$$ on $[0,1/2]$. Since $|\tanh|< 1$, $\gamma(0) > 0$. In addition, $\gamma(1/2) = 0.$ Therefore it suffices to show that $\gamma$ achieves its minimum value on $[0,1/2]$ at $s = 1/2.$ We will do this by using the first derivative test to show that there is only one other local extremum in the interval and further demonstrating that this local extremum is not the minimum point. We have that $$\gamma'(s) = \sech^2(s/a^2)/a^2 + \sech^2((1-s)/a^2)/a^2 - 2.$$ Since $1/a^2 \geq 2$, we have that $\gamma'(0) > 0$. In addition, $$\gamma'(1/2) =2\sech^2(1/(2a^2))/a^2 - 2.$$ By using the substitution $$\sech^2(x) = 4/(e^{2x} + e^{-2x} +2),$$ we get that $$\sech^2(1/(2x)) = 4/(e^{1/x} + e^{-1/x} + 2) \leq 4/(e^{1/x} + 2).$$ Therefore $$\sech^2(1/(2x))(1/x) \leq 4/(xe^{1/x} + 2x).$$ We have that $$\alpha(x) := (xe^{1/x}+2x)' = 2 + e^{1/x} - e^{1/x}/x.$$ When $0 < x \leq 1/2$, we have that $$\alpha(x) \leq 2 + e^{1/x} - 2e^{1/x} = 2 - e^{1/x} \leq 2 - e^2 < 0.$$ Therefore $xe^{1/x}+2x$ is decreasing on $(0, 1/2]$ and attains its minimum value of $e^2/2 + 1 > 4$ at $x = 1/2$. This shows that $$\sech^2(1/(2x))(1/x) \leq 4/(xe^{1/x} + 2x) < 1$$ on $(0,1/2]$, so that $\gamma'(1/2) < 0.$ By the intermediate value theorem, there exists $z_1 \in (0, 1/2)$ such that $\gamma'(z_1) = 0.$ It follows that $z_2 = 1 - z_1 > 1/2$ is also a zero of $\gamma'$. Now $\sech^2(x) = \sech^2(-x)$ tends to $0$ as $x$ tends to $\infty$, so that $\gamma' < 0$ for some $s \ll 0$. Therefore there exists $z_3$ in $(-\infty , 0)$ such that $\gamma'(z_3) = 0$, and $z_4 = 1 - z_3 > 1$ is also a zero of $\gamma'$. Again using the substitution $$\sech^2(x) = 4/(e^{2x} + e^{-2x} +2),$$ we see that $\gamma'(s)$ is a rational function of $e^{2s/a^2}$ whose numerator is quartic.
Therefore $\gamma'$ has at most $4$ zeros, so that $z_1$ is the only zero of $\gamma'$ in $(0, 1/2).$ Since $\gamma'(0) > 0$, $$\gamma(z_1) > \gamma(0) > \gamma(1/2),$$ so that the minimum of $\gamma$ on $[0,1/2]$ is attained at an endpoint, and hence $\gamma(s) \geq \gamma(1/2) = 0$ for $s \in [0,1/2].$ \end{proof} \begin{figure}[h] \centering \includegraphics[height=2in]{evidence.png} \caption{$\psi'(s) - \psi'(1-s)$} \label{ev} \end{figure} \begin{proposition} \label{2.17} If the variance $0 < a^2 \leq 1/2$, then the isoperimetric boundaries $B$ with one point $b$ in $[0,c]$ cannot be of type \ref{2.14}$(3)$. \end{proposition} \begin{figure}[h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{int1.png} \caption{Interval in the double Gaussian} \label{fig:sub1} \end{subfigure}% \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{int2.png} \caption{Two rays in the single Gaussian} \label{fig:sub2} \end{subfigure}\\ \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{ray1.png} \caption{Ray in the single Gaussian} \label{fig:sub3} \end{subfigure}% \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{ray2.png} \caption{Ray in the double Gaussian} \label{fig:sub4} \end{subfigure} \caption{When all the areas are the same, we have $(A) > (B) > (C)$ and $(D) > (C)$} \label{fg2.11} \end{figure} \begin{proof} Let $A$ be the weighted length of the region enclosed by $B$. If, in contradiction to the proposition, $B$ is of type \ref{2.14}$(3)$, then the region enclosed by $B$ is an interval $[a,b]$ where $-1 < a < b < 1$, as shown in Figure \ref{fg2.11}A. By Proposition \ref{2.12}, there exists a union of rays $(-\infty,c] \cup [d,\infty)$ with $f_1$-weighted length $A$ such that $f_1(c) + f_1(d) < f(a) + f(b)$. This is shown in Figure \ref{fg2.11}B. By the solution to the single Gaussian isoperimetric problem, there exists a ray $[s, \infty)$, as shown in Figure \ref{fg2.11}C, with $f_1$-weighted length $A$ such that $f_1(s) < f_1(c) + f_1(d)$.
By Proposition \ref{2.16}, $A \leq 1/4$, so that $s \geq 1.$ By Proposition \ref{2.13}, there exists a ray $[t, \infty)$, as shown in Figure \ref{fg2.11}D, with $f$-weighted length $A$ such that $t > s.$ To get a contradiction to the fact that $B$ is isoperimetric, we show that $(f(a) + f(b)) - f(t) > 0.$ Write $$(f(a) + f(b)) - f(t)= [(f(a) + f(b)) - (f_1(c) + f_1(d))]$$$$ + [(f_1(c) + f_1(d)) - f_1(s)] + [f_1(s) - f(t)].$$ Since $[(f_1(c) + f_1(d)) - f_1(s)] > 0$, it suffices to show that $[(f(a) + f(b)) - (f_1(c) + f_1(d))] > [f(t) - f_1(s)]$. Since $f(a) > f_1(d)$, we have that $$[(f(a) + f(b)) - (f_1(c) + f_1(d))] > f(b) - f_1(c) > f(b) - f_1(b) = f_2(b).$$ Since $f(t) < f(s)$, we have that $$[f(t) - f_1(s)] < f(s) - f_1(s) = f_2(s).$$ Since $-1 < b < 1 < s$, we have that $f_2(s) < f_2(b)$, and this proves the claim. \end{proof} \begin{proposition} \label{2.21} If the variance $a^2 \leq 1/2$, then the isoperimetric boundaries $B$ with one point $b > 0$ in $[-c,c]$ cannot be of type \ref{2.14}\emph{(2)}. \end{proposition} \begin{figure}[h] \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{bigint1.png} \caption{Original interval} \label{fig:sub1} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{bigint2.png} \caption{Reflected interval} \label{fig:sub2} \end{subfigure} \caption{The interval $[c,d]$ is obtained by reflecting $[a,b]$ over the line $x = 1$; the reflected interval has the same $f_1$-length and smaller boundary density.} \label{fg2.21} \end{figure} \begin{proof} We know that $f(a) < f(b)$ (recall the concavity/convexity argument), and since $f_2(b) < f_2(a)$, we must have that $f_1(a) < f_1(b).$ Pick $d > c$ such that $f_1(c) = f_1(b)$ and $f_1(d) = f_1(a)$. In other words, we get $[c,d]$ by reflecting $[a,b]$ over the line $x = 1$. Since $a < 1$, we either have $c < 1 < d$ or $1 < c < d$. In the first case, we have that $[c,d]$ has the same $f_1$-length as $[a,b]$, and since $c > a$ and $d > b$, we have that $f_2(c) < f_2(a)$ and $f_2(d) < f_2(b)$.
Therefore $f(c) + f(d) < f(a) + f(b)$. At the same time, the $f_2$-length of $[c,d]$ is less than that of $[a,b]$. This difference is at most the $f_2$ length of $[a,\infty)$. Since $f_1(d) = f_1(a) > f_2(a)$, we can find $e > d$ such that $[c,e]$ has $f$-length $A$. In addition, $f(c) + f(e) < f(c) + f(d) < f(a) + f(b)$, so that $[a,b]$ is not isoperimetric. In the second case, we have that $[c,d]$ has the same $f_1$-length as $[a,b]$, and since $c, d > a, b$, we have that $f_2(d) < f_2(a)$ and $f_2(c) < f_2(b)$. Therefore $f(c) + f(d) < f(a) + f(b)$. At the same time, the $f_2$-length of $[c,d]$ is less than that of $[a,b]$. This difference is at most the $f_2$ length of $[a,\infty)$. Since $f_1(d) = f_1(a) > f_2(a)$, we can find $e > d$ such that $[c,e]$ has $f$-length $A$. In addition, $f(c) + f(e) < f(c) + f(d) < f(a) + f(b)$, so that $[a,b]$ is not isoperimetric. \end{proof} \begin{proposition} \label{2.20} If the variance $a^2 \leq 1/2$, then the isoperimetric boundaries $B$ with one point $b$ in $[-c,c]$ cannot be of type \ref{2.14}\emph{(4)}. \end{proposition} \begin{proof} We may assume without loss of generality that $b \geq 0$. Suppose that $B$ is of type \ref{2.14}$(4)$. Then the region $L$ enclosed by $B$ consists of the union of an interval of type \ref{2.14}$(2)$ or \ref{2.14}$(3)$ and a ray. Apply Propositions \ref{2.17} and \ref{2.21} to get a new region $L'$ that beats the interval. Since $A < 1/2$, $L'$ may be chosen to not intersect the ray. Then the union of $L'$ and the ray beats $L$. \end{proof} \begin{proposition} \label{2.22} If $B$ is an isoperimetric boundary and the variance $a^2 \leq 1/2$, then $B$ is a single point. \end{proposition} \begin{proof} If $B$ does not contain a point $s \in [-c,c]$, then by Corollary \ref{2.7}, $B$ is a single point. Otherwise, apply Propositions \ref{2.14}, \ref{2.17}, \ref{2.21} and \ref{2.20} to complete the proof.
\end{proof} \begin{proposition} \label{2.10} For the line endowed with density $f(x)$, if the variance $a^2$ is such that $1/2 \leq a^2 < 1$, then isoperimetric regions $R$ are always rays with boundary $B$ consisting of a single point. \end{proposition} \begin{proof} By Proposition \ref{3.36}, we have that $\psi '' (x) = 0$ exactly when $x = \pm c$, where $c = a^2\arccosh (1/a)$. Since $\psi ''' (x) > 0$ for $x < 0$ and $\psi ''' (x) < 0$ for $x > 0$, we have that $\psi ''$ is negative outside of $[-c,c]$ and is positive in $(-c,c)$. \iffalse \begin{figure}[h] \includegraphics[height=2in]{secondderiv.png} \caption{$\psi '' (x) =\frac{dh}{dx}$ } \label{fg2.4} \end{figure} \begin{figure}[h] \includegraphics[height=2in]{derivative.png} \caption{$\psi ' (x) = \frac{d\psi}{dx}$} \label{fg2.5} \end{figure} \fi Suppose that $B$ is an isoperimetric boundary containing more than one point. By Corollary \ref{2.7}, $B$ does not lie entirely outside $[-c,c]$. Since $\psi '' (x) > 0$ on $(-c,c)$ and $\psi '' (\pm c) = 0$, the maximum and minimum of $\psi ' (x)$ on $[-c,c]$ are achieved at $c$ and $-c$ with $\psi ' (-c)$ negative and $\psi ' (c)$ positive. Since $\psi ' (x)$ tends to $-\infty$ as $x$ approaches $\infty$, there exists a unique point $b>c$ such that $f(\pm b)= f(\pm c)$. Since $b > c$, $\psi '' (x) < 0$ outside of $[-b,b]$. We claim that $B$ must lie in $[-b,b]$. Since $|\psi ' (x)|$ is constant on $B$, to show that $B \subset [-b,b]$ it suffices to show that the maximum and minimum of $\psi ' (x)$ on $[-b,b]$ are achieved at $-b$ and $b$. Since $0$ is a local minimum for $f(x)$, it suffices to show that $|\psi ' (b)| > |\psi ' (c)|.$ Since $\psi'(c)$ is positive and $\psi''(x) < 0$ for $x > c,$ there exists a unique point $d > c$ where $\psi'(d) = 0$ and $\psi'$ changes from positive to negative at $d$.
To apply Lemma \ref{2.9}, consider functions $p$ and $q$ denoting the increase in $\psi$ moving left of $d$ and the decrease in $\psi$ moving right of $d$: $$p(x) = \psi(d) - \psi(d-x)$$ $$q(x) = \psi(d) - \psi(d+x)$$ which satisfy the hypotheses of Lemma \ref{2.9}. Since $\psi(c) = \psi(b)$, we have that $$|\psi'(c)| = \psi'(c) = p'(d-c) < q'(b-d) = -\psi'(b) = |\psi'(b)|.$$ There are five candidates for the minimum points of $f(x)$ on $[-b,b]$: $\pm b, 0$, and $\pm d$. Since $d >c$, we have that $\psi''(d) < 0$ so that $\pm d$ is not a candidate. Since, also by the preceding paragraph, $\psi ' (x)$ is positive between $0$ and $c$, we have that $f(b) = f(c) > f(0)$. Therefore the minimum on this interval is $f(0)$. We have that $$\dfrac{d}{da}(f(0,a)) = -\dfrac{\sqrt{\frac{1}{a^2}}(-1+a^2)e^{-\frac{1}{2a^2}}}{a^3\sqrt{2\pi}} > 0$$ for all $a \in [1/\sqrt{2}, 1)$. Therefore we have that $$2f(0,a) \geq 2f(0, 1/\sqrt{2}) \approx 0.415107...$$ \begin{figure}[h] \includegraphics[height=2in]{f0a} \caption{$2f(0,a)$ for various values of $a$} \label{fg2.6} \end{figure} To finish the proof, we must show that $f(x, a) < 0.415107...$ for all $x$ and all $a \in [1/\sqrt{2},1)$. Consider the numerator $n$ of $f$ given by $$n(x) = e^{-\frac{(x - 1)^2}{2a^2}} + e^{-\frac{(x + 1)^2}{2a^2}}.$$ We have that for a given $x$, $n$ increases when $a$ increases, so that $$n(x) \leq m(x) = e^{-\frac{(x - 1)^2}{2}} + e^{-\frac{(x + 1)^2}{2}}.$$ Since $$\dfrac{d}{dx}(\log{m(x)}) = \tanh (x) - x,$$ which has the same sign as $-x$, we see that $m(x)$ is maximized at $0$. Therefore $n(x) \leq m(0) < 1.22,$ so that $$f(x) < \dfrac{m(0)}{2\sqrt{2\pi}a} \leq \dfrac{1.22}{2\sqrt{\pi}} \approx 0.345.$$ This means that there is a ray which beats $B$, contradicting the fact that $B$ is isoperimetric. \end{proof} \begin{theorem} \label{2.18} The isoperimetric boundaries for the double Gaussian density $f$ are always single points enclosing rays.
\end{theorem} \begin{proof} To cover the three cases, apply Propositions \ref{2.8}, \ref{2.22}, and \ref{2.10}. \end{proof} \iffalse \begin{proposition} \label{2.11} The isoperimetric boundary $B$ corresponding to area $1/2$ is $\{0\}$. \end{proposition} \begin{proof} Applying Propositions 2.8 and 2.9 proves the result for $a \geq 1/2$. When $0 < a < 1/2$, if $B$ is outside of $[-c,c]$ where $c = a^2\arccosh (1/a),$ then by Corollary \ref{2.7} $B$ is a single point and must be $\{0\}$. If $B$ contains a point $x$ in $[-c,c]$, then since [if the proof for 1/2 turns out to be different than for other areas, the following fact will be made into a lemma; it is proved in the proof of 2.7] $0$ is the minimum of $f$ in this interval, $|B| \geq f(0).$ Since $B$ is isoperimetric, $B = \{0\}$. \end{proof} \fi \section{Isoperimetric Regions on the Double Gaussian Plane} This section describes evidence for the conjecture of Ca\~{n}ete et al., stated here as Conjecture \ref{4.1}, which states that double Gaussian isoperimetric boundaries in the plane are vertical lines. Proposition \ref{4.5} proves that horizontal and vertical lines are the only stationary lines. Proposition \ref{4.6} proves that vertical lines are better than horizontal lines. First we prove some incidental symmetry results (Propositions \ref{4.2} and \ref{4.3}). \begin{conjecture}[{\cite[Question 6]{canete}}] \label{4.1} Let $f(x,y) = e^{\psi(x,y)}$ be the normalized sum of two Gaussian densities with the same variance and different centers. Isoperimetric regions are half-planes enclosed by lines perpendicular to the line connecting the two centers. \end{conjecture} By the planar analogue of Proposition \ref{2.1}, it suffices to prove this conjecture in the case where the centers are $c_1 = (1,0)$ and $c_2 = (-1,0)$. 
Then we have that $$f(x,y) = e^{\psi(x,y)} = \frac{1}{4\pi a^2}e^{-y^2/(2a^2)}(e^{-(x-1)^2/(2a^2)} + e^{-(x+1)^2/(2a^2)}).$$ The next two propositions describe some symmetry properties of isoperimetric curves. For a curve $C$, let $A_C$ denote the weighted area enclosed by $C$. \begin{proposition} \label{4.2} Consider a density $g$ symmetric about the $x$-axis. If a closed, embedded curve $C$ encloses the same weighted area above and below the $x$-axis, then there is a curve $C'$ which is symmetric about the $x$-axis, encloses the same weighted area, and has weighted perimeter no greater than that of $C$. \end{proposition} \begin{proof} Let $C_1$ and $C_2$ be the parts of $C$ in the open upper and lower half-planes chosen so that the weighted perimeter of $C_1$ is no bigger than that of $C_2$. Consider the curve $C'$ formed by joining $C_1$ with its reflection over the $x$-axis and taking the closure. Let $w$ denote the part of $C$ on the $x$-axis and $w_1$ denote the part of $C'$ on the $x$-axis. Since $g$ is symmetric about the $x$-axis, $A_C = A_{C'}$. In addition, $$|C'| - |C| = (2|C_1| + |w_1|) - (|C_1| + |C_2| + |w|) = (|C_1| - |C_2|) + (|w_1| - |w|).$$ We have that $|C_1| - |C_2| \leq 0$ by assumption, and since the part of $C$ which intersects the $x$-axis must include $w_1$, $|w_1| - |w| \leq 0$. Therefore $|C'| -|C| \leq 0.$ \end{proof} \begin{proposition} \label{4.3} Consider a density symmetric about the x-axis. If $C$ is a closed embedded planar curve symmetric about the x-axis, then the part $C'$ of $C$ in the open upper half-plane encloses half as much weighted area with half the weighted perimeter. \end{proposition} \begin{proof} Suppose that $C$ is a curve that is symmetric about the $x$-axis and encloses weighted area $A_C$. Since $C$ is symmetric about the $x$-axis, $C$ cannot have non-zero perimeter on the $x$-axis. Then $C'$ encloses area $A_C/2$ in the upper half-plane and has weighted perimeter $|C|/2$.
\end{proof} \begin{proposition} \label{4.5} If the plane is endowed with density $f$, then horizontal and vertical lines are the only lines which have constant generalized curvature. \end{proposition} \begin{proof} Let $\psi = \ln{f}.$ Then $$\nabla\psi(x,y) = (\dfrac{-x+\tanh(x/a^2)}{a^2}, \dfrac{-y}{a^2}).$$ In addition, the normal to the line $y = cx + b$ is $(-c,1)/\sqrt{c^2 + 1}$ at all points of the line. Therefore the generalized curvature of such a line evaluated at $(0, b)$ is $$0-\nabla\psi(0,b)\cdot\frac{(-c,1)}{\sqrt{c^2 + 1}} = \frac{b}{a^2\sqrt{1 + c^2}},$$ and by an analogous computation the generalized curvature evaluated at $(1,c + b)$ is $$\dfrac{c+b}{a^2\sqrt{1+c^2}} + \dfrac{c(-1+\tanh(1/a^2))}{a^2\sqrt{1+c^2}}.$$ Thus the generalized curvatures at $(0, b)$ and $(1, c+b)$ are equal exactly when $c = 0$. This shows that the only non-vertical lines that could possibly have constant curvature are the horizontal lines $y = b$. Such lines have normal $(0,1)$, and this, combined with our formula for the gradient, shows that horizontal lines have constant curvature $b/a^2$. An explicit computation of the same variety shows that the vertical line $x = b$ has constant curvature $$\frac{b-\tanh(b/a^2)}{a^2}.$$ \end{proof} \begin{proposition} \label{4.6} In the plane with double Gaussian density $f$, vertical lines enclose given area with less perimeter than horizontal lines.\end{proposition} \begin{figure}[h] \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{eqrays.png} \caption{Symmetric Rays} \label{fig:sub1} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{uneqrays.png} \caption{Unsymmetric Rays} \label{fig:sub2} \end{subfigure} \caption{When the purple areas are equal, the two unsymmetric rays are more efficient than the two symmetric rays.
The efficiency increases as the disparity between the rays increases, and the limiting case is a single ray, which is the isoperimetric region.} \label{fg4.6} \end{figure} \begin{proof} We now compare the perimeters of and areas enclosed by the vertical line $x = b$ and the horizontal line $y = c$. By symmetry and the fact that we may assume the areas are less than $1/2$, we can assume that $b$ and $c$ are positive and consider the areas of the regions $x>b$ and $y > c$. The area enclosed by the vertical line is $$\int_{b}^{\infty}\int_{-\infty}^{\infty} f(x,y) dydx = \int_{b}^{\infty} \dfrac{e^{-\frac{(x - 1)^2}{2a^2}} + e^{-\frac{(x + 1)^2}{2a^2}}}{2a\sqrt{2 \pi}}dx,$$ which is the same as the weighted length of the ray $R_b = [b, \infty)$ on the double Gaussian line. The perimeter of the vertical line is $$\int_{-\infty}^{\infty} f(b,y) dy = \dfrac{e^{-\frac{(b - 1)^2}{2a^2}} + e^{-\frac{(b + 1)^2}{2a^2}}}{2a\sqrt{2 \pi}},$$ which is exactly the cost of $R_b$ on the double Gaussian line. The area enclosed by the horizontal line is $$\int_{c}^{\infty} \int_{-\infty}^{\infty} f(x,y) dxdy = \int_{c}^{\infty} \dfrac{e^{-\frac{y^2}{2a^2}}}{a\sqrt{2 \pi}}dy,$$ which is the same as the weighted length of the ray $R_c = [c, \infty)$ on the single Gaussian (of total weighted-length 1) line. The perimeter of the horizontal line is $$\int_{-\infty}^{\infty} f(x,c) dx = \dfrac{e^{-\frac{c^2}{2a^2}}}{a\sqrt{2 \pi}},$$ which is exactly the cost of $R_c$ on the single Gaussian line. Therefore it suffices to show that a ray on the double Gaussian line of weighted length $A$ costs less than a ray on the single Gaussian line of the same weighted length. Consider the line with density $g$ given by a single Gaussian of total length $1/2$. The ray on the single Gaussian is equivalent to the union of two disjoint, symmetric rays on the $g$-line. The ray on the double Gaussian is equivalent to the union of two disjoint, non-symmetric rays on the $g$-line.
By applying the first and second variation arguments to a single Gaussian density, we see that two non-symmetric rays are always better than two symmetric rays of the same total weighted-length. \end{proof} Therefore if the isoperimetric curve corresponding to area $A$ is a line, then it is a vertical line. \section*{ Acknowledgements} This paper is the work of the Williams College NSF ``SMALL'' 2015 Geometry Group. We thank our advisor Professor Morgan for his support. We would like to thank the National Science Foundation, Williams College, the University of Chicago, and the MAA for supporting the ``SMALL'' REU and our travel to MathFest 2015. \bibliographystyle{abbrv}
https://arxiv.org/abs/2006.15797
Asymptotic enumeration of digraphs and bipartite graphs by degree sequence
We provide asymptotic formulae for the numbers of bipartite graphs with given degree sequence, and of loopless digraphs with given in- and out-degree sequences, for a wide range of parameters. Our results cover medium range densities and close the gaps between the results known for the sparse and dense ranges. In the case of bipartite graphs, these results were proved by Greenhill, McKay and Wang in 2006 and by Canfield, Greenhill and McKay in 2008, respectively. Our method also essentially covers the sparse range, for which much less was known in the case of loopless digraphs. For the range of densities which our results cover, they imply that the degree sequence of a random bipartite graph with m edges is accurately modelled by a sequence of independent binomial random variables, conditional upon the sum of variables in each part being equal to m. A similar model also holds for loopless digraphs.
\section{Introduction} Enumeration of discrete structures with local constraints has attracted the interest of many researchers and has applications in various areas such as coding theory, statistics and neurostatistical analysis. Exact formulae are often hard to derive or infeasible to compute. Asymptotic formulae are therefore sought and often provide sufficient information for the aforementioned applications. In this paper we find such formulae for bipartite graphs with given degree sequence, or loopless digraphs with given in- and out-degree sequences. Our results imply that the degree sequence of a random digraph or bipartite graph with $m$ edges is close to a sequence of independent binomial random variables, conditional upon the sum of degrees in each part being equal to $m$. We frame all our arguments in terms of bipartite graphs: as noted below, digraphs are equivalent to ``balanced'' bipartite graphs. Thus, if loops are not forbidden, the digraph enumeration problem is the same as the bipartite one. The loopless case for digraphs is equivalent to bipartite graphs with a forbidden perfect matching. Our results on counting bipartite graphs with a given degree sequence imply equivalent results on counting $0$-$1$ matrices with given row and column sums. Similarly, counting (loopless) digraphs is equivalent to counting square $0$-$1$ matrices with given row and column sums where the entries on the diagonal are required to be 0. Our results are obtained via the method of degree switchings and contraction mappings recently introduced by the authors in~\cite{lw2018} to count the number of ``nearly'' regular graphs with a given degree sequence for medium-range densities, and a wider range of degree sequences for low densities. The basic structure of the argument is very similar in the present case, but it needs significant modifications to account for the fact that we are dealing with bipartite graphs and certain edges are not allowed.
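To illustrate the correspondences above with a small example, consider the loopless digraph on vertex set $\{1,2,3\}$ with arcs $1\to 2$, $2\to 3$, $3\to 1$ and $3\to 2$. Indexing rows by tails and columns by heads, it corresponds to the square $0$-$1$ matrix $$\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 1 & 0\end{pmatrix},$$ whose diagonal entries are $0$, whose row sums $(1,1,2)$ are the out-degrees, and whose column sums $(1,2,1)$ are the in-degrees. Equivalently, it is the bipartite graph with parts $\{1,2,3\}$ and $\{1',2',3'\}$ and edges $12'$, $23'$, $31'$, $32'$, in which the ``diagonal'' edges $11'$, $22'$, $33'$ (a perfect matching) are forbidden.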
\subsection{Enumeration results} The formulae in~\cite{lw2018} are stated in terms of a relationship between the degree sequence of the Erd\H{o}s-R\'enyi random graph and a sequence of independent binomial random variables. We shall do the same here for appropriate bipartite random graphs and suitable independent binomials. We first introduce appropriate graph theoretic notation. Let $\ell, n$ be integers and let $S=[\ell]$ and $T=[n+\ell]\setminus [\ell]$. We use $S$ and $T$ as the two parts of the vertex set of a bipartite graph $G$, i.e.\ a graph $G$ with bipartition $(S, T)$. Such a graph is said to have degree sequence $({\bf s},{\bf t})$ if vertex $a$ has degree $s_a$ for all $a\in S$, and $v$ has degree $t_v$ for all $v\in T$. (Our convention is to denote elements of $S$ by $\{a,b,\ldots\}$ and elements of $T$ by $\{v,w,\ldots\}$.) We let $\cD(G)$ denote the degree sequence of $G$. When $\ell =n$, we use the fact that a digraph on $n$ vertices with out-degree sequence ${\bf s}$ and in-degree sequence ${\bf t}$ corresponds to a bipartite graph with degree sequence $({\bf s},{\bf t})$, the equivalence obtained by directing all edges from $S$ to $T$. For use in the digraph case, if $a\in S$ we define its {\em mate} $a'=a+\ell\in T$, and for $v\in T$ we define its mate $v'=v-\ell\in S$. The digraph contains a loop if and only if the bipartite graph has an edge joining $a$ to $a'$.
We define $\cD(\mathcal{G}(\ell,n,m))$ and $\cD(\vec \mathcal{G}(n,m))$ to be the corresponding probability spaces of degree sequences of $\mathcal{G}(\ell,n,m)$ or of $\vec\mathcal{G}(n,m)$, respectively. Let $\mathcal{B}_p(\ell,n)$ be the probability space of vectors of length $\ell+n$ with independent components, where the first $\ell$ elements are distributed as $\ensuremath{\mathrm{Bin}}(n,p)$ and the next $n$ are distributed as $\ensuremath{\mathrm{Bin}}(\ell ,p)$. Furthermore, let $\mathcal{B}_m(\ell,n)$ be the restriction of $\mathcal{B}_p(\ell,n) $ to the event $ \Sigma_1=\Sigma_2= m $, where $\Sigma_1$ is the sum of the first $\ell$ elements of the vector, and $\Sigma_2$ the sum of the other $n$ elements. Similarly, define $\vec\mathcal{B}_p(n)$ to be the probability space of random vectors of length $2n$, every component being independently distributed as $\ensuremath{\mathrm{Bin}}(n-1,p)$. Finally, let $\vec\mathcal{B}_m( n)$ be the restriction of $\vec\mathcal{B}_p( n)$ to the event $\Sigma_1=\Sigma_2= m $, where $\Sigma_i$ is defined as above with $\ell=n$. Note that if $\sum s_a = \sum t_v=m$, then \begin{align}\lab{probbin} \pr_{\mathcal{B}_m(\ell,n)}({\bf s}, {\bf t}) &= {\ell n \choose m }^{-2} \prod_{a\in S} { n \choose s_a }\prod_{v\in T} { \ell \choose t_v} \mbox{ and}\nonumber\\ \pr_{ \vec\mathcal{B}_m( n)} ({\bf s}, {\bf t}) &= { n(n-1) \choose m}^{-2} \prod_{a\in S} { n-1 \choose s_a }\prod_{v\in T} { n-1 \choose t_v}, \end{align} which we note are both independent of $p$.
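To see the first formula in~\eqn{probbin}, note that $\Sigma_1$ and $\Sigma_2$ are independent with $\Sigma_1,\Sigma_2\sim\ensuremath{\mathrm{Bin}}(\ell n,p)$, each being a sum of independent binomials. Hence, for $({\bf s},{\bf t})$ with $\sum s_a=\sum t_v=m$, $$ \pr_{\mathcal{B}_m(\ell,n)}({\bf s}, {\bf t}) = \frac{\prod_{a\in S} { n \choose s_a }p^{s_a}(1-p)^{n-s_a} \prod_{v\in T} { \ell \choose t_v}p^{t_v}(1-p)^{\ell-t_v}} {\Big({\ell n \choose m }p^m(1-p)^{\ell n-m}\Big)^2}, $$ and the factors $p^{2m}(1-p)^{2(\ell n-m)}$ in the numerator cancel those in the denominator, leaving a quantity independent of $p$. The same computation with $\ensuremath{\mathrm{Bin}}(n(n-1),p)$ in place of $\ensuremath{\mathrm{Bin}}(\ell n,p)$ gives the second formula.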
Our main result for degree sequences of ``medium density'' states essentially that for certain sequences ${\bf d}$, the probability $\Pr_{\cD(\mathcal{G})}({\bf d})$ is asymptotically equal to $\Pr_{\mathcal{B}_m}({\bf d}){\tilde H}({\bf d})$, where $\mathcal{G} =\mathcal{G}(\ell,n,m)$ and $\mathcal{B}_m =\mathcal{B}_m(\ell,n)$ in the bipartite case, $\mathcal{G} =\vec\mathcal{G}(n,m)$ and $\mathcal{B}_m =\vec\mathcal{B}_m(n)$ in the digraph case, and where ${\tilde H}$ is a correction factor which we define next. For asymptotics in this paper, we take $n\to\infty$; the restrictions on $\ell$ will also ensure that $\ell\to\infty$. With $S$ and $T$ as above, let ${\bf d}$ be a sequence of length $N=\ell+n$. We set $M_1=M_1({\bf d}) = \sum_{i=1}^N d_i$ and use ${\bf s}={\bf s}({\bf d})$ and ${\bf t}={\bf t}({\bf d})$ to denote the vectors consisting of the first $\ell$, and of the last $n$, entries of ${\bf d}$ respectively. Thus, ${\bf d}=({\bf s},{\bf t})$. We also let $s=s({\bf d})$ and $t=t({\bf d})$ denote the averages of the components of ${\bf s}$, and of ${\bf t}$, respectively, that is $s = \frac{1}{\ell}\sum_{a\in S} s_a$ and $t = \frac{1}{n}\sum_{v\in T} t_v$. Then we set $$\sigma^2({\bf s})= \frac{1}{\ell}\sum_{a\in S} (s_a-s)^2, \quad \sigma^2({\bf t})= \frac{1}{n}\sum_{v\in T} (t_v-t)^2, $$ and, in the digraph case, $$\sigma({\bf s},{\bf t})=\frac{1}{n}\sum_{a\in S} (s_a-s)(t_{a'}-t).$$ We unify our analysis of the two cases, bipartite graphs and digraphs, by introducing the indicator variable $\delta^{\mathrm{di}}$ which is $1$ in the digraph case (in which case $\ell = n$ is assumed) and $0$ in the bipartite case (in which case terms containing $\delta^{\mathrm{di}}$ as a factor may be undefined). This significantly simplifies notation and permits us to emphasise the similarities between the two cases. Define $\mu=\mu({\bf d})=M_1({\bf d})/(2n(\ell-\delta^{\mathrm{di}}))$.
(This will denote the {\em relative edge density} of a bipartite graph or a digraph with degree sequence ${\bf d}$.) We then set \begin{align}\lab{corrH} \corr{{\bf d}} = \exp\left(-\frac12 \left(1-\frac{\sigma^2({\bf s})}{s(1-\mu)}\right) \left(1-\frac{\sigma^2({\bf t})}{t(1-\mu)}\right) -\frac{\delta^{\mathrm{di}}\sigma({\bf s},{\bf t})}{s(1-\mu)}\right) \end{align} for a sequence ${\bf d}$ of length $\ell+n$, where $\mu = \mu({\bf d})$, ${\bf s}={\bf s}({\bf d})$, ${\bf t}={\bf t}({\bf d})$. We can now state our main result. \begin{thm}\thlab{t:mainbip} For a sufficiently small constant $\mu_0$, the following holds. Let $1/2<\varphi<3/5$. Let $\ell$, $n$ and $m$ be integers that satisfy $$ m/(n\ell)<\mu_0, \quad (\ell+n)^{5-5\varphi} =o(\ell nm^{3-5\varphi}), $$ and for all fixed $K>0$, $ \ell \log^K n+ n\log^K\ell =o(m)$. Let $\D$ be the set of sequences ${\bf d}=({\bf s},{\bf t})$ with ${\bf s}$ and ${\bf t}$ of lengths $ \ell$ and $n$ respectively, satisfying $M_1({\bf s})=M_1({\bf t})=m$, $|s_a-s|\le s^{\varphi}$ and $|t_v-t|\le t^{\varphi}$ for all $a\in S$ and all $v\in T$, where $s=m/\ell$ and $t=m/n$. Either set $\mathcal{G}=\mathcal{G}(\ell,n,m)$ and $\mathcal{B}_m= \mathcal{B}_m(\ell,n)$ (the bipartite case), or set $\mathcal{G}=\vec \mathcal{G}(n,m)$ and $\mathcal{B}_m= \vec\mathcal{B}_m(n)$ and restrict to $\ell=n$ (the digraph case). Then uniformly for all ${\bf d}\in\D$, \bel{enumFormula} \pr_{\cD(\mathcal{G})} ({\bf d}) = \pr_{\mathcal{B}_m}({\bf d})\corr{{\bf d}}\left(1+O\left(\frac{\log^2\ell}{\sqrt{\ell}}+\frac{\log^2n}{\sqrt{n}}+(\min\{s,t\})^{5\varphi-5}m^2/\ell n \right)\right). \end{equation} \end{thm} Recall that in this paper, asymptotic statements refer to $n\to\infty$. The condition $ \ell \log^K n+ n\log^K\ell =o(m)$, however, together with the trivial upper bound $m\le n\ell$ implies that $\ell\to\infty$ as well. We prove this theorem in Section~\ref{s:denseBip}. 
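As an illustration of the correction factor, when ${\bf d}$ is regular, that is $s_a=s$ for all $a\in S$ and $t_v=t$ for all $v\in T$, we have $\sigma^2({\bf s})=\sigma^2({\bf t})=0$, and $\sigma({\bf s},{\bf t})=0$ in the digraph case, so~\eqn{corrH} reduces to $\corr{{\bf d}}=e^{-1/2}$. For such sequences (when they satisfy the hypotheses of \thref{t:mainbip}) the theorem thus reads $$\pr_{\cD(\mathcal{G})} ({\bf d}) = e^{-1/2}\, \pr_{\mathcal{B}_m}({\bf d})\big(1+o(1)\big)$$ in both the bipartite and the digraph case.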
\begin{remark}\lab{r:first} In view of~\eqn{probbin} and the fact that $|\mathcal{G}(\ell,n,m)| = {\ell n \choose m }$, the formula in \thref{t:mainbip} is equivalent to the assertion that the number of bipartite graphs with degree sequence $({\bf s},{\bf t})$ is $$ {\ell n \choose m }^{-1}\prod_{a\in S}{ n \choose s_a }\prod_{v\in T} { \ell \choose t_v} \exp\bigg(-\frac12 \bigg(1-\frac{\sigma^2({\bf s})}{\mu(1-\mu) n}\bigg) \bigg(1-\frac{\sigma^2({\bf t})}{\mu(1-\mu)\ell }\bigg) +O(\xi)\bigg), $$ where $\xi$ is the error term from~\eqn{enumFormula}. Similarly, \eqn{probbin} together with the fact that $|\vec \mathcal{G}(n,m)| = { n(n-1) \choose m }$ gives an asymptotic formula for the number of directed graphs with a given sequence of in- and out-degrees. \end{remark} Our corresponding result for the sparse case is the following. Although it is not new in the bipartite case (see below), in the digraph case it completes the full range of densities (so that, for instance, regular digraphs of every degree are now covered). For a sequence ${\bf d}=({\bf s},{\bf t})$ as above, define $\Delta_S=\Delta({\bf s})=\max_{a\in S}(s_a)$, and similarly $\Delta_T=\Delta({\bf t}) =\max_{v\in T}(t_v)$. \begin{thm}\thlab{t:sparseCaseBip} Let $0<\varepsilon<1/2$, let $\ell$, $n$ and $m$ be integers such that $n/\log^4n + \ell/\log^4\ell=o(m)$, and set $s=m/\ell$ and $t=m/n$. Let $\D$ be a set of sequences ${\bf d}=({\bf s},{\bf t})$ such that ${\bf s}$ and ${\bf t}$ have length $\ell$ and $n$, respectively, and such that $M_1({\bf s})=M_1({\bf t})=m\ge 1$ and $\Delta_S^3\Delta_T^3 (n\ell)^{\varepsilon/2} = o(\min\{sm, tm\})$ uniformly over $\D$. Either set $\mathcal{G}=\mathcal{G}(\ell,n,m)$ and $\mathcal{B}_m= \mathcal{B}_m(\ell,n)$ (the bipartite case), or set $\mathcal{G}=\vec \mathcal{G}(n,m)$ and $\mathcal{B}_m= \vec\mathcal{B}_m(n)$ and restrict to $\ell=n$ (the digraph case).
Then uniformly for ${\bf d}\in\D$, $$ \pr_{\cD(\mathcal{G})}({\bf d}) = \pr_{\mathcal{B}_m}({\bf d})\corr{{\bf d}}\bigg(1+O\bigg(\frac{\Delta_S^3\Delta_T^3 (n\ell)^{\varepsilon/2}}{m} ( 1/s +1/t) + n^{\varepsilon-1/2} + \ell^{\varepsilon-1/2} \bigg)\bigg). $$ \end{thm} We prove this theorem in Section~\ref{s:sparseBip} before tackling the more involved case of medium-range densities. We note at this point that if we restrict $\D$ to the set of sequences where $s_a,t_v\ge 1$ for all $a\in S$ and $v\in T$ then $m\ge \max\{n, \ell\}$ and so the condition $n/\log^4n + \ell/\log^4\ell=o(m)$ is always true (as $n$ tends to infinity), and the condition $\Delta_S^3\Delta_T^3(n\ell)^{\varepsilon/2} =o(\min\{sm,tm\})$ is implied by $\Delta_S^3\Delta_T^3 = o(m^{1-\varepsilon})$ as $s,t\ge 1$. Sequences failing these conditions therefore contain entries 0, which are less interesting since the formula is then often implied by considering only the non-zero entries. One could also apply our method to reach further into this very sparse case, but given these considerations, it is possibly not warranted, and we do not attempt to do so here. Similarly, further examination of our argument should yield results covering cases with wider disparities between $\ell$ and $n$. There have been many contributions to this topic in the past. Finding (asymptotic) formulae for the number of bipartite graphs with a given degree sequence goes back to Read's thesis~\cite{read1958}, and the topic has attracted wide interest since the 1970s; see, for instance,~\cite{bbk1972, b1974, bm1986, CM, es1971, m1984, MWX, mp1976, o1969, wormald1980}.
In particular, the sparse case is best covered by Greenhill, McKay and Wang~\cite{GMW}, who proved an asymptotic formula for the number of bipartite graphs with a given degree sequence $({\bf s},{\bf t})$, provided that $M_1({\bf s})=M_1({\bf t})$ and $\Delta({\bf s})\Delta({\bf t}) =o\left((M_1({\bf s}))^{2/3}\right)$, and their result covers the bipartite version of \thref{t:sparseCaseBip}, in terms of both the density range $m/n\ell$ and the size of the error terms. This is supplemented by formulae for the number of dense bipartite graphs with specified degree sequences by Canfield, Greenhill, and McKay~\cite{CGM} that apply as long as $\ell$ and $n$ are not too far apart. In fact, in \cite{CGM} it was found that the formulae for the sparse and the dense case can be unified to produce the formula in \thref{t:mainbip}, which was implicitly conjectured in~\cite{CGM} to hold for the cases in between. This conjecture is essentially verified by \thref{t:mainbip} for a wide range of parameters ${\bf s}$ and ${\bf t}$. A special case is that of so-called semi-regular bipartite graphs, in which all vertices on one side of the bipartition have degree $s$, say, and all vertices on the other side have degree $t$. So let ${\bf s}$ denote the constant vector of length $\ell$ in which every entry is $s$, and ${\bf t}$ denote the constant vector of length $n$ in which every entry is $t$. In 1977, Good and Crook~\cite{gc1977} suggested that the number of bipartite graphs with degree sequence $({\bf s}, {\bf t})$ is roughly $ \binom{n}{s}^{\ell}\binom{\ell}{t}^{n}/\binom{\ell n}{m} $ when $m= s\ell = tn$. Some of the references mentioned above, in particular~\cite{MWX} and~\cite{CM}, verify that this formula is correct up to a constant factor, for particular ranges of $m$, $n$, $s$ and $t$, by showing that the number is \bel{suggbip} \frac{\binom{n}{s}^{\ell}\binom{\ell}{t}^{n}}{\binom{\ell n}{m}} e^{-1/2+o(1)}.
\end{equation} This asymptotic assertion is immediately equivalent to $\pr_{\mathcal{G}(\ell,n,m)} ({\bf s}, {\bf t}) \sim \pr_{ \mathcal{B}_m(\ell,n)}({\bf s}, {\bf t}){\tilde H}({\bf s}, {\bf t})$. Consequently, \thref{t:mainbip} verifies~\eqn{suggbip} for a new range of parameters in the moderately dense case. For digraphs without loops, there are far fewer corresponding results. For the dense case, i.e.~when the number of edges is $\Theta(n^2)$, a result by Greenhill and McKay~\cite{GM} implies an asymptotic formula. Barvinok~\cite{b2010} provides upper and lower bounds which are coarser but apply to a wider range of in- and out-degree sequences. The only result we are aware of that explicitly enumerates loopless digraphs by degree sequence in the sparse case is by Bender~\cite{b1974}, which applies only to bounded degrees. However, it is clear that the standard techniques used previously for sparse graph enumeration could be used to increase the density and obtain results more in line with the existing ones for bipartite graphs. \subsection{Models for the degree sequences of random graphs} In 1997, McKay and Wormald~\cite{degseq1} showed that if a certain enumeration formula holds for the number of graphs of a given degree sequence then the degree sequences of the random graph models $\mathcal{G}(n,m)$ and $\mathcal{G}(n,p)$ can be modelled by certain binomial-based models. The model for $\mathcal{G}(n,p)$ showed that the degree sequence is distributed almost exactly like a sequence of independent binomial random variables, subject to having even sum, but with a slight twist that introduces dependency. It was also shown there that for properties of the degree sequence satisfying some quite general conditions, this conditioning and dependency make no significant difference, and hence those properties are essentially the same as for a sequence of independent binomials.
At that time, the existing formulae for the sparse and the dense cases supplied this relationship between the models in the corresponding ranges of densities. Recently, the enumeration results of \cite{lw2018} for the medium range provided the missing formulae for the gap range of densities, establishing a conjecture from~\cite{degseq1}. A natural supposition since~\cite{degseq1} appeared was that the degree sequences of random bipartite graphs and digraphs satisfy similar properties. This was an implicit conjecture of McKay and Skerman~\cite{MS}, who adapted some of the arguments in~\cite{degseq1} to show that the existing enumeration results for dense bipartite graphs and directed graphs imply a binomial-based model of the degree sequences of such graphs. This is quite analogous to the model in the graph case, except that it contains an extra complicating conditioning required because the sum of degrees of the vertices in each part must be equal. McKay and Skerman point out that, once the enumeration formulae are proved in the missing ranges, one would expect the model results to follow. Our enumeration results stated above provide what is necessary to immediately establish the relevant conjecture in the case of $\mathcal{G}(\ell,n,m)$ and $\vec\mathcal{G}(n,m)$, as described below, provided that $\ell$ and $n$ are not too disparate. For their binomial random graph siblings $\mathcal{G}(\ell,n,p)$ and $\vec\mathcal{G}(n,p)$, in which edges are selected independently with probability $p$, one would expect that arguments similar to those in~\cite{MS}, in conjunction with our results, will now suffice. Let $A_n$ and $B_n$ be two sequences of probability spaces with the same underlying set for each $n$. Suppose that whenever a sequence of events $H_n$ satisfies $\Pr(H_n)=n^{-O(1)}$ in either model, it is true that $\Pr_{A_n}(H_n)\sim\Pr_{B_n}(H_n)$, where by $f(n)\sim g(n)$ we mean that $f(n)/g(n)\to 1$ as $n\to\infty$.
Then we call $A_n$ and $B_n$ {\em asymptotically quite equivalent (a.q.e.).} We use $\omega$ to mean a function going to infinity as $n\to\infty$, possibly different in all instances. \begin{thm}\thlab{t:bipmodel} \begin{enumerate}[label=(\alph*)] \item\label{model-a} The probability spaces $\cD(\vec \mathcal{G}(n,m))$ and $\vec \mathcal{B}_m(n) $ are a.q.e.~provided that $\log ^3n = o(\min\{m, n\ell-m\})$; \item\label{model-b} The probability spaces $\cD(\mathcal{G}(\ell,n,m))$ and $\mathcal{B}_m(\ell,n) $ are a.q.e.~provided that $\max\{m, n\ell-m\}=\omega \log n$ and at least one of the following holds: \begin{enumerate}[label=(\roman*)] \item\label{model-b-ii} $\ell\le n$ and for some fixed $\mu_0>0$ and $\varepsilon>0$ we have $m<\mu_0n\ell$ and $n^3=o(\ell^2m^{1-\varepsilon})$; \item\label{model-b-iii} $n/\log^4n + \ell/\log^4\ell=o(m)$ and for some fixed $\varepsilon>0$ we have $m^{4+\varepsilon}=o(n^2\ell^2\min\{ \ell,n\})$; \item\label{model-b-iv} $m=o(\min\{n/\log^2n, \ell/\log^2\ell\})$ and $\log^3 \ell + \log^3 n=o(m)$. \end{enumerate} \end{enumerate} \end{thm} We prove this theorem in Section~\ref{s:denseBip}. We note that the assertion for~\ref{model-a} for the range $m> n^2 / \log n$ is covered by McKay and Skerman~\cite[Theorem 1(d)]{MS}. For the bipartite case~\cite[Theorem 1(c)]{MS} covers the range $m>\ell n / \log n$ and $\ell = n^{1+o(1)}$. \thref{t:bipmodel}~\ref{model-b}\ref{model-b-ii} applies for a slightly larger range of $\ell$ and $n$, at least for large-ish $m$. The last condition in~\ref{model-b-ii} is equivalent to $$\frac{n^{2+\varepsilon}}{\ell^{3-\varepsilon}} \ll \mu^{1-\varepsilon}$$ for some $\varepsilon>0,$ where $\mu = m/n\ell$. Thus, $n$ may be as large as $\ell^{2-\varepsilon}$ for sufficiently large density $\mu$. 
Finally, we note that if $\min\{\ell,n\}\gg \max\{\ell,n\}^{10/11+\varepsilon}$ for fixed $\varepsilon>0$, then all values of $m$ are covered by \thref{t:bipmodel} (swapping $\ell$ and $n$ in~\ref{model-b}\ref{model-b-ii} if necessary) and using~\cite[Theorem 1(d)]{MS} for the dense cases of both~\ref{model-a} and~\ref{model-b}. \subsection{Edge probabilities} As a by-product of our proof of \thref{t:mainbip} in Section~\ref{s:denseBip}, we obtain asymptotic formulae for the edge probabilities in a random bipartite graph with a given degree sequence, and in a random digraph with a given sequence of out- and in-degrees. \begin{thm}\thlab{t:edgeprobability} Let $n$, $\ell$, $m$, and $\D$ be as in \thref{t:mainbip} and let $\mathcal{G} = \mathcal{G}(\ell,n,m)$ or $\mathcal{G} = \vec\mathcal{G}(n,m)$. Let $a\in S$ and $v\in T $, with $v\ne a'$ in the digraph case. Then uniformly for ${\bf d}=({\bf s},{\bf t})\in \D$, the probability that $av$ is an edge of $G\in \mathcal{G}$, conditional on the event that $\cD(G)={\bf d}$, is $$ \frac{s_at_v}{m-\delta^{\mathrm{di}}t}\left(1-\frac{(s_a-s)(t_v-t)}{m-\delta^{\mathrm{di}}t-\tavgs} +\frac{(s_a-s)\sigma^2({\bf t})}{\tavgs(\ell-t)} +\frac{(t_v-t)\sigma^2({\bf s})}{\tavgs(n-s)} +\frac{\delta^{\mathrm{di}}(t_{a'}+s_{v'})}{n-1} +O\left(\min\{s,t\}^{4\varphi-4}\frac{m}{n\ell}\right) \right), $$ where $s= m/\ell$ and $t = m/n$. \end{thm} We prove this theorem in Section~\ref{s:denseBip}. \section{Preliminaries} \lab{s:prelim} As we indicated in the introduction, the argument in this paper derives from that in~\cite{lw2018}, whose notation and structure we will follow quite closely. Differences occur, though, to account for the fact that we are dealing with certain forbidden edges. Naturally, we resort to notation used in~\cite{lw2018} and add notation that is special to the bipartite case. We then state several intermediate results from~\cite{lw2018}.
\subsection{Notation}\lab{s:notation} Our {\em graphs} are simple, that is, they have no loops or multiple edges. We write $a\sim b$ to mean that $a/b\to 1$, $f=O(g)$ if $|f|\le Cg$ for some constant $C$, and $f=o(g)$ if $f/g\to 0$. We use $\omega$ to mean a function going to infinity, possibly different in all instances. Also $\binom{V}{2}$ denotes the set of $2$-subsets of the set $V$, and $V$ is often of the form $[N]$, which denotes $\{1,\ldots , N\}$. In this paper multiplication by juxtaposition has precedence over ``$/$", so for example $j/\mu N^2= j/(\mu N^2)$. Let $N$ be an integer and let $V=[N]$. Assume that $\cA=\cA(N)\se \binom{ [N] }{2}$ is specified; we call this the set of {\em allowable pairs}. Note that as usual we regard the edge joining vertices $u$ and $v$ as the unordered pair $\{u,v\}$, and denote this edge by $uv$ following standard graph theoretic notation. A sequence ${\bf d}= (d_{1},\ldots,d_{N})$ is called {\em $\cA$-realisable} if there is a graph $G$ on vertex set $V$ such that vertex $a\in V$ has degree $d_a$ and all edges of $G$ are allowable pairs. In this case, we say $G$ {\em realises ${\bf d}$ over $\cA$}. In standard terminology, if ${\bf d}$ is $\binom{V}{2}$-realisable, it is {\em graphical}. Let $\mathcal{G}_{\cA}({\bf d})$ be the set of all graphs that realise ${\bf d}$ over $\cA$. The graph case when $\cA=\binom{V}{2}$ is dealt with in~\cite{lw2018}. In this paper, we are particularly interested in the following two special cases of $\cA$. \begin{itemize} \item {\bf Bipartite graph case.\\} Let $ \ell, n$ be integers and set $N =\ \ell +n$. Set $\cA=\cA^{\mathrm{bi}} = \{uw:u\in [ \ell], w\in [N]\sm[\ell]\}$. Then $\mathcal{G}_\cA({\bf d})$ is the set of all bipartite graphs $G$ on vertex set $[N]$ that realise the degree sequence ${\bf d}=({\bf s},{\bf t})$ with one part being $S=[\ell]$ and the other part $T=[N]\setminus[\ell]$. 
\item {\bf Digraph case.\\} Assume that $N$ is even and let $n$ be an integer such that $N =2n$. Set $\cA=\cA^{\mathrm{di}} = \{uw: u\in [n], w\in [n+1,2n], u+n \neq w\}$. Then $\mathcal{G}_\cA({\bf d})$ corresponds to the set of all bipartite graphs $G$ on vertex set $[2n]$ that realise the degree sequence ${\bf d}=({\bf s},{\bf t})$ with one part being $S=[n]$ and the other part $T=[2n]\setminus[n]$, and that do not contain any edge of the perfect matching $\{\{a,a+n\}: a\in [n]\}$; equivalently, $\mathcal{G}_\cA({\bf d})$ corresponds to the set of all digraphs $G$ on vertex set $[n]$ that have no loops and that realise the out-degree sequence ${\bf s}$ and in-degree sequence ${\bf t}$. Recall that $a'=a+n$ for $a\in S$ and $v'= v-n$ for $v\in T$, so that edges of the form $aa'$ are forbidden. \end{itemize} Let $\ell$, $n$ be integers and suppose that ${\bf d}$ is a sequence of length $\ell + n$. Recall the definitions of ${\bf s}={\bf s}({\bf d})$, ${\bf t}={\bf t}({\bf d})$, $s$, $t$, $M_1({\bf d})$, $\sigma^2({\bf s})$, $\sigma^2({\bf t})$, and of $\sigma({\bf s},{\bf t})$ from the introduction. We also use $\Delta$ or $\Delta({\bf d})$ to denote $\max_i d_i$, in line with the notation for maximum degree of a graph. With $\cA$ understood (to be either $\cA^{\mathrm{bi}} $ or $\cA^{\mathrm{di}} $ in this paper) we write $\mu=\mu({\bf d}) $ for the quantity $M_1({\bf d})/(2|\cA|)$ and note that this agrees with the definition of $\mu$ given just above~\eqref{corrH} in the introduction. Throughout this paper we use ${\bf e}_i$ to denote the elementary unit vector with 1 in its coordinate indexed by $i$. We say ${\bf d}$ is {\em balanced} if $M_1({\bf s})=M_1({\bf t})$. Clearly being balanced is necessary for ${\bf d}$ to be $\cA$-realisable in either of the cases $\cA=\cA^{\mathrm{bi}}$ or $\cA=\cA^{\mathrm{di}}$. Furthermore, we say that ${\bf d}$ is {\em $S$-heavy} if $M_1({\bf s})=M_1({\bf t})+1$, and we call it {\em $T$-heavy} if $M_1({\bf s})=M_1({\bf t})-1$.
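For concreteness, the two sets of allowable pairs can be generated by the following small Python sketch (ours, for illustration only); it confirms the counts $|\cA^{\mathrm{bi}}|=\ell n$ and $|\cA^{\mathrm{di}}|=n(n-1)$, so that a graph with $m$ edges among the allowable pairs has relative edge density $m/|\cA|=\mu({\bf d})$.

```python
def allowable_pairs(ell, n, digraph=False):
    """The allowable pairs A^bi (with S = [ell] and T = [ell+n] \\ [ell]),
    or A^di (requiring ell = n, with the mate pairs {a, a+n} forbidden)."""
    if digraph:
        assert ell == n, "the digraph case requires ell = n"
        return {(u, w) for u in range(1, n + 1)
                for w in range(n + 1, 2 * n + 1) if w != u + n}
    return {(u, w) for u in range(1, ell + 1)
            for w in range(ell + 1, ell + n + 1)}
```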
Finally, we use $1\pm \xi$ to denote a quantity between $1-\xi$ and $1+\xi$ inclusively. \subsection{Cardinalities, probabilities and ratios} We first quote a simple result by which we leverage absolute estimates of probabilities from comparisons of related probabilities. \begin{lemma}[Lemma 2.1 in \cite{lw2018}]\thlab{l:lemmaX} Let $\mathcal{S}$ and $\mathcal{S}'$ be probability spaces with the same underlying set $\Omega$. Let $G$ be a graph with vertex set $\W\subseteq \Omega$ such that $ \pr_\mathcal{S}(v),\pr_{\mathcal{S}'}(v) >0$ for all $v\in \W$. Suppose that $\varepsilon_0, \delta >0$ are such that $\min\{ \pr_\mathcal{S}(\W), \pr_{\mathcal{S}'}(\W)\}>1-\varepsilon_0>1/2$, and such that for every edge $uv$ of $G$, $$ \frac{\pr_{\mathcal{S}'}(u)}{\pr_{\mathcal{S}'}(v)}= e^{O( \delta )} \frac{\pr_{\mathcal{S}}(u)}{\pr_{\mathcal{S}}(v)} $$ where the constant implicit in $O(\cdot)$ is absolute. Let $r$ be an upper bound on the diameter of $G$ and assume $r<\infty$. Then for each $v\in \W$ we have $$ \pr_{\mathcal{S}'}(v) =e^{O(r \delta +\varepsilon_0)} \pr_\mathcal{S}(v) , $$ where again the bound is uniform over all $v$. \end{lemma} Using the lemma calls for analysing the ratios of probabilities both in the ``true'' probability space $\mathcal{S}'$ (which will be the distribution of the degree sequence of $\mathcal{G}(\ell,n,m)$ or of $\vec\mathcal{G}(n,m)$) and in an ``ideal'' probability space $\mathcal{S}$ by which we are approximating the true space. This leads to computing ratios of closely related instances of the expression on the right hand side of~\eqref{enumFormula}. Let ${\bf d}=({\bf s},{\bf t})$ be a sequence where ${\bf s}$ and ${\bf t}$ are of length $\ell$ and $n$, respectively, let $a, b\in S$, and assume that ${\bf d}$ is $S$-heavy.
Note first that the following are immediate from~\eqref{probbin}: $$\frac{\Pr_{\mathcal{B}_m(\ell,n)}({\bf s}-{\ve_a},{\bf t})}{\Pr_{\mathcal{B}_m(\ell,n)}({\bf s}-{\ve_b},{\bf t})} = \frac{s_a(n+1-s_b)}{s_b(n+1-s_a)} \quad \text{ and }\quad \frac{\Pr_{\vec\mathcal{B}_m(n)}({\bf s}-{\ve_a},{\bf t})}{\Pr_{\vec\mathcal{B}_m(n)}({\bf s}-{\ve_b},{\bf t})} = \frac{s_a(n-s_b)}{s_b(n-s_a)}.$$ Similarly, straight from the definition of ${\tilde H}$ in~\eqref{corrH} we have \begin{align*} \frac{\corr{{\bf s}-{\ve_a}, {\bf t}}}{\corr{{\bf s}-{\ve_b}, {\bf t}}} &= \exp\bigg(\frac{s_b-s_a}{s\ell(1-\mu')} \left(1-\frac{\sigma^2({\bf t})}{t(1-\mu')}\right) +\frac{ \delta^{\mathrm{di}} (t_{a'}-t_{b'})}{sn(1-\mu')}\bigg), \end{align*} where $\mu'=\mu({\bf s}-{\ve_a},{\bf t})$, $s=s({\bf s}-{\ve_a},{\bf t})$, $t=t({\bf s}-{\ve_a},{\bf t})$, which are, in this case, equal to $\mu({\bf s}-{\ve_b},{\bf t})$, $s({\bf s}-{\ve_b},{\bf t})$, and $t({\bf s}-{\ve_b},{\bf t})$, respectively (recalling that $\mu$ is slightly different in the two cases of $\cA^{\mathrm{bi}}$ and $\cA^{\mathrm{di}}$), and where, we recall, $\delta^{\mathrm{di}}$ is the indicator variable for the digraph case. Therefore, denoting by ${H}({\bf d}')$ the function $\Pr_{\mathcal{B}_m}({\bf d}'){\tilde H}({\bf d}')$, we get a ``combined goal ratio'' in the two cases which is \begin{align}\lab{H} \frac{{H}({\bf s}-{\ve_a},{\bf t})}{{H}({\bf s}-{\ve_b},{\bf t})} =\frac{s_a(n+\delta^{\mathrm{bi}} -s_b)}{s_b(n+\delta^{\mathrm{bi}}-s_a)} \exp\bigg(\frac{s_b-s_a}{s\ell(1-\mu')} \left(1-\frac{\sigma^2({\bf t})}{t(1-\mu')}\right) +\frac{ \delta^{\mathrm{di}} (t_{a'}-t_{b'})}{sn(1-\mu')}\bigg), \end{align} where $\delta^{\mathrm{bi}} = 1-\delta^{\mathrm{di}}$. 
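The first of these ratios is easy to check numerically. The sketch below (ours, not part of the paper) assumes only that each coordinate of ${\bf s}$ is marginally binomial with $n$ trials and success probability $p$, and that the conditioning on the total degree cancels between two sequences with the same sum; the ratio is then independent of $p$ and matches the closed form.

```python
from math import comb

def binom_pmf(k, n, p):
    # Bin(n, p) point probability
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def ratio_from_pmf(s_a, s_b, n, p):
    """Pr(s - e_a, t) / Pr(s - e_b, t) for independent Bin(n, p)
    coordinates; both sequences have the same total, so any common
    conditioning on that total cancels in the ratio."""
    num = binom_pmf(s_a - 1, n, p) * binom_pmf(s_b, n, p)
    den = binom_pmf(s_a, n, p) * binom_pmf(s_b - 1, n, p)
    return num / den

def ratio_closed_form(s_a, s_b, n):
    # the bipartite ratio s_a (n + 1 - s_b) / (s_b (n + 1 - s_a));
    # in the digraph case, replace n by n - 1
    return s_a * (n + 1 - s_b) / (s_b * (n + 1 - s_a))
```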
To analyse the ratios of such nearby sequences in the ``true'' probability space, note that, with the above notation, $\Pr_{\cD(\mathcal{G})}({\bf d})$ in \thref{t:mainbip} is just $|\mathcal{G}_{\cA}({\bf d})|/|\mathcal{G}|$ where $\mathcal{G}$ is the random graph space $\mathcal{G}(\ell,n,m)$ or $\vec\mathcal{G}(n,m)$. Let us introduce some more notation. Let $F\se \cA$ be a set of allowable edges. We write $\mathcal{N}_F({\bf d})$ and $\mathcal{N}^*_F({\bf d})$ for the number of graphs $G\in \mathcal{G}_{\cA}({\bf d})$ that contain, or do {\em not} contain, the edge set $F$, respectively. (When $\mathcal{N}$ and similar notation is used, the set $\cA$ should be clear by context.) We abbreviate $\mathcal{N}_F({\bf d})$ to $\mathcal{N}_{ab}({\bf d})$ if $F=\{ab\}$ consists of the single edge $ab$, and put $\mathcal{N}({\bf d})=|\mathcal{G}_{\cA}({\bf d})|$. Additionally, for a vertex $a\in V$, we set $\cA(a) =\{v\in V: av \in \cA\}$, and, with ${\bf d}$ understood, we use $\cA^*(a)$ for the set of $v\in \cA(a)$ such that $\mathcal{N}_{av}({\bf d})>0$. We pause for a notational comment. In this paper, a subscript $ab$ is always interpreted as an ordered pair $(a,b)$ rather than an edge (and similarly for triples). This is irrelevant for $\mathcal{N}_{ab}({\bf d}) = \mathcal{N}_{ba}({\bf d})$ since the two ordered pairs signify the same edge, but the distinction is important with other notation such as the following. For vertices $a,b \in V$, if ${\bf d}$ is a sequence such that ${\bf d}-{\ve_b}$ is $\cA$-realisable, we define \bel{realRatio} R_{ab}({\bf d}) =\frac{\mathcal{N}({\bf d}-{\ve_a})}{\mathcal{N}({\bf d}-{\ve_b})} \end{equation} and note that this is exactly $\Pr_{\cD(\mathcal{G})}({\bf d}-{\ve_a})/\Pr_{\cD(\mathcal{G})}({\bf d}-{\ve_b})$. Estimating those ``true'' ratios will be tightly linked to estimating the following.
For $F\se \cA$, let $$P_F({\bf d}) =\frac{\mathcal{N}_{F}({\bf d})}{\mathcal{N}({\bf d})},$$ which is the probability that the edges in $F$ are present in a graph $G$ that is drawn uniformly at random from $\mathcal{G}_{\cA}({\bf d})$. Of particular interest are the probabilities of a single edge $av$ and of a path $avb$, for which we simplify the notation to \bel{realProb} P_{av}({\bf d}) =P_{\{av\}}({\bf d}),\qquad Y_{avb}({\bf d}) =P_{\{av, bv\}}({\bf d}). \end{equation} The following is~\cite[Lemma 2.2]{lw2018}, used to switch between degree sequences of differing total degree. \begin{lemma}\thlab{trick17} Let $a v\in \cA$ and let ${\bf d}$ be a sequence of length $N$. Then \begin{align*} \mathcal{N}_{av}({\bf d}) &= \mathcal{N} ({\bf d} - {\ve_a} - {\ve_v}) - \mathcal{N}_{av} ({\bf d} - {\ve_a} - {\ve_v})\\ &= \begin{cases} \mathcal{N} ({\bf d} - {\ve_a} - {\ve_v}) (1- P_{av}({\bf d}- {\ve_a} - {\ve_v})) & \mbox{if } \mathcal{N}({\bf d} - {\ve_a} - {\ve_v}) \ne 0\\ 0 & \mbox{otherwise.} \end{cases} \end{align*} \end{lemma} In~\cite[Lemma 2.3]{lw2018} we bound the probability of an edge of a random graph in $\mathcal{G}({\bf d})$ in the graph case. A similar switching argument is used to obtain corresponding bounds in the bipartite and digraph cases. Recall that by $\Delta({\bf d})$ we denote $\max_i d_i$, and that $M_1({\bf d}) = \sum_i d_i$. \begin{lemma}\thlab{l:simpleSwitching} Let $\cA$ be $\cA^{\mathrm{bi}}$ or $\cA^{\mathrm{di}}$ and let $({\bf s},{\bf t})$ be an $\cA$-realisable sequence. Then for any $av\in\cA$ we have $$ P_{av}({\bf s},{\bf t})\le \frac{\Delta({\bf s})\Delta({\bf t})}{M_1({\bf s}) \left(1- 2(\Delta({\bf s})+1)(\Delta({\bf t})+1)/M_1({\bf s})\right)}. $$ \end{lemma} \begin{proof} Assume without loss of generality that $a\in S$, which forces $v\in T$ in both the digraph and bipartite cases.
For each bipartite graph $G$ with degree sequence $({\bf s},{\bf t})$ and an edge joining $a$ and $v$, we can perform a switching (of a type often used previously in graphical enumeration) by removing both $av$ and another edge $bw$ (with $b\in S$, and $w\in T$), and inserting the edges $aw$ and $bv$, provided that no multiple edges are formed. Note that, by the way we choose $b$ and $w$, no loops can occur. In the digraph case, we should also make sure that $w\neq a'$ and that $b\neq v'$, since the pairs $aa'$ and $vv'$ are not allowable. The number of such switchings that can be applied to $G$, with the vertices of each edge ordered, is at least $$ M_1({\bf s}) - (\Delta({\bf s}) +1)\Delta({\bf t})- \Delta({\bf s})(\Delta({\bf t})+1) $$ since there are $M_1({\bf s})$ ways to choose $b\in S$ and $w\in T$, whereas the number of such choices that are ineligible is at most the number of choices with $b$ being a neighbour of $v$ (which automatically rules out $b=a$) or $b=v'$, or similarly for $w$. On the other hand, for each graph $G'$ in which $av$ is {\em not} an edge, the number of ways that it is created by performing such a switching backwards is at most $\Delta({\bf s})\Delta({\bf t})$. Counting the set of all possible switchings over all such graphs $G$ and $G'$ in two different ways shows that the ratio of the number of graphs with $av$ to the number without $av$ is at most $$ \beta:=\frac{\Delta({\bf s})\Delta({\bf t})}{M_1({\bf s})- 2(\Delta({\bf s}) +1)(\Delta({\bf t})+1) }. $$ Hence $P_{av}({\bf d})\le \beta/(1+\beta)$, and the lemma follows in both cases. \end{proof} \subsection{Proof structure}\lab{s:template} We recall the template of the method introduced in~\cite{lw2018}. We follow this template in both the sparse and dense cases.
\noindent {\bf Step~1.}~Obtain an estimate of the ratio $R_{ab}({\bf d})$ between the numbers of graphs of related degree sequences, using the forthcoming Proposition~\ref{l:recurse}. This step is the crux of the whole argument. \smallskip \noindent {\bf Step 2.} By making suitable definitions, we cause this ratio to appear as the expression $\pr_{\mathcal{S}'}({\bf d})/ \pr_{\mathcal{S}'}({\bf d}')$ for some probability space $\mathcal{S}'$ on an underlying set $\Omega$ in an application of Lemma~\ref{l:lemmaX}. There, $\Omega$ is the set of degree sequences, with probabilities in $\mathcal{S}'$ determined by the random graph under consideration, and the graph $G$ in the lemma has a suitable vertex set $\W$ of such sequences. Each edge of $G$ is in general a pair of degree sequences ${\bf d}-{\ve_a}$ and ${\bf d}-{\ve_b}$ of the form occurring in the definition of $R_{ab}( {\bf d}) $. Having defined $G$, we may call any two such degree sequences {\em adjacent}. \smallskip \noindent {\bf Step 3.}~Another probability space $\mathcal{S}$ is defined on $\Omega$, by taking a probability space $\mathcal{B}$ directly from a joint binomial distribution, together with a function $\corr{{\bf d}}$ that varies quite slowly, and defining probabilities in $\mathcal{S}$ by the equation $ \pr_\mathcal{S}({\bf d})= \pr_{\mathcal{B} }({\bf d}) \corr{{\bf d}} /{\bf E}_{\mathcal{B}} {\tilde H}$. \smallskip \noindent {\bf Step 4.}~Using sharp concentration results, show that $P(\W) \approx 1$ in both of the probability spaces $\mathcal{S}$ and $\mathcal{S}'$ (where, by $\approx$, we mean approximately equal to, with some specific error bound in each case). As part of this, we show that ${\bf E}_{\mathcal{B} } {\tilde H} \approx 1 $. At this point, we may specify $\varepsilon_0$ for the application of Lemma~\ref{l:lemmaX}. 
\smallskip \noindent {\bf Step 5.}~Apply Lemma~\ref{l:lemmaX} and the conclusions of the previous steps to deduce $P_{\mathcal{S}'}({\bf d}) \approx P_\mathcal{S}({\bf d})\approx \pr_{\mathcal{B} }({\bf d}) \corr{{\bf d}}$. Upon estimating the errors in the approximations, which includes bounding the diameter of the graph $G$, we obtain an estimate for the probability $P_{\mathcal{S}'}({\bf d})$ of the random graph having degree sequence ${\bf d}$ in terms of a known quantity. \subsection{Realisability} As in the graph case in~\cite{lw2018}, before estimating how many (bipartite) graphs have degree sequence ${\bf d}$, we first need to know that there is at least one such graph for the relevant ${\bf d}$. Mirsky~\cite[p.~205]{M} gives a necessary and sufficient condition for the existence of a non-negative integer matrix with row and column sums in specified intervals. For the case that those sums are specified precisely, the statement is the following. \begin{thm}[Corollary of Mirsky~\cite{M}]\lab{t:mirsky} Let $r_i\ge 0$, $c_j\ge 0$ and $m_{ij}\ge 0$ be integers for all $1\le i \le\ell $, $1\le j\le n$ such that $ \sum_{1\le i\le \ell } r_i = \sum_{ 1\le j\le n} c_j$. Then there exists an $ \ell\times n$ integer matrix $B=(b_{ij})$ with row sums $r_1,\ldots, r_\ell $ and column sums $c_1,\ldots, c_n$ such that $0\le b_{ij}\le m_{ij}$ for all such $i$ and $j$ if and only if, for all $X\subseteq \{1,\ldots, \ell\}$ and $Y\subseteq \{1,\ldots, n\}$, $$ \sum_{i\in X,\,j\in Y} m_{ij} \ge \sum_{i\in X } r_i - \sum_{ j\notin Y} c_j. $$ \end{thm} We use this to show existence of bipartite graphs with given degrees and forbidden edges for the cases of interest, avoiding maximum generality in order to keep the statement simple. In order to apply this to digraphs, one would set $\ell= n$ and regard the edges as directed from the first part to the second. For loopless digraphs, we merely forbid all edges of the form $\{i,i+n\}$.
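Mirsky's criterion is easy to test exhaustively on small instances. The following Python sketch (ours, for illustration only) compares the condition against a brute-force search for the matrix, with $m_{ij}\in\{0,1\}$ encoding forbidden cells as in the application that follows.

```python
from itertools import combinations, product

def mirsky_condition(r, c, m):
    """Mirsky's criterion for a matrix B with row sums r, column sums c
    and 0 <= b_ij <= m[i][j]: for all X, Y,
    sum_{i in X, j in Y} m_ij >= sum_{i in X} r_i - sum_{j not in Y} c_j."""
    ell, n = len(r), len(c)
    if sum(r) != sum(c):
        return False
    for kx in range(ell + 1):
        for X in combinations(range(ell), kx):
            for ky in range(n + 1):
                for Y in combinations(range(n), ky):
                    lhs = sum(m[i][j] for i in X for j in Y)
                    rhs = (sum(r[i] for i in X)
                           - sum(c[j] for j in range(n) if j not in Y))
                    if lhs < rhs:
                        return False
    return True

def matrix_exists(r, c, m):
    # exhaustive search over candidate matrices; tiny instances only
    ell, n = len(r), len(c)
    cells = [(i, j) for i in range(ell) for j in range(n)]
    for vals in product(*(range(m[i][j] + 1) for i, j in cells)):
        B = dict(zip(cells, vals))
        if (all(sum(B[i, j] for j in range(n)) == r[i] for i in range(ell))
                and all(sum(B[i, j] for i in range(ell)) == c[j] for j in range(n))):
            return True
    return False
```

For instance, with the diagonal forbidden ($m_{ii}=0$, all other $m_{ij}=1$) both functions report existence for $r=c=(1,1,1)$ (a derangement matrix), and both report failure for $r=(3,0,0)$, $c=(1,1,1)$.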
Recall that, with $\ell$ and $n$ understood, we set $S=[\ell]$ and $T=[n+\ell]\sm[\ell]$ for convenience. \begin{lemma}\thlab{lem:bipRealisable} Given a constant $C\ge 1$, the following holds for $\ell,n$ sufficiently large and $\varepsilon>0$ sufficiently small. Let $s_a\ge 1$ and $t_v\ge 1$ be integers for all $a \in S$, $v\in T$, with $m:= \sum_{a \in S} s_a = \sum_{ v\in T } t_v$. Also let $F \se\{av : a\in S,v\in T\}$ be a set of unordered pairs, representing forbidden edges, with no more than $C$ pairs in $F$ containing any $w\in S\cup T$. Let $\Delta_S=\max_{a\in S} s_a $ and $\Delta_T=\max_{v\in T} t_v$. Then there exists a bipartite graph with bipartition $(S,T)$ with degrees $s_a$ for $a\in S$ and $t_v$ for $v\in T$, and containing no edge in the forbidden set $F$, provided that either of the following holds. \begin{enumerate}[label=(\alph*)] \item \label{lem:bipRealisablei} We have $m\le \ell n/9$, as well as $\Delta_S \le 2s$ and $\Delta_T \le2t$ where $s=m/\ell$ and $t=m/n$. \item \label{lem:bipRealisableii} We have $\Delta_S\le \sqrt m/2-C$ and $\Delta_T\le \sqrt m/2-C$. \end{enumerate} \end{lemma} \begin{proof} We will apply Theorem~\ref{t:mirsky} with $m_{ij}=0$ if $\{i,j+\ell\}$ is a forbidden edge, and $m_{ij}=1$ otherwise, and with $r_i=s_i$ and $c_j=t_{j+\ell}$. Note that $ \sum_{a\in X,\,v\in Y} m_{a,v-\ell}\ge xy-C \min\{x,y\} $ for all subsets $X\se S$, $Y\se T$, where $x=|X|$ and $y=|Y|$. We will show that for all $X\subseteq S$ and $Y\se T$, with $x=|X|$ and $y=|Y|$, we have \bel{cond0} m-C \min\{x,y\}\ge \sum_{a\in X} s_a +\sum_{v\in Y} t_v -xy. \end{equation} Equivalently, $ xy-C \min\{x,y\}\ge \sum_{a\in X} s_a -\sum_{v\in T\sm Y} t_v$. Note that with the previous observation and Theorem~\ref{t:mirsky}, this implies that there is a matrix $B$ which is the adjacency matrix of the desired bipartite graph. For (a), suppose first that $x\ge 2t+C$.
Then using $\sum_{a\in X} s_a\le m$ and $\sum_{v\in Y} t_v \le y\Delta_T \le 2y t$, we find that the right hand side of~\eqn{cond0} is at most $m-yC$, and~\eqn{cond0} follows. A symmetric argument works if $y\ge 2s+C$. So we may assume that neither of these occurs. Then $$ \sum_{a\in X} s_a +\sum_{v\in Y} t_v \le x\Delta_S+y\Delta_T\le (2t+C)2s+(2s+C)2t $$ and the left hand side is at least $m-C(x+y)/2\ge m- C(s+t)-C^2$. Thus~\eqn{cond0} follows if we show that $m\ge 8st+ 5C(s+t)+C^2$, i.e.\ if $1\ge 8m/\ell n + 5C(1/\ell+1/n) +C^2/m$ (since $s=m/\ell$ and $t=m/n$). This holds for $\ell$ and $n$ sufficiently large because $ n\le m\le \ell n/9$. Now consider (b). Without loss of generality, since $S$ and $T$ can be interchanged along with $X$ and $Y$, $\ell$ and $n$ etc., we assume $x\ge y$. First consider the case that $x\le \sqrt m / 2$. Then $y\le \sqrt m / 2$, and so the right hand side of~\eqn{cond0} is at most $$ x\Delta_S +y\Delta_T \le m/2-C(x+y), $$ which implies~\eqn{cond0}. On the other hand, if $ x>\sqrt m / 2$ then we can bound the first summation in~\eqn{cond0} by $m$ and the second one by $y\Delta_T\le y\sqrt m/2-yC$, and use $xy> y\sqrt m/2$ to obtain~\eqn{cond0}. \end{proof} \subsection{Recursive relations}\lab{s:recursive} In this subsection we collect results about recursive relations that were obtained in~\cite{lw2018}. The results were stated for an arbitrary set $\cA$ of allowable pairs in $\binom{V}{2}$. Recall the definitions of the probabilities $P_{av}({\bf d})$ and $Y_{avb}({\bf d})$ in~\eqref{realProb}, and of the ratio $R_{ab}({\bf d})$ in~\eqref{realRatio}. \begin{proposition}[Proposition 3.1 in \cite{lw2018}]\thlab{l:recurse} Let ${\bf d}$ be a sequence of length $N$ and let $\cA\se \binom{[N]}{2}$. \begin{itemize} \item[$(a)$] Let $a,v\in V$.
If $\mathcal{N}_{av}({\bf d})>0$ then
\begin{equation*}
P_{av}({\bf d}) =d_{v} \Bigg(\sum_{b\in \cA^*(v)} R_{ba} ( {\bf d}- {\ve_v}) \frac{1-P_{bv}({\bf d} - {\ve_b} - {\ve_v})} {1-P_{av}({\bf d} - {\ve_a} - {\ve_v})}\Bigg)^{-1}.
\end{equation*}
\item[$(b)$] Let $a,b\in V$. If ${\bf d}-{\ve_b}$ is $\cA$-realisable then
\begin{align}\lab{Ratformula}
R_{ab} ( {\bf d}) &= \frac{d_{a}}{d_{b}}\cdot \frac{ 1-\ensuremath {B}(a,b, {\bf d} - {\ve_b})}{ 1-\ensuremath {B}(b,a, {\bf d} - {\ve_a})},
\end{align}
where
\begin{align}\lab{eq:bad}
\ensuremath {B}(i,j, {\bf d}') & = \frac{1}{d_{i}}\Bigg(\sum_{ v \in \cA(i)\setminus \cA(j)} P_{iv}({\bf d}') + \sum_{ v \in \cA(i)\cap \cA(j) } Y_{ivj}({\bf d}') \Bigg),
\end{align}
provided that $\ensuremath {B}(b,a, {\bf d} - {\ve_a}) \ne 1$.
\item[$(c)$] Let $a,v,b$ be distinct elements of $V$. If ${\bf d}-{\ve_a}-{\ve_v}$ is $\cA$-realisable then
$$Y_{avb}({\bf d})=\frac{P_{av}({\bf d})\big(P_{bv}({\bf d}-{\ve_a}-{\ve_v})-Y_{avb}({\bf d}-{\ve_a}-{\ve_v})\big)} {1-P_{av}({\bf d}-{\ve_a}-{\ve_v})}.$$
\end{itemize}
\end{proposition}
These recursive relations motivated the definition of operators in~\cite{lw2018} that we restate here. Let $\Z^N$ denote the set of non-negative integer sequences of length $N$. For a given integer $N$ and a set $\cA\se \binom{[N]}{2}$ we define $\oA$ to be the set of ordered pairs $(u,v)$ with $\{u,v\}\in \cA$. Ordered pairs are needed here because, although the functions of interest are symmetric in the sense that the probability of an edge $uv$ is the same as $vu$, our approximations to the probability do not obey this symmetry. Similarly, let $\oA_2$ denote the set of ordered triples $(u,v,w)$ with $u$, $v$ and $w$ all distinct and $\{u,v\}$, $\{v,w\} \in \cA$. Suppose we are given ${\bf p}: \oA\times\Z^N\to\R_{\ge 0}$, ${\bf y}: \oA_2\times \Z^N\to\R_{\ge 0}$ and ${\bf r}: [N]^2 \times \Z^N\to\R_{\ge 0}$.
We write ${\bf p}_{av}({\bf d})$ for ${\bf p}(a,v,{\bf d})$ (where ${\bf d}\in \Z^N$), and remind the reader that in this paper, a subscript $av$ always denotes an ordered pair rather than an edge. Similarly, we write ${\bf y}_{avb}({\bf d})$ for ${\bf y}(a,v,b,{\bf d})$ and ${\bf r}_{ab}({\bf d})$ for ${\bf r}(a,b,{\bf d})$. We also define an associated function $\ensuremath {{\bf bad}}({\bf p},{\bf y})$ as follows. For ${\bf d}\in \Z^N$ and $a,b\in [N]$, set $\ensuremath {{\bf bad}}({\bf p},{\bf y})(a,a,{\bf d}) = 0$ and, for $a\neq b$,
\begin{equation}\lab{def:bad}
\ensuremath {{\bf bad}}({\bf p},{\bf y})(a, b, {\bf d}) = \frac{1}{d_{a}} \left( \sum_{v\in \cA(a)\sm \cA(b)} {\bf p}_{av}({\bf d}) + \sum_{v\in \cA(a)\cap \cA(b)} {\bf y}_{avb}({\bf d}) \right).
\end{equation}
We define three operators ${\cal P}({\bf p},{\bf r})$, ${\cal Y}({\bf p},{\bf y})$ and ${\cal R}({\bf p},{\bf y})$, acting on ${\bf p}$, ${\bf y}$ and ${\bf r}$ as above, as follows. For ${\bf d} \in \Z^N$ and $a,v,b \in [N]$ we set
\begin{align}
{\cal P}({\bf p},{\bf r})(a,v,{\bf d}) &= d_{v} \left(\sum_{b\in \cA(v)} {\bf r}_{ba} ({\bf d} - {\ve_v}) \frac{1-{\bf p}_{bv}({\bf d} - {\ve_b} - {\ve_v})} {1-{\bf p}_{av}({\bf d} -{\ve_a} - {\ve_v})}\right)^{-1} \text{ for } (a,v)\in \oA,\lab{F1def}\\
{\cal Y}({\bf p},{\bf y})(a,v,b,{\bf d}) &= \frac{{\bf p}_{av}({\bf d})\big({\bf p}_{bv}({\bf d}-{\ve_a}-{\ve_v})-{\bf y}_{avb}({\bf d}-{\ve_a}-{\ve_v})\big)} {1-{\bf p}_{av}({\bf d}-{\ve_a}-{\ve_v})} \text{ for } (a,v,b)\in \oA_2, \lab{FYdef}\\
{\cal R}({\bf p},{\bf y})_{ab}({\bf d})&= \frac{d_{a}}{d_{b}}\cdot \frac{1- \ensuremath {{\bf bad}}({\bf p},{\bf y})(a,b,{\bf d}-{\ve_b})}{1- \ensuremath {{\bf bad}}({\bf p},{\bf y})(b,a, {\bf d} - {\ve_a})} \text{ for } a,b\in S \text{ or } a,b\in T. \lab{F2def}
\end{align}
We observed in~\cite{lw2018} that these operators are ``contractive'' in a certain sense, for particular functions ${\bf p}$ and ${\bf y}$, defined as follows.
\begin{definition}\thlab{Pi-defn}
Let $\D_0\se \Z^N$ and let $\mu\in\R$. We use $\Pi_{\mu}(\D_0)$ to denote the set of pairs of functions $({\bf p},{\bf y})$ with ${\bf p}:\oA\times \Z^N \to \R_{\ge 0}$ and ${\bf y}:\oA_2\times \Z^N \to \R_{\ge 0}$ such that for all balanced ${\bf d}\in \D_0$, we have
\begin{enumerate}[label={$\mathrm{(}\Pi\mathrm{\alph*}\mathrm{)}$}]
\item\label{Pi-a} $0\le{\bf p}_{av}({\bf d}) \le \mu$ for all $ (a,v) \in \oA$,
\item\label{Pi-b} $\sum_{v\in \cA(a)\cap\cA(b)} {\bf y}_{avb}({\bf d})\le \mu d_a $ for all $a\ne b\in [N]$, and
\item\label{Pi-c} $0\le {\bf y}_{avb}({\bf d})\le \mu {\bf p}_{bv}({\bf d})$ for all $ (a,v,b)\in \oA_2$.
\end{enumerate}
\end{definition}
For the next lemma, we need to adapt~\cite[Lemma 5.2]{lw2018} to the present bipartite setting. Recall the definitions of $\cA^{\mathrm{bi}}$ and $\cA^{\mathrm{di}}$, and of $S$ and $T$ and $a'$ in Subsection~\ref{s:notation}. In this setting, $\cA(a)\cap\cA(b)\neq\emptyset$ if and only if both $a,b\in S$ or both $a,b\in T$. For such $a,b$ we have $|\cA(a)\sm\cA(b)| \leq 1$. Further note that $(a,v)\in\oA$ if and only if $a\in S$ and $v\in T$ in the bipartite case or $v\in T\sm\{a'\}$ in the digraph case (or $S$ and $T$ swapped), and thus $(a,v,b)\in\oA_2$ if and only if both $a, b\in S$, $a\neq b$ and $v\in T$ (bipartite) or $v\in T\sm\{a',b'\}$ (digraph); or $S$ and $T$ swapped. Also, for ${\bf d}=({\bf s},{\bf t})$ of length $\ell+n$, we let $Q_r^0({\bf d})$ and $Q_r^S({\bf d})\se\Z^{\ell+n}$ be the sets of balanced and of $S$-heavy vectors of non-negative integers, respectively, that have $L_1$-distance at most $r$ from ${\bf d}$. Recall that we use $1\pm \xi$ to denote a quantity between $1-\xi$ and $1+\xi$ inclusively. With these definitions, Lemma~5.2 in~\cite{lw2018} specialises to the following.
\begin{lemma}\thlab{l:errorImplication}
There is a constant $C>0$ such that the following holds.
Let $\ell, n$ be integers, $N=\ell+n$, and let $\cA$ be either $\cA^{\mathrm{bi}}$ or $\cA^{\mathrm{di}}$. Let ${\bf d}\in\Z^{\ell+n}$ such that $d_a>1$ for all $a\in [N]$. Let $0<\xi \le 1$ and $0<\mu_0=\mu_0(\ell,n) <C$. Let $({\bf p},{\bf y})$ and $({\bf p}',{\bf y}')$ be members of $\Pi_{\mu_0}(Q_2^0({\bf d}))$, and let ${\bf r}, {\bf r}':[N]^2\times \Z^N \to \R$. Let $a,b\in S$, $v\in T$.
\begin{enumerate}[label={(\alph*)}]
\item If ${\bf d}$ is $S$-heavy, ${\bf p}_{cw}({\bf d}')={\bf p}'_{cw}({\bf d}')(1 \pm\xi )$ for all $ (c,w)\in \oA$ and all ${\bf d}' \in Q^0_{1}({\bf d})$, and ${\bf y}_{cwh}({\bf d}')={\bf y}'_{cwh}({\bf d}')(1 \pm\xi )$ for all $(c,w,h)\in\oA_2$ and all ${\bf d}' \in Q^0_{1}({\bf d})$, then
$$ {\cal R}({\bf p},{\bf y})_{ab}({\bf d})= {\cal R}({\bf p}',{\bf y}')_{ab}({\bf d})(1+O\left(\mu_0\xi \right)). $$
\item If ${\bf d}$ is balanced, $v\neq a'$, ${\bf p}_{cv}({\bf d}')={\bf p}'_{cv}({\bf d}')(1 \pm\xi )$ for all $c \in S\sm\{v'\}$ and all ${\bf d}' \in Q^0_{2}({\bf d})$, and ${\bf r}_{ca}({\bf d}')={\bf r}'_{ca}({\bf d}')(1 \pm\mu_0\xi)$ for all $c \in S\sm\{v'\}$ and all ${\bf d}'\in Q^S_1 ({\bf d})$, then
$$ {\cal P}({\bf p},{\bf r})_{av}({\bf d})={\cal P}({\bf p}',{\bf r}')_{av}({\bf d})\left(1+O\left(\mu_0\xi \right)\right). $$
\item If ${\bf d}$ is balanced, $a\neq b$, $v\not\in\{a', b'\}$, ${\bf p}_{cv}({\bf d}')={\bf p}'_{cv}({\bf d}')(1 \pm\mu_0\xi )$ for all $c \in S\sm\{v'\}$ and all ${\bf d}' \in Q^0_{2}({\bf d})$, and ${\bf y}_{cwh}({\bf d}')={\bf y}'_{cwh}({\bf d}')(1 \pm\xi )$ for all $(c,w,h)\in\oA_2$ and all ${\bf d}' \in Q^0_{2}({\bf d})$, then
$$ {\cal Y}({\bf p},{\bf y})_{avb}({\bf d})={\cal Y}({\bf p}',{\bf y}')_{avb}({\bf d})\left(1+O\left(\mu_0\xi \right)\right). $$
\end{enumerate}
The constants implicit in $O(\cdot)$ are absolute.
\end{lemma}
%
\subsection{Concentration of random variables}
When ${\bf d}=({\bf s},{\bf t})$ is the degree sequence of either $\mathcal{G}(\ell,n,m)$ or $\mathcal{B}_m(\ell,n)$, we need that $\sigma^2({\bf s})$, $\sigma^2({\bf t})$ and $\sigma({\bf s},{\bf t})$ (in the digraph case) are concentrated (to be used in Step 4 of the template given in Subsection~\ref{s:template}). The following is due to McDiarmid~\cite{McD}; see, e.g., Lemma~4.1 in \cite{lw2018}.
\begin{lemma}[McDiarmid] \thlab{l:subsetConc}
Let $c>0$ and let $f$ be a function defined on the set of subsets of some set $U$ such that $|f(S)-f(T)|\le c$ whenever $|S|=|T|=m$ and $|S\cap T|=m-1$. Let $S$ be a randomly chosen $m$-subset of $U$. Then for all $\alpha>0$ we have
$$ \pr\left(|f(S)-{\bf E} f(S)| \ge \alpha c\sqrt m \right) \le 2\exp (- 2\alpha^2). $$
\end{lemma}
The following are the concentration results we need for both the sparse and the medium-dense ranges.
\begin{lemma} \thlab{l:sigmaConcBip}
Let $\ell, n, m$ be integers, let ${\bf d}=({\bf s},{\bf t})$ be a random sequence drawn from $\cD(\mathcal{G}(\ell,n,m))$, $\cD(\vec \mathcal{G}(n,m))$, or either of the binomial models $\mathcal{B}_m(\ell,n)$ and $\vec\mathcal{B}_m(n)$, and let $s = m/\ell$ and $t =m/n$, $\mu=\mu({\bf d})=m/\big(n(\ell-\delta^{\mathrm{di}})\big)$. Let $a\in S$, $v\in T$. Then the following hold.
\begin{enumerate}[label={(\roman*)}]
\item For all $\alpha >0$ we have
$$\Pr\left(|s_a-s|\ge \alpha \right)\le 2\exp\bigg(-\frac{\alpha^2}{2(s+\alpha/3)}\bigg), \ \Pr\left(|t_v-t|\ge \alpha \right)\le 2\exp\bigg(-\frac{\alpha^2}{2(t+\alpha/3)}\bigg).$$
\item If $\log^3 n +\log^3 \ell =o(m)$ and $(\log n)/\sqrt n +(\log^{3/2} n)/\sqrt{m}=o(\alpha)$ then
$$\pr\left(|\sigma^2({\bf t})- {\bf Var}\, t_v| \ge \alpha t +1/n\right) = o(n^{-\omega }),$$
where ${\bf Var}\, t_v = t (1-\mu)(1-1/n) (1+O(1/n\ell))$; if $\log^3 n +\log^3 \ell =o(m)$ and $(\log \ell)/\sqrt \ell +(\log^{3/2} \ell)/\sqrt{m}=o(\alpha)$ then
$$\pr\left(|\sigma^2({\bf s})- {\bf Var}\, s_a| \ge \alpha s +1/\ell\right) = o(\ell^{-\omega }),$$
where ${\bf Var}\, s_a = s (1-\mu)(1-1/\ell) (1+O(1/n\ell))$. Furthermore, for ${\bf d}=({\bf s},{\bf t})$ in $\cD(\vec \mathcal{G}(n,m))$ or $\vec\mathcal{B}_m(n)$,
$$ \pr\big(|\sigma({\bf s},{\bf t}) - {\bf Covar}( s_a, t_v ) | \ge \alpha s +1/n\big) = o(n^{-\omega }), $$
where $ {\bf Covar}( s_a, t_v ) = O( \mu)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proofs of these concentration results are routine, so we just point out the differences compared with the proof of~\cite[Lemma 6.2]{lw2018}. Note that $s_a$ is distributed hypergeometrically with parameters $n(\ell-\delta^{\mathrm{di}})$, $m$, $n-\delta^{\mathrm{di}}$ in all four models, and similarly, each $t_v$ is distributed hypergeometrically with parameters $n(\ell-\delta^{\mathrm{di}})$, $m$, $\ell-\delta^{\mathrm{di}}$. Hence, the claims in (i), and in (ii) for $\sigma^2({\bf s})$ and $\sigma^2({\bf t})$, follow from the proof of Lemma 6.2 in~\cite{lw2018} with only trivial adjustments. For $\sigma({\bf s},{\bf t})$, define
$$ g(a,b)= \sign(a-b) \min\{|a-b|,\sqrt x\}, $$
where $\sign(y)$ is $1$, $-1$ or 0 if $y$ is positive, negative or 0, respectively, and where $x$ is a function that satisfies $s\log n+\log^2n\ll x\ll \alpha^2sn/\log n$ as in the proof of~\cite[Lemma 6.2]{lw2018}.
Let $f=\sum_{i=1}^n g(s_i,s)g(t_i,t)$, and adapt the rest of the proof of Lemma~6.2 in~\cite{lw2018} in the obvious way. This gives the required concentration bound for $\sigma({\bf s},{\bf t})$ near its expected value, which is $\frac{1}{n} \sum_{b\in S} {\bf Covar}(s_b,t_{b'}) = {\bf Covar}(s_a, t_{a'})$. On the other hand, we can bound ${\bf Covar}(s_a,t_{a'})$ as follows. In $\cD(\vec \mathcal{G}(n,m))$, the joint distribution of $(s_a,t_{a'})$ is multivariate hypergeometric, with $m$ edges chosen from $n(n-1)$ positions, and $s_a$ and $t_{a'}$ are the counts for disjoint subsets of size $n-1$ each. Thus, the well-known formula gives
$$ {\bf Covar}(s_a,t_{a'}) = -\frac{m(n-1)^2(n(n-1)-m)}{(n(n-1))^2(n(n-1)-1)}=O(m/n^2), $$
which establishes the final claim for $\cD(\vec \mathcal{G}(n,m))$. In $\vec\mathcal{B}_m(n)$ the random variables $s_a$ and $t_{v}$ are independent since we condition on $M_1({\bf s}) = m$ and $M_1({\bf t})=m$ separately. Thus the covariance is 0.
\end{proof}
\section{A formula for sparse digraphs and bipartite graphs}
\lab{s:sparseBip}
As mentioned in the introduction, our argument is based on~\cite{lw2018}, and this section in particular has much in common with the corresponding argument given there for the graph case. Recall that for a given sequence ${\bf d}=({\bf s},{\bf t})$ we define $\Delta_S=\Delta({\bf s})=\max_a(s_a)$, and similarly $\Delta_T=\Delta({\bf t}) =\max_v(t_v)$.
\begin{proof}[Proof of \thref{t:sparseCaseBip}]
If $\D=\emptyset$, there is nothing to prove. So fix ${\bf d}^*\in\D$ and define $\hat\Delta_S =2\Delta_S({\bf d}^*)+\ell^{\varepsilon/6}$ and $\hat\Delta_T =2\Delta_T({\bf d}^*)+n^{\varepsilon/6}$. Let $\D^+$ contain all sequences ${\bf d}\in \Z^{\ell+n}$ with $M_1({\bf s})=M_1({\bf t})=m$, $\Delta_S({\bf d})\le \hat\Delta_S$ and $\Delta_T({\bf d})\le \hat\Delta_T$.
For an integer $r\ge 0$, denote by $Q_r^0$ (or $Q_r^S$) the set of all balanced (or $S$-heavy, respectively) sequences in $\Z^{\ell+n}$ that have $L_1$ distance at most $r$ from some sequence in $\D^+$. (Recall that we define ${\bf d}=({\bf s},{\bf t})$ to be balanced if $M_1({\bf s})=M_1({\bf t})$ and we call it $S$-heavy if $M_1({\bf s})=M_1({\bf t})+1$.) We start by estimating the ratios of the probabilities of adjacent degree sequences in the random bipartite graph model and the random digraph model, as prescribed by Step 1 of the template in Subsection~\ref{s:template}, using the following. Recall the definition of $R_{ab}$ in~\eqref{realRatio} with $\cA$ being $\cA^{\mathrm{bi}}$ or $\cA^{\mathrm{di}}$.
\begin{claim}\thlab{RSparseBiAndDi}
Uniformly for sequences ${\bf d}=({\bf s},{\bf t})\in Q_1^S$ and for $a, b \in S$
$$ R_{ab}({\bf d})=\frac{s_a}{s_b} \left(1 +\frac{(s_a-s_b) M_2 +( t_{a'}-t_{b'}) \delta^{\mathrm{di}} M_1 }{M_1^2} \right) \left(1+O\left(\frac{\hat\Delta_S^3\hat\Delta_T^3}{t m^2}\right)\right), $$
where $M_1 = M_1({\bf s}) -1 = M_1({\bf t})$, $M_2=M_2({\bf t})=\sum_{v\in T}t_v(t_v-1)$ and $\delta^{\rm di}$ is 0 in the bipartite case and 1 in the digraph case.
\end{claim}
\begin{proof}
Note that the claim is true when $a=b$ as $R_{aa} = 1$ by definition. Hence, we can assume $a\neq b$ in the remainder of the proof. First, consider instead ${\bf d}=({\bf s},{\bf t})\in\D^+$, and let $\ell_1$ and $n_1$ be the numbers of non-zero coordinates in ${\bf s}$ and ${\bf t}$, respectively. By definition, $s \ell = M_1({\bf s})\le \ell_1\Delta_S({\bf d})$. By assumption we therefore have
\bel{deltas}
\Delta_S({\bf d})\le \hat\Delta_S \ll (s\ell)^{1/3} \leq \left(\Delta_S({\bf d}) \ell_1\right)^{1/3},
\end{equation}
which readily implies that $\Delta_S({\bf d}) =o(\ell_1^{1/2})$. Similarly we find that $\Delta_T({\bf d})=o(n_1^{1/2})$.
Now apply Lemma~\ref{lem:bipRealisable}(b), with $F=\emptyset$ for the bipartite graph case, and $F$ being the set of disallowed edges from $S$ to $T$ in the digraph case, to the sequence formed by the non-zero coordinates of ${\bf d}$ to deduce that $\mathcal{N}({{\bf d}})>0$ for $\ell,n$ sufficiently large. We can deduce the same conclusion for all ${\bf d}\in Q^0_{8}$, since $\Delta_S({\bf d})$, $\Delta_T({\bf d})$, $\ell_1$ and $n_1$ can only change by a bounded additive term when moving from such ${\bf d}$ to the closest member of $\D^+$. Similarly, $M_1({\bf s})=\sum_i s_i=s\ell+O(1)$ for all ${\bf d}=({\bf s},{\bf t})\in Q^0_{8}$. Thus, all such ${\bf d}$ are $\cA$-realisable and $\hat\Delta_S=o(M_1({\bf s})^{1/3})$ and $\hat\Delta_T=o(M_1({\bf s})^{1/3})$, by~\eqn{deltas}. This and \thref{l:simpleSwitching} imply that
\bel{Pbound}
P_{av}({\bf d})=O(\hat\Delta_S\hat\Delta_T/M_1({\bf s})) = o(1) \quad \mbox{for all ${\bf d}\in Q^0_{8},\ av\in \cA$}.
\end{equation}
Next consider any distinct $a,b\in S$, $v\in T$ such that $av,bv\in\cA$ (i.e.~$(a,v,b)\in\oA_2$) and ${\bf d}\in Q_6^0$ with $d_a>0$ and $d_v>0$. Then ${\bf d}-{\ve_a}-{\ve_v} \in Q_{8}^0$ and hence $\mathcal{N}({\bf d}-{\ve_a}-{\ve_v})>0$ from above, and also $P_{av}({\bf d}-{\ve_a}-{\ve_v})=O(\hat\Delta_S\hat\Delta_T/s \ell) <1$ using~\eqref{Pbound}. Thus, for $\ell,n$ sufficiently large, $\mathcal{N}_{av}({\bf d})>0$ by \thref{trick17}, and we have $\mathcal{N}_{av}({\bf d})<\mathcal{N}({\bf d})$ since $P_{av}({\bf d})<1$ for similar reasons. This establishes the hypotheses for $\mathcal{N}_{av}({\bf d})$ and $\mathcal{N}({\bf d})$ in \thref{l:recurse} whenever they are needed below. It now follows that $Y_{avb}({\bf d})=O\left((\hat\Delta_S\hat\Delta_T/M_1({\bf s}))^2\right)$ for ${\bf d}\in Q_6^0$; if $d_a$ or $d_v$ is 0 then this is immediate, and otherwise it follows from~\eqn{FYdef} in view of~\eqref{Pbound}, and noting that the numerator is non-negative by definition.
Next, definition~\eqn{eq:bad} yields $\ensuremath {B}(a,b,{\bf d})=O\left((\hat\Delta_S\hat\Delta_T)^2/t M_1({\bf s})\right)$ for all distinct $a,b\in S$ and all ${\bf d}\in Q_{6}^0$ with $d_a>0$. (In the current setting $|\cA(a)\setminus \cA(b)|$ is 0 and $\cA(a)\cap \cA(b) = T$ for the bipartite case, and $\cA(a)\setminus \cA(b) =\{b'\}$ and $\cA(a)\cap \cA(b) = T\sm\{a',b'\}$ for the digraph case.) Thus~\eqref{Ratformula} gives $R_{ab}=(d_a/d_b)(1+O((\hat\Delta_S\hat\Delta_T)^2/t M_1({\bf s})))$ for all ${\bf d}\in Q_{5}^S$ and all distinct $a,b\in S$ such that $d_a, d_b>0$. Now let ${\bf d}\in Q_4^0$ and $v\in T$ with $d_v>0$. We want to evaluate $\sum_{b\in\cA^*(v)}d_b$ in \thref{l:recurse}(a) and recall that $\cA^*(v)$ is the set of vertices $b$ such that $bv\in \cA$ and $\mathcal{N}_{bv}({\bf d})>0$. If $d_b>0$ then $\mathcal{N}_{bv}({\bf d})>0$ as noted above. Therefore $\sum_{b\in\cA^*(v)}d_b = \sum_{b\in\cA(v)}d_b = M_1({\bf s})- \delta^{\mathrm{di}}d_{v'}$ for such ${\bf d}$. Thus \thref{l:recurse} gives
\bel{Pavsparse}
P_{av}({\bf d}) =d_ad_v/M_1({\bf s}) \left(1+O\left((\hat\Delta_S\hat\Delta_T)^2/t M_1({\bf s})\right)\right)
\end{equation}
for all ${\bf d}\in Q_4^0$, $av\in \cA$ (if $d_a$ and $d_v$ are non-zero, the proposition applies as mentioned above, and if either is 0 then the claim holds trivially). Using a similar argument,~\eqn{FYdef} gives
$$ Y_{avb}({\bf d})=\frac{d_a[d_v]_2 d_b}{M_1({\bf s})^2}\left(1+O\left(\frac{(\hat\Delta_S\hat\Delta_T)^2}{t M_1({\bf s})}\right)\right) $$
for all ${\bf d}\in Q^0_2$, all $(a,v,b)\in \oA_2$ with $a\in S$, and where $[x]_2$ denotes the falling factorial $x(x-1)$. Next, apply these results to the definition~\eqn{eq:bad} of $\ensuremath {B}$ for ${\bf d}\in Q^0_2$ and distinct $a,b\in S$, and note that $\cA(a)\setminus \cA(b) = \{b'\}$ for the digraph case and is empty in the bipartite case.
This gives the sharper estimate
$$\ensuremath {B}(a,b,{\bf d}) = \left(\frac{d_b M_2({\bf t})}{M_1({\bf s})^2} +\delta^{\mathrm{di}} \frac{d_{b'}}{M_1({\bf s})} \right) \left(1+O\left(\frac{\hat\Delta_S^2\hat\Delta_T^2}{t M_1({\bf s})}\right) \right) +O\left(\frac{\hat\Delta_T^2 \hat\Delta_S}{M_1({\bf s})^2}\right) $$
for ${\bf d}\in Q^0_2$, which we note is $O(\hat\Delta_S\hat\Delta_T/M_1({\bf s}))$ as $M_2({\bf t}) \leq \hat\Delta_T M_1({\bf t})=\hat\Delta_T M_1({\bf s})$. Recalling that $t\le \hat\Delta_T+2$, let us write $\hat\Delta_T^2 \hat\Delta_S/M_1({\bf s})^2 =O(\hat\Delta_T^3 \hat\Delta_S/tM_1({\bf s})^2)$. Thus, for all ${\bf d}\in Q_1^S $ and all distinct $a,b\in S$, and noting that $M_2=M_2({\bf t})$ changes by a negligible additive term $O(\hat\Delta_T)$ under bounded perturbations of the elements of the sequence ${\bf d}$,
\bel{Req1}
R_{ab}({\bf d}) =\frac{d_a}{d_b}\frac{\big(1 - ( d_b-1)M_2/M_1^2 - \delta^{\mathrm{di}}d_{b'}/M_1 \big)} {\big(1 - ( d_a-1)M_2/M_1^2 - \delta^{\mathrm{di}}d_{a'}/M_1\big)} \bigg(1+O\bigg( \frac{(\hat\Delta_S\hat\Delta_T)^3}{tM_1^2}\bigg)\bigg),
\end{equation}
by \thref{l:recurse}(b), which implies the claim since $d_a=s_a$ and $d_{a'}=t_{a'}$, and similarly for $d_b$ and $d_{b'}$.
\end{proof}
We next make the definitions of probability spaces necessary to apply Lemma~\ref{l:lemmaX} (see Steps 2 and 3 in the template). Let $\Omega$ be the underlying set of $\mathcal{B}_m(\ell,n)$ in the bipartite case and of $\vec\mathcal{B}_m(n)$ in the digraph case. Let
$$ \W=\left\{ {\bf d}\in \D^+ : \max\{ \sigma^2({\bf s})\ell,\sigma^2({\bf t})n \} \le 2m\right\} $$
in the bipartite case, and the same but with the additional restriction that
$$ |\sigma({\bf s},{\bf t})|\le 2 s $$
in the digraph case. Let $ \mathcal{S}'=\cD(\mathcal{G}(\ell,n,m))$ in the bipartite case, and $ \mathcal{S}'=\cD(\vec \mathcal{G}(n,m))$ in the digraph case.
We now turn to define a second probability space $\mathcal{S}$ on the underlying set $\Omega$ (see Step 3 of the template). Recall the definition of ${\tilde H}({\bf d})$ in~\eqn{corrH} and let ${H}({\bf d}) =\pr_{\mathcal{B}_m}({\bf d}) {\tilde H}({\bf d})$, where $\mathcal{B}_m$ is $\mathcal{B}_m(\ell,n)$ in the bipartite case and $\vec\mathcal{B}_m(n)$ in the digraph case. Now define the probability function in $\mathcal{S}$ by \bel{PS} \pr_\mathcal{S}({\bf d}) = {H}({\bf d})/\sum_{ {\bf d}'\in\W}{H}({\bf d}') = \frac{ \pr_{\mathcal{B}_m}({\bf d}) {\tilde H}({\bf d})}{{\bf E}_{\mathcal{B}_m} (\mathbbm{1}_{\W}{\tilde H})} \end{equation} for ${\bf d}\in \W$, and $\pr_{\mathcal{S}}({\bf d})=0$ otherwise. The graph $G$ has vertex set $\W$ where two vertices are adjacent if they are either of the form ${\bf d}-{\ve_a}$, ${\bf d}-{\ve_b}$ for some $a$ and $b$ in $S$, or ${\bf d}-{\ve_v}$, ${\bf d}-{\ve_w}$ for some $v$ and $w$ in $T$. We need to estimate the probability of $\W$ in the two probability spaces $\mathcal{S}$ and $\mathcal{S}'$ (see Step 4 in the template). For convenience, we simultaneously make a similar estimate of $\pr_{\mathcal{B}_m}(\W)$ for use outside the present proof. Note that $\pr_{\mathcal{S}}(\W)=1$ by definition. We combine estimating $\pr_{\mathcal{S}'}(\W)$ and estimating the expressions ${\tilde H} ({\bf d})$ and ${\bf E}_{\mathcal{B}_m} (\mathbbm{1}_{\W}{\tilde H})$ in \eqn{PS} for later use. In the following, let ${\bf d}$ be in either of $\mathcal{S}'$ or $\mathcal{B}_m$. We first claim that ${\bf d}\in \D^+$ with high probability. Clearly, $M_1({\bf s})=M_1({\bf t})$ for all such ${\bf d}$ since this is true for any bipartite graph (or digraph) with sequence $({\bf s},{\bf t})$ by the definition of $\mathcal{B}_m$. 
Letting $\Delta_S^*=\Delta_S({\bf d}^*)$ and $\Delta_T^*=\Delta_T({\bf d}^*)$, we have by definition $\hat\Delta_S\ge \Delta_S^*+\ell^{\varepsilon/12}\sqrt {\Delta_S^*}\ge s+\ell^{\varepsilon/12}\sqrt {\Delta_S^*}$ and similarly $\hat\Delta_T\ge t+n^{\varepsilon/12}\sqrt {\Delta_T^*}$. Thus, for ${\bf d}\in \Omega$, in either $\mathcal{S}'$ or $\mathcal{B}_m$,
\begin{align}\lab{aux885}
\pr (s_a > \hat\Delta_S)&\le \pr\big(s_a>s+ \ell^{\varepsilon/12 }\sqrt {\Delta_S^*}\big) =o(\ell^{- \omega}), \text{ and}\\
\pr(t_v > \hat\Delta_T)&\le \pr\big(t_v>t+ n^{\varepsilon/12 }\sqrt {\Delta_T^*}\big) =o(n^{- \omega})
\end{align}
by \thref{l:sigmaConcBip}(i) and noting that $\Delta_S^*,\Delta_T^*\to\infty$. The union bound now gives that, with probability $1- o(n^{- \omega}+\ell^{- \omega})$, $\Delta_S({\bf d})\le\hat\Delta_S$ and $\Delta_T({\bf d})\le\hat\Delta_T$ and hence ${\bf d}\in \D^+$, in both $\mathcal{S}'$ and $\mathcal{B}_m$. We next argue that the $\sigma$-terms are concentrated for ${\bf d}$ chosen in $\mathcal{S}'$ or $\mathcal{B}_m$. Let $\alpha = \max\{\log^4\ell/\sqrt{\ell},\log^4n/\sqrt{n}\}$ and note that the conditions $\log^3n+\log^3\ell=o(m)$, $(\log n)/\sqrt{n}+(\log^{3/2}n)/\sqrt{m}=o(\alpha)$, and $(\log \ell)/\sqrt{\ell}+(\log^{3/2}\ell)/\sqrt{m}=o(\alpha)$ in \thref{l:sigmaConcBip}(ii) follow from $n/\log^4n + \ell/\log^4\ell=o(m)$. Recalling that $\mu=m/\big(n(\ell-\delta^{\mathrm{di}})\big)$, we thus deduce that $\sigma^2({\bf t}) = t (1-\mu) \big(1+O(\alpha)\big)$ with probability $1-o(n^{-\omega})$, that $\sigma^2({\bf s})=s (1-\mu) \big(1+O(\alpha)\big)$ with probability $1-o(\ell^{-\omega})$, and, in the digraph case, $\sigma({\bf s},{\bf t})=O(\mu+s\alpha)$ with probability $1-o(n^{-\omega})$, by \thref{l:sigmaConcBip}(ii). This already implies that $\Pr_{\mathcal{S}'}(\W) = 1-o(n^{-\omega}+ \ell^{- \omega})$, in preparation for applying Lemma~\ref{l:lemmaX}, and similarly $\Pr_{\mathcal{B}_m}(\W) = 1-o(n^{-\omega})$.
Now note that if $\sigma^2({\bf t})= t (1-\mu)\big(1+O(\alpha)\big)$, then the term $\sigma^2({\bf t})/\big(t(1-\mu)\big)$ in the exponent of ${\tilde H}({\bf d})$ is $1+O(\alpha)$. Similarly for the term $\sigma^2({\bf s})/\big(s(1-\mu)\big)$. Furthermore, the term $\sigma({\bf s},{\bf t})/\big(s(1-\mu)\big)$ in the digraph case is $O(\alpha)$. It follows using the strong concentration shown in the previous paragraph that
\bel{HConc}
{\tilde H}({\bf d})=1+O(\alpha) \text{ with probability } 1-o(\bar n^{-\omega}) \text{ for ${\bf d} \in \mathcal{B}_m$},
\end{equation}
where $\bar n=\min\{n,\ell\}$. Recall that $\sigma^2({\bf s})\le 2s$, $\sigma^2({\bf t})\le 2t$ and, in the digraph case, $|\sigma({\bf s},{\bf t})|\le 2s$ for all ${\bf d}\in \W$. Thus, ${\tilde H}({\bf d})=\Theta(1)$ for ${\bf d}\in\W$, using the fact that $\mu < 1/2$, say. This and \eqn{HConc} then imply that ${\bf E}_{\mathcal{B}_m} (\mathbbm{1}_{\W}{\tilde H}) = 1+O(\alpha)$. To apply \thref{l:lemmaX} (see Step 5 in Subsection~\ref{s:template}), the final condition we need to show is that the ratios of probabilities satisfy
\bel{eq:target1}
\frac{\Pr_{\mathcal{S}'}({\bf d}-{\ve_a})}{\Pr_{\mathcal{S}'}({\bf d}-{\ve_b})} =e^{O(\delta)}\frac{\Pr_{\mathcal{S}}({\bf d}-{\ve_a})}{\Pr_{\mathcal{S}}({\bf d}-{\ve_b})}
\end{equation}
whenever ${\bf d}-{\ve_a}$ and ${\bf d}-{\ve_b}$ are elements of $\W$ that are adjacent in the auxiliary graph $G$ defined above, for $\delta=\delta(\hat\Delta_S,\hat\Delta_T)$ independent of ${\bf d}$ which we specify below, where the constant implicit in $O(\cdot)$ is independent of ${\bf d}$ and ${\bf d}^*$. Compare the ratio formula
\begin{align}\label{eq:ratioSparse}
\frac{\Pr_{\mathcal{S}}({\bf d}-{\ve_a})}{\Pr_{\mathcal{S}}({\bf d}-{\ve_b})} = \frac{{H}({\bf d}-{\ve_a})}{{H}({\bf d}-{\ve_b})}
\end{align}
in~\eqref{H} with the expression in \thref{RSparseBiAndDi} when $a$ and $b$ are in $S$.
Then use the identity $n\sigma^2({\bf t})=M_2-(t-1)M_1$ and recall that $M_1=\mu'n\ell$ in~\eqref{H} to deduce that \bel{ratios2} R_{ab}({\bf d}) = \frac{{H}({\bf s}-{\ve_a}, {\bf t})}{{H}({\bf s}-{\ve_b}, {\bf t})} \bigg(1+O\bigg( \frac{\hat\Delta_S^3\hat\Delta_T^3}{t ^3 n^2} \bigg)\bigg) \end{equation} whenever $a,b \in S$ and ${\bf d}\in Q_1^S$, where we use $(n+\delta^{\mathrm{di}}-s_b)/(n+\delta^{\mathrm{di}}-s_a)=\exp\big((s_a-s_b)/n+O(\Delta_S^2/n^2)\big)$, $1/(1-\mu)=1+O(\mu)$, $M_2({\bf t})=O(\Delta_T M_1)$ and $(\Delta_S\Delta_T)^2/M_1^2\le (\Delta_S\Delta_T)^3/tm^2$, $\Delta_S\le \hat\Delta_S$, $\Delta_T\le \hat\Delta_T$, and the most significant error term derives from \thref{RSparseBiAndDi}. The corresponding statements when $a,b \in T$ follow accordingly (after swapping $S\leftrightarrow T$, $s\leftrightarrow t$ and $\ell \leftrightarrow n$ in the conclusion of the argument). Equation~\eqn{ratios2} now implies that $$ \frac{ \pr_{\mathcal{S}'}({\bf d}-{\ve_a})}{ \pr_{\mathcal{S}'}({\bf d}-{\ve_b}) } = R_{ab}({\bf d}) = e^{O(\delta)}\frac{ \pr_{\mathcal{S}}({\bf d}-{\ve_a})}{ \pr_{\mathcal{S}}({\bf d}-{\ve_b})} $$ whenever ${\bf d}-{\ve_a}$ and ${\bf d}-{\ve_b}$ are adjacent elements of $\W$, where we may take $ \delta= (\hat\Delta_S\hat\Delta_T)^3(1/t m^2 + 1/s m^2)$ which is the error term in~\eqn{ratios2} together with the symmetric error after the swap. It is clear that the diameter $r$ of $G$ is at most $M_1 ({\bf s}) +M_1({\bf t})= 2m$. Lemma~\ref{l:lemmaX} then implies that $P_{\mathcal{S}'}({\bf d})=e^{O( r\delta+\varepsilon_0)}P_{\mathcal{S}}({\bf d})$ for ${\bf d}\in \W$, where $\varepsilon_0 = 1/n+1/\ell$. To proceed from here, since we found that ${\bf E}_{\mathcal{B}_m} (\mathbbm{1}_{\W}{\tilde H}) = 1+O(\alpha)$, equation~\eqn{PS} implies $P_{\mathcal{S}}({\bf d}) = {H}({\bf d})(1+O(\alpha))$ for ${\bf d}\in \W$. Hence, \bel{generalError} P_{\mathcal{S}'}({\bf d}^*)=e^{O(r\delta+\varepsilon_0+\alpha)} {H}({\bf d}^*). 
\end{equation}
Note that $\alpha=O(n^{\varepsilon-1/2}+\ell^{\varepsilon-1/2})$, and
$$r\delta+\varepsilon_0= O\left((\hat\Delta_S\hat\Delta_T)^3\left(\frac{1}{tm}+\frac{1}{sm}\right)\right) = O\left( \Delta_S({\bf d}^*)^3\Delta_T({\bf d}^*)^3 (\ell n)^{\varepsilon/2}\left(\frac{1}{tm}+\frac{1}{sm}\right)\right).$$
The theorem for ${\bf d}\in \W$ follows since $\mathcal{S}'$ is $\cD(\mathcal{G}(\ell,n,m))$ or $\cD(\vec\mathcal{G}(n,m))$, respectively, and by definition of ${H}$. On the other hand, to treat the elements of $\D^+\setminus \W$, where some $\sigma$-term may be unbounded, we still have~\eqn{ratios2} holding for all ${\bf d}\in Q_1^S$. For any ${\bf d}\in \D^+$, there is a telescoping product of such ratios starting with ${\bf d}'\in \W$, of length at most $2m$, which shows that $\pr_{\mathcal{S}'}({\bf d})/{H}({\bf d})= \pr_{\mathcal{S}'}({\bf d}')/{H}({\bf d}')(1+O(m\delta))$. From this, the required formula for $\pr_{\mathcal{S}'}({\bf d})$ follows for all ${\bf d}\in\D$, in both the bipartite graph and digraph cases.
\end{proof}
\section{Proof of \thref{t:mainbip,t:bipmodel,t:edgeprobability}}
\lab{s:denseBip}
\def\epsV{\varepsilon}
\def\epsA{\varepsilon}
In this section we prove Theorem~\ref{t:mainbip}. Following the template in Subsection~\ref{s:template} we first consider the ratio of probabilities of ``adjacent'' degree sequences. To estimate those ratios we first present functions that approximate the ratios and the probabilities. We write these approximations in parameterised form to facilitate identifying negligible terms. We express the formulae for the approximations of $P$, $Y$ and $R$ in both the bipartite graph case and the digraph case simultaneously, again using $\delta^{\mathrm{di}}$ as the indicator variable which is 1 in the digraph case and 0 in the bipartite case.
For integers $\ell$ and $n$, and a sequence of real numbers $\mu$, $\epsA_a$, etc., we define the expressions
\begin{eqnarray*}
\pi & = &\mu (1+\epsA_a)(1+\epsV_v) \left(1 - \frac{\mu\epsA_a\epsV_v- \epsA_a \sigma_T^2/t \ell -\epsV_v \sigma_S^2 /s n}{1-\mu} + \frac{\delta^{\mathrm{di}}(\epsV_{a'}+\epsA_{v'})\mu}{s}\right),\\
\rho & = &\frac{1+\epsA_a}{1+\epsA_b}\cdot\frac{1- \mu(1 + \epsA_b)+\mu/s} {1- \mu(1+ \epsA_a)+ \mu/s} \\
&& \times \left(1 +\frac{\epsA_a-\epsA_b} {(1-\mu)}\left(\frac{\sigma_T^2 }{(1-\mu)t\ell} - \frac{1}{\ell} \right) +\frac{\delta^{\mathrm{di}}(\epsV_{a'}-\epsV_{b'})\mu}{s(1-\mu)} \right).
\end{eqnarray*}
In the calculations below, there are small changes in most of the variables that turn out to have a negligible effect. Changes in the various occurrences of $\varepsilon$-type terms, however, need to be tracked precisely.
In particular, we use
\begin{itemize}
\item $\pi(x,y)$ to stand for $\pi$ with $\epsA_a,\epsV_v$ replaced by $x,y$,
\item $\pi(x,y,z,w)$ to stand for $\pi$ with $\epsA_a,\epsV_v,\epsV_{a'},\epsA_{v'}$ replaced by $x,y,z,w$,
\item $\rho(x,y,z,w)$ to stand for $\rho$ with $\epsA_a,\epsA_b,\epsV_{a'},\epsV_{b'}$ replaced by $x,y,z,w$.
\end{itemize}
Recall that we consider sequences ${\bf d} = ({\bf s},{\bf t})$, where ${\bf s}$ and ${\bf t}$ have length $\ell$ and $n$, respectively, and where $\ell = n$ in the digraph case. Also $\mu = \frac12 M_1({\bf d}) / |\cA|$, where $|\cA|=(1-\delta^{\mathrm{di}})n\ell+\delta^{\mathrm{di}}n(n-1)$, $s=(\sum_{a\in S}s_a)/\ell$ and $t=(\sum_{v\in T}t_v)/n$. Noting that $S\cap T=\emptyset$, we set $\varepsilon_a=(s_a-s)/s$ for $a\in S$, $\varepsilon_v=(t_v-t)/t$ for $v\in T$, $\sigma_S^2 = \sigma^2({\bf s})$, and $\sigma_T^2=\sigma^2({\bf t})$. Then with $*$ standing for either of $\mathrm{bi}$ or $\mathrm{di}$, we define
$$ P^*=\pi, \quad R^* = \rho,$$
$$ Y^*= \pi\cdot \pi(\epsA_b, \epsV_v-1/t, \epsV_{b'},\epsA_{v'})\cdot \left( 1+\frac{\mu(1 + \epsA_a)- \mu^2(1+\epsA_a+\epsA_b) }{t(1-\mu)} \right) , $$
so that $P_{av}^{\mathrm{bi}}$ etc.\ are functions of degree sequences. For example, the edge probability function for bipartite graphs with ``classical'' parameters $d_a$ etc.\ is given by
\begin{eqnarray*}
P_{av}^{\mathrm{bi}} & = & \frac{d_ad_v}{m} \left( 1-\frac{n\ell(d_a-s)(d_v-t)}{m(n\ell-m)}-\frac{(d_a-s)\sigma^2({\bf t})n}{ts(n\ell-m)} -\frac{(d_v-t)\sigma^2({\bf s})\ell}{ts(n\ell-m)}\right)
\end{eqnarray*}
for $a\in S$, $v\in T$. One can compute similar expressions for $P_{av}^{\mathrm{di}}$, the ratio functions $R_{ab}^{\mathrm{bi}}$ and $R_{ab}^{\mathrm{di}}$, and the 2-path probabilities $Y_{avb}^{\mathrm{bi}}$ and $Y_{avb}^{\mathrm{di}}$. In particular, we remark that $R^*_{ab}({\bf d})$ written with ``classical'' parameters is just the expression in~\eqref{H}.
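To make the role of the correction terms in $P_{av}^{\mathrm{bi}}$ concrete, here is a small brute-force sanity check (our own illustration, not part of the paper): it compares the displayed approximation, written in ``classical'' parameters, against the exact edge probability obtained by enumerating all bipartite graphs with a small prescribed degree sequence. The function names are ours.

```python
from itertools import product
from fractions import Fraction

def bip_graphs(s, t):
    """Yield all 0/1 biadjacency matrices with row sums s and column sums t."""
    ell, n = len(s), len(t)
    rows = [[r for r in product((0, 1), repeat=n) if sum(r) == s[i]]
            for i in range(ell)]
    for mat in product(*rows):
        if all(sum(row[j] for row in mat) == t[j] for j in range(n)):
            yield mat

def exact_edge_prob(s, t, a, v):
    """Exact P_{av}: fraction of graphs with degrees (s,t) containing edge av."""
    total = with_edge = 0
    for mat in bip_graphs(s, t):
        total += 1
        with_edge += mat[a][v]
    return Fraction(with_edge, total)

def approx_edge_prob(s, t, a, v):
    """The approximation P^bi_{av} displayed above, in 'classical' parameters."""
    ell, n, m = len(s), len(t), sum(s)
    sbar, tbar = Fraction(m, ell), Fraction(m, n)
    var_s = sum((x - sbar) ** 2 for x in s) / ell   # sigma^2(s)
    var_t = sum((x - tbar) ** 2 for x in t) / n     # sigma^2(t)
    da, dv = s[a], t[v]
    corr = (n * ell * (da - sbar) * (dv - tbar) / (m * (n * ell - m))
            + (da - sbar) * var_t * n / (tbar * sbar * (n * ell - m))
            + (dv - tbar) * var_s * ell / (tbar * sbar * (n * ell - m)))
    return Fraction(da * dv, m) * (1 - corr)
```

For a regular sequence such as ${\bf s}={\bf t}=(2,2,2)$ every correction term vanishes and both functions return $d_ad_v/m=2/3$, which is also the exact value by symmetry. For the irregular sequence ${\bf s}={\bf t}=(2,1,1)$ the exact probability of the edge joining the two degree-2 vertices is $4/5$ while the formula gives $7/10$; such tiny instances lie far outside the asymptotic ranges treated in this section, so only rough agreement should be expected. The exact probabilities always satisfy $\sum_{v\in T}P_{av}=s_a$, since every realisation has exactly $s_a$ edges at $a$.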
The functions $P^*$, $Y^*$ and $R^*$ are our ``guessed'' probability and ratio functions and we will show that they approximate the actual functions sufficiently well. The following implies that they are close to fixed points of the operators defined in~(\ref{F1def}--\ref{F2def}). \begin{lemma}\thlab{l:mapleHigher} Let $n, \ell$ be integers and let $1/2\le \varphi < 3/5$. Let $\cA$ be as in either the bipartite or the digraph case, let ${\bf d}=({\bf s},{\bf t})$ be a sequence of length $\ell+n$, let $\mu=M_1({\bf d})/2|\cA|$, and assume that $\mu<1/4$. Furthermore, let $s$ and $t$ be the average of ${\bf s}$ and ${\bf t}$, respectively, and $\varepsilon= \bar d^{\varphi-1}$, where $\bar d = \min\{ s, t\}$, and assume that $\max_{a\in S} |s_a - s|/s \leq \varepsilon$, $\max_{v\in T} |t_v - t|/t \leq \varepsilon$. Let $*$ stand for either of $\mathrm{bi}$ or $\mathrm{di}$. Then \begin{itemize} \item[(a)] ${\cal R}(P^*,Y^*) _{ab}( {\bf d})= R^*_{ab} ( {\bf d}) \left(1 + O\left(\mu\varepsilon^4\right)\right)$ for all $a,b\in S$, \item[(b)] ${\cal P}(P^*,R^*)_{av}({\bf d}) = P^*_{av}({\bf d}) \left(1 + O(\mu\varepsilon^4)\right)$ for all $a\in S$, $av\in \cA $ \item[(c)] ${\cal Y}(P^*,Y^*)_{avb}({\bf d}) = Y^*_{avb}({\bf d}) \left(1 + O(\mu\varepsilon^4)\right)$ for all distinct $a,b\in S$, $v\in T $, $av,vb\in\cA$. \end{itemize} \end{lemma} \begin{proof} This follows the proof of~\cite[Lemma 7.1]{lw2018} very closely, with modifications due to the bipartite setting. We first present some convenient approximations of $P^*$ and $\pi$ for use when their parameters have been slightly altered. Let ${\bf d}'=({\bf s}',{\bf t}')$ be a sequence where ${\bf s}'$ and ${\bf t}'$ are of length $\ell$ and $n$, respectively, such that ${\bf d}'$ is at $L_1$ distance $O(1)$ from ${\bf d}$, and with $d_a'=d_a-j_a$ and $d_v'=d_v-j_v$ for some $a\in S$, $v\in T$. 
Here and in the following, the bare symbols $\mu$, $\epsA_a$ and so on are defined with respect to the original sequence ${\bf d}$, whilst $\mu'$, $\epsA_a'$, etc., are defined with respect to ${\bf d}'$. For such a sequence ${\bf d}'$, the definition of $\mu({\bf d}')$ gives $\mu'= \mu +O(1/ n\ell)$. Therefore, the variables $\epsA_a$ and $\epsV_v$ change to $\epsA_a'=\epsA_a-j_a/s+O(1/\mu n\ell)$ and $\epsV_v'=\epsV_v-j_v/t+O(1/\mu n\ell)$. (Note that this takes into account that $\epsA_a'$ and $\epsV_v'$ are defined with respect to $s({\bf d}')$ and $t({\bf d}')$.) Furthermore, $$\sigma^2 ({\bf s}')-\sigma^2 ({\bf s})=O(( \max_{c\in S} |s_c-s| +1 )/ n )=O(\varepsilon s/n),$$ and similarly for $\sigma^2 ({\bf t}')$. Hence, by definition of $P^*=\pi$ and the preceding considerations, \bel{piwithprime2} P^*_{av}({{\bf d}'}) =\pi( \epsA_a-j_a/s,\epsV_v-j_v/t)\big(1+O( 1/\mu n\ell)\big). \end{equation} That is, the changes from $\mu$, $s$, $t$, $\sigma_S^2$ and $\sigma_T^2$ to $\mu'$, $s'$, $t'$, $(\sigma_S^2)'$ and $(\sigma_T^2)'$ are negligible in the formula for $P^*$. For (a), we note first that ${\cal R}( {P^*},{Y^*})_{aa}({\bf d}) = 1 = R^*_{aa}({\bf d})$ by definition of $\rho$ and ${\cal R}$ in~\eqref{F2def}. Assume now that $a\neq b$. Using~\eqn{F2def} to evaluate ${\cal R}( {P^*},{Y^*})_{ab}({\bf d})$, we estimate the expression $\ensuremath {{\bf bad}}(a,b,{\bf d}-{\ve_b})=\ensuremath {{\bf bad}}({P^*},{Y^*})(a,b,{\bf d}-{\ve_b})$ for which, in turn, we need to estimate $\sum Y^*_{avb}({\bf d}-{\ve_b})$, where the sum is over all $v\in T$ such that both $av$ and $bv$ are allowable (see \eqref{def:bad}).
By definition and~\eqref{piwithprime2}, $$ Y^*_{avb}({\bf d}-{\ve_b}) = \pi \cdot \pi(\epsA_{b}- 1/s,\, \epsV_{v}- 1/t) \cdot\left(1+\frac{\mu(1+\epsA_a) - \mu^2(1+\epsA_a+\epsA_b)}{t(1-\mu )} +O( 1/\mu n\ell) \right), $$ where we use $\epsA_a$, $\epsA_b$ and $\mu $ in the third factor (rather than the altered versions $\epsA_a'$ etc.)~using the same reasoning as for obtaining~\eqref{piwithprime2}. Consider expanding this expression for $Y^*_{avb}({\bf d}-{\ve_b})$ ignoring terms of order $\varepsilon^4$, and hence also ignoring those of order $s^{-2}$, $t^{-2}$, $\varepsilon^2/t$, and $\varepsilon^2/s$ since $\varepsilon^2\ge 1/\bar d =\max\{1/t,1/s\}$. A convenient way to do this is to make substitutions $\epsA_v=y_1\epsA_v$, $1/t = y_1^2/t $, $\mu = y_2 \mu$, $1/n =y_1^2y_2/n$, and so on where $y_1$ represents a parameter of size $O(\varepsilon)$ and $y_2$ one of size $O(\mu)$ (for instance, $\sigma_T^2/t\ell$ is $O(\varepsilon^2\mu)$), and then expand about $y_1=0$ and drop terms of order $y_1^4$. Since $Y^*_{avb}({\bf d}-{\ve_b})$ gains a factor $y_2^2$ via the factors of $\mu$ in $\pi$, each term containing $y_1^i$ is of order $y_1^iy_2^2$ and is hence $O(\mu^2\varepsilon^i)$. Next, removing the `sizing' variables $y_i$ by setting them equal to 1, and then expanding the result about $(\epsA_{v'},\epsV_v) =(0,0)$ and retaining all terms of total degree at most 3 in $ \epsA_{v'}$ and $ \epsV_v$, we get \begin{align*} Y^*_{avb}({\bf d}-{\ve_b}) &= c_{0} + c_{10}\epsV_{v} + c_{01}\epsA_{v'} + c_{20}\epsV_{v}^2 +O(\mu^2\varepsilon^4), \end{align*} where the functions $c_{0}$, $c_{10}$, $c_{01}$, and $c_{20}$ are independent of $\epsV_v$ and $\epsA_{v'}$, with $c_0$, $c_{10}$ and $c_{01}$ linear in $1/s$ and $ 1/t$, and $c_{20}$ constant in those variables. (By calculation, the third order terms all turn out to be absorbed by the error term. 
Furthermore, the relative error $1/\mu \ell n$ in the previous expression for $Y^*$ yields an absolute error $O(\mu/\ell n)=O(\mu^2\varepsilon^4)$ since~$Y^*$ is~$O(\mu^2)$.) We note that $c_{01}$ has a factor $\delta^{\mathrm{di}}$ since $\epsA_{v'}$ in $\pi$ has such a factor. Then considering the definition of $\ensuremath {{\bf bad}}(a,b,{\bf d}-{\ve_b})$ in~\eqn{def:bad} we find that the second summation can be written as \begin{align*} \Sigma _{\ensuremath {{\bf bad}}} &= \sum_{v\in \cA(a)\cap \cA(b)}Y^*_{avb}({\bf d}-{\ve_b})\\ &= \sum_{v\in \cA(a)\cap \cA(b)} (c_{0} + c_{10}\epsV_{v} + c_{01}\epsA_{v'} + c_{20}\epsV_{v}^2 + O(\mu^2\varepsilon^4)) \\ &= n c_0 + nc_{20}\sigma_T^2/t^2\\ &\qquad +\delta^{\mathrm{di}} \left( -2c_0 - c_{10}(\epsV_{a'}+\epsV_{b'}) - c_{01}(\epsA_a+\epsA_b) -c_{20}(\epsV_{a'}^2+\epsV_{b'}^2) \right) +O(n \mu^2\varepsilon^4), \end{align*} since $a$ and $b$ are distinct elements of $S$ (in which case $\cA(a)\cap\cA(b)$ is $T$ in the bipartite case and is $T\sm\{a',b'\}$ in the digraph case), and where we also use that $\sum_{v\in T}\epsV_v =0$ and that, in the digraph case, $\sum_{v\in T}\epsA_{v'} = \sum_{a\in S}\epsA_a =0$. Noting that $\cA(a)\setminus \cA(b)$ is $\emptyset$ in the bipartite case and consists of just $\mathrm{mate}{b}$ in $T$ in the digraph case, we can write $\ensuremath {{\bf bad}}(a,b,{\bf d}-{\ve_b})$ in~\eqn{F2def}, by using~\eqn{def:bad}, as \begin{align*} \ensuremath {{\bf bad}}(a,b,{\bf d}-{\ve_b}) &= \frac{1}{d_a} \left(\Sigma_{\ensuremath {{\bf bad}}} +\delta^{\mathrm{di}} \pi(\epsA_a,\epsV_{b'},\epsV_{a'},\epsA_b-1/t) +O(1/ \ell n) \right), \end{align*} where $d_a =s_a = (1+\epsA_a)s$, and the $O(1/n\ell)$ term captures the fact that we use $\mu=\mu({\bf d})$, $\sigma^2({\bf s})$, and $\sigma^2({\bf t})$, respectively, instead of $\mu({\bf d}')$, $\sigma^2({\bf s}')$, and $\sigma^2({\bf t}')$, in the formula for $\pi$ applied to an altered sequence ${\bf d}'$.
The error term $O(1/\ell n)$ here, together with the one from $\Sigma_{\ensuremath {{\bf bad}}}$ above, produce an additive error in $\ensuremath {{\bf bad}}(a,b,{\bf d}-{\ve_b}) $ of $O(\mu\varepsilon^4)$ since $n/d_a=n/s_a\sim 1/\mu$ and $1/\ell n<\mu\varepsilon^4$. Substituting the above expression, stripped of its error terms, into \begin{align*} \frac{{\cal R} (P^*,Y^*)_{ab} ({\bf d})}{\rho} -1 &= \frac{1}{\rho}\cdot \frac{(1+\epsA_a)(1-\ensuremath {{\bf bad}}(a,b,{\bf d}-{\ve_b}))}{(1+\epsA_b)(1-\ensuremath {{\bf bad}}(b,a,{\bf d}-{\ve_a}))} -1 \end{align*} and simplifying gives a rational function $\widehat F$ which satisfies ${\cal R}(P^*,Y^*)_{ab}({\bf d})/\rho = 1+ \widehat F + O(\mu\varepsilon^4)$. After inserting the size variables $y_1$ and $y_2$ into $\widehat F$ as specified above (and here it may be convenient to use $n=\delta^{\mathrm{di}}+s/\mu$), and simplifying, we find that $\widehat F$ has $y_2$ as a factor (of multiplicity 1), and its denominator is nonzero at $y_1=0$. Then expanding the expression in powers of $y_1$ shows that $\widehat F=O( y_1^4)$. Along with the extra factor $y_2$, this implies $\widehat F=O(\mu\varepsilon^4)$. Part (a) follows. To prove part (b) let $av\in\cA$ with $a\in S$ and let ${\bf d}'$ be the sequence ${\bf d} -{\ve_v}$. Note that, analogous to~\eqn{piwithprime2}, the differences in the values of $\mu$, $s$, $t$ in $\rho$ between ${\bf d}$ and ${\bf d}'$ are negligible, as are the differences in $\epsA_a$ and $\epsA_b$ for $b\in \cA(v) = S\sm\{v'\}$ (note that with these assumptions $v$ is never equal to $a, a', b$ or $b'$; so the values of $\epsA_a$, $\epsA_b$, $\epsA_{a'}$, $\epsA_{b'}$ only change since $s$ changes). Hence, we also have $$ R^*_{ab}({\bf d}') =\rho\cdot \left(1 +O\left( 1/ \mu n\ell \right)\right)$$ for $a,b\in S$, $v\in \cA(a)\cap \cA(b)$. 
Therefore, by definition \eqref{F1def}, \begin{align}\lab{aux1552} {\cal P}( P^*, R^* )_{av}({\bf d}) &= d_{v} \left(\sum_{b\in \cA(v)} R^*_{ba} ({\bf d}-{\ve_v}) \frac{1- P^*_{bv}({\bf d} - {\ve_b} - {\ve_v})} {1-P^*_{av}( {\bf d} -{\ve_a} - {\ve_v})}\right)^{-1}\nonumber\\ &= d_{v} \left(\sum_{b\in \cA(v)} \rho (\epsA_b,\epsA_a,\epsV_{b'},\epsV_{a'} ) \cdot \frac{1-\pi\left(\epsA_b- 1/s ,\epsV_v- 1/t ,\epsV_{b'},\epsA_{v'}\right)} {1-\pi\left(\epsA_a- 1/s,\epsV_v- 1/t ,\epsV_{a'},\epsA_{v'}\right)} \left(1 +O\left(\frac{1}{ \mu n\ell }\right)\right)\right)^{-1}. \end{align} By expanding in powers of $\epsA_b$ and $\epsV_{b'}$ we obtain \begin{align*} \rho(\epsA_b,\epsA_a,\epsV_{b'},\epsV_{a'}) \cdot \frac{1-\pi(\epsA_b- 1/s ,\epsV_v- 1/t ,\epsV_{b'},\epsA_{v'})} {1-\pi(\epsA_a- 1/s ,\epsV_v- 1/t ,\epsV_{a'},\epsA_{v'})} & = K + O(\varepsilon^4) \end{align*} where $K$ is a polynomial in $\epsA_b$ and $\epsV_{b'}$ of total degree at most 3 in $\{\epsA_b,\epsV_{b'}\}$. Calculations using the size variables $y_1$ and $y_2$ as above show that $K = k_{00} + k_{10} \epsA_b + k_{01} \epsV_{b'} + k_{20} {\epsA_b}^2 + O(\varepsilon^4)$ for some $k_{ij}$ (and in particular, the coefficients $k_{11}$ and $k_{02}$, and those for terms of third order, are all absorbed by the error terms). We also note from the definition of $\pi$ and $\rho$ that $k_{01}$ has $\delta^{\mathrm{di}}$ as a factor. Also, for $v\in T$ we have $\cA(v) = S$ in the bipartite case and $\cA(v) = S\sm\{\mathrm{mate}{v}\}$ in the digraph cases. Then the main summation over~$b$ in~\eqn{aux1552} can be evaluated as \begin{align*} \ell\cdot k_{00} + \ell \sigma_S^2 k_{20}/s^2 +\delta^{\mathrm{di}} \left( - k_{00} - k_{10}\epsA_{v'} - k_{01}\epsV_v - k_{20} {\epsA_{v'}}^2\right), \end{align*} with relative error $O(\varepsilon^4)$, noting that $K$ has constant order, where we use that $\sum_{b\in S} \epsA_b = 0$ and that $\sum_{b\in S} {\epsA_b}^2 = \ell \sigma_S^2/{s}^2$. 
Using the size variables $y_1$ and $y_2$ as described above, we then find that ${\cal P}(P^*,R^*)_{av}({\bf d}) = \pi (1+O(\mu\varepsilon^4))$, with the extra factor $\mu$ arising in the error term in the same way as for ${\cal R}$ in part (a). Part (b) follows. Part (c) is more straightforward than the first two parts and is easily verified by similar considerations, so we omit details. \end{proof} \begin{proof}[Proof of \thref{t:mainbip}] Let $\mu_0$, $\varphi$, $\ell$, $n$, $m$, $\D$ be as in the theorem statement. Note that all sequences ${\bf d}$ in $\D$ have the same values $\mu = \mu({\bf d})=m/n(\ell-\delta^{\mathrm{di}})$, $s = s({\bf d}) = m/\ell$ and $t = t({\bf d}) = m/n$. All other sequences ${\bf d}'$ in this proof will be close enough to $\D$ (in Hamming distance) that $\mu':=\mu({\bf d}') \sim \mu$. Let $\varepsilon= \max\{s^{\varphi-1},t^{\varphi-1}\}$ and note that $\varepsilon\geq \max\{1/\sqrt{s}, 1/\sqrt{t}\}$ by the lower bound on $\varphi$. We follow the template from Subsection~\ref{s:template}, first considering the ratios $R_{ab}$ for $a,b\in S$. Recall that $P$, $Y$ and $R$ denote the {\em actual} edge probability, path probability and ratio functions (c.f.~\eqref{realRatio} and \eqref{realProb}) and that $P^*$, $Y^*$ and $R^*$ are functions of degree sequences with closed form given at the beginning of this section. Recall that $P$, $Y$ and $R$ are always defined with respect to some underlying set $\cA$ which is either $\cA^{\mathrm{bi}}$ or $\cA^{\mathrm{di}}$ here. The next claim states that the functions $P^*$ and $R^*$ approximate $P$ and $R$ sufficiently well. An analogous statement for $Y^*$ and $Y$ appears in the proof of the claim but is not needed elsewhere. It is easy to see inductively that we only require $R_{ab}({\bf d})$ when $a$ and $b$ are in the same set of the bipartition. 
Let $Q_1^{S}$ denote the set of sequences ${\bf d}=({\bf s},{\bf t})\in \Z^{\ell+n}$ such that ${\bf d}-{\ve_a}\in \D$ for some $a\in S$; and let $Q_1^{T}$ denote the set of sequences ${\bf d}=({\bf s},{\bf t})\in \Z^{\ell+n}$ such that ${\bf d}-{\ve_v}\in \D$ for some $v\in T$. \begin{claim}\thlab{RDenseBip} Let $*$ be either $\mathrm{bi}$ or $\mathrm{di}$. For ${\bf d}\in \D$, $av\in\cA$, \begin{equation}\lab{appP} P_{av}({\bf d}) = P_{av}^*({\bf d}) (1+O(\mu\varepsilon^4)), \end{equation} and uniformly for all ${\bf d}\in Q_1^S$ and all $a,b\in S$ \begin{equation}\lab{appR} R_{ab}({\bf d}) = R_{ab}^*({\bf d}) (1+O(\mu\varepsilon^4)). \end{equation} \end{claim} By symmetry, \eqref{appR} also holds for all ${\bf d}\in Q_1^T$ and all distinct $a,b\in T$. The proof is very similar to the proof of Claim~6.4 in~\cite{lw2018} with some adaptations to the bipartite setting. We include a full proof for the sake of completeness. \begin{proof}[Proof of the claim.] To show that $P$ and $P^*$ (and $R$ and $R^*$) are $(\mu\varepsilon^4)$-close in the sense of~\eqref{appP} and~\eqref{appR}, we define the compositional operator $${\cal C}({\bf p},{\bf y})= \big(\hat{\bf p}, {\cal Y}(\hat{\bf p}, {\bf y})\big),\ \mbox{where\, $\hat{\bf p} = {\cal P}({\bf p},{\cal R}({\bf p},{\bf y}))$}.$$ We first observe that ${\cal C}$ fixes $(P,Y)$, where in this context we regard $P$ to be the function ${\bf p}$ with ${\bf p}_{av}=P_{av}$ for all $av\in \cA$, and similarly $Y$ to be ${\bf y}$ with ${\bf y}_{avb}= Y_{avb}$, by \thref{l:recurse}. We will deduce a certain contraction property of ${\cal C}$ by applying \thref{l:errorImplication}(a--c) one after the other, and then show that for any integer $k>0$, ${\cal C}^{k}({P^*},{Y^*})$ and ${\cal C}^{k}(P,Y)$ are $O(\mu)^{k}$-close. We will also show that $({P^*},{Y^*})$ and ${\cal C}^{k}({P^*},{Y^*})$ are $O(\mu\varepsilon^4)$-close. These observations will then be shown to imply \thref{RDenseBip}. Fix $k_0 = 4\log n$ and $r=4k_0+4=O(\log n)$.
Let $\Omega^{(0)}$ be the set of sequences ${\bf d}' \in \Z^{\ell+n}$ that are at $L_1$ distance at most $r$ from a sequence in $\D$. Let $\mu_1=5\mu$, and for each integer $i\ge 1$ define $\Omega^{(i)}$ to be the set of sequences ${\bf d}'\in\Omega^{(0)}$ of $L_1$ distance at least $i+1$ from all sequences outside $\Omega^{(0)}$. Towards applying \thref{l:errorImplication} we first establish that $(P,Y)$ and $({P^*},{Y^*})$ are elements of $\Pi_{ \mu_1}(\Omega^{(2)})$ (see \thref{Pi-defn}). Note that for ${\bf d}'\in \Omega^{(0)}$, the values of $s({\bf d}')$, $t({\bf d}')$ and $\mu({\bf d}')$ are asymptotically equal to $s$, $t$ and $\mu$, respectively, since $M_1({\bf d}')=M_1({\bf d}_0)+O( \log n)$ for some sequence ${\bf d}_0\in \D$. Thus, $\mu$ and $\mu({\bf d}')$ are interchangeable in the error terms below. Furthermore, we note that the bounds on $s_a$ and $t_v$ in the theorem statement imply that for all balanced ${\bf d} \in \Omega^{(0)}$, $s_a\sim s$ and $t_v\sim t$ uniformly for all $a\in S$ and $v\in T$. By \thref{lem:bipRealisable}\eqref{lem:bipRealisablei} we obtain ${\cal N}({\bf d})>0$ for all balanced ${\bf d}\in \Omega^{(0)}$ in both cases $\cA^{\mathrm{bi}}$ and $\cA^{\mathrm{di}}$. In doing so, we may take $C=1$ and $F$ to be either empty in the bipartite case, or a matching in the digraph case; the condition $m\le \ell n/9$ follows from choosing $\mu_0$ small enough, and the conditions on $\Delta_S$ and $\Delta_T$ can be seen to follow from $|s_a-s|\le s^{\varphi}$ for all $a\in S$, $|t_v-t|\le t^{\varphi}$ for all $v\in T$ and $\varphi <1$, say. After this, for $n$ and $\ell$ sufficiently large, \thref{l:simpleSwitching}, together with the facts that $s_a\sim s$ and $t_v\sim t$ uniformly for all $a\in S$, $v\in T$, implies that for all $av\in \cA$ \bel{Pbound2} P_{av}({\bf d})\leq\frac{\mu}{1- 2\mu}(1+o(1)) < \frac{5\mu}{4} \quad \mbox{for all balanced ${\bf d}\in \Omega^{(0)}$}, \end{equation} where for the last inequality we use that $\mu< \mu_0 < 1/11$, say.
Since $\Omega^{(2)}\se \Omega^{(0)}$ and $5\mu/4 <\mu_1$ this establishes requirement~\ref{Pi-a} for $P$ in the definition of $\Pi_{\mu_1}(\Omega^{(2)})$. Now restrict slightly to ${\bf d} \in \Omega^{(2)}$. By definition $Y_{avb}({\bf d})$ is the probability that both edges $av$ and $bv$ are present. Hence \thref{l:recurse}(c) implies (with the above bounds on $P_{av}({\bf d})$ applying for all ${\bf d}\in \Omega^{(0)}$) that $0\le Y_{avb}({\bf d}) =Y_{bva}({\bf d}) \le 3\mu P_{bv}({\bf d})/2$ (easily) assuming, as we may, that $\mu$ is sufficiently small. Thus $(P,Y)$ satisfies condition~\ref{Pi-c} for membership of $\Pi_{\mu_1}(\Omega^{(2)})$, and also $$ \sum_{v\in \cA(a)\cap \cA(b)} Y_{avb}({\bf d}) \le \sum_{v\in \cA(a)\cap \cA(b)}\frac{3P_{bv}({\bf d})\mu}{2}\le 3 \mu s_a(1+o(1)) $$ for all distinct $a,b\in S$, using $P_{bv}({\bf d})\le 2\mu\sim 2s_a/n$ which follows from \eqref{Pbound2} and recalling that $s_a\sim \mu n$ uniformly for all $a\in S$ for all ${\bf d} \in \Omega^{(0)}$. As $4\mu < \mu_1$, this shows $Y$ satisfies condition~\ref{Pi-b} for membership of $\Pi_{\mu_1}(\Omega^{(2)})$ when $n$ is sufficiently large and $a,b\in S$ are distinct. The equivalent statement for both $a,b\in T$ follows analogously. Note that this covers all cases of~\ref{Pi-b} as otherwise $\cA(a)\cap \cA(b)=\emptyset$ and the statement is trivial. % To see that $({P^*},{Y^*})$ is also in $\Pi_{ \mu_1}(\Omega^{(2)})$, we first observe that $P^*_{av}({\bf d})\sim \mu$ for all $av\in \cA$ and all ${\bf d}\in \Omega^{(0)}$. This is because $\varepsilon\to 0$ since $\varphi <1$ and both $s,t\to\infty$, and because $\sigma_T^2/t\ell, \sigma_S^2/sn=O(\varepsilon^2\mu)$. Properties~\ref{Pi-a}-\ref{Pi-c} follow directly from this fact and the definition of $Y^*$ since $\mu=\mu_1/5$. 
Now for large $n$ and $\ell$ and for $(a,v)\in \oA$ we have $P_{av}({\bf d})=P^*_{av}({\bf d})(1\pm1)$ for all balanced ${\bf d}\in \Omega^{(0)}$ since $P^*_{av}({\bf d})\sim \mu$ and by~\eqref{Pbound2}. Also, $0\le Y_{avb}({\bf d})\le 3\mu P_{bv}({\bf d})/2$ implies $Y_{avb}({\bf d})=Y^*_{avb}({\bf d})(1\pm1)$ for balanced ${\bf d}\in \Omega^{(2)}$. We may now apply \thref{l:errorImplication}(a) with $\xi=1$ for any $S$-heavy ${\bf d} \in \Omega^{(3)}$ to deduce that $$ {\cal R}(P ,Y ) _{ab}({\bf d})= {\cal R}({P^*} ,{Y^*} )_{ab}({\bf d})(1+O (\mu_1)) $$ for all $a,b\in S$ (noting that for $a=b$ the claim is trivial, and for distinct $a,b$ we use the previous observations). Writing ${\bf r}$ for ${\cal R}(P ,Y )$ and ${\bf r}'$ for ${\cal R}({P^*} ,{Y^*} )$ we obtain from this and \thref{l:errorImplication}(b) that $$ {\cal P}(P ,{\bf r} ) _{av}({\bf d})= {\cal P}({P^*} ,{\bf r}' )_{av}({\bf d})(1+O (\mu_1 )) $$ for all balanced ${\bf d} \in \Omega^{(4)}$ and all $av\in \cA$. Next applying \thref{l:errorImplication}(c) in the same way to balanced ${\bf d} \in \Omega^{(6)}$ gives $$ {\cal Y}(\hat{\bf p} ,Y) _{avb}({\bf d})= {\cal Y}(\hat{\bf p}' , {Y^*} )_{avb}({\bf d})(1+O (\mu_1 )) $$ for all balanced ${\bf d} \in \Omega^{(6)}$, and all $(a,v,b)\in \oA_2$ with $a\in S$, where $\hat{\bf p} = {\cal P}(P ,{\bf r} )$ and $\hat{\bf p}' = {\cal P}({P^*} ,{\bf r}' )$. Recalling the definition of ${\cal C}({\bf p},{\bf y})$ from the beginning of this proof, we note that the three conclusions above imply that, for ${\cal C}(P ,Y )=(P_1,Y_1)$ and ${\cal C}({P^*} ,{Y^*} )=(P_1^*,Y_1^*)$, we have $P_1^*({\bf d})=P_1({\bf d})(1+O(\mu_1))$ and $Y_1^*({\bf d})=Y_1({\bf d})(1+O(\mu_1))$ for all ${\bf d} \in \Omega^{(6)}$. 
Similarly, making $k-1$ iterated applications of the three parts of \thref{l:errorImplication} with ever-decreasing $\xi$ produces $$ P_k^*({\bf d})=P_k({\bf d})(1+O(\mu_1)^k) \mbox{ and } Y_k^*({\bf d})=Y_k({\bf d})(1+O(\mu_1)^k) $$ for all ${\bf d} \in \Omega^{(2k+2)}$, where $P_k$, $P_k^*$ etc.\ are defined analogously for ${\cal C}^{k}$. In the same way, applying \thref{l:mapleHigher}(a), (b) and (c) in turn, recalling Lemma~\ref{l:errorImplication} to handle the small error terms, shows that $P_1^*({\bf d})=P^*({\bf d})(1+O(\mu \varepsilon^4))$ and $Y_1^*({\bf d})=Y^*({\bf d})(1+O(\mu \varepsilon^4))$ for all ${\bf d} \in \Omega^{(6)}$. Using the three parts of \thref{l:errorImplication} repeatedly, and bounding the total distance moved during the iterations as for a contraction mapping (as the sum of a geometric series), this gives $$ P_k^*({\bf d})=P^*({\bf d})(1+O(\mu \varepsilon^4)) \mbox{ and } Y_k^*({\bf d})=Y^*({\bf d})(1+O(\mu \varepsilon^4)) $$ for all ${\bf d} \in \Omega^{(2k+2)}$. Using the last two conclusions with $k:= k_0 = 4 \log n$ and the fact that $P_k = P$ (as ${\cal C}$ fixes $(P,Y)$) gives~\eqn{appP} for all balanced ${\bf d}\in \Omega^{(r-2)}$, since we may assume that the $O(\mu_1)$ term is at most $1/e$, say. (Recall that $\mu_1=5\mu < 5\mu_0$, which we may choose to be sufficiently small.) Note that $\D\se \Omega^{(r)}\subseteq \Omega^{(r-2)}$ by definition. For~\eqn{appR}, we now use that~\eqn{appP} holds for all balanced ${\bf d}\in \Omega^{(r-2)}$ to deduce from \thref{l:errorImplication}(a) that ${\cal R}(P,Y)_{ab}({\bf d}) = {\cal R}({P^*} ,{Y^*} )_{ab}({\bf d}) (1+O(\mu \varepsilon^4))$ for all $S$-heavy ${\bf d}\in \Omega^{(r-1)}$. This, together with (a) above and the fact that ${\cal R}(P,Y)=R$, implies~\eqn{appR} for all ${\bf d}\in Q_1^S \subseteq \Omega^{(r-1)}$.
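The contraction step above can be illustrated by a toy one-dimensional analogue (this is only a caricature with made-up constants, not the operator ${\cal C}$ itself): if $g$ contracts by a factor $\mu_1$ and $x^*$ is an approximate fixed point with defect $\delta$, then summing the geometric series shows that the true fixed point lies within $\delta/(1-\mu_1)$ of $x^*$, and iterates of $g$ close the gap geometrically.

```python
# Toy 1-D contraction: g has Lipschitz factor mu1 < 1, and x_star is an
# approximate fixed point with defect |g(x_star) - x_star| = delta.
mu1, delta = 0.2, 1e-4

g = lambda x: mu1 * x + 1.0          # affine contraction
x_fix = 1.0 / (1.0 - mu1)            # its exact fixed point

x_star = x_fix + delta / (1.0 - mu1) # worst-case approximate fixed point
assert abs(g(x_star) - x_star) <= delta + 1e-12

# Each iteration contracts the distance to x_fix by the factor mu1.
x, errs = x_star, []
for _ in range(10):
    x = g(x)
    errs.append(abs(x - x_fix))
assert all(e2 <= mu1 * e1 + 1e-13 for e1, e2 in zip(errs, errs[1:]))

# Geometric-series bound: the approximate fixed point is within
# delta/(1 - mu1) of the true one.
assert abs(x_star - x_fix) <= delta / (1.0 - mu1) + 1e-12
```

In the proof the roles of $\delta$ and $\mu_1$ are played by $O(\mu\varepsilon^4)$ and $O(\mu_1)$, acting on the pair $(P,Y)$ rather than a single number.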
\end{proof} Moving on to Step 2 of the template, we now make suitable definitions of probability spaces $\mathcal{S}$ and $\mathcal{S}'$ in preparation for using \thref{l:lemmaX}. Let $\Omega$ be the underlying set of $\mathcal{B}_m(\ell,n)$ in the bipartite case and of $\vec \mathcal{B}_m(n)$ in the digraph case, that is, the set of all degree sequences ${\bf d}=({\bf s},{\bf t})\in \Z^{\ell+n}$ such that $M_1({\bf s})=M_1({\bf t})=m$ (and $\ell =n$ in the digraph case). Let $\mathcal{S}'=\cD(\mathcal{G}(\ell,n,m))$ in the bipartite case, and $\mathcal{S}'=\cD(\vec \mathcal{G}(n,m))$ in the digraph case. Set $$ \W=\{{\bf d}=({\bf s},{\bf t})\in \D : \sigma^2({\bf s}) \le 2s,\ \sigma^2({\bf t}) \le 2t,\left|\delta^{\mathrm{di}}\sigma({\bf s},{\bf t})\right|<\xi s \} $$ where $\xi= \max\{ \log^2\ell/\sqrt{\ell},\log^2n/\sqrt{n}\}$. Define the graph $G$ on vertex set $\D$ by joining two degree sequences by an edge if they are of the form ${\bf d}-{\ve_a}$ and ${\bf d}-{\ve_b}$ for some ${\bf d}\in Q_1^S$ and $a,b\in S$, or for some ${\bf d}\in Q_1^T$ and $a,b\in T$. We note at this point that the diameter of $G$ is $r=O(\ell s^{\varphi}+nt^{\varphi})=O(\varepsilon\mu n\ell)$ since the constant sequence $(d,\ldots,d)$ is an element of $\D$ and by the degree constraints for ${\bf d}\in \D$. The same bound holds for the diameter of $G[\W]$. By definition of $R$ in~\eqref{realRatio} and its approximation in~\eqref{appR} we have, for adjacent vertices/sequences in this graph, that \begin{align}\lab{targetLHSnew} \frac{\Pr_{\mathcal{S}'}({\bf d}-{\ve_a})}{\Pr_{\mathcal{S}'}({\bf d}-{\ve_b})} &= R_{ab}({\bf d}) = R^*_{ab}({\bf d}) \left(1+ O\left(\mu \varepsilon^4\right)\right). \end{align} We now define the ideal probability space $\mathcal{S}$ (see Step 3 of the template) on $\Omega$. 
Recall the definition of ${\tilde H}({\bf d})$ in~\eqref{corrH}, which is slightly different in the bipartite case and the digraph case (due to $\mu$ being defined slightly differently and the extra term in the exponent in the digraph case). Also recall (from just before~\eqref{H}) that we use ${H}({\bf d})$ to denote the product $\Pr_{\mathcal{B}_m}({\bf d}){\tilde H}({\bf d})$ on the right hand side of~\eqref{enumFormula}, where $\mathcal{B}_m = \mathcal{B}_m(\ell,n)$ in the bipartite case and $\mathcal{B}_m=\vec\mathcal{B}_m(n)$ in the digraph case. Then set $$ \Pr_{\mathcal{S}}({\bf d}) = {H}({\bf d})/\sum_{{\bf d}'\in\W}{H}({\bf d}') = \frac{{H}({\bf d})}{{\bf E}_{\mathcal{B}_m}(\mathbbm{1}_{\W}{\tilde H})} $$ for ${\bf d}\in \W$ and $\Pr_{\mathcal{S}}({\bf d})=0$ otherwise. Following the template to Step 4 we next estimate the probability of $\W$ in the two probability spaces $\mathcal{S}$ and $\mathcal{S}'$ in both cases, bipartite and digraph. We simultaneously evaluate $\Pr_{\mathcal{B}_m}(\W)$ and ${\bf E}_{\mathcal{B}_m}(\mathbbm{1}_{\W}{\tilde H})$ for later use. First note that $\Pr_{\mathcal{S}}(\W)=1$ by definition. Next, $M_1({\bf s})=M_1({\bf t})=m$ for all ${\bf d}=({\bf s},{\bf t})\in\Omega$, by definition. Let $\bar n =\min\{n,\ell\}.$ For the following, let ${\bf d}$ be chosen in either of $\cD(\mathcal{G})=\mathcal{S}'$ and $\mathcal{B}_m$, in either of the bipartite and the digraph case. Then $|s_a-s|\le s^{\varphi}$ and $|t_v-t|\le t^{\varphi}$ for all $a\in S$ and $v\in T$ with probability at least $1-O({\bar n}^{-\omega})$ by \thref{l:sigmaConcBip}(i) and since $s> (\log \ell)^K$ and $t> (\log n)^K$ for all $K>0$. It follows that $\Pr_{\cD(\mathcal{G})}(\D) = \Pr_{\mathcal{S}'}(\D) = 1-O(\bar n^{-\omega})$ and $\Pr_{\mathcal{B}_m}(\D) = 1-O(\bar n^{-\omega})$. 
Next, apply \thref{l:sigmaConcBip}(ii) with $\alpha =\xi/2$ for ${\bf d}=({\bf s},{\bf t})$ (noting that $m\ge s+t\gg\log^3 \ell+ \log^3 n$ and $(\log^3 n+\log^3 \ell)/m=o(\xi^2)$ by definition of $\xi$) to deduce that $\sigma^2({\bf s})=s(1-\mu)(1\pm \xi)$, $\sigma^2({\bf t})=t(1-\mu)(1\pm \xi)$ and $\sigma({\bf s},{\bf t})=O(\xi s)$ with probability $1-O(\bar n^{-\omega})$. This implies in particular that $\Pr (\W) = 1-O(\bar n^{-\omega})$ in both $\mathcal{S}'$ and $\mathcal{B}_m$. Thus $\Pr(\W)\ge 1-\varepsilon_0$ in both $\mathcal{S}$ and $\mathcal{S}'$ in both cases, bipartite and digraph, for, say, $\varepsilon_0=1/\bar n$. Before moving on to Step 5 of the template we use these concentration results to estimate ${\bf E}_{\mathcal{B}_m}(\mathbbm{1}_{\W}{\tilde H})$. If $\sigma^2({\bf t})=t(1-\mu)(1+O(\xi))$ then the term $\sigma^2({\bf t})/t(1-\mu)$ in the exponent of ${\tilde H}({\bf d})$ is $1+O(\xi)$. Similarly for the term $\sigma^2({\bf s})/s(1-\mu)$. Furthermore, in the digraph case, if $\sigma({\bf s},{\bf t})=O(\xi s)$ then the term $\delta^{\mathrm{di}}\sigma({\bf s},{\bf t})/s(1-\mu)$ in ${\tilde H}({\bf d})$ is $O(\xi)$. It follows, using the strong concentration results in the previous paragraph, that \bel{HConcDense} {\tilde H}({\bf d})= 1+O(\xi) \text{ with probability } 1-O(\bar n^{-\omega}) \text{ for } {\bf d}\in\mathcal{B}_m. \end{equation} Furthermore, by definition of $\W$ it follows that ${\tilde H}({\bf d}) = \Theta(1)$ for all ${\bf d}\in \W$, using the fact that $\mu < 1/2,$ say. This and~\eqref{HConcDense} then imply that ${\bf E}_{\mathcal{B}_m}(\mathbbm{1}_{\W}{\tilde H})=1+O(\xi+\bar n^{-\omega})=1+O(\xi).$ We now move to Step 5 in the template. 
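The concentration estimates just used can be probed empirically. The following Monte Carlo sketch (with small, illustrative parameter values, not taken from the paper) samples the $S$-side degree sequence of a uniformly random bipartite graph with $m$ edges and compares the average of $\sigma^2({\bf s})$ with $s(1-\mu)$:

```python
# Monte Carlo check of the concentration sigma^2(s) ~ s(1 - mu): sample the
# S-side degrees of a uniform random bipartite graph with m edges on parts
# of sizes ell and n (parameters are illustrative only).
import random

random.seed(1)
ell, n, m = 30, 40, 300
s, mu = m / ell, m / (n * ell)       # s = 10, mu = 0.25

def degree_variance():
    edges = random.sample(range(ell * n), m)   # m distinct cells of the grid
    deg = [0] * ell
    for e in edges:
        deg[e // n] += 1                       # row (S-side) degrees
    return sum((d - s) ** 2 for d in deg) / ell

trials = 1000
avg_var = sum(degree_variance() for _ in range(trials)) / trials

# The mean of sigma^2(s) is s(1 - mu) up to lower-order correction factors.
assert abs(avg_var / (s * (1 - mu)) - 1) < 0.1
```

With these parameters $s(1-\mu)=7.5$, and the simulated average lands within a few percent of it; the residual gap reflects the lower-order hypergeometric corrections (each degree is hypergeometrically distributed with variance $s(1-1/\ell)(n\ell-m)/(n\ell-1)$), which are absorbed by the $\xi$ error term above.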
For ${\bf d}=({\bf s},{\bf t})\in Q_1^S$ and $a,b\in S$, \begin{align}\label{eq:ratioDense} \frac{{H}({\bf d}-{\ve_a})}{{H}({\bf d}-{\ve_b})} &= \frac{{H}({\bf s}-{\ve_a},{\bf t})}{{H}({\bf s}-{\ve_b},{\bf t})} = \frac{s_a(n+\delta^{\rm bi} -s_b)}{s_b(n+\delta^{\rm bi}-s_a)} \exp\bigg(\frac{s_b-s_a}{s(1-\mu)\ell } \left(1-\frac{\sigma^2({\bf t})}{t(1-\mu)}\right) +\frac{ \delta^{\rm di} (t_{a'}-t_{b'})}{t(1-\mu)n}\bigg), \end{align} by~\eqref{H}. Compare the expression on the right hand side with $R^*=\rho$ at the beginning of this section (after a straight-forward reparameterisation) to see that \bel{targetRHS} \frac{{H}({\bf d}-{\ve_a})}{{H}({\bf d}-{\ve_b})} = R^*_{ab}({\bf d}) \left(1+ O\left( 1/\bar n^2\right)\right), \end{equation} where the error is due to the fact that we use $e^x = 1+x+O(x^2)$ and that $\mu({\bf d}) = \mu +O(1/n\ell)$ for ${\bf d}\in Q_1^S$. This together with \eqref{targetLHSnew} gives \bel{ratioInD} \frac{\Pr_{\mathcal{S}'}({\bf d}-{\ve_a})}{\Pr_{\mathcal{S}'}({\bf d}-{\ve_b})} =e^{O(\mu\varepsilon^4)} \frac{{H}({\bf d}-{\ve_a})}{{H}({\bf d}-{\ve_b})} \end{equation} for ${\bf d}\in Q_1^S$ and $a,b\in S$. The same applies for ${\bf d}\in Q_1^T$ and $a,b\in T$ by symmetry. Note that when both ${\bf d}-{\ve_a}$ and ${\bf d}-{\ve_b}$ are elements of $\W$ then the right hand side is in fact equal to $ e^{O(\mu\varepsilon^4)}\Pr_{\mathcal{S}}({\bf d}-{\ve_a})/\Pr_{\mathcal{S}}({\bf d}-{\ve_b}),$ by definition of $\Pr_{\mathcal{S}}$ above. Therefore, by \thref{l:lemmaX} \begin{align*} \Pr_{\mathcal{S}'}({\bf d}) &= \Pr_{\mathcal{S}}({\bf d}) e^{O\left( r\mu\varepsilon^4+\varepsilon_0 \right)}\\ &= \Pr_{\mathcal{B}_m}({\bf d}){\tilde H}({\bf d})\left(1+O\left(\xi + r\mu\varepsilon^4+\varepsilon_0\right)\right) \end{align*} for all ${\bf d}\in \W$, where we use that ${\bf E}_{\mathcal{B}_m} (\mathbbm{1}_{\W}{\tilde H}) = 1+O(\xi)$. Now let ${\bf d}\in \D\sm\W$. 
Then there is some sequence ${\bf d}'\in \W$ such that the distance between ${\bf d}$ and ${\bf d}'$ in $G$ is at most $2r$. Any two adjacent sequences $\tilde{\bf d}-{\ve_a}$ and $\tilde{\bf d}-{\ve_b}$ along that path satisfy~\eqref{ratioInD}, so that by telescoping we see that \begin{align*} \Pr_{\mathcal{S}'}({\bf d}) &= H({\bf d}) \left(1+O\left(\xi + r\mu\varepsilon^4+\varepsilon_0\right)\right) \end{align*} for such ${\bf d}$ as well. This proves the theorem, since $\xi = \max\{ \log^2\ell/\sqrt{\ell},\log^2n/\sqrt{n}\}$, $\varepsilon_0=1/\bar n$, and $r\mu\varepsilon^4 = O\left(\varepsilon^5\mu^2n\ell\right)$. \end{proof} \begin{proof}[Proof of \thref{t:bipmodel}] In the sparse case, given that the set $\D$ in \thref{t:sparseCaseBip} is nonempty, we define a set $\W$ in the proof of \thref{t:sparseCaseBip} and show that $\Pr_{\mathcal{S}'}(\W) = 1-o(n^{-\omega})$ just before~\eqn{HConc}, where $\mathcal{S}'= \cD(\vec \mathcal{G}(n,m))$ or $\cD(\mathcal{G}(\ell,n,m))$ as the case may be, and also that $\Pr_{\mathcal{B}_m}(\W) = 1-o(n^{-\omega})$. As observed in that proof, the formula given in \thref{t:sparseCaseBip} holds for all ${\bf d}\in\W$, and also we can assume that $\corr{{\bf d}}\sim 1$ by~\eqn{HConc}. This gives the required a.q.e.\ property, for any triple $(\ell,n,m)$ such that the set $\D$ in that theorem is nonempty (for $n$ sufficiently large). The proof of \thref{t:mainbip} implies a similar result, for any $(\ell,n,m)$ satisfying that theorem's hypotheses. So all that is left to do is check which ordered triples $(\ell, n, m)$ are covered by these results, and to supply the remaining cases using previously known results. We first concentrate on the bipartite case~\ref{model-b}. We claim that the conditions in \ref{model-b-ii} imply that the hypotheses for \thref{t:mainbip} are satisfied. Note first that $n^3 = o(\ell^2m^{1-\varepsilon})$ implies in particular that $\ell\to\infty$ with $n$, using the trivial $m\le n\ell$.
The same asymptotic inequality, together with $\ell\le n$, implies that $n\ell^{\varepsilon}\le m$. The required bounds on $m$ in \thref{t:mainbip} then follow from $n\ell^{\varepsilon}\le m<\mu_0 n\ell$ by choosing, say, $\varphi = 1/2+\varepsilon/10.$ Next, we check that the conditions in \ref{model-b-iii} imply that the hypotheses for \thref{t:sparseCaseBip} are satisfied and that there exists some non-empty set $\D$ as in the theorem statement. Given $\ell,n$ and $m$, let ${\bf d}=({\bf s},{\bf t})$ be a sequence such that $M_1({\bf s})=M_1({\bf t})=m$, all elements of ${\bf s}$ are equal to $\lfloor m/\ell\rfloor$ or $\lceil m/\ell\rceil$ and all elements of ${\bf t}$ are equal to $\lfloor m/n\rfloor$ or $\lceil m/n\rceil$, respectively. Then the inequality $$ \Delta_S^3\Delta_T^3(n\ell)^{\varepsilon/2} \le \left(\frac{m}{\ell}+1\right)^3 \left(\frac{m}{n}+1\right)^3m^{\varepsilon} \ll \min\{sm,tm\} $$ can be seen to follow from $m^{4+\varepsilon}=o(n^2\ell^2\min\{\ell,n\}).$ Thus, for $n$ sufficiently large, the set $\D$ in \thref{t:sparseCaseBip} can be assumed to be non-empty and for \ref{model-b-iii}, the requirements of \thref{t:sparseCaseBip} are satisfied. For $\log^3\ell+ \log ^3n\ll m \ll \min\{n/\log^2n,\ell/\log^2\ell\}$, \thref{l:sigmaConcBip} shows that with probability $1-n^{-\omega}-\ell^{-\omega}$ all vertices in $\mathcal{G}(\ell,n,m)$ have bounded degrees, so the main theorem of Bender~\cite{b1974} applies. \thref{l:sigmaConcBip} also shows that the terms $\sigma^2({\bf s})$ and $\sigma^2({\bf t})$ are concentrated near their expected values, to the extent that $\corr{{\bf d}} =1+o(1)$ with probability $1-n^{-\omega}-\ell^{-\omega}$. It is then a simple calculation to check that Bender's formula corresponds asymptotically with the first formula in~\eqn{probbin}, noting the adjustment required to count graphs (as in Remark~\ref{r:first}). This finishes the proof of part (b). 
For the digraph case~\ref{model-a}, \thref{t:mainbip,t:sparseCaseBip} also supply the required statement when $\ell=n$. Here \thref{t:mainbip} covers the range $n^{1+\varepsilon}<m<\mu_0 n^2$ for any $\varepsilon>0$ and sufficiently small constant $\mu_0$, and \thref{t:sparseCaseBip} covers the range $n/\log^3n < m < n^{5/4-\varepsilon},$ say. Only the very dense and very sparse cases remain. McKay and Skerman~\cite[Theorem 1(d)]{MS} immediately covers $\min\{m, n(n-1)-m\} > n^2/(c\log n)$ for any $c>0$. Larger values of $m$ than this are covered by complementation of the results for smaller values. On the other hand, for $\log^3n\ll m \ll n/\log^2n$, we may again use \thref{l:sigmaConcBip} and the main theorem of~\cite{b1974}. This works almost exactly the same as for the bipartite graph case. \end{proof} \begin{proof}[Proof of \thref{t:edgeprobability}] The stated formula is a by-product of the proof of \thref{t:mainbip}, so here we just point to the relevant spots within that proof. Recall the definition of $P^\ast$, which is our approximation to the edge probability, at the beginning of Section~\ref{s:denseBip}, and note that a parameterisation yields the formula given in the statement of \thref{t:edgeprobability}. \thref{RDenseBip} then yields the desired approximation since $\mu\varepsilon^4=O(\min\{s,t\}^{4\varphi-4}m/n\ell)$. \end{proof}
https://arxiv.org/abs/2006.15797
Asymptotic enumeration of digraphs and bipartite graphs by degree sequence
We provide asymptotic formulae for the numbers of bipartite graphs with given degree sequence, and of loopless digraphs with given in- and out-degree sequences, for a wide range of parameters. Our results cover medium range densities and close the gaps between the results known for the sparse and dense ranges. In the case of bipartite graphs, these results were proved by Greenhill, McKay and Wang in 2006 and by Canfield, Greenhill and McKay in 2008, respectively. Our method also essentially covers the sparse range, for which much less was known in the case of loopless digraphs. For the range of densities which our results cover, they imply that the degree sequence of a random bipartite graph with m edges is accurately modelled by a sequence of independent binomial random variables, conditional upon the sum of variables in each part being equal to m. A similar model also holds for loopless digraphs.
https://arxiv.org/abs/1401.4372
Regular matchstick graphs
A graph G=(V,E) is called a unit-distance graph in the plane if there is an injective embedding of V in the plane such that every pair of adjacent vertices are at unit distance apart. If additionally the corresponding edges are non-crossing and all vertices have the same degree r we talk of a regular matchstick graph. Due to Euler's polyhedron formula we have $r\le 5$. The smallest known 4-regular matchstick graph is the so called Harborth graph consisting of 52 vertices. In this article we prove that no finite 5-regular matchstick graph exists and provide a lower bound for the number of vertices of 4-regular matchstick graphs.
\section{Introduction} \noindent One of the best known problems in combinatorial geometry asks how often the same distance can occur among $n$ points in the plane. Via scaling we can assume that the most frequent distance has length $1$. Given any set $P$ of points in the plane, we can define the so-called unit-distance graph in the plane, connecting two elements of $P$ by an edge if their distance is one. The known bounds for the maximum number $u(n)$ of edges of a unit-distance graph in the plane, see e.~g.{} \cite{1086.52001}, are given by \[ \Omega\!\left(ne^{\frac{c\log n}{\log\log n}}\right)\le u(n)\le O\!\left(n^{\frac{4}{3}}\right). \] For $n\le 14$ the exact values of $u(n)$ were determined in \cite{schade}, see also \cite{1086.52001}. If we additionally require that the edges are non-crossing, then we obtain another class of geometrical and combinatorial objects: A \textbf{matchstick graph} is a graph drawn with straight edges in the plane such that the edges have unit length, and non-adjacent edges do not intersect, see Figure~\ref{fig_configuration_1} for an example.
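As a reminder of where the bound $r\le 5$ from the abstract comes from (a standard Euler-formula computation, summarised here for convenience; a disconnected graph can be handled component by component): every matchstick graph is planar, so for a connected $r$-regular matchstick graph with $V$ vertices and $E$ edges we have

```latex
% Euler's formula V - E + F = 2, with each face bounded by at least
% three edges (3F <= 2E), gives E <= 3V - 6 for any planar graph.
% An r-regular graph has E = rV/2, so
\[
  \frac{rV}{2}\;\le\;3V-6
  \quad\Longrightarrow\quad
  r\;\le\;6-\frac{12}{V}\;<\;6,
\]
% and since r is an integer, r <= 5.
```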
https://arxiv.org/abs/2203.01473
Powers of posinormal Hilbert-space operators
A bounded linear operator $A$ on a Hilbert space $\mathcal{H}$ is posinormal if there exists a positive operator $P$ such that $AA^{*} = A^{*}PA$. We show that if $A$ is posinormal with closed range, then $A^n$ is posinormal and has closed range for all integers $n\ge 1$. Because the collection of posinormal operators includes all hyponormal operators, we obtain as a corollary that powers of closed-range hyponormal operators continue to have closed range. We also present a simple example of a closed-range operator $T: \mathcal{H}\to \mathcal{H}$ such that $T^2$ does not have closed range.
\section{Introduction} A bounded linear operator $A$ on a Hilbert space $\mathcal{H}$ is said to be \textit{posinormal} if there exists a positive operator $P$ such that $AA^{*} = A^{*}PA$. The operator $P$ is called an \textit{interrupter} of $A$. Note that if $A$ has interrupter $I$, then $A$ is normal. Rhaly introduced the notion of posinormality in \cite{Rhaly}. Posinormality is a unitary invariant; in fact, as Rhaly points out, if $V$ is an isometry (so that $V^*V = I$) and $A$ is posinormal with interrupter $P$, then $VAV^*$ is posinormal with interrupter $VPV^*$. If $A^*$ is posinormal, then $A$ is \textit{coposinormal}. Applying a result due to Douglas \cite[Theorem 1]{douglas1966majorization}, Rhaly \cite[Theorem 2.1]{Rhaly} obtains useful equivalent conditions for posinormality: \begin{theorem}[\cite{Rhaly}]\label{posinormal} For $A \in \mathcal{B(H)}$, the space of bounded linear operators on a Hilbert space $\mathcal{H}$, the following statements are equivalent: \begin{enumerate} \item[(1)] $A$ is posinormal; \item[(2)] $\mbox{\rm ran\ } \textrm{ } A \subseteq \mbox{\rm ran\ } \textrm{ } A^*$; \item[(3)] $AA^{*} \leq \lambda^2 A^{*}A $ for some $\lambda \geq 0$; and \item[(4)] there exists $T \in \mathcal{B(H)}$ such that $A = A^{*}T$. \end{enumerate} \end{theorem} Note that condition (2) of the preceding theorem yields another way to see a normal operator $N$ is both posinormal and coposinormal because $\mbox{\rm ran\ } N = \mbox{\rm ran\ } N^*$. By (3) with $\lambda = 1$, we see that every hyponormal operator is posinormal. This paper draws its motivation from three sources \cite{Bouldin,BoT,Kubrusly}. 
\begin{itemize} \item In \cite[Section 4]{BoT}, a class of composition operators on the Hardy space $H^2$ of the open unit disk in the complex plane is identified such that (i) each member of the class is both posinormal and coposinormal and (ii) for each positive integer $n$ exceeding $1$, there is a composition operator $C$ in this class for which $C^n$ is not posinormal. \item In \cite{Kubrusly}, an example of a posinormal operator whose square is not posinormal is exhibited and a study of powers of posinormal operators is undertaken based on the notions of ascent and descent of Hilbert-space operators. Corollary 1(b) of that paper states, contrary to the properties of the class of composition operators discussed in the preceding bullet item, that if a Hilbert-space operator $T$ is both posinormal and coposinormal, then $T^n$ is posinormal for all $n\ge 1$. An additional hypothesis is needed to make Corollary 1(b) of \cite{Kubrusly} hold. In this paper, we show that such a hypothesis is that the range of $T$ be closed: \begin{mthm} If $T: \mathcal{H}\to \mathcal{H}$ is a posinormal operator with closed range, then $T^n$ is posinormal for all $n\ge 1$ (no assumption of coposinormality required), and $T^n$ has closed range for all $n\ge1$. \end{mthm} \item In \cite{Bouldin}, Bouldin characterizes when a product of two operators with closed range will have closed range and presents an example of an operator $T$ having closed range for which $T^2$ does not have closed range. One key to proving the Main Theorem is establishing that if $T$ is posinormal with closed range, then $T^2$ also has closed range. We present two different proofs, one of which is based on Bouldin's work. We also present an example simpler than Bouldin's of an operator $T$ with closed range such that $T^2$ does not have closed range. \end{itemize}
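Before turning to the results, a finite-dimensional sanity check of Theorem~\ref{posinormal} may be helpful. The following numpy sketch is ours, purely for illustration (the matrix $A$ and the choice of $\lambda$ are not from the paper): any invertible $A$ has $\mbox{\rm ran\ } A=\mbox{\rm ran\ } A^*=\mathcal H$, so condition (2) holds trivially, and condition (3) can be checked numerically with $\lambda=\|A\|\,\|A^{-1}\|$.

```python
import numpy as np

# Illustrative matrix: invertible but not normal, hence posinormal
# by condition (2) of the theorem above (ran A = ran A* = C^2).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

AAs = A @ A.conj().T                      # A A*
AsA = A.conj().T @ A                      # A* A

# Condition (3): A A* <= lambda^2 A* A with lambda = ||A|| * ||A^{-1}||,
# since ||A* x||^2 <= ||A||^2 ||x||^2 <= ||A||^2 ||A^{-1}||^2 ||A x||^2.
lam = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
gap = lam**2 * AsA - AAs                  # should be positive semidefinite
min_eig = np.linalg.eigvalsh(gap).min()

print(min_eig >= -1e-10)                  # True: condition (3) holds
print(np.allclose(AAs, AsA))              # False: A is not normal
```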
\section{Results} \begin{lemma}\label{CRL} Suppose that $T:\mathcal{H}\rightarrow \mathcal{H}$ is posinormal and has closed range; then $T^2$ also has closed range. \end{lemma} \begin{proof} Assume that $T$ has closed range, so that $T^*$ also has closed range. We have the following orthogonal decomposition of $\mathcal{H}$: $\mathcal{H}= \text{ker}\ T \oplus \mbox{\rm ran\ } T^*$. Because the restriction of $T$ to $\mbox{\rm ran\ } T^*$ is one-to-one with closed range, $T$ is bounded below on $\mbox{\rm ran\ } T^*$: for all $h\in \mbox{\rm ran\ } T^*$, we have $$ \|Th\| \ge c \|h\|,\ \text{for some}\ c > 0. $$ Because $T$ is posinormal, $\mbox{\rm ran\ } T\subseteq \mbox{\rm ran\ } T^*$ (by Part (2) of Theorem~\ref{posinormal}), and thus we have \begin{equation}\label{bddbelow} \|T^2h\| \ge c^2 \|h\| \quad \text{for all} \ h\in \mbox{\rm ran\ } T^*. \end{equation} It follows that $T^2$ has closed range: if $T^2 h_n \to w$ as $n\to\infty$ for some $w\in \mathcal{H}$ and sequence $(h_n)$ in $\mbox{\rm ran\ } T^*$; then, by (\ref{bddbelow}), because $(T^2 h_n)$ is Cauchy, the sequence $(h_n)$ is also Cauchy, so that $(h_n)$ has limit $h\in \mbox{\rm ran\ } T^*$ and $w = \lim_{n\to\infty} T^2 h_n = T^2h$. \end{proof} The preceding lemma may also be established using \begin{quotation} Bouldin's Criterion \cite{Bouldin}: {\it If $A$ and $B$ are operators on $\mathcal{H}$ having closed range then $AB$ also has closed range if and only if the angle between $\mbox{\rm ran\ } B$ and $\text{ker}\ A\cap(\text{ker}\ A \cap \mbox{\rm ran\ } B)^\perp$ is positive.} \end{quotation} Recall that the angle $\theta$ between two subspaces $\mathcal{M}$ and $\mathcal{N}$ of $\mathcal{H}$ is given by $$ \theta = \cos^{-1}\left(\rule{0in}{.15in}\sup\{|\langle f, g\rangle|: f\in \mathcal{M}, g\in \mathcal{N}, \|f\| =1= \|g\|\}\right), $$ where $\langle \cdot, \cdot\rangle$ denotes the inner product of the Hilbert space $\mathcal{H}$. 
Suppose that $A = B = T$, where $T$ is posinormal with closed range. Then $\text{ker}\ T\cap \mbox{\rm ran\ } T =\{0\}$ because $\mbox{\rm ran\ } T\subseteq \mbox{\rm ran\ } T^*$. Hence, $\text{ker}\ T\cap (\text{ker}\ T\cap \mbox{\rm ran\ } T)^\perp = \text{ker}\ T$ and the angle $\theta$ between $\mbox{\rm ran\ } T$ and $\text{ker}\ T\cap(\text{ker}\ T \cap \mbox{\rm ran\ } T)^\perp$ is $$ \cos^{-1}\left(\rule{0in}{.15in}\sup\{|\langle f, g\rangle|: f\in \mbox{\rm ran\ } T, g\in \text{ker}\ T, \|f\| =1= \|g\|\}\right) = \cos^{-1}(0) = \pi/2 > 0, $$ where we have again used $\mbox{\rm ran\ } T \subseteq \mbox{\rm ran\ } T^*$ to obtain $\langle f, g\rangle = 0$ for all inner products in the set above whose supremum is found to be 0. By Bouldin's Criterion, $T^2$ has closed range. In general, if $T: \mathcal{H} \to \mathcal{H}$ has closed range, then $T^2$ need not have closed range. Bouldin \cite[p.\ 363]{Bouldin} provides an example involving a direct-sum decomposition of $\mathcal{H}$ into three mutually orthogonal subspaces. Here's an example involving a direct-sum decomposition of $\mathcal{H}$ into two orthogonal subspaces. \begin{exmp} A Hilbert-space operator having closed range whose square does not have closed range. \end{exmp} \begin{proof}[] Let $A: \mathcal{H} \to \mathcal{H}$ be any bounded linear operator on the Hilbert space $\mathcal{H}$ such that $A$ does not have closed range. Define $T: \mathcal{H}\oplus \mathcal{H} \to \mathcal{H}\oplus \mathcal{H}$ by $$ T(h_1, h_2) = (Ah_1 + h_2, 0), $$ so that $T$ has matrix representation $$ T = \begin{pmatrix} A & I \\0&0\end{pmatrix}. $$ Then $\mbox{\rm ran\ } T = \mathcal{H}\oplus \{0\}$ is closed and $\mbox{\rm ran\ } T^2 = \mbox{\rm ran\ } A \oplus \{0\}$ is not closed. \end{proof} The following lemma is Proposition 3 of \cite{JKKP}. We include a short proof for completeness. 
\begin{lemma}[\cite{JKKP}]\label{keylemma} Suppose that $T:\mathcal{H}\to \mathcal{H}$ is posinormal; then $$ \text{ker}\ T = \text{ker}\ T^2. $$ \end{lemma} \begin{proof} That $\text{ker}\ T \subseteq \text{ker}\ T^2$ is clear. Suppose that $h\in \text{ker}\ T^2$. Because $T$ is posinormal, $\mbox{\rm ran\ } T \subseteq \mbox{\rm ran\ } T^*$; thus, there is some $w\in \mathcal{H}$ such that $Th = T^*w$. We have $$ 0 = T(Th) = T(T^*w), $$ so that $0 = \langle TT^* w, w\rangle = \langle T^*w, T^* w\rangle = \|T^*w\|^2$. Hence $T^*w = 0$ and we have $Th = T^*w = 0$. Thus, $h\in \text{ker}\ T$, and it follows that $\text{ker}\ T^2\subseteq \text{ker}\ T$, as desired. \end{proof} We are now ready to prove the Main Theorem. \begin{mthm} If $T: \mathcal{H}\to \mathcal{H}$ is a posinormal operator with closed range, then $T^n$ is posinormal for all $n\ge 1$ and $T^n$ has closed range for all $n\ge1$. \end{mthm} \begin{proof} Suppose that $T$ is posinormal with closed range. By Lemma~\ref{CRL}, $T^2$ also has closed range. Because $\mbox{\rm ran\ } T$ and $\mbox{\rm ran\ } T^2$ are closed, the same is true of $\mbox{\rm ran\ } T^*$ and $\mbox{\rm ran\ } {T^*}^2$. Because $T$ is posinormal, Lemma~\ref{keylemma} yields $\text{ker}\ T = \text{ker}\ T^2$. Thus, $(\text{ker}\ T)^{\perp} = (\text{ker}\ T^2)^\perp$, so that the closure of $\mbox{\rm ran\ } T^*$ equals the closure of $\mbox{\rm ran\ } {T^*}^2$, but these ranges are closed, so that $\mbox{\rm ran\ } T^* = \mbox{\rm ran\ } {T^*}^2$. Thus, $\mbox{\rm ran\ } T^* = \mbox{\rm ran\ } {T^*}^n$ for all $n\ge1$ (see, e.g., \cite[p.\ 290]{TL}). Because $T$ is posinormal, $\mbox{\rm ran\ } T \subseteq \mbox{\rm ran\ } T^*$. Thus for every $n \ge 1$, we have $$ \mbox{\rm ran\ } T^n \subseteq \mbox{\rm ran\ } T \subseteq \mbox{\rm ran\ } T^* = \mbox{\rm ran\ } {T^*}^n. $$ Hence, by part (2) of Theorem~\ref{posinormal}, $T^n$ is posinormal for all $n\ge 1$.
Note that $T^n$ has closed range for all $n\ge1$ because $\mbox{\rm ran\ } T^*$ is closed and $\mbox{\rm ran\ } {T^*}^n = \mbox{\rm ran\ } T^*$ for all $n\ge 1$. \end{proof} Powers of a normal operator remain normal, of course, whether or not the operator has closed range. However, powers of a hyponormal operator need not be hyponormal, and, as discussed in the Introduction, powers of posinormal operators need not be posinormal. We have shown in our Main Theorem that adding to posinormality the hypothesis of closed range does yield the posinormality of powers. Thus, we can say that a closed-range hyponormal operator has posinormal powers, but the stronger assertion that powers are hyponormal is not valid---we conclude this paper with an example illustrating that the square of a closed-range hyponormal operator need not be hyponormal. First, we state the following immediate corollary of our Main Theorem. \begin{cor} If $T: \mathcal{H}\to \mathcal{H}$ has closed range and is normal or hyponormal, then $T^n$ has closed range for all $n\ge 1$. \end{cor} That powers of closed-range normal operators continue to have closed range is a consequence of the Spectral Mapping Theorem and the following characterization of normal operators having closed range (see, e.g., Proposition 4.5 of Chapter 11 of \cite{JBC}): \begin{quotation} {\it A normal operator has closed range if and only if $0$ is not a limit point of its spectrum.} \end{quotation} The preceding spectral characterization of closed-range normal operators certainly does not extend to hyponormal operators; for example, the unilateral shift $U$ on $\ell^2$ is hyponormal with closed range but its spectrum is the closed unit disk. That powers of closed-range hyponormal operators continue to have closed range appears to be new. \begin{exmp} A hyponormal operator having closed range whose square is not hyponormal. \end{exmp} \begin{proof}[] Let $U$, as above, be the unilateral shift on $\ell^2$.
Ito and Wong \cite[p.\ 158]{IW} point out $U^* + 2U$ is an example of a hyponormal Toeplitz operator ($U^* + 2U$ is equivalent to the Toeplitz operator with symbol $\phi(z) = \bar{z} + 2 z$) that is neither normal nor analytic. The hyponormal operator $U^* + 2U$ has closed range because it is bounded below: for each $\ell^2$ sequence $s$, we have $\|Us\| = \|s\|$ and $\|U^*s\| \le \|s\|$ so that $\|(2U + U^*)s\| \ge \|2Us\| - \|U^*s\| \ge \|s\|$. However, the square of $U^* + 2U$ is not hyponormal---see the solution of Problem 209 of \cite{PH82}. (That all powers of $U^* + 2U$ remain posinormal follows from our Main Theorem or the observation that all powers of the adjoint of $U^* + 2U$ are surjective because $U^* + 2U$ is bounded below.) \end{proof}
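To see concretely why posinormality matters in Lemma~\ref{keylemma}, consider the $2\times 2$ nilpotent Jordan block. The following numpy sketch (ours, for illustration only) confirms that it fails the range condition $\mbox{\rm ran\ } J\subseteq\mbox{\rm ran\ } J^*$ and that the kernel equality of the lemma fails for it:

```python
import numpy as np

# The nilpotent Jordan block: ran J = span{e1}, ran J* = span{e2},
# so J is not posinormal, and indeed ker J (= span{e1}) != ker J^2 (= C^2).
J = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.allclose(J @ J, 0))          # True: J^2 = 0, so ker J^2 is everything
print(int(np.linalg.matrix_rank(J)))  # 1: ker J is only one-dimensional

# J e2 = e1 lies in ran J but not in ran J* = span{e2}:
x = J @ np.array([0.0, 1.0])          # = e1
proj = np.array([0.0, 1.0]) * x[1]    # orthogonal projection of x onto span{e2}
print(np.allclose(x, proj))           # False: ran J is not contained in ran J*
```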
https://arxiv.org/abs/1101.4151
Tilted Sperner Families
Let \cal A be a family of subsets of an n-set such that \cal A does not contain distinct sets A and B with |A\B| = 2|B\A|. How large can \cal A be? Our aim in this note is to determine the maximum size of such an \cal A. This answers a question of Kalai. We also give some related results and conjectures.
\section{Introduction} A set system $\mathcal A\subseteq \mathcal {P}[n]=\mathcal {P}(\{1,\ldots ,n\})$ is said to be an \emph{antichain} or \emph{Sperner family} if $A\not\subset B$ for all distinct $A,B\in \mathcal A$. Sperner's theorem \cite{sper} says that any antichain $\mathcal A$ has size at most $\binom {n}{\lfloor n/2\rfloor }$. (See \cite{com} for general background.) Kalai \cite{kal} noted that the antichain condition may be restated as: $\mathcal A$ does not contain $A$ and $B$ such that, in the subcube of the $n$-cube spanned by $A$ and $B$, they are the top and bottom points. He asked what happens if we `tilt' this condition. For example, suppose that we instead forbid $A$,$B$ such that $A$ is 1/3 of the way up the subcube spanned by $A$ and $B$? Equivalently, $\mathcal A$ cannot contain two sets $A$ and $B$ with $|A\backslash B|=2|B\backslash A|$. An obvious example of such a system is any level set $[n]^{(i)}=\{ A\subset [n]:|A|=i\} $. Thus we may certainly achieve size $\binom {n}{\lfloor n/2\rfloor }$. The system $[n]^{(\lfloor n/2\rfloor )}$ is not maximal, as we may for example add to it all sets of size $\lfloor n/4\rfloor -1$ -- but that is a rather small improvement. Kalai \cite{kal} asked if, as for Sperner families, it is still true that our family $\mathcal A$ must have size $o(2^n)$. Our aim in this note is to verify this. We show that the middle layer is asymptotically best, in the sense that the maximum size of such a family is $(1+o(1)) \binom {n}{\lfloor n/2\rfloor }$. We also find the exact extremal system, for $n$ even and sufficiently large. We give similar results for any particular `forbidden ratio' in the subcube spanned. What happens if, instead of forbidding a particular ratio, we instead forbid an absolute distance from the bottom point? For example, for distance 1 this would correspond to the following: our set system $\mathcal A$ must not contain sets $A$ and $B$ with $|A\backslash B|=1$. How large can $\mathcal A$ be? 
Here the situation is rather different, as for example one cannot take an entire level. We give a construction that has size about $\frac {1}{n} \binom {n}{\lfloor n/2\rfloor }$, which is about (a constant fraction of) $1/n^{3/2}$ of the whole cube. But we are not able to show that this is optimal: the best upper bound that we are able to give is ${2^n}/{n}$. However, if we strengthen the condition to $\mathcal A$ not having $A$ and $B$ with $|A\backslash B| \leq 1$ then we are able to show that the greatest family has size $\frac{1}{n} \binom{n}{\lfloor n/2\rfloor }$, up to a multiplicative constant. \section{Forbidding a fixed ratio} In this section we consider the problem of finding the maximum size of a family $\mathcal A$ of subsets of $[n]$ which satisfies $p|A\backslash B|\neq q|B\backslash A|$ for all $A,B\in \mathcal A$ where $p:q$ is a fixed ratio. Initially we will focus on the first non-trivial case $1:2$ (note that $1:1$ is trivial as then the condition just forbids two sets of the same size in $\mathcal A$) and then at the end of the section we extend these results to any given ratio. As mentioned in the Introduction, for the ratio $1:2$ we actually obtain the extremal family when $n$ is even and sufficiently large. This family, which we will denote by $\mathcal B_0$, is a union of level sets: $\mathcal B_0=\cup _{i\in I}[n]^{(i)}$. Here the set $I$ is defined as follows: $I=\{a_i:i\geq 0\}\cup \{b_i:i\geq 0\} $, where $a_0=b_0=\frac {n}{2}$ and $a_i$ and $b_i$ are defined inductively by taking $a_i=\lceil \frac {a_{i-1}}{2}\rceil -1$ and $b_i=\lfloor \frac {b_{i-1}+n}{2}\rfloor +1$ for all $i$. For example, if $n=2^k$ then $I=\{2^{k-1}\}\cup \{2^i-1:0\leq i\leq k-2\}\cup \{2^k-2^i+1:0\leq i\leq k-2\} $.
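As a quick computational check of this construction (our own script, not part of the paper): two levels $a>b$ can host a forbidden pair with $|A\backslash B|=2|B\backslash A|$ exactly when the required intersection size $c=2b-a$ is feasible, i.e.\ $c\ge 0$ and $c\ge a+b-n$. The script below rebuilds $I$ from the recursion for $n=16$ and verifies that no two of its levels admit such a pair:

```python
import itertools

def bad_pair(a, b, n):
    """Can levels a and b host sets A, B with |A \\ B| = 2|B \\ A|?
    For a > b this forces the intersection size to be c = 2b - a, which
    must satisfy 0 <= c and c >= a + b - n (c <= b holds automatically)."""
    a, b = max(a, b), min(a, b)
    if a == b:
        return False            # equal sizes give |A \ B| = |B \ A|
    return a <= 2 * b and 2 * a <= b + n

def index_set(n):
    """Levels of the family B_0, following the recursion for a_i and b_i."""
    levels = {n // 2}
    a = b = n // 2
    while True:
        a = -(-a // 2) - 1      # ceil(a/2) - 1
        if a < 0:
            break
        levels.add(a)
    while True:
        b = (b + n) // 2 + 1    # floor((b + n)/2) + 1
        if b > n:
            break
        levels.add(b)
    return sorted(levels)

n = 16
I = index_set(n)
print(I)                        # [0, 1, 3, 8, 13, 15, 16]
print(all(not bad_pair(a, b, n) for a, b in itertools.combinations(I, 2)))  # True
print(bad_pair(8, 7, n))        # True: this is why level 7 cannot join level 8
```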
Noting that for any sets $A$ and $B$ with either (i) $|A|=l$ where $l<\frac {n}{2}$ and $|B|>2l$ or (ii) $|A|=l$ where $l>\frac {n}{2}$ and $|B|<2l-n$ we have $|A\backslash B|\neq 2|B\backslash A|$, we see that $\mathcal B_0$ satisfies the required condition. Our main result is the following. \begin{thm} \label{main} Suppose $\mathcal A$ is a set system on ground set $[n]$ such that $|A\backslash B|\neq 2|B\backslash A|$ for all distinct $A,B\in \mathcal A$. Then $|\mathcal A|\leq (1+o(1))\binom {n}{\lfloor n/2 \rfloor }$. Furthermore, if $n$ is even and sufficiently large then $|\mathcal A|\leq |\mathcal B_0|$, with equality if and only if $\mathcal A=\mathcal B_0$. \end{thm} \noindent The main step in the proof of Theorem \ref{main} is given by the following lemma. The proof is a Katona-type (see \cite{katona}) averaging argument. \begin{lem} \label{inequality} Let $\mathcal{A}$ be a set system on $[n]$ such that $|A\backslash B|\neq 2|B\backslash A|$ for all distinct $A,B\in \mathcal A$. Then \begin{equation*} \sum _{j=l}^{2l} \frac {|\mathcal {A}_j|}{\binom {n}{j}} \leq 1 \end{equation*} for all $l\leq \frac {n}{3}$ and \begin{equation*} \sum _{j=2k-n}^{k} \frac {|\mathcal {A}_j|}{\binom {n}{j}} \leq 1 \end{equation*} for all $k\geq \frac {2n}{3}$, where $\mathcal A_j=\mathcal A\cap [n]^{(j)}$. \end{lem} \begin{proof} We only prove the first inequality, as the proof of the second is identical. Pick a random ordering of $[n]$ which we denote by $(a_1,a_2,\ldots ,a_{\lceil \frac {2n}{3} \rceil}, b_1,\ldots ,b_{\lfloor \frac {n}{3} \rfloor})$. Given this ordering, let $C_i=\{a_j:j\in[2i]\}\cup \{b_k:k\in [i+1,l]\} $ and let $\mathcal{C}=\{C_i:i\in [0,l]\}$. Consider the random variable $X=|\mathcal {A}\cap \mathcal {C}|$. Since each set $B\in [n]^{(i)}$ is equally likely to be $C_{i-l}$ we have $\mathbb {P}[B\in \mathcal {C}]= \frac {1}{\binom {n}{i}}$. 
Thus by linearity of expectation we have \begin{equation} \label{ref} \mathbb{E}(X)=\sum_{i=l}^{2l}\frac {|\mathcal A_i|}{\binom {n}{i}} \end{equation} On the other hand, given any $C_i, C_j$ with $i<j$ we have $|C_j\backslash C_i|=2|C_i\backslash C_j|$ and so $\mathcal{A}$ can contain at most one of these sets. This gives $\mathbb{E}(X)\leq 1$. Together with (\ref{ref}) this gives the claimed inequality \begin{equation*} \sum_{i=l}^{2l}\frac {|\mathcal A_i|}{\binom {n}{i}} \leq 1 \end{equation*} \end{proof} \noindent \emph{Proof of Theorem \ref{main}.} We first show $|\mathcal A| \leq (1+o(1))\binom {n}{\lfloor n/2 \rfloor}$. By standard estimates (see e.g.\ Appendix A of \cite{aands}) we have $|[n]^{(\leq \alpha n)} \cup [n]^{(\geq (1-\alpha)n)}| = o(\binom {n}{\lfloor n/2 \rfloor})$ for any fixed $\alpha \in [0,\frac {1}{2})$, so it suffices to show that $| \bigcup _{i={\frac {2n}{5}}}^{\frac {3n}{5}} \mathcal A_i|\leq \binom {n}{\frac {n}{2}}$. But this follows immediately from Lemma \ref{inequality} by taking $l=\lfloor \frac {n}{3} \rfloor $. We now prove the extremal part of the claim in Theorem \ref{main}. We first show that the maximum of $f(x)=\sum _{i=0}^n x_i$ subject to the inequalities \begin{equation} \label{firstineq} \sum _{j=l}^{2l} \frac {x_j}{\binom {n}{j}} \leq 1, \quad l\in \{ 0,1,\ldots ,\lfloor \frac {n}{3} \rfloor \} \end{equation} and \begin{equation} \label{secondineq} \sum _{j=2k-n}^{k} \frac {x_j}{\binom {n}{j}} \leq 1, \quad k\in \{ \lceil \frac {2n}{3} \rceil ,\ldots ,n\} \end{equation} from Lemma \ref{inequality} occurs when $x_{n/2}= \binom {n}{\frac {n}{2}}$. Indeed, suppose otherwise. At least one of these inequalities involving $x_{n/2}$ must occur with equality, as otherwise we can increase $x_{n/2}$ slightly, increase the value of $f(x)$ and still satisfy (\ref{firstineq}) and (\ref{secondineq}). Pick $j>\frac {n}{2}$ as small as possible such that $x_j>0$.
Let $y_{n/2}=x_{n/2}+\epsilon \binom {n}{n/2}$, $y_j=x_j-\epsilon \binom {n}{j}$ and $y_i=x_i$ for all other $i$. As $f(y)>f(x)$, one of the inequalities (\ref{firstineq}) or (\ref{secondineq}) must fail. If $\epsilon$ is sufficiently small only the inequalities involving $y_{n/2}$ and not $y_j$ can be violated. Choose $k<n/2$ maximal such that $y_k>0$ and $y_k$ does not occur in any inequality involving $y_j$. Note that we must have $j-k\geq \frac {n}{4}$. Decrease $y_k$ by $\epsilon \binom {n}{k}$. Since the only increased variable $y_{n/2}$ always occurs with one of $y_j$ or $y_k$, it follows that $y=(y_0,\ldots ,y_n)$ satisfies (\ref{firstineq}) and (\ref{secondineq}). We claim that $f(y)>f(x)$. Indeed, we must have either $|j-\frac {n}{2}|\geq \frac {n}{8}$ or $|k-\frac{n}{2}|\geq \frac {n}{8}$. Without loss of generality assume that $|k-\frac {n}{2}|\geq \frac {n}{8}$. Then since $\binom {n}{n/2}> \binom {n}{(n/2)+1} + \binom {n}{3n/8}$ for sufficiently large $n$ we have \begin{equation*} f(y)=f(x)+\epsilon \binom {n}{n/2}-\epsilon \binom {n}{j} - \epsilon \binom {n}{k} >f(x)+\epsilon \binom {n}{n/2}-\epsilon \binom {n}{(n/2)+1} - \epsilon \binom {n}{3n/8}>f(x). \end{equation*} Therefore we must have $x_{n/2}=\binom {n}{n/2}$, as claimed. Now, by the inequalities (\ref{firstineq}) and (\ref{secondineq}) we have $x_j=0$ for all $\frac {n}{4}\leq j\leq \frac {3n}{4}$ with $j\neq \frac {n}{2}$. From here it is easy to see by a weight transfer argument that $f(x)$ has a unique maximum when $x_i=\binom {n}{i}$ for $i\in I$ and $x_i=0$ otherwise. For a set system $\mathcal A$ these values of $x_i=|\mathcal A_i|$ can only be achieved if $\mathcal A=\mathcal B_0$, as claimed. \hspace{2cm} $\square$\\ \noindent We remark that the statement of Theorem \ref{main} does not hold for all even $n$, as can be seen for example by taking $n=4$ and $\mathcal A= \mathcal P[n]\backslash [n]^{(2)}$. We now extend Theorem \ref{main} from the ratio $1:2$ to any given ratio $p:q$.
Let $p:q$ be in its lowest terms and $p<q$. If $A\in [n]^{(i+a)}$ and $B\in [n]^{(i)}$ satisfy $p|A\backslash B|=q|B\backslash A|$ then we have $p(a+b)=q(b)$ where $b=|B\backslash A|$. But then $pa=(q-p)b$ and since $p$ and $q$ are coprime we must have that $(q-p)|a$. Therefore any family $\mathcal A=\bigcup _{i\in I}[n]^{(i)}$, where $I$ is an interval of length $q-p$, satisfies $p|A\backslash B|\neq q|B\backslash A|$ for all $A,B\in \mathcal A$. Taking $\lfloor \frac {n}{2}\rfloor \in I$ gives $|\mathcal A|=(q-p+o(1))\binom {n}{\lfloor n/2 \rfloor }$. Our next result shows that this is asymptotically best possible. \begin{thm} \label{givenratio} Let $p,q\in \mathbb{N}$ be coprime with $p<q$. Let $\mathcal A$ be a set system on ground set $[n]$ such that $p|A\backslash B|\neq q|B\backslash A|$ for all distinct $A,B\in \mathcal A$. Then $|\mathcal A|\leq (q-p+o(1))\binom {n}{\lfloor n/2 \rfloor }$. \end{thm} The following lemma performs an analogous role to that of Lemma \ref{inequality} in the proof of Theorem \ref{main}. \begin{lem} \label{secondinequality} Let $\mathcal{A}$ be a set system on $[n]$ such that $p|A\backslash B|\neq q|B\backslash A|$ for all distinct $A,B\in \mathcal A$. Then \begin{equation*} \sum _{j\in J_k} \frac {|\mathcal {A}_j|}{\binom {n}{j}} \leq 1 \end{equation*} where $J_k=\{l:\lceil \frac {pn}{p+q}\rceil \leq l \leq \lfloor \frac {qn}{p+q}\rfloor, l\equiv k \pmod {(q-p)}\} $ for $0\leq k\leq q-p-1$. \end{lem} \begin{proof} We only sketch the proof, as it is very similar to the proof of Lemma \ref{inequality}. For convenience we assume $n=(p+q)m$ (this assumption is easily removed). Fix $k\in [0,q-p-1]$ and let $k'\equiv k-pm\pmod {(q-p)}$ where $k'\in [0,q-p-1]$. Pick a random ordering of $[n]$ which we denote by $(a_1,a_2,\ldots ,a_{qm}, b_1,\ldots ,b_{pm})$. Given this ordering let $C_i=\{a_j:j\in[qi+k']\}\cup \{b_j:j\in [pi+1,pm]\} $ and let $\mathcal{C}=\{C_i:i\in [0,m-1]\}$. 
(Here if $k'=0$ we additionally adjoin $C_m$ to $\mathcal C$.) By choice of $k'$, we have $|C_i|\in J_k$ for all $i\in [0,m-1]$. Again for any $C_i$ and $C_j$ with $i<j$ we have $q|C_i\backslash C_j|=p|C_j\backslash C_i|$, which implies that $\mathcal A$ contains at most one element of $\mathcal C$. Using this the rest of the proof is as in Lemma \ref{inequality}. \end{proof} The proof of Theorem \ref{givenratio} is now identical to the proof of Theorem \ref{main} taking Lemma \ref{secondinequality} in place of Lemma \ref{inequality}. For simplicity we have given in Lemma \ref{secondinequality} only the inequalities that we needed in order to prove Theorem \ref{givenratio}. Further inequalities involving smaller level sets analogous to those in Lemma \ref{inequality} can also be obtained in a similar fashion. While we have not done so here, we note that it is possible to use these inequalities to again find an exact extremal family for any given ratio $p:q$ as in Theorem \ref{main}, provided $q-p$ and $n$ have the opposite parity and $n$ is sufficiently large. \section{Forbidding a fixed distance} In this final section we consider how large a family $\mathcal A$ can be if for all $A,B\in \mathcal A$ we do not allow $A$ to have a constant distance from the bottom of the subcube formed with $B$. For `distance exactly 1' this would mean that we exclude $\vert A \backslash B\vert= 1$ for $A,B\in \mathcal A$. Here the following family $\mathcal A^*$ provides a lower bound: let $\mathcal A^*$ consist of all sets $A$ of size $\lfloor n/2\rfloor $ such that $\sum _{i\in A} i \equiv r \pmod {n}$, where $r\in \{0,\ldots ,n-1\}$ is chosen to maximise $|\mathcal A^*|$. Such a choice of $r$ gives $| \mathcal A^* |\geq {\frac {1}{n}} {\binom{n} {\lfloor n/2\rfloor }}$. Note that if we had $|A\backslash B|=1$ for some $A,B\in \mathcal A^*$ then, since $|A|=|B|$, we would also have $|B\backslash A|=1$. 
Letting $A\backslash B=\{i\}$ and $B\backslash A=\{j\}$ we then have $i-j\equiv 0 \pmod {n}$, giving $i=j$, a contradiction. We suspect that this bound is best. \begin{conjecture} \label{conject} Let $\mathcal{A}\subset \mathcal{P}[n]$ be a family which satisfies $|A\backslash B|\neq 1$ for all $A,B\in \mathcal{A}$. Then $|\mathcal A|\leq (1+o(1))\frac {1}{n}\binom {n}{\lfloor n/2\rfloor }$. \end{conjecture} \noindent The following gives an upper bound that is a factor $n^{1/2}$ larger than this. \begin{thm} { \label{exact} Let $\mathcal{A}\subset \mathcal{P}[n]$ be a family such that $|A\backslash B|\neq 1$ for all $A,B\in \mathcal{A}$. Then there exists a constant $C$ independent of $n$ such that $|\mathcal{A}|\leq \frac{C}{n}2^n$.} \end{thm} \begin{proof} { An easy estimate gives that the number of sets of $\mathcal{A}$ lying in $[n]^{(\leq n/3)}\cup [n]^{(\geq 2n/3)}$ is at most $4\binom{n}{n/3}=o(\frac{2^n}{n})$. Therefore it suffices to show that $|\mathcal{A}_i|\leq \frac{C}{n}\binom{n}{i}$ for all $i\in [\frac{n}{3},\frac{2n}{3}]$. To see this, note that since $|A\backslash A'|\neq 1$ for all $A,A'\in \mathcal{A}$, each $B\in [n]^{(i+1)}$ contains at most one $A\in \mathcal{A}_i$. Double counting, we have \begin{equation*} { \begin{split} \frac {n}{3} |\mathcal {A}_i| \leq (n-i)|\mathcal {A}_i| &= | \lbrace (A,B): A\in \mathcal {A}_i, B \in [n]^{(i+1)}, A\subset B\rbrace | \\ & \leq \binom {n}{i+1} \leq 3\binom {n}{i} \end{split} } \end{equation*} as required. } \end{proof} Our final result gives an upper bound on the size of a family $\mathcal A$ in which we forbid `distance at most 1' instead of `distance exactly 1', i.e. where we have $|A\backslash B|> 1$ for all distinct $A,B\in \mathcal A$. Again, the family $\mathcal A^*$ constructed above gives a lower bound for this problem.
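The family $\mathcal A^*$ is easy to verify by brute force for small $n$; the following sketch (function names ours) checks both the `no distance exactly $1$' property and the pigeonhole size guarantee $|\mathcal A^*|\geq \frac{1}{n}\binom{n}{\lfloor n/2\rfloor}$.

```python
from itertools import combinations
from math import comb

def best_residue_family(n):
    # A*: among the floor(n/2)-subsets of {1,...,n}, group by element sum
    # mod n and keep a largest residue class.
    classes = {r: [] for r in range(n)}
    for A in combinations(range(1, n + 1), n // 2):
        classes[sum(A) % n].append(frozenset(A))
    return max(classes.values(), key=len)

def no_distance_one(family):
    # |A \ B| != 1 for all A, B in the family (A = B gives |A \ B| = 0).
    return all(len(A - B) != 1 for A in family for B in family)

for n in range(4, 11):
    fam = best_residue_family(n)
    assert no_distance_one(fam)
    assert n * len(fam) >= comb(n, n // 2)  # pigeonhole lower bound
```

The first assertion is exactly the argument above: within a residue class, $|A\setminus B|=1$ would force the two swapped elements to agree modulo $n$.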
In general, if we forbid `distance at most $k$' then it is easily seen that the following family $\mathcal A_k^*$ gives a lower bound of $\frac {1}{n^k}\binom {n}{\lfloor n/2\rfloor }$: supposing $n$ is prime, let $\mathcal A_k^*$ consist of all sets $A$ of size $\lfloor n/2 \rfloor $ which satisfy $\sum _{i\in A}i^d\equiv 0\pmod {n}$ for all $1\leq d\leq k$. Our last result provides an upper bound which matches this up to a multiplicative constant. The proof is again a Katona-type argument. Here the condition $|A\backslash B|>k$ rather than $|A\backslash B|\neq k$ seems to be crucial. \begin{thm} \label{atmost} Let $k\in \mathbb {N}$. Suppose $\mathcal {A}$ is a set system on $[n]$ such that $|A\backslash B|>k$ for all distinct $A,B \in \mathcal {A}$. Then $|\mathcal {A}|\leq \frac {(2^k+o(1))}{n^k}\binom {n}{\lfloor n/2 \rfloor}$. \end{thm} \begin{proof} { Consider the family $\partial ^{(k)} \mathcal A$, the $k$-shadow of $\mathcal A$, where \begin{equation*} { \partial ^{(k)}\mathcal{A} = \{B\in \mathcal{P}[n]: B=A\backslash C \mbox{ for some } A\in \mathcal {A} \mbox{ and }C\subset A \mbox{ with } |C|=k\}. } \end{equation*} Since $\mathcal{A}$ does not contain distinct $A,B$ with $|A\backslash B|\leq k$, every element of $\partial ^{(k)}\mathcal{A}$ is contained in at most one element of $\mathcal{A}$. Therefore we have \begin{equation} { \label{firstref} |\partial ^{(k)}\mathcal{A}|=\sum_{i=0}^n (i)_k|\mathcal{A}_i| } \end{equation} where $(i)_k=i(i-1)\cdots (i-k+1)$.
Now, since $\mathcal{A}$ does not contain $A,B$ with $|A\backslash B|\leq k$, it follows that $\partial ^{(k)}\mathcal{A}$ is an antichain, and so by Sperner's theorem we have \begin{equation} { \label{secondref} |\partial ^{(k)}\mathcal{A}| \leq \binom{n}{\lfloor n/2 \rfloor} } \end{equation} Finally, an estimate of the sum of binomial coefficients (Appendix A of \cite{aands}) gives \begin{equation} { \label{thirdref} \sum_{i=0}^{\frac{n}{2}-n^{2/3}}|\mathcal{A}_i| \leq \sum_{i=0}^{\frac{n}{2}-n^{2/3}} \binom{n}{i} \leq e^{-n^{1/3}}2^n. } \end{equation} Combining (\ref{firstref}), (\ref{secondref}) and (\ref{thirdref}) we obtain \begin{equation*} { \begin{split} \binom {n}{\lfloor n/2 \rfloor} &\geq \sum_{i=0}^{\frac{n}{2}-n^{2/3}} (i)_k |\mathcal {A}_i| + \sum_{i=\frac{n}{2}-n^{2/3}}^n (i)_k|\mathcal {A}_i| \\ &\geq \sum_{i=0}^{\frac{n}{2}-n^{2/3}} (\frac {n}{2}-n^{2/3})_k|\mathcal {A}_i| - (\frac {n}{2}-n^{2/3})_ke^{-n^{1/3}}2^n + \sum_{i={\frac{n}{2}-n^{2/3}}}^n (\frac{n}{2}-n^{2/3})_k|\mathcal {A}_i| \\ &= (\frac {n}{2}- o(n))^k|\mathcal {A}| - o( \binom {n}{\lfloor n/2 \rfloor} ) \end{split} } \end{equation*} which gives the desired result. } \end{proof} Taking $k=1$ in Theorem \ref{atmost} we obtain an upper bound which differs by a factor of 2 from the lower bound given by the family $\mathcal A^*$. It would be interesting to close this gap.
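For small prime $n$ the power-sum construction $\mathcal A_k^*$ can be checked directly; a minimal brute-force sketch (function names ours) confirms $|A\setminus B|>k$ for all distinct members when $n=7$:

```python
from itertools import combinations

def power_sum_family(n, k):
    # A_k*: floor(n/2)-subsets A of {1,...,n} (n prime) with
    # sum_{i in A} i^d = 0 (mod n) for every d = 1, ..., k.
    return [set(A) for A in combinations(range(1, n + 1), n // 2)
            if all(sum(i ** d for i in A) % n == 0 for d in range(1, k + 1))]

def min_distance(family):
    # Smallest |A \ B| over distinct pairs in the family.
    return min(len(A - B) for A in family for B in family if A != B)

for k in (1, 2):
    fam = power_sum_family(7, k)
    assert len(fam) >= 2 and min_distance(fam) > k
```

For $k=2$ and $n=7$ the family is $\{1,2,4\}$ and $\{3,5,6\}$: equal power sums force two sets at distance at most $2$ to swap the same pair of elements, which disjointness forbids.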
https://arxiv.org/abs/2008.07067
Revisiting Spectral Bundle Methods: Primal-dual (Sub)linear Convergence Rates
The spectral bundle method proposed by Helmberg and Rendl is well established for solving large-scale semidefinite programs (SDP) thanks to its low per iteration computational complexity and strong practical performance. In this paper, we revisit this classic method showing it achieves sublinear convergence rates in terms of both primal and dual SDPs under merely strong duality, complementing previous guarantees on primal-dual convergence. Moreover, we show the method speeds up to linear convergence if (1) structurally, the SDP admits strict complementarity, and (2) algorithmically, the bundle method captures the rank of the optimal solutions. Such complementary and low rank structure is prevalent in many modern and classical applications. The linear convergent result is established via an eigenvalue approximation lemma which might be of independent interest. Numerically, we confirm our theoretical findings that the spectral bundle method, for modern and classical applications, speeds up under these conditions. Finally, we show that the spectral bundle method combined with a recent matrix sketching technique is able to solve an SDP with billions of decision variables in a matter of minutes.
\section{Analysis}\label{sec: analysis} In this section, we present and derive our sublinear convergence guarantees for both Block-Spec and HR-Spec, as well as our improved local linear convergence for Block-Spec under the extra condition of strict complementarity (defined momentarily). Key structural lemmas for our proofs are discussed in Section \ref{sec: analtical conditon}. Next, in Section \ref{sec: Relating primal and dual convergence}, we describe the relationship between primal and dual convergence for bundle methods. From this, we are able to focus our analysis on dual convergence only and yet conclude convergence for both. Let us first define the standard conditions of strong duality and strict complementarity. \paragraph{Strong duality} Strong duality in this paper means that the solution sets $\xsolset$ and $\ysolset$ are nonempty and compact, and there is a solution pair $(\xsol,\ysol)\in \xsolset\times \ysolset$ such that the duality gap is zero: \[ \pval := \inprod{-C}{\xsol} = \inprod{-b}{\ysol}=:\dval. \] Note that we require $\xsolset$ and $\ysolset$ to be nonempty and compact instead of just $\pval =\dval$. Such a condition is ensured when Slater's condition holds for both \eqref{p} and \eqref{d}. Next, we spell out strict complementarity. \paragraph{Strict Complementarity} \cite[Definition 4]{alizadeh1997complementarity} A primal-dual solution pair $(\xsol,\ysol)$ with dual slack matrix $\zsol = C-\Ajmap(\ysol)$ is \emph{strictly complementary} if \[ \rank(\xsol)+\rank(\zsol)=\dm. \] If \eqref{p} and \eqref{d} admit such a pair, we say the pair \eqref{p} and \eqref{d} (or simply \eqref{p}) satisfies strict complementarity. Such a condition is satisfied for a generic SDP \cite{alizadeh1997complementarity} and also for many structured SDPs \cite{ding2020regularity}.
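As a concrete illustration (a hand-constructed $2\times 2$ example in standard form, ours rather than the paper's), both complementarity conditions can be checked numerically:

```python
import numpy as np

# Toy SDP (standard form, ignoring the paper's sign conventions):
#   minimize <C, X>  subject to  <A1, X> = 1,  X PSD,
# with C = diag(0, 1) and A1 = diag(1, 0). The optimal primal/dual pair is
# X* = diag(1, 0), y* = 0, with dual slack Z* = C - y* A1 = diag(0, 1).
C = np.diag([0.0, 1.0])
A1 = np.diag([1.0, 0.0])
X_star = np.diag([1.0, 0.0])
y_star = 0.0
Z_star = C - y_star * A1

# Complementary slackness and zero duality gap.
assert np.allclose(X_star @ Z_star, 0.0)
assert np.isclose(np.trace(C @ X_star), y_star * 1.0)

# Strict complementarity: rank(X*) + rank(Z*) equals the matrix dimension.
d = C.shape[0]
assert np.linalg.matrix_rank(X_star) + np.linalg.matrix_rank(Z_star) == d
```

Here the ranks $1+1=2$ fill the whole dimension; dropping the second diagonal entry of $C$ would instead give $Z^\star=0$ and break the rank condition while complementary slackness still holds.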
From these two conditions, we prove the following sublinear guarantees in Section \ref{sec: SublinearRates}\footnote{The proof of these rates relies on existing $\bigO \left(\frac{\log \frac{1}{\epsilon}}{\epsilon^{3}}\right)$ rates for the bundle method. Improvements in the underlying convergence theory for classic bundle methods would immediately give improvements in the convergence rates presented here.}. Denote $D_{\xsolset} :\,=\sup_{\xsol\in\xsolset} \nucnorm{\xsol}$, the largest nuclear norm over the primal solution set. \begin{theorem}\label{thm: sublinearates} Suppose strong duality holds. Given any $\beta\in(0,1)$, $\bar{r}\geq 1$, $\rho_t = \rho>0$, $\alpha \geq 2D_{\xsolset}$, and $z_0=y_0\in \RR^{\ncons}$, there is a constant $\kappa$, such that for any $\epsilon\in (0,\frac{1}{2})$, both Block-Spec and HR-Spec produce a solution pair $X_t$ and $y_t$ with \[ t \leq \kappa \frac{ \log (\frac{1}{\epsilon})}{\epsilon^3} \] that satisfies $F(y_t)-F(\ysol)\leq \epsilon$, and \begin{align*} \text{approximate primal feasibility: }& \quad\|b - \Amap X_{t}\|^2 \leq \epsilon, \quad X_{t}\succeq 0,\\ \text{approximate dual feasibility: }& \quad \lambda_{\min}(C - \Ajmap y_{t})\geq -\epsilon,\\ \text{approximate primal-dual optimality: }&\quad |\langle b, y_t \rangle - \langle C, X_t\rangle| \leq \sqrt{\epsilon}. \end{align*} Moreover, if strict complementarity holds, then the iteration number $t$ is bounded by \[ t\leq \kappa \frac{\log(\frac{1}{\epsilon})}{\epsilon}. \] \end{theorem} We improve on these results by showing Block-Spec locally converges linearly whenever both assumptions of strong duality and strict complementarity hold. Additionally, we require $\bar{r}\geq r_d$, where $r_d =\max_{\ysol\in \ysolset}\dim(\nullspace(Z(\ysol)))$, the largest dimension of the null space of the dual slack matrices. The linear convergence is local because it happens after $y_t$ is close to $\ysolset$.
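In practice one can monitor the three approximate conditions of the theorem directly; a minimal sketch (the function name and the list-of-matrices encoding of $\Amap$ are ours):

```python
import numpy as np

def residuals(C, As, b, X, y):
    # Primal feasibility ||b - A(X)||, dual feasibility lambda_min(C - A*(y)),
    # and the primal-dual gap |<b, y> - <C, X>|, where A(X)_i = <A_i, X> and
    # A*(y) = sum_i y_i A_i.
    AX = np.array([np.tensordot(Ai, X) for Ai in As])
    Z = C - sum(yi * Ai for yi, Ai in zip(y, As))
    return (np.linalg.norm(b - AX),
            np.linalg.eigvalsh(Z)[0],
            abs(b @ y - np.tensordot(C, X)))

# Example: an exactly optimal pair gives zero residuals and lambda_min >= 0.
C, As, b = np.diag([0.0, 1.0]), [np.diag([1.0, 0.0])], np.array([1.0])
pf, df, gap = residuals(C, As, b, np.diag([1.0, 0.0]), np.array([0.0]))
assert pf == 0.0 and df >= 0.0 and gap == 0.0
```

For iterates from the theorem, one would expect `pf` and `gap` of order $\sqrt{\epsilon}$ and `df` bounded below by $-\epsilon$.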
Upon a first read, one can treat the dual solution as unique to ease understanding. In this setting, $r_d = \dim(\nullspace(Z(\ysol)))$. We are now ready to state our improved convergence theorem, which is proven in Section \ref{sec: Proof of Local linear convergence of Block bundle method}. \begin{theorem}\label{thm: linear convergence of Block SBM under the extra condition strict complementarity} Suppose strong duality and strict complementarity hold. Then under proper selection of $\rho$ and $T_0>0$, for any $\beta\in [0,\frac{1}{2}]$ and $\bar{r}\geq r_d$, after at most $T_0$ many steps, Block-Spec will subsequently only take descent steps and converge linearly to an optimal solution. Consequently, there is a constant $\varkappa$, such that for any $\epsilon\in(0,\frac{1}{2})$, Block-Spec will produce a solution pair $X_t$ and $y_t$ with \[ t \leq T_0 + \varkappa \log (\frac{1}{\epsilon}) \] that satisfies $F(y_t)-F(\ysol)\leq \epsilon$, and \begin{align*} \text{approximate primal feasibility: }& \quad\|b - \Amap X_{t}\|^2 \leq \epsilon, \quad X_{t}\succeq 0,\\ \text{approximate dual feasibility: }& \quad \lambda_{\min}(C - \Ajmap y_{t})\geq -\epsilon,\\ \text{approximate primal-dual optimality: }&\quad |\langle b, y_t \rangle - \langle C, X_t\rangle| \leq \sqrt{\epsilon}. \end{align*} \end{theorem} The constants $\kappa$ and $\varkappa$ in Theorems \ref{thm: sublinearates} and \ref{thm: linear convergence of Block SBM under the extra condition strict complementarity} can depend on \eqref{p}, the choice of $\alpha,\beta,$ and $\rho$, and the initialization $X_0,z_0,y_0$, but do not depend on $\epsilon$. The choice of $T_0$ and $\rho$ can be found in Section \ref{sec: Proof of Local linear convergence of Block bundle method}. \subsection{Structural Lemmas}\label{sec: analtical conditon} We introduce a growth lemma that is central to our proof of the faster convergence rate.
We call $\gamma_1$ the \emph{quadratic growth parameter} when $\zeta_1=2$ in the following lemma. \begin{lemma}[Quadratic Growth]\cite[Section 4]{sturm2000error} \label{lem: qg} Suppose strong duality holds for \eqref{p} and \eqref{d}. Then there are some exponents $\zeta_1,\zeta_2>0$ such that for any fixed $\epsilon>0$, there are some $\gamma_1,\gamma_2>0$ such that for all $y$ with $F(y)\leq \inprod{-b}{\ysol}+ \epsilon$, and all $X\succeq 0$ with $|\inprod{C}{X}- \inprod{C}{\xsol}|\leq \epsilon$ and $\twonorm{\Amap{X}-b}\leq \epsilon$: \[ \dist^{\zeta_1}(y,\ysolset) \leq \gamma _1(F(y)-F(\ysol)),\quad \dist^{\zeta_2}(X,\xsolset)\leq \gamma_2\left(\abs{\inprod{C}{X}-\inprod{C}{\xsol}}+\twonorm{\Amap{X}-b}\right). \] If, in addition, strict complementarity holds for some primal-dual solution pair $(\xsol,\ysol)\in \xsolset\times \ysolset$, then $\zeta_1=\zeta_2=2$ regardless of $\epsilon$. \end{lemma} \begin{proof} The result in \cite[Section 4]{sturm2000error} requires the sets $S_1=\{y\mid F(y)\leq \inprod{-b}{\ysol}+ \epsilon\}$ and $S_2=\{X\mid X\succeq 0, |\inprod{C}{X}- \inprod{C}{\xsol}|\leq \epsilon,\,\text{and}\, \twonorm{\Amap{X}-b}\leq \epsilon\}$ to be compact. Using \cite[Theorem 7.21]{ruszczynski2006nonlinear}, the optimization problem $\min_{X\succeq 0} g(X):= \inprod{C}{X} +\gamma \twonorm{\Amap X-b}$ has the same solution set as the primal SDP \eqref{p} for some large $\gamma >0$. Thus the compactness of the sets $S_1$ and $S_2$ is ensured by Lemma \ref{lem: sublevelset} below, and the proof is completed. \end{proof} \begin{lemma}[Compact sublevel sets]\label{lem: sublevelset} If a convex lower semicontinuous function $f(x): \RR^\dm \rightarrow \RR\cup \{\infty\}$ has a compact nonempty solution set, then all of its sublevel sets are compact. \end{lemma} \begin{proof} Suppose that for some $L\in \RR$ the closed sublevel set $S_L = \{x\mid f(x)\leq L\}$ is unbounded.
Then there is a unit direction vector $\gamma \in \RR^{\dm}$ such that for all $x \in S_L$ and $\alpha \geq 0$, we have $x+\alpha \gamma \in S_L$. This in particular violates the fact that the solution set is bounded, and the proof is completed. \end{proof} \subsection{Relating primal and dual convergence} \label{sec: Relating primal and dual convergence} In this section, we bound the convergence measures of the iterates $X_t$ in \eqref{eq: X_tdefinition} and the dual iterates $y_t$ in terms of the convergence of $F(y_t)$. Specifically, we consider primal feasibility $X\succeq 0$ and $\Amap X=b$, dual feasibility $Z(y)\succeq 0$, and primal-dual optimality $|\inprod{C}{X}-\inprod{b}{y}|$. Such a reduction allows us to focus on the dual penalization objective $F(y_t)$ in the next two sections. To facilitate the presentation, we denote $ D_{\xsolset} :\,=\sup_{\xsol\in\xsolset} \nucnorm{\xsol}, $ and $ D_y \geq \sup_{F(y)\leq F(y_0)} \norm{y}. $ Also we introduce the shorthand $\bar{F}_t:\, = \bar{F}_{(V_t,\bar{X}_t)}$. Let us first deal with primal feasibility. \begin{lemma}[Primal Feasibility]\label{lem:primal-feas} At every descent step $t$, we have approximate primal feasibility $$ X_{t}\succeq 0, \quad\text{and}\quad \|b - \Amap X_{t}\|^2\leq \frac{2\rho_t}{\beta}(F(y_{t}) - F(\ysol )). $$ \end{lemma} \begin{proof} Recall that $X_{t} =\eta^\star_{t}\bar{X}_t+V_t S^\star _{t}V_t^\top$ in \eqref{eq: X_tdefinition}. Since $\eta^\star_t\geq 0$ and $S^\star_t\succeq 0$ by construction in \eqref{eq: subproblemAMSpectralbundleMethod}, $X_t$ is positive semidefinite. From the optimality of \eqref{eq: subproblemAMSpectralbundleMethod} and the descent step assumption, we obtain \begin{equation}\label{eq: linearFeasibilityEq1} -b + \Amap(X_t) = \rho_t (y_{t}-y_{t+1}). \end{equation} Hence $\twonorm{-b + \Amap(X_t)}^2 = \rho_t^2\twonorm{ y_{t}-y_{t+1}}^2$.
The distance traveled during any descent step can be bounded by the objective value gap as $$ \frac{\rho_t}{2}\|y_{t+1} - y_t\|^2 \leq F(y_{t})-\bar F_{t}(y_{t+1}) \leq \frac{F(y_{t})-F(y_{t+1})}{\beta} \leq \frac{F(y_{t})-F(\ysol)}{\beta}$$ where the first inequality uses the fact that $z_{t+1}$ minimizes $\bar F_{t}(\cdot)+\frac{\rho_t}{2}\|\cdot-y_t\|^2$ and the second inequality uses the definition of a descent step. Combining this with our feasibility bound completes our proof. \end{proof} Next, we consider dual feasibility. \begin{lemma}[Dual Feasibility]\label{lem:dual-feas} At every descent step $t$, with the choice $\alpha \geq 2 D_{\xsolset}$, we have approximate dual feasibility $$ \lambda_{\min}(C - \Ajmap y_{t+1})\geq \frac{-(F(y_{t}) - F(\ysol))}{D_{\xsolset}}.$$ \end{lemma} \begin{proof} Standard strong duality $(\inprod{C}{\xsol}=\inprod{b}{\ysol}\implies \inprod{\xsol}{Z(\ysol)}=0)$ shows that for any $\xsol\in\xsolset$, \begin{align*} \langle b, y_{t+1}-\ysol \rangle & = \langle \Amap\xsol, y_{t+1}-\ysol \rangle\\ & \leq \langle \xsol, \Ajmap(y_{t+1}-\ysol)\rangle = \langle \xsol, Z(\ysol)-Z(y_{t+1})\rangle\\ & \leq -\nucnorm{\xsol}\min\{\lambda_{\min}(C - \Ajmap y_{t+1}),0\}. \end{align*} Recalling our assumption that $\alpha \geq 2D_{\xsolset} \geq 2\nucnorm{\xsol}$ and that $\ysol$ indeed solves \eqref{eq: penaltySDP} yields the claimed feasibility bound \begin{align*} F(y_{t}) - F(\ysol) \geq F(y_{t+1}) - F(\ysol) & = \langle -b, y_{t+1}-\ysol \rangle -\alpha\min\{\lambda_{\min}(C - \Ajmap y_{t+1}),0\}\\ & \geq -\nucnorm{\xsol}\min\{\lambda_{\min}(C - \Ajmap y_{t+1}),0\}. \qedhere \end{align*} \end{proof} Finally, we consider primal-dual optimality.
\begin{lemma}[Primal-Dual Optimality]\label{lem:optimality} At every descent step $t$, with the choice $\alpha \geq 2 D_{\xsolset}$, we have approximate primal-dual optimality bounded above by $$\langle b, y_{t+1} \rangle - \langle C, X_t\rangle \leq \frac{\alpha}{D_{\xsolset}}(F(y_{t}) - F(\ysol)) + \sqrt{\frac{2\rho_t}{\beta}(F(y_{t}) - F(\ysol))}\ D_y$$ and below by $$ \langle b, y_{t+1} \rangle - \langle C, X_t\rangle \geq -\frac{1-\beta}{\beta}(F(y_{t}) - F(\ysol)) - \sqrt{\frac{2\rho_t}{\beta}(F(y_{t}) - F(\ysol))}\ D_y.$$ \end{lemma} \begin{proof} The standard duality analysis shows the primal-dual objective gap equals \begin{align*} \langle b, y_{t+1} \rangle - \langle C, X_t\rangle &= \langle \Amap X_t, y_{t+1} \rangle - \langle C, X_t\rangle + \langle b-\Amap X_t, y_{t+1}\rangle\\ &= \langle X_t, \Ajmap y_{t+1} - C\rangle + \langle b-\Amap X_t, y_{t+1}\rangle. \end{align*} Notice that the second term here is bounded above and below as $$ |\langle b-\Amap X_t, y_{t+1}\rangle| \leq \sqrt{\frac{2\rho_t}{\beta}(F(y_{t}) - F(\ysol))}\ \|y_{t+1}\|\leq \sqrt{\frac{2\rho_t}{\beta}(F(y_{t}) - F(\ysol))}\ D_y$$ by Lemma~\ref{lem:primal-feas}. Hence we only need to show that the first term also approaches zero (that is, we approach holding complementary slackness). An upper bound on this inner product follows from Lemma~\ref{lem:dual-feas} as $$ \langle X_t, \Ajmap y_{t+1} - C \rangle \leq -\nucnorm{X_t}\lambda_{\min}(C - \Ajmap y_{t+1}) \leq \frac{\nucnorm{X_t}(F(y_{t})-F(\ysol))}{D_{\xsolset}}. $$ Combining the above with $\tr(X_t) \leq \alpha$ by construction, we have $$ \langle b, y_{t+1} \rangle - \langle C, X_t\rangle \leq \frac{\alpha}{D_{\xsolset}}(F(y_{t}) - F(\ysol)) + \sqrt{\frac{2\rho_t}{\beta}(F(y_{t}) - F(\ysol))}\ D_y.
$$ A lower bound on this inner product follows as \begin{align*} \frac{1-\beta}{\beta}(F(y_{t}) - F(y_{t+1})) &\overset{(a)}{\geq} F(y_{t+1}) - \bar F_{t}(y_{t+1})\\ & \overset{(b)}{=} -\alpha\min\{\lambda_{\min}(C-\Ajmap y_{t+1}),0\} + \langle C, X_{t}\rangle -\langle \Amap X_{t}, y_{t+1}\rangle\\ & \geq \langle X_{t}, C - \Ajmap y_{t+1}\rangle, \end{align*} where step $(a)$ follows from the definition of a descent step, and step $(b)$ follows from the definition of $\bar{F}_t$ and the optimality of $z_{t+1}= y_{t+1}$ and $X_t$ in \eqref{eq: subproblemAMSpectralbundleMethod}. Hence \begin{align*} \langle b, y_{t+1} \rangle - \langle C, X_t\rangle &\geq -\frac{1-\beta}{\beta}(F(y_{t}) - F(\ysol)) - \sqrt{\frac{2\rho_t}{\beta}(F(y_{t}) - F(\ysol))}\ D_y.\qedhere \end{align*} \end{proof} \paragraph{Conclusion of the primal-dual convergence relation} In conclusion, for a $y_t$ from a descent step satisfying $F(y_t)-F(\ysol)\leq \epsilon$, and assuming $\rho$, $\beta$, and $y_0$ are $\epsilon$-independent, both primal feasibility (in terms of $\norm{b-\Amap X_t}$) and primal-dual optimality (in terms of $\langle b, y_{t+1} \rangle - \langle C, X_t\rangle$) are $\bigO(\sqrt{\epsilon})$, while dual feasibility (in terms of $\lambda_{\min}(C - \Ajmap y_{t+1})$) is $\bigO(\epsilon)$. \subsection{Proof of Theorem \ref{thm: sublinearates}}\label{sec: SublinearRates} We shall utilize results in \cite{Du2017,grimmer2019general,kiwiel2000efficiency}. These results state that a proximal bundle method produces a $y_t$ from a descent step satisfying $F(y_t)-F(\ysol)\leq \epsilon$ within $\bigO(\frac{\log (\frac{1}{\epsilon})}{\epsilon^3})$ steps for general $F$, and within $\bigO(\frac{\log (\frac{1}{\epsilon})}{\epsilon})$ steps when $F$ has quadratic growth, a consequence of strict complementarity as shown in Lemma \ref{lem: qg}.
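The descent/null-step mechanics driving these rates can be seen on a toy one-dimensional problem; the sketch below (entirely ours, with a crude grid search standing in for the subproblem solve) runs a proximal bundle method on $F(x)=|x|$ with the descent test $F(y_t)-F(z)\geq \beta\,(F(y_t)-\bar F_t(z))$:

```python
import numpy as np

f = abs                                   # objective F(x) = |x|, minimizer 0

def grad(x):
    # A subgradient of |x| (sign convention at 0 is arbitrary).
    return 1.0 if x >= 0 else -1.0

def prox_bundle(y0, rho=1.0, beta=0.5, iters=20):
    y = y0
    cuts = [(grad(y0), f(y0) - grad(y0) * y0)]     # cuts g*x + c <= f(x)
    for _ in range(iters):
        def model(x):                              # piecewise-linear model
            return max(g * x + c for g, c in cuts)
        # Minimize model(x) + (rho/2)(x - y)^2 by grid search (crude but
        # adequate in 1-D); this plays the role of the bundle subproblem.
        xs = np.linspace(y - 2, y + 2, 4001)
        z = min(xs, key=lambda x: model(x) + rho / 2 * (x - y) ** 2)
        if f(y) - f(z) >= beta * (f(y) - model(z)):
            y = z                                  # descent step: move center
        cuts.append((grad(z), f(z) - grad(z) * z)) # either way, refine model
    return y

assert abs(prox_bundle(3.0)) < 1e-2
```

Starting from $y_0=3$, the iterates take descent steps toward $0$ and then a null step adds the missing slope $-1$ cut, after which the center stays at the minimizer.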
Once we establish these convergence rates for Block-Spec and HR-Spec, Lemmas \ref{lem:primal-feas}, \ref{lem:dual-feas}, and \ref{lem:optimality} imply the results for primal feasibility, dual feasibility, and primal-dual optimality. Recall in Section \ref{sec: Block spectral bundle method}, we claimed the following inequalities \eqref{eq: subgradientAndAggregation}, \begin{equation*} \bar{F}_{\texttt{simple}}(y)\overset{(a)}{\leq} \bar{F}_{(V_{t+1},\bar{X}_{t+1})}(y) \overset{(b)}{\leq} F(y). \end{equation*} Here the inequality $(b)$ is satisfied by construction. Once we establish the inequality $(a)$, we can immediately employ the results in \cite{Du2017,grimmer2019general}\footnote{Both proofs actually only show convergence for a proximal bundle method with cut aggregation $\bar{F}_{\texttt{simple}}$ or the method with multiple yet \emph{finitely many} cuts. As remarked by the authors of \cite[Section 3]{Du2017}, ``All the results hold true for both versions, because the analysis of the method with multiple cuts uses the version with cut aggregation anyway,'' and a detailed check yields that \eqref{eq: subgradientAndAggregation} (which appears as the inequality (12) in \cite{Du2017}) is the only ingredient needed to prove the convergence rate, while the finiteness of the cuts is not needed. A future work \cite{DG2020} of the second author further confirms this fact.}. To avoid extra complications, we only show the inequalities for Block-Spec and defer the derivation for HR-Spec to Section \ref{sec: Model comparison for HR bundle method} in the appendix. Recall we write $\bar{F}_{t+1}:\,=\bar{F}_{(V_{t+1},\bar{X}_{t+1})}$ for short. Let us first define $\bar{F}_{\texttt{simple}}$. Denote $g_{t+1} \in \partial F(z_{t+1})$ and $s_{t+1}= -\rho_t (z_{t+1}-y_t)$.
The simplest model $\bar{F}_{\texttt{simple}}$ (for the bundle method with cut aggregation) in \cite[Section 2.2]{Du2017} for the $(t+1)$-th step, which is a maximum of two lower bounds based on the aggregation $s_{t+1}$ and a subgradient $g_{t+1}$, takes the form \begin{equation}\label{eq: simplemodel} \bar{F}_{\texttt{simple}}(y) = \max\{F(z_{t+1})+\inprod{g_{t+1}}{y-z_{t+1}}, \bar{F}_{t}(z_{t+1})+\inprod{s_{t+1}}{y-z_{t+1}}\}. \end{equation} By the optimality condition of \eqref{eq: subproblemAMSpectralbundleMethod} and the definition of $X_t,z_{t+1}$, we know that \begin{align} -b + \Amap(X_t) & = \rho_t (y_t-z_{t+1})=s_{t+1} \label{eq: optimalityConditionEqsubproblemAMSpectralbundleMethod}\\ \bar{F}_{t}(z_{t+1}) & = \inprod{-b}{z_{t+1}} +\inprod{X_t}{\Amap^*(z_{t+1})-C}.\label{eq: F_{t+1}(z_{t+1})} \end{align} Let $v_+$ be a top eigenvector of $\Amap^* z_{t+1}-C$ if $\lambda_{\max}(\Amap^* z_{t+1}-C)>0$, and let $v_+$ be zero otherwise. Using \eqref{eq: optimalityConditionEqsubproblemAMSpectralbundleMethod}, \eqref{eq: F_{t+1}(z_{t+1})}, and the definition of $v_+$, the simple model $\bar{F}_{\texttt{simple}}$ can be rewritten as \begin{equation}\label{eq: simplemodelLambdaform} \begin{aligned} \bar{F}_{\texttt{simple}}(y) & = \max\{ \inprod{-b}{y}+ \inprod{\alpha v_+v_+^\top}{\Amap^*y-C}, \inprod{-b}{y}+ \inprod{X_t}{\Amap^*y-C} \}. \end{aligned} \end{equation} Recall the $(t+1)$-th spectral set $\mathcal{W}_{t+1}$ in \eqref{eq: tthspectralset} is defined as \[ \mathcal{W}_{t+1}:=\{\eta \bar{X}_{t+1} + V_{t+1}SV_{t+1}^\top \mid \eta \geq 0,\;S\in \symMat_+^{r_{t+1}},\;\text{and}\;\eta\alpha + \tr(S)\leq \alpha \}, \] which is the constraint set of Problem \eqref{eq: subproblemAMSpectralbundleMethod}.
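The first piece of $\bar F_{\texttt{simple}}$ is the linearization of $F$ at $z_{t+1}$ generated by $\alpha v_+v_+^\top$; that it globally lower-bounds $F$ is just the subgradient inequality, which can be sanity-checked on random data (the setup and dimensions below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha = 5, 3, 2.0

def sym(M):
    return (M + M.T) / 2

As = [sym(rng.standard_normal((n, n))) for _ in range(m)]  # A_1, ..., A_m
C = sym(rng.standard_normal((n, n)))
b = rng.standard_normal(m)

def F(y):
    # F(y) = <-b, y> + alpha * max(lambda_max(A*(y) - C), 0)
    M = sum(yi * Ai for yi, Ai in zip(y, As)) - C
    return -b @ y + alpha * max(np.linalg.eigvalsh(M)[-1], 0.0)

def subgrad(z):
    # g = -b + A(alpha * v_+ v_+^T), with v_+ a top eigenvector of A*(z) - C
    # when lambda_max > 0 and v_+ = 0 otherwise.
    M = sum(zi * Ai for zi, Ai in zip(z, As)) - C
    w, V = np.linalg.eigh(M)
    v = V[:, -1] if w[-1] > 0 else np.zeros(n)
    W = alpha * np.outer(v, v)
    return -b + np.array([np.tensordot(Ai, W) for Ai in As])

# Subgradient inequality: the linearization at z lower-bounds F everywhere.
for _ in range(100):
    y, z = rng.standard_normal(m), rng.standard_normal(m)
    assert F(y) >= F(z) + subgrad(z) @ (y - z) - 1e-9
```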
We find that ${X}_t=\bar{X}_{t+1}$ (due to the update of Block-Spec) and the subgradient $\alpha v_+v_+^\top \in \partial \left(\alpha\max\{\lambda_{\max}(\Amap^* (z_{t+1})-C),0\}\right)$ are both in $\mathcal{W}_{t+1}$: \begin{equation}\label{eq: aggregatebarxinthenewsetAmnesiaMethod} \bar{X}_{t+1}={X}_t\in \mathcal{W}_{t+1}\quad\text{and}\quad \alpha v_+v_+^\top \in \mathcal{W}_{t+1}. \end{equation} Hence \eqref{eq: subgradientAndAggregation} is established, and our proof is complete. \subsection{Local linear convergence of Block bundle method}\label{sec: Proof of Local linear convergence of Block bundle method} First, we discuss estimates of $T_0$, $\rho$, and the contraction factor, as well as the condition $\bar{r}\geq r_d$ for local linear convergence. Let us introduce some notation to facilitate the discussion: the gap parameter $\delta := \inf_{\ysol \in \ysolset}\max_{r\leq r_d} \left(\lambda_{r}(-Z(\ysol ))-\lambda_{r+1}(-Z(\ysol))\right)$ and the quadratic growth parameter $\gamma>0$ from Lemma \ref{lem: qg}. The gap parameter $\delta$ is nonzero by the definition of $r_d$, the compactness of $\ysolset$, and the continuity of the function $\max_{r\leq r_d} \left(\lambda_{r}(-Z(\cdot ))-\lambda_{r+1}(-Z(\cdot))\right)$. When the dual solution is unique, we have $\delta = \lambda_{r_d}(-Z(\ysol))-\lambda_{r_d+1}(-Z(\ysol))$. \paragraph{Estimate of $T_0$, $\rho$, and the contraction factor} The number $T_0$ is chosen so that $Z(y_t)$ is $\frac{\delta}{3}$-close to the solution set $Z(\ysolset)=\{Z(\ysol)\mid \ysol \in \ysolset\}$. Using Theorem \ref{thm: sublinearates}, we know $T_0$ satisfies \begin{equation} \label{eq: T0estimate} T_0\leq \kappa \frac{\log\left(\frac{9\opnorm{\Amap^*}^2}{\delta^2\gamma}\right)}{\frac{\delta^2\gamma}{9\opnorm{\Amap^*}^2}}.
\end{equation} In our proof in Section \ref{sec: Proof of the quadratic accurate model}, the condition on the regularization parameter $\rho$ is that \begin{equation}\label{eq: rhoEstimate} \rho \geq 4\opnorm{\Amap^*}^2\max\{\frac{72\sup_{\ysol \in \ysolset}\opnorm{2Z(\ysol)}}{\delta^2},\frac{9(8\sqrt{2}+16)}{\delta}\}. \end{equation} The contraction quantity is the distance to the dual solution set, $\dist(y_t,\ysolset)$, and the contraction factor is $\sqrt{\frac{\rho}{2\gamma +\rho}}$; see \eqref{eq: approximationModelStep4} in Section \ref{sec: Linear convergence under a quadratic accurate model}. \paragraph{The condition $\bar{r}\geq r_d$} Modern applications \cite{candes2009exact,candes2013phaselift,recht2010guaranteed,ding2020regularity} of \eqref{p} actually show that $\xsol$ is unique, admits rank $\rsol :\,=\rank(\xsol)\ll \dm$, and satisfies strict complementarity under certain structural probabilistic assumptions. If, in addition, the dual solution is unique, then we only need $\bar{r}\geq r_d=\rsol$. Hence, the subproblem \eqref{eq: subproblemSpectralbundleMethod} can be solved efficiently for these problems. Also, $\bar{r}\geq \rsol$ should be required even from an eigenvalue computation perspective, as the bottom $\rsol$ eigenvalues of the slack $Z(y_t)$ start to coalesce once $y_t$ is close to $\ysolset$. Moreover, as we observe numerically in Section \ref{sec: numerics}, even if there are multiple dual solutions, $\bar{r}\geq \rsol $ is still enough to guarantee quick convergence, while $\bar{r} <\rsol$ leads to slow convergence. \paragraph{Proof outline} The proof is split into two steps. In Section \ref{sec: Linear convergence under a quadratic accurate model}, we first show that linear convergence can be proved under the claim that the model $\bar{F}_{(V_t,\bar{X}_t)}(z)$ is quadratically accurate; see \eqref{eq: quadraticAccurateModel} for the definition.
Next, in Section \ref{sec: Proof of the quadratic accurate model}, we show the claim indeed holds under strong duality, strict complementarity, and $\bar{r}\geq r_d$. \subsubsection{Linear convergence under a quadratic accurate model}\label{sec: Linear convergence under a quadratic accurate model} Let us first clarify the notion of a quadratically accurate model. Our model $\bar{F}_{t}(z):=\bar{F}_{(V_t,\bar{X}_t)}(z)$ is quadratically accurate if there is some $\rho>0$ (independent of $t$) such that for any $z\in \RR^{\ncons}$, \begin{equation}\label{eq: quadraticAccurateModel} \begin{aligned} \bar{F}_{t}(z)\leq F(z)\leq \bar{F}_{t}(z) + \frac{\rho}{2}\twonorm{z-y_t}^2. \end{aligned} \end{equation} Under this condition, the method with $\rho_t =\rho$ always takes a descent step for $\beta\leq \frac{1}{2}$. We verify that the model is quadratically accurate, i.e., that \eqref{eq: quadraticAccurateModel} holds, in the next section. Now suppose \eqref{eq: quadraticAccurateModel} is satisfied. We know the minimizer $z_{t}^\star$ of $\bar{F}_{t}(z) + \frac{\rho}{2}\twonorm{z-y_t}^2$ satisfies, for any $z\in \RR^\ncons$, \begin{equation}\label{eq: threePointInequality} \begin{aligned} \bar{F}_t(z_t^\star) +\frac{\rho}{2}\twonorm{z_t^\star-y_t}^2 + \frac{\rho}{2}\twonorm{z^\star_t-z}^2\leq \bar{F}_t(z) + \frac{\rho}{2}\twonorm{z- y_t}^2, \end{aligned} \end{equation} as $\bar{F}_t(z)+\frac{\rho}{2}\twonorm{z-y_t}^2$ is $\rho$-strongly convex. By setting $z=y_t$ and using \eqref{eq: quadraticAccurateModel}, we find that \begin{equation}\label{eq: approximationModelStep1} F(y_t)-\bar{F}_t(z_t^\star)\geq \rho \twonorm{z_t^\star-y_t}^2\geq 0,\quad \text{and}\quad \frac{\rho}{2}\twonorm{z_t^\star-y_t}^2 \leq F(y_t)-F(z_{t}^\star ).
\end{equation} Next, using \eqref{eq: quadraticAccurateModel} in the following step $(a)$ and \eqref{eq: approximationModelStep1} in the following step $(b)$, we find that \begin{equation}\label{eq: descentStepASBMLocalLinearConvergence} \begin{aligned} \beta \left(F(y_t)-\bar{F}_t(z_t^\star)\right)\leq \frac{1}{2}\left(F(y_t)-\bar{F}_t(z_t^\star)\right) & \overset{(a)}{\leq}\frac{1}{2}\left(F(y_t)-F(z_t^\star)\right)+\frac{\rho}{4}\twonorm{y_t-z_t^\star}^2\\ &\overset{(b)}{\leq} F(y_t)-F(z_{t}^\star). \end{aligned} \end{equation} Hence, we see the method will indeed take a descent step and $z_t^\star$ is in the sublevel set defined by $\{y\mid F(y)\leq F(y_0)\}$. By setting $z=\ysol$ for an arbitrary $\ysol\in \ysolset$ in \eqref{eq: threePointInequality}, and using \eqref{eq: quadraticAccurateModel}, we find that \begin{equation}\label{eq: approximationModelStep2} \begin{aligned} F(z^\star_t)\leq F(\ysol) +\frac{\rho}{2}\left(\twonorm{\ysol -y_t}^2-\twonorm{z_t^\star-\ysol}^2\right). \end{aligned} \end{equation} Now recall the quadratic growth of $F$ (derived from Lemma \ref{lem: qg}) that there is a $\gamma>0$ such that for all $z\in \{y\mid F(y)\leq F(y_0)\}$, \[ F(z)-F(\ysol)\geq \gamma \disttwonorm^2 (z,\ysolset). \] Hence combining this with \eqref{eq: approximationModelStep2}, we find that \begin{equation}\label{eq: approximationModelStep3} \begin{aligned} \gamma \disttwonorm^2 (z_t^\star,\ysolset) &\leq \frac{\rho}{2}\left(\twonorm{\ysol -y_t}^2-\twonorm{z_t^\star-\ysol}^2\right) \\ \implies \left(\gamma +\frac{\rho}{2}\right)\disttwonorm^2 (z_t^\star,\ysolset) & \leq \gamma \disttwonorm ^2(z^\star_t,\ysolset) + \frac{\rho}{2}\twonorm{z_t^\star-\ysol}^2\leq \frac{\rho}{2}\twonorm{\ysol -y_t}^2 \\ \implies \disttwonorm^2 (z^\star_t, \ysolset) & \leq \frac{\rho}{2\gamma +\rho} \twonorm{\ysol-y_t}^2. 
\end{aligned} \end{equation} Now, taking $\ysol$ to be the point in $\ysolset$ nearest to $y_t$, we find that \begin{equation}\label{eq: approximationModelStep4} \disttwonorm (z_t^\star,\ysolset) \leq \sqrt{\frac{\rho}{2\gamma +\rho}} \disttwonorm (y_t,\ysolset). \end{equation} This shows that the new iterate $y_{t+1}=z_{t+1}=z_t^\star$ (the method takes a descent step, as we just argued in \eqref{eq: descentStepASBMLocalLinearConvergence}) approaches the solution set $\ysolset$ geometrically with a factor of $\sqrt{\frac{\rho}{2\gamma +\rho}} <1$. Next we prove that the model assumption \eqref{eq: quadraticAccurateModel} indeed holds. \subsubsection{Proof of the quadratic accurate model} \label{sec: Proof of the quadratic accurate model} Without loss of generality, we may suppose our current $z_t$ is $y_t$. Let us first define the $r$-th spectral plus set of a matrix $X\in\symMat^{\dm}$ with $\lambda_{r}(X)-\lambda_{r+1}(X)>0$ as $\faceplus r(X):=\{VSV^{\top}\mid\tr(S)\leq1,S\succeq0,S\in\symMat^{r}\}.$ Here $V\in \RR^{\dm \times r}$ is the matrix formed by orthonormal eigenvectors of $X$ corresponding to its $r$ largest eigenvalues. We now utilize the following lemma, proved in Section \ref{sec: proofOfImportantLemma} in the appendix, to establish that our model $\bar{F}_{t}$ is quadratic accurate. \begin{lemma}\label{lem: importantLemmaQuadraticAccurateModel} Suppose $X\in\symMat^{\dm}$ has eigengap $\lambda_{r}(X)-\lambda_{r+1}(X)=\delta>0$, and denote $\Lambda_{r,\dm}=\max\{|\lambda_{r+1}(X)|,|\lambda_\dm (X)| \}$. Then for any $Y\in \symMat^{\dm}$, the quantity $f_X(Y):\,=\max\{\lambda_1(Y),0\}-\max_{W\in \faceplus r (X)}\inprod{W}{Y}$ satisfies \begin{equation} \begin{aligned} \label{eq: quadraticAccurateModelLemma} 0\leq f_X(Y) \leq \frac{8\fronorm{Y-X}^{2}\Lambda_{r,\dm}}{\delta^{2}}+\frac{(8\sqrt{2}+16)\fronorm{Y-X}^{2}}{\delta}.
\end{aligned} \end{equation} \end{lemma} \paragraph{Unique solution case} Let us first suppose the dual solution $\ysol$ is unique. Suppose $y_t$ is close enough to $\ysol$ that $\opnorm{Z(y_t)-\zsol}\leq \frac{\delta}{3}$, where $\delta$ is the $r_d$-th eigengap of $-\zsol$, i.e., $\delta = \lambda_{r_d}(-\zsol)-\lambda_{r_d+1}(-\zsol)>0$. Then from Weyl's inequality, we know the $r_d$-th eigengap of $-Z(y_t)$, $\lambda_{r_d}(-Z(y_t))-\lambda_{r_d+1}(-Z(y_t))$, is at least $\frac{\delta}{3}$, and $\opnorm{Z(y_t)}\leq 2\opnorm{\zsol}$. Denote by $V\in \RR^{\dm \times r_d}$ the matrix formed by orthonormal eigenvectors corresponding to the $r_d$ largest eigenvalues of $-Z(y_t)$. Hence we find that \begin{equation} \begin{aligned} F(y)-\bar{F}_t(y)& =\alpha \max\{\lambda_{\max}(-Z(y)),0\} - \max_{\eta \alpha +\tr(S)\leq \alpha , \eta \geq 0, S\in \symMat_+^{\bar{r}}}\inprod{\eta\bar{X}_t+V_tSV_t^\top}{-Z(y)} \\ &\overset{(a)}{\leq } \alpha \max\{\lambda_{\max}(-Z(y)),0\} - \max_{\tr(S)\leq \alpha, S\in \symMat_+^{r_d}}\inprod{VSV^\top}{-Z(y)} \\ & = \alpha \left( \max\{\lambda_{\max}(-Z(y)),0\} - \max_{W\in \faceplus {r_d} (-Z(y_t))}\inprod{W}{-Z(y)} \right) \\ & \overset{(b)}{\leq } \frac{72\fronorm{Z(y_t)-Z(y)}^{2}\opnorm{2\zsol}}{\delta^{2}}+\frac{9(8\sqrt{2}+16)\fronorm{Z(y_t)-Z(y)}^{2}}{\delta}\\ &\leq 2\opnorm{\Amap^*}^2\max\{\frac{72\opnorm{2\zsol}}{\delta^2},\frac{9(8\sqrt{2}+16)}{\delta}\}\twonorm{y-y_t}^2. \end{aligned} \end{equation} Here $(a)$ holds because $\bar{r} \geq r_d$ by assumption, and $(b)$ is due to Lemma \ref{lem: importantLemmaQuadraticAccurateModel}. Combining this with the fact that $F(y)\geq \bar{F}_t(y)$, as $\bar{F}_t$ is a lower bound on $F$ by construction, we see the model $\bar{F}_t$ is indeed quadratic accurate with $\rho = 4\opnorm{\Amap^*}^2\max\{\frac{72\opnorm{2\zsol}}{\delta^2},\frac{9(8\sqrt{2}+16)}{\delta}\}$ in \eqref{eq: quadraticAccurateModel}. \paragraph{Multiple dual solutions case} What if $\ysolset$ has multiple points?
Recall we define $\delta$ as \begin{equation} \begin{aligned} \label{eq: definitionOfdelta} \delta = \inf_{\ysol \in \ysolset}\max_{r\leq r_d} \lambda_{r}(-Z(\ysol))-\lambda_{r+1}(-Z(\ysol)). \end{aligned} \end{equation} The gap $\delta$ is positive due to the definition of $r_d$, the compactness of $\ysolset$, and the continuity of the function $y\mapsto \max_{r\leq r_d} \big(\lambda_{r}(-Z(y))-\lambda_{r+1}(-Z(y))\big)$. Hence, if $\dist(Z(y_t),Z(\ysolset))$ is less than a third of $\delta$, then there are an $r\leq r_d$ and a $\ysol\in \ysolset$ such that $-Z(\ysol)$ is no more than $\frac{\delta}{3}$ away from $-Z(y_t)$ and satisfies $\lambda_{r}(-Z(\ysol))-\lambda_{r+1}(-Z(\ysol))\geq \delta$. Hence we can repeat the previous argument for the case of a unique dual solution and replace $r_d$ and $\opnorm{\zsol}$ by $r$ and $\sup_{\ysol \in \ysolset}\opnorm{Z(\ysol)}$ respectively. We thus see that the model $\bar{F}_t$ is indeed quadratic accurate in \eqref{eq: quadraticAccurateModel} with $ \rho =4\opnorm{\Amap^*}^2\max\{\frac{72\sup_{\ysol \in \ysolset}\opnorm{2Z(\ysol)}}{\delta^2},\frac{9(8\sqrt{2}+16)}{\delta}\} $ as stated in \eqref{eq: rhoEstimate}. \section{Defining Spectral Bundle Methods} \label{sec:def} In this section, we lay out the main ideas of spectral bundle methods in Section \ref{sec: Ingredients of Spectral bundle method}. In Section \ref{sec: SpectralbundleMethodExactDefinition}, we first present our Block spectral bundle method, which we believe is notationally simpler, and then the HR spectral bundle method. We compare their differences in Section \ref{sec: A discussion on the differences of FC and Blockbundle methods}, and conclude the section with a discussion of solving a common small-scale subproblem of Block-Spec and HR-Spec in Section \ref{sec: ImportantsubproblemSolver}.
\subsection{Ingredients of Spectral bundle method}\label{sec: Ingredients of Spectral bundle method} In this section, we describe three main ingredients of the spectral bundle method: the spectral lower bound, regularization, and aggregation. \subsubsection{Subgradient lower bound and spectral lower bound}\label{sec: Subgradient lower bound and spectral lower bound} Let us first recall the subgradient lower bound used by bundle methods~\cite[Section 7.4]{ruszczynski2011nonlinear}. \paragraph{Subgradient lower bound} For a convex function $F$, the subdifferential at a point $y$ is the set $\partial F(y) = \{g\in\RR^\ncons \mid F(y') \geq F(y)+\langle g, y'-y\rangle \text{ for all } y' \}$; each vector $g\in \partial F(y)$ is referred to as a subgradient and defines an affine minorant of $F$. The bundle method builds a model function $\bar{F}(y)$ of $F$ via a maximum of such minorants of $F$ at certain points $\mathcal{Y}_I = \{y_i\}_{i=1}^{|I|}$: \[ \bar{F}_{\mathcal{Y}_I}(y) :\,= \max_{i=1,\dots,|I|}\left( F(y_i)+\inprod{g_i}{y-y_i}\right), \quad g_i\in\partial F(y_i). \] If we now substitute the special form of $F$ in \eqref{eq: penaltySDP}, we find that $\bar{F}_{\mathcal{Y}_I}(y)$ can be rewritten as \begin{equation}\label{eq: barFfiniteeigenvectors} \bar{F}_{\mathcal{Y}_I}(y) :\,= \max_{i=1,\dots,|I|}\inprod{-b}{y}+\inprod{\alpha v_iv_i^\top}{\Amap^*y-C} \end{equation} where each $v_i,\;i=1,\dots,|I|$, is a top eigenvector of $\Amap^*y_i-C$ if $\Amap^*y_i-C$ has a positive eigenvalue and is $0$ otherwise. \paragraph{Spectral lower bound} The key idea of the spectral bundle method is to maximize over \emph{infinitely} many lower minorants indexed by an SDP representable set, which yields what might be called a spectral lower bound. To state the idea more precisely, first note that for any $Z\in \symMat^\dm$: \[ \max \{\lambda_{\max}(Z),0\} = \max_{\inprod{X}{I}\leq 1,X\succeq 0}\inprod{X}{Z}.
\] Hence we may rewrite $F$ as \begin{equation}\label{eqn: dualpenaltyobjectiverewrite} F(y) = \max_{\inprod{X}{I}\leq \alpha , X\succeq 0} \inprod{-b}{y} +\inprod{X}{\Amap^*y-C}. \end{equation} Of course, this form is no easier to solve than the original penalized form \eqref{eq: penaltySDP}. However, we can replace the constraint set $\{\inprod{X}{I}\leq \alpha , X\succeq 0\}$ by a smaller convex set. One choice is to compute a matrix $V\in \RR^{\dm\times r}$, for some small value $r$, with orthonormal columns, i.e., $V^\top V= I\in \RR^{r\times r}$. Then we form the model based on $V$: \begin{equation}\label{eqn: modelFoverV} \bar{F}_{V}(y) :\,= \max_{\inprod{S}{I}\leq \alpha , S\in \symMat_+^r} \inprod{-b}{y} +\inprod{VSV^\top}{\Amap^*y-C}. \end{equation} This function $\bar{F}_{V}$ serves as a better approximation of $F$ than $\bar{F}_{\mathcal{Y}_I}$ in \eqref{eq: barFfiniteeigenvectors} so long as the column span of $V$ contains the span of $v_1$ to $v_{|I|}$. How should we choose $V$? The most natural choice is to let the columns of $V$ be the top $r$ orthonormal eigenvectors of $\Amap^*y-C$ for our current iterate $y$. This is in fact the main idea of the block spectral bundle method. However, the choice of $V$ for the HR spectral bundle method is more complicated. We detail the procedures for choosing $V$ in Section \ref{sec: Block spectral bundle method} for Block-Spec and Section \ref{sec: HR spectral bundle method} for HR-Spec. \subsubsection{Aggregation and Regularization} Having explained the key idea of the spectral bundle method, we now explain two ingredients of the proximal bundle method \cite{Du2017} that are also effective for the spectral bundle method: regularization and aggregation. \paragraph{Regularization} Directly minimizing the model $\bar{F}_V$ in \eqref{eqn: modelFoverV} in each iteration would correspond to a cutting plane method, which is known to be unstable.
Hence the approach used by bundle methods is to add a regularization term $\frac{\rho}{2}\norm{y-\hat{y}}^2$ to \eqref{eqn: modelFoverV}, where $\hat{y}$ is a reference point and $\rho>0$ is a regularization parameter: \[ \tilde{F}_{V,\hat{y},\rho}(y) := \bar{F}_V(y) + \frac{\rho}{2}\norm{y-\hat{y}}^2. \] Here, the regularization parameter $\rho$ can be considered as an inverse step size: the larger the $\rho$, the smaller the step size. \paragraph{Aggregation} We now describe the final ingredient: aggregation. While running a bundle method, rather than storing all the past subgradient matrices $\alpha v_iv_i^\top$ for the model $\bar F$, we can collect a weighted average of these rank-one matrices from past iterates as a single matrix $\bar{X}$ such that \[ \bar{X}\in \symMat_+^\dm, \quad \text{and}\quad \nucnorm{\bar{X}}=\tr(\bar{X})\leq \alpha. \] We then build the model using this matrix $\bar{X}$ along with a matrix $V\in \RR^{\dm \times r}$ with orthonormal columns: \begin{equation}\label{eq: modelaggreagte} \bar{F}_{(V,\bar{X})}(y) :\,= \max_{\eta \alpha+\tr(S)\leq \alpha , \eta \geq 0, S\in \symMat_+^r} \inprod{-b}{y} +\inprod{\eta\bar{X}+VSV^\top}{\Amap^*y-C}. \end{equation} The regularized version with a reference point is then \begin{equation}\label{eq: modelregularizedfull} \tilde{F}_{(V,\bar{X},\hat{y},\rho)}(y) := \bar{F}_{(V,\bar{X})}(y) + \frac{\rho}{2}\norm{y-\hat{y}}^2. \end{equation} We specify the exact choice of $\bar{X}$ required by each spectral bundle method in the next section. \subsection{The Spectral bundle Methods}\label{sec: SpectralbundleMethodExactDefinition} Having seen the three ingredients, we define the spectral bundle methods in this section. We first present Block-Spec, due to its simpler form, and then HR-Spec. \subsubsection{Block spectral bundle method}\label{sec: Block spectral bundle method} Block-Spec starts with two initial iterates $z_0=y_0\in \RR^\ncons$.
Here $z_0$ is the always-exploring iterate and $y_0$ is the reference iterate, which does not move unless the model $\bar{F}_{V,\bar{X}}$ in \eqref{eq: modelaggreagte} is accurate enough (see step 2 below). We also initialize a primal iterate $\bar{X}_0=0$, an integer $\bar{r}>0$, and a matrix $V_0 \in \RR^{\dm \times \bar{r}}$ which is the collection of top $\bar{r}$ orthonormal eigenvectors of $\Amap^*z_{0}-C$. Finally, we pick a fraction $\beta \in (0,1)$ which is used to judge whether the quality of the current model $\bar{F}_{V,\bar{X}}$ is satisfactory or not. Block-Spec iterates the following: \begin{enumerate} \item \textbf{Solving the regularized model:} Pick $\rho_t>0$ and solve $\min_{z} \;\tilde{F}_{(V_t,\bar{X}_t,y_t,\rho_t)} (z)$. Concretely, the minimization problem is \begin{equation}\label{eq: subproblemAMSpectralbundleMethod} \min_{z} \quad \max_{\eta \alpha +\tr(S)\leq \alpha , \eta \geq 0,S\in \symMat_+^{\bar{r}}} \inprod{-b}{z} +\inprod{\eta\bar{X}_t+V_t S V_t^\top}{\Amap^*z-C}+\frac{\rho_t}{2}\norm{z-y_t}^2. \end{equation} Obtain $z^\star_{t}, \eta^\star_{t}$, and $S^\star_{t}$ solving the minimax problem above. Define $z_{t+1}= z^\star_t$ and \begin{equation} \label{eq: X_tdefinition} X_t= \eta^\star_{t}\bar{X}_t+V_t S^\star _{t}V_t^\top. \end{equation} \item \textbf{Testing the model based on objective value decrease:} If $F(z_{t+1}) \leq F(y_t) - \beta\left(F(y_t) - \bar{F}_{(V_t,\bar{X}_t)}(z_{t+1})\right)$, then we set $y_{t+1} = z_{t+1}$. In this case, we call this step a \emph{descent step}. Otherwise, we set $y_{t+1} = y_{t}$, and this step is called a \emph{null step}. \item \textbf{Aggregation:} We update the aggregate matrix $\bar{X}_t$ by \begin{equation}\label{eq: aggregationUpdateOurMethod} \bar{X}_{t+1} = X_t = \eta^\star_t\bar{X}_t+V_t S^\star_t V_t^\top.
\end{equation} \item \textbf{Update the bundle:} We update $V_t$ by computing the top $\bar{r}$ orthonormal eigenvectors $v_1,\dots,v_{\bar{r}}$ of $\Amap^*z_{t+1}-C$, and set \begin{equation} V_{t+1}=[v_1,\dots ,v_{\bar{r}}]. \end{equation} \end{enumerate} Notice that we have not yet specified how to find $\eta_t^\star$, $S_t^\star$, and $z^\star_t$, nor the choice of $\rho_t$. We describe a way of solving the minimax subproblem \eqref{eq: subproblemAMSpectralbundleMethod} in Section \ref{sec: ImportantsubproblemSolver}. Theorems \ref{thm: sublinearates} and \ref{thm: linear convergence of Block SBM under the extra condition strict complementarity} in Section \ref{sec: analysis} guide us to set $\rho_t$ to be a constant. To avoid extra computation in the evaluation of $\bar{F}_{(V_t,\bar{X}_t)}$ in step 2, we note that $ \bar{F}_{(V_t,\bar{X}_t)}(z_{t+1}) = \tilde{F}_{(V_t,\bar{X}_t,y_t,\rho_t)}(z_{t+1})-\frac{\rho_t}{2}\twonorm{z_{t+1}-y_t}^2. $ Finally, we remark on an important property of Block-Spec: the new model $\bar{F}_{(V_{t+1},\bar{X}_{t+1})}$ lower bounds $F(y)$ by construction, and is always better than the simple model $\bar{F}_{\texttt{simple}}$, defined in \eqref{eq: simplemodel}, of the bundle method with cut aggregation \cite[Section 2.2]{Du2017}: for all $y\in \RR^{\ncons}$ \begin{equation}\label{eq: subgradientAndAggregation} \bar{F}_{\texttt{simple}}(y)\leq \bar{F}_{(V_{t+1},\bar{X}_{t+1})}(y)\leq F(y). \end{equation} We prove this inequality in Section \ref{sec: SublinearRates}; it is the key insight for establishing the sublinear convergence rates. \subsubsection{HR spectral bundle method}\label{sec: HR spectral bundle method} The HR bundle method proceeds in the same way as the Block bundle method in the first two steps, but differs in the third and fourth steps due to the extra complexity of aggregation and orthogonalization. \begin{enumerate} \item [3'.]
\textbf{Aggregation:} We perform an eigenvalue decomposition $S^\star_t= Q_1\Lambda_1 Q_1^\top + Q_2 \Lambda_2 Q_2^\top$, where $\Lambda_1$ consists of the largest $\bar{r}-1$ eigenvalues and $\Lambda_2$ consists of the remaining eigenvalues. We now update $\bar{X}_t$ by \begin{equation}\label{eqn: updatebarX_t+1FC} \bar{X}_{t+1}= \eta^\star_t \bar{X}_t +V_tQ_2\Lambda_2Q_2^\top V_t^\top. \end{equation} The rationale behind this update is that we keep the important information in the matrix $V_t$, which can improve the accuracy of the model $\bar{F}$ in the next round. \item [4'.] \textbf{Update the bundle:} Next we update $V_t$. We first compute a top eigenvector $v_{t+1}$ of $\Amap^*z_{t+1}-C$. Finally, we set $V_{t+1}$ to consist of an orthonormal basis of the column space of $[V_tQ_1,v_{t+1}]$, e.g., using QR factorization: \begin{equation}\label{eqn: updateVFC} QR = [V_tQ_1,v_{t+1}],\quad V_{t+1}=Q. \end{equation} \end{enumerate} In step 3', we could instead keep only $r$ eigenvectors with $0\leq r\leq \bar{r}-1$ in $\Lambda_1$. Such a choice reduces the size of the variable $S$ in \eqref{eq: subproblemAMSpectralbundleMethod} to $\symMat^{r+1}$. The inequality \eqref{eq: subgradientAndAggregation} continues to hold for the HR bundle method even under this choice of $r$. We defer the detailed derivation to Section \ref{sec: Model comparison for HR bundle method} in the appendix. \subsubsection{A discussion on the differences of HR and Block bundle methods}\label{sec: A discussion on the differences of FC and Blockbundle methods} Here we discuss the difference between the two spectral bundle methods in updating the matrix $V$. Denote the current iteration as $T>1$. \begin{enumerate} \item HR-Spec computes a matrix $V$ using past top eigenvectors $\{v_t\}_{t=1}^{T}$ of the iterates $\{\Amap^*y_t-C\}_{t=1}^{T}$. The orthogonalization step ensures that the trace of $VSV^\top$ is exactly the same as that of $S$.
\item Block-Spec instead uses the current iterate $z_T$ and computes $V$ as a block of top $\bar{r}$ eigenvectors of $\Amap^*z_T-C$ without referencing any past eigenvectors, hence the name block. \end{enumerate} Despite the simple form of Block-Spec, the block procedure is advantageous when evaluating top eigenvectors becomes harder, namely when $y_T$ is close to the solution set $\ysolset$ and the eigenvalues of $Z(y_T)$ begin coalescing. In such a setting, it is more computationally efficient to compute a block of eigenvectors instead of just the top one. In fact, a block Lanczos method is actually employed in the original paper \cite{helmberg2000spectral} for computing eigenvectors, even though only one is needed. \paragraph{A hybrid approach} We note it is possible to combine these two ideas when forming $V$. For example, one could update the aggregation $\bar{X}$ according to \eqref{eqn: updatebarX_t+1FC}. Then introduce the top $\bar{r}$ eigenvectors $v_1,\dots,v_{\bar{r}}$ of $\Amap^*z-C$ and update $V$ using the following computation: \begin{equation}\label{eq: combinedupdateMethod} QR = [VQ_1, v_{1},\dots,v_{\bar{r}}],\quad V=Q. \end{equation} Here $Q$ and $R$ stand for the factors of the QR factorization. Of course, in this case, we need to be able to store $2\bar{r}$ many $\dm$-dimensional vectors. The properties described in \eqref{eq: subgradientAndAggregation} continue to hold. Since the model $\bar{F}_{(V,\bar{X})}$ is better than the models in the HR and Block bundle methods, the theoretical results displayed in Section \ref{sec: analysis} continue to hold for this hybrid approach. \subsubsection{Solving the minimax subproblem \eqref{eq: subproblemAMSpectralbundleMethod}}\label{sec: ImportantsubproblemSolver} The last step in fully specifying the spectral bundle methods is to discuss how to solve the minimax subproblem \eqref{eq: subproblemAMSpectralbundleMethod}.
Define the $t$-th spectral set $\mathcal{W}_t$ as \begin{equation}\label{eq: tthspectralset} \mathcal{W}_t = \{\eta \bar{X}_{t} + V_{t}SV_{t}^\top \mid \eta \geq 0,\;S\in \symMat_+^{\bar{r}},\;\text{and}\;\eta \alpha + \tr(S)\leq \alpha \}. \end{equation} Hence we may rewrite the minimax problem \eqref{eq: subproblemAMSpectralbundleMethod} as \begin{equation}\label{eq: subproblemSpectralbundleMethod} \min_{z} \max_{X\in \mathcal{W}_t} \inprod{-b}{z} +\inprod{X}{\Amap^*z-C}+\frac{\rho_t}{2}\norm{z-y_t}^2. \end{equation} Using the strong duality of the above problem, we can interchange the min and max and obtain \begin{equation}\label{eq: subproblemSpectralbundleMethodsolveStep1} \begin{aligned} &\min_{z} \max_{X\in \mathcal{W}_t} \inprod{-b}{z} +\inprod{X}{\Amap^*z-C}+\frac{\rho_t}{2}\norm{z-y_t}^2\\ = &\max_{X\in \mathcal{W}_t} \min_{z} \inprod{-b}{z} +\inprod{X}{\Amap^*z-C}+\frac{\rho_t}{2}\norm{z-y_t}^2. \end{aligned} \end{equation} By completing the square, we find that the inner minimum is achieved only at $z =y_t + \frac{1}{\rho_t}\left(b-\Amap X\right),$ and the minimax problem reduces to \begin{equation}\label{eq: subproblemSpectralbundleMethodsolveStep2} \begin{aligned} &\max_{X\in \mathcal{W}_t} \min_{z} \inprod{-b}{z} +\inprod{X}{\Amap^*z-C}+\frac{\rho_t}{2}\norm{z-y_t}^2\\ =&\max_{X\in \mathcal{W}_t} \inprod{-b}{y_t} +\inprod{X}{\Amap^*y_t-C} -\frac{1}{2\rho_t}\twonorm{b-\Amap X}^2 \\ = &-\min_{X\in \mathcal{W}_t} \inprod{b}{y_t} +\inprod{X}{C-\Amap^*y_t} +\frac{1}{2\rho_t}\twonorm{b-\Amap X}^2. \end{aligned} \end{equation} Interestingly, the last minimization problem in \eqref{eq: subproblemSpectralbundleMethodsolveStep2} is the augmented Lagrangian problem of Problem \eqref{p} with the decision variable $X$ restricted to $\mathcal{W}_{t}$ instead of $\symMat_+^\dm$. So in essence, spectral bundle methods are secretly solving an augmented Lagrangian problem during their iterations.
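As a sanity check (not part of the original development), the completing-the-square step is easy to verify numerically. The following minimal Python/numpy sketch uses random data; the list \texttt{A\_ops} of symmetric matrices representing the map $\Amap$ is an illustrative assumption. It checks that $z = y_t + \frac{1}{\rho_t}(b-\Amap X)$ zeroes the gradient of the inner objective and attains the reduced value appearing in \eqref{eq: subproblemSpectralbundleMethodsolveStep2}.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, rho = 6, 4, 2.0
sym = lambda M: (M + M.T) / 2

A_ops = [sym(rng.standard_normal((n, n))) for _ in range(m)]   # matrices representing the map A
C = sym(rng.standard_normal((n, n)))
b = rng.standard_normal(m)
y_t = rng.standard_normal(m)
G = rng.standard_normal((n, n))
X = G @ G.T                                                    # a PSD stand-in for a point of W_t

A_of_X = np.array([np.tensordot(Ai, X) for Ai in A_ops])       # A(X), Frobenius inner products
A_adj = lambda z: sum(zi * Ai for zi, Ai in zip(z, A_ops))     # A^* z

def phi(z):
    """Inner objective <-b, z> + <X, A^* z - C> + (rho/2) ||z - y_t||^2."""
    return -b @ z + np.tensordot(X, A_adj(z) - C) + 0.5 * rho * np.sum((z - y_t) ** 2)

z_star = y_t + (b - A_of_X) / rho   # claimed closed-form minimizer

# the gradient -b + A(X) + rho (z - y_t) vanishes at z_star, and the optimal
# value matches the reduced objective from completing the square
grad = -b + A_of_X + rho * (z_star - y_t)
reduced = -b @ y_t + np.tensordot(X, A_adj(y_t) - C) - np.sum((b - A_of_X) ** 2) / (2 * rho)
```

Since $\phi$ is $\rho$-strongly convex, a vanishing gradient certifies that $z^\star$ is the unique minimizer.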
To solve the augmented Lagrangian problem in \eqref{eq: subproblemSpectralbundleMethodsolveStep2}, we may further rewrite it as \begin{equation}\label{eq: subproblemSpectralbundleMethodsolveStep3} \begin{aligned} \min_{(\eta,S)\in \mathcal{S}_t} f_t(\eta,S), \end{aligned} \end{equation} where $f_t(\eta,S)$ and $\mathcal{S}_t$ are defined as \begin{equation}\label{eq: subproblemSpectralbundleMethodsolveStep4} \begin{aligned} f_t(\eta,S) &:\;=\inprod{b}{y_t} +\inprod{\eta \bar{X}_t+V_tSV_t^\top}{C-\Amap^*y_t} +\frac{1}{2\rho_t}\twonorm{b-\Amap \left( \eta \bar{X}_t+V_tSV_t^\top \right)}^2, \\ \mathcal{S}_t & := \{(\eta,S)\mid S\succeq 0,\;\eta \geq 0,\;\tr(S)+\alpha\eta \leq \alpha\}. \end{aligned} \end{equation} \paragraph{Solving \eqref{eq: subproblemSpectralbundleMethodsolveStep3}} The gradient of $f_t$ is easy to compute, and the projection onto the constraint set $\mathcal{S}_t$ (after a proper scaling) can be done with time complexity $\bigO(\bar{r}^3)$ (see Section \ref{sec: projecttoS_t} in the appendix for details). Hence, we may use the accelerated projected gradient method~\cite{nesterov2013introductory}. Alternatively, we may use the interior point method described in \cite[Section 6]{helmberg2000spectral}, which has at least $\bigO({\bar{r}}^6)$ time complexity due to inverting an ${\bar{r}}^2 \times {\bar{r}}^2$ matrix. Moreover, in the special case $\bar{r}=1$, we have only two variables $(\eta,S)\in \RR^{2}$ with constraints $S\geq 0$, $\eta\geq 0$, and $S+\alpha\eta \leq \alpha$, and a quadratic objective $f_t$. Explicit formulas for the optimal $S_t^\star$ and $\eta_t^\star$ can be derived easily to avoid numerical optimization in this situation.
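To illustrate the $\bar{r}=1$ case concretely, the subproblem is a two-dimensional convex quadratic over a triangle. The sketch below is an assumption-laden illustration: a hypothetical positive-definite quadratic stands in for $f_t$, and the rescaling $u=\alpha\eta$ turns the feasible set into the triangle $\{s\geq 0,\,u\geq 0,\,s+u\leq\alpha\}$, onto which Euclidean projection has a simple closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 2.0

# A hypothetical strongly convex quadratic standing in for f_t,
# in the rescaled variables x = (s, u) with u = alpha * eta.
M = rng.standard_normal((2, 2))
H = M @ M.T + np.eye(2)          # positive definite Hessian
g = rng.standard_normal(2)
f = lambda x: 0.5 * x @ H @ x + g @ x

def proj(p):
    """Euclidean projection onto the triangle {x >= 0, x[0] + x[1] <= alpha}."""
    x = np.maximum(p, 0.0)
    if x.sum() <= alpha:
        return x
    t = (p.sum() - alpha) / 2.0          # project onto the line x[0] + x[1] = alpha
    x = np.maximum(p - t, 0.0)
    if x.sum() > alpha:                  # line projection left the segment: take the endpoint
        x = np.where(p - t > 0.0, alpha, 0.0)
    return x

L = np.linalg.eigvalsh(H).max()          # gradient Lipschitz constant
x = np.zeros(2)
for _ in range(2000):                    # projected gradient descent with step 1/L
    x = proj(x - (H @ x + g) / L)

# sanity check against a fine grid over the triangle
grid = [np.array([a, c]) for a in np.linspace(0, alpha, 150)
        for c in np.linspace(0, alpha, 150) if a + c <= alpha]
```

For such a tiny problem one would of course use the explicit KKT formulas mentioned above; the projected gradient loop is shown only because the same projection (in its $\bigO(\bar{r}^3)$ spectral form) drives the general $\bar{r}>1$ solver.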
\paragraph{Storage concerns} We note that, just for the purpose of computing $S^\star_t$ and $\eta^\star_t$, one need not store $\bar{X}_t$ but only $\Amap(\bar{X}_t)$ and $\inprod{C}{\bar{X}_t}$, as we may write $f_t$ as \[ f_t(\eta,S) =\inprod{b}{y_t} +\eta\left(\inprod{C}{\bar{X}_t}-\inprod{\Amap(\bar{X}_t)}{y_t}\right) +\inprod{C}{V_tSV_t^\top}-\inprod{\Amap(V_tSV_t^\top)}{y_t} +\frac{1}{2\rho_t}\twonorm{b-\eta\Amap (\bar{X}_t)- \Amap(V_tSV_t^\top)}^2. \] The updates of $\Amap(\bar{X}_t)$ and $\inprod{C}{\bar{X}_t}$ are also easy given the low rank updates of $\bar{X}_t$ in \eqref{eqn: updatebarX_t+1FC} and \eqref{eq: aggregationUpdateOurMethod}. Keeping only $\Amap(\bar{X}_t)$ and $\inprod{C}{\bar{X}_t}$ is advantageous when $\Amap$ and $\inprod{C}{\cdot}$ can be quickly applied to low rank matrices. Moreover, one can recover the matrix $\bar{X}_t$, without ever storing it, for both Block-Spec and HR-Spec using the matrix sketching idea in \cite{tropp2017practical}. We explain the sketching procedures more carefully in Section \ref{sec: matrixSketching} in the appendix. \paragraph{Summary of solving \eqref{eq: subproblemAMSpectralbundleMethod}} We summarize how to solve the minimax problem and obtain $z_t^\star$, $\eta_t^\star$, and $S^\star_t$ in the following two steps: \begin{enumerate} \item Solve \eqref{eq: subproblemSpectralbundleMethodsolveStep3} for general $\bar{r}>1$ via the accelerated projected gradient method or the interior point method, and obtain its optimal $S_t^\star$ and $\eta_t^\star$. The problem can be solved quickly for small $\bar{r}$ and admits explicit solution formulas for $\bar{r}=1$. \item Compute $z_t^\star = y_t +\frac{1}{\rho_t}\left(b-\Amap(\eta_t^\star \bar{X}_t +V_tS_t^\star V_t^\top)\right)$. \end{enumerate} \section{Discussion}\label{sec: discussion} In this paper, we give sublinear convergence rates for the classical spectral bundle method proposed by Helmberg and Rendl~\cite{helmberg2000spectral}.
We also develop Block-Spec, which not only enjoys the same rates, but also speeds up to linear convergence under proper parameter choices and structural assumptions. We verify our theoretical results via numerical experiments. We close by displaying a few potential future and (hopefully) interesting directions: \begin{itemize} \item \textbf{Handling other constraints:} The problem format \eqref{p} only has equality constraints. Incorporating inequality constraints and certain norm constraints such as $\norm{\Amap X-b}\leq \varepsilon$ for some $\varepsilon>0$ might be beneficial for other applications of SDP, such as the stochastic block model with more than 2 blocks \cite{amini2018semidefinite} and noisy matrix completion \cite{candes2010matrix}. It seems straightforward to extend the current framework to these new settings by introducing additional dual variables or analyzing the new dual objective. \item \textbf{Converging to the relative interior of the dual solution set $\ysolset$:} In Theorem \ref{thm: linear convergence of Block SBM under the extra condition strict complementarity}, the rank estimate $\bar{r}$ needs to satisfy $\bar{r}\geq r_d$ instead of $\bar{r}\geq \rank(\xsol)$, assuming uniqueness of the primal solution. Though the quantity $r_d$ can indeed be larger than $\rank(\xsol)$, as shown in \cite[Theorem 5.1]{ding2020regularity}, $\bar{r}\geq \rank(\xsol)$ already ensures quick convergence in our numerics. By inspecting the proof of Theorem \ref{thm: linear convergence of Block SBM under the extra condition strict complementarity}, linear convergence can be proved assuming Block-Spec converges to a dual solution that is in the relative interior of $\ysolset$. This is indeed what we observed after examining the dual slack matrices (not shown here). Of course, such convergence cannot be guaranteed by the current algorithm design.
Hence we ask whether it is possible to design an algorithm that has low per-iteration complexity and always converges to the relative interior of the optimal solution set. \item \textbf{Adaptive choice of $\rho$ and $\bar{r}$:} Currently, $\rho$ and $\bar{r}$ are set to be constants. Is it possible to set $\rho$ and $\bar{r}$ adaptively within the algorithm? Would such a choice hurt or enhance primal convergence? \end{itemize} \section{Introduction} We consider solving semidefinite programs of the following form: \begin{equation}\label{p} \tag{P} \begin{aligned} & \underset{X\in\symMat^{\dm}\subset \RR^{\dm \times \dm}}{\text{maximize}} & & \langle -C, X \rangle \\ & \text{subject to} & & \Amap X = b \\ &&& X \succeq 0, \end{aligned} \end{equation} where the decision variable $X$ is required to be symmetric and positive semidefinite: $X\succeq 0$. The problem data comprise a symmetric cost matrix $C\in \symMat^{\dm}\subset \RR^{\dm \times \dm}$, a linear map $\Amap: \symMat^{\dm} \rightarrow \RR^{\ncons}$, and a right hand side vector $b \in \RR^{\ncons}$. The task of solving \eqref{p} is often approached by considering its dual problem: \begin{equation}\label{d} \tag{D} \begin{aligned} & \underset{y\in\RR^{\ncons }}{\text{minimize}} & & \langle -b, y \rangle \\ & \text{subject to} & & \Amap^*y \preceq C. \end{aligned} \end{equation} Semidefinite programming occurs at the heart of many important large-scale problems (for example, matrix completion \cite{candes2009exact}, max-cut \cite{goemans1995improved}, community detection \cite{bandeira2018random}, and phase retrieval \cite{candes2013phaselift}).
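The primal-dual pair above satisfies weak duality: for any feasible $X$ and $y$, $\langle -C,X\rangle \leq \langle -b,y\rangle$, with gap $\inprod{Z(y)}{X}\geq 0$. The following minimal Python/numpy sketch checks this on random data; the list \texttt{A\_ops} of symmetric matrices representing $\Amap$ is an illustrative assumption, and $b$ and $C$ are constructed so that feasibility holds by design.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 5
sym = lambda M: (M + M.T) / 2

A_ops = [sym(rng.standard_normal((n, n))) for _ in range(m)]      # matrices representing A
A = lambda X: np.array([np.tensordot(Ai, X) for Ai in A_ops])     # A(X)
A_adj = lambda y: sum(yi * Ai for yi, Ai in zip(y, A_ops))        # A^* y

# primal feasible: X is PSD by construction, and b is defined so that A(X) = b
G = rng.standard_normal((n, n))
X = G @ G.T
b = A(X)

# dual feasible: C is defined so that the slack Z(y) = C - A^* y = S S^T is PSD
y = rng.standard_normal(m)
S = rng.standard_normal((n, n))
C = A_adj(y) + S @ S.T

# duality gap <Z(y), X> is nonnegative, hence <-C, X> <= <-b, y>
gap = np.tensordot(C - A_adj(y), X)
```

The one-line proof mirrors the computation in the code: $\langle -b,y\rangle-\langle -C,X\rangle = \inprod{C-\Amap^*y}{X} = \inprod{Z(y)}{X}\geq 0$ for PSD $Z(y)$ and $X$.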
A huge body of work has been devoted to solving the SDP problem \eqref{p} \cite{todd2001semidefinite,nesterov1989self,nesterov1994interior,alizadeh1995interior,burer2003nonlinear,glowinski1975approximation,helmberg2000spectral,boyd2011distributed,friedlander2016low,yurtsever2019conditional,renegar2014efficient,ding2019optimal}. We refer the reader to \cite{monteiro2003first}, \cite[Section 2]{ding2019optimal}, and \cite[Section 3 and 4]{majumdar2019survey} for surveys of methods for solving the SDP problem. Among these methods, the spectral bundle method proposed by Helmberg and Rendl~\cite{helmberg2000spectral} is of particular interest in this paper due to its merits of low per-iteration complexity and fast convergence in practice. These two properties make it ideal for solving large-scale problems, where a high per-iteration cost may make computing even a single iteration prohibitively slow. Interestingly, instead of dealing with \eqref{p} and \eqref{d} directly, Helmberg and Rendl's method considers an equivalent penalized yet unconstrained form of \eqref{d}: for any sufficiently large $\alpha$, e.g., larger than the trace of any optimal solution of \eqref{p} \cite[Lemma 6.1]{ding2019optimal}\footnote{The lemma in \cite{ding2019optimal} actually requires the primal solution to be unique. However, a closer look at the proof reveals that such a condition can be replaced by $\alpha >\sup_{\xsol \in \xsolset}\tr(\xsol)$.}, \eqref{d} is equivalent to (in the sense of having the same optimal value and solution set)\footnote{Actually, the HR spectral bundle method requires the trace of every feasible $X$ for \eqref{p} to be the same, and deals with the eigenvalue instead of the maximum of the eigenvalue and zero in \eqref{eq: penaltySDP}.
However, the method can be translated word for word to the general situation where different feasible $X$ admit different traces.} \begin{equation}\tag{\texttt{pen}-D} \begin{aligned}\label{eq: penaltySDP} & \underset{y\in\RR^{\ncons}}{\text{minimize}} & & F(y) :\,= \langle -b, y \rangle +\alpha \max\{\lambda_{\max}(\Amap^*y-C), 0\}. \end{aligned} \end{equation} In Section~\ref{sec: HR spectral bundle method}, we formally define Helmberg and Rendl's spectral bundle method, which we denote by HR-Spec. The main idea behind HR-Spec is to approximate the nonnegative eigenvalue function, $\alpha \max\{\lambda_{\max}(\Amap^*y-C), 0\}$, by a maximum of lower bounds indexed by a small SDP representable set. Then, using this model of the objective, iterative improvement can be achieved by running the proximal bundle method~\cite{Du2017}. Importantly, using a small SDP representable set makes the subproblems involving the approximation of $\alpha \max\{\lambda_{\max}(\Amap^*y-C), 0\}$ and the bundle method easier to solve. We defer the exact details of constructing this model to Section~\ref{sec: Subgradient lower bound and spectral lower bound} and of the algorithm to Section~\ref{sec: HR spectral bundle method}. The per-iteration complexity of such methods can be much lower than that of many ADMM type methods \cite{boyd2011distributed}, which require projection onto the SDP cone, $\symMat_+^\dm$, costing $\bigO(\dm^3)$ in general. Indeed, in Section~\ref{sec: ImportantsubproblemSolver}, we discuss the computational advantages of this approach and show that, with a proper choice of the parameters, only one eigenvector computation is required per iteration. HR-Spec has received considerable attention since it was first proposed (over 500 citations), and various variants have been developed later on \cite{helmberg2002spectral,apkarian2008trust,helmberg2014spectral}. However, despite the success of this method, its convergence theory has left open two important issues.
First, there are no convergence rate guarantees for HR-Spec. The main guarantee, Lemma 5 in \cite{helmberg2000spectral}, only ensures that the dual iterates for \eqref{p} converge in terms of objective value. Second, there is no convergence guarantee on the primal side. In \cite{helmberg2000spectral}, only a paragraph in \cite[Section 8]{helmberg2000spectral} discusses how one may reconstruct the primal iterates, without any guarantees or numerical verification of the quality of such primal reconstruction. \paragraph{Our contribution} We close these gaps in the theory of the spectral bundle method and propose a new variant of the method which enjoys greatly improved convergence rates: \begin{itemize} \item \textbf{Convergence rates for the HR spectral bundle method}: In Theorem \ref{thm: sublinearates}, we show that HR-Spec admits a convergence rate of $\bigO(\frac{\log(1/\epsilon)}{\epsilon^3})$ in terms of the dual objective, and $\bigO(\frac{\log(1/\epsilon)}{\epsilon^6})$ in terms of primal convergence, merely under strong duality. With the extra condition of strict complementarity, described in Section \ref{sec: analtical conditon}, the dual objective and primal convergence rates speed up to $\bigO(\frac{\log(1/\epsilon)}{\epsilon})$ and $\bigO(\frac{\log(1/\epsilon)}{\epsilon^2})$ respectively. \item \textbf{Development of the Block spectral bundle method (Block-Spec)}: We develop a new variant of HR-Spec, called the Block spectral bundle method. It not only enjoys the same per-iteration complexity and theoretical guarantees as HR-Spec, but also enjoys \emph{linear convergence} whenever strict complementarity holds; see Theorem \ref{thm: linear convergence of Block SBM under the extra condition strict complementarity}.
\end{itemize} We finally remark that both HR-Spec and Block-Spec are compatible with the matrix sketching idea of \cite{tropp2017practical} and hence can avoid storing a matrix $X$ with $\dm^2$ entries during the iterations, thereby achieving the so-called storage optimality discussed in \cite[Section 1.2]{ding2019optimal}. An earlier attempt by the authors \cite{ding2019bundle} explored this idea. Since storage is not a central topic of this paper, we defer the sketching procedure to Section \ref{sec: matrixSketching} in the appendix. \paragraph{Paper organization} The rest of the paper is organized as follows. In Section \ref{sec:def}, we describe the main ingredients of spectral bundle methods (spectral lower bounds, regularization, and aggregation) and formally define Block-Spec and HR-Spec. In Section \ref{sec: analysis}, we present the convergence rates for both HR-Spec and Block-Spec. In Section \ref{sec: numerics}, we demonstrate that both methods are effective in solving modern problems admitting low rank solutions and validate our theory: when the conditions of our linear convergence theory are met, Block-Spec speeds up substantially. \paragraph{Notation} We denote the sets of optimal solutions of \eqref{p} and \eqref{d} by $\xsolset$ and $\ysolset$, respectively. We equip $\symMat^{\dm}$ and $\real^{\ncons}$ with the trace inner product and the dot product, respectively, and denote both by $\inprod{\cdot}{\cdot}$. The induced norms are both denoted by $\norm{\cdot}$. For a symmetric matrix $A\in \symMat^\dm$, we write its eigenvalues as $\lambda_{\max}(A)=\lambda_{1}(A)\geq \dots \geq \lambda_{\dm}(A)$. The matrix operator norm (largest singular value), Frobenius norm, and nuclear norm (sum of singular values) are denoted by $\opnorm{\cdot}$, $\fronorm{\cdot}$, and $\nucnorm{\cdot}$, respectively. The dual slack matrix $Z(y)$ for each $y\in \RR^{\ncons}$ is defined as $Z(y):\, = C-\Amap^*y$.
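As a point of reference for the numerics that follow, the penalized dual objective in \eqref{eq: penaltySDP} can be evaluated with a single extreme-eigenvalue computation. Below is a minimal numpy sketch on a toy instance of our own making (the constraint map, data, and function name are illustrative, not the experimental setup of this paper):

```python
import numpy as np

def penalized_dual_objective(y, b, A_mats, C, alpha):
    # F(y) = <-b, y> + alpha * max(lambda_max(A^* y - C), 0),
    # where A^* y = sum_i y_i A_i is the adjoint of the constraint map.
    Z = sum(yi * Ai for yi, Ai in zip(y, A_mats)) - C
    lam_max = np.linalg.eigvalsh(Z)[-1]  # only the top eigenvalue is needed
    return float(-b @ y + alpha * max(lam_max, 0.0))

# Toy instance: a single identity constraint matrix on 3x3 symmetric matrices.
n = 3
A_mats = [np.eye(n)]
b = np.array([1.0])
C = np.diag([1.0, 0.5, 0.0])
F0 = penalized_dual_objective(np.array([0.0]), b, A_mats, C, alpha=2 * n)
F1 = penalized_dual_objective(np.array([1.0]), b, A_mats, C, alpha=2 * n)
```

At scale, only the largest eigenvalue (and a corresponding eigenvector) of $\Ajmap y-C$ is required, which is why a Lanczos-type eigensolver suffices per iteration.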
The operator norm of $\Amap^*$ is defined as $\opnorm{\Amap^*}=\max_{y\in \RR^{\ncons},\twonorm{y}\leq 1}\fronorm{\Amap^*y}$. For a closed set $\mathcal{X}\subset \RR^{\ncons}$ and a point $z\in \RR^{\ncons}$, we define the distance of $z$ to $\mathcal{X}$ as $\dist(z,\mathcal{X})=\inf_{x\in \mathcal{X}}\norm{x-z}$. \section{Numerics}\label{sec: numerics} In this section, we give preliminary numerics demonstrating that (i) both methods are able to solve modern problems of interesting size, matrix completion and max-cut, and (ii) \emph{more importantly}, the rates given in Section \ref{sec: analysis} are not artifacts of the proofs but are observed in the experiments. \paragraph{Experiments setup} The two SDPs for matrix completion and max-cut are displayed in Table \ref{tb: SpecBundleMethodperformance}. For max-cut, $L$ is the Laplacian of the graph G1 in \cite{Gset} with $800$ vertices. For matrix completion, $\Omega$ is the set of indices of the observed entries of the underlying rank $3$ matrix $\trux \in \RR^{400\times 400}$. Here $\trux = WW^\top$, where $W\in \RR^{400\times 3}$ has entries following the Rademacher distribution. Each entry of $\trux$ is observed with probability $0.1$. Both problems have decision variables of size $800\times 800$, satisfy strong duality and strict complementarity, and have a unique primal solution $\xsol$, as verified in \cite{ding2020regularity}. We set $\alpha =2\dm$, $\rho=0.5$, $\beta = 0.25$ for max-cut and $\alpha = 4 \nucnorm{\trux}$, $\rho =5$, $\beta= 0.25$ for matrix completion. Both methods initialize $X_0$, $y_0$, and $z_0$ at zero. Subproblem \eqref{eq: subproblemAMSpectralbundleMethod} is solved via Mosek \cite{mosek2010mosek}. We run 100 iterations for matrix completion and 200 iterations for max-cut. The optimal value $\pval$ and primal solution $\xsol$ for max-cut are obtained through Mosek \cite{mosek2010mosek}.
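The ground-truth and observation model for the matrix-completion instance described above can be generated in a few lines. A minimal numpy sketch follows; the seed is our choice, and we do not symmetrize the observation set since the text does not specify this:

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen for reproducibility (our choice)
n, rank, p_obs = 400, 3, 0.1

# Rademacher factor W and rank-3 ground truth X = W W^T.
W = rng.choice([-1.0, 1.0], size=(n, rank))
X_true = W @ W.T

# Each entry of X_true is observed independently with probability 0.1.
mask = rng.random((n, n)) < p_obs
Omega = np.argwhere(mask)  # indices (i, j) of observed entries
```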
We set $\pval = 2\nucnorm{\trux}$ and $\xsol = \begin{bmatrix} \trux & \trux \\ \trux & \trux \end{bmatrix}$ for matrix completion. This choice of $\xsol$ indeed solves the matrix completion SDP with high probability \cite{candes2009exact}. Let the rank of the optimal solution be $\rsol = \rank(\xsol)$, which is $13$ for max-cut and $3$ for matrix completion. We set the rank estimate $\bar{r}= \rsol-1$, $\rsol$, and $\rsol+1$. \paragraph{Experiment Results} The experiment results are shown in Table \ref{tb: SpecBundleMethodperformance}, which reports the accuracy of the last iterates, and in Figure \ref{fig: penalizedObjectiveValue}, which shows the evolution of the dual objective value $F$ in \eqref{eq: penaltySDP}. The dual optimality (dual opt.), primal optimality (primal opt.), and primal feasibility (primal feas.) are defined as $\frac{F(y)-\dval}{\dval}$, $\abs{\frac{\inprod{C}{X}-\pval}{\pval}}$, and $\frac{\norm{\Amap X-b}}{\norm{b}}$, respectively. \paragraph{Solving problems to moderate accuracy} As shown in Table \ref{tb: SpecBundleMethodperformance}, whenever $\bar{r}\geq \rsol$, HR-Spec solves both problems in terms of dual optimality to moderately low accuracy ($10^{-3}\sim 10^{-4}$), while Block-Spec solves both problems in terms of dual optimality to moderately high accuracy ($10^{-7}\sim 10^{-8}$). We note that Block-Spec actually achieves $10^{-4}\sim 10^{-5}$ accuracy within 50 iterations for both problems, as shown in Figure \ref{fig: penalizedObjectiveValue}. We include an additional 50 iterations for matrix completion and 150 iterations for max-cut to see how HR-Spec performs, since most of the time it only achieves $10^{-2}$ accuracy after 50 iterations. In terms of the primal iterate $X_t$, primal feasibility is usually worse than dual optimality by one or two orders of magnitude, while primal optimality is usually of the same order.
The fact that primal feasibility is somewhat worse is to be expected from our lemmas in Section \ref{sec: Relating primal and dual convergence}, which indicate that the bound on primal feasibility can be worse than that on dual optimality. \paragraph{Sublinear and linear rates} As shown in Figure \ref{fig:figure Matrix Completion}, HR-Spec may exhibit a sublinear convergence rate for the cases $\bar{r}=2,3$. In Figure \ref{fig:figure Max-cut}, the method also converges slowly, indicating a potentially sublinear rate. Block-Spec converges very quickly in both cases whenever $\bar{r}\geq \rsol$, as expected from Theorem \ref{thm: linear convergence of Block SBM under the extra condition strict complementarity}. It flattens out after reaching $10^{-8}$ accuracy, which we suspect is due to inaccuracy in the eigenvalue solver or the choice of $\rho$. Block-Spec also converges more slowly whenever $\bar{r}<\rsol$, indicating that the condition $\bar{r}\geq \rsol$ is crucial for quick convergence. \begin{table} \begin{tabular}{clllll} \hline Problem & $\bar{r}$ & Method & Dual Opt. & Primal Opt. & Primal Feas.\\ \hline \textbf{Matrix Completion} & \multirow{2}{*}{2} & HR & $0.6177$ & 0.2621 & 0.3707 \\ \multirow{5}{*}{$ \displaystyle \begin{aligned} &{\text{max}} & & -\inprod{I}{W_1}-\inprod{I}{W_2} \\ & \text{s.t.} & & X_{ij} = \trux_{ij},\, (i,j)\in \Omega \\ & & & \begin{bmatrix} W_1 & X \\ X^\top & W_2 \end{bmatrix}\succeq 0.
\end{aligned} $} & & B & 0.07236 & 0.01155 & $0.01308$ \\ & \multirow{2}{*}{3} & HR & 0.1898 & $1.2146 \times 10^{-11}$ & 0.1448 \\ & & B & $2.788 \times 10^{-7}$& $8.441\times 10^{-7}$ & $ 1.098\times 10^{-5}$ \\ & \multirow{2}{*}{4} & HR & $7.0455\times 10^{-7}$ & $4.782\times 10^{-11}$ & $1.547\times 10^{-4}$ \\ & & B & $4.967\times 10^{-8}$ & $2.823\times 10^{-8}$ & $9.154\times 10^{-6}$\\ \hline \hline \textbf{Max-cut} & \multirow{2}{*}{12} & HR & $8.740 \times 10^{-4}$ & $6.092\times 10^{-4}$ & 0.04898 \\ \multirow{5}{*}{$ \displaystyle \begin{aligned} &\mbox{max} & & \inprod{L}{X}\\ &\mbox{s.t.} & & \diag(X) = \ones \\ & & & X\succeq 0 \end{aligned} $ } & & B & $6.670\times 10^{-6}$ & $3.550\times 10^{-6}$& $6.092 \times 10^{-4}$ \\ & \multirow{2}{*}{13} & HR & $5.357\times 10^{-4}$ & $2.185\times 10^{-4}$ & 0.03988 \\ & & B & $6.234\times 10^{-10}$ & $6.988\times 10^{-9}$& $9.772\times 10^{-6}$ \\ & \multirow{2}{*}{14} & HR & $1.278\times 10^{-4}$ & $1.574\times 10^{-4}$ & 0.01574\\ & & B & $1.207\times 10^{-8}$ & $2.281\times 10^{-7}$ & $2.733\times 10^{-5}$\\ \hline \end{tabular} \caption{This table shows the final accuracy of $y_t$ and $X_t$ for the block spectral bundle method (B) and the HR spectral bundle method (HR) after $100$ iterations for matrix completion and $200$ iterations for the max-cut problem, with varying rank estimate $\bar{r}$. The dual optimality (dual opt.), primal optimality (primal opt.), and primal feasibility (primal feas.) are defined as $\frac{F(y)-\dval}{\dval}$, $\abs{\frac{\inprod{C}{X}-\pval}{\pval}}$, and $\frac{\norm{\Amap X-b}}{\norm{b}}$, respectively. The set $\Omega$ is the set of indices of observed entries of the ground truth $\trux$ for matrix completion.
The matrix $L$ is the Laplacian matrix of the graph.}\label{tb: SpecBundleMethodperformance} \end{table} \begin{figure}[H] \begin{subfigure}[(a)]{.5\textwidth} \centering \includegraphics[width=0.8\linewidth, height= 0.25\textheight]{MatrixCompletionPlot.pdf} \caption{Matrix Completion} \label{fig:figure Matrix Completion} \end{subfigure}% \hspace{5pt} \begin{subfigure}[(b)]{.5\textwidth} \centering \includegraphics[width=0.8\linewidth, height= 0.25\textheight]{Max-cutPlot.pdf} \caption{Max-cut} \label{fig:figure Max-cut} \end{subfigure} \caption{The evolution of the relative penalized dual objective value $\abs{\frac{F(y_t)-F(\ysol)}{{F(\ysol)}}}$. Here B is Block-Spec, shown as the solid line, and HR is HR-Spec, shown as the dotted line.} \label{fig: penalizedObjectiveValue} \end{figure} \section{Lemmas, proofs, and procedures for Section \ref{sec:def}}\label{sec: detailsForSectionref{sec:def}} \section{Inequalities \eqref{eq: subgradientAndAggregation} of HR bundle method}\label{sec: Model comparison for HR bundle method} We aim to show that property \eqref{eq: subgradientAndAggregation} continues to hold for the HR bundle method. This is verified once we establish the following membership relations: \begin{equation*} X_t\in \mathcal{W}_{t+1}\quad\text{and}\quad \alpha v_1v_1^\top \in \mathcal{W}_{t+1}. \end{equation*} We consider the general situation in which we may choose $r$ eigenvectors of $S_t^\star$ instead of $\bar{r}-1$, as stated in step 3' in Section \ref{sec: Block spectral bundle method}. The membership of $X_t$ in $\mathcal{W}_{t+1}$ is by construction. To see this, recall that the number of columns of $V_tQ_1$ is $r$, as we pick $r$ eigenvectors of $S_t^\star$. Denote by $R_{:,1:r}$ the submatrix of $R$ consisting of its first $r$ columns. We have $V_tQ_1 = Q R_{:,1:r}$. Then we can set $\eta = 1$, the top left $r\times r$ submatrix of $S$ to $S_{1:r,1:r}=R_{:,1:r}\Lambda_1R_{:,1:r}^\top$, and the rest of $S$ to $0$.
Then we have $V_{t+1}SV_{t+1}^\top = V_tQ_1\Lambda_1 Q_1^\top V_t^\top$. This choice of $\eta$ and $S$ makes $X_t = \eta \bar{X}_{t+1} + V_{t+1}SV_{t+1}^\top$, due to the updating scheme \eqref{eqn: updatebarX_t+1FC} of $\bar{X}_{t+1}$ and the definition of $X_t$ in \eqref{eq: X_tdefinition}. This choice of $S$ is feasible because $S\succeq 0$, as $\Lambda_1\succeq 0$, and \begin{equation} \begin{aligned} \label{eq: X*intheNewSettrace} \eta \tr (\bar{X}_{t+1})+\tr(S) & \overset{(a)}{=} \tr(\eta^\star _t\bar{X}_t)+ \tr(V_tQ_2\Lambda_2Q_2^\top V_t^\top) + \tr(V_tQ_1\Lambda_1 Q_1^\top V_t^\top)\\ &\leq \eta_t^\star\alpha +\tr(V_tS^\star_tV_t^\top) \overset{(b)}{\leq} \alpha. \end{aligned} \end{equation} Here in step $(a)$ we use the definition of $\bar{X}_{t+1}$ and $\tr(S)= \tr(V_{t+1}SV_{t+1}^\top)$, which holds because $V_{t+1}$ has orthonormal columns. Step $(b)$ is due to the fact that $V_t$ has orthonormal columns and that the iterates $\eta^\star_t$ and $S^\star_t$ satisfy the constraint $\eta^\star_t \alpha +\tr (S^\star _t)\leq \alpha$ by construction. \section{Projecting to a scaled $\mathcal{S}_t$}\label{sec: projecttoS_t} Recall that we are trying to solve \[ \min_{(\eta,S)\in\mathcal{S}_t} f_t(\eta,S) \] where \begin{equation} \begin{aligned} f_t(\eta,S) &:\;=\inprod{b}{y_t} +\inprod{\eta \bar{X}_t+V_tSV_t^\top}{C-\Amap^*y_t} +\frac{1}{2\rho_t}\twonorm{b-\Amap \left( \eta \bar{X}_t+V_tSV_t^\top \right)}^2, \\ \mathcal{S}_t & := \{{S\succeq 0,\;\eta \geq 0,\;\tr(S)+\alpha \eta \leq \alpha}\}. \end{aligned} \end{equation} After a proper scaling of $S$, we may consider the constraint set as \[ \tilde{\mathcal{S}}= \{S\in \symMat_+^k,\, \eta\geq 0,\, \tr(S)+\eta \leq 1\}, \] and a new objective $\tilde{f}_t(\eta,S) = f_t(\eta,\alpha S)$. Let us explain how to project an arbitrary $(\eta_0,S_0)\in \RR\times \symMat^{\bar{r}}$ onto the set $\tilde{\mathcal{S}}$ to obtain its projection $(\eta^\star,S^\star)$.
The procedure is as follows: \begin{enumerate} \item Compute the eigenvalue decomposition of $S_0 = V\Lambda_0 V^\top$, where $\Lambda_0\in \symMat^{\bar{r}}$ is a diagonal matrix with diagonal $\vec{\lambda}_0=( \lambda_1,\dots,\lambda_{\bar{r}})$. \item Compute $(\eta^\star, \vec{\lambda}^\star) = \arg\min_{\eta+\sum_{i=1}^{\bar{r}}\lambda_i \leq 1,\;\eta \geq 0,\;\lambda_i\geq 0}\twonorm{(\eta_0,\vec{\lambda}_0)-(\eta,\vec{\lambda})}$. \item Form $S^\star = V\diag(\vec{\lambda^\star})V^\top$. Here $\diag(\vec{\lambda})$ forms a diagonal matrix with the vector $\vec{\lambda}$ on the diagonal. \end{enumerate} The main computational cost is the eigenvalue decomposition, which requires $\bigO(\bar{r}^3)$ time. The second step requires projection onto the convex hull of the probability simplex and the origin, which can be done in $\bigO(\bar{r}\log \bar{r})$ time \cite{wang2013projection}. The correctness of the procedure can be verified as in \cite[Lemma 3.1]{allen2017linear} and \cite[Lemma 6]{garber2019convergence}. \section{Matrix Sketching} \label{sec: matrixSketching} When $m$ is on the order of $\dm$, it is beneficial to use the matrix sketching idea developed in \cite{tropp2017practical} to achieve storage reduction. As noted in Section~\ref{sec: ImportantsubproblemSolver}, if we store $\Amap (X_t)= z_t$ and $c_t = \inprod{C}{X_t}$ at each iteration, then solving the small-scale SDP \eqref{eq: subproblemSpectralbundleMethodsolveStep3} poses no difficulty. If $\Amap$ and the inner product with $C$ can be applied to low rank matrices efficiently, then updating $z_t$ and $c_t$ is not hard, due to the linearity of the updating scheme $\bar{X}_{t+1} = X_t = \eta^\star_t\bar{X}_t+V_t S^\star_t V_t^\top$ in \eqref{eq: aggregationUpdateOurMethod} for Block-Spec, and of the updating scheme of $\bar{X}_t$ in \eqref{eqn: updatebarX_t+1FC} for HR-Spec. Let us first explain how to avoid storing the iterate $\bar{X}_{t+1}=X_t = \eta^\star_t\bar{X}_t+V_t S^\star_t V_t^\top$ for Block-Spec.
We first draw two matrices with independent standard normal entries \begin{equation} \begin{aligned} \Psi \in \real^ {\dm \times k} \quad \text{with} \quad k=2r+1; \\ \Phi \in \real ^{l\times \dm} \quad \text{with} \quad l=4r+3. \end{aligned}\nonumber \end{equation} Here $r$ is chosen by the user: it represents either an estimate of the true rank of the primal solution or the user's computational budget for dealing with large matrices. We use $Y^C_t$ and $Y^R_t$ to capture the column space and the row space of $X_t$: \begin{equation}\label{eqn: onetimeSketch} Y^C_t =\bar{ X}_t\Psi \in \real^{\dm \times k},\qquad Y^R_t =\Phi \bar{X}_t \in \real^{l\times \dm} . \end{equation} Hence we initially have $Y^C_0=0$ and $Y^R_0=0$. Notice that we do not store the matrix $X_t$ in this case. However, using the updating scheme $X_{t+1} = \eta^\star_t\bar{X}_t+V_t S^\star_t V_t^\top$, where $V_t\in \RR^{\dm \times \bar{r}}$ and $S_t^\star \in \symMat^{\bar{r}}$, the sketches $Y^C_{t+1}$ and $Y^R_{t+1}$ can be computed directly as \begin{align} Y^C_{t+1} = V_tS^\star _t(V_t^\top\Psi)+ \eta_t^\star Y^C_t \in \real^{\dm \times k }, \label{eq:sketch-primal-update1}\\ Y^R_{t+1} = (\Phi V_t)S_t^\star V_t^\top+ \eta_t^\star Y^R_t\in \real^{l\times \dm} . \label{eq:sketch-primal-update2} \end{align} This observation allows us to form the sketches $Y^C_t$ and $Y^R_t$ from the stream of updates. We then reconstruct $X_t$, obtaining the reconstructed matrix $\hat{X}_t$, by \begin{align}\label{eqn: reconstructionsketch} Y^C_t = Q_tR_t, \quad B_t = (\Phi Q_t)^{\dagger} Y^R_t, \quad \hat{X}_t= Q_t[B_t]_r, \end{align} where $Q_tR_t$ is the QR factorization of $Y^C_t$ and $[\cdot]_r$ returns the best rank $r$ approximation in Frobenius norm.
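As a concrete illustration, the sketch updates \eqref{eq:sketch-primal-update1}--\eqref{eq:sketch-primal-update2} and the reconstruction \eqref{eqn: reconstructionsketch} can be written in a few lines of numpy. The dimensions, seed, and the single update starting from zero sketches are our choices for illustration; note that the row sketch multiplies by $\Phi$, matching the definition $Y^R_t=\Phi\bar{X}_t$, and the exact matrix is formed here only to check the output:

```python
import numpy as np

rng = np.random.default_rng(1)      # illustrative sizes and seed, not from the paper
n, r = 60, 3
k, l = 2 * r + 1, 4 * r + 3
Psi = rng.standard_normal((n, k))   # column-space test matrix
Phi = rng.standard_normal((l, n))   # row-space test matrix

def sketch_update(YC, YR, eta, V, S):
    # Update the sketches for X_{t+1} = eta * X_t + V S V^T without forming X.
    return V @ S @ (V.T @ Psi) + eta * YC, (Phi @ V) @ S @ V.T + eta * YR

def reconstruct(YC, YR, r):
    # X_hat = Q [ (Phi Q)^dagger YR ]_r, with Q from the QR factorization of YC.
    Q, _ = np.linalg.qr(YC)
    B = np.linalg.pinv(Phi @ Q) @ YR
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ (U[:, :r] * s[:r]) @ Vt[:r, :]  # best rank-r part of B

# One update with a rank-r inner term; recovery is exact when rank(X) <= r.
V0 = np.linalg.qr(rng.standard_normal((n, r)))[0]
S0 = np.diag(rng.random(r) + 1.0)
X_exact = V0 @ S0 @ V0.T
YC, YR = sketch_update(np.zeros((n, k)), np.zeros((l, n)), 0.0, V0, S0)
X_hat = reconstruct(YC, YR, r)
```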
Specifically, the best rank $r$ approximation of a matrix $Z$ is $U\Sigma V^\top$, where $U$ and $V$ consist of the left and right singular vectors corresponding to the $r$ largest singular values of $Z$, and $\Sigma$ is a diagonal matrix with the $r$ largest singular values of $Z$. In an actual implementation, we may only produce the factors $(Q_tU, \Sigma, V)$ defining $\hat{X}_t$ at the end, instead of reconstructing $\hat{X}_t$ in every iteration. We refer the reader to \cite[Theorem 5.1]{tropp2017practical} for the theoretical guarantees on the reconstructed matrix $\hat{X}_t$. Hence we can avoid the \emph{forming a new iterate} procedure in Block-Spec. We remark that the reconstructed matrix $\hat{X}_t$ is not necessarily positive semidefinite. However, this suffices for the purpose of finding a matrix close to $X_t$. A more sophisticated procedure is available for producing a positive semidefinite approximation of $X_t$ \cite[Section 7.3]{tropp2017practical}. For HR-Spec, using the definition of $\bar{X}_t$ in \eqref{eqn: updatebarX_t+1FC}, we can sketch it via the same procedure as for $X_t$ in Block-Spec above, using two matrices $Y_{t}^C$ and $Y_t^R$. To construct the converging primal iterate $X_t$, we note from the definition of $X_t$ in \eqref{eq: X_tdefinition}, $X_{t} = \eta^\star_{t+1}\bar{X}_{t}+V_{t+1} S^\star_{t+1} V_{t+1}^\top$, that we can sketch $X_t$ by constructing $\tilde{Y}_{t+1}^C$ and $\tilde{Y}_{t+1}^R$ as the $Y_{t+1}^C$ and $Y_{t+1}^R$ in \eqref{eq:sketch-primal-update1} and \eqref{eq:sketch-primal-update2}, and reconstruct it using \eqref{eqn: reconstructionsketch} from $\tilde{Y}_{t+1}^C$ and $\tilde{Y}_{t+1}^R$. \section{Proof of Lemma \ref{lem: importantLemmaQuadraticAccurateModel}}\label{sec: proofOfImportantLemma} In this section, we provide a proof of Lemma \ref{lem: importantLemmaQuadraticAccurateModel}. In addition to the Frobenius norm bound, we also provide an operator two norm bound \eqref{eq:qudraticaccuracyopnorm}.
Recall the assumption that $\lambda_{r}(X)-\lambda_{r+1}(X)=\delta>0$ for some $\delta>0$. Let $V\in\real^{\dm\times r}$ be an orthonormal matrix formed by the $r$ eigenvectors corresponding to the top $r$ eigenvalues of $X$. Hence, if $V$ is such a representation, then any $VO$ with $O\in\real^{r\times r}$ orthogonal is also a representation. Recall the $r$-th spectral plus set $\faceplus r(X):=\{VSV^{\top}\mid\tr(S)\leq1,S\succeq0,S\in\symMat^{r}\}.$ Note that any $VO$ produces the same $\faceplus r(X)$. Now for any $Y\in\symMat^{\dm}$, since $\max\{\lambda_{1}(Y),0\}=\max_{W\succeq0,\tr(W)\leq1}\inprod WY$, we see the following always holds (as $\{W\mid W\succeq0,\tr(W)\leq1\}\supset\faceplus r(X)$): \begin{align*} \max\{\lambda_{1}(Y),0\} & \geq\max_{W\in\faceplus r(X)}\inprod WY. \end{align*} Define the error $f_X(Y)$ as \begin{align*} f_{X}(Y) & =\max\{\lambda_{1}(Y),0\}-\max_{W\in\faceplus r(X)}\inprod WY. \end{align*} We always have $f_{X}(Y)\geq0$, as previously argued. If $\lambda_{1}(Y)<0$, then $\max\{\lambda_{1}(Y),0\}=0$ and $Y\preccurlyeq0$, so the approximation $\max_{W\in\faceplus r(X)}\inprod WY=0$ as well. Hence we need only consider the case $\lambda_{1}(Y)>0$ in the following. Let $v$ be the unit eigenvector ($\twonorm v=1$) corresponding to the largest eigenvalue $\lambda_{1}(Y)$; then \begin{align*} f_{X}(Y) & =\lambda_{1}(Y)-\max_{W\in\faceplus r(X)}\inprod WY =\min_{W\in\faceplus r(X)}\inprod{vv^{\top}-W}Y\\ & =\min_{W\in\faceplus r(X)}\underbrace{\inprod{vv^{\top}-W}{Y-X}}_{T_{1}}+\underbrace{\inprod{vv^{\top}-W}X}_{T_{2}}. \end{align*} To analyze $T_{1}$ and $T_{2}$, we first define some notation. Denote by $V'\in\mathbb{R}^{\dm\times r}$ the orthonormal matrix formed by the eigenvectors corresponding to the top $r$ eigenvalues of $Y$.
We assume the first column of $V'$ is $v$. Also denote by $F\in\real^{\dm\times(\dm-r)}$ an orthonormal matrix formed by the remaining eigenvectors of $X$, so the eigenvalue decomposition of $X$ is $X=V\Lambda_{1}V^{\top}+F\Lambda_{2}F^{\top}$ for some diagonal $\Lambda_{1}\in\symMat^{r}$ and $\Lambda_{2}\in\symMat^{\dm-r}$. Let us analyze the term $T_{2}$ first. We may choose $W=VV^{\top}vv^{\top}VV^{\top}$ here. With this choice, $T_{2}$ equals the following: \begin{align*} T_{2} & =\inprod{vv^{\top}-W}X =\inprod{vv^{\top}}X-\inprod WX\\ & \overset{(a)}{=}\inprod{vv^{\top}}{V\Lambda_{1}V^{\top}+F\Lambda_{2}F^{\top}}-\inprod{VV^{\top}vv^{\top}VV^{\top}}{V\Lambda_{1}V^{\top}+F\Lambda_{2}F^{\top}}\\ & \overset{(b)}{=}\inprod{vv^{\top}}{V\Lambda_{1}V^{\top}}+\inprod{vv^{\top}}{F\Lambda_{2}F^{\top}}-\inprod{vv^{\top}}{V\Lambda_{1}V^{\top}}\\ & \overset{(c)}{=}\inprod{V'e_{1}(V'e_{1})^{\top}}{F\Lambda_{2}F^{\top}}. \end{align*} Here we use the eigenvalue decomposition of $X$ in step $(a)$. Step $(b)$ uses the fact that $V$ has orthonormal columns and that $V^{\top}F=0$. Step $(c)$ uses the fact that $v$ is the first column of $V'$. Let $O^{\star}\in\real^{r\times r}$ be an orthogonal matrix with $O^{\star}\in\arg\min_{OO^\top =I}\fronorm{VO-V'}$. Below, we write $V$ for $VO^{\star}$ to lighten notation. Let $E=V'-V$ denote the error between $V'$ and $V$. Also denote by $e_{1}\in\real^{r}$ the vector with first entry $1$ and all other entries $0$.
Let us now bound $T_{2}=\inprod{V'e_{1}(V'e_{1})^{\top}}{F\Lambda_{2}F^{\top}}$: \begin{equation} \label{eq:T2maxlambda} \begin{aligned} T_{2} & =\inprod{(V+E)e_{1}e_{1}^{\top}(V+E)^{\top}}{F\Lambda_{2}F^{\top}} \overset{(a)}{=}\inprod{Ee_{1}e_{1}^{\top}E^{\top}}{F\Lambda_{2}F^{\top}} \\ &\overset{(b)}{\leq}\nucnorm{Ee_{1}e_{1}^{\top}E^{\top}}\opnorm{F\Lambda_{2}F^{\top}} \overset{(c)}{=}\opnorm{Ee_{1}e_{1}^{\top}E^{\top}}\opnorm{F\Lambda_{2}F^{\top}} \\ & \overset{(d)}{\leq}\opnorm{Ee_{1}}^{2}\opnorm{\Lambda_{2}} \overset{(e)}{\leq}\opnorm E^{2}\opnorm{\Lambda_{2}}. \end{aligned} \end{equation} Here we use the fact $V^{\top}F=0$ in step $(a)$. Step $(b)$ is due to H\"older's inequality. Step $(c)$ uses the fact that for a rank $1$ matrix, the nuclear norm is the same as its operator norm. Step $(d)$ uses the submultiplicativity of the operator two norm. The last step $(e)$ uses the fact that the two norm of $e_{1}$ is $1$. Let us now turn back to analyzing $T_{1}$. With the choice of $W=VV^{\top}vv^{\top}VV^{\top}$, the difference $vv^{\top}-W$ is \begin{align*} vv^{\top}-W & =V'e_{1}(V'e_{1})^{\top}-VV^{\top}vv^{\top}VV^{\top}\\ & =V'e_{1}(V'e_{1})^{\top}-VV^{\top}V'e_{1}(V'e_{1})^{\top}VV^{\top}\\ & =(V+E)e_{1}e_{1}^{\top}(V+E)^{\top}-VV^{\top}(V+E)e_{1}e_{1}^{\top}(V+E)^{\top}VV^{\top}\\ & =Ee_{1}e_{1}^{\top}V^{\top}+Ve_{1}e_{1}^{\top}E^{\top}+Ee_{1}e_{1}^{\top}E^{\top}\\ & \quad-VV^{\top}Ee_{1}e_{1}^{\top}V^{\top}-Ve_{1}e_{1}^{\top}E^{\top}VV^{\top}-VV^{\top}Ee_{1}e_{1}^{\top}E^{\top}VV^{\top}. \end{align*} Hence, using the fact that the nuclear norm of a rank one matrix is the same as its operator norm, the nuclear norm of $vv^{\top}-W$ is bounded by \begin{align*} \nucnorm{vv^{\top}-W} & \leq\opnorm{Ee_{1}e_{1}^{\top}V^{\top}}+\opnorm{Ve_{1}e_{1}^{\top}E^{\top}}+\opnorm{Ee_{1}e_{1}^{\top}E^{\top}}\\ & \quad+\opnorm{VV^{\top}Ee_{1}e_{1}^{\top}V^{\top}}+\opnorm{Ve_{1}e_{1}^{\top}E^{\top}VV^{\top}}+\opnorm{VV^{\top}Ee_{1}e_{1}^{\top}E^{\top}VV^{\top}}\\ & \overset{(a)}{\leq}4\opnorm E+2\opnorm E^{2}.
\end{align*} Here in step $(a)$, we use the fact that $\opnorm{e_{1}e_{1}^{\top}}\leq1$ and $\opnorm V\leq1.$ Thus the first term $T_{1}$ is bounded by \begin{equation}\label{eq: T1maxlambda} T_{1} =\inprod{vv^{\top}-W}{Y-X} \overset{(a)}{\leq}\nucnorm{vv^{\top}-W}\opnorm{Y-X} \leq\left(4\opnorm E+2\opnorm E^{2}\right)\opnorm{Y-X}. \end{equation} Now combining (\ref{eq: T1maxlambda}) and (\ref{eq:T2maxlambda}), we find that $f_{X}(Y)$ is upper bounded by \begin{align*} f_{X}(Y) & \leq\opnorm{\Lambda_{2}}\opnorm E^{2}+\left(4\opnorm E+2\opnorm E^{2}\right)\opnorm{Y-X}. \end{align*} Let us consider two cases: \begin{enumerate} \item First is the case of the Frobenius norm. The Frobenius bound \cite[Theorem 2]{yu2015useful} on $E$ says that \begin{align*} \fronorm E & \leq\frac{2\sqrt{2}\fronorm{Y-X}}{\delta}. \end{align*} Hence in this case, we have for all $Y\in\symMat^{\dm}$ \begin{align*} f_{X}(Y) & \leq\frac{8\fronorm{Y-X}^{2}\opnorm{\Lambda_{2}}}{\delta^{2}}+\frac{8\sqrt{2}\fronorm{Y-X}^{2}}{\delta}+\frac{16\fronorm{Y-X}^{3}}{\delta^{2}}. \end{align*} \item Second is the case of the operator norm. We have the operator norm of $E$ bounded by \begin{align*} \opnorm E & \leq\frac{2\sqrt{2}\sqrt{r}\opnorm{Y-X}}{\delta}. \end{align*} In this case, the function $f_{X}(Y)$ is upper bounded by \begin{align} f_{X}(Y) & \leq\frac{8r\opnorm{Y-X}^{2}\opnorm{\Lambda_{2}}}{\delta^{2}}+\frac{8\sqrt{2}\sqrt{r}\opnorm{Y-X}^{2}}{\delta}+\frac{16r\opnorm{Y-X}^{3}}{\delta^{2}}.\label{eq:fxYopeartornormboundglobalwithr} \end{align} If $\opnorm{Y-X}\leq\delta$, then using this for the term $\frac{16r\opnorm{Y-X}^{3}}{\delta^{2}}$, we have \begin{align} f_{X}(Y) & \leq\frac{8r\opnorm{Y-X}^{2}\opnorm{\Lambda_{2}}}{\delta^{2}}+\frac{(8\sqrt{2r}+16r)\opnorm{Y-X}^{2}}{\delta}.\label{eq:fxYoperatornormboundwithr} \end{align} \end{enumerate} Still, we have not reached a globally quadratically accurate model. Let us show that the function $f_{X}(Y)$ is always bounded linearly in $\opnorm{X-Y}$.
We decompose $f_{X}(Y)$ into two terms: \begin{align*} f_{X}(Y) & =\max\{\lambda_{1}(Y),0\}-\max\{\lambda_{1}(V^{\top}YV),0\}\\ & =\underbrace{\max\{\lambda_{1}(Y),0\}-\max\{\lambda_{1}(X),0\}}_{T_{1}}+\underbrace{\max\{\lambda_{1}(X),0\}-\max\{\lambda_{1}(V^{\top}YV),0\}}_{T_{2}}. \end{align*} For the term $T_{1}$, we note that the function $\max\{x,0\}$ is $1$-Lipschitz on $\real$. Thus the term $T_{1}$ is bounded by \begin{equation*} |T_{1}| \leq|\lambda_{1}(Y)-\lambda_{1}(X)| \leq\opnorm{Y-X}. \end{equation*} For the second term $T_{2}$, we note that $\lambda_{1}(X)=\lambda_{1}(V^{\top}XV)$ by the definition of $V$. Hence, using the same reasoning, we have \begin{equation*} |T_{2}| \leq\opnorm{V^{\top}XV-V^{\top}YV} \leq\opnorm{X-Y}, \end{equation*} where the last inequality is due to the submultiplicativity of the operator two norm and $\opnorm V\leq1$. Hence, we see that the error function $f_{X}(Y)$ is always bounded by \begin{align*} |f_{X}(Y)| & \leq2\opnorm{X-Y}. \end{align*} The inequality (\ref{eq:fxYoperatornormboundwithr}) tells us that when $\opnorm{X-Y}\leq\delta$, we have $f_{X}(Y)\leq\frac{8r\opnorm{Y-X}^{2}\opnorm{\Lambda_{2}}}{\delta^{2}}+\frac{(8\sqrt{2r}+16r)\opnorm{Y-X}^{2}}{\delta}.$ Now if $\opnorm{X-Y}\geq\delta$, then certainly $\frac{2\opnorm{X-Y}^{2}}{\delta}\geq2\opnorm{X-Y}$. Hence, the model $\max_{W\in\faceplus r(X)}\inprod WY$ is always quadratically accurate: for all $Y\in\symMat^{\dm},$ \begin{align} 0\leq f_{X}\left(Y\right)\leq & \min\left\{ \frac{8r\opnorm{Y-X}^{2}\opnorm{\Lambda_{2}}}{\delta^{2}}+\frac{(8\sqrt{2r}+16r)\opnorm{Y-X}^{2}}{\delta},\frac{2\opnorm{X-Y}^{2}}{\delta}\right\} .\label{eq:qudraticaccuracyopnorm} \end{align} The same argument applies to the Frobenius norm case, and we reach \begin{align}\label{eq: qudraticaccuracyFrobeniusNorm} f_{X}(Y) & \leq\frac{8\fronorm{Y-X}^{2}\opnorm{\Lambda_{2}}}{\delta^{2}}+\frac{(8\sqrt{2}+16)\fronorm{Y-X}^{2}}{\delta}. \end{align}
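The two-sided bound above is straightforward to sanity-check numerically. The following numpy sketch (sizes, seed, and perturbation scale are our choices) evaluates $f_X(Y)$ through the identity $\max_{W\in\faceplus r(X)}\inprod WY=\max\{\lambda_{1}(V^{\top}YV),0\}$ used in the proof, and records the error together with $\opnorm{X-Y}$ over random perturbations, so that $0\le f_X(Y)\le 2\opnorm{X-Y}$ can be verified:

```python
import numpy as np

rng = np.random.default_rng(2)  # illustrative check; sizes and seed are our choices

def f_X(X, Y, r):
    # f_X(Y) = max(lambda_1(Y), 0) - max(lambda_1(V^T Y V), 0),
    # where V holds the top-r eigenvectors of X.
    V = np.linalg.eigh(X)[1][:, -r:]
    lam1 = lambda M: np.linalg.eigvalsh(M)[-1]
    return max(lam1(Y), 0.0) - max(lam1(V.T @ Y @ V), 0.0)

n, r = 20, 3
A = rng.standard_normal((n, n))
X = (A + A.T) / 2
checks = []
for _ in range(50):
    B = rng.standard_normal((n, n))
    Y = X + 0.1 * (B + B.T) / 2
    err = f_X(X, Y, r)
    gap = np.linalg.norm(X - Y, 2)  # operator norm of the perturbation
    checks.append((err, gap))
```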
https://arxiv.org/abs/1710.05731
Trees and $n$-Good Hypergraphs
Trees fill many extremal roles in graph theory, being minimally connected and serving a critical role in the definition of $n$-good graphs. In this article, we consider the generalization of trees to the setting of $r$-uniform hypergraphs and how one may extend the notion of $n$-good graphs to this setting. We prove numerous bounds for $r$-uniform hypergraph Ramsey numbers involving trees and complete hypergraphs and show that in the $3$-uniform case, all trees are $n$-good when $n$ is odd or $n$ falls into specified even cases.
\section{Introduction} In graph theory, trees play the important role of being minimally connected. The removal of any edge results in a disconnected graph. So, it is no surprise that trees serve as optimal graphs with regard to certain extremal properties, especially in Ramsey theory. Here, one defines the Ramsey number $R(G_1, G_2)$ to be the minimal natural number $p$ such that every red/blue coloring of the edges in the complete graph $K_p$ on $p$ vertices contains a red subgraph isomorphic to $G_1$ or a blue subgraph isomorphic to $G_2$. In 1972, Chv\'atal and Harary \cite{CH} proved that for all graphs $G_1$ and $G_2$, \begin{equation} R(G_1, G_2)\ge (c(G_1)-1)(\chi (G_2)-1)+1,\label{CHineq} \end{equation} where $c(G_1)$ is the order of a maximal connected component of $G_1$ and $\chi (G_2)$ is the chromatic number for $G_2$. Burr \cite{B} was able to strengthen this result slightly by proving that \begin{equation} R(G_1, G_2)\ge (c(G_1)-1)(\chi (G_2)-1)+t(G_2),\label{Burrineq} \end{equation} where $t(G_2)$ is the minimum number of vertices in any color class of any (vertex) coloring of $G_2$ having $\chi (G_2)$ colors. If we consider the special case in which $G_2=K_n$, we find that (\ref{CHineq}) and (\ref{Burrineq}) agree, and we have $$R(G_1, K_n )\ge (c(G_1)-1)(n-1)+1.$$ With regard to this inequality, Chv\'atal \cite{C} was able to prove that trees are optimal: $$R(T_m, K_n )=(m-1)(n-1)+1,$$ where $T_m$ is any tree on $m$ vertices. In particular, it follows that $$R(T_m, K_n)=R(T_m', K_n)$$ for any two trees $T_m$ and $T_m'$ having $m$ vertices and $$R(G, K_n )\ge R(T_m, K_n ),$$ for all graphs $G$ with $c(G)=m$. 
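The lower bounds above are simple arithmetic and can be sketched directly; the function names here are ours, purely for illustration:

```python
def burr_lower_bound(c_G1, chi_G2, t_G2):
    # Burr's bound: R(G1, G2) >= (c(G1) - 1) * (chi(G2) - 1) + t(G2).
    return (c_G1 - 1) * (chi_G2 - 1) + t_G2

def tree_vs_complete(m, n):
    # Chvatal's exact value: R(T_m, K_n) = (m - 1) * (n - 1) + 1
    # for any tree T_m on m vertices.
    return (m - 1) * (n - 1) + 1
```

For $G_2=K_n$ we have $\chi(K_n)=n$ and $t(K_n)=1$, so Burr's bound coincides with Chv\'atal's exact value.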
The optimal role trees possess in this Ramsey theoretic setting led Burr and Erd\H{o}s \cite{BE} to refer to a connected graph $G$ of order $m$ as being {\it $n$-good} if it satisfies $$R(G, K_n )=R(T_m, K_n )=(m-1)(n-1)+1.$$ That is, $G$ is $n$-good if the Ramsey number $R(G,K_n)$ is equal to the lower bound given by Chv\'atal and Harary \cite{CH} and Burr \cite{B} (which happen to agree in this case). Our goal in the present paper is to extend the definition of $n$-good to the setting of $r$-uniform hypergraphs and to consider how one can generalize and adapt the results of \cite{B} and \cite{BE} to the hypergraph setting. In Section 2, we establish the definitions and notations to be used through the remainder of the paper. As $r$-uniform trees will serve an important role in our definitions, we also prove a couple of important results concerning such hypergraphs that are analogues of known results in the graph setting. In Section 3, we prove a generalization of a Ramsey number lower bound that is due to Burr \cite{B} and use this result to define the concept of an $n$-good hypergraph. Several other Ramsey number inequalities are also given in the section, some which generalize results from the theory of graphs. In Section 4, we ask the question ``are $r$-uniform trees $n$-good?'' A previous result due to Loh \cite{L} provides a partial answer and we set the stage for addressing this question in general. Although we are unable to provide a complete answer, we are able to make significant progress on the determination of which $3$-uniform trees are $n$-good in Section 5. In this section, we show that infinitely many $3$-uniform trees are $n$-good without finding a single counterexample. We also consider examples of $3$-uniform loose cycles and we conclude with some conjectures regarding the Ramsey numbers for $r$-uniform trees versus complete hypergraphs. 
\noindent {\bf Acknowledgement:} The authors would like to thank John Asplund for carefully reading a preliminary draft of this paper and for making several valuable comments about its content. \section{Background on Hypergraphs and Trees} Recall that an {\it $r$-uniform hypergraph} $H=(V(H), E(H))$ consists of a nonempty set of vertices $V(H)$ and a set of $r$-uniform hyperedges $E(H)$ in which each hyperedge is an unordered $r$-tuple of distinct vertices in $V(H)$ (of course, $r=2$ corresponds to graphs). Also, if $|V(H)|<r$, then $E(H)$ is necessarily empty. The \textit{complete $r$-uniform hypergraph} $K_{n}^{(r)}$ consists of the vertex set $V = \{1,2,\ldots,n\}$, with all $r$-element subsets of $V$ as hyperedges. If $H_1$ and $H_2$ are $r$-uniform hypergraphs, the \textit{Ramsey number} $R(H_1, H_2 ;r)$ is the smallest natural number $p$ such that any red-blue coloring of $K_{p}^{(r)}$ contains either a red copy of $H_1$ or a blue copy of $H_2$. As with graphs, there are various types of colorings of hypergraphs. A \textit{weak proper vertex coloring} of a hypergraph $H$ is a function $\chi$ from $V(H)$ to a \textit{color set} $C$ such that there is no hyperedge with all vertices taking the same value in $C$. A \textit{color class} is a set of vertices which all share the same color in a given coloring. The size of the smallest color set such that there exists a weak proper vertex coloring of $H$ is the \textit{weak chromatic number of $H$}, denoted $\chi_w(H)$. We write $t(H)$ for the minimum size of a color class in any weak proper vertex coloring of $H$ with $\chi_w(H)$ colors. In the graph setting, several equivalent definitions of trees are used, and we begin our analysis by considering equivalent definitions in the hypergraph setting. A few special terms and properties need to be defined. For $r$-uniform hypergraphs with $r>2$, paths and cycles have many more degrees of freedom than their graphical counterparts. 
In the broadest sense, a {\it Berge path} is a sequence of $k$ distinct vertices $v_1, v_2, \dots , v_k$ and $k-1$ distinct $r$-edges $e_1, e_2, \dots , e_{k-1}$ such that for all $i\in \{ 1, 2, \dots , k-1\}$, $v_i, v_{i+1} \in e_i$. \begin{figure}[h!] \centerline{{\includegraphics[width=0.7\textwidth]{BergePath1Edited.png}} }\caption{A $5$-uniform Berge path on distinct vertices $v_1, v_2, v_3, v_4$.} \label{BergePath} \end{figure} Figure \ref{BergePath} gives an example of a $5$-uniform Berge path on distinct vertices $v_1, v_2, v_3, v_4$. It should be noted that although the four indicated vertices are distinct and the hyperedges are distinct, other vertices are allowed to be repeated in different hyperedges. A hypergraph $H$ is \textit{connected} if there exists a Berge path between any two vertices in $H$. The \textit{connected components} of $H$ are the maximal connected subhypergraphs of $H$; we write $c(H)$ for the order of the largest connected component in $H$. A Berge path $v_1, v_2, \dots , v_k$ with hyperedges $e_1, e_2 , \dots , e_{k-1}$ can be extended to form a {\it Berge cycle} if we include a distinct hyperedge $e_k$ that includes both $v_1$ and $v_k$. In the graph setting, paths are types of trees, but we will need to be more restrictive in the hypergraph setting. Namely, Berge paths are too broad, leading us to the definition of a loose path. An {\it $r$-uniform loose path} $P_m^{(r)}$ on $m$ vertices is a sequence of distinct vertices $v_1, v_2, \dots , v_m$ along with hyperedges $$e_i:=v_{(i-1)(r-1)+1}, v_{(i-1)(r-1)+2}, \dots , v_{(i-1)(r-1)+r},$$ where $i=1, 2, \dots , k$ and $k$ denotes the number of hyperedges. It necessarily follows that $m=r+(k-1)(r-1)$. Notice that consecutive hyperedges of a loose path intersect in exactly one vertex and each loose path is necessarily a Berge path. For graphs, the definitions of loose paths and Berge paths coincide. It is well-known that there exist several equivalent definitions of a tree in the graph context. 
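Before turning to trees, the order formula for loose paths can be checked against the small $3$-uniform cases that recur later in the paper:

```latex
% Order of an r-uniform loose path with k hyperedges: m = r + (k-1)(r-1),
% since consecutive hyperedges overlap in exactly one vertex.
% For r = 3:
%   k = 1:  m = 3          (a single hyperedge)
%   k = 2:  m = 3 + 2 = 5  (P_5^{(3)}: v_1 v_2 v_3 - v_3 v_4 v_5)
%   k = 3:  m = 3 + 4 = 7  (P_7^{(3)}: v_1 v_2 v_3 - v_3 v_4 v_5 - v_5 v_6 v_7)
% In particular, a 3-uniform loose path always has odd order m = 2k+1.
```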
With regard to the definitions above, Theorem \ref{equiv} provides four equivalent definitions of an {\it $r$-uniform tree}. Note that some authors have referred to such trees as hypertrees (e.g., see \cite{L}). \begin{theorem} The following definitions of an $r$-uniform tree $T$ are equivalent: \begin{enumerate} \item $T$ is an $r$-uniform hypergraph that can be formed hyperedge-by-hyperedge with each new hyperedge intersecting the previous hypergraph at exactly one vertex. That is, each new hyperedge requires the creation of exactly $r-1$ new vertices. \item $T$ is a connected $r$-uniform hypergraph that does not contain any (Berge) cycles. \item $T$ is a connected $r$-uniform hypergraph in which the removal of any hyperedge (keeping all vertices) results in a hypergraph with exactly $r$ connected components. \item $T$ is an $r$-uniform hypergraph in which there exists a unique loose path between any pair of distinct vertices. \end{enumerate}\label{equiv} \end{theorem} \begin{proof} We prove a cyclic sequence of implications to obtain the desired theorem. \begin{enumerate} \item[$(1)\Rightarrow (2)$:] Suppose that $T$ is an $r$-uniform hypergraph that can be formed hyperedge-by-hyperedge with each new hyperedge intersecting the previous hypergraph at exactly one vertex. Clearly, this hypergraph and all of its subhypergraphs will have the property that any two distinct hyperedges intersect in at most one vertex. It follows that all paths in this hypergraph will be loose paths. If there were a Berge cycle in the hypergraph, then there must be a step in this construction process where a hyperedge is added to close off a loose path in the existing hypergraph. This would require the last hyperedge added in the Berge cycle to include at least two vertices from the previous hypergraph. Thus, no Berge cycle can exist. 
\\ \item[$(2) \Rightarrow (3)$:] (by contrapositive) Suppose that $T$ is a connected $r$-uniform hypergraph such that there exists a hyperedge $e^* = v_1, v_2, \ldots, v_r$ whose removal results in a hypergraph with fewer than $r$ connected components. (Note that the removal of a single hyperedge from a connected hypergraph always results in at most $r$ connected components, since every resulting component must contain at least one vertex of the removed hyperedge.) Then two vertices $v_i$ and $v_j$ must be in the same connected component in $T - e^*$. This means there is a Berge path connecting $v_i$ and $v_j$ which does not involve $e^*$, and adjoining $e^*$ to this Berge path gives a Berge cycle in $T$. \\ \item[$(3) \Rightarrow (4)$:] The proof is by strong induction on $s$, the number of hyperedges. In the base case (one hyperedge), both statements are automatically true. Now suppose that $(3)$ implies $(4)$ for any connected $r$-uniform hypergraph with $s$ hyperedges, and let $T$ be a connected $r$-uniform hypergraph with $s+1$ hyperedges in which the removal of any hyperedge (keeping all vertices) results in a hypergraph with exactly $r$ connected components. We may choose an arbitrary hyperedge $e = v_1, v_2, \ldots, v_r$ and remove it, resulting in connected components $T_1, T_2, \ldots, T_r$, with $v_i \in T_i$. Note that each $T_i$ is a connected, $r$-uniform hypergraph with fewer than $s+1$ hyperedges. Now suppose $v,w$ are distinct vertices in $T$. If $v,w$ are in the same connected component $T_i$, applying the induction hypothesis gives that there is a unique loose path in $T_i$ between $v$ and $w$. (The induction hypothesis is applicable since each $T_i$ is a connected hypergraph with no Berge cycles, and hence satisfies $(3)$ by the implication $(2) \Rightarrow (3)$.) If $v \in T_i$ and $w \in T_j$, $i \neq j$, then we can obtain a loose path from $v$ to $w$ by following the loose path from $v$ to $v_i$ in $T_i$, the hyperedge $e$, and the loose path from $v_j$ to $w$ in $T_j$. In either case, it is clear that the loose path between $v$ and $w$ is unique. 
\\ \item[$(4) \Rightarrow (1)$:] For the sake of contradiction, suppose there is an $r$-uniform hypergraph $T$ with $s$ hyperedges for which (4) holds but (1) does not. Take $s$ to be the smallest such number. Let $v$ and $w$ be two vertices in $T$ so that the unique loose path between $v$ and $w$ is maximal, i.e., it is not a proper subhypergraph of any other loose path in $T$. Let $e_1, e_2, \ldots, e_k$ be the hyperedges of the loose path between $v$ and $w$, and denote by $u$ the lone vertex in $e_{k-1} \cap e_k$. Now consider the subhypergraph $T'$ obtained from $T$ by removing all vertices in $e_k$ except for $u$. Notice that $T'$ has a unique loose path between any two distinct vertices, and since $T'$ has fewer than $s$ hyperedges, $T'$ can be built hyperedge-by-hyperedge with each new hyperedge intersecting the previous hypergraph in only one vertex. However, we now add $e_k$ to $T'$ to obtain $T$, and we have actually built $T$ by such a process. This is a contradiction, which completes the proof. \end{enumerate} Thus, we find that $(1)$, $(2)$, $(3)$, and $(4)$ are equivalent. \end{proof} Let $\delta (H)$ denote the minimal degree of any vertex in $H$, where the {\it degree} of a vertex is the number of hyperedges containing that vertex. Define a {\it free hyperedge} of an $r$-uniform hypergraph to be a hyperedge in which exactly $r-1$ vertices have degree $1$. So, if we add a free hyperedge to a hypergraph, we add in $r-1$ new vertices along with the corresponding hyperedge. If $T$ is an $r$-uniform tree, then an {\it end} vertex of $T$ is a vertex of degree $1$ in a free hyperedge. The hypergraph trees that have just been defined possess a property analogous to a well-known result regarding degrees of vertices and the existence of trees as subgraphs (e.g., see Lemma 2.1 in \cite{GV}). \begin{theorem} Assume that $r\ge 2$ and let $T_m^{(r)}$ be any $r$-uniform tree of order $m$. 
If $H$ is any $r$-uniform hypergraph of order $p$ with $$\delta (H)\ge \left( \begin{array}{c} p-1 \\ r-1\end{array}\right)-\left(\begin{array}{c} p-m \\ r-1 \end{array}\right),$$ then $H$ contains a subhypergraph isomorphic to $T_m^{(r)}$. \end{theorem} \begin{proof} We proceed by induction on the number $k$ of hyperedges in $T_m^{(r)}$. Note that $m=r+(k-1)(r-1)$. If $k=1$, then $m=r$ and $$\delta (H)\ge \left( \begin{array}{c} p-1 \\ r-1\end{array}\right)-\left(\begin{array}{c} p-r \\ r-1 \end{array}\right)>0.$$ Thus, there exists at least one hyperedge, forming a $T_r^{(r)}$. Now assume the theorem is true for all trees having $k$ hyperedges, let $T^{(r)}_{r+k(r-1)}$ be any tree with $k+1$ hyperedges, and suppose that $$\delta (H)\ge \left( \begin{array}{c} p-1 \\ r-1\end{array}\right)-\left(\begin{array}{c} p-(r+k(r-1)) \\ r-1 \end{array}\right).$$ Denote by $T'$ the tree with $k$ hyperedges formed by removing a free hyperedge (and all of its degree $1$ vertices) from $T^{(r)}_{r+k(r-1)}$ and assume that $x$ is the vertex in $T'$ that was incident with the removed free hyperedge. Then by the inductive hypothesis, there must be a subhypergraph of $H$ isomorphic to $T'$. The maximum number of hyperedges in $H$ that contain $x$ and at least one other vertex of $T'$ is $$ \left( \begin{array}{c} p-1 \\ r-1\end{array}\right)-\left(\begin{array}{c} p-(r+(k-1)(r-1)) \\ r-1 \end{array}\right),$$ so the assumed inequality implies that some hyperedge that contains $x$ and no other vertex of $T'$ must exist. Such a hyperedge can be added to $T'$ as a free hyperedge to form a copy of $T^{(r)}_{r+k(r-1)}$. \end{proof} \section{$n$-Good Hypergraphs} In this section, we introduce the concept of $n$-good $r$-uniform hypergraphs. As in the graph setting, the determination of whether or not a hypergraph is $n$-good depends on the value of a specific Ramsey number. 
Recall that an $n$-good graph $G$ is a connected graph of order $m$ that satisfies $$R(G, K_n )=R(T_m, K_n )=(m-1)(n-1)+1.$$ The fact that this concept is well-defined stems from the observation that the Ramsey number $R(T_m, K_n )$ is independent of the particular choice of tree $T_m$ of order $m$. In the $r$-uniform hypergraph setting, it is not immediately clear that this independence is present. So, when considering the concept of an $n$-good hypergraph, we focus on the fact that the Ramsey number $R(T_m,K_n)$ equals the lower bound proved by Chv\'atal and Harary \cite{CH}: $$R(T_m, K_n)\ge (m-1)(n-1)+1.$$ This result, and the corresponding upper bound proved by Chv\'atal \cite{C}, were generalized to the setting of $r$-uniform hypergraphs in \cite[Theorem 3]{BHR}. There, it was shown that if $T_m^{(r)}$ is any $r$-uniform tree of order $m$, then \begin{equation}(m-1) \left( \ceil[\Big]{\frac{n}{r-1}} -1\right)+1 \le R(T_m^{(r)}, K_n^{(r)};r)\le (m-1)(n-1)+1.\label{ChvHar} \end{equation} When $r=2$, these two bounds agree and the general lower bound (\ref{Burrineq}) proved by Burr \cite{B} provides no improvement to Chv\'atal and Harary's bound. We offer the following improvement of the lower bound in (\ref{ChvHar}), which may be viewed as a generalization of Theorem 1 of \cite{B}. \begin{theorem} Let $H_1$ and $H_2$ be $r$-uniform hypergraphs. If $c(H_2) \geq t(H_1)$, then $$R(H_1,H_2;r) \geq (\chi_w(H_1) - 1)(c(H_2) - 1) + t(H_1).$$ \label{BurrGen}\end{theorem} \vspace{-.4in} \begin{proof} Let $k = (\chi_w(H_1) -1)(c(H_2) - 1) + t(H_1)$. We will construct a red-blue coloring of $K_{k-1}^{(r)}$ which contains neither a red $H_1$ nor a blue $H_2$. Begin by taking $(\chi_w(H_1) - 1)$ disjoint copies of $K_{c(H_2)-1}^{(r)}$, along with a disjoint copy of $K_{t(H_1)-1}^{(r)}$. The hyperedges strictly contained in each of these complete subhypergraphs are colored blue, with all other hyperedges colored red. 
The order of the largest connected component in any blue subhypergraph is $c(H_2)-1$, so no blue copy of $H_2$ can exist. Denote by $H_R$ the subhypergraph spanned by the red hyperedges. Note that we can obtain a weak proper vertex coloring of $H_R$ by using the same color on all vertices in each of the original disjoint complete subhypergraphs. If $t(H_1)=1$, then $\chi_w(H_R)=\chi_w(H_1)-1$, since in this case there is no $K_{t(H_1)-1}^{(r)}$. If $t(H_1) >1$, then $\chi_w(H_R)=\chi_w(H_1)$ and $t(H_R)=t(H_1)-1$, as any smallest color class in this coloring must have order $t(H_1)-1$. In either case, it is clear that no red subhypergraph isomorphic to $H_1$ exists. Thus, $R(H_1, H_2 ; r)\ge k.$ \end{proof} With this theorem in place, we offer the following definition, generalizing the concept of a $G$-good graph as first defined by Burr \cite{B}. Let $H_1$ and $H_2$ be finite hypergraphs. If $c(H_2)\ge t(H_1)$, the hypergraph $H_2$ is called {\it $H_1$-good} if $$R(H_1, H_2 ; r)=(\chi_w(H_1)-1)(c(H_2)-1)+t(H_1).$$ The case where $H_1 = K_n^{(r)}$ is of special interest. Following the terminology used in \cite{BE}, we offer the following definition of an $n$-good hypergraph. (Note that in this case, the inequality $c(H_2) \ge t(K_n^{(r)})$ always holds when $H_2$ is nonempty.) \begin{definition} The hypergraph $H_2$ is called {\it $n$-good} whenever $$R(K_n^{(r)}, H_2 ; r)=(\chi_w(K_n^{(r)})-1)(c(H_2)-1)+t(K_n^{(r)}).$$ \end{definition} Note that $\chi _w(K_n^{(r)})=\ceil[\big]{\frac{n}{r-1}}$ since at most $r-1$ vertices can receive the same color in any weak coloring. If we let $n=q(r-1)+k$, where $q, k \in \mathbb{Z}$ with $0\le k <(r-1)$, then \begin{equation}t(K_n^{(r)})=\left\{ \begin{array}{ll} k & \mbox{if} \ k\ne 0 \\ r-1 & \mbox{if} \ k=0, \end{array}\right.\label{colorclass}\end{equation} resulting in the following corollary, which gives a slight improvement on the lower bounds given in (\ref{ChvHar}) when $H$ is a tree. 
\begin{corollary} If $H$ is any connected $r$-uniform hypergraph of order $m\ge r$ and $n=q(r-1)+k$, where $q, k \in \mathbb{Z}$ with $0\le k <(r-1)$, then $$R(H, K_{n}^{(r)}; r) \ge \left\{ \begin{array}{ll} (m-1)\left( \ceil[\big]{\frac{n}{r-1}}-1\right)+ k & \mbox{if} \ k\ne 0 \\ \notag \\ (m-1)\left( \frac{n}{r-1}-1\right)+r-1 & \mbox{if} \ k=0.\end{array} \right.$$ \label{Burr} \end{corollary} Known lower bounds for certain $3$- and $4$-uniform Ramsey numbers allow us to verify that $K_4^{(3)}$, $K_5^{(3)}$, and $K_6^{(3)}$ are not $4$-good, $K_5^{(3)}$ is not $5$-good, and $K_5^{(4)}$ is not $5$-good (see Section 7.1 of \cite{Rad}). It is not immediately clear whether or not any $n$-good hypergraphs exist, but trees seem like an appropriate place to begin our search since they were central to the definition in the graph setting. So, we now focus our attention on the Ramsey numbers $R(T_m^{(r)}, K_n^{(r)}; r)$, with an emphasis on trying to determine whether or not a tree $T_m^{(r)}$ is $n$-good. The simplest tree $T_r^{(r)}$ consists of a single hyperedge. From Corollary \ref{Burr}, we see that the trivial Ramsey number $R(T_r^{(r)}, K_n^{(r)};r)=n$ shows that $T_r^{(r)}$ is $n$-good for all $n\ge r$. Before focusing exclusively on $r$-uniform trees, we prove several bounds that hold for $R(H, K_n^{(r)};r)$. Similar to the approach used in \cite{BE}, we offer the following theorem and corollary. \begin{theorem} For any connected $r$-uniform hypergraph $H$ of order $m\ge r$, $$R(H,K_{n-r+1}^{(r)};r)+m-1\le R(H, K_n^{(r)};r).$$\label{verygood} \end{theorem} \vspace{-.3in} \begin{proof} Let $s=R(H,K_{n-r+1}^{(r)};r)$ and consider a red/blue coloring of $K_{s-1}^{(r)}$ that lacks both a red $H$ and a blue $K_{n-r+1}^{(r)}$. Take the disjoint union of this hypergraph and a red $K_{m-1}^{(r)}$, coloring all interconnecting hyperedges blue. Clearly, no red $H$ exists and the largest complete blue subhypergraph contains at most $n-1$ vertices. Thus, $R(H, K_n^{(r)};r)\ge s+m-1$. 
\end{proof} \begin{corollary} If a connected $r$-uniform hypergraph $H$ of order $m\ge r$ is $n$-good, then it is $(n-r+1)$-good. \label{goodreduction} \end{corollary} \begin{proof} Suppose that $H$ is $n$-good. That is, $$R(H, K_n^{(r)};r)= (m-1)\left( \ceil[\Big]{\frac{n}{r-1}}-1\right)+t(K_n^{(r)}).$$ From Theorem \ref{verygood}, \begin{align} R(H,K_{n-r+1}^{(r)};r) &\le R(H, K_n^{(r)};r)-m+1 \notag \\ &\le (m-1)\left( \ceil[\Big]{\frac{n}{r-1}}-2\right)+t(K_n^{(r)}) \notag \\ &\le (m-1)\left( \ceil[\Big]{\frac{n-r+1}{r-1}}-1\right)+t(K_n^{(r)}).\notag \end{align} From (\ref{colorclass}), the value of $t(K_{n}^{(r)})$ is determined modulo $r-1$. So, $t(K_{n}^{(r)})=t(K_{n-r+1}^{(r)})$, and it follows that $H$ is $(n-r+1)$-good. \end{proof} The following theorem should be compared to Theorem 3.2 of \cite{BE}. \begin{theorem} Let $H'$ be a connected $r$-uniform hypergraph of order $m-r+1\ge r$ and let $H$ be the hypergraph formed by adding a free hyperedge to $H'$ ($H$ has order $m$). Then $$R(H, K_{n}^{(r)};r)\le \max\{ R(H' ,K_n^{(r)};r), R(H, K_{n-1}^{(r)};r)+m-r+1 \}.$$ \label{max} \end{theorem} \begin{proof} Let $s=\max\{ R(H' ,K_n^{(r)};r), R(H, K_{n-1}^{(r)};r)+m-r+1 \}$ and consider a red/blue coloring of the hyperedges in $K_s^{(r)}$. If there exists a blue $K_n^{(r)}$, we are done, so suppose such a subhypergraph does not exist. Then there must be a red $H'$. Let $x$ be a vertex in $H'$ for which the addition of a free hyperedge incident with $x$ results in a subhypergraph isomorphic to $H$. There are $$s-(m-r+1)\ge R(H, K_{n-1}^{(r)};r)$$ vertices not contained in the red $H'$. If any hyperedge including $x$ and any $r-1$ of these remaining vertices is red, then we have a red $H$. So, assume that all such hyperedges are blue. The subhypergraph induced by the remaining vertices contains a red $H$ or a blue $K_{n-1}^{(r)}$. In the latter case, including $x$ produces a blue $K_n^{(r)}$. \end{proof} \noindent We obtain the following corollary. 
\begin{corollary} Let $H'$ be an $r$-uniform hypergraph of order $m-r+1\ge r$ and let $H$ be the hypergraph formed by adding a free hyperedge to $H'$ ($H$ has order $m$). If $n\ge r+1$, $$R(H', K_{n}^{(r)};r)\le n_1, \qquad \mbox{and} \qquad R(H, K_{n-1}^{(r)};r)\le n_2,$$ where $n_1\le n_2+m-r+1$, then $$R(H, K_{n}^{(r)};r)\le n_2+m-r+1.$$ \label{genconstruct1} \end{corollary} Using a similar construction to that of Theorem \ref{max}, the following theorem and its corollary will be useful in upcoming proofs. \begin{theorem} Let $H'$ be a connected $r$-uniform hypergraph of order $m-r+1\ge r$ and let $H$ be the hypergraph formed by adding a free hyperedge to $H'$ ($H$ has order $m$). Then $$R(H, K_{n}^{(r)};r)\le \max\{ R(H' ,K_n^{(r)};r)+n-1, R(H, K_{n-1}^{(r)};r)\}.$$ \label{max2} \end{theorem} \begin{proof} Let $s=\max\{ R(H' ,K_n^{(r)};r)+n-1, R(H, K_{n-1}^{(r)};r) \}$ and consider a red/blue coloring of the hyperedges in $K_s^{(r)}$. If there exists a red $H$, we are done, so suppose such a subhypergraph does not exist. Then there must be a blue $K_{n-1}^{(r)}$. There are $$s-(n-1)\ge R(H', K_{n}^{(r)};r)$$ vertices not contained in the blue $K_{n-1}^{(r)}$. The subhypergraph they induce contains a red $H'$ or a blue $K_n^{(r)}$; in the latter case we are done, so assume the former. Let $x$ be a vertex in $H'$ for which the addition of a free hyperedge incident with $x$ results in a subhypergraph isomorphic to $H$. If any hyperedge including $x$ and any $r-1$ vertices from the blue $K_{n-1}^{(r)}$ is red, then we have a red $H$. Otherwise, all such hyperedges are blue and we have a blue $K_n^{(r)}$. \end{proof} \begin{corollary} Let $H'$ be an $r$-uniform hypergraph of order $m-r+1\ge r$ and let $H$ be the hypergraph formed by adding a free hyperedge to $H'$ ($H$ has order $m$). 
If $n\ge r+1$, $$R(H', K_{n}^{(r)};r)\le n_1, \qquad \mbox{and} \qquad R(H, K_{n-1}^{(r)};r)\le n_2,$$ where $n_2\le n_1+n-1$, then $$R(H, K_{n}^{(r)};r)\le n_1+n-1.$$ \label{genconstruct2} \end{corollary} Before we shift our focus to trees, we conclude this section with a proof that whenever an $r$-uniform hypergraph is $n$-good, a finite number of disjoint copies of that hypergraph is $n$-good. When $a\in \mathbb{N}$ and $H$ is any $r$-uniform hypergraph, we denote by $aH$ the disjoint union of $a$ copies of $H$. \begin{theorem} If an $r$-uniform hypergraph $H$ of order $m$ is $n$-good, where $n\ge 2r-1$, then $aH$ is $n$-good for all $a\in \mathbb{N}$. \end{theorem} \begin{proof} Assuming that $H$ is $n$-good, where $n\ge 2r-1$, it follows that $$R(H, K_n^{(r)};r)=(m-1)\left( \ceil[\bigg]{\frac{n}{r-1}}-1\right)+t(K_n^{(r)}).$$ It remains to be shown that \begin{equation} R(aH, K_n^{(r)};r)\le (am-1)\left( \ceil[\bigg]{\frac{n}{r-1}}-1\right)+t(K_n^{(r)}).\label{need}\end{equation} By Lemma 3.1 of \cite{OR} (which generalized the analogous result for graphs from \cite{BES}), it follows that \begin{equation} R(aH, K_n^{(r)};r)\le (m-1)\left( \ceil[\bigg]{\frac{n}{r-1}}-1\right) +(a-1)m+t(K_n^{(r)}).\label{given}\end{equation} To deduce (\ref{need}) from (\ref{given}), it suffices to prove that $$2(a-1)\le \left( \ceil[\bigg]{\frac{n}{r-1}}-1\right)(a-1),$$ which is true whenever $\ceil[\big]{\frac{n}{r-1}}\ge 3$ (equivalently, $n\ge 2r-1$). \end{proof} \section{Are $r$-Uniform Trees $n$-Good?} While we will not obtain a complete answer to the question ``Are $r$-uniform trees $n$-good?'', we will exhibit infinitely many cases in which the answer is ``yes,'' without encountering a single counterexample. The usefulness of Corollaries \ref{genconstruct1} and \ref{genconstruct2} in determining upper bounds for tree/complete hypergraph Ramsey numbers follows from equivalent definition $(1)$ of an $r$-uniform tree (given in Theorem \ref{equiv}). 
For example, we offer the following application of Corollary~\ref{genconstruct2} concerning the (unique) $r$-uniform tree of order $2r-1$. \begin{theorem} For all $r\ge 3$, $$2r\le R(T_{2r-1}^{(r)}, K_{r+1}^{(r)};r)\le 2r+1.$$ \end{theorem} \begin{proof} The lower bound follows from Theorem \ref{BurrGen}. The upper bound is a direct application of Corollary \ref{genconstruct2} applied to the trivial Ramsey numbers $$R(T_r^{(r)}, K_{r+1}^{(r)};r)=r+1=m_1 \quad \mbox{and} \quad R(T_{2r-1}^{(r)}, K_{r}^{(r)};r)=2r-1=m_2.$$ It is easily confirmed that $m_2\le m_1+r$, as is required to apply Corollary \ref{genconstruct2}. \end{proof} Let $T$ be an $r$-uniform tree containing exactly $t$ hyperedges. In 2009, Loh \cite{L} solved a problem posed by Bohman, Frieze, and Mubayi \cite{BFM} when he proved that if an $r$-uniform hypergraph $H$ satisfies $\chi _w (H)>t$, then $T$ is isomorphic to a subhypergraph of $H$. This result is independent of the specific tree being considered and it is independent of the uniformity $r$. As an application of this result, Loh proved the following upper bound for $R(T_m^{(r)}, K_n^{(r)};r)$, which is an $r$-uniform analogue of the bound proved by Chv\'atal \cite{C} for graphs. We also note that this upper bound improves on the upper bound given in Theorem 3.4 of \cite{BHR}. We reproduce Loh's proof for completeness and to provide the proof in the context of the present paper. \begin{theorem}[Loh, 2009] If $n\ge r\ge 2$ and $T_m^{(r)}$ is any $r$-uniform tree on $m$ vertices, then $$R(T_m^{(r)}, K_{n}^{(r)};r)\le \frac{(m-1)(n-1)}{r-1}+1.$$\label{Loh} \end{theorem} \begin{proof} Let $p=\frac{(m-1)(n-1)}{r-1}+1$ and suppose that $T_m^{(r)}$ contains exactly $t$ hyperedges. Then $t=\frac{m-1}{r-1}$ and we consider a red/blue coloring of the hyperedges of $K_p^{(r)}$. Denote the subhypergraphs spanned by the red and blue hyperedges by $H_R$ and $H_B$, respectively. 
If $\chi _w (H_R) \le t$, then any collection of vertices of the same color forms an independent set and it follows that $H_R$ has an independent set of cardinality at least $$\ceil[\Big]{\frac{p}{t}}=\ceil[\bigg]{\frac{(n-1)t+1}{t}}=n.$$ Such a collection of vertices corresponds to a subhypergraph of $H_B$ that is isomorphic to $K_n^{(r)}$. If $\chi _w(H_R)>t$, then Theorem 1 of \cite{L} implies that $H_R$ contains a subhypergraph isomorphic to $T_m^{(r)}$. It follows that every red/blue coloring of the hyperedges of $K_p^{(r)}$ contains a red $T_m^{(r)}$ or a blue $K_n^{(r)}$. \end{proof} \begin{corollary} Let $n\ge r\ge 2$ and $T_m^{(r)}$ be any $r$-uniform tree. If $r-1$ divides $n-1$, then $T_m^{(r)}$ is $n$-good. \label{Loh2} \end{corollary} \begin{proof} Writing $(r-1)\ell =n-1$, it follows that $t(K_n^{(r)})=1$ and $\ceil[\big]{\frac{n}{r-1}}=\ell +1$. Thus, the lower bound in Theorem \ref{BurrGen} becomes $(m-1)(\ell)+1$, agreeing with the upper bound in Theorem \ref{Loh}. \end{proof} \noindent In particular, note that when $r-1$ divides $n-1$, the Ramsey number $R(T_m^{(r)}, K_n^{(r)};r)$ is independent of the choice of $r$-uniform tree on $m$ vertices. Next, we consider a construction that provides an upper bound for all Ramsey numbers $R(T_{2r-1}^{(r)}, K_n^{(r)};r)$ when $r\ge 3$ is odd. In the next section, this upper bound will prove to be tight in the case of $r=3$. Before we state and prove this theorem, let us recall a few definitions. If $H$ is a hypergraph, a {\em matching in $H$} is a set of hyperedges from $H$ which are all disjoint from one another. The {\em size} of a matching $M$ is the number of hyperedges in $M$. A matching is {\em maximal} if it is not a proper subset of any other matching. \begin{theorem} Let $r\ge 3$ be odd and $n\ge r+1$. 
Then $R(T_{2r-1}^{(r)}, K_n^{(r)};r)\le p,$ where $$p=\left\{ \begin{array}{lr} \frac{r+1}{2}n-(r-1) & \mbox{if}\ n \ \mbox{is even} \\ \\ \frac{r+1}{2}n-\frac{r-1}{2} & \mbox{if}\ n \ \mbox{is odd.} \end{array} \right.$$ \label{tree2} \end{theorem} \begin{proof} Let $p$ be defined as above and consider a red/blue coloring of the hyperedges in $K_p^{(r)}$ that lacks a red subhypergraph isomorphic to $T_{2r-1}^{(r)}$. Suppose that $M$ is a maximal red matching of size $k$ and define $S_1$ to be a set of vertices consisting of exactly two vertices from each hyperedge in $M$. Then the subhypergraph induced by $S_1$ is a complete blue hypergraph: since $r$ is odd, any hyperedge contained in $S_1$ must intersect some hyperedge of $M$ in exactly one vertex, and if such a hyperedge were red, it would combine with that hyperedge of $M$ to form a red $T_{2r-1}^{(r)}$. If $2k\ge n$, then the coloring contains a blue $K_n^{(r)}$. Otherwise, $2k\le n-1$ and we can define $S_2$ to consist of all vertices not in $M$ along with a single vertex from each hyperedge in $M$. In the case where $n$ is even, equality is impossible since $2k$ is even, so this inequality can be improved to $2k\le n-2$. When $n$ is odd, we have that \begin{align} |S_2|&=p-(r-1)k \notag \\ &=\frac{r+1}{2}n-\frac{r-1}{2}-(r-1)k \notag \\ &\ge \frac{r+1}{2}n-\frac{r-1}{2}+(r-1)\left(\frac{1-n}{2}\right) =n.\notag \end{align} Similarly, when $n$ is even, we have \begin{align} |S_2|&=p-(r-1)k \notag \\ &=\frac{r+1}{2}n-(r-1)-(r-1)k \notag \\ &\ge \frac{r+1}{2}n-(r-1)+(r-1)\left(\frac{2-n}{2}\right) =n.\notag \end{align} Any hyperedge contained in $S_2$ either avoids $M$ entirely, and is therefore blue by the maximality of $M$, or intersects some hyperedge of $M$ in exactly one vertex, in which case a red such hyperedge would again yield a red $T_{2r-1}^{(r)}$. Thus, in both cases, the subhypergraph induced by $S_2$ forms a complete blue hypergraph of order at least $n$, completing the proof of the theorem. \end{proof} Now, we restrict our attention in the next section to showing that certain $3$-uniform trees are $n$-good and to proving upper bounds whenever our methods are insufficient for determining exact evaluations of $R(T_m^{(3)}, K_n^{(3)};3)$. 
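Before doing so, it is worth recording the $r=3$ specialization of Theorem \ref{tree2}, since this is the case applied below; here $\frac{r+1}{2}=2$ and $\frac{r-1}{2}=1$:

```latex
% Theorem \ref{tree2} with r = 3, so that T_{2r-1}^{(r)} = T_5^{(3)}:
R(T_{5}^{(3)}, K_n^{(3)};3)\le \left\{ \begin{array}{ll}
  2n-2 & \mbox{if}\ n\ \mbox{is even} \\
  2n-1 & \mbox{if}\ n\ \mbox{is odd,}
\end{array}\right.
% which matches the lower bound of Theorem \ref{BurrGen} for m = 5.
```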
\section{$3$-Uniform $n$-Good Hypergraphs} Having laid down the appropriate framework with which to study $n$-good hypergraphs when $r\ge 3$, we now focus on the $3$-uniform case. The smaller uniformity will enable us to give numerous precise evaluations of $R(H, K_n^{(3)};3)$, from which we can gain a better understanding of which hypergraphs are $n$-good. \subsection{Trees Versus Complete $3$-Uniform Hypergraphs}\label{3uniform} First, we focus on finding exact Ramsey numbers for certain trees and complete hypergraphs in the $3$-uniform case. The first nontrivial tree to consider is $T_5^{(3)}$, which is a loose path with two hyperedges and is unique up to isomorphism. It is easily confirmed that the lower bound given in Theorem \ref{BurrGen} and the upper bound given in Theorem \ref{tree2} agree in this case, implying \begin{equation} R(T_{5}^{(3)}, K_{n}^{(3)}; 3) = \left\{ \begin{array}{rl} 2n-2 & \mbox{if $n$ is even} \\ 2n-1 & \mbox{if $n$ is odd.} \end{array} \right.\label{5trees}\end{equation} Of course, when $n$ is odd, this result also follows from Theorem \ref{Loh} (and Corollary \ref{Loh2}). Therefore, $T_5^{(3)}$ is $n$-good for all $n\ge 3$. Our efforts in this section lead to the determination of the values/ranges for $R(T_m^{(3)}, K_n^{(3)};3)$ given in Table \ref{t1}. All exact evaluations included in this table correspond to trees that are $n$-good. 
\begin{table}[H] \centering \begin{tabular}{|c||c|c|c|c|c|c|c|} \hline $_m\ \backslash \ ^n $ & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline\hline 5 & 6 & 9 & 10 & 13 & 14 & 17 & 18 \\ \hline 7 & [8, 9] & 13 & [14, 15] & 19 & [20, 21] & 25 & [26, 27] \\ \hline 9 & [10, 12] & 17 & [18, 20] & 25 & [26, 28] & 33 & [34, 36] \\ \hline 11 & [12, 15] & 21 & [22, 25] & 31 & [32, 35] & 41 & [42, 45] \\ \hline 13 & [14, 18] & 25 & [26, 30] & 37 & [38, 42] & 49 & [50, 54] \\ \hline 15 & [16, 21] & 29 & [30, 35] & 43 & [44, 49] & 57 & [58, 63] \\ \hline \hline 2j+1 & [2j+2, 3j] & 4j+1 & [4j+2, 5j] & 6j+1 & [6j+2, 7j] & 8j+1 & [8j+2, 9j] \\ \hline \end{tabular} \caption{Exact values/ranges for $R(T_m^{(3)}, K_n^{(3)};3)$ whenever $m=2j+1\ge 5$ and $4\le n\le 10$. All of the exact values shown in this chart correspond to trees that are $n$-good.} \label{t1} \end{table} \begin{theorem} For all $n\ge 3$ and $j\ge 2$, $$R(T_{2j+1}^{(3)}, K_{n}^{(3)};3)=j(n-1)+1$$ when $n$ is odd, and $$j(n-2)+2\le R(T_{2j+1}^{(3)}, K_{n}^{(3)};3)\le j(n-1)$$ when $n$ is even. \label{treebounds}\end{theorem} \begin{proof} All lower bounds follow from Theorem \ref{BurrGen} and the upper bound in the case when $n$ is odd follows from Theorem \ref{Loh} (and Corollary \ref{Loh2}). To prove the upper bounds for a fixed even value of $n$, we proceed by induction on $j$, using $R(T_5^{(3)}, K_n^{(3)};3)$ as the base case. Suppose that the upper bound holds for all $3$-uniform trees having fewer than $j$ hyperedges and let $T_{2j+1}^{(3)}$ be a $3$-uniform tree containing exactly $j$ hyperedges. Let $T'$ be the tree formed by removing a single free hyperedge (along with both of its degree $1$ vertices) from $T_{2j+1}^{(3)}$. By the inductive hypothesis, we have that $$R(T', K_n^{(3)};3)\le (j-1)(n-1),$$ and since $n-1$ is odd, we have that $$R(T_{2j+1}^{(3)},K_{n-1}^{(3)};3)=j(n-2)+1.$$ Letting $$n_1=(j-1)(n-1) \quad \mbox{and} \quad n_2=j(n-2)+1,$$ it is easily confirmed that $n_2\le n_1+n-1$. 
Thus, from Corollary \ref{genconstruct2}, it follows that $$R(T_{2j+1}^{(3)}, K_n^{(3)};3)\le jn-j=j(n-1)$$ when $n$ is even. \end{proof} Although Theorem \ref{treebounds} is the best we can offer for arbitrary $3$-uniform trees, stronger upper bounds can be determined for the special case of loose paths when $n$ is even and this is the focus of Subsection \ref{loose}. \subsection{Loose Paths Versus Complete $3$-Uniform Hypergraphs}\label{loose} Now, we turn our attention to improving the bounds of $R(T_m^{(3)}, K_n^{(3)};3)$ when $T_m^{(3)}$ is the loose path $P_m^{(3)}$. In the following theorem, we will show that the $3$-uniform loose path $P_m^{(3)}$ is $4$-good (where $m$ is odd). \begin{theorem} If $j\ge 1$, then $R(P_{2j+1}^{(3)},K_4^{(3)};3)=2j+2$. \label{4good} \end{theorem} \begin{proof} Theorem \ref{BurrGen} shows that $2j+2$ is a lower bound for the given Ramsey number for all values of $j$, so it remains to show that $R(P_{2j+1}^{(3)},K_4^{(3)};3)\le 2j+2$. We proceed by (weak) induction on $j$. The $j=1$ case follows from the trivial Ramsey number $R(P_{3}^{(3)}, K_4^{(3)};3)=4$. Now, assume that $R(P_{2j-1}^{(3)},K_4^{(3)};3)\le 2(j-1)+2$ and consider a red/blue coloring of $K_{2j+2}^{(3)}$. By the inductive hypothesis, there exists a red $P_{2j-1}^{(3)}$ or a blue $K_4^{(3)}$. In the latter case, we are done, so assume the former case and let $x_1$ and $x_2$ be distinct end vertices chosen from the first and last hyperedges, respectively, of the red $P_{2j-1}^{(3)}$. Outside of the red $P_{2j-1}^{(3)}$, there are three remaining vertices; label them $y_1$, $y_2$, and $y_3$. There are now two cases to consider: the hyperedge $y_1y_2y_3$ is either red or it is blue. \begin{enumerate} \item[Case 1:] Assume $y_1y_2y_3$ is blue. Then if any of the hyperedges $x_1y_1y_2$, $x_1y_2y_3$, or $x_1y_1y_3$ is red, the path extends to form a red $P_{2j+1}^{(3)}$. Otherwise, all of these hyperedges are blue and the subhypergraph induced by $\{ x_1, y_1, y_2, y_3\}$ is a blue $K_4^{(3)}$. 
\item[Case 2:] Assume $y_1y_2y_3$ is red and label the hyperedges in the red $P_{2j-1}^{(3)}$ by $e_1, e_2, \dots , e_{j-1}$, where $e_i$ is adjacent to $e_{i+1}$ for each $1\le i\le j-2$. Without loss of generality, assume that the end vertex $x_1$ is contained in $e_1$ and $x_2$ is contained in $e_{j-1}$. Now consider the subhypergraph induced by $\{ x_1, x_2, y_1, y_2\}$. It either forms a blue $K_4^{(3)}$ or some hyperedge is red. If $x_\ell y_1y_2$ is red for some $\ell \in \{1, 2\}$, then it is clear that we can form a red $P_{2j+1}^{(3)}$ by adding this hyperedge to the corresponding end of the red $P_{2j-1}^{(3)}$. If $x_1x_2y_\ell$ is red for some $\ell \in \{1, 2\}$, then the hyperedges $$y_1y_2y_3 - x_1x_2y_\ell - e_1 - e_2 - \cdots - e_{j-2}$$ form a red $P_{2j+1}^{(3)}$. \end{enumerate} Hence, regardless of how we color the hyperedges in $K_{2j+2}^{(3)}$, we are able to prove the existence of a red $P_{2j+1}^{(3)}$ or a blue $K_4^{(3)}$. \end{proof} \begin{theorem} The loose path $P_7^{(3)}$ satisfies $$R(P_7^{(3)}, K_8^{(3)};3)=20.$$ \label{P78good} \end{theorem} \begin{proof} It suffices to prove that $R(P_7^{(3)}, K_8^{(3)}; 3)\le 20$, so consider a red/blue coloring of the hyperedges in $K_{20}^{(3)}$. Since $R(P_5^{(3)}, K_{8}^{(3)};3)=14$ (equation (\ref{5trees})), there exists a red $P_5^{(3)}$ or a blue $K_8^{(3)}$. Assume the former case and consider the subhypergraph induced on the 15 vertices not included in the red $P_5^{(3)}$. We again apply the same Ramsey number to see that there is a red $P_5^{(3)}$ or a blue $K_8^{(3)}$. Assume the former case so that we now have two disjoint red subhypergraphs isomorphic to $P_5^{(3)}$, along with ten other vertices. Denote the paths by $P_1$ and $P_2$ and let their hyperedges be given by $$w_1w_2w_3 - w_3w_4w_5 \qquad \mbox{and} \qquad x_1x_2x_3 - x_3x_4x_5,$$ respectively.
Since $R(P_5^{(3)}, K_6^{(3)};3)=10$ (equation (\ref{5trees})), it follows that the subhypergraph induced on the remaining ten vertices contains a red $P_5^{(3)}$ or a blue $K_6^{(3)}$, giving us two cases to consider. \begin{enumerate} \item[Case 1:] If there exists a blue $K_6^{(3)}$, then it is disjoint from the two paths (see Figure \ref{fig1}). \begin{figure}[H] \centerline{{\includegraphics[width=0.8\textwidth]{case1-slim.png}} }\caption{A coloring of $K_{20}^{(3)}$ that contains two disjoint red $P_5^{(3)}$ subhypergraphs and a disjoint blue $K_6^{(3)}$.} \label{fig1} \end{figure} \noindent If we denote the vertices in the blue $K_6^{(3)}$ by $z_1, z_2, \dots , z_6$, then consider the hyperedges of the forms $w_5x_5 z_i$, $w_5 z_iz_j$, and $x_5 z_iz_j$, where $i\ne j$. If any such hyperedge is red, we can form a red $P_7^{(3)}$. Otherwise, all such hyperedges are blue and the subhypergraph induced by $\{w_5, x_5, z_1, z_2, \dots , z_6 \}$ is a blue $K_8^{(3)}$. \item[Case 2:] If there exists a red $P_5^{(3)}$, call it $P_3$ and denote its hyperedges by $y_1y_2y_3 - y_3y_4y_5$ (see Figure \ref{fig2}). \begin{figure}[H] \centerline{{\includegraphics[width=0.8\textwidth]{case2-slim.png}} }\caption{A coloring of $K_{20}^{(3)}$ that contains three disjoint red $P_5^{(3)}$ subhypergraphs.} \label{fig2} \end{figure} \noindent Denote the vertices not contained in $P_1$, $P_2$, or $P_3$ by $z_1, z_2, \dots , z_5$. The subhypergraph induced by $\{ z_1, z_2, \dots , z_5\}$ either contains a red hyperedge or it does not, giving us two subcases to consider. \begin{enumerate} \item[Subcase 1:] Suppose that the subhypergraph induced by $\{ z_1, z_2, \dots , z_5\}$ is a blue $K_5^{(3)}$. If any of the hyperedges $w_5x_5y_5$, $w_5x_5z_i$, $w_5y_5z_i$, $x_5y_5z_i$, $w_5z_iz_j$, $x_5z_iz_j$, or $y_5z_iz_j$ is red, where $i\ne j$, then we can form a red $P_7^{(3)}$.
If they are all blue, then the subhypergraph induced by $$\{w_5, x_5, y_5, z_1, z_2, \dots , z_5 \}$$ is a blue $K_8^{(3)}$. \item[Subcase 2:] If the subhypergraph induced by $\{ z_1, z_2, \dots , z_5\}$ contains a red hyperedge, then without loss of generality, suppose that $z_1z_2z_3$ is red. Consider the subhypergraph that is induced by $\{ w_1, w_5, x_1, x_5, y_1, y_5, z_1, z_2\}$. If any hyperedge is red, we can extend one of the paths to form a red $P_7^{(3)}$. If they are all blue, we have a blue $K_8^{(3)}$. \end{enumerate} \end{enumerate} Thus, in all cases, we have proven that our coloring of $K_{20}^{(3)}$ contains a red $P_7^{(3)}$ or a blue $K_8^{(3)}$. \end{proof} \begin{corollary} For $j\ge 3$, $$R(P_{2j+1}^{(3)}, K_8^{(3)};3)\le 7j-1.$$ \end{corollary} \begin{proof} We proceed by (weak) induction on $j\ge 3$. Theorem \ref{P78good} provides the base case ($j=3$). Now suppose that $$R(P_{2j-1}^{(3)}, K_8^{(3)};3)\le 7(j-1)-1=7j-8.$$ Using the fact that $$R(P_{2j+1}^{(3)}, K_7^{(3)};3)=6j+1,$$ one can check that the criteria for applying Corollary \ref{genconstruct2} are met and $$R(P_{2j+1}^{(3)}, K_8^{(3)};3)\le 7j-1,$$ proving the corollary. \end{proof} \begin{corollary} The Ramsey number $$R(P_7^{(3)}, K_{6}^{(3)};3)=14.$$\label{P76good} \end{corollary} \begin{proof} This corollary follows from Theorem \ref{P78good} and Corollary \ref{goodreduction}: if $P_7^{(3)}$ is $8$-good, then it is $6$-good. \end{proof} \begin{corollary} For $j\ge 3$, $$R(P_{2j+1}^{(3)}, K_6^{(3)};3)\le 5j-1.$$ \end{corollary} \begin{proof} We proceed by (weak) induction on $j\ge 3$. Corollary \ref{P76good} provides the base case ($j=3$). Now suppose that $$R(P_{2j-1}^{(3)}, K_6^{(3)};3)\le 5(j-1)-1=5j-6.$$ Using the fact that $$R(P_{2j+1}^{(3)}, K_5^{(3)};3)=4j+1,$$ one can check that the criteria for applying Corollary \ref{genconstruct2} are met and $$R(P_{2j+1}^{(3)}, K_6^{(3)};3)\le 5j-1,$$ proving the corollary.
\end{proof} The previous two theorems (and three corollaries) allow us to improve several of the known upper bounds in Table \ref{t1} when the tree being considered is a loose path. So, we provide the following table of exact values/ranges for Ramsey numbers of the form $R(P_m^{(3)}, K_n^{(3)};3)$. As before, exact evaluations correspond to loose paths that are $n$-good. \begin{table}[!h] \centering \begin{tabular}{|c||c|c|c|c|c|c|c|} \hline $_m\ \backslash \ ^n $ & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline\hline 5 & 6 & 9 & 10 & 13 & 14 & 17 & 18 \\ \hline 7 & 8 & 13 & 14 & 19 & 20 & 25 & [26, 27] \\ \hline 9 & 10 & 17 & [18, 19] & 25 & [26, 27] & 33 & [34, 36] \\ \hline 11 & 12 & 21 & [22, 24] & 31 & [32, 34] & 41 & [42, 45] \\ \hline 13 & 14 & 25 & [26, 29] & 37 & [38, 41] & 49 & [50, 54] \\ \hline 15 & 16 & 29 & [30, 34] & 43 & [44, 48] & 57 & [58, 63] \\ \hline \hline 2j+1 & 2j+2 & 4j+1 & [4j+2, 5j-1] & 6j+1 & [6j+2, 7j-1] & 8j+1 & [8j+2, 9j] \\ \hline \end{tabular} \caption{Exact values/ranges for $R(P_m^{(3)}, K_n^{(3)};3)$ whenever $m=2j+1\ge 5$ and $4\le n\le 10$. All of the exact values shown in this chart correspond to loose paths that are $n$-good.} \label{t2} \end{table} \subsection{Cycles Versus Complete $3$-Uniform Hypergraphs} Having identified an infinite number of $n$-good trees without encountering a counterexample, we conclude this section with some nontrivial examples of hypergraphs that are not $n$-good. Define the loose cycle $C_4^{(3)}$ to consist of two $3$-uniform hyperedges whose intersection has two elements. \begin{theorem} The 3-uniform hypergraph $C_4^{(3)}$ is $4$-good. \end{theorem} \begin{proof} Consider a red-blue coloring of the hyperedges in $K_{5}^{(3)}$. 
If there are at least three red hyperedges $A_1, A_2, A_3$, then by the Inclusion-Exclusion Principle, \begin{align*} 5 &\geq |A_1 \cup A_2 \cup A_3| \\ &\geq |A_1| + |A_2| + |A_3| - |A_1 \cap A_2| - |A_1 \cap A_3| - |A_2 \cap A_3|. \end{align*} If all pairs of these hyperedges had intersections of size one or less, then we would have $5 \geq 9 - 3 = 6$, a contradiction. Therefore, if there are at least three red hyperedges, some two of them intersect in exactly two vertices, giving a red copy of $C_4^{(3)}$. Now suppose there are exactly two red hyperedges $A_1$ and $A_2$. If $|A_1 \cap A_2| = 2$, then $A_1$ and $A_2$ form a red copy of $C_4^{(3)}$. Otherwise, we must have $|A_1 \cap A_2| = 1$, say $A_1 = \{a_1, a_2, a_3\}$ and $A_2 = \{a_3, a_4, a_5\}$, with $a_i \neq a_j$ for $i \neq j$. In this case, all hyperedges not containing $a_3$ must be blue, so there is a blue complete hypergraph on the four vertices $a_1, a_2, a_4$ and $a_5$. Finally, if there is at most one red hyperedge, then choosing four vertices that avoid a vertex of that hyperedge (or any four vertices if there are no red hyperedges) yields a blue $K_4^{(3)}$. In every case, there is either a red copy of $C_4^{(3)}$ or a blue copy of $K_4^{(3)}$. \end{proof} The proof of the following Theorem will be simplified by carefully stating the specific conditions under which the cycle $C_4^{(3)}$ is $n$-good. Let $H$ be a 3-uniform hypergraph of order $m$. If $n$ is even, then $H$ is $n$-good if and only if $$R(H,K_n^{(3)};3)\le (m-1)(n/2-1)+2.$$ If $n$ is odd, then $H$ is $n$-good if and only if $$R(H,K_n^{(3)};3)\le (m-1)((n+1)/2-1)+1.$$ For $C_{4}^{(3)}$, these bounds are given explicitly by: if $n$ is even, then $C_{4}^{(3)}$ is $n$-good if and only if $$R(C_{4}^{(3)}, K_{n}^{(3)}; 3) \le (4-1)(n/2 -1)+2 = 3n/2 - 1.$$ If $n$ is odd, then $C_4^{(3)}$ is $n$-good if and only if $$R(C_4^{(3)},K_n^{(3)};3)\le 3((n+1)/2-1)+1 = \tfrac{3}{2}(n+1) - 2.$$ \begin{theorem} If $j$ is an even positive integer such that $3j+1$ is prime, then $R(C_{4}^{(3)}, K_{2j+1}^{(3)};3) > 3j+1$. \end{theorem} \begin{proof} Let $j$ be an even positive integer such that $3j+1$ is prime, and let $p = 3j+1$.
We identify the vertices of $K_{p}^{(3)}$ with the elements $\{0,1,2,\ldots,p-1\}$ of the finite field $\mathbb{Z}/{p\mathbb{Z}}$, and denote by $\left( \mathbb{Z}/{p\mathbb{Z}} \right)^*$ the multiplicative group of nonzero elements of $\mathbb{Z}/{p\mathbb{Z}}$. We take two types of hyperedges to be red. The first type consists of $3$-element sets of the form $\{x, -x, 0 \}$, while the second type consists of $3$-element subsets $\{x,y,z\}$ such that $x^3 = y^3 = z^3$, i.e., the $j$ cosets of the kernel of the group homomorphism $a \mapsto a^3$. As such, it is immediate that two red hyperedges of the second kind are disjoint. It is also clear that two red hyperedges of the first kind can only intersect in the singleton set $\{0\}$. Moreover, a red hyperedge of the first type and one of the second type can intersect in at most one vertex, since $(-x)^3 = -x^3 \neq x^3$ for any nonzero $x$ in $\mathbb{Z}/{p\mathbb{Z}}$. Thus there is no red $C_{4}^{(3)}$. Now suppose that $K_m$ is a complete blue subhypergraph. If $0$ is a vertex of $K_m$, then $K_m$ can include at most one element from each of the $\frac{p-1}{2}$ pairs $\{x,-x\}$, and hence \[ m \leq \frac{p-1}{2} + 1 = \frac{3j}{2} + 1 < 2j + 1. \] If $0$ is not a vertex in $K_m$, then $K_m$ can include at most two elements from each of the $j$ cosets, so $m \leq 2j < 2j+1$. As we have exhibited a red-blue coloring of $K_{3j+1}^{(3)}$ which contains neither a red $C_{4}^{(3)}$ nor a blue $K_{2j+1}^{(3)}$, this shows that $R(C_{4}^{(3)}, K_{2j+1}^{(3)};3) > 3j+1$. \end{proof} \begin{corollary} If $j$ is an even positive integer such that $3j+1$ is prime, then $C_{4}^{(3)}$ is not $(2j+1)$-good. \end{corollary} \begin{proof} From the preceding Theorem, $R(C_{4}^{(3)},K_{2j+1}^{(3)};3) > 3j+1 = \frac{3}{2}(2j+2) - 2.$ \end{proof} \begin{corollary} $C_4^{(3)}$ is not $5$-good. \end{corollary} \begin{proof} This is an application of the preceding Corollary with $j=2$, for which $3j+1=7$ is prime.
\end{proof} \noindent We leave open the general question of determining which other $3$-uniform cycles are $n$-good. \section{Conclusion} From this work, we know that there are infinitely many $3$-uniform trees that are $n$-good. We have also found additional values for which the $3$-uniform path $P^{(3)}_m$ is $n$-good, and determined when certain cycles are \textit{not} $n$-good. We conclude by stating a few conjectures that follow from our work. \begin{conjecture} If $r\ge 2$ and $T_1$ and $T_2$ are any $r$-uniform trees of order $m$, then $$R(T_1, K_n^{(r)};r)=R(T_2, K_n^{(r)};r).$$ \end{conjecture} \noindent A stronger statement is contained in the following conjecture. \begin{conjecture} If $r\ge 2$ and $T$ is any $r$-uniform tree, then $T$ is $n$-good. \end{conjecture} \noindent We stated the above conjectures for general $r$-uniform trees, but even proving them in the $3$-uniform case would be a substantial step. Other directions for future inquiry include exploring properties of $H_1$-good hypergraphs for cases in which $H_1$ is not complete. \bibliographystyle{amsplain}
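As a supplementary sanity check (our addition, not part of the paper), the coloring built in the proof of the cycle theorem above can be verified by brute force for the smallest case $j = 2$, $p = 7$:

```python
from itertools import combinations

# Smallest case of the construction: j = 2, p = 3j + 1 = 7 (prime).
p, j = 7, 2
V = range(p)

# First type of red hyperedge: {x, -x, 0} for the (p-1)/2 pairs {x, -x}.
red = {frozenset({x, (p - x) % p, 0}) for x in range(1, (p - 1) // 2 + 1)}

# Second type: the j cosets of the kernel of a -> a^3, i.e. classes of
# nonzero elements sharing a common cube mod p.
cosets = {}
for a in range(1, p):
    cosets.setdefault(pow(a, 3, p), set()).add(a)
red |= {frozenset(c) for c in cosets.values()}

# No red C_4^{(3)}: no two red hyperedges meet in exactly two vertices.
no_red_C4 = all(len(e & f) != 2 for e, f in combinations(red, 2))

# No blue K_{2j+1}^{(3)}: every (2j+1)-set of vertices contains a red edge.
no_blue_K5 = all(
    any(frozenset(t) in red for t in combinations(S, 3))
    for S in combinations(V, 2 * j + 1)
)
print(no_red_C4, no_blue_K5)  # True True
```

The two booleans confirm that the coloring of $K_7^{(3)}$ contains neither a red $C_4^{(3)}$ nor a blue $K_5^{(3)}$, matching the counting argument in the proof.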
% Source: https://arxiv.org/abs/0902.0958
\title{Randomized Kaczmarz solver for noisy linear systems}
\begin{abstract}
The Kaczmarz method is an iterative algorithm for solving systems of linear equations $Ax=b$. Theoretical convergence rates for this algorithm were largely unknown until recently, when work was done on a randomized version of the algorithm. It was proved that for overdetermined systems, the randomized Kaczmarz method converges with expected exponential rate, independent of the number of equations in the system. Here we analyze the case where the system $Ax=b$ is corrupted by noise, so we consider the system $Ax \approx b + r$ where $r$ is an arbitrary error vector. We prove that in this noisy version, the randomized method reaches an error threshold dependent on the matrix $A$ with the same rate as in the error-free case. We provide examples showing our results are sharp in the general context.
\end{abstract}
\section{Introduction} The Kaczmarz method~\cite{K37:Angena} is one of the most popular solvers of overdetermined linear systems and has numerous applications from computer tomography to image processing. It is an iterative method, and is therefore practical in the realm of very large systems of equations. The algorithm consists of a series of alternating projections, and is often considered a type of \textit{Projection onto Convex Sets} (POCS) method. Given a consistent system of linear equations of the form $$ Ax = b, $$ the Kaczmarz method iteratively projects onto the solution spaces of each equation in the system. That is, if $a_1, \ldots, a_m \in \mathbb{R}^n$ denote the rows of $A$, the method cyclically projects the current estimate orthogonally onto the hyperplanes consisting of solutions to $\pr{a_i}{x} = b_i$. Each iteration consists of a single orthogonal projection. The algorithm can thus be described using the recursive relation, $$ x_{k+1} = x_k + \frac{b_i - \pr{a_i}{x_k}}{\|a_i\|_2^2}a_i, $$ where $x_k$ is the $k^{th}$ iterate and $i = (k \bmod m) + 1$. Although the Kaczmarz method is popular in practice, theoretical results on the convergence rate of the method have been difficult to obtain. Most known estimates depend on properties of the matrix $A$ which may be time consuming to compute, and are not easily comparable to those of other iterative methods (see e.g. ~\cite{DH97:Therate},~\cite{G05:Onthe},~\cite{HN90:Onthe}). Since the Kaczmarz method cycles through the rows of $A$ sequentially, its convergence rate depends on the order of the rows. Intuition tells us that the order of the rows of $A$ does not change the difficulty level of the system as a whole, so one would hope for results independent of the ordering. One natural way to overcome this is to use the rows of $A$ in a random order, rather than sequentially.
Several observations were made on the improvements of this randomized version~\cite{N86:TheMath,HM93:Algebraic}, but only recently have theoretical results been obtained~\cite{SV06:Arandom,SV09:Arand}. \subsection{Randomized Kaczmarz} In designing a random version of the Kaczmarz method, it is necessary to set the probability of each row being selected. Strohmer and Vershynin propose in~\cite{SV06:Arandom,SV09:Arand} to set the probability proportional to the square of the Euclidean norm of the row. Their revised algorithm can then be described by the following: $$ x_{k+1} = x_k + \frac{b_{p(i)} - \pr{a_{p(i)}}{x_k}}{\|a_{p(i)}\|_2^2}a_{p(i)}, $$ where each $p(i)$ is chosen independently at random from $\{1, \ldots, m\}$, taking the value $j$ with probability $\frac{\|a_{j}\|_2^2}{\|A\|_F^2}$. Here and throughout, $\|A\|_F$ denotes the Frobenius norm of $A$ and $\|\cdot\|_2$ denotes the usual Euclidean norm or spectral norm for vectors or matrices, respectively. We note here that of course, one needs some knowledge of the norms of the rows of $A$ in this version of the algorithm. In general, this computation takes $\mathrm{O}(mn)$ time. However, in many cases such as the case in which $A$ contains Gaussian entries, this may be approximately or exactly known. In~\cite{SV06:Arandom,SV09:Arand}, Strohmer and Vershynin prove the following exponential bound on the expected rate of convergence for the randomized Kaczmarz method, \begin{equation}\label{SVbound} \mathbb{E}\|x_k - x\|_2^2 \leq \Big(1 - \frac{1}{R}\Big)^k\|x_0 - x\|_2^2, \end{equation} where $R = \|A^{-1}\|^2\|A\|_F^2$, $x_0$ is an arbitrary initial estimate, and $\mathbb{E}$ denotes the expectation (over the choice of the rows). Here and throughout, we will assume that $A$ has full column rank so that $\|A^{-1}\| \overset{\mathrm{\scriptscriptstyle{def}}}{=} \inf\{M : M\|Ax\|_2 \geq \|x\|_2$ for all $x\}$ is well defined.
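To make the sampling rule concrete, here is a minimal pure-Python sketch of the randomized iteration (our illustration, not the authors' implementation; all names are ours):

```python
import random

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz: project the iterate onto the solution
    hyperplane <a_i, x> = b_i of a row i chosen with probability
    ||a_i||_2^2 / ||A||_F^2."""
    rng = random.Random(seed)
    n = len(A[0])
    # Squared row norms serve as the (unnormalized) sampling weights.
    weights = [sum(a * a for a in row) for row in A]
    x = [0.0] * n
    for _ in range(iters):
        i = rng.choices(range(len(A)), weights=weights)[0]
        a_i = A[i]
        # Orthogonal projection onto {x : <a_i, x> = b_i}.
        step = (b[i] - sum(a * v for a, v in zip(a_i, x))) / weights[i]
        x = [v + step * a for v, a in zip(x, a_i)]
    return x

# Small overdetermined consistent system with solution (2, -1).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
b = [2.0, -1.0, 1.0, 3.0]
x = randomized_kaczmarz(A, b)
```

On a consistent, well-conditioned system the iterate converges to the solution; the weights passed to the sampler are exactly the squared row norms, so the row selection matches the probabilities above.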
We comment here that this particular mixed condition number comes as an immediate consequence of the simple probabilities used within the randomized algorithm. The first remarkable note about this result is that it is essentially independent of the number $m$ of equations in the system. Indeed, by the definition of $R$, $R$ is proportional to $n$ within a square factor of $\kappa(A)$, the condition number of $A$ ($\kappa(A)$ is defined as the ratio of the largest to smallest singular values of $A$). This bound also demonstrates, however, that the Kaczmarz method is an efficient alternative to other methods only when the condition number is very small. If this is not the case, then other alternative methods may offer improvements in practice. The bound~\eqref{SVbound} and the relationship of $R$ to $n$ shows that the estimate $x_k$ converges exponentially fast to the solution in just $\mathrm{O}(n)$ iterations. Since each iteration requires $\mathrm{O}(n)$ time, the method overall has a $\mathrm{O}(n^2)$ runtime. Being an iterative algorithm, it is clear that the randomized Kaczmarz method is competitive only for very large systems. For such large systems, the runtime of $\mathrm{O}(n^2)$ is clearly superior to, for example, Gaussian elimination which has a runtime of $\mathrm{O}(mn^2)$. Also, since the algorithm needs only access to the randomly chosen rows of $A$, the method need not know the entire matrix $A$, which for very large systems is a clear advantage. Thus the interesting cases for the randomized method are those in which $n$ and $m$ are large, and especially those in which $m$ is extremely large. Strohmer and Vershynin discuss in detail in Section 4.2 of~\cite{SV09:Arand} cases where the randomized Kaczmarz method even outperforms the conjugate gradient method (CGLS). They show that for example, randomized Kaczmarz computationally outperforms CGLS for Gaussian matrices when $m > 3n$. 
Numerical experiments in~\cite{SV09:Arand} also demonstrate advantages of the randomized Kaczmarz method in many cases. Since the results of \cite{SV06:Arandom,SV09:Arand}, there has been some further discussion about the benefits of this randomized version of the Kaczmarz method (see \cite{CHJ09:Anote,SV09:Comments}). The Kaczmarz method has been studied for over seventy years, and is useful in many applications. The notion of selecting the rows randomly in the method has been proposed before (see \cite{N86:TheMath,CFMSS92,HM93:Algebraic}), and improvements over the standard method were observed. However, the work by Strohmer and Vershynin in~\cite{SV06:Arandom,SV09:Arand} provides the first proof on the rate of convergence. The rate is exponential in expectation and is in terms of standard matrix properties. We are not aware of any other Kaczmarz method that provably achieves exponential convergence. It is important to note that the method of row selection proposed in this version of the randomized Kaczmarz method is \textit{not} optimal, and an example that demonstrates this is given in~\cite{SV09:Arand}. However, under this selection strategy, the convergence rates proven in~\cite{SV06:Arandom,SV09:Arand} are optimal, and there are matrices that satisfy the proven bounds exactly. The selection strategy in this method was chosen because it often yields very good results, allows a provable guarantee of exponential convergence, and is computationally efficient. Since the algorithm selects rows based on their row norms, it is natural to ask whether one can simply scale the rows any way one wishes. Indeed, choosing the rows based on their norms is related to the notion of applying a diagonal preconditioner. 
However, since finding the optimal diagonal preconditioner for a system $Ax=b$ is itself a task that is often more costly than inverting the entire matrix, we select an easier, although not optimal, preconditioner that simply scales by the (square of the) row norms. This type of preconditioner yields a balance of computational cost and optimality (see \cite{vdS69:Cond,S69:Optimality}). The distinction between the effect of an alternative diagonal preconditioner on the Kaczmarz method versus the randomized method discussed here is important. If the system is multiplied by a diagonal matrix, the standard Kaczmarz method will not change, since the angles between all rows do not change. However, such a multiplication to the system in our randomized setting changes the probabilities of selecting the rows (by definition). It is then not a surprise that this will also affect the convergence rate proved for this method (since multiplication will affect the value of $R$ in~\eqref{SVbound}). This randomized version of the Kaczmarz method provides clear advantages over the standard method in many cases. Using the selection strategy above, Strohmer and Vershynin were able to provide a proof for the expected rate of convergence that shows exponential convergence. No such convergence rate for any Kaczmarz method has been proven before. These benefits lead one to question whether the method works in the more realistic case where the system is corrupted by noise. In this paper we provide theoretical and empirical results to suggest that in this noisy case the method converges exponentially to the solution within a specified error bound. The error bound is proportional to $\sqrt{R}$, and we also provide a simple example showing this bound is sharp in the general setting. \section{Main Results} Theoretical and empirical studies have shown the randomized Kaczmarz algorithm to provide very promising results. 
Here we show that it also performs well in the case where the system is corrupted with noise. In this section we consider the consistent system $Ax=b$ after an error vector $r$ is added to the right side: $$ Ax \approx b + r. $$ Note that we do not require the perturbed system to be consistent. First we present a simple example to gain intuition about how drastically the noise can affect the system. To that end, let $A$ be the $n \times n$ identity matrix, $b=0$, and suppose the error is the vector whose entries are all one, $r = (1, 1, \ldots, 1)$. Then the solution to the noisy system is clearly $x = r = (1, 1, \ldots, 1)$, and the solution to the unperturbed problem is $x=0$. By Jensen's inequality, we have $$ \Big(\mathbb{E}\|x_k - r\|_2\Big)^2 \leq \mathbb{E}\Big(\|x_k - r\|_2^2\Big). $$ Now considering the noisy problem, we may substitute $r$ for $x$ in~\eqref{SVbound}. Combining this with Jensen's inequality above, we obtain \begin{equation}\label{X1} \mathbb{E}\|x_k - r\|_2 \leq \Big(1 - \frac{1}{R}\Big)^{k/2}\|x_0-r\|_2. \end{equation} Then by the triangle inequality, we have $$ \|r-x\|_2 \leq \|r-x_k\|_2 + \|x_k-x\|_2. $$ Next, by taking expectation and using~\eqref{X1} above, we have $$ \mathbb{E}\|x_k - x\|_2 \geq \|r - x\|_2 - \Big(1 - \frac{1}{R}\Big)^{k/2}\|x_0-r\|_2. $$ Finally by the definition of $r$ and $R$, this implies $$ \mathbb{E}\|x_k - x\|_2 \geq \sqrt{R} - \Big(1 - \frac{1}{R}\Big)^{k/2}\|x_0-r\|_2. $$ This means that the limiting error between the iterates $x_k$ and the original solution $x$ is $\sqrt{R}$. In~\cite{SV06:Arandom,SV09:Arand} it is shown that the bound provided in~\eqref{SVbound} is optimal, so even this trivial example demonstrates that if we wish to maintain a general setting, the best error bound for the noisy case we can hope for is proportional to $\sqrt{R}$. Our main result proves this exact theoretical bound. 
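This identity example is easy to simulate directly (a small illustration of ours, not an experiment from the paper): for $A = I_n$ every projection simply overwrites one coordinate with $b_i + r_i = 1$, and once every row has been selected the iterate equals $r$, so its distance to the unperturbed solution $x = 0$ is exactly $\|r\|_2 = \sqrt{n} = \sqrt{R}$.

```python
import math
import random

# Noisy randomized Kaczmarz on the example A = I_n, b = 0, r = (1, ..., 1).
# Every row has unit norm, so rows are sampled uniformly, and projecting
# onto the i-th noisy hyperplane just sets coordinate i to b_i + r_i = 1.
n = 50                      # for A = I_n we have R = ||A^{-1}||^2 ||A||_F^2 = n
rng = random.Random(1)
x = [0.0] * n               # x_0 = 0, which is also the unperturbed solution
for _ in range(5000):
    i = rng.randrange(n)
    x[i] = 1.0              # orthogonal projection onto {x : x_i = 1}
err = math.sqrt(sum(v * v for v in x))   # ||x_k - x||_2
# Once every row has been sampled, x_k = r, so the error equals
# sqrt(R) * gamma with gamma = max_i |r_i| / ||a_i||_2 = 1:
print(err, math.sqrt(n))
```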
\begin{theorem}[Noisy randomized Kaczmarz]\label{thm} Let $A$ have full column rank and assume the system $Ax=b$ is consistent. Let $x_k^*$ be the $k^{th}$ iterate of the noisy randomized Kaczmarz method run with $Ax \approx b + r$, and let $a_1, \ldots, a_m$ denote the rows of $A$. Then we have $$ \mathbb{E}\|x_k^* - x\|_2 \leq \Big(1 - \frac{1}{R}\Big)^{k/2}\|x_{0} - x\|_2 + \sqrt{R}\gamma, $$ where $R = \|A^{-1}\|^2\|A\|_F^2$, $\gamma = \max_i \frac{|r_i|}{\|a_i\|_2}$, and the expectation is taken over the choice of the rows in the algorithm. \end{theorem} \begin{remark} In the case discussed above, note that we have $\gamma = 1$, so the example indeed shows the bound is sharp. \end{remark} One may also recall the bound from perturbation theory (see e.g.~\cite{HJ85:Matrix-Analysis}) on the relative error in the perturbed case. If we let $\hat{x} = A^\dagger (b+r)$ (where $A^\dagger \overset{\mathrm{\scriptscriptstyle{def}}}{=} (A^*A)^{-1}A^*$ denotes the left inverse of $A$), then $$ \frac{\|x-\hat{x}\|_2}{\|x\|_2} \leq \kappa(A) \frac{\|r\|_2}{\|Ax\|_2}. $$ By applying the bound $\sqrt{R} \leq \kappa(A)\sqrt{n}$ to Theorem~\ref{thm} above, we see that the limiting relative error of the iterates satisfies $$ \frac{\sqrt{R}\gamma}{\|x\|_2} \leq \kappa(A) \max_i\frac{\sqrt{n}|r_i|}{\|a_i\|_2\|x\|_2}. $$ These bounds look similar in spirit, providing some more reassurance of the sharpness of the error bound. It is important to note though that the first is obtained by applying the left inverse rather than an iterative method, which explains why the bounds are not exactly equal. Of course for problems of large sizes, applying the inverse may not even be computationally feasible. Before proving the theorem, it is important to first analyze what happens to the solution spaces of the original equations $Ax=b$ when the error vector is added.
Letting $a_1, \ldots, a_m$ denote the rows of $A$, we have that each solution space $\pr{a_i}{x} = b_i$ of the original system is a hyperplane whose normal is $\frac{a_i}{\|a_i\|_2}$. When noise is added, each hyperplane is translated in the direction of $a_i$. Thus the new geometry consists of hyperplanes parallel to those in the noiseless case. A simple computation provides the following lemma which specifies exactly how far each hyperplane is shifted. \begin{lemma}\label{easylem} Let $H_i$ be the affine subspaces of $\mathbb{R}^n$ consisting of the solutions to the unperturbed equations, $H_i = \{x: \left\langle a_i, x \right\rangle = b_i\}$. Let $H_i^*$ be the solution spaces of the noisy equations, $H_i^* = \{x: \left\langle a_i, x \right\rangle = b_i + r_i\}$. Then $H_i^* = \{w + \alpha_i a_i : w\in H_i\}$ where $\alpha_i = \frac{r_i}{\|a_i\|_2^2}$. \end{lemma} \begin{remark} Note that this lemma does not imply that the noisy system is consistent. By definition of $H_i^*$ it is clear that each subspace is non-empty, but we are not requiring that the intersection of all $H_i^*$ be non-empty. \end{remark} \begin{proof} First, if $w\in H_i$ then $\pr{a_i}{w + \alpha_i a_i} = \pr{a_i}{w} + \alpha_i\|a_i\|_2^2 = b_i + r_i$, so $w + \alpha_i a_i \in H_i^*$. Next let $u \in H_i^*$. Set $ w = u - \alpha_i a_i$. Then $\pr{a_i}{w} = \pr{a_i}{u} - r_i = b_i + r_i - r_i = b_i$, so $w \in H_i$. This completes the proof. \end{proof} We will also utilize the following lemma which is proved in the proof of Theorem 2 in~\cite{SV06:Arandom,SV09:Arand}. \begin{lemma}\label{SVlem} Let $x_{k-1}^*$ be any vector in $\mathbb{R}^n$ and let $x_k$ be its orthogonal projection onto a random solution space as in the noiseless randomized Kaczmarz method run with $Ax=b$. Then we have $$ \mathbb{E}\|x_k - x\|_2^2 \leq \Big(1 - \frac{1}{R}\Big)\|x_{k-1}^* - x\|_2^2, $$ where $R = \|A^{-1}\|^2\|A\|_F^2$, and the expectation is taken over the choice of the rows in the algorithm.
\end{lemma} We are now prepared to prove Theorem~\ref{thm}. \begin{proof}[Proof of Theorem~\ref{thm}] Let $x_{k-1}^*$ denote the $(k-1)^{th}$ iterate of noisy randomized Kaczmarz. Using notation as in Lemma~\ref{easylem}, let $H_i^*$ be the solution space chosen in the $k^{th}$ iteration. Then $x_k^*$ is the orthogonal projection of $x_{k-1}^*$ onto $H_i^*$. Let $x_k$ denote the orthogonal projection of $x_{k-1}^*$ onto $H_i$ (see Figure~\ref{fig0}). \begin{figure}[ht] \includegraphics[scale=0.5]{feh.eps} \caption{The parallel hyperplanes $H_i$ and $H_i^*$ along with the two projected vectors $x_k$ and $x_k^*$.}\label{fig0} \end{figure} By Lemma~\ref{easylem} and the fact that $a_i$ is orthogonal to $H_i$ and $H_i^*$, we have that $x_k^* - x = x_k - x + \alpha_i a_i$. Again by orthogonality, we have $\|x_k^* - x\|_2^2 = \|x_k - x\|_2^2 + \|\alpha_i a_i\|_2^2$. Then by Lemma~\ref{SVlem} and the definition of $\gamma$, we have $$ \mathbb{E}\|x_k^* - x\|_2^2 \leq \Big(1 - \frac{1}{R}\Big)\|x_{k-1}^* - x\|_2^2 + \gamma^2, $$ where the expectation is conditioned upon the choice of the random selections in the first $k-1$ iterations. Then applying this recursive relation iteratively and taking full expectation, we have \begin{align*} \mathbb{E}\|x_k^* - x\|_2^2 &\leq \Big(1 - \frac{1}{R}\Big)^k\|x_{0} - x\|_2^2 + \sum_{j=0}^{k-1}\Big(1 - \frac{1}{R}\Big)^j\gamma^2\\ &\leq \Big(1 - \frac{1}{R}\Big)^k\|x_{0} - x\|_2^2 + R\gamma^2. \end{align*} By Jensen's inequality we then have $$ \mathbb{E}\|x_k^* - x\|_2 \leq \left(\Big(1 - \frac{1}{R}\Big)^k\|x_{0} - x\|_2^2 + R\gamma^2\right)^{1/2} \leq \Big(1 - \frac{1}{R}\Big)^{k/2}\|x_{0} - x\|_2 + \sqrt{R}\gamma. $$ This completes the proof. \end{proof} \section{Numerical Examples} In this section we describe some of our numerical results for the randomized Kaczmarz method in the case of noisy systems.
Figure~\ref{trio} depicts the error between the estimate by randomized Kaczmarz and the actual signal, in comparison with the predicted threshold value, for several types of matrices. The first study was conducted for 100 trials using $2000 \times 100$ Gaussian matrices (matrices whose entries are i.i.d. Gaussian with mean $0$ and variance $1$) and independent Gaussian noise of norm $0.02$. The systems were homogeneous, meaning $x=0$ and $b=0$. The thick line is a plot of the threshold value, $\gamma\sqrt{R}$, for each trial. The thin line is a plot of the error in the estimate after the given number of iterations for the corresponding trial. The scatter plot displays the convergence of the method over several randomly chosen trials from this study, and clearly shows exponential convergence. The second study is similar, but uses partial Fourier matrices. In this case we use $m=700$ and $n=101$. For $j=1,\ldots, 700$ and $k=-50,\ldots, 50$, we set $A_{j,k} = \exp(2\pi ikt_j)$, where the $t_j$ are generated uniformly at random on $[0, 1]$. This type of generation is used to create nonuniformly spaced sampling values, and appears in many applications in signal processing, such as the reconstruction of bandlimited signals. The third study is similar but uses matrices whose entries are Bernoulli ($0/1$ each with probability $0.5$). All of these experiments were conducted to demonstrate that the error found in practice is close to that predicted by the theoretical results. As is evident from the plots, the error is quite close to the threshold in all cases. \begin{figure}[h!]
\begin{center} $\begin{array}{c@{\hspace{.1in}}c} \includegraphics[width=2.5in]{DgaussR163er02.eps} & \includegraphics[width=2.5in]{DgaussR163er02SCAT.eps} \\ \includegraphics[width=2.5in]{four2.eps} & \includegraphics[width=2.5in]{bernR162er02.eps} \\ \end{array}$ \end{center} \caption{The comparison between the actual error in the randomized Kaczmarz estimate (thin line) and the predicted threshold (thick line). The mean values of $R$ in these experiments were 163.2 (upper left), 428.6 (lower left) and 162.4 (lower right). The scatter plot shows exponential convergence over several trials.} \label{trio} \end{figure} \subsection*{Acknowledgment} I would like to thank Roman Vershynin for suggestions that simplified the proofs and for many thoughtful discussions. I would also like to thank Thomas Strohmer for his very appreciated guidance. \newpage
https://arxiv.org/abs/2107.09029
Conditions for matchability in groups and field extensions
The notion of matchings in groups stems from a linear algebra problem proposed by E. K. Wakeford [24], which was tackled in 1996 [10]. In this paper, we first discuss unmatchable subsets in abelian groups. Then we formulate and prove linear analogues of results concerning matchings, along with a conjecture that, if true, would extend the primitive subspace theorem. We discuss the dimension $m$-intersection property for vector spaces and its connection to matching subspaces in a field extension, and we prove the linear version of an intersection property result for certain subsets of a given set.
\section{Introduction} Throughout this paper, we assume that $G$ is an additive abelian group, unless stated otherwise. Let $B$ be a finite subset of $G$ which does not contain the neutral element. For any subset $A$ of $G$ with the same cardinality as $B$, a {\it matching} from $A$ to $B$ is defined to be a bijection $f:A\to B$ such that for any $a\in A$, we have $a+f(a)\not\in A$. Evidently, it is necessary for the existence of a matching from $A$ to $B$ that $\#A=\# B$ and $0\not\in B$. One says that a group $G$ has the {\it matching property} if these necessary conditions are sufficient as well. That is, we say that $G$ has the matching property if for any pair of finite subsets $A$ and $B$ of $G$, the conditions $\#A=\#B$ and $0\notin B$ suffice to guarantee the existence of a matching between $A$ and $B$. The notion of matchings in abelian groups was introduced by Fan and Losonczy in \cite{Fan} in order to generalize a geometric property of lattices in Euclidean space related to an old problem of E. K. Wakeford concerning canonical forms for symmetric tensors. In particular, Wakeford in \cite{Wakeford} considered the question of which sets of monomials are removable from a generic homogeneous polynomial through a linear change in its variables. The notion of matching has been investigated extensively in the literature in various ways. See \cite{Alon, Eliahou 1, Aliabadi 1} for more results on matchings. A related notion is that of a matching between subspaces of a field extension. In \cite{Eliahou 3}, Eliahou and Lecouvey formulate some linear analogues of matchings in groups and prove similar results in the linear context. Later, the linear version of a matching was studied extensively by the first author with collaborators in \cite{Aliabadi 1, Aliabadi 2, Aliabadi 3, Aliabadi 4}. There are still many fascinating open problems in this area. This paper continues the study of some problems motivated in \cite{Aliabadi 1, Aliabadi 2, Aliabadi 4}.
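To make the definition concrete, a matching from $A$ to $B$ in $\mathbb{Z}/n\mathbb{Z}$ can be searched for with a standard augmenting-path (Kuhn) bipartite-matching routine over the graph in which $a$ may be paired with $b$ exactly when $a+b\notin A$. The sketch below is illustrative only; the function name is ours, not from the literature.

```python
def find_matching(A, B, n):
    """Search for a matching from A to B in Z/nZ: a bijection f with
    a + f(a) (mod n) not in A, via Kuhn's augmenting-path algorithm."""
    A, B = sorted(A), sorted(B)
    Aset = set(A)
    # a may be sent to b exactly when a + b falls outside A
    adj = {a: [b for b in B if (a + b) % n not in Aset] for a in A}
    match_b = {}  # current partner in A of each matched b
    def augment(a, seen):
        for b in adj[a]:
            if b not in seen:
                seen.add(b)
                if b not in match_b or augment(match_b[b], seen):
                    match_b[b] = a
                    return True
        return False
    for a in A:
        if not augment(a, set()):
            return None  # Hall's condition fails: no matching exists
    return {a: b for b, a in match_b.items()}

# Z/6Z with A = B = {1,...,5}: the map a -> -a (mod 6) is a matching.
print(find_matching({1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}, 6))
# -> {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}
```

In this example the adjacency lists are singletons, so the matching $a\mapsto -a$ is in fact the only one.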
We extend our results on matchings in groups to the linear setting, generalizing some results of \cite{Aliabadi 2}. We study primitive subspaces and their applications in partitioning finite fields. Finally, in a result related to matchings, we discuss the dimension $m$-intersection property for vector subspaces. The analogy between matchings in abelian groups and in field extensions is highlighted throughout the paper, and numerous open questions are presented for further inquiry. Our tools mix linear algebra and combinatorial number theory. \subsection{Main results} We state our main theorems. The needed definitions from matchings in groups and linear matchings appear in Sections \ref{Matching property} and \ref{A Dimension Criterion}. We start with the following theorem, in which we determine the size of the largest matchable subsets of two given sets. \begin{theorem}\label{Unmatchable} Let $G$ be an abelian group and $A$ and $B$ be nonempty finite subsets of $G$ with $\#A=\#B$ and $A+B\neq A$. Assume that $A$ is not matched to $B$. Then $M(A,B)=\#A-D(A,B)$. \end{theorem} The following theorem is concerned with the primitive subspace theorem, which is motivated by certain matchable vector subspaces of a simple field extension. \begin{theorem}\label{general field} Let $\mathcal{F}$ be a nonempty finite collection of proper subspaces of a $K$-vector space $V$, where $\dim_K(V)=n<\infty$. Assume $\#\mathcal{F}\leq \# K$ and let $s=\max\{\dim_K(S)\mid S\in\mathcal{F}\}$. Let $T\subseteq V$ be a subspace maximal with the property that $T\cap S=0$ for all $S\in\mathcal{F}$. Then $\dim_K(T)=n-s$. \end{theorem} In the following theorem, we present the linear analogue of a theorem pertaining to matchings in the group setting. \begin{theorem}\label{Linear analog of matchings} Let $K\subset F$ be a field extension and $A$ and $B$ be two $n$-dimensional $K$-subspaces of $F$ with $n\geq1$.
Assume further that for any $b\in B\setminus\{0\}$, $A$ does not contain any nontrivial linear translate of $K(b)$. Then $A$ is matched to $B$. \end{theorem} \begin{theorem}\label{vector space span} Let $K\subset F$ be a field extension and $A$ and $B$ be $n$-dimensional $K$-subspaces of $F$ with $n>1$. If $A$ is matched to $B$, then $\langle AB \rangle\neq A$. \end{theorem} The following linear algebra result relies on two theorems. The first is a theorem of Rado \cite{Rado}, which provides a necessary and sufficient condition for the existence of a free transversal. The second is an observation concerning a property of vector subspaces called the $m$-intersection property. \begin{theorem}\label{linear algebra} Let $W$ be an $n$-dimensional vector space and $\mathcal{U}=\{U_1,U_2,\ldots,U_t\}$, $t<n$, be a family of subspaces of $W$ each of dimension $n-1$, and assume that $\mathcal{U}$ satisfies the dimension $n$-intersection property. Then there exist $n-t$ subspaces $U_{t+1},\ldots,U_n$ of $W$ of dimension $n-1$ and a basis $\{x_1,\ldots,x_n\}$ for $W$ such that \begin{align*} \ker x_i^*+\ker x_j^*=W, \end{align*} for any $i$ and $j$ with $U_i\neq U_j$, $1\leq i,j\leq n$. \end{theorem} We now present an outline of the paper. In Section \ref{Matching property}, we discuss matchings in the context of abelian groups and connect this notion to matchings in bipartite graphs. With this, along with a result on maximum matchings in bipartite graphs, we elucidate the algebraic structure of unmatchable subsets. In Section \ref{A Dimension Criterion}, we present a generalization of a linear algebra result on primitive subspaces of field extensions which arose from matching subspaces in simple field extensions. In Section \ref{The Linear Matching Property}, we formulate and prove linear analogues of results concerning matchings in groups.
Section \ref{intersection property} establishes a link between matchable subspaces and a certain property of finite families of vector subspaces called the dimension $m$-intersection property. Finally, in Section \ref{Future}, we present a possible direction for future work in this line of research. \section{Matching Property in Abelian Groups}\label{Matching property} To begin our investigation, we note that many results on the problem of classifying matchable subsets in groups are known. One of the earliest results in this direction appears in \cite{Losonczy}, where it is shown that an abelian group satisfies the matching property if and only if it is either torsion-free or of prime order. Later, this result was generalized to arbitrary groups \cite{Eliahou 1}. This classification was established using methods from additive number theory and combinatorics. Specifically, the additive tools used are lower bounds on the size of the sumset \[ A+B=\{a+b:\ a\in A\, \text{and}\ b\in B\} \] in $G$, and the main combinatorial tool is a result due to Philip Hall \cite{Hall}, which reads as follows: \begin{theorem}[Hall's marriage theorem] Let $\mathcal {G}_{A,B}=(V(\mathcal {G}_{A,B}),E(\mathcal {G}_{A,B}))$ be a bipartite graph with bipartition classes $A$ and $B$ such that $\#A=\#B$. Then $\mathcal {G}_{A,B}$ has a perfect matching if and only if for each subset $S$ of $A$, $\#S \leq \#N(S)$, where $N(S)$ denotes the set of vertices which are adjacent to at least one vertex in $S$. \end{theorem} Having the classification of groups in terms of the matching property in place, a natural question one might raise is: given an arbitrary group $G$, is there any criterion characterizing matchable subsets in a more general way? This problem is studied in \cite{Aliabadi 2, Aliabadi 3}, which highlight a close relation between matchable subsets and certain cosets of $G$.
In particular, it is observed in \cite{Eliahou 1} that the existence of nontrivial proper finite subgroups is an obstruction to the matching property. Inspired by this observation, the following is proved in \cite{Aliabadi 2}. \begin{proposition}\label{Matching generalziation} Let $G$ be an abelian group and let $A$ and $B$ be finite subsets of $G$ with the same cardinality. Assume further that for any element $b \in B$, $A$ does not contain any coset of the subgroup generated by $b$. Then there is a matching from $A$ to $B$. \end{proposition} Motivated by Proposition \ref{Matching generalziation}, one may ask whether, if $A$ is matched to $B$, it follows that for any element $b\in B$, $A$ contains no coset of the subgroup generated by $b$. The answer is negative. For example, consider $G=\mathbb{Z}/6\mathbb{Z}$ and $A=B=G\setminus\{\bar{0}\}$. Then $A$ is matched to $B$ via the map $\bar{a}\mapsto -\bar{a}$, but $A$ contains $\{\bar{1},\bar{4}\}=\bar{1}+\langle\bar{3}\rangle$, a coset of the subgroup generated by $\bar{3}\in B$. In Section \ref{The Linear Matching Property}, we will formulate and prove a linear analogue of Proposition \ref{Matching generalziation}. \subsection{Unmatchable subsets} All preceding results in the literature on matchings in the group setting address conditions under which certain subsets are matchable. In this subsection, we briefly investigate unmatchable subsets. Let $A$ and $B$ be two finite nonempty subsets of an abelian group $G$ with the same cardinality $n$ and $0\not\in B$. Assume further that $A$ is not matched to $B$. We are interested in determining the size of the largest possible subset $A_0$ of $A$ for which $A_0$ can be matched to a subset of $B$ in the usual sense. Denote the size of such a maximum subset by $M(A,B)$ when $A+B\neq A$, and set $M(A,B)=0$ when $A+B=A$. Motivated by this definition, we investigate the structure of subsets with $A+B=A$.
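Theorem \ref{Unmatchable} lends itself to brute-force verification on small cyclic groups: compute the maximum matchable subset size directly as a maximum bipartite matching, and compare it with the Hall-type deficiency $D(A,B)=\max_S(\#S-\#\bigcup_{a\in S}B_a)$, where $B_a=\{b\in B:\ a+b\notin A\}$. The helper names below are ours; this is a numerical check, not part of the proofs.

```python
from itertools import combinations

def max_matching_size(A, B, n):
    """M(A,B) in Z/nZ: the largest subset of A matchable into B,
    computed as a maximum matching in the bipartite graph G_{A,B}."""
    A, Aset = sorted(A), set(A)
    adj = {a: [b for b in B if (a + b) % n not in Aset] for a in A}
    match_b = {}
    def augment(a, seen):
        for b in adj[a]:
            if b not in seen:
                seen.add(b)
                if b not in match_b or augment(match_b[b], seen):
                    match_b[b] = a
                    return True
        return False
    return sum(augment(a, set()) for a in A)

def deficiency(A, B, n):
    """D(A,B) = max over nonempty S of #S - #N(S), with N(S) the union
    of B_a = {b in B : a + b mod n not in A} over a in S."""
    A, Aset = sorted(A), set(A)
    best = 0
    for r in range(1, len(A) + 1):
        for S in combinations(A, r):
            NS = {b for a in S for b in B if (a + b) % n not in Aset}
            best = max(best, len(S) - len(NS))
    return best

# A = {1,4} is a coset of <3> in Z/6Z; with B = {1,3}, A is not matched to B,
# and the relation M(A,B) = #A - D(A,B) holds: 1 = 2 - 1.
A, B, n = {1, 4}, {1, 3}, 6
assert max_matching_size(A, B, n) == len(A) - deficiency(A, B, n) == 1
```

For matchable pairs the deficiency vanishes, recovering Hall's theorem as the special case $D(A,B)=0$.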
\begin{lemma}\label{Coset} Let $A$ and $B$ be nonempty finite subsets of an arbitrary group $G$. Assume that $\# A\leq\# B$ and that $A+B=A$. Then $B$ is a subgroup of $G$, and $A$ is a left coset of $B$. \end{lemma} \begin{proof} If $b\in B$, then the mapping $\varphi: A \rightarrow A+b$, $a\mapsto a+b$, is injective, and thus $\#(A+b)=\# A$. Since $A+b\subset A+B=A$ and $A$ is finite, it follows that $A+b=A$. Now let $X=\{x\in G: A+x=A\}$. Then $B\subset X$, $X$ is a subgroup of $G$, and $A+X=A$. If $a\in A$, then $a+X$ is a left coset of the subgroup $X$, so $\#(a+X)=\# X$, and $a+X\subset A+X=A$. Then $\# B\geq \# A\geq \#(a+X)=\# X\geq \# B$, so $\# X=\# B$ and $\# A=\#(a+X)$. Since $B$ is finite, contained in $X$, and $\#B=\# X$, it follows that $B=X$, so $B$ is a subgroup, as desired. Since $A$ is finite, contains $a+X$, and $\# A=\# (a+X)$, it follows that $A=a+X=a+B$, and thus $A$ is the left coset $a+B$ of $B$. The proof is complete. \end{proof} \begin{corollary}\label{Lemma consequence} Let $A$, $B$ and $G$ be as in Lemma \ref{Coset}. Then $0\in B$. \end{corollary} \begin{proof} This is immediate from $B$ being a subgroup of $G$. \end{proof} \begin{corollary} Let $A$ and $B$ be nonempty finite subsets of an arbitrary group $G$ of the same cardinality. Then $M(A,B)=0$ if and only if $B$ is a subgroup of $G$ and $A$ is a left coset of $B$. \end{corollary} \begin{proof} It is immediate. \end{proof} We associate a bipartite graph $\mathcal {G}_{A,B}=(V(\mathcal {G}_{A,B}),E(\mathcal {G}_{A,B}))$ to the pair of sets $A$ and $B$ as follows. The nodes of $\mathcal {G}_{A,B}$ are given by the bipartition $V(\mathcal {G}_{A,B})=A\cup B$, and there is an edge $e(a,b)\in E(\mathcal {G}_{A,B})$ joining $a\in A$ to $b\in B$ if and only if $a+b\notin A$. Assuming $A$ is not matched to $B$, Hall's condition fails for some $S\subset A$, i.e.
$\#S>\#N(S)$, where $N(S)$ stands for the set of vertices which are adjacent to at least one vertex in $S$. For $a\in A$, let $B_a=\{y\in B:\ a+y\notin A\}$, and define $D(A,B)= \max\{\#S-\# \cup_{a\in S} B_a:\ S\subset A\}$. Since $A$ is not matched to $B$, we have $D(A,B)> 0$. In what follows, we prove Theorem \ref{Unmatchable}, in which the size of the largest matchable subsets of $A$ and $B$ is determined. Our approach adapts the existing proofs of Hall's marriage theorem; that is, we employ an argument similar to the one establishing the existence of perfect matchings in balanced bipartite graphs. \begin{proof}[Proof of Theorem \ref{Unmatchable}.] Let $D(A,B)=d$ and $\# A=n$. It suffices to show that the bipartite graph $\mathcal{G}_{A,B}$ associated to $(A,B)$ has a matching of size $\#A-d$ and that every matching in $\mathcal{G}_{A,B}$ has size at most $\# A-d$. We break the proof into two steps: \noindent {\textbf{Step 1:}} $M(A,B)\leq n-d$: By the definition of $D(A,B)$, at least $d$ vertices of $A$ remain unmatched in any matching in $\mathcal{G}_{A,B}$. This implies $M(A,B)\leq n-d$. \noindent {\textbf{Step 2:}} $M(A,B)\geq n-d$: Suppose $M(A,B)=n-k$, for some $0\leq k \leq n-1$. Then in a maximum matching, there are $k$ unmatched vertices in $A$, and the alternating tree rooted at each of them contains no augmenting path. From these trees we can construct a set $S$ with $\#S-\#\cup_{a\in S} B_a=k$. Since $D(A,B)$ is the maximum of such differences, it follows that $D(A,B)\geq k$, that is, $d\geq k$, and hence $M(A,B)=n-k\geq n-d$. Steps 1 and 2 together yield the desired result. \end{proof} \begin{remark} Note that the method of associating a bipartite graph to our subsets in Theorem \ref{Unmatchable} was first used in \cite{Aliabadi 1} as a tool to count the number of matchings of matchable subsets of a given abelian group.
See also \cite{Hamidoune} for more details about the counting aspect of matchable pairs of sets. \end{remark} \section{A Dimension Criterion for Primitive Matchable Subspaces}\label{A Dimension Criterion} In this section, we shall assume that $K\subset F$ is a field extension, $A,B\subset F$ are two $n$-dimensional $K$-subspaces of $F$, and $\mathcal{A}=\{a_1,\ldots,a_n\}$, $\mathcal{B}=\{b_1,\ldots,b_n\}$ are ordered bases of $A,B$, respectively. The Minkowski product $AB$ of $A$ and $B$ is defined as $AB:=\{ab:\, a\in A, b\in B\}$. Note that Eliahou and Lecouvey have introduced the following notions for matchable bases of subspaces in a field extension \cite{Eliahou 3}. The ordered basis $\mathcal{A}$ is said to be {\it matched} to an ordered basis $\mathcal{B}$ of $B$ if \begin{align*} a^{-1}_iA\cap B\subset \langle b_1,\ldots,\hat{b}_i,\ldots,b_n\rangle, \end{align*} for each $1\leq i\leq n$, where $\langle b_1,\ldots,\hat{b}_i,\ldots,b_n\rangle$ is the vector space spanned by $\mathcal{B}\setminus\{b_i\}$. The subspace $A$ is {\it matched} to the subspace $B$ if every basis of $A$ can be matched to a basis of $B$. A {\it strong matching} from $A$ to $B$ is a linear transformation $T:A\to B$ such that every basis $\mathcal{A}$ of $A$ is matched to the basis $T(\mathcal{A})$ of $B$. Finally, the extension $F$ of $K$ has the {\it linear matching property} if for every pair $A$ and $B$ of $n$-dimensional $K$-subspaces of $F$ with $n>1$ and $1\not\in B$, $A$ is matched to $B$. It is shown in \cite{Losonczy} that for a nontrivial finite cyclic group $G$ and finite nonempty subsets $A$, $B$ of $G$ with $\# A=\# B$, there exists a matching from $A$ to $B$ if every element of $B$ is a generator of $G$. The linear analogue of this result is given in \cite{Aliabadi 3} as the following theorem. \begin{theorem}\label{primitive matchings} Let $K\subset F$ be a separable field extension and $A$ and $B$ be two $n$-dimensional $K$-subspaces of $F$ with $n>1$. 
Then $A$ is matched to $B$ provided that $B$ is a primitive $K$-subspace of $F$. \end{theorem} Note that a $K$-subspace $B$ of $F$ is called {\it primitive} if $K(\alpha)=F$ for all $\alpha\in B\setminus\{0\}$. \begin{remark} It is worth pointing out that if $B$ is a primitive $K$-subspace of $F$, then $K\cap B=\{0\}$ (here we assume that $K\subsetneq F$). \end{remark} \begin{example} Consider the field extension $\mathbb{R}\subset\mathbb{C}$ and the $\mathbb{R}$-subspace $W=\langle i\rangle$ of $\mathbb{C}$, where $\langle i\rangle$ stands for the $\mathbb{R}$-subspace of $\mathbb{C}$ generated by $i$. Adjoining any nonzero element of $W$ to $\mathbb{R}$ yields all of $\mathbb{C}$, so $W$ is a primitive $\mathbb{R}$-subspace of $\mathbb{C}$. \end{example} Motivated by Theorem \ref{primitive matchings}, one may ask how large primitive subspaces can be. This topic is studied in \cite{Aliabadi 4, Aliabadi 1}. It is proved in \cite{Aliabadi 4} that if $A$ is a primitive $K$-subspace of $F$, where $K$ is an infinite field, then $\dim_K A\leq [F:K]-\psi(F,K)$, where \[\psi(F,K)=\max\left\{[M:K]:\, M\text{ is a proper intermediate field of }K\subset F\right\}.\] Note that in the above definition, ``proper intermediate field of $K\subset F$'' means $K\subset M\subsetneq F$. Hence we have $1\leq\psi(F,K)<[F:K]$. In particular, the dimension of the largest primitive subspace is determined in \cite{Aliabadi 4} when the base field is infinite. Later, in \cite{Aliabadi 1}, this result is generalized for all base fields as follows: \begin{proposition}\label{th2.1} Let $F$, $K$ and $\psi(F,K)$ be as above, and set $n=[F:K]$. Assume that $K$ is infinite and $K\subset F$ is simple. Then \begin{align*} \psi(F,K)+\phi(F,K)=n, \end{align*} where \[\phi(F,K)=\max\left\{\dim_KV:\, V\text{ is a primitive }\text{K-subspace of F}\right\},\] namely, $\phi(F,K)$ denotes the dimension of the largest primitive subspace.
\end{proposition} \begin{example}\label{er} Consider the finite field extension $\mathbb{Q}\subset \mathbb{Q} (\sqrt{2}, \sqrt{3})$. The proper intermediate fields $\mathbb{Q}(\sqrt{2})$, $\mathbb{Q}(\sqrt{3})$ and $\mathbb{Q}(\sqrt{6})$ all have degree $2$ over $\mathbb{Q}$, so $\psi (\mathbb{Q} (\sqrt{2}, \sqrt{3}),\mathbb{Q})=2$. Since $[\mathbb{Q} (\sqrt{2}, \sqrt{3}):\mathbb{Q}]=4$, Proposition \ref{th2.1} shows that the dimension of the greatest primitive $\mathbb{Q}$-subspace of $\mathbb{Q} (\sqrt{2}, \sqrt{3})$ is $2$. \end{example} In Theorem \ref{general field}, we generalize Proposition \ref{th2.1}. Note that the main tools required in the proof of Theorem \ref{general field} are linear covering results for vector spaces, stated as Lemma \ref{Covering} and Lemma \ref{Main Lemma union} in the next subsection. \subsection{Linear covering results} We begin with a well-known linear algebra theorem which asserts that a vector space over an infinite field cannot be written as a finite union of its proper subspaces; see \cite{Friedland-Aliabadi, Roman} for more details. However, this result does not hold when the base field is finite. For the finite case, let $V$ be a finite-dimensional vector space over $\mathbb{F}_{q}$, where $\mathbb{F}_{q}$ stands for the finite field of order $q$, with $q=p^r$ for some prime $p$ and $r\in\mathbb{N}$. We say a collection $\{W_i\}_{i\in I}$ of proper subspaces of $V$ is a \textit{linear covering} of $V$ if $V=\underset{i\in I}{\bigcup} W_i$. The \textit{linear covering number} $\mathrm{LC}(V)$ of a vector space $V$ of dimension at least $2$ is the least cardinality $\#I$ of a linear covering $\{W_{i}\}_{i \in I}$ of $V$. Under the condition $\dim_KV\geq2$, which is the necessary and sufficient condition for the existence of linear coverings, we have the following result from \cite{Heden}. See also \cite{Javaheri, Khare, Luh} for more developments on the topic of covering vector spaces.
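As a concrete instance of these covering notions: over $K=\mathbb{F}_2$ the plane $\mathbb{F}_2^2$ has exactly three proper nonzero subspaces (the three lines through the origin), and an exhaustive search confirms that the linear covering number is $\#K+1=3$. The enumeration below is an illustrative brute-force sketch; the variable names are ours.

```python
from itertools import combinations, product

q = 2
V = list(product(range(q), repeat=2))          # the plane F_2^2

def span(v):
    # the 1-dimensional subspace of F_q^2 spanned by v
    return frozenset(tuple((k * x) % q for x in v) for k in range(q))

lines = {span(v) for v in V if v != (0, 0)}    # proper nonzero subspaces

def covers(subs):
    # does a family of subspaces cover all of V?
    return set().union(*subs) == set(V)

# minimal number of proper subspaces whose union is V
lc = min(r for r in range(1, len(lines) + 1)
         if any(covers(c) for c in combinations(lines, r)))
assert lc == q + 1   # matches LC(V) = #K + 1 with #K = 2
```

Any two of the three lines cover only three of the four vectors, so all three are needed, in agreement with the covering result quoted from \cite{Heden}.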
\begin{lemma}\label{Covering} If $\dim_KV$ and $\#K$ are not both infinite, then $\mathrm{LC}(V)=\#K+1$. \end{lemma} Having the covering theorem for infinite base fields along with Lemma \ref{Covering} at hand, we obtain the following covering result for arbitrary base fields. \begin{lemma}\label{Main Lemma union} Let $V$ be a finite-dimensional vector space over a field $K$ and let $\mathcal{V}=\{V_i\}_{i=1}^{m}$ be a finite family of proper subspaces of $V$ with $m\leq\#K$. Then $V\neq\cup_{i=1}^{m}V_{i}$. \end{lemma} \begin{proof} In case $K$ is finite, the claim is an immediate consequence of Lemma \ref{Covering}. If $K$ is infinite, it follows from Theorem 1.2 in \cite{Roman}. \end{proof} The following short lemma will be used in the proof of Theorem \ref{general field}: \begin{lemma}\label{Sum of subspaces} Let $A$, $B$ and $C$ be subspaces of a vector space $V$, and suppose $A\cap B=0$ and $(A+B)\cap C=0$. Then $(A+C)\cap B=0$. \end{lemma} \begin{proof} Assume to the contrary that $(A+C)\cap B\neq0$. Let $b\in (A+C)\cap B$ with $b\neq0$, and write $b=a+c$ with $a\in A$ and $c\in C$. Then $c=b-a$ lies in $C\cap (A+B)=0$, so $c=0$. Thus $a=b$ lies in $A\cap B=0$, so $a=b=0$. This contradicts the fact that $b\neq0$. \end{proof} In the proof of Theorem \ref{general field}, we assume that $\dim_K V=n>2$; the cases $n=1$ and $n=2$ are straightforward to verify. \begin{proof}[Proof of Theorem \ref{general field}] Let $t=\dim_K(T)$. If $S\in\mathcal{F}$, then $T\cap S=0$, so \[n=\dim_KV\geq \dim_K(T+S)=\dim_K(T)+\dim_K(S)=t+\dim_K(S),\] and thus $\dim_K(S)\leq n-t$ for all $S\in\mathcal{F}$. Hence $s\leq n-t$, that is, $t\leq n-s$. To complete the proof, we show that $t=n-s$. Suppose, to the contrary, that $t<n-s$, so that $n>s+t$. Then \[\dim_K(V)=n>s+t\geq\dim_K(S)+\dim_K(T)\geq\dim_K(S+T),\] for all $S\in\mathcal{F}$. Then $S+T$ is a proper subspace of $V$ for all $S\in\mathcal{F}$.
By Lemma \ref{Main Lemma union}, there exists a vector $v\in V$ such that $v\notin S+T$ for all $S\in\mathcal{F}$, and thus $(S+T)\cap Kv=0$ for all $S\in\mathcal{F}$. Also, by the definition of $T$, we have $S\cap T=0$, and thus by Lemma \ref{Sum of subspaces}, we have $(Kv+T)\cap S=0$ for all $S\in\mathcal{F}$. Now $v\notin S+T$, so $v\notin T$, and thus $T\subsetneq Kv+T$. This contradicts the maximality of $T$. \end{proof} Observe that the condition $m\leq\#K$ in Lemma \ref{Main Lemma union} also appears in Theorem \ref{general field}, as the covering theorem for vector spaces over finite fields plays a crucial role in its proof. However, no such restriction arises when the base field is infinite. Inspired by this observation, to determine whether the condition $\# \mathcal{F}\leq\#K$ is removable from Theorem \ref{general field}, one may consider finite-dimensional vector spaces over finite fields. In what follows, we first show by an example that the condition $\#\mathcal{F}\leq\#K$ in Theorem \ref{general field} cannot be relaxed. Our example shows that the upper bound $\#K$ on the number of subspaces is sharp. \begin{example} Consider the vector space $\mathbb{F}_{2}^{2}$ over $\mathbb{F}_{2}$ and the $\mathbb{F}_{2}$-subspaces $V_1=\{(0,0), (1,0)\}$, $V_2=\{(0,0), (0,1)\}$ and $V_3=\{(0,0), (1,1)\}$. The family $\mathcal{V}=\{V_i\}_{i=1}^{3}$ violates the condition $\# \mathcal{F}\leq\#K$ in Theorem \ref{general field}. The largest possible dimension of a subspace $W$ of $\mathbb{F}_{2}^{2}$ which intersects every $V_i$ trivially is zero, whereas $n-s=2-1=1$. Hence, Theorem \ref{general field} fails for $\mathcal{V}$.
\end{example} \begin{question}\label{Conjecture} Let $V$ be an $n$-dimensional vector space over a field $K$ and let $\mathcal{V}=\{V_i\}_{i<n,\,i\mid n}$ be a finite family of subspaces of $V$, indexed by the positive proper divisors of $n$, satisfying the following two properties: i) $\dim_KV_i=i$; ii) $V_i\cap V_j=V_{\gcd(i,j)}$. Is it then true that the dimension of the largest possible subspace of $V$ which intersects every member of $\mathcal{V}$ trivially is given by \[\dim_KV-\text{the largest proper divisor of }n\,?\] \end{question} \begin{remark} Along the same line of reasoning as in the proof of Theorem \ref{general field}, one may settle the question in the case of an infinite base field by invoking the fact that a vector space over an infinite field cannot be written as a finite union of its proper subspaces. Therefore, everything boils down to the case where the base field is finite. We believe that handling this case requires stronger tools than the covering results for vector spaces over finite fields. \end{remark} \subsection{A connection to a group theory result} There is a vast literature, and ongoing investigation, on linear analogues of existing results in group theory. As a case in point, a recent result due to Bachoc et al.\ \cite{Bachoc} gives the linearization of a theorem of Kneser on the size of certain subsets of an abelian group. We consider the following scenario in group theory. Let $\mathbb{Z}/p^r\mathbb{Z}$ denote the cyclic group of order $p^r$, where $p$ is a prime and $r\in \mathbb{N}$. Denote the order of the greatest proper subgroup of $\mathbb{Z}/p^r\mathbb{Z}$ by $\psi(\mathbb{Z}/p^r\mathbb{Z})$, and the number of generators of $\mathbb{Z}/p^r\mathbb{Z}$ by $\phi(\mathbb{Z}/p^r\mathbb{Z})$. Since $p$-groups have subgroups of index $p$, we have $\psi(\mathbb{Z}/p^r\mathbb{Z})=p^{r-1}$.
Also, it is well known that $\phi(\mathbb{Z}/p^r\mathbb{Z})=\varphi(p^r)$, where $\varphi$ stands for Euler's totient function. According to Euler's product formula, $\varphi(p^r)=p^r\left(1-\frac{1}{p}\right)=p^r-p^{r-1}$. Therefore, \begin{align} \label{primitive subspace theorem} \psi(\mathbb{Z}/p^r\mathbb{Z})+\phi(\mathbb{Z}/p^r\mathbb{Z})=p^r. \end{align} Note that Proposition \ref{th2.1} can be regarded as a linear analogue of relation \eqref{primitive subspace theorem}. Indeed, the linear analogues of ``the order of a group'', ``the order of its largest proper subgroup'' and ``the number of generators of a cyclic group'' are ``the degree of a field extension'', ``the degree of its largest proper intermediate subfield'' and ``the dimension of the largest primitive vector space'', respectively. In a sense, Proposition \ref{th2.1} (for general base fields) is stronger than the original group theory statement, which applies only to cyclic groups of prime power order: the proposition applies to all finite degree extensions $F/K$ that are ``cyclic'' (i.e., monogenic as a $K$-algebra). It may be possible to formulate a reasonable analogue for all finite cyclic groups. \subsection{Partitioning finite fields} Consider the field extension $\mathbb{F}_q\subset\mathbb{F}_{q^n}$, where $n\in \mathbb{N}$ and $q=p^{r}$ for some prime $p$ and $r \in \mathbb{N}$. Let $V$ be an $\mathbb{F}_q$-subspace of $\mathbb{F}_{q^n}$. We call a set $\mathcal{P}=\{W_i\}_{i=1}^\ell$ of $\mathbb{F}_q$-subspaces of $\mathbb{F}_{q^n}$ a \textit{partition} of $V$ if every nonzero element of $V$ lies in $W_i$ for exactly one $i$. See \cite{Heden} for more results on partitions of finite vector spaces. In the following observation, we provide a partition of $\mathbb{F}_{q^n}$ using its primitive $\mathbb{F}_q$-subspaces. \begin{observation}\label{partition} Consider the field extension $\mathbb{F}_q\subset\mathbb{F}_{q^n}$.
Let $M$ be an intermediate subfield of $\mathbb{F}_q\subset\mathbb{F}_{q^n}$ for which $\psi(\mathbb{F}_{q^n}, \mathbb{F}_q)=[M:\mathbb{F}_q]$. Let $W$ be an $\mathbb{F}_q$-primitive subspace of $\mathbb{F}_{q^n}$ such that $\phi(\mathbb{F}_{q^n}, \mathbb{F}_q)=\dim_{\mathbb{F}_q}W$. Assume that $W$ has a subspace partition $\{W_1,\ldots,W_l\}$, where $\dim_{\mathbb{F}_q}W_i=t_i\leq\psi(\mathbb{F}_{q^n},\mathbb{F}_q)$ for $1\leq i\leq l$. Then, for each $\alpha\in M\setminus\{0\}$ and each $1\leq i\leq l$, one can define a $t_i$-dimensional subspace $W_{i_\alpha}$ of $\mathbb{F}_{q^n}$ such that $W$, $M$ and the subspaces $W_{i_\alpha}$ form a partition of $\mathbb{F}_{q^n}$. \end{observation} \begin{proof} For each subspace $W_i$, $1\leq i\leq l$, let $T_i$ be an injective linear transformation from $W_i$ into $M$; such a map exists since $t_i\leq\dim_{\mathbb{F}_q}M$. For each $\alpha\in M\setminus\{0\}$, we associate with it the subspace \begin{align*} W_{i_\alpha}=\{w+\alpha T_i(w):\ w\in W_i\}. \end{align*} Since $W_i\cap M=\{0\}$ and $W_i\cap W_j=\{0\}$ for $i\neq j$, one can easily verify that the $W_{i_\alpha}$'s, $M$ and $W$ form a partition of $\mathbb{F}_{q^n}$ into subspaces. \end{proof} \section{The Linear Matching Property, Improved}\label{The Linear Matching Property} Our main goal in this section is to formulate and prove a linear analogue of Proposition \ref{Matching generalziation}. For this purpose, we employ the following result from \cite{Bachoc}, which is the linear version of a famous theorem due to Kneser \cite[page 116, Theorem 4.3]{Nathanson}. Note that in the following proposition, $\langle AB\rangle$ stands for the $K$-subspace of $F$ spanned by the subset \[ AB=\{ab:\ a\in A\, \text{and}\, b\in B\} ,\] which is the Minkowski product of the subspaces $A$ and $B$. \begin{proposition}\label{subpaces product} Let $K\subset F$ be a field extension, and let $A,B\subset F$ be nonzero finite-dimensional $K$-subspaces of $F$. Let $M$ be the subfield of $F$ which stabilizes $\langle AB\rangle$, i.e.
$M=\{x\in F: x\langle AB\rangle \subset \langle AB \rangle\}$. Then \begin{align*} \dim_K \langle AB \rangle \geq\dim_K A+\dim_K B-\dim_K M. \end{align*} \end{proposition} For nonempty subsets $C$ and $D$ of $F$ we have $K\langle C\cup D \rangle=K\langle C \rangle+K\langle D\rangle$, the sum of the two subspaces $K\langle C \rangle$ and $K\langle D\rangle$. We also have $K\langle CD \rangle=K\langle K\langle C\rangle\, K\langle D\rangle\rangle$. The following proposition, due to Eliahou and Lecouvey, which formulates the matching property in terms of suitable dimension estimates, is also the engine behind our proof. \begin{proposition}\label{dimension estimate} Let $K\subset F$ be a field extension and $A$ and $B$ be two $n$-dimensional $K$-subspaces of $F$. Suppose that $\mathcal{A}=\{a_1,\ldots,a_n\}$ is a basis of $A$. Then $\mathcal{A}$ can be matched to a basis of $B$ if and only if, for all $J\subset \{1,\ldots,n\}$, we have: \begin{align*} \dim_K\bigcap_{i\in J}\left(a_i^{-1}A\cap B\right)\leq n-\# J. \end{align*} \end{proposition} We will also use the following definition, which is analogous to the notion of ``coset'' in the group setting. \begin{definition} Let $K\subset F$ be a field extension and $M$ be an intermediate subfield of it. Then a {\it nontrivial linear translate} of $M$ is a $K$-subspace of the form $xM$ for a nonzero element $x\in F$. \end{definition} Now, we are ready to prove Theorem \ref{Linear analog of matchings}. We acknowledge that a similar method has also been suggested in \cite{Aliabadi 3}. It is worth pointing out that the sufficient condition of Theorem \ref{Linear analog of matchings} may be seen as the linear analogue of Proposition \ref{Matching generalziation}. \begin{proof}[Proof of Theorem \ref{Linear analog of matchings}.] Assume to the contrary that $A$ is not matched to $B$.
Then, by Proposition \ref{dimension estimate}, there exists a basis $\mathcal{A}=\{a_1,\ldots,a_n\}$ of $A$ and $J\subset \{1,\ldots,n\}$ such that \begin{align*} \dim_K\bigcap_{i\in J}\left(a^{-1}_iA\cap B\right)> n-\# J. \end{align*} Let $S=\langle a_i: i\in J\rangle$ be the $K$-subspace of $A$ spanned by the $a_i$, $i\in J$, let $U=\underset{i\in J}{\bigcap}\left(a_i^{-1}A\cap B\right)$ and $U_0=\langle U\cup \{1\}\rangle$. By Proposition \ref{subpaces product}, there exists an intermediate subfield $M$ of $K\subset F$ such that \begin{align} \dim_K\langle U_0S\rangle \geq \dim_KU_0+\dim_KS-\dim_K M, \end{align} where $M$ is the stabilizer of $\langle U_0S\rangle$. Define $U'=M\cup U$. Invoking Proposition \ref{subpaces product} one more time, one can find an intermediate subfield $M'$ of $K\subset F$ for which \begin{align}\label{eqn3} \dim_K\langle U'S\rangle\geq\dim_K\langle U'\rangle+\dim_K S-\dim_K M', \end{align} where $M'$ is the stabilizer of $\langle U'S\rangle$. The following computations show that $\langle U'S\rangle=\langle U_0S\rangle$: \begin{align}\label{eqn4} \langle U'S\rangle=&\langle(M\cup U)S\rangle=\langle MS\rangle+\langle U_0S\rangle\notag\\ =&\langle MS\rangle+\langle U_0SM\rangle=M\langle S\cup U_0S\rangle\notag\\ =&M\langle U_0S\rangle=\langle U_0S\rangle. \end{align} Then, the stabilizers of these two subspaces must be the same; that is, $M=M'$. Then we would have \begin{align}\label{eqn5} \dim_K\langle U'S\rangle\geq\dim_K\langle U'\rangle+\dim_KS-\dim_K M. \end{align} Having \eqref{eqn4} and \eqref{eqn5} at hand and using the inclusion-exclusion principle for vector spaces we obtain: \begin{align}\label{eqn6} \dim_K\langle U_0S\rangle=&\dim_K\langle U'S\rangle\notag\\ \geq& \dim_K\langle U'\rangle+\dim_K S-\dim_K M\notag\\ =&\dim_K\langle M\cup U\rangle+\dim_KS-\dim_KM\notag\\ =&\dim_KM+\dim_KU-\dim_K(M\cap U)+\dim_K S-\dim_KM\notag\\ =&\dim_KU+\dim_KS-\dim_K(M\cap U).
\end{align} We now have two cases for $M\cap U$: \begin{enumerate} \item If $M\cap U=\{0\}$, then, since $\dim_KU>n-\#J$ and $\dim_KS=\#J$, inequality \eqref{eqn6} yields $\dim_K\langle S\cup SU\rangle=\dim_K\langle U_0S\rangle>n$. On the other hand, since $S\cup SU\subset A$, we would have $\dim_KA>n$, contradicting the assumption $\dim_KA=n$. \item If $M\cap U\neq\{0\}$, then $M\cap B\neq\{0\}$. Choose a nonzero element $b\in M\cap B$. Also let $x$ be a nonzero element of $US$. Then $xK(b)\subset USM\subset A$, contradicting the assumption that $A$ does not contain any nontrivial linear translate of $K(b)$. \end{enumerate} Therefore $A$ is matched to $B$, as claimed. \end{proof} Back in the group setting, for two finite subsets $A$ and $B$ of a group $G$ with $\#A=\#B>0$, the condition $A\cap (A+B)=\emptyset$ clearly implies that $A$ is matched to $B$. One can go further and argue that every bijection from $A$ to $B$ is a matching. The linear analogue of this statement is studied in \cite[Theorem 6.3]{Eliahou 3}, in which it is proved that for $n$-dimensional $K$-vector spaces $A$ and $B$, the condition $A\cap \langle AB \rangle=\{0\}$ not only implies that $A$ is matched to $B$, but also that every isomorphism from $A$ to $B$ is a strong matching. Another obvious observation in the group setting is that if $A$ is matched to $B$, then $A+B\neq A$. In Theorem \ref{vector space span} we formulate the linear analogue of this observation. To prove Theorem \ref{vector space span} we will need the following lemma, whose proof proceeds along the same lines as that of Lemma \ref{Coset}. \begin{lemma}\label{linear translate} Let $K\subset F$ be a field extension and $A$ and $B$ be finite-dimensional $K$-subspaces of $F$. Assume further that $0<\dim_{K} A\leq \dim_{K} B$ and $\langle AB \rangle=A$. Then $B$ is a subfield of $F$ and $A$ is a linear translate of $B$. \end{lemma} \begin{proof} If $b$ is a nonzero element of $B$, then the linear transformation $T:A\to Ab$ given by multiplication by $b$ is injective, and thus $\dim_KAb=\dim_KA$.
Since $Ab\subset \langle AB \rangle=A$ and $A$ is finite-dimensional, it follows that $Ab=A$. Let $M=\{x\in F: Ax=A\}$. Then $B\subset M$ and $M$ is a subfield of $F$. Also $AM=A$. If $a\in A$ is nonzero, then $aM$ is a linear translate of the subfield $M$, and thus $\dim_K(aM)=[M:K]$ and we have $aM\subset AM=A$. Then $\dim_KB\geq \dim_KA\geq \dim_K(aM)=[M:K]\geq \dim_KB$, so $\dim_KB=[M:K]$ and $\dim_K A=\dim_K(aM)$. Since $B\subset M$ and $\dim_KB=[M:K]$, it follows that $B=M$, so $B$ is a subfield as claimed. Since $aM\subset A$ and $\dim_KA=\dim_K(aM)$, it follows that $A=aM$. We know that $M=B$, so $A=aM=aB$, and thus $A$ is the linear translate $aB$ of $B$. The proof is complete. \end{proof} \begin{proof}[Proof of Theorem \ref{vector space span}] Assume to the contrary that $\langle AB \rangle=A$. Then by Lemma \ref{linear translate}, $B$ is a subfield of $F$ and so $1\in B$. Applying Lemma 2.3 in \cite{Eliahou 3} to $A$ and $B$ implies that $A$ cannot be matched to $B$, contradicting our assumption. \end{proof} \subsection{Unmatchable subspaces in a field extension} The purpose of this subsection is to formulate the linear analogue of Theorem \ref{Unmatchable} in a field extension $K\subset F$. In the process, we use the dimension estimate for matchable subspaces (Proposition \ref{dimension estimate}), which is derived naturally from the linear version of Hall's marriage theorem. We assume that $K,F, A,B$ and $n$ are as in Section 3. We assume that $A$ is not matched to $B$. Our goal is to estimate the dimension, denoted by $M(A,B)$, of the largest subspace of $A$ which is matched to a subspace of $B$. Since $A$ is not matched to $B$, there exists a basis $\mathcal{A}=\{a_1,\ldots,a_n\}$ which fails the dimension criterion, namely, for some $J\subset\{1,\ldots,n\}$, \begin{align*} \dim\bigcap_{i\in J}\left(a_i^{-1}A\cap B\right)> n-\# J.
\end{align*} Define: \[D_{\mathcal{A}}(B)=\max\{\#J:\ J\subset\{1,\ldots,n\},\;\; \mathrm{and}\;\;\dim\underset{i\in J}{\bigcap}\left(a_i^{-1}A\cap B\right)>n-\#J\},\] and \[D(A,B)=\max\{D_{\mathcal{A}}(B):\ \mathcal{A}\; \text{is a basis for}\;A\}.\] We now formulate the linear analogue of Theorem \ref{Unmatchable} as the following conjecture. \begin{conjecture} Let $K\subset F$ be a field extension and $A$ and $B$ be two $n$-dimensional $K$-subspaces of $F$ with $n\geq1$, and $\langle AB\rangle \neq A$. Assume that $A$ is not matched to $B$. Then $M(A,B)=n-D(A,B)$. \end{conjecture} Note that, according to Theorem \ref{vector space span}, if $\langle AB\rangle=A$, then $M(A,B)=0$. \section{Matching and Dimension $m$-Intersection Property}\label{intersection property} The main objective of this section is to present a linear algebra result whose proof relies on tools utilized in matching theory. The first tool, which is heavily used in matching theory, is called the $m$-intersection property. The concept of the $m$-intersection property was first studied in \cite{Brualdi} to investigate the sparse basis problem (see \cite{Friedland} for more results on this problem). Following Brualdi, Friedland and Pothen \cite{Brualdi}, we say that the family $\mathcal{J}=\{J_1,\ldots,J_t\}$ of subsets of $\{1,\ldots,n\}$, each of cardinality $m-1$, satisfies the {\it $m$-intersection property} provided that \begin{align*} \#\bigcap_{i\in J}J_i\leq m-\# J, \end{align*} for any $J\subset\{1,\ldots,t\}$, $J\neq\emptyset$. It is known that for a given family $\mathcal{J}$ one can check efficiently, i.e.\ in polynomial time, whether $\mathcal{J}$ satisfies the $m$-intersection property.
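The defining inequality of the $m$-intersection property is easy to test directly. The following is a brute-force sketch (an illustration only, not the polynomial-time algorithm alluded to above; the function name is ours):

```python
from itertools import combinations

def has_m_intersection_property(family, m):
    """Check whether a family J_1, ..., J_t of subsets of {1, ..., n}
    (each of cardinality m-1) satisfies the m-intersection property:
    |intersection of J_i over i in J| <= m - |J| for every nonempty
    J subset of {1, ..., t}.  Brute force over all nonempty
    subfamilies, so exponential in t."""
    sets = [frozenset(S) for S in family]
    for r in range(1, len(sets) + 1):
        for sub in combinations(sets, r):
            if len(frozenset.intersection(*sub)) > m - r:
                return False
    return True

# The 2-subsets {1,2}, {2,3}, {1,3} of {1,2,3} satisfy the
# 3-intersection property: pairwise intersections have size 1 <= 3-2,
# and the triple intersection is empty (0 <= 3-3).
print(has_m_intersection_property([{1, 2}, {2, 3}, {1, 3}], 3))  # True
# Two copies of the same 2-set fail: the pair intersects in 2 > 3-2.
print(has_m_intersection_property([{1, 2}, {1, 2}], 3))          # False
```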
This notion is generalized in \cite{Aliabadi 4} as follows: \begin{definition} The family $\mathcal{J}=\{J_1,\ldots,J_t\}$ of subsets of $\{1,\ldots,n\}$, each of cardinality $\leq m-1$, satisfies the {\it weak $m$-intersection property} provided \begin{align*} \#\bigcap_{i\in J}J_i\leq m-\# J, \end{align*} for all $J\subset \{1,\ldots,t\}$, $J\neq\emptyset$. \end{definition} Given an abelian group $G$ and finite subsets $A$ and $B$ of $G$ with $\# A=\# B=n>0$, it is shown in \cite{Aliabadi 4} that whether $A$ is matched to $B$ can be characterized in terms of whether a certain family of subsets of $A$ possesses the weak $n$-intersection property. The intersection property introduced above may be of interest in its own right. The following result is proved in \cite{Brualdi}. \begin{theorem}\label{m-intersection property 1} Let $J_1,J_2,\ldots,J_t$ be $t<m$ subsets of $\{1,\ldots,n\}$, each of cardinality $m-1$, and assume that the $m$-intersection property \begin{align}\label{eqq*} \#\bigcap_{i\in J}J_i\leq m-\# J, \end{align} holds for all nonempty subsets $J$ of $\{1,\ldots,t\}$. Then there exist $m-t$ subsets $J_{t+1},\ldots,J_m$ of $\{1,\ldots,n\}$ of cardinality $m-1$ such that \eqref{eqq*} holds for all nonempty subsets $J$ of $\{1,2,\ldots,m\}$. \end{theorem} We aim to provide the linear analogue of Theorem \ref{m-intersection property 1}. To this end, we first define the dimension $m$-intersection property, which is analogous to the notion of the $m$-intersection property in set theory. \begin{definition} Let $W$ be an $n$-dimensional vector space and $U_1,\ldots,U_t$ be $t$ subspaces of $W$ of dimension $m-1$. We say that the family $\mathcal{U}=\{U_1,\ldots,U_t\}$ satisfies the {\it dimension $m$-intersection property} provided that \begin{align*} \dim\bigcap_{i\in J}U_i\leq m-\#J, \end{align*} for any $J\subseteq\{1,\ldots,t\}$, $J\neq\emptyset$.
\end{definition} Note that the dimension $m$-intersection property is used to study matchable bases of subspaces in a given field extension; see \cite{Aliabadi 4} for more details. We now formulate the linear analogue of Theorem \ref{m-intersection property 1}, whose proof is obtained by a simple adaptation of the proof of Lemma 5.1 in \cite{Brualdi} to our linear setting. \begin{observation}\label{m-intersection property 2} Let $W$ be an $n$-dimensional vector space and $U_1,U_2,\ldots,U_t$ be $t<m$ subspaces of $W$, each of dimension $m-1$, and assume that the dimension $m$-intersection property \begin{align}\label{eqq**} \dim\bigcap_{i\in J}U_i\leq m-\# J, \end{align} holds for all nonempty subsets $J$ of $\{1,\ldots,t\}$. Then there exist $m-t$ subspaces $U_{t+1},\ldots,U_m$ of $W$ of dimension $m-1$ such that \eqref{eqq**} holds for all nonempty subsets $J$ of $\{1,2,\ldots,m\}$. \end{observation} \begin{proof} It suffices to change ``subsets'', ``cardinality'', and ``$m$-intersection property'' to ``subspaces'', ``dimension'', and ``dimension $m$-intersection property'', respectively, in the proof of Lemma 5.1 in \cite{Brualdi}. The same argument, together with the inclusion-exclusion principle for vector spaces, then completes the proof. \end{proof} \subsection{Linear analogue of Hall's marriage theorem} The second tool used in the proof of Theorem \ref{linear algebra} is a linear analogue of Hall's marriage theorem. Let $W$ be a vector space over a field $K$ and $\mathcal{U}=\{U_1,\ldots,U_m\}$ be a family of $K$-subspaces of $W$. A {\it free transversal} for $\mathcal{U}$ is a set of linearly independent vectors $\{x_1,\ldots,x_m\}$ in $W$ with $x_i\in U_i$, $1\leq i\leq m$. The following theorem, due to Rado \cite{Rado}, gives a necessary and sufficient condition for the existence of a free transversal for $\mathcal{U}$. \begin{theorem}\label{transversal} Let $W$, $K$ and $\mathcal{U}$ be as above.
Then $\mathcal{U}$ has a free transversal if and only if \begin{align*} \dim\sum_{i\in J} U_i\geq\#J, \end{align*} for all $J\subseteq \{1,\ldots,m\}$. \end{theorem} \subsection*{Notation} We shall use the following standard notation. We denote by \begin{align*} W^*=\left\{\psi:W\to K:\, \psi \text{ is a linear transformation}\right\} \end{align*} the {\it dual} of $W$. Moreover, for any subspace $V$ of $W$, we denote by \begin{align*} V^\perp=\left\{\psi\in W^*:\, V\subset \ker\psi\right\} \end{align*} the {\it orthogonal} of $V$ in $W^*$. We will also use the fact that $\dim V+\dim V^\perp=\dim W$. Having Observation \ref{m-intersection property 2} and Theorem \ref{transversal} at hand, we are ready to prove Theorem \ref{linear algebra}. \begin{proof}[Proof of Theorem \ref{linear algebra}.] According to Observation \ref{m-intersection property 2}, one may find $(n-1)$-dimensional subspaces $U_{t+1},\ldots,U_n$ such that the family $\{U_1,\ldots,U_n\}$ satisfies the dimension $n$-intersection property. Hence, for any $J\subseteq\{1,\ldots,n\}$, $J\neq\emptyset$, we have \begin{align}\label{eqq(..)} \dim\bigcap_{i\in J}U_i\leq n-\# J. \end{align} Taking orthogonals in the dual space $W^*$, we have \begin{align*} \dim\left(\bigcap_{i\in J}U_i\right)^\perp \geq \#J. \end{align*} Thus, \begin{align*} \dim \sum_{i\in J}U_i^\perp \geq \# J. \end{align*} It follows from Theorem \ref{transversal} that there exists a free transversal $\{\psi_1,\ldots,\psi_n\}\subset W^*$ for the family of subspaces $\{U_i^\perp\}_{i=1}^n$. Since $\psi_1,\ldots,\psi_n$ are linearly independent and $\dim W^*=n$, the set $\{\psi_i\}_{i=1}^n$ forms a basis for $W^*$. Let $\{x_1,\ldots,x_n\}$ be the basis of $W$ whose dual basis is $\{\psi_i\}_{i=1}^n$, that is, $x_i^*=\psi_i$, $1\leq i\leq n$. Then $U_i\subseteq \ker x_i^*$. Condition \eqref{eqq(..)} implies that $U_i\neq U_j$ for some $1\leq i<j\leq n$, and since $\dim U_i=\dim U_j=n-1$, then $U_i+U_j=W$.
Altogether, we have $W= U_i+U_j\subseteq \ker x_i^*+\ker x_j^*\subseteq W $, which implies the desired result. \end{proof} \section{Future work}\label{Future} The matching problems considered in this paper can be reformulated for matroids in a field extension in ways that seem, at first glance, straightforward; let $K\subset F$ be a field extension, $A$ be a subset of $F$, and $M_1$ and $M_2$ be two matroids over $K(A)$, where $K(A)$ stands for the subfield of $F$ generated by $A$ over $K$. Then, adapting the notion of matchable subspaces, one may define matchable matroids. The main obstacle in finding the matroid analogue of matchings may be the matroid version of Proposition \ref{subpaces product}. We hope that the techniques presented in \cite{Bachoc} have more general applicability, especially in the direction of generalizing these statements to matroids in a field extension. \section*{Acknowledgement} We are deeply grateful to Shira Zerbib and Khashayar Filom for their constant encouragement, generosity, and for many insightful conversations. This work was supported by the Iowa State University Dean's High Impact Award for undergraduate summer research in mathematics.
https://arxiv.org/abs/2107.09029
Conditions for matchability in groups and field extensions
The origins of the notion of matchings in groups spawn from a linear algebra problem proposed by E. K. Wakeford [24] which was tackled in 1996 [10]. In this paper, we first discuss unmatchable subsets in abelian groups. Then we formulate and prove linear analogues of results concerning matchings, along with a conjecture that, if true, would extend the primitive subspace theorem. We discuss the dimension $m$-intersection property for vector spaces and its connection to matching subspaces in a field extension, and we prove the linear version of an intersection property result of certain subsets of a given set.
https://arxiv.org/abs/2106.12019
Length-Preserving Directions and Some Diophantine Equations
We study directions along which the norms of vectors are preserved under a linear map. In particular, we find families of matrices for which these directions are determined by integer vectors. We consider the two-dimensional case in detail, and also discuss the extension to the three-dimensional case.
\section{Introduction.} In the nice Webinar talk ``Eigenpairs in Maple'' of June 25, 2015 \cite{lopez}, Dr.~Robert Lopez discussed how to use {Maple} to find eigenvalues and eigenvectors (eigenpairs) of a matrix $A$. An eigenvector of $A$ is a (nonzero) vector whose direction is preserved under multiplication by $A$. By the end of the talk, Dr.~Lopez, as an aside, asked the question: what about preserving the {\it magnitude\/} of the vector, rather than its direction? In other words, what about (nonzero) vectors $\ve v$ such that $\norm{v} = \normm{A \ve v}$, where $\norm{v}$ is the usual Euclidean norm? He provided the example of $$ A = \begin{pmatrix} 4&3\cr-2&-3 \end{pmatrix} ; $$ regarded as a map from ${\Bbb R}^2$ to itself, it preserves the norms, but not the directions, of the vectors with integer coordinates $\ve v_1 = \langle 1, -1 \rangle$ and $\ve v_2 = \langle 17, -19 \rangle$. He explained that he had found such a matrix by using {Maple} and ``for-loops'' to find matrices $A$ for which the equation $\norm{ v} = \normm{A \ve v}$ would have integer solutions. In this article we explore this intriguing idea, obtain several families of such ``nice'' $2\times 2$ matrices, then consider a few $3 \times 3$ examples, and discuss a couple of related quadratic Diophantine equations. To avoid repetition, when considering an equation involving a vector $\ve v$, by a {\it solution\/} $\ve v$ we will consistently mean a {\it nontrivial solution\/} $\ve v \neq \ve 0$. \section{General considerations.} A given $n \times n$-matrix $A$ with real entries generates a related linear map $\ve v \mapsto A \ve v$ of ${\Bbb R}^n$ into itself; we will use the same letter $A$ to represent this map. We seek nonzero vectors $\ve v$ whose norm is preserved under this linear map; in other words, we seek (nonzero) solutions of the equation \begin{equation} \norm{ v} = \normm{A \ve v}.
\label{norma} \end{equation} First of all, observe that, since $\norm{\lambda \ve v} = \abs{\lambda}\cdot\norm{\ve v}$, then the entire line generated by any nonzero solution of (\ref{norma}) will consist of vectors whose norm is preserved by $A$; we will call these lines the {\it norm-preserving lines}. Next, if $A$ has eigenvalue $1$, or $-1$, then the corresponding eigenspace will consist entirely of solutions of (\ref{norma}) as well. The interesting case, though, is when there are nonzero solutions of (\ref{norma}) that are {\it not\/} eigenvectors. This may happen {\it even\/} if $A$ has an eigenvalue $\pm 1$. For example, the matrix $$ A = \begin{pmatrix} 1&-8 \cr 0 & 3 \end{pmatrix} $$ has eigenvalue $1$ with eigenline determined by $\ve v = \langle 1, 0 \rangle$, but also has another, noninvariant, norm-preserving line, determined by $\ve w = \langle 9, 2 \rangle$. Along this line, the map $A$ acts like a rotation. At the other end of the spectrum we have the case of an orthogonal matrix $A$, for which {\it every\/} line through the origin is norm-preserving. In the $2\times 2$ case these are typically rotations, for which these lines are noninvariant; however, all lines are rotated by the same angle under $A$. In the general case this does not happen: each norm-preserving line is usually rotated by a different angle. \section{The $2 \times 2$ case.} Let us discuss nonzero solutions of equation (\ref{norma}) for the case of a $2 \times 2$ real-valued matrix $A$. We will first obtain conditions for existence of such solutions, and next, find families of norm-preserving lines determined by integer vectors. \subsection{Existence of a solution. } Let us study solutions of equation (\ref{norma}) for a general real-valued $2 \times 2$-matrix $$ A = \begin{pmatrix} a&b \\ c&d \end{pmatrix} . 
$$ Equation (\ref{norma}) is equivalent to $\norm{ v}^2 = \normm{A \ve v}^2$, or $(\ve v, \ve v) = (A \ve v, A \ve v)$, where $ (\ve v, \ve w)$ is the usual Euclidean inner product. The right-hand side becomes $$\normm{A \ve v}^2 = (A \ve v, A \ve v) = (\ve v, A^t A \ve v) = (\ve v, B \ve v), $$ where $A^t$ is the transpose of $A$, and $B = A^t A$. Thus, equation (\ref{norma}) is equivalent to \begin{equation} (\ve v, (B - I) \ve v) = 0 , \label{norma2} \end{equation} where $I$ is the identity matrix. Further, we have \[ B = A^t \, A = \begin{pmatrix} a^2 + c^2 & ab+cd \\ ab+cd & b^2 +d^2 \end{pmatrix} = \begin{pmatrix} m & p \\ p & n \end{pmatrix}, \] where \begin{eqnarray} m & = & a^2 + c^2 , \label{i}\\ n & = & b^2 + d^2 , \label{ii} \\ p & = & ab + cd , \label{iii} \end{eqnarray} so that if we denote $\ve v = \langle x, y \rangle$, then the quadratic form on the left-hand side of (\ref{norma2}) is \begin{equation} \Phi(x, y) = (m-1)x^2 + 2p xy + (n-1) y^2 ; \label{forma} \end{equation} with this notation, (\ref{norma}) or, equivalently, (\ref{norma2}), is in turn equivalent to $\Phi(x, y) = 0$. We now prove the following result. \begin{theorem} Norm-preserving lines exist if and only if \begin{equation} a^2 + b^2 + c^2 + d^2 \ge 1 + \det(A)^2 . \label{condition} \end{equation} \end{theorem} We will provide two proofs of this fact, one analytic, and one geometric. \begin{proof}[First Proof] Norm-preserving lines are determined by $\ve v = \langle x, y \rangle$, where $(x, y)$ is a nontrivial solution of $\Phi(x, y) = 0$, where $\Phi(x, y)$ is given by (\ref{forma}). If $n=1$, that is, if $b^2 + d^2 = 1$, this equation becomes $$ (m-1) x^2 + 2pxy = x \left[ (m-1) x + 2p y \right] = 0, $$ which has the nontrivial solution $\ve v = \langle 0, 1\rangle$.
Also, in this case condition (\ref{condition}) is satisfied, since \begin{align*} &a^2 + b^2 + c^2 + d^2 - 1 - \det(A)^2 \cr &= a^2 + c^2 - \det(A)^2 = (a^2+c^2) - (ad - bc)^2 , \end{align*} which is equal to $(ab + cd)^2 = p^2$ under the assumption that $n = b^2 + d^2 = 1$. This follows from the identity $$ (a^2+c^2) - (ad - bc)^2 - (ab + cd)^2 = [1- (b^2 + d^2)](a^2 + c^2) . $$ Thus, in this case condition (\ref{condition}) holds. If $n \neq 1$, a nontrivial solution of $$ (m-1) x^2 + 2pxy + (n-1) y^2 = 0 $$ must satisfy $x \neq 0$. Dividing by $x^2$ and solving for $\frac yx$, we obtain \begin{equation} {y \over x} = {- p \pm \sqrt{p^2 - (m-1)(n-1)} \over n-1} . \label{eqForRatio} \end{equation} A solution will exist if and only if the discriminant $ p^2 - (m-1)(n-1) $ is non-negative. Further, $$ p^2 - (m-1)(n-1) = (p^2 - mn) + m + n - 1, $$ and $$ mn - p^2 = \det(B) = \det(A)^2 = (ad - bc)^2 , $$ which can also be checked directly using (\ref{i})--(\ref{iii}). Thus, there is a norm-preserving line if and only if $$ m + n - 1 - \det(A)^2 = (a^2+c^2)+(b^2+d^2) - 1 -\det(A)^2 \ge 0 , $$ which coincides with condition (\ref{condition}). \end{proof} \begin{proof} [Second Proof] The eigenvalues $\lambda_1$ and $\lambda_2$ of the symmetric matrix $B = A^t A$ are (real and) nonnegative; assume $0 \le \lambda_1 \le \lambda_2$. By the extreme properties of eigenvalues (see, for example, \cite{gelfand} or \cite{shilov}), we have \begin{eqnarray*} \lambda_1 &=& \min_{\norm{v} = 1} \normm{A \ve v}^2 = \min_{\norm{v} = 1} (\ve v, B \ve v) \cr &\le& \max_{\norm{v} = 1} \normm{A \ve v}^2 = \max_{\norm{v} = 1} (\ve v, B \ve v) = \lambda_2 . \end{eqnarray*} Therefore, there will exist norm-preserving lines $\ve v$ such that $\norm{v} = \normm{A \ve v}$ if and only if $$ \lambda_1 \le 1 \le \lambda_2. 
$$ When the eigenvalues are strictly positive, this condition guarantees the intersection of the ellipse $(\ve v, B \ve v) = 1$, for which the half-axes are $$ {1 \over \sqrt{\lambda_1}} \qquad \hbox{ and } \qquad {1 \over \sqrt{\lambda_2}}, $$ with the unit circle $ \norm{v} = 1$ (see Figure \ref{firstEllipse}). \begin{figure}[htbp] \begin{center} \includegraphics[height=50mm]{ellipse1.eps} \caption{Illustration of R. Lopez's example.} \label{firstEllipse} \end{center} \end{figure} The eigenvalues of $B$ are found from the characteristic equation \begin{equation} \lambda^2 - t \lambda + \Delta = 0, \label{chareq} \end{equation} where $$ t = {\rm trace}\, {B} \qquad\hbox{ and }\qquad \Delta = \det B . $$ In terms of $B$, condition (\ref{condition}) reads as $$ t \ge 1 + \Delta . $$ Notice that $\Delta = \det(B) \ge 0$, so actually (\ref{condition}) implies $t \ge 1 + \Delta \ge 1 > 0$. Solving (\ref{chareq}) we find $$ \lambda = {t \over 2} \pm {1\over2} \sqrt{t^2 - 4 \Delta} . $$ Let us assume now that $t \ge 1 + \Delta$. Then $$ t^2 - 4 \Delta \ge (1 +\Delta)^2 - 4 \Delta = (1 - \Delta)^2 , $$ so that $\sqrt{t^2 - 4 \Delta} \ge \abs{1 - \Delta}$. For the largest eigenvalue $\lambda_2$ we have $$ \lambda_2 \ge {1 \over 2}(1 + \Delta) + {1 \over 2}\abs{1- \Delta} . $$ Considering both cases $1 \ge \Delta$ and $\Delta \ge 1$, we conclude that \begin{equation} \lambda_2 \ge \max\{ \Delta, 1\} . \label{boundOne} \end{equation} In particular, we have $\lambda_2\ge 1$, as desired. Now consider the smallest eigenvalue $\lambda_1$. Since $\lambda_1 \lambda_2 = \Delta$, we have $$ \lambda_1 = {\Delta \over \lambda_2} , $$ and condition (\ref{boundOne}) now implies that indeed $\lambda_1 \le 1$. Thus, condition (\ref{condition}) guarantees the existence of norm-preserving lines. The converse is straightforward.
\end{proof} \subsection{Families of matrices with integer solutions.} We want to find matrices $$ A = \begin{pmatrix} a&b \\ c&d \end{pmatrix} $$ with integer or rational entries, for which the norm-preserving lines are determined by vectors $\ve v$ with integer coordinates; we will call these {\it integer solution lines.}% \footnote{Of course, if, say, a norm-preserving line is determined by $\ve v = \langle 1, 2 \rangle$, then it is also determined by $\ve v = \langle \sqrt{2}, 2 \sqrt{2} \rangle$. The idea is that {\it there exists\/} a determining vector with integer coordinates. In the $2 \times 2$ case, this means we are interested in solution lines with rational slopes.} Recall that $$\normm{A \ve v}^2 = (A\ve v, A \ve v) = (\ve v, B \ve v), $$ where \[ B = A^t \, A = \begin{pmatrix} a^2 + c^2 & ab+cd \\ ab+cd & b^2 +d^2 \end{pmatrix} = \begin{pmatrix} m & p \\ p & n \end{pmatrix}, \] so that the solutions of $\normm{A \ve v}^2 = \norm{\ve v}^2$ are given by $\ve v = \langle x, y \rangle$, where $(x, y)$ is a (nontrivial) solution of \begin{equation} (m-1) x^2 + 2p xy + (n-1) y^2 = 0 . \label{diophantine1} \end{equation} If either $m=1$ or $n=1$, it is not hard to get families of solutions. For example, for the two-parameter family $$ A = \begin{pmatrix} \frac3{\strut 5} & b \cr \frac{\strut 4}5 & d \end{pmatrix} , $$ we get norm-preserving lines determined by $\ve v_1 = \langle 1, 0 \rangle$ and $$ \ve v_2 = \langle 5(1 - b^2 - d^2), 2(3b+4d) \rangle . $$ \medskip For the general case, let us assume that the entries of $A$ are integers, and seek integer solutions of the Diophantine equation (\ref{diophantine1}). Assuming $n \neq 1$ and solving for $y/x$, as we did in (\ref{eqForRatio}), we conclude that there are integer solution lines if and only if the discriminant $p^2 - (m-1)(n-1)$ is a perfect square, say, $k^2$. 
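This criterion is easy to test numerically. The following is a small sketch (the helper name is ours, and it is not the original Maple search): it computes $m$, $n$, $p$ from an integer matrix, tests whether the discriminant is a perfect square, and, if so, recovers the integer solution lines from (\ref{eqForRatio}). Applied to Lopez's matrix, it recovers $\ve v_1 = \langle 1, -1\rangle$ and $\ve v_2 = \langle 17, -19\rangle$:

```python
import math

def integer_solution_lines(a, b, c, d):
    """For an integer 2x2 matrix A = [[a, b], [c, d]] with n != 1,
    return primitive integer vectors (x, y) spanning the
    norm-preserving lines of rational slope.  Such lines exist exactly
    when the discriminant p^2 - (m-1)(n-1) is a perfect square."""
    m = a * a + c * c            # (i)
    n = b * b + d * d            # (ii)
    p = a * b + c * d            # (iii)
    assert n != 1, "the case n = 1 is treated separately in the text"
    disc = p * p - (m - 1) * (n - 1)
    if disc < 0:
        return []                # no norm-preserving lines at all
    k = math.isqrt(disc)
    if k * k != disc:
        return []                # lines exist, but with irrational slope
    lines = []
    for s in sorted({-p + k, -p - k}):
        g = math.gcd(n - 1, s)   # reduce (x, y) = (n-1, s) to lowest terms
        lines.append(((n - 1) // g, s // g))
    return lines

# R. Lopez's matrix A = [[4, 3], [-2, -3]]:
print(integer_solution_lines(4, 3, -2, -3))   # [(17, -19), (1, -1)]

# Check norm preservation directly for each returned vector.
for x, y in integer_solution_lines(4, 3, -2, -3):
    Ax, Ay = 4 * x + 3 * y, -2 * x - 3 * y
    assert x * x + y * y == Ax * Ax + Ay * Ay
```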
This leads to the new Diophantine equation $$ p^2 - (m-1)(n-1) = k^2 , $$ which can be rewritten as \begin{equation} (m-1)(n-1) = p^2 - k^2 = (p+k)(p-k) , \label{diophantine2} \end{equation} where $m, n, p$ are given by (\ref{i})--(\ref{iii}). We will now find two-parameter families of solutions. To this end, let us set \begin{eqnarray*} m-1 &=& p+k, \\ n-1 &=& p-k, \end{eqnarray*} or \begin{eqnarray} (a^2+c^2)-1 &=& p+k, \label{eq1} \\ (b^2+d^2)-1 &=& p-k, \label{eq2} \end{eqnarray} to which we adjoin the equation \begin{equation} 2(ab + cd) = 2p , \label{eq3} \end{equation} stemming from the definition (\ref{iii}) of $p$. Adding (\ref{eq1}) and (\ref{eq2}), and subtracting (\ref{eq3}), we get \begin{equation} (a-b)^2 + (c-d)^2 = 2, \label{eq4} \end{equation} from which we conclude that \begin{equation} \abs{a-b} = 1 \qquad \hbox { and } \qquad \abs{c-d} = 1. \label{absoluto} \end{equation} Next, subtracting (\ref{eq2}) from (\ref{eq1}), we obtain \begin{equation*} a^2 - b^2 + c^2 - d^2 = 2k, \end{equation*} or \begin{equation} (a-b)(a+b) + (c-d)(c+d) = 2k . \label{eq5} \end{equation} Considering all the possibilities in (\ref{absoluto}), we obtain the following four two-parameter families of matrices $$ \begin{pmatrix} a & a \pm 1 \cr c & c \pm 1 \end{pmatrix} , $$ with the two signs chosen independently. For each of them, we can use (\ref{eq5}) to find the value of $k$. Incidentally, their transposes, $$ \begin{pmatrix} a & c \cr a \pm 1 & c \pm 1 \end{pmatrix} , $$ also have integer solution lines. The example of Robert Lopez corresponds to the case $a-b = 1$ and $c - d = 1$, which leads to \begin{equation} A = \begin{pmatrix} a & a - 1 \cr c & c - 1 \end{pmatrix} ; \label{lopezGen} \end{equation} he chose the particular values $a = 4$ and $c = -2$. For the general solution of type (\ref{lopezGen}), we find from (\ref{eq5}) that $k = a + c - 1$.
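This value of $k$ is easy to sanity-check numerically. The sketch below (the helper name is ours) verifies that for the family (\ref{lopezGen}) the discriminant $p^2 - (m-1)(n-1)$ equals the perfect square $(a+c-1)^2$ over a range of integer parameters:

```python
def discriminant(a, b, c, d):
    """p^2 - (m-1)(n-1) for A = [[a, b], [c, d]], with m, n, p
    defined as in (i)-(iii)."""
    m, n, p = a * a + c * c, b * b + d * d, a * b + c * d
    return p * p - (m - 1) * (n - 1)

# For the family A = [[a, a-1], [c, c-1]] the text finds k = a + c - 1.
for a in range(-10, 11):
    for c in range(-10, 11):
        assert discriminant(a, a - 1, c, c - 1) == (a + c - 1) ** 2
print("discriminant equals (a + c - 1)^2 on the whole sampled range")
```

In particular, for Lopez's values $a=4$, $c=-2$ the discriminant is $1 = (4 - 2 - 1)^2$, consistent with $k=1$.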
Substituting this value into (\ref{eqForRatio}), which now looks like $$ {y \over x} = {- p \pm k \over n-1} , $$ we conclude that the norm-preserving lines are determined by the vectors $$ \ve v_1 = \langle 1, -1 \rangle $$ and $$ \ve v_2 = \langle (a-1)^2+(c-1)^2-1, 1- a^2 - c^2 \rangle . $$ The remaining cases can be discussed similarly. \begin{figure}[htbp] \begin{center} \includegraphics[height=50mm]{ellipse2.eps} \caption{Norm-preserving directions shown.} \label{secondEllipse} \end{center} \end{figure} Figure \ref{secondEllipse} illustrates the case $a=2$, $c=-3$; the two length-preserving lines are shown. The direction vectors are $\ve v_1 = \langle 1, -1 \rangle$ and $\ve v_2 = \langle 4, -3 \rangle$. As expected, the solution lines pass through the intersections of the ellipse $\normm{A \ve v} = 1$ with the unit circle. The corresponding picture for the matrix chosen by Robert Lopez was shown in Figure \ref{firstEllipse}; the solution lines are not depicted, since they are rather close to each other. \smallskip If the ellipse is tangent to the unit circle, as for example when $a = 3$, $c = -2$, we get only one integer solution line. \section{The $3 \times 3$ case.} The $3 \times 3$ case is considerably more complicated, as well as more interesting. If $A$ is a $3 \times 3$ real-valued matrix, also regarded as a linear map from ${\Bbb R}^3$ to itself, then in general $ \normm{A \ve v}^2 = 1$ is an ellipsoid, in terms of the coordinates of $\ve v = \langle x, y, z \rangle$. As in the $2\times2$ case, we have $$ \normm{A \ve v}^2 = (A \ve v, A \ve v) = (\ve v, B \ve v ) , $$ where $B = A^t A$ is a symmetric matrix with nonnegative eigenvalues. The equation for norm-preserving vectors, $\normm{A \ve v} = \norm{v}$, or $(\ve v, B \ve v) =( \ve v, \ve v)$, is equivalent to the cone \begin{equation} (\ve v, (B-I)\ve v) = 0 , \label{thecone} \end{equation} where $I$ is the identity matrix.
As in the $2 \times 2$ case, if we denote the eigenvalues of $B$ by $0 \le \lambda_1 \le \lambda_2 \le \lambda_3$, then there is a solution of (\ref{thecone}) if and only if \begin{equation} \label{conditionFor3by3} \lambda_1 \le 1 \le \lambda_3 , \end{equation} which guarantees a nonempty intersection of the ellipsoid (or degenerate ellipsoid) $\normm{A \ve v}^2 = 1$ with the unit sphere $\norm{v}^2 = 1$. If condition (\ref{conditionFor3by3}) is satisfied, then the cone determined by (\ref{thecone}) will pass through this intersection of the ellipsoid and the unit sphere. As an illustration, for the matrix in Example 1, Figure \ref{ExampleOne} (a) shows the ellipsoid $\normm{A \ve v}^2 = 1$ and the unit sphere, and Figure \ref{ExampleOne} (b) has the added solution cone $\normm{A \ve v} = \norm v$; compare with Figure \ref{secondEllipse} for the $2 \times 2$ case. \begin{figure}[ht] \centering \subfloat[]{{\includegraphics[height=53mm]{ellipsoidNsphere.eps}}} \quad \subfloat[]{{\includegraphics[height=53mm]{EllipsoidConeSphere.eps}}} \caption{Illustration for Example 1: (a) ellipsoid and sphere; (b) same, plus solution cone. \label{ExampleOne}} \end{figure} For the $3 \times 3$ case, however, it is considerably harder to find an expression for condition (\ref{conditionFor3by3}) directly in terms of the entries of $A$. Instead, we will limit ourselves to discussing several examples that exhibit the various possible outcomes regarding the existence of integer solution lines. Before giving examples, let us add another comment: unlike the $2 \times 2$ case, in the $3 \times 3$ case one cannot hope that in general {\it all\/} the lines in the cone (\ref{thecone}) for a given matrix $A$ with integer or rational coefficients will turn out to be integer solution lines. Examples 2 and 3 show, however, that we can still get infinitely many such lines.
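Condition (\ref{conditionFor3by3}) itself is straightforward to test numerically. As a sketch (using floating-point eigenvalues, so only up to round-off), for the matrix of Example 1 one finds $\lambda_1 = \lambda_2 = 1/4$ and $\lambda_3 = 25/4$, so norm-preserving directions exist:

```python
import numpy as np

# The symmetric matrix of Example 1.
A = np.array([[1.0, 1.0, 0.5],
              [1.0, 0.5, 1.0],
              [0.5, 1.0, 1.0]])

B = A.T @ A                           # B = A^t A, symmetric
lam = np.sort(np.linalg.eigvalsh(B))  # eigenvalues in increasing order
print(lam)                            # approximately [0.25, 0.25, 6.25]

# Norm-preserving vectors exist iff lambda_1 <= 1 <= lambda_3.
assert lam[0] <= 1.0 <= lam[-1]
```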
\subsection{Example 1: no integer solution lines.} Let us consider the symmetric matrix \begin{equation} A = \begin{pmatrix} 1&1&\frac{\strut 1}{\strut 2}\cr 1&\frac{\strut 1}{\strut 2}&1\cr \frac{\strut 1}{\strut 2}&1&1 \end{pmatrix} . \label{firstA} \end{equation} The form $ \normm{A \ve v}^2 = (A \ve v, A \ve v) = (\ve v, B \ve v) $ will in this case have matrix $$ B = A^t A = A^2 = \begin{pmatrix} \frac{\strut 9}{\strut 4}&2&2\cr 2& \frac{\strut 9}{\strut 4}&2\cr 2&2& \frac{\strut 9}{\strut 4} \end{pmatrix} $$ so that equation (\ref{thecone}) will be \begin{equation} \frac 54 x^2 + 4 x y + 4 x z + \frac54 y^2 + 4 y z + \frac54 z^2 = 0 . \label{thisCone} \end{equation} This equation has infinitely many real-valued solutions, which constitute the solution cone shown in Figure \ref{ExampleOne} (b). We want to show, however, that there are no integer solution lines, that is, no nontrivial vectors $\ve v$ with integer coordinates such that $\normm{A \ve v} = \norm{v}$. Indeed, (\ref{thisCone}) is a quadratic equation in $z$, which we can solve: $$ z = - {8 \over 5}(x+y) \pm {\sqrt{39 x^2 + 48 xy + 39 y^2} \over 5} . $$ Therefore, there will be integer solution lines if and only if the discriminant $39 x^2 + 48 xy + 39 y^2$ is a perfect square. We now prove this does not happen. \begin{theorem} \label{theo2} The Diophantine equation \begin{equation} 39 x^2 + 48 x y + 39 y^2 = u^2 \label{uno} \end{equation} has no nontrivial solutions, that is, no nonzero integer solutions. \end{theorem} \begin{proof} Let $x = 2^a v$ and $y = 2^b w$, with $a, b$ nonnegative integers and $v, w$ odd. We may assume that $a \le b$; otherwise, we interchange $x$ and $y$. Then $$ 39 x^2 + 48 x y + 39 y^2 = 2^{2a}(39 v^2 + 2^{b-a} 48 \, vw + 2^{2(b-a)} 39 \, w^2) , $$ and this will be a square if and only if the expression in parentheses is a square.
But if $b = a$ this expression is congruent to $2$ mod 4, while if $b > a$ this expression is congruent to $3$ mod 4; since a perfect square is congruent to $0$ or $1$ mod 4, either case is impossible. \end{proof} \subsection{Example 2: a dense set of integer solution lines.} Consider the symmetric matrix \begin{equation} A = \begin{pmatrix} 1&2&2\cr 2&1&2\cr 2&2&1 \end{pmatrix} . \label{secondA} \end{equation} Here $$ B = A^t A = A^2 = \begin{pmatrix} 9&8&8\cr 8&9&8\cr 8&8&9 \end{pmatrix} $$ so that equation (\ref{thecone}) will be \begin{equation*} 8x^2 + 16 x y + 16 x z + 8 y^2 + 16 y z + 8 z^2 = 0 , \end{equation*} or (after dividing by $8$) \begin{equation} (x+y+z)^2 = 0 . \label{aPlane} \end{equation} In this case, the cone degenerates into the plane $x+y+z = 0$. We can pick an integer basis, say $\ve v_1 = \langle 1, 0, -1 \rangle$ and $\ve v_2 = \langle 0, 1, -1 \rangle$, and obtain every integer norm-preserving line as the line generated by $\alpha \ve v_1 + \beta \ve v_2$, with integer coefficients $\alpha, \beta$ such that $\alpha^2 + \beta^2 > 0$. This constitutes a dense set of integer solution lines among all possible solution lines in the plane $x+y+z=0$. The ellipsoid $\normm{A \ve v}^2 = 1$ lies inside the unit sphere, and is tangent to it along the intersection of the sphere with the solution plane; Figure \ref{EllipseAndPlane} depicts the situation. \begin{figure}[htbp] \begin{center} \includegraphics[height=50mm]{sphereNequator.eps} \caption{Solution cone is degenerate.} \label{EllipseAndPlane} \end{center} \end{figure} \subsection{Example 3: infinitely many integer solution lines.} Finally, let us consider an example in which there are still infinitely many integer solution lines, yet we cannot guarantee that they are dense in the cone of all solution lines. Consider the matrix \begin{equation} A = \begin{pmatrix} 1&2&3\cr 2&1&1\cr 1&1&1 \end{pmatrix} .
\label{thirdA} \end{equation} One eigenvalue of $A$ is $-1$, with eigenvector $\ve v = \langle -1, 1, 0 \rangle$, which therefore provides one integer solution line. Are there any other such lines? The matrix $B = A^t A$ is $$ B = \begin{pmatrix} 6&5&6\cr 5&6&8\cr 6&8&11 \end{pmatrix} $$ and consequently equation (\ref{thecone}) becomes \begin{equation*} 5 x^2 + 10 x y + 12 x z + 5 y^2 + 16 y z + 10 z^2 = 0 . \end{equation*} Solving for $x$ (which provides a slightly shorter answer than solving for $z$) yields $$ x = - y - {6 \over 5} z \pm {\sqrt{-20 yz - 14 z^2} \over 5} . $$ We conclude that there will be integer solution lines if and only if the discriminant $-20 yz - 14 z^2$ is a perfect square.\footnote{Notice, by the way, that by setting $z=0$ we get back the eigenline determined by $\langle -1, 1, 0 \rangle$.} This leads to the Diophantine equation \begin{equation} -20 yz - 14 z^2 = u^2 . \label{diophantine3} \end{equation} Let us find all integer solutions of (\ref{diophantine3}). As remarked previously, if $z=0$ we get back the eigenline corresponding to $\lambda = -1$, generated by $\langle -1, 1, 0\rangle$, so let us assume $z \neq 0$. If $z$ is odd, then $$ -20yz - 14z^2 \equiv -14 \equiv 2 \pmod 4 , $$ and hence cannot be a square. Therefore $z$ must be even, so $z = 2a$ for some integer $a$. Substituting into (\ref{diophantine3}) we get $$ 8 ( - 5 y a - 7 a^2) = u^2 , $$ whence $u$ is divisible by $4$. Letting $u = 4b$ and dividing by $8$, we obtain $$ a(-5y - 7a) = 2b^2 . $$ Letting $a = v^2 q$, with $q$ square-free, we obtain \begin{equation} \label{intermediate} v^2 q ( -5y - 7v^2 q ) = 2 b^2 . \end{equation} Denoting for a moment $p = -5y - 7 v^2 q$, the above equation reads \begin{equation} \label{intermediate2} v^2 pq = 2 b^2 , \end{equation} where $q$ is square-free. This implies that $pq$ is even: the power of $2$ dividing $2b^2$ is odd, while the power of $2$ dividing $v^2$ is even.
If $p$ is even, so $p = 2t$ for some integer $t$, then (\ref{intermediate2}) becomes $v^2 tq = b^2$; since $q$ is square-free, it follows that $t = qr^2$ for some integer $r$, so that \begin{equation} \label{pIs} p = 2q r^2. \end{equation} On the other hand, if $p$ is odd, then we must have $q = 2 s$, with $s$ odd; hence (\ref{intermediate2}) is now $v^2 ps = b^2$, whence, $s$ being square-free, $p = s r^2$ for some integer $r$, and therefore $$ p = s r^2 = {q \over 2} r^2 = 2 q \left({r \over 2}\right)^2 , $$ and we get back expression (\ref{pIs}) for $p$, if we allow $r$ to be an integer or a half-integer. We conclude that $-5y - 7 v^2 q$ must be equal to $2qr^2$, for some integer or half-integer $r$. Solving for $y$, we obtain \begin{equation} \label{yValue} y = - {q \over 5} (7v^2 + 2r^2) , \end{equation} where $v, q$, and $2r$ are arbitrary integers, subject only to the condition that $y$ be an integer. Moreover, since rational coordinate vectors also determine integer solution lines, we can even drop this additional condition. To find the values of the other variables in terms of $(v, q, r)$, we observe first that \begin{equation} \label{zValue} z = 2a = 2v^2 q, \end{equation} and that, from (\ref{intermediate}), we get $ b = vqr , $ whence \begin{equation} \label{uValue} u = 4vqr . \end{equation} (We have chosen only the ``$+$'' sign for $b$, since the ``$\pm$'' is recovered when computing the $x$-value; see (\ref{xValue}).) Next, since \begin{equation} \label{xValue} x = -y - {6 \over 5} z \pm {1 \over 5} u , \end{equation} we get \begin{equation} x = \displaystyle {q \over \strut 5}(2 r^2 -5 v^2 \pm 4vr) . \end{equation} Equations (\ref{xValue}), (\ref{yValue}), and (\ref{zValue}) provide the coordinates of all possible integer solution lines.
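As a quick check of this parametrization (a numeric illustration added here, not part of the original argument), exact rational arithmetic confirms that every triple $(x, y, z)$ it produces is norm-preserving for the matrix of this example:

```python
from fractions import Fraction as F

# The matrix of Example 3.
A = [[1, 2, 3], [2, 1, 1], [1, 1, 1]]

def norm_sq(v):
    return sum(c * c for c in v)

def image(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def solution_vector(v, q, r, sign):
    """(x, y, z) from the parametrization, for integer v, q, r."""
    x = F(q, 5) * (2 * r * r - 5 * v * v + sign * 4 * v * r)
    y = -F(q, 5) * (7 * v * v + 2 * r * r)
    z = 2 * v * v * q
    return [x, y, z]

# Both sign choices, several (v, q, r): every vector satisfies
# ||A w|| = ||w|| exactly.
for v, q, r in [(1, 1, 2), (1, 1, 4), (2, 1, 3), (1, 3, 1)]:
    for sign in (+1, -1):
        w = solution_vector(v, q, r, sign)
        assert norm_sq(image(A, w)) == norm_sq(w)
```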
Observing, however, that $q$ is a common factor of all three coordinates, and proportional vectors provide the same solutions, we may just set $q=1$, replace $r$ by $r/2$, and conclude that all solution lines of (\ref{diophantine3}) are given by $$ \begin{cases} x = \displaystyle {1 \over \strut 10}(r^2 - 10 v^2 \pm 4vr) \cr y =- \displaystyle {1 \over \strut 10} (14 v^2 + r^2) \cr z = 2 v^2 , \cr \end{cases} $$ where $v$ and $r$ are arbitrary integers. For example, choosing $(v, r) = (1, 4)$ we get the vectors $$ \left\langle {11 \over 5}, -3, 2 \right\rangle \sim \langle 11, -15, 10 \rangle \qquad \hbox{ and } \qquad \langle 1, 3, -2 \rangle ; $$ and choosing $(v, r) = (1, 1)$ we obtain $$ \left\langle -{1\over2}, -{3\over2}, 2 \right\rangle \sim \langle 1, 3, -4 \rangle \qquad \hbox{ and } \qquad \left\langle - {13 \over 10}, - {3 \over 2}, 2 \right\rangle \sim \langle 13, 15, -20 \rangle . $$ \subsection{A general method.} There is a method for obtaining infinitely many integer solution lines, which can be applied to a general $3 \times 3$ matrix with integer or rational coefficients; the drawback is that one must know (at least) one nontrivial solution. This is an idea by T.~Piezas \cite{wolfram}. Namely, if we know one particular integer solution $(y, z, u) = (m, n, p)$ of the Diophantine equation $$ a y^2 + b y z + c z^2= d u^2, $$ then a two-parameter family of solutions is given by \begin{eqnarray*} y &=& (a m+b n) s^2 + 2 c n s t- c m t^2, \\ z &=&a n s^2+2 a m s t+(b m+c n) t^2, \\ u &=& p (a s^2+b s t+c t^2), \end{eqnarray*} where $s$ and $t$ are arbitrary integers. 
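Before applying this to a concrete matrix, the family can be sanity-checked numerically (an illustration added here; the quadratic used is the one treated in the worked example that follows):

```python
def piezas_family(a, b, c, m, n, p, s, t):
    """Two-parameter family of solutions of a y^2 + b y z + c z^2 = d u^2,
    generated from one known solution (y, z, u) = (m, n, p)."""
    y = (a * m + b * n) * s * s + 2 * c * n * s * t - c * m * t * t
    z = a * n * s * s + 2 * a * m * s * t + (b * m + c * n) * t * t
    u = p * (a * s * s + b * s * t + c * t * t)
    return y, z, u

# Quadratic 36 y^2 + 52 y z + 39 z^2 = u^2 (so d = 1), with the known
# particular solution (m, n, p) = (1, 0, 6).
a, b, c, d = 36, 52, 39, 1
m, n, p = 1, 0, 6
assert a * m * m + b * m * n + c * n * n == d * p * p

# Every (s, t) yields another solution of the Diophantine equation.
for s in range(-4, 5):
    for t in range(-4, 5):
        y, z, u = piezas_family(a, b, c, m, n, p, s, t)
        assert a * y * y + b * y * z + c * z * z == d * u * u
```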
For example, for the matrix $$ A = \begin{pmatrix} 1&2&3\cr3&4&5\cr2&3&4 \end{pmatrix} , $$ equation (\ref{thecone}) for $\ve v = \langle x, y, z \rangle$ is $$ 13 x^2 + 40 xy + 52 xz + 28 y^2 + 76 yz + 49 z^2 = 0 , $$ which, solved with respect to $x$, produces \begin{equation} \label{ValueOfX} x = {- 20 y \over 13} - 2z \pm {\sqrt{36y^2 + 52 yz + 39 z^2} \over 13} , \end{equation} so to get integer solutions we need to solve the Diophantine equation \begin{equation} 36y^2 + 52 yz + 39 z^2 = u^2. \end{equation} Setting $z=0$, it is not hard to guess the particular solution $(y, z, u) = (m, n, p) = (1, 0, 6)$. The corresponding two-parameter solution family is \begin{eqnarray} y &=& 36s^2 - 39 t^2\cr z &=& 72 st + 52 t^2\cr u &=& 216 s^2 + 312 st + 234 t^2. \end{eqnarray} Each choice of $(s, t)$ produces two integer solution lines $\ve v = \langle x,y,z \rangle$, using the two $x$-values provided by (\ref{ValueOfX}). For example, for $(s, t) = (1,1)$ we get $$ \left\langle - {2402 \over 13}, -3, 124 \right\rangle \sim \langle -2402, -39, 1612\rangle, \qquad\hbox{ and } \qquad \langle -302, -3, 124 \rangle ; $$ and $(s, t) = (1, 2)$ yields $$ \left\langle - {4976 \over 13}, -120, 352 \right\rangle \sim \langle -4976, -1560, 4576\rangle, \qquad\hbox{ and } \qquad \langle -656, -120, 352 \rangle . $$ \section{Questions for further study, applications.} \begin{itemize} \item Unlike the case of eigenpairs, the solution lines of equation (\ref{norma}) depend very much on the chosen norm in ${\Bbb R}^n$. It would be interesting to discuss similar solutions for other norms in Euclidean space. \item We have only scratched the surface of the $3 \times 3$ case. Can we find solvability conditions for (\ref{norma}) in terms of the coefficients of $A$, as we did in the $2 \times 2$ case? Can one find nontrivial families of integer matrices $A$ for which (\ref{norma}) has integer solutions?
\item {\bf Application to toral automorphisms.} Many of the $2 \times 2$ integer matrices we studied, for example, the subfamily \begin{equation} \label{autom} A = \begin{pmatrix} q+1 & q \cr q & q-1 \end{pmatrix} , \end{equation} with $q$ integer, are symmetric and have determinant $-1$; therefore, they can also be regarded as linear automorphisms of the (flat) 2-torus, which possess very interesting dynamical properties; see, for example, \cite{katok}, p.~42. Nontrivial such automorphisms have eigenvectors with irrational slopes; on the other hand, the integer solution lines of (\ref{autom}) bisect the eigendirections. We can therefore use integer arithmetic to compute iterates of vectors in the stable and in the unstable manifolds of such automorphisms. For example, the matrix \begin{equation} \label{automPart} A = \begin{pmatrix} 3 & 2 \cr 2 & 1 \end{pmatrix} \end{equation} has integer solution lines generated by $\ve v_1 = \langle 1, -1 \rangle$ and $\ve v_2 =\langle -1, 3 \rangle$, and irrational eigenvalues $$ \lambda_1 = 2+\sqrt 5 \qquad\hbox{ and } \qquad \lambda_2 = 2 - \sqrt 5 . $$ If we choose the equal-norm vectors $$ \ve v_1 = \langle 1, -1 \rangle \qquad\hbox{ and } \qquad \ve v_3 = \frac{1}{\sqrt 5} \ve v_2 = \frac{1}{\sqrt 5} \langle -1, 3 \rangle , $$ then $\ve u = \ve v_1 + \ve v_3$ will be along the unstable direction of $A$, and $\ve w = \ve v_1 - \ve v_3$ will be along the stable direction. Therefore, on the one hand $$ A^n \ve u = \lambda_1^n \ve u = (2 + \sqrt 5)^n \ve u, $$ and on the other hand $$ A^n \ve u = A^n \ve v_1 + \frac{1}{\sqrt{5}} A^n \ve v_2 . $$ For example, \begin{align*} &A^{10} \ve u = \begin{pmatrix} 1346269 & 832040 \cr 832040 & 514229 \end{pmatrix} \ve u = \cr &= \left\langle 514229 + \frac{1149851}{\sqrt{5}}, \, 317811 + \frac{710647}{\sqrt{5}} \right\rangle , \end{align*} provides a way to compute $(2 + \sqrt 5)^{10} \ve u$ using only integer arithmetic.
A similar calculation can be used for iterates of vectors in the stable direction. \end{itemize} \section{Acknowledgments.} I am grateful to Dr.~Robert Lopez for introducing the interesting idea of vectors whose norms are preserved by a linear map. I also wish to thank the Editorial Board member of the {\it American Mathematical Monthly} for valuable comments, corrections, and suggestions that significantly improved the overall quality of the paper. This applies especially to a shorter and more conceptual proof of Theorem \ref{theo2}, and to a complete solution of the Diophantine equation (\ref{diophantine3}).
https://arxiv.org/abs/1803.05419
Generalised Structural CNNs (SCNNs) for time series data with arbitrary graph topology
Deep Learning methods, specifically convolutional neural networks (CNNs), have seen a lot of success in the domain of image-based data, where the data offers a clearly structured topology in the regular lattice of pixels. This 4-neighbourhood topological simplicity makes the application of convolutional masks straightforward for time series data, such as video applications, but many high-dimensional time series data are not organised in regular lattices, and instead values may have adjacency relationships with non-trivial topologies, such as small-world networks or trees. In our application case, human kinematics, it is currently unclear how to generalise convolutional kernels in a principled manner. Therefore we define and implement here a framework for general graph-structured CNNs for time series analysis. Our algorithm automatically builds convolutional layers using the specified adjacency matrix of the data dimensions and convolutional masks that scale with the hop distance. In the limit of a lattice-topology our method produces the well-known image convolutional masks. We test our method first on synthetic data of arbitrarily-connected graphs and human hand motion capture data, where the hand is represented by a tree capturing the mechanical dependencies of the joints. We are able to demonstrate, amongst other things, that inclusion of the graph structure of the data dimensions improves model prediction significantly, when compared against a benchmark CNN model with only time convolution layers.
\section{Introduction} \indent The success of deep learning, specifically convolutional neural networks (CNNs), in computer vision \citep{krizhevsky2012imagenet} has spurred applications of deep learning methods to domains such as natural language processing \citep{gehring2017convolutional,kalchbrenner2016neural,bradbury2016quasi}, speech recognition \citep{graves2013speech}, human activity recognition \citep{ordonez2016deep,neverova2016learning,toshev2014deeppose} and weather forecasting \citep{xingjian2015convolutional}. By design, CNNs share parameters across the input features and have sparse connections between layers, making them effective and efficient models for exploiting the local stationarity and lattice topology of pixels in an image. Similarly, recurrent neural networks (RNNs) and sliding windows, which extract features from temporal data by reusing the model parameters across different time steps, implicitly assume a stationary distribution within the input. \indent In the realm of human activity modelling, approaches can be broadly divided into two categories: activity recognition and activity pattern detection. Activity recognition focuses on the detection and classification of predetermined activities \citep{ordonez2016deep,yang2015deep}, for example in surveillance technology \citep{neverova2016learning,boominathan2016crowdnet}. Deep learning techniques have been widely used to model human activity, with most studies focusing on activity recognition \citep{ordonez2016deep,yang2015deep,du2015hierarchical,ji20133d,jain2016structural} and only a limited set performing unsupervised learning on human-activity data \citep{butepage2017deep,holden2016deep}. \indent However, as with most applications of CNNs outside the computer vision domain, such use of CNNs is problematic, as CNNs are optimised for data with a lattice topology, such as the pixel array of an image.
When used with data that does not conform to a lattice topology, CNN performance drops, as the convolution operation cannot fully capture the correlations between neighbouring connected data nodes. \indent The application of deep learning models such as CNNs to human kinematics data is thus not straightforward, as the structure of human motion capture data is subject to the constraints of human anatomy \citep{lin2000modeling}. Unlike the regular lattice array of images, human motion capture data have a tree-like structure (each hand is attached to an arm, which is jointly attached to the trunk, etc.) \cite{lin2000modeling}. Moreover, human kinematics data generally contain both spatial and temporal features, and it is important to be able to capture spatio-temporal correlations between the features. Most deep learning models are only adept at modelling spatial and temporal features separately \citep{simonyan2014two} or in a stage-wise manner \citep{yang2015deep,ordonez2016deep,sainath2015convolutional}; modelling spatio-temporal features simultaneously requires significant ingenuity in the design of either the architecture or new artificial neuron units \citep{xingjian2015convolutional,du2015hierarchical,jain2016structural}. We demonstrate a novel CNN architecture that can deep-learn time series data with an arbitrary graph structure. We combine work on adjacency matrices with traditional CNN and RNN architectures, allowing us to perform deep learning on human kinematics data. We present both a generative model and a predictive model built with our novel architecture. We train and test several models, including our own, on in-house human kinematics data, and find that our Structural Convolutional Neural Networks (SCNNs) outperform time-based convolutional neural networks.
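The central ingredient of such an architecture, convolutional masks derived from the adjacency matrix of the data dimensions and scaled by hop distance, can be sketched as follows (an illustrative reconstruction in plain Python, not the authors' implementation; the toy tree graph below is an assumption, not the paper's hand graph):

```python
from collections import deque

def hop_distances(adj, src):
    """BFS hop distance from src to every node; adj is an adjacency
    matrix given as a list of rows of 0/1 entries."""
    n = len(adj)
    dist = [None] * n
    dist[src] = 0
    queue = deque([src])
    while queue:
        i = queue.popleft()
        for j in range(n):
            if adj[i][j] and dist[j] is None:
                dist[j] = dist[i] + 1
                queue.append(j)
    return dist

def receptive_field(adj, src, k):
    """Nodes reachable within k hops: the support of a k-hop
    structural convolution mask centred at src."""
    return {j for j, d in enumerate(hop_distances(adj, src))
            if d is not None and d <= k}

# Toy tree: node 0 as a 'wrist', two chains as 'fingers'
# (edges 0-1, 0-2, 1-3, 2-4).
adj = [[0, 1, 1, 0, 0],
       [1, 0, 0, 1, 0],
       [1, 0, 0, 0, 1],
       [0, 1, 0, 0, 0],
       [0, 0, 1, 0, 0]]
assert receptive_field(adj, 0, 1) == {0, 1, 2}
assert receptive_field(adj, 0, 2) == {0, 1, 2, 3, 4}
assert receptive_field(adj, 3, 1) == {1, 3}
```

On a regular lattice graph, the same construction recovers the familiar square receptive fields of image convolutions, which is the limiting behaviour described above.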
We also find that, within our Structural Convolutional AutoEncoder (SCAE), the convolutional kernels learn a sparse representation of ethologically relevant hand movements. \subsection{Modelling Human Kinematics Data} \indent Human kinematics data is most often represented as graph-structured spatio-temporal data. This poses a major hurdle to accurate modelling: most techniques in this regard have historically fallen short, either by lacking spatial or temporal convolution or by restricting rather than incorporating graph structure, resulting in suboptimal prediction performance. \indent One of the earliest and simplest approaches to modelling human kinematics is the 'sliding window' method \citep{ordonez2016deep,yang2015deep,ji20133d,butepage2017deep}, which outperforms recurrent neural networks in short-term prediction \citep{gers2001applying} for human activity recognition tasks \citep{yang2015deep,ordonez2016deep,roggen2010collecting,jain2016structural}. Whilst useful, this approach does not preserve the spatial correlations that exist within the input data. Another traditional approach to modelling human kinematics temporally involves building a single end-to-end architecture consisting of convolutional and recurrent layers in a stage-wise manner \citep{sainath2015convolutional, ordonez2016deep}. This approach, however, lacks the capability to work on an arbitrary graph structure, as it features a regular convolution function. To address the problems described above, several models have been proposed that feature a graph-structured convolution. Li et al. \citep{li2015hierarchical} proposed three hand-crafted multi-stream bidirectional RNNs that model each part of the body separately. Even though these models have hierarchical feature extraction that allows them to achieve better classification accuracy, their fusing layers do not account for the correlation of the data prior to passing it into the bidirectional layers.
In addition, two of the three models fail to account for the structure of, and the correlation between, the spatial features. Another approach is the tree-based CNN, originally introduced in the natural language processing domain \citep{mou2015discriminative,mou2015natural,mou2016recognizing,collins2002convolution}. In this model, the input to the neural network needs to be organized hierarchically in a tree graph, which allows for hierarchical feature extraction. However, this model also restricts the data structure to that of a tree, not allowing for arbitrarily-defined structure. A structural recurrent neural network was proposed by \citep{jain2016structural} in order to model spatio-temporal data with such arbitrary graph structure. Whilst this approach can generate human-like motion, the model is prone to the long-term dependency problem common to all RNN models. \section{Methodology} \subsection{Data Acquisition \& Preprocessing} We captured natural hand movements during daily life activities in our research group (following \cite{belic2015decoding}). All subjects gave written consent, and the experimental procedure was approved by a local ethics committee. Subjects (N=10) wore a right-hand CyberGlove (CyberGlove Systems LLC, San Jose, CA, U.S.A.). The glove measures the angles of 22 hand joints using stretch sensors embedded in the material, with a spatial resolution of $<1$ degree (see Fig.~\ref{fig:hand} for the joints tracked) and a sampling rate of 90 Hz. The glove was calibrated against optical motion tracking methods using \cite{vicente2013calibration}. We recorded multiple hours of data per subject, yielding over 5 million samples.
\begin{figure} \begin{center} \subfigure[Joint locations]{ \label{fig:handsens} \includegraphics[width = 0.75\hsize]{./figures/Hand.png}} \subfigure[Graphical model of an image.]{ \label{fig:imagegraphical} \includegraphics[width = 0.45\hsize]{./figures/ImageGraphical.png}} \subfigure[Graphical model of a human hand.]{ \label{fig:handtree} \includegraphics[width = 0.45\hsize]{./figures/HandGraphicalModel.png}} \subfigure[Hand adjacency matrix]{ \label{fig:adjmathand} \includegraphics[width = 0.75\hsize]{./figures/adjmathand.png}} \caption{(a) Locations of the 22 sensors embedded in the CyberGlove, used to measure the angle of the joints (15 sensors), the abduction between fingers (4), wrist flexion, wrist abduction and palm arch (1 each). (b,c) The features of an image are arranged on a grid; a given node on the lattice (black node) has high correlation with its neighbors (grey nodes). The features of the hand motion data set, in contrast, can be arranged according to the anatomical structure of the hand. (d) Adjacency matrix for the hand.} \label{fig:hand} \end{center} \end{figure} \subsection{Structural Convolutional Neural Networks} To capture the spatio-temporal correlations within the graph, and to work with any arbitrary graph structure, we propose a novel deep learning architecture, the Structural Convolutional Neural Network (SCNN). Our network design builds on several studies \citep{du2015hierarchical,butepage2017deep,mou2015discriminative,bruna2013spectral,henaff2015deep,mou2015natural,mou2016recognizing,jain2016structural,niepert2016learning,bronstein16geometric} that attempt to embed graph structure into the neural network itself. In contrast to these previous methods, our neural network architecture defines and uses a specialized convolutional kernel with an arbitrarily definable adjacency matrix. This enables us to embed prior knowledge, for example in the form of physically known neighbourhood relationships between sensors.
To help explain our network architecture, we first define the following for a graph with $F$ nodes and adjacency matrix $\vec{A} \in \mathbb{R}^{F\times F}$: \begin{align} \vec{y}^{\ell-1} &= \begin{bmatrix} \vec{y}_1^{\ell-1}, \ldots, \vec{y}_F^{\ell-1} \end{bmatrix}^\top, &\textit{Previous layer's output}\\ \vec{y}^\ell &= \begin{bmatrix} \vec{y}_1^\ell, \ldots, \vec{y}_F^\ell \end{bmatrix}^\top, &\textit{Current layer's output}\\ \vec{W}^\ell & = \begin{bmatrix} \vec{W}^\ell_1, \ldots, \vec{W}^\ell_F \end{bmatrix}^\top, &\textit{Current layer's weights}\\ \vec{b}^\ell & = \begin{bmatrix} \vec{b}^\ell_1, \ldots, \vec{b}^\ell_F \end{bmatrix}^\top, &\textit{Current layer's biases} \end{align} where \begin{align*} \vec{y}^{\ell-1} & \in \mathbb{R}^{T\times F \times N },\\ \vec{y}^{\ell} & \in \mathbb{R}^{ (T-(t-1))\times F \times M},\\ \vec{W}^{\ell} & \in \mathbb{R}^{F \times t \times F \times N \times M},\\ \vec{b}^{\ell} & \in \mathbb{R}^{F \times M},\\ \vec{y}^{\ell-1}_i & \in \mathbb{R}^{T \times 1 \times N}, \forall i =1, \ldots, F,\\ \vec{y}^{\ell}_i & \in \mathbb{R}^{(T-(t-1)) \times 1 \times M}, \forall i =1, \ldots, F,\\ \vec{W}^{\ell}_i & \in \mathbb{R}^{t \times F \times N \times M}, \forall i =1, \ldots, F,\\ \vec{b}^{\ell}_i & \in \mathbb{R}^{1 \times M}, \forall i =1, \ldots, F. \end{align*} Here $T$ denotes the number of input time steps, $t$ the temporal extent of the kernel, and $N$ and $M$ the numbers of input and output channels, respectively. The kernel is made up of $F$ sub-kernels; each sub-kernel $i$, which corresponds to node $i$, has weights $\vec{W}_i^\ell$ of dimension $t\times F\times N\times M$. The sub-kernels are slid across the temporal dimension of the input, producing an output of size $(T-(t-1)) \times 1 \times M$ for each node $i$.
The output is then passed through an activation function $g$ to produce: \begin{align} \vec{y}_i^\ell &= g\left(\vec{W}_i^\ell * \vec{y}^{\ell-1} + \vec{b}_i^\ell\right)\\ \vec{W}_i^\ell &=\begin{bmatrix} \vec{w}_{i1}^\ell \\ \vdots \\ \vec{w}_{iF}^\ell\end{bmatrix} \end{align} where $*$ is the convolution operation, \begin{align} \vec{W}_i^{\ell} * \vec{y}^{\ell-1} & = \sum_{j=1}^F \vec{w}_{ij}^\ell * \vec{y}_j^{\ell-1} \end{align} and $\vec{w}_{ij}^\ell$ is the sub-kernel weights for the $i$ node with its $j$ neighbor, \begin{align} \vec{w}_{ij}^\ell \in \begin{cases} \mathbb{R}^{t \times 1 \times N \times M},&\textit{if $\vec{A}_{ij}\neq 0$},\\ \mathbf{0}^{t \times 1 \times N \times M},&\textit{if $\vec{A}_{ij}=0$}. \end{cases} \end{align} where $\vec{w}_{ij}^\ell$ represents the sub-kernel and $\vec{A} \in \mathbb{R}^{F\times F}$ represents the adjacency matrix. Thus, if applied to the following graph: \begin{figure}[H] \begin{center} \includegraphics[width = .65 \hsize]{./figures/SampleGraph.png} \caption{Example of dependency graph to be modeled by our structural convolutional neural networks.} \label{fig:SampleGraph} \end{center} \end{figure} The graph can be represented by the adjacency matrix, $\vec{A}$: \begin{align} \vec{A} & = \begin{bmatrix} 2 & 1 & 0 & 0 & 1\\ 1 & 2 & 1 & 1 & 0\\ 0 & 1 & 2 & 0 & 0\\ 0 & 1 & 0 & 2 & 1\\ 1 & 0 & 0 & 1 & 2 \end{bmatrix}. 
\end{align} The kernel weights $\vec{W}^\ell$ consist of the sub-kernels with the corresponding weights $\vec{W}^\ell_i$ below: \begin{align} \vec{W}^\ell_1 &= \begin{bmatrix} \vec{w}_{11}^\ell & \vec{w}_{12}^\ell & \vec{0} & \vec{0} & \vec{w}_{15}^\ell \end{bmatrix}^\top\\ \vec{W}^\ell_2 &= \begin{bmatrix} \vec{w}_{21}^\ell & \vec{w}_{22}^\ell & \vec{w}_{23}^\ell & \vec{w}_{24}^\ell & \vec{0} \end{bmatrix}^\top\\ \vec{W}^\ell_3 &= \begin{bmatrix} \vec{0} & \vec{w}_{32}^\ell & \vec{w}_{33}^\ell & \vec{0} & \vec{0} \end{bmatrix}^\top\\ \vec{W}^\ell_4 &= \begin{bmatrix} \vec{0} & \vec{w}_{42}^\ell & \vec{0} & \vec{w}_{44}^\ell & \vec{w}_{45}^\ell \end{bmatrix}^\top\\ \vec{W}^\ell_5 &= \begin{bmatrix} \vec{w}_{51}^\ell & \vec{0} & \vec{0} & \vec{w}_{54}^\ell & \vec{w}_{55}^\ell \end{bmatrix}^\top. \end{align} \begin{figure}[H] \centering \subfigure[Mechanics of the sub-kernels on the input layer. The sub-kernels are represented by the dotted rounded rectangles.]{\label{fig:subkernels}\includegraphics[width = 0.55\hsize]{./figures/InputStructuralConvolution.png}} \hspace{15mm} \subfigure[Structural convolutional layer. The number in the nodes represents the nodes in the input that are convolved.]{\label{fig:subkernelconvolution}\includegraphics[width = 0.55\hsize]{./figures/OutputStructuralConvolution.png}} \caption{The sub-kernels convolve only specific nodes in the input layer to produce the corresponding nodes in the convolutional output layer. For example, the sub-kernel that encompasses the input nodes 1, 2, 3 and 4 maps those input nodes to the purple node in the convolutional layer. The structure of the graph remains intact after the convolution operation. The recurrent edges are omitted for brevity.} \end{figure} Figures \ref{fig:subkernels} and \ref{fig:subkernelconvolution} show the workings of the structural convolution for the graph in Figure \ref{fig:SampleGraph}, for an input with a single channel and a single kernel.
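As an illustration, the masked sub-kernel construction above can be sketched in NumPy (a minimal, loop-based sketch for exposition only; the function and variable names are ours, and this is not the TensorFlow implementation used in our experiments):

```python
import numpy as np

def structural_conv(y_prev, W, A, b):
    """Structural convolution for one layer (single example).

    y_prev: (T, F, N) input -- T time steps, F nodes, N channels.
    W:      (F, t, F, N, M) kernel -- one (t, F, N, M) sub-kernel per node.
    A:      (F, F) adjacency matrix; W[i, :, j] is zeroed when A[i, j] == 0.
    b:      (F, M) biases.
    Returns the (T - t + 1, F, M) activated output.
    """
    T, F, N = y_prev.shape
    t, M = W.shape[1], W.shape[4]
    # Enforce the graph structure: no weights between unconnected nodes.
    mask = (A != 0).astype(W.dtype)
    W = W * mask[:, None, :, None, None]
    out = np.zeros((T - t + 1, F, M))
    for i in range(F):                 # one sub-kernel per node
        for s in range(T - t + 1):     # slide along the temporal dimension
            window = y_prev[s:s + t]   # (t, F, N)
            out[s, i] = np.tensordot(W[i], window,
                                     axes=([0, 1, 2], [0, 1, 2])) + b[i]
    return np.maximum(out, 0.0)        # ReLU as the activation g
```

Because the mask zeroes $\vec{w}_{ij}^\ell$ whenever $\vec{A}_{ij}=0$, the output at a node is, by construction, unaffected by inputs at nodes outside its neighborhood.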
Each sub-kernel takes only some of the nodes of the graph for the convolution operation. Furthermore, each sub-kernel is specific to its input node. For example, in Figure \ref{fig:subkernels}, the sub-kernel for node 2 (in purple) takes all the neighbors of node 2 that are one path length away for the convolution operation. The output of the convolution operation is then mapped to the corresponding node in the convolution layer. Additionally, the use of an adjacency matrix embeds an arbitrary graph structure into the core convolution function, and thus preserves any spatial correlations the data might possess both before and after the convolution. Furthermore, in contrast to the hard limit placed on the number of possible node connections in a previous study \citep{niepert2016learning}, our method allows all nodes reachable within a predetermined path length to be covered by one convolution function and, as a result, allows for more efficient construction of CNNs for data with large graph structures. \subsection{Neural Network Structure} We present two novel neural network architectures that leverage our graph-structural approach: (1) a structural convolutional autoencoder and (2) a structural convolutional neural network, in order to test our architecture in both unsupervised and supervised learning settings. \paragraph{Structural Convolutional AutoEncoder (SCAE)} For unsupervised learning, this study implements the structural convolutional autoencoder shown in Figure \ref{fig:scae}. The model is trained initially without any regularization until the weights are relatively stable. Thereafter, we impose an $L1$ regularization penalty to fine-tune the weights further. \begin{figure}[ht] \begin{center} \includegraphics[width = 1.0\hsize]{./figures/tcnn.png} \caption{Time convolutional neural networks (TCNNs).
Notations for the blocks are similar to Figure \ref{fig:scnn}, with the exception of the convolutional layer. The convolutional layer in this model, TC(time steps, filters) only convolves the input temporally.} \label{fig:tcnn} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width = 1.0\hsize]{./figures/scnn.png} \caption{Structural convolutional neural networks (SCNN). The convolution operation is done on both spatial and temporal dimensions. SC(time steps, filters) denotes the number of time steps to convolve and the number of feature maps produced from the structural convolution. ReLU and BN refer to the ReLU activation layer and batch normalization respectively. MaxPool(time steps) denotes the temporal max pooling and FC(number of neurons) denotes the fully connected layer.} \label{fig:scnn} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width = 1.0\hsize]{./figures/cae.png} \caption{Structural convolutional autoencoder (SCAE). Similarly to the SCNN, the convolution operation is performed on both the spatial and temporal dimensions, with differences in the deconvolution process, with respect to the temporal unpooling and fully-connected layers.} \label{fig:scae} \end{center} \end{figure} With the large number of parameters in the model, it is relatively easy to train such that it can reconstruct its input perfectly. However, such a model would not provide us with significant insight. As the $L1$ regularization penalty encourages sparsity within the model, by retraining the model with $L1$ regularization, we fine tune the kernel weights such that some weights will have zero-values. In essence, the model will subsequently prioritize features that are more representative of natural hand movement. 
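The sparsity-inducing effect of the $L1$ fine-tuning stage can be illustrated on a toy least-squares problem with proximal (soft-threshold) gradient updates; this is a hypothetical NumPy sketch, not the optimizer used in our experiments:

```python
import numpy as np

def soft_threshold(w, thresh):
    """Proximal operator of the L1 penalty: shrinks weights toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)

def fine_tune_l1(X, w, lam=0.2, lr=0.01, steps=1000):
    """Fine-tune weights of a toy linear model with an L1 penalty.

    Minimizes ||X w - X[:, 0]||^2 / n + lam * ||w||_1 by proximal gradient
    descent.  Only the first coordinate is needed to reconstruct the target,
    so the remaining weights should be driven exactly to zero.
    """
    target = X[:, 0]
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - target) / len(X)
        w = soft_threshold(w - lr * grad, lr * lam)
    return w
```

As in our fine-tuning stage, the penalty zeroes out uninformative weights while the informative ones are merely shrunk, which is why the surviving kernel weights highlight the most representative features.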
\paragraph{Structural Convolutional Neural Networks (SCNN)} For the prediction task, we implement two different models: structural convolutional neural networks (SCNN), as in Figure \ref{fig:scnn}, and time convolutional neural networks (TCNN), as in Figure \ref{fig:tcnn}. The TCNN model is used as a baseline against which to benchmark the relative performance of the SCNN model. The input to both models is a subsample of the time series with a 500-step time window, while the output is the window obtained by a subsequent 100-step time shift. \subsection{Neural Network Implementation} We implemented our models using the TensorFlow package \citep{abadi2016tensorflow}, with $L1$ regularization applied during parameter optimization. Xavier initialization \citep{glorot2010understanding} was used to randomly initialize the weights of the convolution kernels, and all biases were initialized to 0.5. The Adam optimization algorithm \citep{kingma2014adam} was used to carry out the parameter optimization. Since our system can train several neural networks in parallel, a batching process was implemented to split the training data into 32 batches. Additionally, to ensure fast convergence of the parameter values \citep{bengio2012practical} and to prevent overfitting, the training batches are constructed not by subsampling sequentially, but by randomly shuffling the data into batches. Lastly, batch normalization \citep{ioffe2015batch} was implemented to reduce internal covariate shift, and thus ensure fast convergence of the trainable parameters. \section{Results} To train, test, and validate our neural network structure, the joint angle time series data were segregated into training, test, and validation sets with 55\%, 35\%, and 10\% proportions, respectively. The mean and variance of the training set were subsequently used to standardize every dataset to zero mean and unit variance.
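The windowing and standardization pipeline just described can be sketched as follows (illustrative helper functions; the names and shapes are ours):

```python
import numpy as np

def make_windows(series, window=500, shift=100):
    """Split a (T, F) multichannel series into (input, target) window pairs.

    Each input is a `window`-step frame; its target is the frame starting
    `shift` steps later, mirroring the 500-step input / 100-step-shifted
    output setup described in the text.
    """
    T = len(series)
    X, Y = [], []
    for start in range(0, T - window - shift + 1, shift):
        X.append(series[start:start + window])
        Y.append(series[start + shift:start + shift + window])
    return np.stack(X), np.stack(Y)

def standardize(train, *others):
    """Zero-mean, unit-variance scaling using *training* statistics only."""
    mu, sigma = train.mean(axis=0), train.std(axis=0) + 1e-12
    return [(d - mu) / sigma for d in (train, *others)]
```

Scaling every split with the training-set statistics, as above, avoids leaking information from the test and validation sets into the model.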
For the predictive model, the inverse transform was applied at the output layer of the SCNN. A 500-time-step sliding window (equivalent to 3.56 seconds at 140 Hz) was applied to separate each dataset into multiple time frames. Additionally, a 100-step time shift was also applied to create prediction windows. \subsection{Kernels \& Activation Layers Visualisation} We selectively visualize the first layer kernels of the SCAE model to understand the representations of human motor dynamics (Figure \ref{fig:kernelvisualization}). As the $L1$ regularization forces most of the kernel weights to zero, the non-zero weights represent the most prominent motion features in the data. These also indicate that we can further reduce the number of parameters in our model. \begin{figure}[ht] \begin{center} \includegraphics[width = 1.0\hsize]{./figures/kernelvisualizationsparse2.png} \caption{Visualization of the sub-kernel weights for the first layer. By imposing the $L1$ regularization, a large number of weights are set to zero, and the non-zero weights are sufficient to reconstruct the input.} \label{fig:kernelvisualization} \end{center} \end{figure} For the kernels, the weights have the same sign across the time convolution, which implies the first layer captures the positions of the joints across time. Properties of the motion dynamics, such as velocity and acceleration, are likely to be captured in the deeper layers of the network. \begin{figure}[t] \begin{center} \includegraphics[width = 0.95\hsize]{./figures/recurrence.png} \caption{Recurrence plot for the activations of a specific feature map node in Layer 3. Threshold used to compute the recurrence is $1 \times 10^{-4}$.} \label{fig:recur} \end{center} \end{figure} During training, we noticed a quasi-periodicity in the neural network activations.
To confirm this phenomenon, we produced a recurrence plot of the activations (Figure \ref{fig:recur}), which shows a stereotypical quasi-periodic pattern, using the following binary function: \begin{align} R(t_i, t_j) &= \begin{cases} 1, & \parallel a(t_i) -a(t_j) \parallel_2 \leq \varepsilon\\ 0, & \text{otherwise} \end{cases} \end{align} where $a(t)$ denotes the activation at time $t$ and $\varepsilon$ is the predetermined threshold. It can be observed that the SCAE is able to learn both the spatial and temporal structure in the data. The fact that only selected nodes have significant activations within the feature maps shows that the model learns the prominent spatial features. In addition, the quasi-periodicity of the activations indicates that the SCAE model also captures the temporal structure in the deeper layers of the model. \subsection{Hand Movement Prediction} The inclusion of the fully connected layer enables the model to predict with higher accuracy. However, unlike regular classification tasks, where the number of classes tends to be small, for a regression task the fully-connected layer requires a much larger number of output points to obtain accurate predictions over a longer prediction horizon. \begin{figure*}[ht] \centering \subfigure[Training data]{\label{fig:rmseconsotraining}\includegraphics[width = 0.49\hsize]{./figures/rmse_conso_training.png}} \subfigure[Test data]{\label{fig:rmseconsotest}\includegraphics[width = 0.50\hsize]{./figures/rmse_conso_test.png}} \caption{RMSE for TCNN and SCNN aggregated across all features. The TCNN is shown to perform consistently worse than the equivalent SCNN model.} \label{fig:RMSE_Conso} \end{figure*} Our model extends the work in \citep{tehunpub2014bis} by including the graph structure of the features and predicting a fixed horizon instead of merely the next time step. We plot the aggregated RMSE in Figure \ref{fig:RMSE_Conso}.
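The binary recurrence function used for Figure \ref{fig:recur} can be computed directly from an activation trace (illustrative NumPy sketch; the function name is ours):

```python
import numpy as np

def recurrence_matrix(a, eps=1e-4):
    """R[i, j] = 1 iff ||a(t_i) - a(t_j)||_2 <= eps, as in the text.

    a: (T, d) array of activations over T time steps.
    """
    diff = a[:, None, :] - a[None, :, :]   # (T, T, d) pairwise differences
    dist = np.linalg.norm(diff, axis=-1)   # (T, T) Euclidean distances
    return (dist <= eps).astype(int)
```

For a (quasi-)periodic activation trace, the resulting matrix exhibits the diagonal banding visible in Figure \ref{fig:recur}.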
The RMSE for the TCNN model is consistently higher than that for the SCNN model (Figure \ref{fig:RMSE_Conso}). Since the TCNN only incorporates the correlations between the spatial features at the fully-connected layer, the inclusion of the dependency graph of the spatial features in constructing the convolution layer is beneficial to the model's predictive power. Also from Figure \ref{fig:RMSE_Conso}, it is observed that the RMSE for both models worsens for predictions that are many time-steps ahead. The deterioration of the RMSE across time-steps is well within our expectations, as predictions that are significantly further ahead are naturally much less reliable. Overall, both of our models can predict the movements of the data, even at a distant prediction horizon; however, they fail to capture the magnitude of those movements. The SCNN outperforms the TCNN in terms of RMSE attained. These results validate the observations from \citep{butepage2017deep,du2015hierarchical} that the inclusion of the graph structure for human motion capture related tasks improves the prediction quality and allows us to lengthen the prediction horizon. \section{Discussion} Our main methodological contribution is the introduction of the structural convolutional neural network, which allows efficient design of bespoke convolutional kernels via the specification of the dependency graph of the features. While this study focuses on the application of the model to human hand motion data, the proposed model can be applied to data with arbitrary topology; special cases, such as the regular lattice convolution, can be recovered by specifying the adjacency matrix of the features appropriately. \paragraph{Prediction Model} For prediction tasks, we compared two models: our structural convolutional neural networks (SCNNs) and the well-known time convolutional neural networks (TCNNs). The difference between the two networks is the inclusion of the adjacency matrix of the spatial features in the convolution masks.
Based on the predictions we obtained for both models, we observed the following: \begin{itemize} \itemsep0em \item The SCNN model outperforms the TCNN model in terms of the RMSE and $R^2$ values. By embedding the topology of the spatial features of the data, the model is able to include the local spatio-temporal interactions between the different joints in the early stages of the model. \item The improvement in prediction for the SCNN stems from the inclusion of the graph structure, allowing the neural network to extract more meaningful representations of the movement dynamics and achieve higher accuracy across a longer prediction horizon. \end{itemize} Our approach allows us to design bespoke convolutional kernels using the adjacency matrix of the spatial features. We demonstrate here that our approach improves the prediction quality and extends the prediction horizon significantly. This efficiency comes at a price: unlike the structural RNN by \citep{jain2016structural} and the graph CNN by \citep{niepert2016learning}, our current approach does not support directed edges or edge features and is limited to undirected graphs. Thus, a natural extension of this study would be the addition of RNNs to the SCNNs by constructing a structural convolutional recurrent neural network similar to the convolutional LSTM in \citep{xingjian2015convolutional}, as a combined architecture may be better able to capture long-term spatio-temporal correlations. Beyond the inclusion of RNNs, our convolutional kernel construction method can be improved by unsupervised graph structure estimation from the data. We applied this approach to the graph structure of human body kinematics time series, and showed that we outperform conventional time convolutional neural networks. Our approach allows the development of deep learning models trained on arbitrary graph structured data, be it medical data (e.g. fMRI-based brain network activity), economic data (e.g.
airline travel numbers on the airport connectivity graph) or social data (e.g. social network variables). Alongside its flexibility, a key benefit of our method is its scalability: the structural convolution requires only a single adjacency matrix for all the spatial features. \begin{acks} This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant eNHANCE (grant no 644000) -- www.enhance-motion.eu. \end{acks}
https://arxiv.org/abs/1803.05419
Generalised Structural CNNs (SCNNs) for time series data with arbitrary graph topology
Deep Learning methods, specifically convolutional neural networks (CNNs), have seen a lot of success in the domain of image-based data, where the data offers a clearly structured topology in the regular lattice of pixels. This 4-neighbourhood topological simplicity makes the application of convolutional masks straightforward for time series data, such as video applications, but many high-dimensional time series data are not organised in regular lattices, and instead values may have adjacency relationships with non-trivial topologies, such as small-world networks or trees. In our application case, human kinematics, it is currently unclear how to generalise convolutional kernels in a principled manner. Therefore we define and implement here a framework for general graph-structured CNNs for time series analysis. Our algorithm automatically builds convolutional layers using the specified adjacency matrix of the data dimensions and convolutional masks that scale with the hop distance. In the limit of a lattice-topology our method produces the well-known image convolutional masks. We test our method first on synthetic data of arbitrarily-connected graphs and human hand motion capture data, where the hand is represented by a tree capturing the mechanical dependencies of the joints. We are able to demonstrate, amongst other things, that inclusion of the graph structure of the data dimensions improves model prediction significantly, when compared against a benchmark CNN model with only time convolution layers.
https://arxiv.org/abs/2001.10928
Discrete Trace Theorems and Energy Minimizing Spring Embeddings of Planar Graphs
Tutte's spring embedding theorem states that, for a three-connected planar graph, if the outer face of the graph is fixed as the complement of some convex region in the plane, and all other vertices are placed at the mass center of their neighbors, then this results in a unique embedding, and this embedding is planar. It also follows fairly quickly that this embedding minimizes the sum of squared edge lengths, conditional on the embedding of the outer face. However, it is not at all clear how to embed this outer face. We consider the minimization problem of embedding this outer face, up to some normalization, so that the sum of squared edge lengths is minimized. In this work, we show the connection between this optimization problem and the Schur complement of the graph Laplacian with respect to the interior vertices. We prove a number of discrete trace theorems, and, using these new results, show the spectral equivalence of this Schur complement with the boundary Laplacian to the one-half power for a large class of graphs. Using this result, we give theoretical guarantees for this optimization problem, which motivates an algorithm to embed the outer face of a spring embedding.
\section{Introduction} Graph drawing is an area at the intersection of mathematics, computer science, and more qualitative fields. Despite the extensive literature in the field, in many ways the concept of what constitutes the optimal drawing of a graph is heuristic at best, and subjective at worst. For a general review of the major areas of research in graph drawing, we refer the reader to \cite{battista1998graph,kaufmann2003drawing}. When energy (i.e. Hall's energy, the sum of squared distances between adjacent vertices) minimization is desired, the optimal embedding in the plane is given by the two-dimensional diffusion map induced by the eigenvectors of the two smallest non-zero eigenvalues of the graph Laplacian \cite{MR2154691, MR2063526, MR2029596}. This general class of graph drawing techniques is referred to as spectral layouts. When drawing a planar graph, often a planar embedding (a drawing in which edges do not intersect) is desirable. However, spectral layouts of planar graphs are not guaranteed to be planar. When looking at triangulations of a given domain, it is commonplace for the near-boundary points of the spectral layout to ``grow'' out of the boundary, or lack any resemblance to a planar embedding. For instance, see the spectral layout of a random triangulation of a disk and rectangle in Figure \ref{fig1}. In his 1962 work titled ``How to Draw a Graph,'' Tutte found an elegant technique to produce planar embeddings of planar graphs that also minimize ``energy'' in some sense \cite{MR0158387}. In particular, for a three-connected planar graph, he showed that if the outer face of the graph is fixed as the complement of some convex region in the plane, and every other point is located at the mass center of its neighbors, then the resulting embedding is planar. This embedding minimizes Hall's energy, conditional on the embedding of the boundary face.
This result is now known as Tutte's spring embedding theorem, and this general class of graph drawing techniques is known as force-based layouts. While this result is well known (see \cite{knudson_lamb}, for example), it is not so obvious how to embed the outer face. This, of course, should vary from case to case, depending on the dynamics of the interior. \begin{figure} \begin{center} \subfloat[Circle]{\includegraphics*[width=1.75 in,height = 1.75in]{c_d}} \qquad \subfloat[$3$-by-$1$ Rectangle]{\includegraphics*[width= 3.5 in, height = 1.5 in]{r_d}} \\ \subfloat[Spectral Layout]{\includegraphics*[width=1.75 in,height = 1.75in]{c_l}} \qquad \subfloat[Spectral Layout]{\includegraphics*[width= 3.5 in, height = 1.5 in]{r_l}} \\ \subfloat[Schur Complement Layout]{\includegraphics*[width=1.75 in,height = 1.75in]{c_s}} \qquad \subfloat[Schur Complement Layout]{\includegraphics*[width= 3.5 in, height = 1.5 in]{r_s}} \caption{Delaunay triangulations of $1250$ points randomly generated on the disk (A) and rectangle (B), their non-planar spectral layouts (C) and (D), and planar layouts using a spring embedding of the Schur complement of the graph Laplacian with respect to the interior vertices (E) and (F).} \label{fig1} \end{center} \end{figure} In this work, we examine how to embed the boundary face such that the embedding is convex and minimizes Hall's energy over all such convex embeddings with some given normalization. While it is not clear how to exactly minimize energy over all convex embeddings in polynomial time, it also is not clear that this is an NP-hard optimization problem. Proving that this optimization problem is NP-hard appears to be extremely difficult, as the problem itself seems to lack any natural relation to a known NP-complete problem. In what follows, we analyze this problem and produce an algorithm with theoretical guarantees for a large class of three-connected planar graphs.
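Tutte's construction reduces to a linear solve: with the boundary positions fixed, requiring every interior vertex to lie at the mass center of its neighbors amounts to solving $L_{II}\, x_I = -L_{I\Gamma}\, x_\Gamma$ in each coordinate, where $I$ and $\Gamma$ index the interior and boundary vertices. A minimal NumPy sketch (for illustration only; names are ours):

```python
import numpy as np

def tutte_embedding(L, boundary, boundary_pos):
    """Solve for interior positions given fixed convex boundary positions.

    L:            (n, n) graph Laplacian.
    boundary:     indices of the outer-face vertices (Gamma).
    boundary_pos: (n_Gamma, 2) fixed positions of the boundary vertices.
    Returns an (n, 2) embedding in which each interior vertex sits at the
    mass center of its neighbors -- the unique minimizer of Hall's energy
    conditional on the boundary embedding.
    """
    n = L.shape[0]
    interior = np.setdiff1d(np.arange(n), boundary)
    L_II = L[np.ix_(interior, interior)]
    L_IG = L[np.ix_(interior, boundary)]
    pos = np.zeros((n, 2))
    pos[boundary] = boundary_pos
    pos[interior] = np.linalg.solve(L_II, -L_IG @ boundary_pos)
    return pos
```

For a three-connected planar graph, $L_{II}$ is nonsingular, so the interior positions are uniquely determined by the boundary embedding.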
\begin{figure}[t] \begin{center} \subfloat[Laplacian Embedding]{\includegraphics*[width=3in]{spectralpic}} \qquad \subfloat[Schur Complement Embedding]{\includegraphics*[width=3 in]{schurembed}} \caption{A visual example of embeddings of the 2D finite element discretization graph 3elt, taken from the SuiteSparse Matrix Collection \cite{davis2011university}. Figure (A) is the non-planar spectral layout of this 2D mesh, and Figure (B) is a planar spring embedding of the mesh, using the minimal non-trivial eigenvectors of the Schur complement to embed the boundary. } \label{fig:3elt} \end{center} \end{figure} Our analysis begins by observing that the Schur complement of the graph Laplacian with respect to the interior vertices is the correct matrix to consider when choosing an optimal embedding of boundary vertices. See Figure \ref{fig:3elt} for a visual example of a spring embedding using the two minimal non-trivial eigenvectors of the Schur complement. In order to theoretically understand the behavior of the Schur complement, we prove a discrete trace theorem. Trace theorems are a class of results in the theory of partial differential equations relating norms on the domain to norms on the boundary, which are used to provide a priori estimates on the Dirichlet integral of functions with given data on the boundary. We construct a discrete version of a trace theorem in the plane for ``energy''-only semi-norms. Using a discrete trace theorem, we show that this Schur complement is spectrally equivalent to the boundary Laplacian to the one-half power. This spectral equivalence result produces theoretical guarantees for the energy minimizing spring embedding problem, but is also of independent interest and applicability in the study of spectral properties of planar graphs. These theoretical guarantees give rise to a natural algorithm with provable guarantees. The performance of this algorithm is also illustrated through numerical experiments.
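Concretely, writing $L$ in block form with respect to the boundary $\Gamma$ and the interior $I$, the Schur complement is $S = L_{\Gamma\Gamma} - L_{\Gamma I} L_{II}^{-1} L_{I\Gamma}$, itself a graph Laplacian on the boundary vertices; the boundary embedding described above uses the eigenvectors of its two smallest non-zero eigenvalues. A minimal NumPy sketch (illustrative, not the implementation used for the figures):

```python
import numpy as np

def schur_boundary_embedding(L, boundary):
    """Embed the outer face using the Schur complement of L onto Gamma.

    Eliminating the interior vertices I gives
        S = L_GG - L_GI @ inv(L_II) @ L_IG,
    which is again a graph Laplacian on the boundary vertices.  The
    boundary is placed using the eigenvectors of the two smallest
    non-zero eigenvalues of S.
    """
    n = L.shape[0]
    interior = np.setdiff1d(np.arange(n), boundary)
    L_GG = L[np.ix_(boundary, boundary)]
    L_GI = L[np.ix_(boundary, interior)]
    L_II = L[np.ix_(interior, interior)]
    S = L_GG - L_GI @ np.linalg.solve(L_II, L_GI.T)
    vals, vecs = np.linalg.eigh(S)       # eigenvalues in ascending order
    return S, vecs[:, 1:3]               # skip the constant nullspace vector
```

The returned coordinates can then be passed to the Tutte solve to place the interior vertices, as in Figures \ref{fig1} and \ref{fig:3elt}.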
The remainder of the paper is organized as follows. In Section 2, we formally introduce Tutte's spring embedding theorem, characterize the optimization problem under consideration, and illustrate the connection to the Schur complement. In Section 3, we consider trace theorems for Lipschitz domains from the theory of elliptic partial differential equations, prove discrete energy-only variants of these results for the plane, and show that the Schur complement with respect to the interior is spectrally equivalent to the boundary Laplacian to the one-half power. In Section 4, we use the results from the previous section to give theoretical guarantees regarding approximate solutions to the original optimization problem, and use these theoretical results to motivate an algorithm to embed the outer face of a spring embedding. We present numerical results to illustrate both the behavior of Schur complement-based embeddings compared to variations of natural spectral embeddings, and the practical performance of the algorithm introduced. \section{Spring Embeddings and the Schur Complement} \label{sec:spring_schur} In this section, we introduce the main definitions and notation of the paper, formally define the optimization problem under consideration, and show how the Schur complement is closely related to this optimization problem. \subsection{Definitions and Notation} Let $G= (V,E)$, $V=\{1,...,n\} $, $E \subset \{ e \subset V \, | \, |e| = 2\}$, be a simple, connected, undirected graph. $G$ is $k$-connected if it remains connected upon the removal of any $k-1$ vertices, and is planar if it can be drawn in the plane such that no edges intersect (save for adjacent edges at their mutual endpoint). A face of a planar embedding of a graph is a region of the plane bounded by edges (including the outer infinite region, referred to as the outer face).
Let $\mathcal{G}_n$ be the set of all ordered pairs $(G,\Gamma)$, where $G$ is a simple, undirected, planar, three-connected graph of order $n>4$, and $\Gamma \subset V$, $n_\Gamma:=|\Gamma|$, are the vertices of some face of $G$. Three-connectedness is an important property for planar graphs, which, by Steinitz's theorem, guarantees that the graph is the skeleton of a convex polyhedron \cite{steinitz1922polyeder}. This characterization implies that for three-connected graphs ($n>4$), the edges corresponding to each face in a planar embedding are uniquely determined by the graph. In particular, the set of faces is simply the set of induced cycles, so we may refer to faces of the graph without specifying an embedding. One important corollary of this result is that, for $n>4$, the vertices of any face form an induced simple cycle. Let $N_G(i)$ be the neighborhood of vertex $i$, $N_G(S)$ be the union of the neighborhoods of the vertices in $S$, and $d_G(i,j)$ be the distance between vertices $i$ and $j$ in the graph $G$. When the associated graph is obvious, we may remove the subscript. Let $d(i)$ be the degree of vertex $i$. Let $G[S]$ be the graph induced by the vertices $S$, and $d_{S}(i,j)$ be the distance between vertices $i$ and $j$ in $G[S]$. If $H$ is a subgraph of $G$, we write $H \subset G$. The Cartesian product $G_1 \square G_2$ between $G_1 =(V_1,E_1)$ and $G_2 = (V_2,E_2)$ is the graph with vertices $(v_1,v_2) \in V_1 \times V_2$ and edges $ \left((u_1,u_2),(v_1,v_2)\right) \in E$ if $(u_1,v_1) \in E_1$ and $u_2 = v_2$, or $u_1 = v_1$ and $(u_2,v_2) \in E_2$. The graph Laplacian $L_G \in \mathbb{R}^{n \times n}$ of $G$ is the symmetric matrix defined by $$ \langle L_G x , x \rangle = \sum_{\{i,j\} \in E} (x_i - x_j)^2,$$ and, in general, a matrix is the graph Laplacian of some weighted graph if it is symmetric diagonally dominant, has non-positive off-diagonal entries, and the vector $\mathbf{1}:=(1,...,1)^T$ lies in its nullspace.
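For concreteness, the quadratic-form definition above can be realized numerically; the following is a minimal sketch, assuming NumPy, that assembles $L_G$ from an edge list and checks both the quadratic form and the nullspace condition on a toy $4$-cycle (the example graph and values are our own illustrative choices, not taken from the paper).

```python
import numpy as np

def graph_laplacian(n, edges):
    """Build L_G from the quadratic form <L_G x, x> = sum_{{i,j} in E} (x_i - x_j)^2."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

# 4-cycle: the quadratic form matches the sum of squared edge differences,
# and the all-ones vector lies in the nullspace.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
L = graph_laplacian(4, cycle)
x = np.array([1.0, 2.0, 4.0, 3.0])
quad_form = x @ L @ x
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in cycle)
```

The resulting matrix is symmetric diagonally dominant with non-positive off-diagonal entries, matching the characterization of weighted graph Laplacians given above.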
The convex hull of a finite set of points $X$ is denoted by conv$(X)$, and a point $x \in X$ is a vertex of conv$(X)$ if $x \not \in \text{conv}(X \backslash x)$. Given a matrix $A$, we denote the $i^{th}$ row by $A_{i,\boldsymbol{\cdot}}$, the $j^{th}$ column by $A_{\boldsymbol{\cdot},j}$, and the entry in the $i^{th}$ row and $j^{th}$ column by $A_{i,j}$. \subsection{Spring Embeddings} Here and in what follows, we refer to $\Gamma$ as the ``boundary'' of the graph $G$, $V\backslash \Gamma$ as the ``interior,'' and generally assume $n_\Gamma:= |\Gamma|$ to be relatively large (typically $n_\Gamma = \Theta(n^{1/2})$). Of course, the concept of a ``boundary'' face is somewhat arbitrary, though, depending on the application from which the graph originated (i.e., a discretization of some domain), one face is often already designated as the boundary face. If a face has not been designated, choosing the largest induced cycle is a reasonable choice. By embedding $G$ in the plane and traversing the embedding, one can easily find all the induced cycles of $G$ in linear time and space \cite{chiba1985linear}. Without loss of generality, suppose that $\Gamma = \{n- n_\Gamma + 1,..., n\}$. A matrix $X \in \mathbb{R}^{n \times 2}$ is said to be a planar embedding of $G$ if the drawing of $G$ using straight lines and with vertex $i$ located at coordinates $X_{i,\boldsymbol{\cdot}}$ for all $i$ is a planar drawing. A matrix $X_\Gamma \in \mathbb{R}^{n_\Gamma \times 2}$ is said to be a convex embedding of $\Gamma$ if the embedding is planar and every point is a vertex of the convex hull $\text{conv}(\{[X_\Gamma]_{i,\boldsymbol{\cdot}}\}_{i=1}^{n_\Gamma})$.
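As a minimal sketch of the convex-embedding condition (assuming NumPy; the circular layout and the helper name \texttt{is\_strictly\_convex\_cycle} are our own illustrative choices), points listed in cyclic order form a convex embedding exactly when consecutive edge vectors always turn in the same direction:

```python
import numpy as np

def is_strictly_convex_cycle(P):
    """For points P listed in cyclic order, check that every point is a strict
    corner of the polygon, i.e. consecutive edge vectors always turn the same way."""
    m = len(P)
    cross = []
    for i in range(m):
        u = P[(i + 1) % m] - P[i]
        v = P[(i + 2) % m] - P[(i + 1) % m]
        cross.append(u[0] * v[1] - u[1] * v[0])
    cross = np.array(cross)
    return bool(np.all(cross > 0) or np.all(cross < 0))

# A scaled circular layout of a boundary cycle with m vertices: every point is a
# vertex of the convex hull, the columns are orthonormal, and the columns sum to zero.
m = 8
t = 2 * np.pi * np.arange(m) / m
X_gamma = np.sqrt(2.0 / m) * np.column_stack([np.cos(t), np.sin(t)])
```

This circular layout also satisfies the normalization $X_\Gamma^T X_\Gamma = I$, $X_\Gamma^T \mathbf{1} = 0$ used for the optimization problem considered in this paper.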
Tutte's spring embedding theorem states that if $X_\Gamma$ is a convex embedding of $\Gamma$, then the system of equations $$ X_{i,\boldsymbol{\cdot}} = \begin{cases} \frac{1}{d(i)} \sum_{j \in N(i)} X_{j,\boldsymbol{\cdot}} \quad \; \; i = 1,...,n-n_\Gamma \\ [X_{\Gamma}]_{i-(n-n_\Gamma), \boldsymbol{\cdot}} \qquad i = n-n_\Gamma+1 ,...,n \end{cases}$$ has a unique solution $X$, and this solution is a planar embedding of $G$ \cite{MR0158387}. We can write both the Laplacian and embedding of $G$ in block-notation, differentiating between interior and boundary vertices as follows: $$L_{G} = \begin{pmatrix} L_{o} + D_{o} & -A_{o,\Gamma} \\ -A_{o,\Gamma}^T & L_{\Gamma} + D_\Gamma \end{pmatrix} \in \mathbb{R}^{n \times n},\quad X = \begin{pmatrix} X_o \\ X_\Gamma \end{pmatrix} \in \mathbb{R}^{n \times 2},$$ where $L_{o},D_o \in \mathbb{R}^{(n - n_\Gamma ) \times (n - n_\Gamma )}$, $L_\Gamma, D_\Gamma \in \mathbb{R}^{ n_\Gamma \times n_\Gamma }$, $A_{o,\Gamma} \in \mathbb{R}^{(n- n_\Gamma ) \times n_\Gamma }$, $X_o \in \mathbb{R}^{(n- n_\Gamma ) \times 2}$, $X_\Gamma \in \mathbb{R}^{ n_\Gamma \times 2}$, and $L_o$ and $L_\Gamma$ are the Laplacians of $G[V\backslash \Gamma]$ and $G[\Gamma]$, respectively. Using block notation, the system of equations for the Tutte spring embedding of some convex embedding $X_\Gamma$ is given by $$ X_o = ( D_o + D[L_o] )^{-1} [(D[L_o] - L_o)X_o + A_{o,\Gamma} X_\Gamma],$$ where $D[A]$ is the diagonal matrix with diagonal entries given by the diagonal of $A$. Therefore, the unique solution to this system is $$X_o =(L_o +D_o)^{-1} A_{o,\Gamma} X_\Gamma.$$ We note that this choice of $X_o$ not only guarantees a planar embedding of $G$, but also minimizes Hall's energy, namely, $$\arg\min_{X_o} h(X) = (L_o +D_o)^{-1} A_{o,\Gamma} X_\Gamma,$$ where $h(X):= \text{Tr}(X^T L_G X)$ (see \cite{MR2029596} for more on Hall's energy).
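The closed-form solution $X_o = (L_o + D_o)^{-1} A_{o,\Gamma} X_\Gamma$ can be sketched numerically as follows (assuming NumPy; the wheel graph and square boundary here are a hypothetical toy example), including a check of the neighbor-averaging condition in Tutte's system.

```python
import numpy as np

# Toy example: wheel graph with one interior vertex (0) joined to a boundary 4-cycle {1,2,3,4}.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (2, 3), (3, 4), (4, 1)]
n, interior, boundary = 5, [0], [1, 2, 3, 4]
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0

# In block form, the interior block of L_G is L_o + D_o, and the
# interior-boundary block is -A_{o,Gamma}.
LoDo = L[np.ix_(interior, interior)]
A_oG = -L[np.ix_(interior, boundary)]

# Convex embedding of the boundary face (a square), then Tutte's solution.
X_G = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
X_o = np.linalg.solve(LoDo, A_oG @ X_G)
X = np.vstack([X_o, X_G])
```

By symmetry, the single interior vertex lands at the origin, the average of its four neighbors, as Tutte's system requires.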
While Tutte's theorem is a very powerful result, guaranteeing that, given a convex embedding of any face, the energy minimizing embedding of the remaining vertices results in a planar embedding, it gives no direction as to how this outer face should be embedded. In this work, we consider the problem of producing a planar embedding that is energy minimizing, subject to some normalization. We consider embeddings that satisfy $X_\Gamma^T X_\Gamma = I$ and $X^T_\Gamma \mathbf{1} = 0$, though other normalizations, such as $X^T X = I$ and $X^T \mathbf{1} = 0$, would be equally appropriate. The analysis that follows in this paper can be readily applied to this alternate normalization, but it does require the additional step of verifying a norm equivalence between $V$ and $\Gamma$ for the harmonic extension of low energy vectors, which can be produced relatively easily for the class of graphs considered in Section \ref{sec:trace}. Let $\mathcal{X}$ be the set of all convex, planar embeddings $X_\Gamma$ that satisfy $X_\Gamma^T X_\Gamma = I$ and $X^T_\Gamma \mathbf{1} = 0$. The main optimization problem under consideration is \begin{equation} \min \; h(X) \quad s.t. \quad X_\Gamma \in \text{cl}(\mathcal{X}),\label{eqn:opt} \end{equation} where cl$(\boldsymbol{\cdot})$ is the closure of a set. $\mathcal{X}$ is not a closed set, and so the minimizer of (\ref{eqn:opt}) may be a non-convex embedding. However, by the definition of closure, any such minimizer is arbitrarily close to a convex embedding. The normalizations $X^T_\Gamma \mathbf{1} = 0$ and $X^T_\Gamma X_\Gamma = I$ ensure that the solution does not degenerate into a single point or line. In what follows, we are primarily concerned with approximately solving this optimization problem. It is unclear whether there exists an efficient algorithm to solve (\ref{eqn:opt}), or whether the associated decision problem is NP-hard; even if (\ref{eqn:opt}) is NP-hard, verifying that this is indeed the case appears extremely difficult.
This remains an open problem. \subsection{Schur Complement of $V \backslash \Gamma$} Given some choice of $X_\Gamma$, by Tutte's theorem the minimum value of $h(X)$ is attained when $X_o =(L_o +D_o)^{-1} A_{o,\Gamma} X_\Gamma$, and is given by \begin{eqnarray*} h(X) &=& \text{Tr} \left[ \begin{pmatrix} [(L_o +D_o)^{-1} A_{o,\Gamma} X_\Gamma]^T & X^T_\Gamma \end{pmatrix} \begin{pmatrix} L_{o} + D_{o} & -A_{o,\Gamma} \\ -A_{o,\Gamma}^T & L_{\Gamma} + D_\Gamma \end{pmatrix} \begin{pmatrix} (L_o +D_o)^{-1} A_{o,\Gamma} X_\Gamma \\ X_\Gamma \end{pmatrix} \right] \\ &=& \text{Tr} \big(X_\Gamma^T \big[ L_\Gamma + D_\Gamma - A^T_{o,\Gamma} ( L_o + D_o)^{-1} A_{o,\Gamma} \big] X_\Gamma \big) \, = \, \text{Tr} \left(X_\Gamma^T S_\Gamma X_\Gamma \right), \end{eqnarray*} where $S_\Gamma$ is the Schur complement of $L_G$ with respect to $V \backslash \Gamma$, $$S_\Gamma = L_\Gamma + D_\Gamma - A^T_{o,\Gamma} ( L_o + D_o)^{-1} A_{o,\Gamma}. $$ For this reason, we can treat $X_o$ as a function of $X_\Gamma$ and instead consider the optimization problem \begin{equation} \min \; h_\Gamma(X_\Gamma) \quad s.t. \quad X_\Gamma \in \text{cl}(\mathcal{X}),\label{p1} \end{equation} where $$ h_\Gamma(X_\Gamma):=\text{Tr} \big(X_\Gamma^T S_\Gamma X_\Gamma \big) .$$ This immediately implies that, if the minimal two non-trivial eigenvectors of $S_\Gamma$ produce a convex embedding, then this is the exact solution of (\ref{p1}). However, a priori, there is no reason to think that this embedding would be planar or convex. In Section \ref{sec:energy_min_embed}, we perform numerical experiments that suggest that this embedding is often planar, and ``near'' a convex embedding in some sense. However, even if the embedding is planar, converting a non-convex embedding to a convex one may increase the objective function by a large amount. In Section \ref{sec:trace}, we show that $S_\Gamma$ and $L_\Gamma^{1/2}$ are spectrally equivalent.
This spectral equivalence leads to provable guarantees for an algorithm to approximately solve (\ref{p1}), as the embedding given by the minimal two non-trivial eigenvectors of $L_\Gamma^{1/2}$ is planar and convex. First, we present a number of basic properties of the Schur complement of a graph Laplacian in the following proposition. For more information on the Schur complement, we refer the reader to \cite{carlson1986schur,MR625249,MR2160825}. \begin{proposition}\label{prop:schur} Let $G=(V,E)$, $n = |V|$, be a graph and $L_G \in \mathbb{R}^{n \times n}$ the associated graph Laplacian. Let $L_G$ and vectors $v \in \mathbb{R}^{n}$ be written in block form \begin{equation*} L_G = \begin{pmatrix} L_{11} & L_{12} \\ L_{21} & L_{22} \end{pmatrix} ,\quad v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}, \end{equation*} where $L_{22} \in \mathbb{R}^{m \times m}$, $v_2 \in \mathbb{R}^{m}$, and $L_{12} \ne 0$. Then \begin{enumerate}[(1)] \item $S=L_{22} - L_{21} L^{-1}_{11} L_{12}$ is a graph Laplacian, \item $ \sum_{i=1}^m (e^T_i L_{22} \mathbf{1}_m) e_i e^T_i - L_{21} L^{-1}_{11} L_{12}$ is a graph Laplacian, \item $\langle S w , w \rangle = \inf \{ \langle L v , v \rangle \, | \, v_2 = w \}$. \end{enumerate} \end{proposition} \begin{proof} Let $P = \begin{pmatrix} -L_{11}^{-1} L_{12} \\ I \end{pmatrix} \in \mathbb{R}^{n \times m}$. Then $$ P^T L P = \begin{pmatrix} -L_{21} L_{11}^{-1} & I \end{pmatrix} \begin{pmatrix} L_{11} & L_{12} \\ L_{21} & L_{22} \end{pmatrix} \begin{pmatrix} -L_{11}^{-1} L_{12} \\ I \end{pmatrix} = L_{22} - L_{21} L^{-1}_{11} L_{12} = S.$$ Because $ L_{11} \mathbf{1}_{n-m} + L_{12} \mathbf{1}_m = 0$, we have $\mathbf{1}_{n-m} = -L_{11}^{-1} L_{12} \mathbf{1}_m$.
Therefore $P \mathbf{1}_m = \mathbf{1}_n$, and, as a result, \begin{equation*}S \mathbf{1}_m = P^T L P \mathbf{1}_m = P^T L \mathbf{1}_n = P^T 0 = 0.\end{equation*} In addition, \begin{eqnarray*} \left[\sum_{i=1}^m (e^T_i L_{22} \mathbf{1}_m) e_i e^T_i - L_{21} L^{-1}_{11} L_{12} \right] \mathbf{1}_m &=& \bigg[ \sum_{i=1}^m (e^T_i L_{22} \mathbf{1}_m) e_i e^T_i - L_{22} \bigg] \mathbf{1}_m + S \mathbf{1}_m \\ &=& \sum_{i=1}^m (e^T_i L_{22} \mathbf{1}_m) e_i - L_{22} \mathbf{1}_m \\ &=& \bigg[ \sum_{i=1}^m e_i e^T_i - I_m \bigg] L_{22} \mathbf{1}_m \, = \, 0. \end{eqnarray*} $L_{11}$ is an M-matrix, so $L_{11}^{-1}$ is a non-negative matrix. $L_{21} L_{11}^{-1} L_{12}$ is the product of three non-negative matrices, and so must also be non-negative. Therefore, the off-diagonal entries of $S$ and $ \sum_{i=1}^m (e^T_i L_{22} \mathbf{1}) e_i e^T_i - L_{21} L^{-1}_{11} L_{12}$ are non-positive, and so both are graph Laplacians. Consider $$\langle L v, v \rangle = \langle L_{11} v_1 , v_1 \rangle + 2 \langle L_{12} v_2 , v_1 \rangle + \langle L_{22} v_2, v_2 \rangle ,$$ with $v_2$ fixed. Because $L_{11}$ is symmetric positive definite, the minimum occurs when $$ \frac{\partial} { \partial v_1} \langle L v, v \rangle = 2 L_{11} v_1 + 2 L_{12} v_2 = 0.$$ Setting $v_1 = -L_{11}^{-1} L_{12} v_2$, the desired result follows. \end{proof} The Schur complement Laplacian $S_\Gamma$ is the sum of two Laplacians $L_\Gamma$ and $D_\Gamma - A^T_{o,\Gamma} ( L_o + D_o)^{-1} A_{o,\Gamma}$, where the first is the Laplacian of $G[\Gamma]$, and the second is a Laplacian representing the dynamics of the interior. In the next section we prove the spectral equivalence of $S_\Gamma$ and $L_\Gamma^{1/2}$ for a large class of graphs by first proving discrete energy-only trace theorems. Then, in Section \ref{sec:energy_min_embed}, we use this spectral equivalence to prove theoretical properties of (\ref{p1}) and motivate an algorithm to approximately solve this optimization problem. 
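The properties in Proposition \ref{prop:schur} lend themselves to a direct numerical check; below is a minimal sketch, assuming NumPy, on a small hypothetical graph (the edge list and block partition are our own illustrative choices).

```python
import numpy as np

# Small test graph: path 0-1-2-3-4-5 with two chords, split into
# block 1 = {0,1,2} and block 2 = {3,4,5}.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2), (2, 4)]
n, b1, b2 = 6, [0, 1, 2], [3, 4, 5]
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0

L11, L12 = L[np.ix_(b1, b1)], L[np.ix_(b1, b2)]
L21, L22 = L[np.ix_(b2, b1)], L[np.ix_(b2, b2)]
S = L22 - L21 @ np.linalg.solve(L11, L12)       # Schur complement w.r.t. block 1

# Property (3): the minimizing extension of w is v_1 = -L11^{-1} L12 w.
rng = np.random.default_rng(0)
w = rng.standard_normal(3)
v = np.concatenate([-np.linalg.solve(L11, L12 @ w), w])
energy_min = v @ L @ v
# Perturbing the first block can only increase the energy.
v_pert = v + np.concatenate([rng.standard_normal(3), np.zeros(3)])
```

The assertions below verify that $S$ is itself a graph Laplacian (symmetric, zero row sums, non-positive off-diagonal entries) and that $\langle Sw, w\rangle$ equals the minimal extension energy.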
\section{Trace Theorems for Planar Graphs} \label{sec:trace} The main result of this section takes classical trace theorems from the theory of partial differential equations and extends them to a class of planar graphs. However, for our purposes, we require a stronger form of trace theorem, one between energy semi-norms (i.e., no $\ell^2$ term), which we refer to as ``energy''-only trace theorems. These energy-only trace theorems imply their classical variants with $\ell^2$ terms almost immediately. We then use these new results to prove the spectral equivalence of $S_\Gamma$ and $L_\Gamma^{1/2}$ for the class of graphs under consideration. This class of graphs is rigorously defined below, but includes planar three-connected graphs that have some regular structure (such as graphs of finite element discretizations). In what follows, we prove spectral equivalence with explicit constants. While this does make the analysis slightly messier, it has the benefit of showing that equivalence holds for constants that are not too large, thereby verifying that the equivalence is a practical result which can be used in the analysis of algorithms. We begin by formally describing a classical trace theorem. Let $\Omega \subset \mathbb{R}^d$ be a domain with boundary $\Gamma = \partial \Omega$ that, locally, is the graph of a Lipschitz function. $H^1(\Omega)$ is the Sobolev space of square integrable functions with square integrable weak gradient, with norm \[ \| u \|^2_{1,\Omega } = \|\nabla u \|^2_{L^2(\Omega) } + \| u \|^2_{L^2(\Omega)}, \quad \mbox{where}\quad \| u \|^2_{L^2(\Omega)} = \int_\Omega u^2 \, dx . \] Let \[ \| \varphi \|^2_{1/2,\Gamma } = \| \varphi \|^2_{L^2(\Gamma)} + \iint_{\Gamma\times \Gamma}\frac{(\varphi(x) - \varphi(y))^2}{|x-y|^d} \,dx \, dy \] for functions defined on $\Gamma$, and denote by $H^{1/2}(\Gamma)$ the Sobolev space of functions defined on the boundary $\Gamma$ for which $\|\boldsymbol{\cdot}\|_{1/2,\Gamma}$ is finite.
The trace theorem for functions in $H^1(\Omega)$ is one of the most important and widely used trace theorems in the theory of partial differential equations. More general results for traces on boundaries of Lipschitz domains, which involve $L^p$ norms and fractional derivatives, are due to E.~Gagliardo~\cite{1957GagliardoE-aa} (see also~\cite{1988CostabelM-aa}). Gagliardo's theorem, when applied to the case of $H^1(\Omega)$ and $H^{1/2}(\Gamma)$, states that if $\Omega\subset\mathbb{R}^d$ is a Lipschitz domain, then the norm equivalence \[ \|\varphi\|_{1/2,\Gamma} \eqsim \inf\{ \|u\|_{1,\Omega} \;\big|\; u|_\Gamma = \varphi\} \] holds (the right hand side is indeed a norm on $H^{1/2}(\Gamma)$). These results are key tools in proving a priori estimates on the Dirichlet integral of functions with given data on the boundary of a domain $\Omega$. Roughly speaking, a trace theorem gives a bound on the energy of a harmonic function via the norm of the trace of the function on $\Gamma=\partial\Omega$. In addition to the classical references given above, further details on trace theorems and their role in the analysis of PDEs (including the case of Lipschitz domains) can be found in \cite{1972LionsJ_MagenesE-aa,1967NecasJ-aa}. There are several analogues of this theorem for finite element spaces (finite dimensional subspaces of $H^1(\Omega)$). For instance, in~\cite{MR1126677} it is shown that the finite element discretization of the Laplace-Beltrami operator on the boundary to the one-half power provides a norm which is equivalent to the $H^{1/2}(\Gamma)$-norm.
Here we prove energy-only analogues of the classical trace theorem for graphs $(G, \Gamma) \in \mathcal{G}_n$, using energy semi-norms $$ | u |^2_{G} = \langle L_G u, u \rangle \qquad \text{and} \qquad | \varphi|^2_{\Gamma} = \sum_{\substack{p,q \in \Gamma, \\p<q}} \frac{\left(\varphi(p)-\varphi(q)\right)^2}{d^2_G(p,q)}.$$ The energy semi-norm $| \boldsymbol{\cdot} |_{G}$ is a discrete analogue of $\|\nabla u \|_{L^2(\Omega) }$, and the boundary semi-norm $| \boldsymbol{\cdot} |_{\Gamma}$ is a discrete analogue of the quantity $\iint_{\Gamma\times \Gamma} \frac{(\varphi(x) - \varphi(y))^2}{|x-y|^{2}} \, dx \, dy$. In addition, by connectivity, $| \boldsymbol{\cdot} |_{G}$ and $| \boldsymbol{\cdot} |_{\Gamma}$ are norms on the quotient space orthogonal to $\mathbf 1$. We aim to prove that for any $\varphi \in \mathbb{R}^{n_\Gamma}$, $$ \frac{1}{c_1} \, |\varphi|_\Gamma \le \min_{u|_{\Gamma} = \varphi} |u|_G \le c_2 \, |\varphi|_\Gamma$$ for some constants $c_1,c_2$ that do not depend on $n_\Gamma,n$. We begin by proving these results for a simple class of graphs, and then extend our analysis to more general graphs. Some of the proofs of the results below are rather technical, and are therefore reserved for the appendix. \subsection{Trace Theorems for a Simple Class of Graphs} Let $G_{k,\ell} = C_k \, \square \, P_\ell$ be the Cartesian product of the $k$ vertex cycle $C_k$ and the $\ell$ vertex path $P_{\ell}$, where $ 4 \ell <k < 2 c \ell$ for some constant $c \in \mathbb{N}$. The lower bound $4 \ell < k$ is arbitrary in some sense, but is natural, given that the ratio of boundary length to in-radius of a convex region is at least $2\pi$. Vertex $(i,j)$ in $G_{k,\ell}$ corresponds to the product of $i \in C_{k}$ and $j \in P_{\ell}$, $i = 1,...,k$, $j = 1,...,\ell$. The boundary of $G_{k,\ell}$ is defined to be $\Gamma = \{(i,1)\}_{i=1}^k$.
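The semi-norms $|\boldsymbol{\cdot}|_G$ and $|\boldsymbol{\cdot}|_\Gamma$, and the minimal extension energy, can be computed directly on a small instance of $G_{k,\ell}$; the following sketch (assuming NumPy, with $k$, $\ell$, and the test function chosen purely for illustration) evaluates both sides of the desired two-sided bound.

```python
import numpy as np

def laplacian(n, edges):
    # assemble the graph Laplacian from an edge list
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

# G_{k,ell} = C_k x P_ell; vertex (i,j) stored at index (i mod k)*ell + (j-1),
# with 4*ell < k < 2*c*ell for c = 3.
k, ell, c = 14, 3, 3
idx = lambda i, j: (i % k) * ell + (j - 1)
edges = []
for i in range(k):
    for j in range(1, ell + 1):
        edges.append((idx(i, j), idx(i + 1, j)))      # cycle (C_k) edges
        if j < ell:
            edges.append((idx(i, j), idx(i, j + 1)))  # path (P_ell) edges
L = laplacian(k * ell, edges)
bnd = [idx(i, 1) for i in range(k)]
intr = [v for v in range(k * ell) if v not in bnd]

# |phi|_Gamma: on G_{k,ell}, the graph distance between boundary vertices
# (p,1) and (q,1) is the cycle distance min(|p-q|, k-|p-q|).
phi = np.cos(2 * np.pi * np.arange(k) / k)            # a smooth test function
bnorm = np.sqrt(sum((phi[p] - phi[q]) ** 2 / min(q - p, k - (q - p)) ** 2
                    for p in range(k) for q in range(p + 1, k)))

# min_{u|_Gamma = phi} |u|_G^2 = <S_Gamma phi, phi>, by property (3) of the
# Schur complement, so the minimal extension energy is computable directly.
S = (L[np.ix_(bnd, bnd)]
     - L[np.ix_(bnd, intr)] @ np.linalg.solve(L[np.ix_(intr, intr)], L[np.ix_(intr, bnd)]))
emin = np.sqrt(phi @ S @ phi)
```

For these parameters, the computed ratio $\min_{u|_\Gamma = \varphi} |u|_G / |\varphi|_\Gamma$ falls well within the explicit constants of the trace bounds proved below for $G_{k,\ell}$.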
Let $u \in \mathbb{R}^{k \times \ell}$ and $\varphi \in \mathbb{R}^k$ be functions on $G_{k,\ell}$ and $\Gamma$, respectively, with $u[(i,j)]$ denoted by $u(i,j)$ and $\varphi[(i,1)]$ denoted by $\varphi(i)$. For the remainder of the section, we consider the natural periodic extension of the vertices $(i,j)$ and the functions $u(i,j)$ and $\varphi(i)$ to the indices $i \in \mathbb{Z}$. In particular, if $i \not \in \{1,...,k\}$, then $(i,j) := (i^*,j)$, $\varphi(i) := \varphi( i^*)$, and $u(i,j) := u(i^*,j)$, where $i^* \in \{1,...,k\}$ and $ i^* = i \mod k$. Let $G^*_{k,\ell}$ be the graph resulting from adding to $G_{k,\ell}$ all edges of the form $\{(i,j),(i-1,j+1)\}$ and $\{(i,j),(i+1,j+1)\}$, $i = 1,...,k$, $j=1,...,\ell-1$. We provide a visual example of $G_{k,\ell}$ and $G^*_{k,\ell}$ in Figure \ref{fig:gkl}. First, we prove a trace theorem for $G_{k,\ell}$. We have broken the proof of the trace theorem into two lemmas. Lemma \ref{lm:bounded} shows that the discrete trace operator is bounded, and Lemma \ref{lm:inverse} shows that it has a continuous right inverse. Taken together, these lemmas imply our desired result. \begin{lemma}\label{lm:bounded} Let $G = G_{k,\ell}$, $ 4 \ell <k < 2 c \ell$, $c \in \mathbb{N}$, with boundary $\Gamma = \{(i,1)\}_{i=1}^k$. For any $u \in \mathbb{R}^{k \times \ell}$, the vector $\varphi = u|_\Gamma$ satisfies $$|\varphi|_{\Gamma} \le \max\{ \sqrt{3c},2 \pi\} \, |u|_G.$$ \end{lemma} \begin{proof} We can decompose $\varphi(p+h) - \varphi(p)$ into a sum of differences, given by \begin{eqnarray*} \varphi(p+h)-\varphi(p) &=& \sum_{i=1}^{s-1} u(p+h,i)-u(p+h,i+1)\\ && + \sum_{i=1}^{h} u(p+i,s)-u(p+i-1,s)\\ && + \sum_{i=1}^{s-1} u(p,s-i+1) -u(p,s-i), \end{eqnarray*} where $s = \bigg\lceil \frac{ h }{c} \bigg \rceil$.
By Cauchy-Schwarz, \begin{eqnarray*} \sum_{p=1}^k \sum_{h=1}^{\lfloor k/2 \rfloor} \left( \frac{\varphi(p+h)-\varphi(p)}{h} \right)^2 & \le & 3 \sum_{p=1}^k \sum_{h=1}^{\lfloor k/2 \rfloor} \left( \frac{1}{ h } \sum_{i=1}^{s-1} u(p+h,i)-u(p+h,i+1) \right)^2\\ && + 3 \sum_{p=1}^k \sum_{h=1}^{\lfloor k/2 \rfloor} \left( \frac{1}{ h } \sum_{i=1}^{h} u(p+i,s)-u(p+i-1,s) \right)^2\\ && + 3 \sum_{p=1}^k \sum_{h=1}^{\lfloor k/2 \rfloor} \left( \frac{1}{ h } \sum_{i=1}^{s-1} u(p,s-i+1) -u(p,s-i) \right)^2.\\ \end{eqnarray*} We bound the first and the second term separately. The third term is identical to the first. Using Hardy's inequality~\cite[Theorem~326]{HardyLittlewoodPolya}, we can bound the first term by \begin{eqnarray*} \sum_{p=1}^k \sum_{h=1}^{\lfloor k/2 \rfloor} \left( \frac{1}{ h } \sum_{i=1}^{s-1} u(p,i)-u(p,i+1) \right)^2 &=& \sum_{p=1}^k \sum_{s=1}^{\ell} \left( \frac{1}{ s } \sum_{i=1}^{s-1} u(p,i)-u(p,i+1) \right)^2 \sum_{\substack{h:\lceil h/c \rceil = s \\ 1 \le h \le \lfloor k/2 \rfloor}} \frac{s^2}{h^2} \\ &\le& 4 \sum_{p=1}^k \sum_{s=1}^{\ell-1} \big( u(p,s)-u(p,s+1) \big)^2 \sum_{\substack{h:\lceil h/c \rceil = s \\ 1 \le h \le \lfloor k/2 \rfloor}} \frac{s^2}{h^2}. 
\end{eqnarray*} We have $$\sum_{\substack{h:\lceil h/c \rceil = s \\ 1 \le h \le \lfloor k/2 \rfloor}} \frac{s^2}{h^2} \le s^2 \sum_{i=c(s-1)+1}^{cs} \frac{1}{i^2} \le \frac{s^2(c-1)}{(c(s-1)+1)^2} \le \frac{4(c-1)}{(c+1)^2} \le \frac{1}{2}$$ for $s \ge 2$ ($c \ge 3$, by definition), and for $s = 1$, $$ \sum_{\substack{h:\lceil h/c \rceil = 1 \\ 1 \le h \le \lfloor k/2 \rfloor}} \frac{1}{h^2} \le \sum_{i=1}^\infty \frac{1}{i^2} = \frac{\pi^2}{6}.$$ Therefore, we can bound the first term by $$ \sum_{p=1}^k \sum_{h=1}^{\lfloor k/2 \rfloor} \left( \frac{1}{ h } \sum_{i=1}^{s-1} u(p,i)-u(p,i+1) \right)^2 \le \frac{2 \pi^2}{3} \sum_{p=1}^k \sum_{s=1}^{\ell-1} \big( u(p,s)-u(p,s+1) \big)^2.$$ For the second term, we have \begin{eqnarray*} \sum_{p=1}^k \sum_{h=1}^{\lfloor k/2 \rfloor} \left( \frac{1}{ h } \sum_{i=1}^{h} u(p+i,s)-u(p+i-1,s) \right)^2 &\le& \sum_{p=1}^k \sum_{h=1}^{\lfloor k/2 \rfloor} \frac{1}{ h } \sum_{i=1}^{h} \big( u(p+i,s)-u(p+i-1,s) \big)^2 \\ &\le& c \sum_{p=1}^k \sum_{s=1}^{\ell} \big( u(p+1,s)-u(p,s) \big)^2. \end{eqnarray*} Combining these bounds produces the desired result $$| \varphi |_\Gamma \le \max\{ \sqrt{3c},2 \pi\}\, |u|_{G}.$$ \end{proof} \begin{figure}[t] \begin{center} \subfloat[$G_{16,3} =C_{16} \square P_3$]{\includegraphics*[width=2in]{gkl}} \qquad \qquad \subfloat[$G^*_{16,3}$]{\includegraphics*[width=2 in]{gklstar}} \caption{A visual example of $G_{k,\ell}$ and $G^*_{k,\ell}$ for $k = 16$, $\ell = 3$. The boundary $\Gamma$ is given by the outer (or, by symmetry, inner) cycle.} \label{fig:gkl} \end{center} \end{figure} In order to show that the discrete trace operator has a continuous right inverse, we need to produce a provably low-energy extension of an arbitrary function on $\Gamma$. Let \begin{equation*}\label{ave} a=\frac{1}{k} \sum_{p = 1}^{k} \varphi(p) \qquad \text{and} \qquad a(i,j) = \frac{1}{2j-1} \sum_{h=1-j}^{j-1} \varphi(i+h). 
\end{equation*} We consider the extension \begin{equation} \label{eqn:ext} u(i,j)= \frac{j-1}{\ell-1}a+\left(1-\frac{j-1}{\ell-1}\right)a(i,j). \end{equation} In the appendix (Lemma \ref{app:lm1}), we prove the following inverse result for the discrete trace operator. \begin{lemma}\label{lm:inverse} Let $G =G_{k,\ell}$, $ 4 \ell <k < 2 c \ell$, $c \in \mathbb{N}$, with boundary $\Gamma = \{(i,1)\}_{i=1}^k$. For any $\varphi \in \mathbb{R}^{k}$, the vector $u$ defined by (\ref{eqn:ext}) satisfies $$ |u|_G \le \sqrt{2c + \frac{233}{9}} \, |\varphi|_\Gamma.$$ \end{lemma} Combining Lemmas \ref{lm:bounded} and \ref{lm:inverse}, we obtain our desired trace theorem. \begin{theorem}\label{thm:disctrace} Let $G =G_{k,\ell}$, $ 4 \ell <k < 2 c \ell$, $c \in \mathbb{N}$, with boundary $\Gamma = \{(i,1)\}_{i=1}^k$. For any $\varphi \in \mathbb{R}^{k}$, $$ \frac{1}{\max\{ \sqrt{3c},2 \pi\}} \, |\varphi|_\Gamma \le \min_{u|_{\Gamma} = \varphi} |u|_G \le \sqrt{2c + \frac{233}{9}} \, |\varphi|_\Gamma.$$ \end{theorem} With a little more work, we can prove a similar result for a slightly more general class of graphs. Using Theorem \ref{thm:disctrace}, we can almost immediately prove a trace theorem for any graph $H$ satisfying $G_{k,\ell} \subset H \subset G^*_{k,\ell}$. In fact, Lemma \ref{lm:bounded} carries over immediately. In order to prove a new version of Lemma \ref{lm:inverse}, it suffices to bound the energy of $u$ on the edges in $G^*_{k,\ell}$ not contained in $G_{k,\ell}$. By Cauchy-Schwarz, \begin{eqnarray*} |u|_{G^*}^2 &=& |u|^2_{G} + \sum_{i=1}^k \sum_{j=1}^{\ell-1} \bigg[ \left(u(i,j+1)-u(i-1,j) \right)^2 + \left(u(i,j+1)-u(i+1,j) \right)^2 \bigg] \\ &\le& 3 \sum_{i=1}^{k}\sum_{j=1}^{\ell} (u(i+1,j)-u(i,j))^2 + 2 \sum_{i=1}^{k}\sum_{j=1}^{\ell-1}(u(i,j+1)-u(i,j))^2, \end{eqnarray*} and therefore Corollary \ref{thm:disctrace2} follows immediately from the proofs of Lemmas \ref{lm:bounded} and \ref{lm:inverse}.
\begin{corollary}\label{thm:disctrace2} Let $H$ satisfy $G_{k,\ell} \subset H \subset G^*_{k,\ell}$, $ 4 \ell <k < 2 c \ell$, $c \in \mathbb{N}$, with boundary $\Gamma = \{(i,1)\}_{i=1}^k$. For any $\varphi \in \mathbb{R}^{k}$, $$ \frac{1}{\max\{ \sqrt{3c},2 \pi\}} \, |\varphi|_\Gamma \le \min_{u|_{\Gamma} = \varphi} |u|_H \le \sqrt{4c + \frac{475}{9}} \, |\varphi|_\Gamma.$$ \end{corollary} \subsection{Trace Theorems for General Graphs} In order to extend Corollary \ref{thm:disctrace2} to more general graphs, we introduce a graph operation which is similar in concept to an aggregation (a partition of $V$ into connected subsets) in which the size of aggregates are bounded. In particular, we give the following definition. \begin{definition} The graph $H$, $G_{k,\ell}\subset H \subset G^*_{k,\ell}$, is said to be an $M$-aggregation of $(G,\Gamma) \in \mathcal{G}_n$ if there exists a partition $\mathcal{A} = a_* \cup \{ a_{i,j} \}_{i=1,...,k}^{j=1,...,\ell}$ of $V(G)$ satisfying \begin{enumerate} \item $G[a_{i,j}]$ is connected and $ |a_{i,j}| \le M$ for all $i =1,...,k$, $j=1,...,\ell$, \item $ \Gamma \subset \bigcup_{i=1}^k a_{i,1} $, and $\Gamma \cap a_{i,1} \ne \emptyset$ for all $i = 1,...,k$, \item $ N_G(a_*) \subset a_* \cup \bigcup_{i=1}^k a_{i,\ell}$, \item the aggregation graph of $\mathcal{A}\backslash a_*$, given by $(\mathcal{A} \backslash a_*, \{ \left(a_{i_1,j_1}, a_{i_2,j_2} \right) \, | \, N_G(a_{i_1,j_1}) \cap a_{i_2,j_2} \ne \emptyset\})$, is isomorphic to $H$. \end{enumerate} \end{definition} We provide a visual example in Figure \ref{fig:magg}, and, later, in Subsection \ref{sub:ex}, we show that this operation applies to a fairly large class of graphs. For now, we focus on using the above definition to prove trace theorems for graphs that have an $M$-aggregation $H$, for some $G_{k,\ell}\subset H \subset G^*_{k,\ell}$. However, the $M$-aggregation procedure is not the only operation for which we can control the behavior of the energy and boundary semi-norms.
For instance, the behavior of our semi-norms under the deletion of some number of edges can be bounded easily if there exists a set of paths of constant length, with one path between each pair of vertices which are no longer adjacent, such that no edge is in more than a constant number of these paths. In addition, the behavior of these semi-norms under the disaggregation of large degree vertices is also relatively well-behaved, see \cite{hu2017approximation} for details. We give the following result regarding graphs $(G,\Gamma)$ for which some $H$, $G_{k,\ell}\subset H \subset G^*_{k,\ell}$, is an $M$-aggregation of $(G,\Gamma)$, but note that a large number of minor refinements are possible, such as the two briefly mentioned in this paragraph. \begin{figure}[t] \begin{center} \subfloat[graph $(G,\Gamma)$]{\includegraphics*[width=1.9in]{Magg_1}} \qquad \subfloat[partition $\mathcal{A}$]{\includegraphics*[width=1.9 in]{Magg_2}} \qquad \subfloat[$G_{6,2} \subset H \subset G^*_{6,2}$]{\includegraphics*[width=1.6 in]{Magg_3}} \caption{An example of an $M$-aggregation. Figure (A) provides a visual representation of a graph $G$, with boundary vertices $\Gamma$ enlarged. Figure (B) shows a partition $\mathcal{A}$ of $G$, in which each aggregate (enclosed by dotted lines) has order at most four. The set $a_*$ is denoted by a shaded region. Figure (C) shows the aggregation graph $H$ of $\mathcal{A}\backslash a_*$. 
The graph $H$ satisfies $G_{6,2} \subset H \subset G^*_{6,2}$, and is therefore a $4$-aggregation of $(G,\Gamma)$.} \label{fig:magg} \end{center} \end{figure} \begin{theorem}\label{thm:general-trace} If $H$, $G_{k,\ell}\subset H \subset G^*_{k,\ell}$, $4\ell<k<2c\ell$, $c \in \mathbb{N}$, is an $M$-aggregation of $(G,\Gamma) \in \mathcal{G}_n$, then for any $\varphi \in \mathbb{R}^{n_\Gamma}$, $$ \frac{1}{ 6 M \sqrt{M+3} \max\{ \sqrt{3c},2 \pi\}} \, |\varphi|_\Gamma \le \min_{u|_{\Gamma} = \varphi} |u|_G \le 28 M^2 \sqrt{ 3 c + 20} \, |\varphi|_\Gamma.$$ \end{theorem} The proof of this result is rather technical, and can be found in the appendix (Theorem \ref{app:thm1}). The same proof of Theorem \ref{thm:general-trace} also immediately implies a similar result. Let $\widetilde L \in \mathbb{R}^{n_\Gamma \times n_\Gamma}$ be the Laplacian of the complete graph on $\Gamma$ with weights $w(i,j)= d^{-2}_\Gamma(i,j)$. The same proof implies the following. \begin{corollary}\label{thm:tildeversion} If $H$, $G_{k,\ell}\subset H \subset G^*_{k,\ell}$, $4\ell<k<2c\ell$, $c \in \mathbb{N}$, is an $M$-aggregation of $(G,\Gamma) \in \mathcal{G}_n$, then for any $\varphi \in \mathbb{R}^{n_\Gamma}$, $$ \frac{1}{ 6 M \sqrt{M+3} \max\{ \sqrt{3c},2 \pi\}} \, \langle \widetilde L \varphi, \varphi \rangle^{1/2} \le \min_{u|_{\Gamma} = \varphi} |u|_G \le 28 M^2 \sqrt{ 3 c + 20} \, \langle \widetilde L \varphi, \varphi \rangle^{1/2}.$$ \end{corollary} \subsection{Spectral Equivalence of $S_\Gamma$ and $L_\Gamma^{1/2}$} By Corollary \ref{thm:tildeversion}, and the property $\langle \varphi , S_\Gamma \varphi \rangle =\min_{u|_{\Gamma} = \varphi} |u|^2_G $ (see Proposition \ref{prop:schur}), in order to prove spectral equivalence between $S_\Gamma$ and $L_\Gamma^{1/2}$, it suffices to show that $L_\Gamma^{1/2}$ and $\widetilde L$ are spectrally equivalent. This can be done relatively easily, and leads to a proof of the main result of the section.
\begin{theorem} \label{thm:specequiv} If $H$, $G_{k,\ell}\subset H \subset G^*_{k,\ell}$, $4\ell<k<2c\ell$, $c \in \mathbb{N}$, is an $M$-aggregation of $(G,\Gamma) \in \mathcal{G}_n$, then for any $\varphi \in \mathbb{R}^{n_\Gamma}$, $$ \frac{1}{36 M^2 (M+3) \max\{ 3c,4 \pi^2 \} \left( \frac{2}{3 \pi} + \frac{\sqrt{2}}{27} \right) } \, \langle L^{1/2}_\Gamma \varphi, \varphi \rangle \le \langle S_\Gamma \varphi, \varphi \rangle \le \frac{784 M^4(3c+20)}{\left( \frac{1}{2 \pi} - \frac{\sqrt{2}}{12} \right)} \, \langle L^{1/2}_\Gamma \varphi, \varphi \rangle.$$ \end{theorem} \begin{proof} Let $\phi(i,j) = \min\{i-j \mod n_\Gamma, \; j-i \mod n_\Gamma\}$. Since $G[\Gamma]$ is a cycle, $ \widetilde L(i,j) = - \phi(i,j)^{-2}$ for $i \ne j$. The spectral decomposition of $L_\Gamma$ is well known, namely, \begin{equation*} L_\Gamma = \sum_{k=1}^{\big\lfloor \tfrac{n_\Gamma}{2} \big\rfloor } \lambda_k(L_\Gamma) \bigg[\frac{x_k x_k^T}{\|x_k\|^2} +\frac{y_k y_k^T}{\|y_k\|^2} \bigg],\end{equation*} where $\lambda_k(L_\Gamma) = 2 - 2 \cos\tfrac{2 \pi k }{n_\Gamma} $ and $x_k(j) = \sin \tfrac{2 \pi k j }{n_\Gamma}$, $y_k(j) = \cos \tfrac{2 \pi k j }{n_\Gamma} $, $j = 1,...,n_\Gamma$. If $n_\Gamma$ is odd, then $\lambda_{(n_\Gamma-1)/2}$ has multiplicity two, but if $n_\Gamma$ is even, then $\lambda_{n_\Gamma/2}$ has only multiplicity one, as $x_{n_\Gamma/2} = 0$. If $k \ne n_\Gamma/2$, we have \begin{equation*} \| x_k \|^2 = \sum_{j=1}^{n_\Gamma} \sin^2 \bigg( \frac{2 \pi k j}{n_\Gamma}\bigg) = \frac{n_\Gamma}{2} - \frac{1}{2} \sum_{j=1}^{n_\Gamma} \cos \bigg( \frac{4 \pi k j}{n_\Gamma} \bigg) =\frac{n_\Gamma}{2} - \frac{1}{4} \left[ \frac{ \sin ( 2 \pi k (2 + \frac{1}{n_\Gamma} ) ) }{ \sin \frac{2 \pi k }{n_\Gamma} } - 1 \right] = \frac{n_\Gamma}{2},\end{equation*} and so $\|y_k\|^2 = \frac{n_\Gamma}{2}$ as well. If $k = n_\Gamma/2$, then $\|y_k \|^2 = n_\Gamma$.
If $n_\Gamma$ is odd, \begin{eqnarray*} L_\Gamma^{1/2}(i,j) & = & \frac{2 \sqrt{2}}{n_\Gamma} \sum_{k=1}^{\frac{n_\Gamma-1}{2}} \left[1 - \cos \left(\frac{2 k \pi }{n_\Gamma}\right) \right]^{1/2} \left[ \sin \left( \frac{2 \pi k i }{n_\Gamma} \right) \sin \left( \frac{2 \pi k j }{n_\Gamma} \right) - \cos \left( \frac{2 \pi k i }{n_\Gamma} \right) \cos \left( \frac{2 \pi k j}{n_\Gamma} \right) \right] \\ & = & \frac{4}{n_\Gamma} \sum_{k=1}^{\frac{n_\Gamma-1}{2}} \sin \left( \frac{\pi}{2} \frac{2 k }{n_\Gamma} \right) \cos \left( \phi(i,j) \pi \frac{2k}{n_\Gamma}\right) \, = \, \frac{2}{n_\Gamma} \sum_{k=0}^{n_\Gamma} \sin \left( \frac{\pi}{2} \frac{2 k }{n_\Gamma} \right) \cos \left( \phi(i,j) \pi \frac{2k}{n_\Gamma}\right), \end{eqnarray*} and if $n_\Gamma$ is even, \begin{eqnarray*} L_\Gamma^{1/2}(i,j) & = & \frac{2}{n_\Gamma}(-1)^{i+j} + \frac{4}{n_\Gamma} \sum_{k=1}^{\frac{n_\Gamma}{2}-1} \sin \left( \frac{\pi}{2} \frac{2 k }{n_\Gamma} \right) \cos \left( \phi(i,j) \pi \frac{2k}{n_\Gamma}\right) \\ & = & \frac{2}{n_\Gamma} \sum_{k=0}^{n_\Gamma} \sin \left( \frac{\pi}{2} \frac{2 k }{n_\Gamma} \right) \cos \left( \phi(i,j) \pi \frac{2k}{n_\Gamma}\right). \end{eqnarray*} $L^{1/2}_{\Gamma}(i,j)$ is simply the trapezoid rule applied to the integral of $\sin (\tfrac{\pi}{2} x) \cos( \phi(i,j) \pi x)$ on the interval $[0,2]$. 
Therefore, $$ \bigg|L_\Gamma^{1/2}(i,j) + \frac{2}{\pi(4 \phi(i,j)^2 -1)} \bigg| = \bigg| L_\Gamma^{1/2}(i,j) - \int_{0}^2 \sin\left(\frac{\pi}{2} x\right) \cos \left(\phi(i,j) \pi x\right) dx \bigg| \le \frac{2}{3 n_\Gamma^2} ,$$ where we have used the fact that if $f \in C^2([a,b])$, then $$ \bigg| \int_a^b f(x) dx - \frac{f(a)+f(b)}{2}(b-a) \bigg| \le \frac{(b-a)^3}{12} \max_{\xi \in [a,b]} |f''(\xi)|.$$ Noting that $n_\Gamma \ge 3$, it quickly follows that $$ \bigg( \frac{1}{2 \pi} - \frac{\sqrt{2}}{12} \bigg) \langle \widetilde L \varphi, \varphi \rangle \le \langle L^{1/2}_{\Gamma} \varphi, \varphi \rangle \le \bigg( \frac{2}{3 \pi} + \frac{\sqrt{2}}{27} \bigg) \langle \widetilde L \varphi, \varphi \rangle.$$ Combining this result with Corollary \ref{thm:tildeversion}, and noting that $\langle \varphi , S_\Gamma \varphi \rangle = |\widehat u|^2_G$, where $\widehat u$ is the harmonic extension of $\varphi$, we obtain the desired result $$ \frac{1}{36 M^2 (M+3) \max\{ 3c,4 \pi^2 \} \left( \frac{2}{3 \pi} + \frac{\sqrt{2}}{27} \right) } \, \langle L^{1/2}_\Gamma \varphi, \varphi \rangle \le \langle S_\Gamma \varphi, \varphi \rangle \le \frac{784 M^4(3c+20)}{\left( \frac{1}{2 \pi} - \frac{\sqrt{2}}{12} \right)} \, \langle L^{1/2}_\Gamma \varphi, \varphi \rangle.$$ \end{proof} \subsection{An Illustrative Example}\label{sub:ex} While the concept of a graph $(G,\Gamma)$ having some $H$, $G_{k,\ell} \subset H \subset G^*_{k,\ell}$, as an $M$-aggregation seems somewhat abstract, this simple formulation in itself is quite powerful.
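The spectral decomposition used in this proof is easy to check numerically. The following sketch (ours, not part of the paper) builds the cycle Laplacian $L_\Gamma$, verifies that its eigenvalues are $2 - 2\cos(2\pi k/n_\Gamma)$, and forms $L_\Gamma^{1/2}$ through the eigendecomposition:

```python
import numpy as np

# Numerical sanity check (not the authors' code): cycle Laplacian on n vertices.
n = 31  # stands in for n_Gamma; any n >= 3 works
L = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)

# Eigenvalues of the cycle Laplacian are 2 - 2*cos(2*pi*k/n), k = 0,...,n-1.
w, Q = np.linalg.eigh(L)
expected = np.sort(2 - 2 * np.cos(2 * np.pi * np.arange(n) / n))
assert np.allclose(np.sort(w), expected)

# Matrix square root: same eigenvectors, square-rooted eigenvalues.
sqrtL = Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T
assert np.allclose(sqrtL @ sqrtL, L, atol=1e-10)

# Off-diagonal entries of L^{1/2} are negative and decay roughly like
# 1/phi(i,j)^2, the weights used for the surrogate Laplacian \tilde L.
assert sqrtL[0, 1] < 0 and abs(sqrtL[0, 2]) < abs(sqrtL[0, 1])
```

We deliberately assert only the qualitative $1/\phi^2$ decay here, rather than the exact constants of the trapezoid-rule estimate.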
As an example, we illustrate that this implies a trace theorem (and, therefore, spectral equivalence) for all three-connected planar graphs with bounded face degree (number of edges in the associated induced cycle) for which there exists a planar spring embedding that has a convex hull which is not too thin (a bounded distance to Hausdorff distance ratio for the boundary with respect to some point in the convex hull) and that satisfies bounded edge length and small angle conditions. Let $\mathcal{G}_n^{f \le c}$ denote the set of graphs $(G,\Gamma) \in \mathcal{G}_n$ for which every face other than the outer face $\Gamma$ has at most $c$ edges. We prove the following theorem\footnote{The theorem below is stated for $\ell \le k$ to avoid certain trivial cases involving small $n$. The same theorem holds for $n$ sufficiently large and $4 \ell < k$, but it should also be noted that the entire analysis of this section holds for $\ell \le k$, albeit with worse constants.} in the appendix (Theorem \ref{app:thm2}).
\begin{theorem}\label{thm:example} If there exists a planar spring embedding $X$ of $(G, \Gamma) \in \mathcal{G}_n^{f\le c_1}$ for which \begin{enumerate}[(1)] \item $K= \text{conv}\left(\{ [X_\Gamma]_{i,\boldsymbol{\cdot}} \}_{i=1}^{n_\Gamma} \right)$ satisfies $$\sup_{u \in K} \inf_{v \in \partial K} \sup_{w \in \partial K} \frac{\|u - v \| }{ \|u - w \|} \ge c_2>0,$$ \item $X$ satisfies $$ \max_{\substack{\{i_1,i_2\} \in E \\ \{j_1,j_2\} \in E}} \frac{ \|X_{i_1,\boldsymbol{\cdot}} - X_{i_2,\boldsymbol{\cdot}} \|}{ \|X_{j_1,\boldsymbol{\cdot}} - X_{j_2,\boldsymbol{\cdot}} \|} \le c_3 \quad \text{and} \quad \min_{\substack{i \in V \\ j_1,j_2 \in N(i)}} \angle \, X_{j_1,\boldsymbol{\cdot}} \, X_{i,\boldsymbol{\cdot}} \, X_{j_2,\boldsymbol{\cdot}} \ge c_4>0,$$ \end{enumerate} then there exists an $H$, $G_{k,\ell}\subset H \subset G^*_{k,\ell}$, $\ell \le k<2c\ell$, $c \in \mathbb{N}$, such that $H$ is an $M$-aggregation of $(G,\Gamma)$ where $c$ and $M$ are constants that depend on $c_1$, $c_2$, $c_3$, and $c_4$. \end{theorem} \section{Approximately Energy Minimizing Embeddings}\label{sec:energy_min_embed} In this section, we make use of the analysis of Section \ref{sec:trace} to give theoretical guarantees regarding approximate solutions to (\ref{p1}), which inspires the construction of a natural algorithm to approximately solve this optimization problem. In addition, we give numerical results for our algorithm. 
Though in the previous section we took great care to produce results with explicit constants for the purpose of illustrating practical usefulness, in what follows we simply suppose that we have the spectral equivalence \begin{equation} \label{eqn:speceq} \frac{1}{c_1} \, \langle L^{1/2}_\Gamma x, x \rangle \le \langle S_\Gamma x, x \rangle \le c_2 \, \langle L^{1/2}_\Gamma x, x \rangle, \end{equation} for all $x \in \mathbb{R}^{n_\Gamma}$ and some constants $c_1$ and $c_2$ which are not too large and can be explicitly chosen based on the results of Section \ref{sec:trace}. \subsection{Theoretical Guarantees} Again, we note that if the minimal two non-trivial eigenvectors of $S_\Gamma$ produce a convex embedding, then they give the exact solution of (\ref{p1}). However, if this is not the case, then, by spectral equivalence, we can still make a number of statements. The convex embedding $X_C$ given by $$[X_{C}]_{j,\boldsymbol{\cdot}} = \sqrt{\frac{2}{n_\Gamma}} \bigg(\cos \frac{2 \pi j}{n_\Gamma}, \sin \frac{2 \pi j}{n_{\Gamma}} \bigg), \qquad j = 1,...,n_\Gamma,$$ is the embedding of the two minimal non-trivial eigenvectors of $L_\Gamma^{1/2}$, and therefore, \begin{equation} \label{eqn:approx} h_\Gamma(X_C) \le 4 c_2 \sin \frac{\pi}{n_\Gamma} \le c_1 c_2 \min_{ X_\Gamma \in\text{cl}\left( \mathcal{X}\right)} h_\Gamma (X_\Gamma), \end{equation} thereby producing a $c_1c_2$ approximation guarantee for (\ref{p1}). In addition, we can guarantee that the optimal embedding is largely contained in the subspace corresponding to the $k$ minimal eigenvalues of $L^{1/2}_\Gamma$ when $k$ is a reasonably large constant.
In particular, if $X^*_\Gamma$ minimizes (\ref{p1}), and $\Pi_{i}$ is the $\ell^2$-orthogonal projection onto the direct sum of the eigenvectors corresponding to the $i$ minimal non-trivial eigenvalues (counted with multiplicity) of $L^{1/2}_\Gamma$, then \begin{eqnarray*} h_\Gamma(X_\Gamma^*) &\ge& \text{Tr}\big( \left[ (I - \Pi_{2 i}) X_\Gamma^* \right]^T S_\Gamma (I - \Pi_{2 i}) X_\Gamma^* \big) \\ &\ge& \frac{1}{c_1} \text{Tr}\big( \left[ (I - \Pi_{2 i}) X_\Gamma^* \right]^T L^{1/2}_\Gamma (I - \Pi_{2 i}) X_\Gamma^* \big) \\ &\ge& \frac{2}{c_1} \sin \left( \frac{\pi(i+1)}{n_\Gamma} \right) \text{Tr}\big( \left[ (I - \Pi_{2 i}) X_\Gamma^* \right]^T (I - \Pi_{2 i}) X_\Gamma^* \big), \end{eqnarray*} and $h_\Gamma(X_\Gamma^*) \le h_\Gamma(X_C)$, which, by using the property $\tfrac{2x}{\pi} \le \sin x \le x$ for all $x \in \left[0,\tfrac{\pi}{2}\right]$, implies that $$\text{Tr}\big( \left[ (I - \Pi_{2 i}) X_\Gamma^* \right]^T (I - \Pi_{2 i}) X_\Gamma^* \big) \le \frac{2 c_1 c_2 \sin \left(\pi/n_\Gamma \right)}{\sin\left(\pi(i+1)/n_\Gamma \right)} \le \frac{\pi c_1 c_2}{i+1}.$$ \subsection{Algorithmic Considerations} The theoretical analysis of Subsection 4.1 inspires a number of natural techniques to approximately solve (\ref{p1}), such as exhaustively searching the direct sum of some constant number of low energy eigenspaces of $S_\Gamma$. However, numerically, it appears that when the pair $(G,\Gamma)$ satisfies certain conditions, such as the conditions of Theorem \ref{thm:example}, the minimal non-trivial eigenvector pair often produces a convex embedding, and when it does not, the removal of some small number of boundary vertices produces a convex embedding. If the embedding is almost convex (i.e., convex after the removal of some small number of vertices), a convex embedding can be produced by simply moving these vertices so that they are on the boundary and between their two neighbors.
Given an approximate solution to (\ref{p1}), one natural approach simply consists of iteratively applying a smoothing matrix, such as $d I - S_\Gamma$, $d > \rho(S_\Gamma)$, or the inverse $S_\Gamma^{-1}$ defined on the subspace $\{x \, | \, \langle x, {\bf 1} \rangle = 0 \}$, until the matrix $X_\Gamma$ is no longer a convex embedding. In fact, applying this procedure to $X_C$ immediately produces a technique that approximates the optimal solution within a factor of $c_1 c_2$, and possibly better given smoothing. In order to have the theoretical guarantees that result from using $X_C$, and benefit from the possibly nearly-convex low energy Schur complement embedding, we introduce Algorithm \ref{alg1}. \begin{algorithm}[!tb] \caption{Embed the Boundary $\Gamma$ \label{alg1}} \begin{enumerate} \item[] $ X = \text{minimaleigenvectors}(G,\Gamma)$ \item[] { \bf If} $\text{isplanar}(X) = 0$, \begin{enumerate} \item[] $X \leftarrow \left\{\sqrt{\tfrac{2}{n_\Gamma}} \bigg(\cos \frac{2 \pi j}{n_\Gamma}, \sin \frac{2 \pi j}{n_{\Gamma}} \bigg)\right\}_{j=1}^{n_\Gamma}$ \end{enumerate} \item[] { \bf Else} \begin{enumerate} \item[] { \bf If} $\text{isconvex}(X) = 1$, \begin{enumerate} \item[] $X_{alg} = X$ \item[] end Algorithm \end{enumerate} \item[] { \bf Else} \begin{enumerate} \item[] $X \leftarrow \text{makeconvex}(X)$ \item[] $X \leftarrow X - \frac{{\bf 1}_{n_\Gamma}{\bf 1}^T_{n_\Gamma} X}{n_\Gamma} $ \item[] solve $[X^T X] Q = Q \Lambda$, $Q$ orthogonal, $\Lambda$ diagonal \item[] $X \leftarrow X Q \Lambda^{-1/2}$ \item[] { \bf If } $h_\Gamma(X) > h_\Gamma \left( \left\{\sqrt{\tfrac{2}{n_\Gamma}} \bigg(\cos \frac{2 \pi j}{n_\Gamma}, \sin \frac{2 \pi j}{n_{\Gamma}} \bigg)\right\}_{j=1}^{n_\Gamma}\right)$ \begin{enumerate} \item[] $X \leftarrow \left\{\sqrt{\tfrac{2}{n_\Gamma}} \bigg(\cos \frac{2 \pi j}{n_\Gamma}, \sin \frac{2 \pi j}{n_{\Gamma}} \bigg)\right\}_{j=1}^{n_\Gamma}$ \end{enumerate} \end{enumerate} \end{enumerate} \item[] $\text{gap} =1$ \item[] { \bf While } $\text{gap}
>0$, \begin{enumerate} \item[] $\widehat X \leftarrow \text{smooth} (X)$ \item[] { \bf If} $\text{isplanar}(\widehat X) = 0$, \begin{enumerate} \item[] $\text{gap} \leftarrow -1$ \end{enumerate} \item[] { \bf Else} \begin{enumerate} \item[] { \bf If} $\text{isconvex}(\widehat X) = 0$, \begin{enumerate} \item[] $\widehat X \leftarrow \text{makeconvex}(\widehat X)$ \end{enumerate} \item[] $\widehat X \leftarrow \widehat X - \frac{{\bf 1}_{n_\Gamma}{\bf 1}^T_{n_\Gamma} \widehat X}{n_\Gamma} $ \item[] solve $[\widehat X^T \widehat X] Q = Q \Lambda$, $Q$ orthogonal, $\Lambda$ diagonal \item[] $\widehat X \leftarrow \widehat X Q \Lambda^{-1/2}$ \item[] $\text{gap} \leftarrow h_\Gamma(X) - h_\Gamma(\widehat X)$ \item[] { \bf If} $\text{gap} >0$ \begin{enumerate} \item[] $X \leftarrow \widehat X$ \end{enumerate} \end{enumerate} \end{enumerate} \item[] $X_{alg} = X$ \end{enumerate} \end{algorithm} Algorithm \ref{alg1} takes a graph $(G,\Gamma) \in \mathcal{G}_n$ as input, and first computes the minimal two non-trivial eigenvectors of the Schur complement, denoted by $X$. If $X$ is planar and convex, the algorithm terminates and outputs $X$, as it has found the exact solution to (\ref{p1}). If $X$ is non-planar, then this embedding is replaced by $X_C$, the minimal two non-trivial eigenvectors of the boundary Laplacian to the one-half power. If $X$ is planar, but non-convex, then some procedure is applied to transform $X$ into a convex embedding. The embedding is then shifted so that the origin is the center of mass, and a change of basis is applied so that $X^TX = I$. However, if $h_\Gamma (X) > h_\Gamma(X_C)$, then clearly $X_C$ is a better initial approximation, and we still replace $X$ by $X_C$. We then perform some form of smoothing to our embedding $X$, resulting in a new embedding $\widehat X$. If $\widehat X$ is non-planar, the algorithm terminates and outputs $X$. 
If $\widehat X$ is planar, we again apply some procedure to transform $\widehat X$ into a convex embedding, if it is not already convex. Now that we have a convex embedding $\widehat X$, we shift $\widehat X$ and apply a change of basis, so that $\widehat X^T{\bf 1} =0$ and $\widehat X^T \widehat X = I$. If $h_\Gamma (\widehat X) < h_\Gamma (X)$, then we replace $X$ by $\widehat X$ and repeat this smoothing procedure, producing a new $\widehat X$, until the algorithm terminates. If $h_\Gamma (\widehat X) \ge h_\Gamma (X)$, then we terminate the algorithm and output $X$. It is immediately clear from the statement of the algorithm that the following result holds. \begin{proposition} The embedding $X_{alg}$ of Algorithm \ref{alg1} satisfies $h_\Gamma(X_{alg}) \le c_1 c_2 \min_{ X_\Gamma \in \mathcal{X}} h_\Gamma (X_\Gamma)$. \end{proposition} We now discuss some of the finer details of Algorithm \ref{alg1}. Determining whether an embedding is planar can be done in near-linear time using the sweep line algorithm \cite{shamos1976geometric}. If the embedding is planar, testing if it is also convex can be done in linear time. One such procedure consists of shifting the embedding so that the origin is the center of mass, checking if the angles each vertex makes with the $x$-axis are properly ordered, and then verifying that each vertex $x_i$ is not in $\text{conv}(\{o,x_{i-1},x_{i+1}\})$. Also, in practice, it is advisable to replace conditions of the form $h_\Gamma(X) - h_\Gamma (\widehat X) >0$ in Algorithm \ref{alg1} by the condition $h_\Gamma(X) - h_\Gamma (\widehat X) > \text{tol}$ for some small value of $\text{tol}$, in order to ensure that the algorithm terminates after a finite number of steps. There are a number of different choices for smoothing procedures and techniques to make a planar embedding convex.
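A convexity test of this kind takes only a few lines. The following sketch (our code, not the authors'; it uses the standard cross-product criterion for a cyclically ordered polygon rather than the angle-ordering procedure just described) assumes the boundary positions are given in cyclic order:

```python
import numpy as np

def is_convex_embedding(X):
    """Check whether boundary positions X (n x 2, in cyclic boundary order)
    form a convex polygon: the cross products of all consecutive edge pairs
    must share a single sign."""
    n = len(X)
    cross = []
    for i in range(n):
        a = X[(i + 1) % n] - X[i]            # edge i -> i+1
        b = X[(i + 2) % n] - X[(i + 1) % n]  # edge i+1 -> i+2
        cross.append(a[0] * b[1] - a[1] * b[0])
    cross = np.array(cross)
    return bool(np.all(cross > 0) or np.all(cross < 0))

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
dent = np.array([[0., 0.], [1., 0.], [0.5, 0.2], [0.5, 1.]])  # reflex vertex
print(is_convex_embedding(square), is_convex_embedding(dent))  # True False
```

Both tests run in linear time in the number of boundary vertices, matching the complexity claimed above.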
For the numerical experiments that follow, we simply consider the smoothing operation $X \leftarrow S_\Gamma^{-1} X$, and make a planar embedding convex by replacing the embedding by its convex hull, placing vertices equally spaced along each line. For example, if $x_1$ and $x_5$ are vertices of the convex hull, but $x_2,x_3,x_4$ are not, then we set $x_2 = 3/4 x_1 + 1/4 x_5$, $x_3 = 1/2 x_1 + 1/2 x_5$, and $x_4 = 1/4 x_1 + 3/4 x_5$. Given the choices of smoothing and making an embedding convex that we have outlined, the version of Algorithm \ref{alg1} that we are testing has complexity near-linear in $n$. The main cost of this procedure lies in the computations that involve $S_\Gamma$. All variants of Algorithm \ref{alg1} require the repeated application of $S_\Gamma$ or $S_\Gamma^{-1}$ to a vector in order to compute the minimal eigenvectors of $S_\Gamma$ (possibly also to perform smoothing). The Schur complement $S_\Gamma$ is a dense matrix and requires the inversion of an $n \times n$ matrix, but can be represented as the composition of functions of sparse matrices. In practice, $S_\Gamma$ should never be formed explicitly. Rather, the operation of applying $S_\Gamma$ to a vector $x$ should occur in two steps. First, the sparse Laplacian system $(L_o + D_o) y = A_{o,\Gamma} x$ should be solved for $y$, and then the product $S_\Gamma x$ is given by $S_\Gamma x = (L_{\Gamma} + D_{\Gamma}) x - A_{o,\Gamma}^T y$. Each application of $S_\Gamma$ is therefore an $O(n \log n)$ procedure (using an $O(n \log n)$ Laplacian solver). The application of the inverse $S_\Gamma^{-1}$ defined on the subspace $\{x \, | \, \langle x, {\bf 1} \rangle = 0 \}$ also requires the solution of a Laplacian system.
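The two-step application of $S_\Gamma$ described above can be sketched on a toy graph as follows; the variable names are ours, and dense solves stand in for the fast Laplacian solver assumed in the text:

```python
import numpy as np

# Toy graph: vertices 0..4 interior, 5..7 boundary (our labeling).
rng = np.random.default_rng(0)
n, n_int = 8, 5
cyc = np.roll(np.eye(n), 1, axis=1)     # a Hamiltonian cycle keeps G connected
A = np.clip((rng.random((n, n)) < 0.6) + cyc + cyc.T, 0, 1)
A = np.triu(A, 1); A = A + A.T          # symmetric 0/1 adjacency
L = np.diag(A.sum(1)) - A               # graph Laplacian

Loo = L[:n_int, :n_int]                 # interior block  (L_o + D_o)
B = L[:n_int, n_int:]                   # coupling block  (-A_{o,Gamma})
Lgg = L[n_int:, n_int:]                 # boundary block  (L_Gamma + D_Gamma)

def apply_schur(x):
    y = np.linalg.solve(Loo, B @ x)     # one interior (Laplacian-type) solve
    return Lgg @ x - B.T @ y            # S_Gamma x, without forming S_Gamma

x = rng.standard_normal(n - n_int)
S_dense = Lgg - B.T @ np.linalg.solve(Loo, B)   # explicit Schur complement, for checking
assert np.allclose(apply_schur(x), S_dense @ x)
```

The sign conventions differ slightly from the text ($B = -A_{o,\Gamma}$ here, since we slice the full Laplacian), but the two formulations agree.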
As noted in \cite{wu2017fourier}, the action of $S_\Gamma^{-1}$ on a vector $x \in \{x \, | \, \langle x, {\bf 1} \rangle = 0 \} $ is given by $$ S_\Gamma^{-1} x = \begin{pmatrix} 0 & I \end{pmatrix} \begin{pmatrix} L_{o} + D_{o} & -A_{o,\Gamma} \\ -A_{o,\Gamma}^T & L_{\Gamma} + D_\Gamma \end{pmatrix}^{-1} \begin{pmatrix} 0 \\ x \end{pmatrix}, $$ as verified by the computation \begin{eqnarray*} S_\Gamma \left[ S_\Gamma^{-1} x \right] &=& S_\Gamma \begin{pmatrix} 0 & I \end{pmatrix} \left[ \begin{pmatrix} I & 0 \\ -A_{o,\Gamma}^T(L_{o} + D_{o})^{-1} & I \end{pmatrix} \begin{pmatrix} L_{o} + D_{o} & -A_{o,\Gamma} \\ 0 & S_\Gamma \end{pmatrix} \right]^{-1} \begin{pmatrix} 0 \\ x \end{pmatrix} \\ &=& S_\Gamma \begin{pmatrix} 0 & I \end{pmatrix} \begin{pmatrix} L_{o} + D_{o} & -A_{o,\Gamma} \\ 0 & S_\Gamma \end{pmatrix}^{-1} \begin{pmatrix} I & 0 \\ A_{o,\Gamma}^T(L_{o} + D_{o})^{-1} & I \end{pmatrix} \begin{pmatrix} 0 \\ x \end{pmatrix} \\ &=& S_\Gamma \begin{pmatrix} 0 & I \end{pmatrix} \begin{pmatrix} L_{o} + D_{o} & -A_{o,\Gamma} \\ 0 & S_\Gamma \end{pmatrix}^{-1} \begin{pmatrix} 0 \\ x \end{pmatrix} \,= \, x. \end{eqnarray*} Given that the application of $S_\Gamma^{-1}$ has the same complexity as an application of $S_\Gamma$, the inverse power method is naturally preferred over the shifted power method for smoothing.
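The identity above can be checked on a small example: solving one (singular, but consistent) full Laplacian system and reading off the boundary block applies $S_\Gamma^{-1}$ to a mean-zero vector. This sketch (ours) uses a dense least-squares solve in place of a Laplacian solver:

```python
import numpy as np

# Toy connected graph: vertices 0..5 interior, 6..8 boundary (our labeling).
rng = np.random.default_rng(1)
n, n_int = 9, 6
cyc = np.roll(np.eye(n), 1, axis=1)
A = np.clip((rng.random((n, n)) < 0.4) + cyc + cyc.T, 0, 1)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A

# Dense Schur complement, formed here only to verify the identity.
S = L[n_int:, n_int:] - L[n_int:, :n_int] @ np.linalg.solve(
    L[:n_int, :n_int], L[:n_int, n_int:])

x = rng.standard_normal(n - n_int)
x -= x.mean()                                # enforce <x, 1> = 0
rhs = np.concatenate([np.zeros(n_int), x])
u = np.linalg.lstsq(L, rhs, rcond=None)[0]   # singular but consistent system
Sinv_x = u[n_int:]                           # boundary block = S_Gamma^{-1} x
assert np.allclose(S @ Sinv_x, x, atol=1e-8)
```

Since the right-hand side is orthogonal to the all-ones nullvector of $L$, the system is consistent and any solution's boundary block inverts $S_\Gamma$ on the mean-zero subspace.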
\subsection{Numerical Results} \begin{table}[!t] \begin{center} \begin{tabular}{ | c c | c c c c c | c c c c c |} \hline & & \multicolumn{5}{|c|}{Unit Circle} & \multicolumn{5}{c|}{$3\times1$ Rectangle} \\ \multicolumn{2}{|c|}{$n = $} & 1250 & 2500 & 5000 & 10000 & 20000 & 1250 & 2500 & 5000 & 10000 & 20000 \\ \hline \% & $X_s$ & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 98 & 98 & 97 \\ planar & $X_l$ & 100 & 100 & 100 & 100 & 100 & 67 & 67 & 65 & 71 & 67 \\ \hline crossings & $X_s$ & n/a & n/a & n/a & n/a & n/a & n/a & n/a & 0.042 & 0.062 & 0.063 \\ per edge & $X_l$ & n/a & n/a & n/a & n/a & n/a & 0.143 & 0.119 & 0.129 & 0.132 & 0.129 \\ \hline \# not & $X_s$ & 0.403 & 0.478 & 0.533 & 0.592 & 0.645 & 0.589 & 0.636 & 0.689 & 0.743 & 0.784 \\ convex & $X_l$ & 0.001 & 0 & 0 & 0 & 0 & 0.397 & 0.418 & 0.428 & 0.443 & 0.448 \\ \hline & $X_l$ & 1.026 & 1.024 & 1.02 & 1.017 & 1.015 & 1.938 & 2.143 & 2.291 & 2.555 & 2.861 \\ energy & $X_{sc}$ & 1.004 & 1.004 & 1.004 & 1.004 & 1.003 & 1.127 & 1.164 & 1.208 & 1.285 & 1.356 \\ ratio & $X_{alg}$ & 1.004 & 1.004 & 1.004 & 1.004 & 1.003 & 1.124 & 1.158 & 1.204 & 1.278 & 1.339 \\ & $X_{lc}$ & 1.026 & 1.024 & 1.02 & 1.017 & 1.015 & 1.936 & 2.163 & 2.301 & 2.553 & 2.861 \\ & $ X_C $ & 1.023 & 1.023 & 1.02 & 1.017 & 1.016 & 1.374 & 1.458 & 1.529 & 1.676 & 1.772 \\ \hline \multicolumn{1}{c}{} \vspace{1 mm} \end{tabular} \end{center} \caption{Numerical results for experiments on Delaunay triangulations of $n$ points randomly generated in a disk or rectangle. One hundred experiments were performed for each convex body and choice of $n$. The row ``\% planar" gives the percent of the samples for which the boundary embedding was planar. The row ``crossings per edge" reports the average number of edge crossings per edge, where the average is taken over all non-planar embeddings. In some cases all one hundred experiments result in planar embeddings, in which case this entry does not contain a value. The row ``\# not convex" reports the average fraction of vertices which are not vertices of the resulting convex hull. This average is taken over all planar embeddings. The row ``energy ratio" reports the average ratio between the value of the objective function $h_\Gamma(\boldsymbol{\cdot})$ for the embedding under consideration and $h_\Gamma(X_s)$. This, again, is an average over all planar embeddings.} \label{tb:numerics} \end{table} We perform a number of simple experiments, which illustrate the benefits of using the Schur complement to produce an embedding. In particular, we consider the same two types of triangulations as in Figure \ref{fig1}, random triangulations of the unit disk and the $3$-by-$1$ rectangle. For each of these two convex bodies, we sample $n$ points uniformly at random and compute a Delaunay triangulation. For each triangulation, we compute the minimal two non-trivial eigenvectors of the graph Laplacian $L_G$, and the minimal two non-trivial eigenvectors of the Schur complement $S_\Gamma$ of the Laplacian $L_G$ with respect to the interior vertices $V \backslash \Gamma$. The properly normalized and shifted versions of the Laplacian and Schur complement embeddings are denoted by $X_l$ and $X_s$, respectively. We then check whether each of these embeddings of the boundary is planar. If the embedding is not planar, we note how many edge crossings the embedding has. If the embedding is planar, we also determine if it is convex, and compute the number of boundary vertices which are not vertices of the convex hull. If the embedding is planar, but not convex, then we simply replace it by the embedding corresponding to the convex hull of the original layout (as mentioned in Subsection 4.2). These convex-adjusted layouts of the Laplacian and Schur complement embeddings (shifted and properly scaled) are denoted by $X_{lc}$ and $X_{sc}$, respectively.
The embedding defined by the minimal two non-trivial eigenvectors of the boundary Laplacian $L_\Gamma$, denoted by $X_C$, is the typical circular embedding of a cycle (defined in Subsection 4.1). Of course, the value $h_\Gamma(X_s)$ is a lower bound for the minimum of (\ref{p1}), and this estimate is exact if $X_s$ is a planar and convex embedding. The embedding resulting from Algorithm \ref{alg1} is denoted by $X_{alg}$. For each triangulation, we compute the ratios of $h_\Gamma(X_l)$, $h_\Gamma(X_{sc})$, $h_\Gamma(X_{alg})$, $h_\Gamma(X_{lc})$, and $h_\Gamma(X_C)$ to $h_\Gamma(X_s)$, conditional on each of these layouts being planar. We perform this procedure one hundred times each for both convex bodies and a range of values of $n$. We report the results in Table \ref{tb:numerics}. \begin{figure}[t] \begin{center} \subfloat[Laplacian Embedding]{\includegraphics*[width=2.5in,height = 1.25 in ]{lapl}} \qquad \qquad \subfloat[Schur Complement Embedding]{\includegraphics*[width=2.5 in, height = 1.25 in]{schur}} \caption{An example of the Laplacian embedding $X_l$ vs the (unsmoothed) Schur complement embedding $X_s$ of the boundary of the Delaunay triangulation of 1250 points randomly generated in a $3\times1$ rectangle. The Laplacian embedding is non-planar, and far from convex. The Schur complement embedding is planar and almost a convex embedding.} \label{fig:rect} \end{center} \end{figure} These numerical results illustrate a number of phenomena. For instance, when considering the disk, both the Laplacian embedding $X_{l}$ and the Schur complement embedding $X_{s}$ are always planar, usually close to convex, and their convex versions ($X_{lc}$ and $X_{sc}$) both perform reasonably well compared to the lower bound $h_\Gamma(X_s)$ for Problem (\ref{p1}). The embedding $X_{alg}$ from Algorithm \ref{alg1} produced small improvements over the Schur complement embedding, but this improvement was negligible when the average ratio was rounded to the thousandths place.
As expected, the $L_\Gamma$-based embedding $X_C$ performs well in this instance, as the original embedding of the boundary in the triangulation is already a circle. Most likely, any graph which possesses a very high level of macroscopic symmetry shares similar characteristics. However, when we consider the rectangle, the convex version of the Schur complement embedding performs significantly better than the Laplacian-based embedding. In fact, for a large percentage of the simulations the Laplacian-based embedding $X_l$ was non-planar, with a relatively large average number of crossings per edge. We give a visual representation of the typical difference between the Laplacian and Schur complement embeddings of the boundary in Figure \ref{fig:rect}. In addition, in this instance, the smoothing procedure of Algorithm \ref{alg1} leads to small but noticeable improvements. Of course, the generic embedding $X_C$ performs poorly in this case, as it does not take into account any of the dynamics of the interior. The Schur complement embedding clearly outperforms the Laplacian embedding, especially for triangulations of the rectangle. From this, we conclude that the Laplacian embedding is not a reliable method for embedding the boundary, and note that, while spectral equivalence does not imply that the minimal two non-trivial eigenvectors produce a planar, near-convex embedding, our experiments illustrate that for well-behaved graphs with some level of structure this is a likely outcome. \section*{Acknowledgements} The work of L. Zikatanov was supported in part by NSF grants DMS-1720114 and DMS-1819157. The work of J. Urschel was supported in part by ONR Research Contract N00014-17-1-2177. An anonymous reviewer made a number of useful comments which improved the narrative of the paper. The authors are grateful to Louisa Thomas for greatly improving the style of presentation. \bibliographystyle{plain}
% Source: https://arxiv.org/abs/1612.00284
\title{The discrete Pompeiu problem on the plane}
\begin{abstract}
We say that a finite subset $E$ of the Euclidean plane $\mathbb{R}^2$ has the discrete Pompeiu property with respect to isometries (similarities) if, whenever $f:\mathbb{R}^2\to \mathbb{C}$ is such that the sum of the values of $f$ on any congruent (similar) copy of $E$ is zero, then $f$ is identically zero. We show that every parallelogram and every quadrangle with rational coordinates has the discrete Pompeiu property w.r.t. isometries. We also present a family of quadrangles depending on a continuous parameter having the same property. We investigate the weighted version of the discrete Pompeiu property as well, and show that every finite linear set with commensurable distances has the weighted discrete Pompeiu property w.r.t. isometries, and every finite set has the weighted discrete Pompeiu property w.r.t. similarities.
\end{abstract}
\section{Introduction}\label{s1} Let $K$ be a compact subset of the plane having positive Lebesgue measure. The set $K$ is said to have the Pompeiu property if the following condition is satisfied: whenever $f$ is a continuous function defined on the plane, and the integral of $f$ over every congruent copy of $K$ is zero, then $f\equiv 0$. It is known that the closed disc does not have the Pompeiu property, while all polygons have. (As for the history of the problem, see \cite{R} and \cite{Z}.) Replacing the Lebesgue measure by the counting measure, and the isometry group by an arbitrary family ${\cal G}$ of bijections mapping a set $X$ onto itself, we obtain the following notion. Let $K$ be a nonempty finite subset of $X$. We say that {\it $K$ has the discrete Pompeiu property with respect to the family ${\cal G}$ if the following condition is satisfied: whenever $f\colon X\to {\mathbb C}$ is such that $\sum_{x\in K} f(\phi (x))=0$ for every $\phi \in {\cal G}$, then $f\equiv 0$.} We also introduce the weighted version of the discrete Pompeiu property. We say that the $n$-tuple {\it $K = (x_1 ,\dots, x_n )$ has the weighted discrete Pompeiu property with respect to the family ${\cal G}$ if the following condition is satisfied: whenever $\alpha _1 ,\dots, \alpha _n$ are complex numbers with $\sum_{i=1}^n \alpha _i \ne 0$ and $f\colon X\to {\mathbb C}$ is such that $\sum_{j=1}^n \alpha _j f(\phi (x_j ))=0$ for every $\phi \in {\cal G}$, then $f\equiv 0$.} Apparently, the first results concerning the discrete Pompeiu property appeared in \cite{Ze}, where the author considers the Pompeiu problem for finite subsets of ${\mathbb Z} ^n$ w.r.t. translations. The interest in the topic revived shortly after the 70th William Lowell Putnam Mathematical Competition (2009), where the following problem was posed: Let $f$ be a real-valued function on the plane such that for every square $ABCD$ in the plane, $f(A)+f(B)+f(C)+f(D)=0.$ Does it follow that $f\equiv 0$?
This is nothing but asking whether the set of vertices of a square has the discrete Pompeiu property with respect to the similarities of the plane. This problem motivated the paper \cite{GD} by C. de Groote and M. Duerinckx. They prove that every finite and nonempty subset of $\mathbb{R}^2$ has the discrete Pompeiu property w.r.t. direct similarities. Another generalization of the Putnam problem appeared in \cite{KKS}, where it is proved that the set of vertices of a square has the discrete Pompeiu property with respect to the group of isometries. Recently, M. J. Puls \cite{Pu} considered the discrete Pompeiu problem in groups. In this paper we improve the results of \cite{GD} and \cite{KKS}. We show that every finite and nonempty subset of $\mathbb{R}^2$ has the weighted discrete Pompeiu property w.r.t. direct similarities (Theorem \ref{t1}). We also show that the set of vertices of every parallelogram has the discrete Pompeiu property with respect to the group of rigid motions (Theorem \ref{t3}). We show the same for quadrangles with rational coordinates (Theorem \ref{tfourrac}), and for a family of quadrangles depending on a continuous parameter (Theorem \ref{t4}). We also prove that in $\mathbb{R}^2$ all linear sets with commensurable distances have the discrete Pompeiu property w.r.t. rigid motions (Theorem \ref{t2}). These results motivate the following questions: is it true that every four element subset of the plane has the discrete Pompeiu property with respect to the group of isometries? Is it true that every nonempty and finite subset of the plane has the same property? We do not know the answer. We conclude this introduction with a remark concerning the family of translations in an Abelian group. As the following proposition shows, this family is `too small': finite sets, in general, cannot have the discrete Pompeiu property w.r.t. this group. \begin{proposition}\label{p1} Let $G$ be a torsion free Abelian group. 
If $E$ is a finite subset of $G$ containing at least 2 elements, then $E$ does not have the discrete Pompeiu property w.r.t. the family of all translations of $G$. \end{proposition} \noi {\bf Proof.} Note that if the torsion free rank of $G$ is less than continuum, then this is a special case of \cite[Theorem 3.1]{Pu}. In the general case let $H$ be the subgroup of $G$ generated by $E$. Then $H$ is a finitely generated torsion free Abelian group, and thus $H$ is isomorphic to $\mathbb{Z}^n$ for some finite $n$. By Zeilberger's theorem \cite{Ze}, $E$ does not have the discrete Pompeiu property in $H$ w.r.t. the family of translations; that is, there is a nonzero function $f\colon H\to {\mathbb C}$ such that the sum of the values of $f$ taken on any translate of $E$ is zero. It is clear that we can find such a function on every coset of $H$. The union of these functions has the same property on $G$, showing that $E$ does not have the discrete Pompeiu property in $G$ w.r.t. the family of translations on $G$. $\square$ \medskip In the proposition above we cannot omit the requirement that $G$ be torsion free. E.g., if $G$ is a finite group having $n\ge 3$ elements and $E$ is a subset of $G$ having $n-1$ elements, then $E$ has the discrete Pompeiu property w.r.t. translations. Indeed, if the sum of the values of $f$ is zero on each translate of $E$ then $f$ must be constant, and the constant must be zero. \section{Preliminaries: generalized polynomials and exponential functions on Abelian groups}\label{s2} Let $G$ be an Abelian group. If $f\colon G\to {\mathbb C}$ and $h\in G$, then $\Delta _h f$ denotes the function defined by $\Delta _h f (x)=f(x+h )-f(x)$ $(x\in G)$. The function $f\colon G\to {\mathbb C}$ is said to be a {\it generalized polynomial} if there is an $n$ such that $\Delta _{h_1} \ldots \Delta _{h_{n+1}} f \equiv 0$ for every ${h_1} ,\dots, {h_{n+1}} \in G$. The degree of $f$ is the smallest such $n$. 
Thus the generalized polynomials of degree zero are the nonzero constant functions. The degree of the identically zero function is $-1$ by definition. The function $g\colon G\to {\mathbb C}$ is an {\it exponential} if $g\ne 0$ and $g(x+y)=g(x)\cdot g(y)$ for every $x,y\in G$. By a {\it monomial} we mean a function of the form $p\cdot g$, where $p$ is a generalized polynomial, and $g$ is an exponential. Finite sums of monomials are called {\it polynomial-exponential functions}. Let $\mathbb{C}^G$ denote the linear space of all complex valued functions defined on $G$ equipped with the product topology. By a {\it variety on $G$} we mean a translation invariant, closed, linear subspace of $\mathbb{C}^G$. We say that spectral analysis holds in $G$ if every nonzero variety contains an exponential function. We shall need the fact that spectral analysis holds in every finitely generated and torsion free Abelian group. In fact, this is true in every Abelian group whose torsion free rank is less than continuum \cite{LS}. However, for finitely generated and torsion free Abelian groups this also follows from Lefranc's theorem. Lefranc proved in \cite{LF} that if $n$ is finite then {\it spectral synthesis} holds in ${\mathbb Z} ^n$; that is, every variety on ${\mathbb Z} ^n$ is spanned by polynomial-exponential functions. Therefore, if a variety $V$ on ${\mathbb Z} ^n$ contains nonzero functions, then it has to contain nonzero polynomial-exponential functions. It is easy to see that if a polynomial-exponential function $\sum_{i=1}^n p_i \cdot g_i$ is contained in a variety $V$, where $p_1 ,\dots, p_n$ are nonzero generalized polynomials and $g_1 ,\dots, g_n$ are distinct exponentials, then necessarily $g_i \in V$ holds for every $i=1,\dots, n$. Since every finitely generated and torsion free Abelian group is isomorphic to ${\mathbb Z} ^n$ for some finite $n$, it follows that spectral analysis (and, in fact, spectral synthesis) holds in such groups.
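Over $G={\mathbb Z}$ the generalized polynomials are just the ordinary polynomials, and the definition via the difference operators $\Delta_h$ is easy to test numerically. The following sketch is our illustration only (the cubic and the steps $h_i$ are arbitrary choices, not part of the argument):

```python
# Illustration of the difference-operator definition on G = Z: an
# ordinary polynomial of degree n is annihilated by n + 1 difference
# operators Delta_h, while n of them leave a nonzero constant.

def delta(h, f):
    """Return Delta_h f, where (Delta_h f)(x) = f(x + h) - f(x)."""
    return lambda x: f(x + h) - f(x)

f = lambda x: 2 * x**3 - x + 5                     # degree 3

# Four successive differences annihilate the cubic...
g4 = delta(1, delta(-2, delta(3, delta(5, f))))
assert all(g4(x) == 0 for x in range(-10, 10))

# ...but three differences with steps 1, 2, 3 leave the constant
# 3! * 2 * (1 * 2 * 3) = 72, so the degree is exactly 3.
g3 = delta(1, delta(2, delta(3, f)))
assert all(g3(x) == 72 for x in range(-10, 10))
```

In general, three differences applied to $c_3x^3+\cdots$ leave the constant $3!\,c_3h_1h_2h_3$, so the degree of a polynomial can be read off from its iterated differences.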
We shall need the following special case. \begin{lemma}\label{l2} Let $G$ be a finitely generated subgroup of the additive group of ${\mathbb C}$, let $\alpha _{j,k}, b_{j,k}$ $(j=1,\dots, n, \ k=1,\dots, m)$ be complex numbers, and let $Y$ be a subset of ${\mathbb C}$. Let $V$ denote the set of functions $f\colon G\to {\mathbb C}$ such that $$\sum_{j=1}^n \alpha _{j,k} f(x+b_{j,k} y)=0$$ for every $k=1,\dots, m$, $x\in G$ and $y\in Y$ satisfying $b_{j,k} y \in G$ for every $j=1,\dots, n$ and $k=1,\dots, m$. Then $V$ is a variety on $G$. Consequently, if $V$ contains a non-identically zero function, then $V$ contains an exponential function defined on $G$. \end{lemma} \noi {\bf Proof.} It is clear that $V$ is a translation invariant linear subspace of $\mathbb{C}^G$. Since $G$ is countable, the topology of $\mathbb{C}^G$ is the topology determined by pointwise convergence. Obviously, if $f_i \in V$ and $f_i \to f$ pointwise on $G$, then $f\in V$. Thus $V$ is closed. $\square$ \section{Similarities}\label{s3} It was shown by C. de Groote and M. Duerinckx in \cite{GD} that {\it every finite and nonempty subset of $\mathbb{R}^2$ has the discrete Pompeiu property w.r.t. direct similarities.} By a direct similarity we mean a transformation that is a composition of translations, rotations and homothetic transformations. The authors also discuss the possible generalizations when $\mathbb{R}^2$ is replaced by $K^p$ where $K$ is a field, and the transformation group is a subgroup of $AGL(p,K)$. We note that the argument given by C. de Groote and M. Duerinckx also proves the following generalization. \begin{proposition}\label{p2} Let ${\cal G}$ be a transitive and locally commutative transformation group acting on $X$ such that for every $x,y,z\in X$ with $y\ne x\ne z$ there exists a map $f\in {\cal G}$ such that $f(x)=x$ and $f(y)=z$. Then every finite and nonempty proper subset of $X$ has the discrete Pompeiu property w.r.t. ${\cal G}$. 
\end{proposition} We say that a transformation $g\colon {\mathbb R} \to {\mathbb R}$ is an order preserving similarity, if $g(x)=a+cx$ for every $x\in {\mathbb R}$, where $a\in {\mathbb R}$ and $c>0$. \begin{proposition}\label{p3} Every finite and nonempty subset of ${\mathbb R}$ has the discrete Pompeiu property w.r.t. the group of order preserving similarities. \end{proposition} \noi {\bf Proof.} Although Proposition \ref{p2} cannot be applied directly, a variant of the argument given by C. de Groote and M. Duerinckx in \cite{GD} works. Let $E=\{x_1 ,\dots, x_n \}$. Suppose that $f\colon {\mathbb R} \to {\mathbb C}$ is such that $\sum_{i=1}^n f(a+cx_i )=0$ for every $a\in {\mathbb R}$ and $c>0$. Replacing $E$ by a translated copy we may assume that $0=x_1 <x_2 < \ldots <x_n$. We put $A_i =\{ x_i +x_i x_j \colon j=2,\dots, n\}$ and $B_j =\{ x_i +x_i x_j \colon i=2,\dots, n\}$. Then $A_i \cup \{ x_i \}$ is the image of $E$ under an order preserving similarity for every $i\ge 2$, and thus $\sum_{j=2}^n f(x_i +x_i x_j )=-f(x_i )$ $(i=2,\dots, n)$. Similarly, $B_j \cup \{ 0 \}$ is the image of $E$ under an order preserving similarity for every $j\ge 2$, and thus $\sum_{i=2}^n f(x_i +x_i x_j )=-f(0 )$ $(j=2,\dots, n)$. Therefore, \begin{align*} f(0)= & -\sum_{i=2}^n f(x_i )= \sum_{i=2}^n \sum_{j=2}^n f(x_i +x_i x_j ) = \sum_{j=2}^n \sum_{i=2}^n f(x_i +x_i x_j ) = \\ &=\sum_{j=2}^n (-f(0))=-(n-1)f(0). \end{align*} Thus we have $f(0)=0$. For every $b\in {\mathbb R}$, the function $T_b f$ defined by $T_b f (x)=f(x+b)$ also satisfies the condition $\sum_{i=1}^n T_b f(a+cx_i )=0$ for every $a\in {\mathbb R}$ and $c>0$. Therefore, $T_b f(0)=f(b)=0$ for every $b\in {\mathbb R}$. $\square$ \medskip C. de Groote and M. Duerinckx ask in \cite{GD} whether the finite subsets of the plane have the weighted discrete Pompeiu property w.r.t. direct similarities. In the next theorem we show that the answer is affirmative.
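The double counting in the proof of Proposition \ref{p3} can be followed on a concrete example. In the sketch below (our illustration; the set $E=\{0,1,3\}$ and the five order preserving similarities are our choices) the resulting instances of $\sum_{i=1}^n f(a+cx_i)=0$ are encoded as a linear system, and the combination used in the proof is checked to collapse to $3f(0)=0$, i.e.\ $nf(0)=0$ with $n=3$:

```python
# Proposition p3 for the concrete set E = {0, 1, 3}: the five images of
# E under the order preserving similarities x -> a + c x listed below
# already force f(0) = 0.  (The choice of E is ours, for illustration.)
import numpy as np

points = [0, 1, 2, 3, 4, 6, 12]           # all points that occur below
col = {p: i for i, p in enumerate(points)}

# Each row: the image of E under one order preserving similarity.
images = [
    [0, 1, 3],    # identity           (a=0, c=1)
    [1, 2, 4],    # x -> 1 + x         (a=1, c=1)
    [3, 6, 12],   # x -> 3 + 3x        (a=3, c=3)
    [0, 2, 6],    # x -> 2x            (a=0, c=2)
    [0, 4, 12],   # x -> 4x            (a=0, c=4)
]
A = np.zeros((len(images), len(points)))
for r, img in enumerate(images):
    for p in img:
        A[r, col[p]] += 1

# The combination eq1 - eq2 - eq3 + eq4 + eq5 of the five constraints
# sum_{p in image} f(p) = 0 collapses to 3 f(0) = 0.
combo = np.array([1, -1, -1, 1, 1]) @ A
expected = np.zeros(len(points)); expected[col[0]] = 3
assert np.array_equal(combo, expected)
```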
\begin{theorem}\label{t1} Every $n$-tuple of distinct points of $\mathbb{R}^2$ has the weighted discrete Pompeiu property w.r.t. direct similarities. \end{theorem} \noi {\bf Proof.} We identify $\mathbb{R}^2$ with the complex plane ${\mathbb C}$. We put ${\mathbb C} ^* ={\mathbb C} \setminus \{ 0\}$. Let $(b_1 ,\dots, b_n )$ be an $n$-tuple of distinct complex numbers. Let $\alpha _1 ,\dots, \alpha _n$ be complex numbers such that $\sum_{i=1}^n \alpha _i \ne 0$, and let $f\colon {\mathbb C} \to {\mathbb C}$ be such that \begin{equation}\label{e2} \sum_{i=1}^n \alpha _i f(x+b_i y)=0 \end{equation} for every $x\in {\mathbb C}$ and $y\in {\mathbb C} ^*$. We have to prove that $f\equiv 0$. If \eqref{e2} holds for every $x,y\in {\mathbb C}$, then $f\equiv 0$; this is one of the statements of \cite[Theorem 2.4]{KV}. Therefore, it is enough to show that if \eqref{e2} holds for every $x\in {\mathbb C}$ and $y\in {\mathbb C} ^*$, then it holds for every $x,y\in {\mathbb C}$. In the following theorem we shall prove more. We say that a family $I$ of subsets of ${\mathbb C}$ is a proper and translation invariant ideal, if $A,B\in I$ implies $A\cup B\in I$, $A\in I$ and $B\subset A$ implies $B\in I$, ${\mathbb C} \notin I$, and if $A\in I$ then $A+c =\{ x+c \colon x\in A\} \in I$ for every $c\in {\mathbb C}$. It is clear that the family of finite subsets of ${\mathbb C}$ is a proper and translation invariant ideal. \begin{theorem}\label{t5} Let $I$ be a proper and translation invariant ideal of subsets of $\mathbb{C}$. Let $b_1 ,\dots, b_n$ be distinct complex numbers, and suppose that the functions $f_1 ,\dots, f_n \colon \mathbb{C} \to \mathbb{C}$ satisfy \begin{equation}\label{el1} \sum_{i=1}^n f_i (x+b_i y)=0 \end{equation} for every $x\in \mathbb{C}$ and $y\in {\mathbb C} \setminus A$, where $A\in I$. Then each $f_i$ is a generalized polynomial of degree $\le n-2$, and \eqref{el1} holds for every $x,y\in {\mathbb C}$.
\end{theorem} \noi {\bf Proof.} First we prove that each $f_i$ is a generalized polynomial of degree $\le n-2$. We prove this by induction on $n$. The case of $n=1$ is obvious. Now let $n\ge 2$, and suppose that the statement is true for $n-1$. Let $f_1 ,\dots, f_n$ satisfy \eqref{el1} for every $x$ and $y\notin A$, where $A\in I$. Since the role of the functions $f_i$ is symmetric, it is enough to prove that $f_1$ is a generalized polynomial of degree $\le n-2$. Note that $b_1 \ne b_n$ by assumption. Let $h\in \mathbb{C}$ be fixed. Then we have \begin{equation}\label{e3} \sum_{i=1}^{n} f_i (x+h+b_i y) =0 \end{equation} for every $x$ and $y\in {\mathbb C} \setminus A$, and \begin{equation}\label{e4} \sum_{i=1}^{n} f_i (u+b_i v ) =0 \end{equation} for every $u$ and $v\notin A$. Substituting $u=x-b_1 h/(b_n -b_1 )$ and $v=y+h/(b_n -b_1 )$ into \eqref{e4} and subtracting from \eqref{e3} we obtain $$\Delta _h f_1 (x+b_1 y) +\sum_{i=2}^{n-1} \left[ f_i (x+h+ b_i y) - f_i \left( x+ \frac{b_i -b_1}{b_n -b_1} h +b_i y \right) \right] =0$$ for every $y$ such that $y\notin A$ and $v=y+h/(b_n -b_1 ) \notin A$. (If $n=2$ then the sum on the left hand side is empty.) Putting $g_i (z)=f_i (z+h) - f_i \left( z+ \tfrac{b_i -b_1}{b_n -b_1}h \right)$ $(z\in \mathbb{C} )$, we obtain that $$\Delta _h f_1 (x+b_1 y) +\sum_{i=2}^{n-1} g_i (x+b_i y) =0$$ for every $x$ and for every $y\notin A\cup (A-h/(b_n -b_1 ))$. Since $A\cup (A-h/(b_n -b_1 ))\in I$, it follows from the induction hypothesis that $\Delta _h f_1$ is a generalized polynomial of degree $\le n-3$. As this is true for every $h$, we obtain that $f_1$ is a generalized polynomial of degree $\le n-2$. We still have to prove that \eqref{el1} holds for every $x,y\in {\mathbb C}$. Let $x\in {\mathbb C}$ be fixed, and put $G(y)=\sum_{i=1}^{n} f_i (x+b_i y)$ for every $y\in {\mathbb C}$. We have to prove that $G(y)=0$ for every $y\in {\mathbb C}$.
It is easy to see that if $f$ is a generalized polynomial, then so is the function $g(y)=f (x+b y)$. This can be proved by induction on the degree of $f$, using $\Delta _h g (y)=\Delta _{bh} f(x+by)$. Since each $f_i$ is a generalized polynomial, it follows that so is $g_i (y)=f_i (x+b_i y)$ for every $i$, and thus so is $G=g_1 +\ldots +g_n$. We know that $G(y)=0$ for every $y\notin A$. Therefore, in order to prove $G\equiv 0$, it is enough to show that if $f\colon {\mathbb C} \to {\mathbb C}$ is a generalized polynomial and $f(x)=0$ for every $x\in {\mathbb C} \setminus A$ where $A\in I$, then $f\equiv 0$. We prove this by induction on the degree of $f$. The statement is obvious if the degree is $\le 0$. Indeed, in this case $f$ is constant, and its value must be zero, since $I$ is a proper ideal. Suppose the degree of $f$ is $n>0$, and the statement is true for generalized polynomials of degree $<n$. For every $h$, we have $\Delta _h f (x)=0$ for every $x\in {\mathbb C} \setminus (A \cup (A-h))$. Since $A \cup (A-h) \in I$, it follows from the induction hypothesis that $\Delta _h f(x)=0$ for every $x$. This is true for every $h$, which shows that $f$ is constant. As we saw above, the constant must be zero. This completes the proof. \hfill $\square$ \section{Isometries and rigid motions: some general remarks}\label{s4} By a rigid motion we mean an isometry that preserves orientation. An isometry of $\mathbb{R}^2$ is a rigid motion if and only if it is a translation or a rotation. \begin{proposition}\label{p5} Every subset of the plane containing $1$, $2$ or $3$ points has the discrete Pompeiu property w.r.t. rigid motions. \end{proposition} \noi {\bf Proof.} The case of the singletons is obvious. Let $E=\{a,b\}$ and $r=|a-b| >0$. Suppose that $f\colon \mathbb{R}^2 \to {\mathbb C}$ is such that $f(\sigma (a))+f(\sigma (b))=0$ for every rigid motion $\sigma$. Then $f$ has the same value at every pair of points $a_1,a_2$ with distance $\le 2r$.
Indeed, there is a point $b$ such that $|b-a_i| =r$ $(i=1,2)$, and thus $f(a_1 )=-f(b)=f(a_2 )$. Now, any two points $a,b \in \mathbb{R}^2$ can be joined by a sequence of points $a=a_0 ,\dots, a_n =b$ such that $|a_i -a_{i-1}| \le 2r$, and thus $f(a)=f(b)$. Therefore, $f$ must be constant, and the value of the constant must be zero. Let $H=\{a,b,c\}$, where $a,b,c$ are distinct, and let $f\colon \mathbb{R}^2 \to {\mathbb C}$ be such that $f(\sigma (a))+f(\sigma (b)) +f(\sigma (c))=0$ for every rigid motion $\sigma$. By changing the notation of the points $a,b,c$ if necessary, we may assume that $c\ne (a+b)/2$. Let $c'=a+b-c$. Then $c'$ is the reflection of $c$ about the midpoint of the segment $[a,b]$, and thus $f(\sigma (b))+f(\sigma (a)) +f(\sigma (c'))=0$ for every rigid motion $\sigma$. Thus $f(\sigma (c'))=f(\sigma (c))$ for every rigid motion $\sigma$, which implies that $f(x)=f(y)$ whenever $|x-y|=|c' -c|$. The argument above shows that $f$ is constant, and, in fact, $f\equiv 0$. $\square$ \begin{remark} \label{r1} {\rm It is easy to see that if $n\le 2$, then every $n$-tuple has the weighted discrete Pompeiu property w.r.t. isometries. The same is true for those triplets $(a,b,c)$ whose points are not collinear. In this case we have to modify the proof above by choosing the point $c'$ to be the reflection of $c$ about the line going through $a$ and $b$ instead of the point $a+b-c$ in order to avoid changing the weights of $a$ and $b$.} \end{remark} \begin{proposition}\label{p6} Let $E$ be a finite set in the plane. If there exists an isometry $\sigma$ such that $|E\cap\sigma(E)|=|E|-1$, then $E$ has the discrete Pompeiu property w.r.t. isometries. \end{proposition} \noi {\bf Proof.} Let $E\setminus \sigma (E)=\{ a\}$ and $\sigma (E) \setminus E=\{ b \}$.
If $f\colon \mathbb{R}^2 \to {\mathbb C}$ is such that $\sum_{x\in \phi (E)} f(x)=0$ for every isometry $\phi$, then taking the difference of the equations $\sum_{x\in (\phi \sigma ) (E)} f(x)=0$ and $\sum_{x\in \phi (E)} f(x)=0$, we obtain $f(\phi (a))=f(\phi (b))$ for every isometry $\phi$. Thus $f(x)=f(y)$ whenever $|x-y|=|a-b|$. As we saw before, this implies that $f$ is identically zero. $\square$ \begin{remark} Concerning the discrete Pompeiu property in higher dimensions, we note that Proposition \ref{p6} holds without any essential modification in $\mathbb{R}^n$ for every $n\ge 2$. As for Proposition \ref{p5}, it is easy to see that every subset of $\mathbb{R}^n$ $(n\ge 2)$ consisting of affinely independent points has the discrete Pompeiu property w.r.t. isometries. Using an inductive argument, it is enough to consider the case of $n+1$ points in general position. Such a set satisfies the condition of Proposition \ref{p6}: let $\sigma$ be the reflection about a facet. \end{remark} \bigskip By Proposition \ref{p6}, if a set $E$ consists of consecutive vertices of a regular $n$-gon $R$, and $E\ne R$, then $E$ has the discrete Pompeiu property w.r.t. isometries. Also, if $E$ is a finite set of collinear points forming an arithmetic progression, then $E$ has the discrete Pompeiu property w.r.t. isometries. The following theorem is a generalization of this fact. \begin{theorem} \label{t2} Let $E$ be an $n$-tuple of collinear points in $\mathbb{R}^2$ with pairwise commensurable distances. Then $E$ has the weighted discrete Pompeiu property w.r.t. rigid motions of the plane.
\end{theorem} \begin{lemma} \label{l5} Let $x_1 ,\dots, x_n ,y_1 ,\dots, y_k \in \mathbb{R}^2$ and $\alpha _1 ,\dots, \alpha _n ,\beta _1 ,\dots, \beta _k \in {\mathbb C}$ be such that \begin{itemize} \item[{\rm (i)}] $y_1 ,\dots, y_k$ are collinear with commensurable distances, \item[{\rm (ii)}] $\sum_{i=1}^n \alpha _i \ne 0$, and \item[{\rm (iii)}] at least one of the numbers $\beta _1 ,\dots, \beta _k$ is nonzero. \end{itemize} If $f\colon \mathbb{R}^2 \to {\mathbb C}$ is such that \begin{equation}\label{e24} \sum_{i=1}^n \alpha _i f(\sigma (x_i ))=\sum_{j=1}^k \beta _j f(\sigma (y_j ))=0 \end{equation} for every rigid motion $\sigma$, then $f$ is identically zero. \end{lemma} \noi {\bf Proof.} We identify $\mathbb{R}^2$ with the complex plane ${\mathbb C}$. We put ${\mathbb C} ^* ={\mathbb C} \setminus \{ 0\}$ and $S^1 =\{ x\in {\mathbb C} \colon |x|=1\}$. Then every rigid motion is of the form $x\mapsto a+ux$ $(x\in {\mathbb C} )$, where $a\in {\mathbb C}$ and $u\in S^1$. Let $a,c\in {\mathbb C}$ and $c\ne 0$. If we replace $x_i$ by $a+cx_i$, $y_j$ by $a+cy_j$ for every $i$ and $j$, and replace $f$ by $f_1 (x)=f(x/c)$, then \eqref{e24} remains valid for every rigid motion $\sigma$. Indeed, for every $\sigma$, the map $x\mapsto \sigma (a+cx) /c$ is a rigid motion if and only if $\sigma$ is. Note that if $f_1$ is identically zero, then so is $f$. Therefore, replacing $x_i$ by $a+cx_i$, $y_j$ by $a+cy_j$ for every $i=1,\dots, n$ and $j=1,\dots, k$ with a suitable $a\in {\mathbb C}$ and $c\in {\mathbb C} ^*$, we may assume that $y_1 ,\dots, y_k$ are positive integers. By supplementing the system if necessary, we may assume that $y_j =j$ $(j=1,\dots, m)$. We put $\beta _j =0$ for every added $j$. Then we have \begin{equation}\label{e1} \sum_{i=1}^n \alpha _i f (x+ ux_i)= \sum_{j=1}^m \beta _j f(x+ju )=0 \end{equation} for every $x\in \mathbb{C}$ and $u\in S^1$. We show that this implies $f\equiv 0$.
Suppose that $f$ is not identically zero, and let $z_0 \in \mathbb{C}$ be such that $f(z_0 )\ne 0$. Let $K$ be an integer greater than $\max _{1\le i\le n} |x_i |$. It is clear that every $z\in {\mathbb C}$ with $|z|<K$ is the sum of $K$ elements of $S^1$. Let $U$ be a finite subset of $S^1$ such that $1\in U$, and $x_i /\nu$ is the sum of $K$ elements of $U$ for every $i=1,\dots, n$ and $\nu =1,\dots, N$, where $N=m^{K\cdot m^K}$. Let $G$ denote the additive subgroup of $\mathbb{C}$ generated by the elements $z_0$, $u\in U$ and $u x_i$ $(u\in U, \ i=1,\dots, n)$. Then $G$ is a finitely generated subgroup of $\mathbb{C}$. Let $V$ denote the set of functions defined on $G$ and satisfying \eqref{e1} for every $x\in G$ and $u\in U$. The set of functions $V$ contains the restriction of $f$ to $G$, which is not identically zero, as $z_0 \in G$. Therefore, by Lemma \ref{l2}, $V$ contains an exponential function $g \colon G\to \mathbb{C}$. If $u\in U$, then \eqref{e1} gives $\sum_{j=1}^m \beta _j g (u)^j =0$. Therefore, $g (u)$ is a root of the polynomial $p(x)=\sum_{j=1}^{m} \beta _j x^{j-1}$. Let $\Lambda$ denote the set of the nonzero roots of $p$. Then $\Lambda$ has at most $m-1$ elements, and $g (u)\in \Lambda$ for every $u\in U$. For every $i=1,\dots, n$ and $\nu =1,\dots, N$, $x_i /\nu$ is the sum of $K$ elements of $U$. Thus $g(x_i /\nu )$ is the product of $K$ elements of $g(U)\subset \Lambda$. Therefore, the set $F=\{ g(x_i /\nu ) \colon i=1,\dots, n, \ \nu =1,\dots, N\}$ has fewer than $m^K$ elements. Let $1\le i\le n$ be fixed, and put $g (x_i)=c$. We prove $c=1$. Since $g(x_i /\nu )\in F$ for every $\nu =1,\dots, m^K$, there are integers $1\le \nu <\mu \le m^K$ such that $g (x_i /\nu )=g (x_i /\mu )$. Then $$c^\mu =g (x_i /\nu )^{\nu \mu}=g (x_i /\mu )^{\nu \mu} =c^\nu ,$$ and thus $c^{\mu -\nu }=1$. Let $\mu -\nu =s$, then $s<m^K$ and $c^s =1$. If $s=1$, then $c=1$ is proved.
If $s>1$, then, since $g (x_i /s^t )\in F$ for every $t=1,\dots, m^K$, there are integers $1\le r<t\le m^K$ and there is an element $b\in F$ such that $g (x_i /s^r )=g (x_i /s^t )=b$. Then $$c=g(x_i )=b^{s^t} =b^{s^r \cdot s^{t -r}} =c^{s^{t-r}} =1,$$ since $c^s =1$. This proves $g (x_i)=1$ for every $i=1,\dots, n$. Then, applying \eqref{e1} with $x=0$ and $u=1$, we obtain $\sum_{i=1}^n \alpha _i =0$, which is impossible. This contradiction completes the proof. \hfill $\square$ \medskip \noindent {\bf Proof of Theorem \ref{t2}.} Let $E=(x_1 ,\dots, x_n )$, where $x_1 ,\dots, x_n$ are collinear with commensurable distances. Let $\alpha _1 ,\dots, \alpha _n$ be complex numbers with $\sum_{i=1}^n \alpha _i \ne 0$, and let $f\colon {\mathbb C} \to {\mathbb C}$ satisfy $\sum_{i=1}^n \alpha _i f(\sigma (x_i ))=0$ for every rigid motion $\sigma$. Applying Lemma \ref{l5} with $k=n$, $y_i =x_i$ and $\beta _i =\alpha _i$ $(i=1,\dots, n)$, we obtain that $f$ is identically zero. \hfill $\square$ \begin{remark} {\rm The isometry group of ${\mathbb R}$ consists of translations and reflections. Since no finite subset of ${\mathbb R}$ has the discrete Pompeiu property w.r.t. translations by Proposition \ref{p1}, and every reflected copy of the set $\{ 1,\dots, n\}$ is also a translated copy, it follows that} the set $\{ 1,\dots, n\}$ does not have the discrete Pompeiu property w.r.t. isometries of ${\mathbb R}$. {\rm (This is why we had to step out of ${\mathbb R}$ into the plane in the proof of Theorem \ref{t2}.) Note, however, that there are subsets of ${\mathbb Z}$ which have the discrete Pompeiu property w.r.t. isometries of ${\mathbb R}$. The set of integers $0=z_0 < z_1 < \ldots < z_k$ has this property if and only if the polynomials $p(x)=\sum_{i=0}^k x^{z_i}$ and $q(x)=\sum_{i=0}^k x^{z_k - z_i}$ have no common roots. (This follows immediately from Zeilberger's theorem \cite{Ze}.)
This condition is clearly satisfied if the set $\{z_0, z_1, \ldots, z_k\}$ is not symmetric about the point $(z_0+z_k )/2$ and, in addition, $p$ is irreducible in $\mathbb{Z}[x]$. Since each coefficient of $p$ is $0$ or $1$, it is easy to decide whether $p$ is irreducible or not. If there is an $n\ge 3$ such that $p(n)$ is prime, then $p$ is irreducible (see \cite{M}). By the Buniakowski-Schinzel conjecture, this condition is also necessary for the irreducibility of $p$. } \end{remark} \section{Quadrangles under isometries}\label{s5} \begin{theorem} \label{t3} The set of vertices of any parallelogram has the discrete Pompeiu property w.r.t. rigid motions of the plane. \end{theorem} \noi {\bf Proof.} We identify $\mathbb{R}^2$ with the complex plane ${\mathbb C}$. We put ${\mathbb C} ^* ={\mathbb C} \setminus \{ 0\}$ and $S^1 =\{ u\in {\mathbb C} \colon |u|=1\}$. Let $E$ be the set of vertices of a parallelogram. Without loss of generality we may assume that $0\in E$. Then $E=\{ 0,a,b,a+b\}$, where $0 \ne a,b\in \mathbb{C}$ and $a\ne b$. Clearly, it is enough to prove that if $f\colon {\mathbb C} \to {\mathbb C}$ is such that \begin{equation}\label{epo4} f(x)+f(x+ay)+f(x+by)+f(x+(a+b)y)=0 \end{equation} for every $x\in \mathbb{C}$ and $y\in S^1$, then $f\equiv 0$. Suppose that there exists a nonzero $f$ satisfying \eqref{epo4}, and let $z_0\in \mathbb{C}$ be such that $f(z_0)\ne 0$. Let $F$ be a finite subset of ${\mathbb C}$, and let $G$ denote the additive subgroup of ${\mathbb C}$ generated by $F\cup \{ z_0\}$. Let $V$ denote the set of all functions from $G$ to $\mathbb{C}$ satisfying \eqref{epo4} for every $x\in G$ and $y\in S^1_G =\{ y\in S^1 \colon ay,by\in G \}$. Since $f|_G \in V$ and $z_0 \in G$, it follows that $V\ne \{ 0\}$. By Lemma \ref{l2}, there exists an exponential function $g$ in $V$. Since $g$ satisfies \eqref{epo4} and $g(x+ay)=g(x)g(ay)$ and $g(x+(a+b)y) =g(x)g(ay) g(by)$, we obtain $g(x)(1+g(ay)+g(by)+g(ay)g(by))=0$ whenever $x\in G$ and $y\in S^1_G $.
Since $g(x)\ne 0$, we get $(1+g(ay))(1+g(by))=0$ for every $y\in S^1 _G$. That is, we have either $g(ay)=-1$ or $g(by)=-1$ for every $y\in S^1_G$. Let $P$ be an arbitrary parallelogram obtained from $E$ by a rigid motion and having vertices in $G$. Then the vertices of $P$ are $c=x$, $d=x+ay$, $e=x+(a+b)y$, $w=x+by$ with a suitable $x\in G$ and $y\in S^1 _G$. Then we have either $g(d)/g(c)=g(e)/g(w)=-1$ or $g(w)/g(c)=g(e)/g(d)=-1$. In other words, the values of $g$ at the points $c,d,e,w$ are either $g(c), -g(c),g(e), -g(e)$ or $g(c), g(d), -g(d), -g(c)$. Therefore, the vertex set of the parallelogram can be decomposed into two pairs with $g$-values of the form $(z,-z)$ in each pair. Let ${{\mathbb C}}^*= X_1 \cup X_2$ be a decomposition of ${{\mathbb C}}^*$ such that $X_1=-X_2$. Let $h(x)=1$ if $g(x)\in X_1$, and $h(x)=-1$ if $g(x)\in X_2$. Then $h\colon G \to \{1,-1\}$ has the following property: if $\sigma$ is a rigid motion and if $\sigma (E)\subset G$, then there are two elements of $\sigma (E)$ where the function $h$ takes the value $1$, and at the other two elements of $\sigma (E)$ the function $h$ takes the value $-1$. Since this is true on the group generated by any finite subset of $\mathbb{R}^2$, we may apply Rado's selection principle \cite{G}. We find that there exists a function $h\colon \mathbb{R}^2 \to \{ 1,-1\}$ such that whenever $\sigma$ is a rigid motion, then there are two elements of $\sigma (E)$ where the function $h$ takes the value $1$, and at the other two elements of $\sigma (E)$ the function $h$ takes the value $-1$. The existence of such a function, however, contradicts a known fact of Euclidean Ramsey theory. By a theorem of Shader \cite[Theorem 3]{Sh}, for every $2$-coloring of the plane, and for every parallelogram $E$, there is a congruent copy $P$ of $E$ such that at least three vertices of $P$ have the same color. It is clear from the proof that $P$ can be obtained from $E$ by a rigid motion. (See the Remark on p.
563 in \cite{EG}.) This contradicts the existence of the function $h$ with the properties described, proving that $f$ must be identically zero. $\square$ \bigskip Our next aim is to prove the following. \begin{theorem}\label{tfourrac} Every set $E\subset \mathbb{R}^2$ of four points having rational coordinates has the weighted discrete Pompeiu property w.r.t. the group of isometries of $\mathbb{R}^2$. \end{theorem} \noi {\bf Proof.} If the points of $E$ are collinear, then the statement is a consequence of Theorem \ref{t2}. If there are three collinear points of $E$, then the statement follows from Proposition \ref{p6}. Therefore, we may assume that the points of $E$ are in general position. Let $E=(x_1 ,\dots, x_4 )$. By changing the order of the indices we may assume that $x_1$ and $x_2$ are vertices of the convex hull of $E$. Let $\alpha _1 ,\dots, \alpha _4$ be complex numbers such that $\sum _{j=1}^4 \alpha _j \ne 0$, and let $f\colon {\mathbb C} \to {\mathbb C}$ be such that \begin{equation} \label{e18} \sum_{j=1}^4 \alpha _j f(\sigma (x_j ))=0 \end{equation} for every isometry $\sigma$. We have to show that $f$ is identically zero. If any of the numbers $\alpha _1 ,\dots, \alpha _4$ is zero, then $f\equiv 0$ follows from Remark \ref{r1}. Therefore, we may assume that $\alpha _4 \ne 0$. Let $\sigma _1$ be the reflection about the line $\ell _1$ going through the points $x_1 ,x_2$. Let $y_1 =\sigma _1 (x_4 )$ and $x_5 =\sigma _1 (x_3 )$; then $y_1$ and $x_5$ have rational coordinates. We have, for every $\sigma$, $(\sigma \circ \sigma _1 ) (x_i )=\sigma (x_i )$ for $i=1,2$, $(\sigma \circ \sigma _1 ) (x_3 )=\sigma (x_5 )$ and $(\sigma \circ \sigma _1 ) (x_4 )=\sigma (y_1 )$. Therefore $$ \alpha _1 f(\sigma (x_1))+\alpha _2 f(\sigma (x_2))+\alpha _3 f(\sigma (x_5 ))+\alpha _4 f(\sigma (y_1 )) =0 $$ for every isometry $\sigma$.
Subtracting \eqref{e18} we obtain \begin{equation} \label{e7} \alpha _3 f(\sigma (x_5 ))+\alpha _4 f(\sigma (y_1 )) -\alpha _3 f(\sigma (x_3 ))- \alpha _4 f(\sigma (x_4 )) =0 \end{equation} for every isometry $\sigma$. Suppose that the line going through the points $x_3$ and $x_4$ is perpendicular to $\ell _1$. Then the points $x_5 ,y_1 , x_3 ,x_4$ are collinear. They have rational coordinates, so the distances between them are commensurable. Now $f$ satisfies both \eqref{e18} and \eqref{e7} for every isometry $\sigma$, and thus, by Lemma \ref{l5}, $f\equiv 0$. Therefore, we may assume that the line going through the points $x_3$ and $x_4$ is not perpendicular to $\ell _1$. Let $\sigma _2$ be the reflection about the line $\ell _2$ going through the points $x_3 ,x_5$. Note that the lines $\ell _1$ and $\ell _2$ are perpendicular. We put $y_2 =\sigma _2 (y_1 )$, $y_3 =\sigma _2 (x_4 )$ and $y_4 =x_4$. Then $y_1 ,y_2 ,y_3 ,y_4$ are the vertices of a rectangle $R$ listed either clockwise or counter-clockwise. It is clear that $y_1 ,y_2 ,y_3 ,y_4$ have rational coordinates. We claim that \begin{equation}\label{e14} f(\sigma (y_1 )) -f(\sigma (y_2 )) +f(\sigma (y_3 )) -f(\sigma (y_4 )) =0 \end{equation} holds for every isometry $\sigma$. Indeed, $(\sigma \circ \sigma _2 ) (x_5 )= \sigma (x_5 )$, $(\sigma \circ \sigma _2 )(x_3 )= \sigma (x_3 )$, $(\sigma \circ \sigma _2 )(y_1 )= \sigma (y_2 )$ and $(\sigma \circ \sigma _2 ) (x_4 )= \sigma (y_3 )$ and thus, by \eqref{e7} we obtain $$\alpha _3 f(\sigma (x_5 ))+\alpha _4 f(\sigma (y_2 )) -\alpha _3 f(\sigma (x_3 ))-\alpha _4 f(\sigma (y_3 )) =0.$$ Subtracting \eqref{e7} and dividing by $-\alpha _4$ we obtain \eqref{e14} for every isometry $\sigma$. Since the coordinates of $y_1 ,\dots, y_4$ are rational, it follows that the side lengths of $R$ are commensurable. (The side lengths themselves can be irrational.) 
Thus, there exists a square $Q$ with vertices $z_1 ,\dots, z_4$ such that $Q$ can be decomposed into finitely many translated copies of $R$. If we add the equations \eqref{e14} for those translations $\sigma$ that map $R$ onto these translated copies, then we get \begin{equation}\label{e6} f(z_1 ) -f(z_2 ) +f(z_3 ) -f(z_4 ) =0, \end{equation} since all other terms cancel out. By rescaling the set $E$ and also the function $f$ if necessary, we may assume that the side length of $Q$ is $1$. Clearly, \eqref{e6} must hold whenever $z_1 ,\dots, z_4$ are the vertices of a square of unit side length. That is, we have \begin{equation*} f(x) -f(x+u ) -f(x+u\cdot i ) +f(x+u+u\cdot i ) =0 \end{equation*} for every $x\in {\mathbb C}$ and $u\in S^1$. Now we turn to the proof of $f\equiv 0$. Suppose this is not true, and fix a $z_0 \in {\mathbb C}$ such that $f(z_0 )\ne 0$. Let $a_1 ,\dots, a_N$ be vectors of length $12$ such that each of the numbers $x_1 ,\dots, x_4$ is the sum of some of the $a_j$'s. Let $u_j =a_j /12$ and $v_j =(3u_j +4 u_j \cdot i )/5$ for every $j=1,\dots, N$. Then $u_j, v_j$ are unit vectors for every $j$. Let $U$ denote the set of vectors $$1, \, u_j, \, u_j \cdot i, \, v_j, \, v_j \cdot i \qquad (j=1,\dots, N),$$ and let $G$ denote the additive group generated by the set $U\cup \{ x_j u\colon j=1,\dots, 4,\ u\in U\} \cup \{ z_0 \}$. Then $G$ is a finitely generated group. Let $V$ be the set of functions $g\colon G\to {\mathbb C}$ satisfying the following condition: $$\sum_{j=1}^4 \alpha _j g(x+x_j \cdot u)=0$$ and \begin{equation}\label{e8} g(x)-g(x+u)-g(x+u\cdot i)+g(x+u+u\cdot i )=0 \end{equation} for every $x\in G$ and $u\in U$. The set $V$ contains a non-identically zero function (namely the restriction of $f$ to $G$), so by Lemma \ref{l2}, $V$ contains an exponential function $g$. Then \eqref{e8} implies $(1-g(u))\cdot (1-g(u\cdot i))=0$, and thus we have either $g(u)=1$ or $g(u\cdot i)=1$ for every $u\in U$.
Now we show that $g(a_j )=1$ for every $j=1,\dots, N$. If $g(u_j )=1$, then this follows from $g(a_j )=g(12 u_j )=g(u_j )^{12}$. Therefore we may assume that $g(u_j \cdot i)=1$. Since $5v_j =3u_j +4u_j \cdot i$ and $5v_j \cdot i =-4u_j +3 u_j \cdot i$, we have $g(v_j )^5 =g(u_j )^3 \cdot g(u_j \cdot i)^4=g(u_j )^3$ and $g(v_j \cdot i )^5 =g(u_j )^{-4} \cdot g(u_j \cdot i)^3=g(u_j )^{-4}$. Now we have either $g(v_j )=1$ or $g(v_j \cdot i)=1$. Thus at least one of $g(u_j )^3 =1$ and $g(u_j )^{-4} =1$ must hold. Hence $g(u_j )^{12}=1$ in both cases, which gives $g(a_j )=g(12 u_j )=g(u_j )^{12} =1$. Since each $x_j$ is the sum of some of the numbers $a_1 ,\dots, a_N$, it follows that $g(x_j )$ is the product of some of the numbers $g(a_1 ) ,\dots, g(a_N )$. Thus $g(x_j )=1$ for every $j=1,\dots, 4$. However, by $\sum_{j=1}^4 \alpha _j g(x_j )=0$ this implies $\sum_{j=1}^4 \alpha _j =0$, which contradicts the assumption $\sum_{j=1}^4 \alpha _j \ne 0$. This contradiction proves that $f\equiv 0$. $\square$ \bigskip Finally, we present a family of quadrangles depending on a continuous parameter such that each member of the family has the discrete Pompeiu property w.r.t. the isometry group. Let a non-regular triangle $ABC \triangle$ be given in the plane. The steps of the construction are summarized as follows: \begin{figure}[h] \centering \includegraphics[viewport=0 0 404 234, scale=0.5]{prect.jpg} \caption{A Pompeiu quadrangle belonging to $\alpha=23^{\circ}$.} \end{figure} \begin{itemize} \item since $ABC \triangle$ is non-regular, we may suppose that the point $C$ is not on the perpendicular bisector of $AB$; in particular, we may assume that $C$ and $B$ lie on the same side of the perpendicular bisector of $AB$. \item let $0< \alpha < 45^{\circ}$ be a given angle and choose a point $P$ on the line $AB$ such that $A$ and $P$ are separated by the point $B$ and the angle enclosed by the lines $PB$ and $PC$ is of measure $\alpha$ (see Figure 1).
\item $E$ is the point of the perpendicular bisector of $AB$ such that the line $CE$ intersects the bisector at an angle of measure $\alpha$ (see Figure 1). \item $G$ is the point on the line $PC$ such that the triangle $EGC \triangle$ has a right angle at $G$. Then, necessarily, the perpendicular bisector of $AB$ is the bisector of the angle of $EGC \triangle$ at the vertex $E$. \item $D_{\alpha}$ is the reflection of $C$ about the point $G$. \end{itemize} \begin{theorem}\label{t4} The set $H_{\alpha}=\{A, B, C, D_{\alpha}\}$ has the Pompeiu property w.r.t. the isometry group. \end{theorem} \noi {\bf Proof.} Suppose that the angle $\alpha$ is given, and let $D:=D_{\alpha}$ for the sake of simplicity. For any point $P$ let $P'$ be the image of $P$ under the reflection about the perpendicular bisector of $AB$. Then $A'=B$, $B'=A$ and the points $C$, $C'$, $D$ and $D'$ form a symmetric trapezium such that $D'C=CC'=C'D$; see Figure 2. Using that $$f(A)+f(B)+f(C)+f(D)=0\ \ \textrm{and}\ \ f(A')+f(B')+f(C')+f(D')=0$$ it follows that the alternating sum of the values of $f$ at the vertices of the trapezium $CC'DD'$ vanishes, i.e. \begin{equation} \label{symtrap} f(C)-f(C')+f(D)-f(D')=0. \end{equation} Since equation (\ref{symtrap}) holds on any congruent copy of the trapezium $CC'DD'$ we have \begin{equation} \label{trick1} f(C)-f(C')+f(D)-f(D')=0\ \ \textrm{and}\ \ f(C')-f(D)+f(H')-f(C)=0 \end{equation} as Figure 2 shows: the trapezium $CC'DH'$ is obtained from $CC'DD'$ by the translation $C \mapsto C'$ followed by a rotation about the point $D$. \begin{figure} \centering \includegraphics[viewport=0 0 493 300, scale=0.5]{prect03.jpg} \caption{The proof of Theorem \ref{t4}.} \end{figure} Therefore \begin{equation} \label{key} f(D')=f(H') \end{equation} and equation (\ref{key}) holds on any congruent copy of the segment $D'H'$ of length $r$. This means that $f$ takes the same value at any pair of points at distance $r$.
Since any pair of points can be joined by a finite chain of points in which consecutive members are at distance $r$, it follows that $f$ is a constant function. In particular, summing the values of $f$ over any congruent copy of $H_{\alpha}$ gives $4c=0$ for the constant value $c$, so the constant must be zero. $\square$
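The key identities behind the exponential-function argument above, namely that $v=(3u+4u\cdot i)/5$ is a unit vector with $5v=3u+4u\cdot i$ and $5v\cdot i=-4u+3u\cdot i$, and that $z^3=1$ or $z^{-4}=1$ forces $z^{12}=1$, can be sanity-checked numerically. A sketch, not part of the proofs:

```python
import cmath
import random

random.seed(0)

# The 3-4-5 rotation trick: for any unit vector u, v = (3u + 4ui)/5 is
# again a unit vector, and 5v, 5vi have integer coordinates in (u, ui).
for _ in range(100):
    u = cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    v = (3 * u + 4 * u * 1j) / 5
    assert abs(abs(v) - 1) < 1e-12                     # v is a unit vector
    assert abs(5 * v - (3 * u + 4 * u * 1j)) < 1e-12
    assert abs(5 * v * 1j - (-4 * u + 3 * u * 1j)) < 1e-12

# Consequence for the exponential function g: if z**3 == 1 or z**(-4) == 1,
# then z**12 == 1 in either case.
for k in range(3):
    z = cmath.exp(2j * cmath.pi * k / 3)               # cube roots of unity
    assert abs(z ** 12 - 1) < 1e-9
for k in range(4):
    z = cmath.exp(2j * cmath.pi * k / 4)               # fourth roots of unity
    assert abs(z ** 12 - 1) < 1e-9
```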
https://arxiv.org/abs/2003.10112
Sparsest Piecewise-Linear Regression of One-Dimensional Data
We study the problem of one-dimensional regression of data points with total-variation (TV) regularization (in the sense of measures) on the second derivative, which is known to promote piecewise-linear solutions with few knots. While there are efficient algorithms for determining such adaptive splines, the difficulty with TV regularization is that the solution is generally non-unique, an aspect that is often ignored in practice. In this paper, we present a systematic analysis that results in a complete description of the solution set with a clear distinction between the cases where the solution is unique and those, much more frequent, where it is not. For the latter scenario, we identify the sparsest solutions, i.e., those with the minimum number of knots, and we derive a formula to compute the minimum number of knots based solely on the data points. To achieve this, we first consider the problem of exact interpolation which leads to an easier theoretical analysis. Next, we relax the exact interpolation requirement to a regression setting, and we consider a penalized optimization problem with a strictly convex data-fidelity cost function. We show that the underlying penalized problem can be reformulated as a constrained problem, and thus that all our previous results still apply. Based on our theoretical analysis, we propose a simple and fast two-step algorithm, agnostic to uniqueness, to reach a sparsest solution of this penalized problem.
\section*{Acknowledgments} The authors are grateful to Shayan Aziznejad for many discussions related to this work and for his elegant connection between the (\gBLASSO) problem and its discrete counterpart (see~\eqref{eq:optizlambda}). Julien Fageot was supported by the Swiss National Science Foundation (SNSF) under Grant P2ELP2\_181759. The work of Thomas Debarre, Quentin Denoyelle, and Michael Unser is supported by the SNSF under Grant 200020\_184646/1 and the European Research Council (ERC) under Grant 692726-GlobalBioIm. \section{Conclusion} In this paper, we fully described the solution set of the~\eqref{eq:noiselessclean} problem, which consists in interpolating data points by minimizing the TV norm of the second derivative. More precisely, we specified the cases in which it has a unique solution, the form of all the solutions, and the subset of sparsest solutions. We also proposed a simple and fast algorithm to reach (one of) the sparsest solution(s). We then extended these results to the~\eqref{eq:noisyclean} problem, by showing that it can be reformulated as a~\eqref{eq:noiselessclean} problem. Next, we introduced a two-step algorithm to solve the~\eqref{eq:noisyclean} problem, the first step of which consists in solving a discrete $\ell_1$-regularized problem, and the second in applying our algorithm to solve a~\eqref{eq:noiselessclean} problem. Finally, we applied our algorithm to some simulated data, and suggested plotting sparsity \vs data-fidelity error in order to judiciously select a suitable value of the regularization parameter. This paper paves the way for the study of supervised learning problems through the formulation of variational inverse problems with TV-based regularization, by completely describing the one-dimensional scenario.
An exciting, albeit much more challenging, future prospect would be to achieve similar results in higher dimensions, \ie to reconstruct functions $f:\R^d \to \R$ with $d>1$. This would be a major milestone toward a better understanding of ReLU networks and deep learning in general, whose outstanding practical performance is yet to be fully explained. \section{The Sparsest Solution(s) of \eqref{eq:noiselessclean}} \label{sec:sparsestsolutions} \subsection{Characterization of the Sparsest Solution(s)} \label{sec:sparsenoiseless} We have already identified the situations where \eqref{eq:noiselessclean} admits a unique solution, in which case it is the canonical solution introduced in Definition~\ref{def:canonicalsolutiondef}. When the solution is not unique, Theorem \ref{theo:RTweneed} ensures that the extreme-point solutions are piecewise-linear functions with at most $(M-2)$ knots, and Theorem \ref{theo:sol_set} gives a complete description of the solution set. In this section, we go further by providing a complete answer to the following questions: \begin{itemize} \item What is the minimal number of knots of a solution of \eqref{eq:noiselessclean}? \item What are the sparsest solutions, \ie the ones reaching this minimum number of knots? \end{itemize} These questions are addressed in Theorem~\ref{thm:sparsest_sol}. Let $\etacano$ be defined as in Proposition \ref{prop:canocertif} for fixed values of $\x, \obs \in \R^M$, and let \begin{align} \label{eq:saturation_indices} \Isat &\eqdef \{ m \in \{ 2, \ldots, M-1 \}: \ \etacano(x_m)=\pm 1 \text{ and } \etacano(x_{m})\neq \etacano(x_{m-1}) \} \nonumber \\ &= \{ s_1, \ldots, s_{N_s} \} \qwithq s_1 < \cdots < s_{N_s}. \end{align} In other words, $N_s = \# \Isat$ corresponds to the number of times $\etacano$ reaches $\pm 1$.
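Computationally, the saturation indices $\Isat$ and the lengths of the corresponding saturation runs are straightforward to extract from the certificate samples $\big(\etacano(x_m)\big)_{m=1}^M$. The sketch below (hypothetical integer-valued samples with $\etacano(x_1)=\etacano(x_M)=0$, exact comparisons for simplicity) also evaluates the sparsity formula $\sum_n \lceil(\alpha_n+1)/2\rceil$ and the uniqueness criterion of Theorem~\ref{thm:sparsest_sol} below:

```python
from math import ceil

def saturation_runs(eta):
    """Pairs (s_n, alpha_n): s_n is an index where eta newly reaches +1 or -1,
    and alpha_n counts how many further consecutive samples keep that value.
    Assumes eta[0] == eta[-1] == 0, as for the canonical certificate."""
    runs, m = [], 1
    while m < len(eta) - 1:
        if abs(eta[m]) == 1 and eta[m] != eta[m - 1]:
            alpha = 0
            while eta[m + alpha + 1] == eta[m]:
                alpha += 1
            runs.append((m, alpha))
            m += alpha + 1
        else:
            m += 1
    return runs

def min_sparsity(eta):
    """Minimum number of knots: sum over runs of ceil((alpha_n + 1) / 2)."""
    return sum(ceil((a + 1) / 2) for _, a in saturation_runs(eta))

def sparsest_is_unique(eta):
    """Uniqueness criterion: no alpha_n is a positive even number."""
    return all(a == 0 or a % 2 == 1 for _, a in saturation_runs(eta))

# Hypothetical certificates: a single run of saturations at -1,
# of length 4 (alpha = 3, unique) and of length 3 (alpha = 2, non-unique).
assert min_sparsity([0, -1, -1, -1, -1, 0]) == 2 and sparsest_is_unique([0, -1, -1, -1, -1, 0])
assert min_sparsity([0, -1, -1, -1, 0]) == 2 and not sparsest_is_unique([0, -1, -1, -1, 0])
```

The two sample certificates mirror the situations of the figures accompanying Theorem~\ref{thm:sparsest_sol}: an odd run length $\alpha_n=3$ yields a unique sparsest solution with two knots, while an even $\alpha_n=2$ yields uncountably many sparsest solutions, still with two knots.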
Next, let $\alpha_n\in\NN$ for $n\in\{1,\ldots,N_s\}$ be the number of consecutive saturations for every occurrence of $\etacano$ reaching $\pm 1$, \ie \begin{align} \label{eq:num_saturations} \alpha_n \eqdef \min \{ k \in \N: \ \etacano(x_{s_n+k+1}) \neq \etacano(x_{s_n}) \}. \end{align} In what follows, $\lceil x \rceil$ is the smallest integer larger than or equal to $x\in \R$. \begin{theorem}[Sparsest Solutions of~\eqref{eq:noiselessclean}] \label{thm:sparsest_sol} Let $\x\in \R^M$ be the ordered sampling locations and $\obs \in \R^M$ the data vector, with $M \geq 4$. Concerning the minimum sparsity of a solution of~\eqref{eq:noiselessclean}, the following hold. \begin{enumerate} \item The lowest possible sparsity (\emph{i.e.}, number of knots) of a piecewise-linear solution of~\eqref{eq:noiselessclean} is \begin{align} \sparsity{\x}{\obs} = \sum_{n=1}^{N_s} \left\lceil\frac{\alpha_n+1}{2} \right\rceil, \end{align} where the $\alpha_n$ are defined in~\eqref{eq:num_saturations}, and $N_s = \# \Isat$ where $\Isat$ is defined in~\eqref{eq:saturation_indices}. \item There is a unique sparsest solution of~\eqref{eq:noiselessclean} if and only if none of the $\alpha_n$ are nonzero even numbers. \item If one or more $\alpha_n>0$ are even, then there are uncountably many sparsest solutions to~\eqref{eq:noiselessclean}. The number of degrees of freedom $n_{\mathrm{free}} (\x, \obs)$ of the set of sparsest solutions is equal to the number of positive even $\alpha_n$ coefficients, that is, \begin{equation} n_{\mathrm{free}} (\x, \obs) = \sum_{n=1}^{N_s} \One_{\alpha_n \in 2\N_{\geq 1}}. \end{equation} More precisely, for each saturation region of $\etacano$, fixing a single knot within a certain admissible segment uniquely determines the other knots within the saturation region. \end{enumerate} \end{theorem} The proof of Theorem \ref{thm:sparsest_sol} is given in Appendix \ref{sec:sparsest_solutions_proof}. Illustrations of its items 2. and 3.
with a single saturation region (\ie $N_s=1$) are given in Figures \ref{fig:3_sat} and \ref{fig:2_sat} respectively. In Figure \ref{fig:3_sat}, the unique sparsest solution is shown. In Figure \ref{fig:2_sat}, any point $\widetilde{\mathrm{P}}_1$ in the segment that connects the points $\P0{2}$ and $\widetilde{\mathrm{P}}$ yields one of the sparsest solutions, with a uniquely determined second knot $\widetilde{\mathrm{P}}_2$. In the latter example, there is thus a single degree of freedom $n_{\mathrm{free}}(\x, \obs)$ in the set of sparsest solutions to~\eqref{eq:noiselessclean}. \begin{figure}[t] \centering \subfloat[Sparsest solution]{\includegraphics[width=0.5\linewidth]{3_sat.eps}} \subfloat[Canonical certificate]{\includegraphics[width=0.5\linewidth]{3_sat_cert.eps}} \caption{Example with $M=6$ and $\alpha = 3$ consecutive saturation intervals of $\etacano$ at -1. The unique sparsest solution has $P=2$ knots.} \label{fig:3_sat} \end{figure} \begin{figure}[t] \centering \subfloat[Example of a sparsest solution]{ \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0,0) {\includegraphics[width=0.5\linewidth]{2_sat.eps}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \draw[gray] (0.42,0.88) node {$\widetilde{\mathrm{P}}$}; \draw[matblue] (0.31,0.74) node {$\widetilde{\mathrm{P}}_1$}; \draw[matblue] (0.69,0.9) node {$\widetilde{\mathrm{P}}_2$}; \end{scope} \end{tikzpicture} } \subfloat[Canonical certificate]{\includegraphics[width=0.5\linewidth]{2_sat_cert.eps}} \caption{Example with $M=5$ and $\alpha = 2$ consecutive saturation intervals of $\etacano$ at -1. 
The sparsest solutions have $P=2$ knots.} \label{fig:2_sat} \end{figure} \subsection{Algorithm for Reaching a Sparsest Solution} \label{sec:algonoiseless} The results of Theorem \ref{thm:sparsest_sol} suggest a simple yet elegant algorithm for constructing a sparsest solution of~\eqref{eq:noiselessclean} for given sampling locations $\x = (x_1, \ldots , x_M)$ and data $\obs = (y_{0, 1}, \ldots , y_{0, M})$. The pseudocode is given in Algorithm \ref{alg:algo_constrained}, which applies the sparsifying procedure described in Lemma \ref{lem:construction_sparsest_sol} in every saturation interval. Since the latter is rather lengthy and technical, it is given in Appendix~\ref{sec:sparsest_solutions_proof} for ease of reading. The proof of Theorem \ref{thm:sparsest_sol} guarantees that the output $f^\ast$ of Algorithm \ref{alg:algo_constrained} is indeed a sparsest solution to~\eqref{eq:noiselessclean}, with sparsity $\sparsity{\x}{\obs}$ as defined in Theorem \ref{thm:sparsest_sol}. The following observations can be made concerning Algorithm \ref{alg:algo_constrained}. 
\begin{algorithm}[t] \KwIn{$\x, \obs$} compute $a_1, \ldots, a_M$ defined in \eqref{eq:a_coefs}; $[\etacano(x_1), \ldots, \etacano(x_M)] = [0, \sign(a_2), \ldots, \sign(a_{M-1}), 0]$;\\ compute $N_s$, $s_1$, $\ldots$, $s_{N_s}$ and $\alpha_1$, $\ldots$, $\alpha_{N_s}$ defined in \eqref{eq:saturation_indices} and \eqref{eq:num_saturations};\\ $\hat{\V \tau} = [\,]; \hat{\V a} = [\,]$; \\ \For{ $n \leftarrow 1$ \KwTo $N_s$}{ $P \leftarrow \lceil\frac{\alpha_n+1}{2} \rceil$ \tcp*{Minimum sparsity in $[x_{s_n}, x_{s_n+\alpha_n}]$} compute $\tilde{\tau}_1$, $\ldots$, $\tilde{\tau}_P$ and $\tilde{a}_1$, $\ldots$, $\tilde{a}_P$ using \eqref{eq:sparsest_sol_odd} or \eqref{eq:sparsest_sol_even};\\ $\hat{\V \tau} \leftarrow [\hat{\V \tau}, \tilde{\tau}_1, \ldots, \tilde{\tau}_P]$; \\ $\hat{\V a} \leftarrow [\hat{\V a}, \tilde{a}_1, \ldots, \tilde{a}_P]$; } \Return $\fopt \leftarrow \sum_{k=1}^{K} \hat{a}_k (\cdot - \hat{\tau}_k)_+$ \vspace{2mm} \caption{Pseudocode of our algorithm to find a sparsest solution of~\eqref{eq:noiselessclean}.} \label{alg:algo_constrained} \end{algorithm} \begin{itemize} \item In the cases where the sparsest solution is not unique, the choice of solution specified by \eqref{eq:sparsest_sol_even} (which is \emph{not} the one shown in Figure~\ref{fig:2_sat}) is guided by simplicity. However, it is an arbitrary choice that can be adapted depending on the application. \item Notice that the $x_m$ such that $\etacano(x_m) = 0$ need not be included in the vector of knots $\hat{\V \tau}$ built in the algorithm, since we have $a_m = 0$. Therefore, there is in fact no knot at $x_m$ in the canonical solution, which implies that the sparsity of $\fcano$ is strictly less than $M-2$. This corresponds to alignment cases of the data points, \ie the points $\P0{m-1}$, $\P0{m}$, and $\P0{m+1}$ are aligned, as illustrated in Figure~\ref{fig:cano_example}.
\item Algorithm \ref{alg:algo_constrained} is extremely fast: it takes linear time $\Spc{O}(M)$ with respect to the number of data points. This is a remarkable feature, since it is used to solve a continuous-domain optimization problem exactly. In fact, it is in the same complexity class as the simple computation of $\fcano$. \item Algorithm \ref{alg:algo_constrained} can be translated into an online algorithm, \ie an updated solution can be computed efficiently if a new input data point is added. More precisely, when a new data point $\P0{M+1}$ is added, the reconstructed signal is at worst only modified in the saturation interval $I = [x_{s_{n}-1}, x_{s_n + \alpha_n}]$ if $x_{M+1} \in I$. Since in practice we usually have $\alpha_n \ll M$, the computational complexity of updating the solution is typically much smaller than rerunning the complete offline algorithm. \end{itemize} \section{Introduction} \label{sec:intro} Regression problems consist in learning a function $f$ that best fits some data $(x_m, y_m)_{m=1}^M$, where $M$ is the number of data points, in the sense that $f(x_m) \approx y_m$. This is typically achieved by parametrizing $f$ with a vector of parameters $\V{\theta}$, and minimizing some objective function with respect to $\V{\theta}$. The oldest and most basic form of regression is linear regression: $f$ is parametrized as a linear (or affine) function. Although this model has the advantage of being very simple, it is very limited due to the fact that many data distributions are poorly approximated by linear functions, as illustrated by the dotted line example in Figure~\ref{fig:regression_vs_NN}. The choice of parametrization $\V\theta$ is therefore crucial, as it must strike an appropriate balance between two conflicting desirable properties. Firstly, in order to be suitable for a variety of problems, the parametric model should be flexible enough to represent a large class of functions.
In the field of machine learning, where regression is known as \emph{supervised learning}, this quest for universality is for instance highlighted by several universal approximation theorems for artificial neural networks \cite{cybenko1989approximation, hornik1991approximation,leshno1993multilayer}. Next, the model should be simple enough so that it generalizes well to input vectors $\V{x}$ that are outside of the training set. Indeed, a known pitfall of machine learning algorithms is overfitting, which happens when the model is unduly complex and fits too closely to the training data~\cite[Chapter 3]{mitchell1997machine}. This leads to poor generalization abilities for out-of-sample data. This pitfall is often dealt with by adding some regularization to the objective function, which tends to simplify the model. The overarching guiding principle to avoid overfitting is Occam's razor: the simplest model that explains the data well will generalize better and should thus be selected. \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{1D_example.eps} \caption{Examples of reconstructions} \label{fig:regression_vs_NN} \end{figure} \subsection{Problem Formulation} In this paper, we study the regression (or supervised learning) problem in one dimension, \ie $f:\R \to \R$ and $x_m, y_m \in \R$. However, instead of parametrizing the reconstructed function, we formulate the learning problem as a regularized inverse problem in a continuous-domain framework. Inspired by their connection (that we discuss later on) to popular ReLU (rectified linear unit) neural networks, we focus on reconstructing piecewise-linear splines. Our metric for model simplicity is sparsity, \ie the number of spline knots. For regularization purposes, we therefore use the total variation (TV) norm for measures $\Vert \cdot \Vert_\Spc{M}$, which is defined over the space of bounded Radon measures $\Spc{M}(\R)$. 
This norm is known to promote sparse solutions in the desired sense, as will be clarified in~\eqref{eq:formsolution}. We formulate the following optimization problem, which we refer to as the \emph{generalized Beurling LASSO}~\eqref{eq:noisyproblem}: \begin{align}\label{eq:noisyproblem} \argmin_f \sum_{m=1}^M E(f(x_m), y_m) + \lambda \Vert\D^2 f \Vert_{\Spc{M}}, \tag{\gBLASSO} \end{align} where $E$ is a cost function that penalizes the discrepancy between $f(x_m)$ and the data $y_m \in \R$ (\eg a quadratic loss $E(z, y) = \frac{1}{2} (z - y)^2$). We assume that the sampling locations are ordered, \ie $x_1 < \cdots < x_M$. The parameter $\lambda > 0$ balances the contribution of the data fidelity and the regularization, and $\D^2$ is the second-derivative operator. The terminology generalized Beurling LASSO comes from the Beurling LASSO (BLASSO), which is used in the Dirac recovery literature \cite{de2012exact}. Indeed, the~\eqref{eq:noisyproblem} problem is a generalization of the BLASSO due to the presence of a regularization operator $\D^2$, which is not present in the latter problem. It is known~\cite{Unser2017splines,gupta2018continuous,boyer2019representer} that the extreme-point solutions to problem~\eqref{eq:noisyproblem} are piecewise-linear splines of the form \begin{align} \label{eq:formsolution} \fopt(x) = b_0 + b_1 x + \sum_{k=1}^K a_k (x-\tau_k)_+, \end{align} where $x_+ = \max(0, x)$ is the ReLU, $b_0, b_1, a_k, \tau_k \in \R$, and the number of spline knots $K$ is bounded by $K \leq M-2$. This representer theorem has two important components: \begin{itemize} \item problem \eqref{eq:noisyproblem} has solutions of the prescribed form, \ie piecewise-linear splines. This stems from the choice of the regularization, \ie the TV norm of the second derivative; \item the sparsity is bounded in terms of the number of training data points by $K \leq M-2$.
\end{itemize} In terms of model simplicity, the bound $K \leq M-2$ is typically uninformative in machine learning problems: in Figure \ref{fig:regression_vs_NN}, it yields $K \leq M-2 = 198$, which is clearly much higher than the desired sparsity. However, this bound does not take the effect of the regularization parameter $\lambda$ into account. Indeed, $\lambda \to 0$ will roughly lead to a learned function $f$ that interpolates all the data points, with typically close to $K = (M-2)$ knots. At the other extreme, the limit $\lambda \to +\infty$ leads to linear regression and thus sparsity $K=0$ due to the fact that linear functions are not penalized by the regularization. Therefore, the interesting case is the intermediate regime (as illustrated by the solid curve in Figure~\ref{fig:regression_vs_NN}), in which the overall trend is that the sparsity $K$ decreases as $\lambda$ increases. Hence, $\lambda$ controls the universality \vs simplicity trade-off. \subsection{Summary of Contributions and Outline} The above purely qualitative observation is far from telling the whole story. In particular, it does not prescribe how $\lambda$ should be chosen in practice. We attempt to overcome this impediment by giving a full description of the solution set of~\eqref{eq:noisyproblem}. The basis of our analysis is the classical observation (see for instance \cite[Theorem 5]{gupta2018continuous}) that when $E$ is strictly convex, there exists a unique vector $\V{y}_\lambda = (y_{\lambda, 1}, \ldots, y_{\lambda, M}) \in \R^M$ such that problem \eqref{eq:noisyproblem} is equivalent to the constrained problem \begin{align} \label{eq:noiselessproblem} \argmin_{\substack{f: f(x_m) = y_{\lambda,m}, \\ m\in\{1,\ldots,M\}}} \Vert\D^2 f \Vert_{\Spc{M}} \tag{\gCBP}, \end{align} which we refer to as the \emph{generalized continuous basis pursuit}~\eqref{eq:noiselessproblem}, following the terminology of \cite{ekanadham2011recovery, duval2017sparseII}. 
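Functions of the form~\eqref{eq:formsolution} are easy to handle numerically: since $\D^2 \fopt = \sum_{k=1}^K a_k\, \delta_{\tau_k}$, the regularization cost is simply $\Vert\D^2 \fopt\Vert_{\Spc{M}} = \sum_{k=1}^K |a_k|$, the $\ell_1$ norm of the knot weights. A minimal sketch with hypothetical coefficients:

```python
def relu_spline(x, b0, b1, a, tau):
    """Evaluate f(x) = b0 + b1*x + sum_k a_k * (x - tau_k)_+ at a point x."""
    return b0 + b1 * x + sum(ak * max(x - tk, 0.0) for ak, tk in zip(a, tau))

# Hypothetical spline with K = 2 knots at tau = 0 and tau = 1; the slopes
# of the three linear pieces are b1 = 1, b1 + a_1 = 3, and b1 + a_1 + a_2 = 0.
b0, b1, a, tau = 0.5, 1.0, [2.0, -3.0], [0.0, 1.0]
values = [relu_spline(x, b0, b1, a, tau) for x in (-1.0, 0.0, 0.5, 1.0, 2.0)]
# values == [-0.5, 0.5, 2.0, 3.5, 3.5]

# Since D^2 f = sum_k a_k delta_{tau_k}, the regularization cost is explicit:
tv_cost = sum(abs(ak) for ak in a)  # ||D^2 f||_M = |2| + |-3| = 5
```

Note how the TV cost depends only on the slope changes $a_k$, not on the linear part $b_0+b_1 x$, which is exactly why linear trends are never penalized by the regularization.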
We therefore carry out our theoretical analysis on the more straightforward~\eqref{eq:noiselessproblem} problem, and we show that these results apply to~\eqref{eq:noisyproblem} as well, provided that $\V{y}_\lambda$ is known. For this analysis, we use mathematical tools based on duality theory, and we exploit the very specific form of the so-called dual certificate for our regularization operator $\Op{D}^2$. We describe in a systematic way the form of the solution set and identify the set of sparsest solutions, which is, to the best of our knowledge, a first in the TV-based inverse-problem literature: existing works are limited to identifying the form of certain solutions. For instance, it is known that the function that simply connects the points $(x_1, y_{0, 1}), \ldots, (x_M, y_{0, M})$ is always a solution to~\eqref{eq:noiselessproblem} (see~\cite[Theorem 1]{koenker1994quantile} and~\cite[Proposition 7]{mammen1997locally}). We refer to it as the \emph{canonical solution}. Building on this result, our contributions on the theoretical and algorithmic sides concerning the~\eqref{eq:noisyproblem} problem are summarized below. \begin{enumerate} \item { \textbf{Theory} Our main theoretical contributions are the following. \begin{itemize} \item In Section~\ref{sec:noiseless}, we fully describe the solution set of~\eqref{eq:noiselessproblem} by specifying the intervals in which all solutions follow the canonical solution, and those in which they do not (Theorem~\ref{theo:sol_set}). This allows us to characterize the cases where~\eqref{eq:noiselessproblem} admits a unique solution. When multiple solutions exist, we give a geometrical description of the set in which the graph of all solutions lies in Theorem~\ref{thm:limitdomain}. \item When there are multiple solutions, the canonical solution can be made sparser in certain regions, which is the topic of Section~\ref{sec:sparsestsolutions}.
More precisely, in Theorem~\ref{thm:sparsest_sol}, we express the minimum achievable sparsity of a solution to~\eqref{eq:noiselessproblem} as a simple function of $\x \eqdef (x_1, \ldots, x_M)$ and $\V{y}_0$, which we denote by $\sparsity{\x}{\V{y}_0}$. Concerning the solution set, we fully describe the set of sparsest solutions of~\eqref{eq:noiselessproblem}. In particular, we characterize the cases of uniqueness, and provide a description of the sparsest solutions together with the number of degrees of freedom $n_{\mathrm{free}} (\x, \obs)$, which we characterize and show to be finite. \item In Section~\ref{sec:solutionnoisy}, we extend the results of the first two items to the~\eqref{eq:noisyproblem} problem. This is a consequence of the aforementioned equivalence between the~\eqref{eq:noisyproblem} and the~\eqref{eq:noiselessproblem} problems, given in Proposition~\ref{prop:penalized_to_constrained}. We also specify, in Proposition~\ref{prop:linear_regression}, the limit value $\lambda_{\text{max}}$ such that any $\lambda \geq \lambda_{\text{max}}$ amounts to linear regression. \end{itemize} } \item {\textbf{Algorithm} These theoretical findings warrant our simple and fast algorithm, presented in Section~\ref{sec:algonoisy}, for reaching (one of) the sparsest solution(s) to~\eqref{eq:noisyproblem}. The algorithm is divided into two parts: first, we compute the $\V{y}_\lambda$ vector for the~\eqref{eq:noiselessproblem} problem by solving a standard discrete $\ell_1$-regularized problem. Next, we find a sparsest solution to~\eqref{eq:noisyproblem} (with sparsity $\sparsity{\x}{\V{y}_\lambda}$) by optimally sparsifying its canonical solution in some prescribed regions that are determined by our theoretical results. This sparsification step is detailed in Algorithm \ref{alg:algo_constrained} and has complexity $\mathcal{O}(M)$.
This complete algorithm provides a simple and fast way for the user to judiciously choose $\lambda$ by evaluating the data fidelity loss $\sum_{m=1}^M E(f(x_m), y_m)$ \vs the optimal sparsity $\sparsity{\x}{\V{y}_0}$ --- which depends on $\lambda$ --- as a proxy for the universality \vs simplicity trade-off. We illustrate this in our experiments in Section~\ref{sec:experiments}. The value of $\lambda$ may vary between $\lambda \to 0$ (which at the limit amounts to the~\eqref{eq:noiselessproblem} problem) and the upper bound $\lambda = \lambda_{\text{max}}$ mentioned above. Note that existing algorithms that solve the~\eqref{eq:noisyproblem} problem, such as the one introduced in \cite{mammen1997locally}, are considerably more complex and computationally expensive. Moreover, to the best of our knowledge, no existing algorithm has the guarantee of reaching a sparsest solution of~\eqref{eq:noiselessproblem} or~\eqref{eq:noisyproblem}. } \end{enumerate} \subsection{Related Works} \paragraph{Discrete $\ell_1$ Optimization.} Putting aside for now the regularization operator $\Op{D}^2$, the optimization problems~\eqref{eq:noiselessproblem} and~\eqref{eq:noisyproblem} are the continuous-domain counterparts of the basis pursuit~\cite{chen2001atomic} and the LASSO~\cite{tibshirani1996regression}, which were introduced in the late 90's. These problems are the precursors of the type of $\ell_1$-recovery techniques used in compressed sensing~\cite{Donoho2006,Candes2006sparse,eldar2012compressed,Foucart2013mathematical}. These approaches provide solutions with only a few nonzero coefficients. They are the cornerstone of sparse statistical learning~\cite{hastie2015statistical} and sparse signal processing~\cite{rish2014sparse}. Theoretical recovery guarantees have been proved, see for example~\cite{Donoho1992Superresolution}; however, it is worth noting that, in their initial formulations, these methods are inherently discrete and therefore adapted to recover finite-dimensional physical quantities.
\paragraph{Reconstruction in Infinite-Dimensional Spaces.} In our context, we aim at learning a continuous-domain function $f : \mathbb{R} \rightarrow \R$ from finite-dimensional data (the values $y_m =f(x_m)$ for $m\in\{1,\ldots,M\}$). It is therefore natural to formulate the optimization task in infinite dimensions to perform the reconstruction. The problem is then inherently ill-posed: not only is the system underdetermined, as is also the case in compressed sensing, but we have infinitely many degrees of freedom with finitely many constraints for the reconstruction. Kernel methods based on quadratic regularization are an elegant way of removing this ill-posedness~\cite{Scholkopf2001generalized}, with the effect of restricting the approximation to a finite-dimensional subset of a Hilbert space~\cite{Wahba1990spline,Berlinet2011reproducing,badoual2018periodic}. The challenge is then to choose this Hilbert space adequately. These approaches are fruitful, but they still ultimately revert to the finite-dimensional setting. Taking inspiration from $\ell_1$-based methods for sparse vectors, new approaches have been proposed that go beyond the Hilbert-space setting, such as~\cite{Adcock2015generalized,adcock2017breaking,bhandari2018sampling,Bodmann2018compressed}. \paragraph{Reconstruction in Measure Spaces.} A fertile continuous-domain problem to which discrete $\ell_1$ methods were recently adapted is sparse spikes deconvolution~\cite{de2012exact,bhaskar2013atomic,candes2014towards,Bredies2013inverse,Duval2015exact}. The aim is to recover sums of Dirac masses (point-source signals) over a continuous domain by extending the $\ell_1$ regularization to a gridless setup thanks to the total variation norm $\mnorm{\cdot}$, which is defined over the space of Radon measures $\Spc{M}(\R)$.
The underlying optimization problems, the continuous basis pursuit (CBP)~\cite{ekanadham2011recovery,duval2017sparseII} and the BLASSO~\cite{de2012exact}, are thus solved over a nonreflexive Banach space. The role of the total variation norm in variational methods has a rich history~\cite{Zuhovickii1962,krein1977markov} (see \cite[Section 1]{boyer2019representer} for additional references). From a theoretical standpoint, many reconstruction guarantees are proved for the CBP and the BLASSO such as exact recovery of discrete measures (sums of Dirac masses) in the noiseless case~\cite{candes2014towards,Fernandez-Granda2016Super}, robustness to noise~\cite{Bredies2013inverse,Candes2013Super,Azais2015Spike,Bhaskar2014minimax}, support recovery~\cite{duval2017sparseII,Duval2015exact,duval2017sparseI,Poon2019Support} and super-resolution for positive discrete measures~\cite{de2012exact,Denoyelle2017support,Poon2019Multidimensional,Schiebinger2018Superresolution,Duval2019characterization,garcia2020approximate,chi2020harnessing}. From a numerical standpoint, there exist several different strategies to solve the CBP or the BLASSO. A first one is based on spatial discretization which leads back to the LASSO and algorithms such as FISTA~\cite{Beck2009fast}. There are also greedy algorithms such as continuous-domain Orthogonal Matching Pursuit (OMP)~\cite{Elvira2019Omp}. In special setups (typically Fourier measurements), it is possible to reformulate the optimization problems as semidefinite programs~\cite{candes2014towards,de2016exact,Catala2017low}. Finally, recent developments based on the Frank-Wolfe (FW) algorithm~\cite{Frank1956algorithm} solve the BLASSO directly over the space of Radon measures~\cite{Bredies2013inverse}. 
These FW-based methods improve on the traditional FW algorithm due to the possibility of moving the spikes in the continuous domain to further decrease the objective function~\cite{boyd2017alternating,denoyelle2019sliding,courbot2019sparse,flinth2019linear}. \paragraph{From Dirac Masses Recovery to Spline Reconstruction.} More generally, Dirac masses recovery is part of a trend that promotes continuous-domain formalisms for signal reconstruction. By adding a differential operator to the total variation regularization, one allows for more diverse reconstructions than the recovery of sums of Dirac impulses, while keeping the sparsity-promoting effect of the total variation norm. Even predating the era of ReLU networks,~\eqref{eq:noisyproblem} and, to a greater degree,~\eqref{eq:noiselessproblem}---or variations thereof---have been of keen interest to the signal processing and statistics communities. Adding a differential operator leads to spline reconstructions, a result that can be traced back to~\cite{fisher1975spline,boor1976best} in the 70's. In \cite{pinkus1988smoothest}, Pinkus proved that the canonical solution---which simply connects the data points---is the unique solution to~\eqref{eq:noiselessproblem} in some special cases, a result that we recover in our analysis. Later, Koenker~\etal~\cite[Theorem 1]{koenker1994quantile} and Mammen and Van de Geer~\cite[Proposition 7]{mammen1997locally} proved that the canonical solution is indeed a solution to~\eqref{eq:noiselessproblem}. These works also propose algorithms to solve~\eqref{eq:noisyproblem} for any value of $\lambda$. However, contrary to this paper, none of the aforementioned works describes the full solution set of~\eqref{eq:noiselessproblem} or identifies its sparsest solutions.
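As an aside, the regularization cost of this canonical connect-the-dots interpolant has a closed form: its second derivative is a sum of Dirac masses at the interior data sites, weighted by the successive slope changes, so $\Vert\D^2 f \Vert_{\Spc{M}}$ is the sum of absolute slope changes. A minimal sketch with made-up data points:

```python
import numpy as np

def canonical_cost(x, y):
    """Regularization cost ||D^2 f||_M of the canonical piecewise-linear
    interpolant connecting (x[m], y[m]): the sum of absolute slope changes."""
    slopes = np.diff(y) / np.diff(x)     # slope on each segment
    return float(np.sum(np.abs(np.diff(slopes))))

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 1.0, 2.0])       # segment slopes: 1, 0, 1
print(canonical_cost(x, y))              # |0 - 1| + |1 - 0| = 2.0
```

Since, by the results of Koenker \etal and Mammen and Van de Geer cited above, this interpolant is a solution of~\eqref{eq:noiselessproblem}, the computed value is the optimal cost itself.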
There has been a promising new surge of very recent works on related problems, both on the theoretical and the algorithmic sides~\cite{unser2018representer,boyer2019representer,duval2019epigraphical,flinth2019exact,bredies2019sparsity,simeoni2019sparse,simeoni2020functional,debarre2019hybrid}. Several very general theories, that incorporate~\eqref{eq:noisyproblem} and~\eqref{eq:noiselessproblem}, and that deal with optimization in Banach spaces with various differential regularization operators, have also been recently developed~\cite{boyer2019representer,bredies2019sparsity,Unser2019native}. \paragraph{ReLU Networks, Piecewise-Linear Splines, and the \eqref{eq:noisyproblem}.} A modern approach to supervised learning is neural networks, which in recent years have become the gold standard for an impressive number of applications~\cite{goodfellow2016deep}. Many recent papers have highlighted the property that today's state-of-the-art convolutional neural networks (CNNs) with rectified linear unit (ReLU) activations specify an input-output relation $f: \R^d \to \R$, where $d$ is the number of dimensions, that is continuous and piecewise-linear (CPWL) \cite{pascanu2014number, montufar2014number, balestriero2018mad}. This result stems from the fact that the ReLU nonlinearity is itself a CPWL function, as well as, for instance, the widespread max-pooling operation. In fact, there are indications that using more general piecewise-linear splines as activation functions could be more effective than restricting to the ReLU or leaky ReLU \cite{agostinelli2015learning, unser2018representer,aziznejad2020deep}. In the one-dimensional case $d=1$, it follows that the learned function of a ReLU network is a piecewise-linear spline~\cite{daubechies2019nonlinear}, just like the solutions to the \eqref{eq:noisyproblem} given by \eqref{eq:formsolution}. The trade-off between universality and Occam's razor is then determined by the network size and architecture. 
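The one-dimensional correspondence can be checked directly: a one-hidden-layer ReLU network with a linear skip connection implements exactly a function of the form \eqref{eq:splines_def}. A small numerical sketch with arbitrary weights (not a trained network):

```python
import numpy as np

def relu_net(x, b0, b1, a, tau):
    """One-hidden-layer ReLU network with a linear skip term:
    b0 + b1*x + sum_k a_k * relu(x - tau_k), which is exactly a
    piecewise-linear spline with knots tau_k and slope jumps a_k."""
    return b0 + b1 * x + float(np.sum(a * np.maximum(x - tau, 0.0)))

b0, b1 = 0.5, -1.0                 # arbitrary illustrative weights
a = np.array([2.0, -3.0, 1.0])     # slope jumps at the knots
tau = np.array([0.2, 0.5, 0.8])    # knot locations

# Between consecutive knots the map is affine, so the midpoint value
# equals the average of the endpoint values:
lhs = relu_net(0.3, b0, b1, a, tau) + relu_net(0.4, b0, b1, a, tau)
rhs = 2.0 * relu_net(0.35, b0, b1, a, tau)
print(abs(lhs - rhs) < 1e-12)      # True: affine on (0.2, 0.5)
```

Conversely, any spline of the form \eqref{eq:splines_def} defines such a network, so the two parametrizations coincide in dimension $d=1$.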
Many recent papers in the literature have investigated this connection between ReLU networks and piecewise-linear splines \cite{poggio2015notes, boelcskei2019optimal}, including universality properties~\cite{daubechies2019nonlinear, yarotsky2017error, petersen2018optimal}. We also mention \cite{gribonval2019approximation}, which considers more general spline activation functions. Moreover, several works have specifically underscored the relevance of the~\eqref{eq:noisyproblem}---or related problems~\cite{dios2020sparsity}---in machine learning by showing that it is equivalent to the training of a one-dimensional ReLU network with standard weight decay \cite{savarese2019how, parhi2019minimum}. Therefore, although the current trend of overparametrizing neural networks is somewhat antagonistic to our paradigm of sparsity, our full description of the solution set of \eqref{eq:noisyproblem} (including its non-sparse solutions) could be relevant to the neural network community. Other recent works have designed multidimensional ($d>1$) equivalents of the regularization term $\Vert\D^2 f \Vert_{\Spc{M}}$ and derived similar connections to neural networks \cite{parhi2020neural, ongie2020function}. \section{Mathematical Preliminaries} \label{sec:math_bckgrnd} The task of recovering a continuous-domain function from finitely many samples is obviously ill-posed; this issue is commonly addressed by adding a regularization term. As a regularization norm, we consider $\Vert \cdot \Vert_\Spc{M}$, which is the continuous-domain counterpart of the $\ell_1$-norm, and is known to promote sparse solutions~\cite{Foucart2013mathematical}. Some of the results of this section (in Sections~\ref{sec:BVspace} and~\ref{sec:BVRT}) are not new, as they can be obtained by specializing the general framework developed in previous works~\cite{Unser2017splines,gupta2018continuous,Unser2019native} to the case of the second-derivative operator $\mathrm{L} = \mathrm{D}^2$.
Nevertheless, we provide a self-contained treatment, for the benefit of readers who are unfamiliar with the general theory. \subsection{The Measure Space $\Radon$} We denote by $\Radon$ the space of bounded Radon measures on $\RR$. It is a nonreflexive Banach space and is defined as the topological dual of the space $\Co$ of continuous functions that vanish at $\pm\infty$, endowed with the supremum norm $\normi{\cdot}$. The duality product between a measure $w \in \Radon$ and a function $f \in \Co$ is denoted by $\langle w, f \rangle \eqdef \int_{\R} f \mathrm{d} w$. The norm on $\Radon$ is called the total-variation norm and is given by \begin{equation} \label{eq:Mnorm} \forall w\in\Radon, \quad \mnorm{w} \eqdef \sup_{f \in \Co, \ \lVert f \rVert_{\infty} \leq 1} \langle w, f \rangle. \end{equation} Moreover, we have the continuous embeddings \begin{equation} \Sch \subseteq \Radon \subseteq \Schp, \end{equation} where $\Sch$ is the Schwartz space of smooth and rapidly decaying functions and $\Schp$ is its topological dual, the space of tempered distributions~\cite{Schwartz1966distributions}. We observe that we can replace $\Co$ by $\Sch$ in~\eqref{eq:Mnorm} by invoking the denseness of $\Sch$ in $\Co$, and then characterize the bounded Radon measures among $\Schp$ as \begin{equation} \Radon = \{ w \in \Schp, \ \sup_{f \in \Sch, \ \lVert f \rVert_{\infty} \leq 1} \langle w, f \rangle < \infty \}. \end{equation} \subsection{The Native Space $\BV$} \label{sec:BVspace} Motivated by the form of the regularization in~\eqref{eq:noiselessproblem} and~\eqref{eq:noisyproblem}, we introduce the space over which we shall optimize both problems. It is defined as \begin{equation} \label{eq:BV2} \BV \eqdef \{ f \in \mathcal{S}'(\R), \ \D^2 f \in \Radon\}, \end{equation} with $\D^2 : \Schp \rightarrow \Schp$ the second-derivative operator. The space $\BV$ has been considered and studied in~\cite[Section 2.2]{unser2018representer}.
It is the second-order generalization of the well-known space of functions with bounded variation. For the sake of completeness, a detailed presentation of the mathematical properties of $\BV$ is provided in Appendix~\ref{app:1}. For now, it is important to remember that $\BV$ is a Banach space equipped with the norm \begin{equation} \label{eq:normBVfirst} \lVert f \rVert_{\mathrm{BV}^{(2)}} \eqdef \lVert \mathrm{D}^2 f \rVert_{\Radon} + \sqrt{f(0)^2 + (f(1) - f(0))^2}. \end{equation} Moreover, any function $f \in \BV$ is continuous and such that $f(x) = \mathcal{O}(x)$ at infinity (see Proposition~\ref{prop:BV2} in Appendix~\ref{app:1}). For any $w \in \Radon$, we denote by $\Itwo \{ w\}$ the unique function $f \in \BV$ such that $\D^2 f = w$ and $f(0) = f(1) = 0$, according to the last point of Proposition~\ref{prop:BV2}. Then, $\Itwo$ is a continuous operator from $\Radon$ to $\BV$, whose main properties are summarized in Proposition~\ref{prop:Itwo} in Appendix~\ref{app:1}. Its effect is to doubly integrate the measure on which it operates\footnote{The notation $\Itwo$ has two justifications. First, it recalls that this operator is a right-inverse of the second derivative $\mathrm{D}^2$. However, the index 0 indicates that $\Itwo$ is \emph{not} a left-inverse, as revealed by Proposition~\ref{prop:Itwo} in Appendix~\ref{app:1}.}. Moreover, any $f \in \BV$ can be uniquely decomposed as \begin{align} \label{eq:fdecompose} \forall x \in \R, \quad f(x) = \Itwo \{w\} (x) + \alpha + \beta x, \end{align} where $w\in \Radon$ and $\alpha,\beta \in \R$ satisfy \begin{align} w = \D^2 f, \quad \alpha = f(0), \qandq \beta = f(1) - f(0) . \end{align} We call the measure $w$ the \emph{innovation} of $f$. The key members of $\BV$ we are interested in are piecewise-linear splines, which are defined as follows. 
\begin{definition}[Piecewise-Linear Spline] \label{def:splines} A piecewise-linear spline is a function $f\in \BV$ whose innovation $w = \D^2 f \in \Radon$ is a weighted sum of Dirac masses $w = \sum_{k=1}^K a_k \delta(\cdot - \tau_k)$, where $K \in \N$ is the number of knots (\ie singularities), called the \emph{sparsity} of the spline, and $a_k, \tau_k \in \R$. \end{definition} It follows from Definition~\ref{def:splines} that a piecewise-linear spline $f$ can equivalently be written as \begin{align} \label{eq:splines_def} f(x) = b_0 + b_1 x + \sum_{k=1}^K a_k (x - \tau_k)_+, \end{align} where $b_0, b_1 \in \R$. Note that this representation is different from that of~\eqref{eq:fdecompose} (in general, $(\alpha, \beta) \neq (b_0, b_1)$); however, we favor the representation \eqref{eq:splines_def} for splines due to its simplicity. \subsection{Representer Theorem for $\BV$} \label{sec:BVRT} The native space $\BV$ allows us to precisely define the optimization problems we are interested in. Indeed, it is the largest space for which the regularization $\lVert \D^2 f \rVert_{\Spc{M}}$ is well-defined and finite. The following result is a special case of a more general theory, which is now well established. \begin{theorem}[Representer Theorem for $\BV$] \label{theo:RTweneed} Let $\x=(x_1,\ldots,x_M) \in \R^M$ be a collection of $M \geq 2$ distinct ordered sampling locations and $\obs \in \R^M$. We consider the set of solutions \begin{equation}\label{eq:noiseless} \mathcal{V}_0 \eqdef \underset{\substack{f \in \BV \\ f(x_m) = y_{0,m}, \ m=1, \ldots, M}}{\arg \min} \lVert \Op{D}^2 f \rVert_{\mathcal{M}}. \tag{\gCBP} \end{equation} Moreover, we fix $\lambda > 0$ and $\V{y} \in \R^M$, together with a cost function $E : \R \times \R \rightarrow \R^+$ such that $E(\cdot, y)$ is strictly convex, coercive, and differentiable for any $y \in \R$.
We also consider the set of solutions \begin{align}\label{eq:noisy} \mathcal{V}_\lambda \eqdef \argmin_{f \in \BV} \sum_{m=1}^M E(f(x_m), y_m) + \lambda \Vert\D^2 f \Vert_{\Spc{M}}. \tag{\gBLASSO} \end{align} Then, for any $\lambda \geq 0$, $\mathcal{V}_\lambda$ is nonempty, convex, and weak-* compact in $\BV$, and is the weak-* closure of the convex hull of its extreme points. The latter are all piecewise-linear splines of the form \begin{equation} \label{eq:extremepoints} f_{\mathrm{extreme}} (x) = b_0 + b_1 x + \sum_{k=1}^{K} a_k (x - \tau_k)_+, \end{equation} where $b_0, b_1 \in \R$, the weights $a_k$ are nonzero, the knot locations $\tau_k \in \R$ are distinct, and $K \leq M-2$. \end{theorem} Following the seminal work of Fisher and Jerome~\cite{fisher1975spline}, this result was proved in~\cite[Theorem 2]{Unser2017splines} for $\lambda = 0$ and for a general spline-admissible operator $\mathrm{L}$ in the regularization term $\lVert \mathrm{L} \cdot \rVert_{\Radon}$. The case $\lambda > 0$ is proved in \cite[Theorem 4]{gupta2018continuous} for a general cost function $E$, by reducing the analysis to the optimization problem~\eqref{eq:noiseless} (as we shall do in Section~\ref{sec:noisy}). Theorem~\ref{theo:RTweneed} is then a particular case of these two works for the regularization operator $\mathrm{L} = \D^2$, whose null space is generated by $x\mapsto 1$ and $x \mapsto x$, and for sampling measurements. Note that the application of these known theorems requires proving that the point evaluation $f\mapsto f(x_0)$ is weak-* continuous on $\BV$ for any $x_0 \in \R$, which has been shown in~\cite[Theorem 1]{unser2018representer}. These theorems have recently been revisited and/or extended by several authors~\cite{boyer2019representer,flinth2019exact,bredies2019sparsity}.
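As a numerical sanity check on the extreme-point form \eqref{eq:extremepoints}, the regularization cost $\lVert \D^2 f \rVert_{\Radon} = \sum_k |a_k|$ of such a spline can be recovered from finite second differences on a grid. The sketch below uses arbitrary parameters and assumes the knots are separated by more than two grid cells, so that each knot contributes exactly $|a_k| h$ to the sum:

```python
import numpy as np

# A spline of the extreme-point form b0 + b1*x + sum_k a_k (x - tau_k)_+
b0, b1 = 1.0, 0.5                      # arbitrary linear part
a = np.array([1.5, -2.0, 0.7])         # nonzero weights (slope jumps)
tau = np.array([0.25, 0.5, 0.75])      # distinct knot locations

x = np.linspace(0.0, 1.0, 1001)        # fine grid with step h
h = x[1] - x[0]
f = b0 + b1 * x + np.maximum(x[:, None] - tau, 0.0) @ a

# D^2 f is the measure sum_k a_k delta_{tau_k}; its total-variation norm
# sum_k |a_k| is recovered by summing absolute second differences over h.
tv = np.sum(np.abs(np.diff(f, 2))) / h
print(abs(tv - np.sum(np.abs(a))) < 1e-8)
```

The agreement is exact up to floating-point error because the contributions of a single kink to the two bracketing second differences have the same sign and sum to $a_k h$.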
Theorem~\ref{theo:RTweneed} is called a ``representer theorem'', as initially proposed in~\cite{Unser2017splines}, because it specifies the form of the extreme-point solutions of the optimization problem. It is then possible to reduce the optimization task to functions of the form~\eqref{eq:extremepoints}, which considerably simplifies the analysis~\cite{gupta2018continuous,Debarre2019}. Theorem~\ref{theo:RTweneed} is also an existence result. It guarantees that the minimization problem~\eqref{eq:noiselessproblem} admits at least one piecewise-linear solution. In particular, if the solution is unique, then it is a piecewise-linear spline. However, Theorem~\ref{theo:RTweneed} is not informative regarding the knot locations $\tau_k$, which may be distinct from the sampling locations $x_m$. To the best of our knowledge, very few attempts have been made to characterize the cases where gTV optimization problems admit a unique solution, and to describe the solution set when the solution is not unique. In this paper, we provide complete answers to these questions for the reconstruction of functions via sampling measurements and with $\mathrm{BV}^{(2)}$-type regularization. \subsection{Dual Certificates} This section presents the main tools, coming from duality theory, for the study of the~\eqref{eq:noiseless} problem (with $\x\in\RR^M$ the ordered distinct sampling locations and $\V y_0\in\RR^M$ the measurements); these tools are at the core of our contributions. Our strategy consists in studying a particular class of continuous functions, called \emph{dual certificates}, which can be used individually to certify that an element $f\in\BV$ is a solution of the optimization problem~\eqref{eq:noiseless}.
More interestingly, from the properties of a given dual certificate, it is possible to precisely describe the whole structure of the set of solutions (see Theorem~\ref{theo:sol_set}) and, in particular, to determine whether or not the sparse solution given by Theorem~\ref{theo:RTweneed} is the unique solution of the problem (see Proposition~\ref{prop:uniqueness}). Before giving the main results of this section (Propositions~\ref{prop:cns-sol-constrained} and \ref{prop:cns-sol-constrained-fixed-dual-certif}), let us first introduce the definition of a dual pre-certificate. \begin{definition}[Dual Pre-Certificates]\label{def:dual-certif} We say that a function $\eta\in\Co$ is a \emph{dual pre-certificate} (for the problem~\eqref{eq:noiseless}) if its norm satisfies $\normi{\eta} \leq 1$ and if $\eta$ is of the form \begin{align} \eta = \sum_{m=1}^M \dualvarm \green{x_m - \cdot} \end{align} for some vector $\dualvar = (c_1 , \ldots, c_M) \in \R^M$ such that $\dotp{\dualvar}{\un} = \dotp{\dualvar}{\x} = 0$ (with $\un\eqdef(1,\ldots,1)\in\RR^M$). \end{definition} A dual pre-certificate is therefore a piecewise-linear spline. The conditions $\dotp{\dualvar}{\un} = \dotp{\dualvar}{\x} = 0$ ensure that $\eta$ is compactly supported, and is thus an element of $\Co$ (indeed, we have $\eta(x) = - \langle \V{c}, \V{1} \rangle x + \langle \V{c}, \V{x} \rangle = 0$ for any $x \leq x_1$). We shall present an explicit construction of such a pre-certificate in Proposition~\ref{prop:canocertif} with the piecewise-linear spline $\etacano$. A dual certificate is a pre-certificate that satisfies an additional condition (see Proposition~\ref{prop:cns-sol-constrained}) that ensures that the vector $\V c\in\RR^M$ in Definition~\ref{def:dual-certif} is a solution of the dual problem of~\eqref{eq:noiseless}. 
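A dual pre-certificate is easy to construct numerically: project an arbitrary vector onto the orthogonal complement of $\mathrm{span}\{\un, \x\}$ to enforce $\dotp{\dualvar}{\un} = \dotp{\dualvar}{\x} = 0$, rescale so that $\normi{\eta} \leq 1$, and expand against the Green's function, taken here as $\green{t} = (t)_+$ (an assumption consistent with the expansion $\eta(x) = - \langle \V{c}, \V{1} \rangle x + \langle \V{c}, \V{x} \rangle$ for $x \leq x_1$ used above). A sketch with arbitrary sampling locations:

```python
import numpy as np

rng = np.random.default_rng(1)
x_samp = np.sort(rng.uniform(0.0, 1.0, size=6))   # ordered sampling locations

# Enforce <c, 1> = <c, x> = 0 by projecting a random vector onto the
# orthogonal complement of span{1, x} (least-squares projection).
N = np.stack([np.ones_like(x_samp), x_samp], axis=1)
c = rng.standard_normal(6)
c -= N @ np.linalg.lstsq(N, c, rcond=None)[0]

def eta(t):
    """Dual pre-certificate eta(t) = sum_m c_m (x_m - t)_+ ."""
    return float(np.sum(c * np.maximum(x_samp - t, 0.0)))

grid = np.linspace(-1.0, 2.0, 3001)
vals = np.array([eta(t) for t in grid])
c /= max(np.abs(vals).max(), 1e-12)               # rescale: ||eta||_inf <= 1
vals = np.array([eta(t) for t in grid])

# eta vanishes outside [x_1, x_M]: it is compactly supported, hence in C_0.
outside = (grid <= x_samp[0]) | (grid >= x_samp[-1])
print(np.abs(vals[outside]).max() < 1e-12, np.abs(vals).max() <= 1.0 + 1e-12)
```

The two orthogonality constraints alone are what force the compact support, as verified numerically on the left tail.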
From~\eqref{eq:fdecompose}, we know we can parametrize any $f\in\BV$ with a unique element $(w,(\al,\be)) \in\Radon\times\RR^2$ through the relation \begin{align} \forall x \in \R, \quad f(x) = \Itwo\{w\} (x) +\al+\be x. \end{align} Dual certificates determine the localization of the support of $w$ when $f$ is a solution of~\eqref{eq:noiseless}. To formulate this property, we need the following definition, which introduces the concepts of signed support of a measure (see~\cite[Section 1.4]{Duval2015exact}) and signed saturation set of a pre-certificate (see~\cite[Definition 3]{Duval2015exact}). \begin{definition}[Signed Support and Signed Saturation Set]\label{def:supp-sat} Let $w\in\Radon$ and $\eta\in\Co$ be a dual pre-certificate in the sense of Definition~\ref{def:dual-certif}. We define the \emph{signed support} of $w$ by \begin{align} \ssupp w \eqdef \supp(w_+) \times \{1\} \cup \supp(w_-) \times \{-1\}, \end{align} where $w_+$ and $w_-$ are the positive measures coming from the Jordan decomposition $w=w_+ - w_-$. Moreover, from the positive and negative saturation sets of $\eta$, defined as \begin{align} \satp \eta \eqdef \{x\in\RR;\ \eta(x)=1\} \qandq \satn \eta \eqdef \{x\in\RR;\ \eta(x)=-1\} \end{align} respectively, we define the \emph{signed saturation set} of $\eta$ by \begin{align} \ssat \eta \eqdef \satp{\eta} \times \{1\} \cup \satn{\eta} \times \{-1\}. \end{align} \end{definition} Note that the sets $\ssupp w$, $\satp \eta$, $\satn \eta$, $\ssat \eta$ are all closed. A dual pre-certificate $\eta$ is a piecewise-linear spline in $\Co$ with norm $\lVert \eta \rVert_\infty \leq 1$. Hence, its signed saturation set is necessarily a union of closed intervals (which can be singletons). We can now state the first main result of this section, the proof of which can be found in Appendix~\ref{app:2}. It characterizes the solutions of~\eqref{eq:noiseless} via the signed support of their innovation using the signed saturation set of some dual pre-certificate.
\begin{proposition}\label{prop:cns-sol-constrained} Let $\x\in \R^M$ be the ordered sampling locations, and $\V{y}_0 \in \R^M$. An element $\fopt \in\BV$ is a solution of~\eqref{eq:noiseless} if and only if $\fopt$ satisfies the interpolation conditions $\fopt(x_m)=y_{0,m}$ for all $m\in\{1,\ldots,M\}$ and one can find a dual pre-certificate $\eta$ (Definition~\ref{def:dual-certif}) such that \begin{align} \mnorm{w} = \dotp{w}{\eta}, \label{eq:duality-interpolation-general} \end{align} where $w\eqdef \mathrm{D}^2 \{\fopt\}$ is the innovation of $\fopt$. The condition \eqref{eq:duality-interpolation-general} is moreover equivalent to the inclusion \begin{align}\label{eq:duality-interpolation-expanded} \ssupp {w} \subset \ssat \eta. \end{align} The dual pre-certificate $\eta$ is then called a \emph{dual certificate} (for problem~\eqref{eq:noiseless}). \end{proposition} \begin{remark} When $\fopt\in\BV$ is a piecewise-linear spline, \ie $\fopt(x)= \sum_{k=1}^K a_k\green{x -\tau_k}+b_0+b_1 x$ for all $x\in \R$ (see~\eqref{eq:splines_def}), the condition~\eqref{eq:duality-interpolation-expanded} is equivalent to the following interpolation requirements on the dual pre-certificate $\eta$: \begin{align} \forall k\in\{1,\ldots,K\}, \ \eta(\tau_k) = \sign(a_k). \end{align} \end{remark} From Proposition~\ref{prop:cns-sol-constrained}, a dual certificate $\eta$ is thus a dual pre-certificate that certifies that a given $\fopt\in\BV$ is a solution of~\eqref{eq:noiseless}, \ie $\fopt$ satisfies $\fopt(x_m)=y_{0,m}$ for all $m\in\{1,\ldots,M\}$ and $\ssupp {\Op D^2 \fopt} \subset \ssat \eta$ (or equivalently $\mnorm{\Op D^2 \fopt} = \dotp{\Op D^2 \fopt}{\eta}$). Once we know that some $\eta$ is a dual certificate, it can be used to check whether \emph{any} $f\in\BV$ is a solution of~\eqref{eq:noiseless}.
In other words, contrary to what is seemingly implied in Proposition~\ref{prop:cns-sol-constrained}, there is no need to find a new dual pre-certificate for each candidate solution $f$. This is formulated in the following proposition, the proof of which can be found in Appendix~\ref{app:2bis}. \begin{proposition}\label{prop:cns-sol-constrained-fixed-dual-certif} Let $\x\in \R^M$ be the ordered sampling locations, $\V{y}_0 \in \R^M$, and let $\eta\in\Co$ be a dual certificate as defined in Proposition~\ref{prop:cns-sol-constrained} for the problem~\eqref{eq:noiseless}. Then, an element $\fopt \in\BV$ is a solution of~\eqref{eq:noiseless} if and only if $\fopt$ satisfies the interpolation conditions $\fopt(x_m)=y_{0,m}$ for all $m\in\{1,\ldots,M\}$ and \begin{align}\label{eq:duality-interpolation-expanded-fixed-dual-certif} \ssupp {w} \subset \ssat \eta, \end{align} or equivalently $\mnorm{w} = \dotp{w}{\eta}$, where $w\eqdef\Op D^2 \fopt$ is the innovation of $\fopt$. \end{proposition} To end this section, let us illustrate how the concept of dual certificates can be used to describe the solution set of~\eqref{eq:noiseless}. Suppose that we know that some $\eta$ is a dual certificate (we prove in Proposition~\ref{prop:optimality_cano} that this is the case for the dual pre-certificate $\etacano$ introduced in Proposition~\ref{prop:canocertif}); then the condition $\ssupp w \subset \ssat \eta$ of Proposition~\ref{prop:cns-sol-constrained-fixed-dual-certif} enforces strong constraints on any candidate solution of~\eqref{eq:noiseless}. This is all the more true when $\ssat \eta$ is a discrete set, which we consider in the next definition and proposition. \begin{definition}[Nondegeneracy]\label{def:non-degen} Let $\x\in \R^M$ be the ordered sampling locations, $\V{y}_0 \in \R^M$ and let $\eta \in \Co$ be any dual certificate as defined in Proposition~\ref{prop:cns-sol-constrained}.
We say that $\eta$ is \emph{nondegenerate} if its signed saturation set $\ssat \eta$ defined in Definition~\ref{def:supp-sat} is a discrete set. Otherwise, we say that it is \emph{degenerate}. \end{definition} \begin{proposition}[General Uniqueness Result for~\eqref{eq:noiseless}]\label{prop:general-uniqueness} Let $\x\in \R^M$ be the ordered sampling locations and $\V{y}_0 \in \R^M$. If there exists a nondegenerate dual certificate in the sense of Definition~\ref{def:dual-certif}, then the optimization problem~\eqref{eq:noiseless} has a unique solution, which is a piecewise-linear spline in the sense of Definition~\ref{def:splines} with $K \leq M-2$ knots $\tau_k$ that form a subset of the sampling points $\{x_2,\ldots,x_{M-1}\}$. \end{proposition} The proof of Proposition~\ref{prop:general-uniqueness} is given in Appendix~\ref{app:3}. \section{Experiments} \label{sec:experiments} In this section, we describe the implementation of our two-step algorithm presented in Section \ref{sec:algonoisy} and show our experimental results. The first step of our algorithm, which consists in solving problem \eqref{eq:optizlambda} with ADMM, is implemented using GlobalBioIm, a Matlab inverse-problem library developed by the Biomedical Imaging Group at EPFL \cite{soubies2019pocket}. In all our experiments, we choose the standard quadratic data fidelity loss $E(z, y) = \frac{1}{2}(y-z)^2$. This choice leads to $\partial_1 E(z, y) = z-y$, which enables the simple computation of $\lambda_{\text{max}}$ using \eqref{eq:lambda_max}. We present an illustrative example with $M=30$ simulated data points in Figure \ref{fig:cost_sparsity}. A small number is chosen for visualization purposes; an application of our algorithm with a larger number of $M=200$ data points was shown in Figure \ref{fig:regression_vs_NN}. The sampling locations $x_m$ are generated by sampling uniformly in the intervals $[\frac{m-1}{M}, \frac{m}{M}]$ for $m=1, \hdots, M$.
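The jittered-grid sampling just described can be sketched as follows (one uniform draw per interval, which automatically yields ordered and, with probability one, distinct locations; the seed is arbitrary):

```python
import numpy as np

M = 30
rng = np.random.default_rng(42)
# One uniform sample in each interval [(m-1)/M, m/M], m = 1, ..., M.
x = (np.arange(M) + rng.uniform(size=M)) / M
print(np.all(np.diff(x) > 0), x.min() >= 0.0, x.max() <= 1.0)
```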
Next, the ground-truth signal, a piecewise-linear spline $f_0$ in the sense of Definition~\ref{def:splines} with 2 knots, is generated, with random knot locations $\tau_k$ within the interval $[0,1]$ and i.i.d. Gaussian amplitudes $a_k$ ($\sigma_a^2 = 1$). We then have $y_m = f_0(x_m) + n_m$ for $m=1, \hdots, M$, where $\V{n} \in \R^M$ is i.i.d. Gaussian noise ($\sigma_n^2 = 4 \times 10^{-4}$). \subsection{Extreme Values of $\lambda$} The reconstructions using our algorithm for extreme values of $\lambda$ (\ie $\lambda \to 0$, which leads to exact interpolation of the data, and $\lambda = \lambda_{\text{max}}$, which leads to linear regression) are shown in Figure \ref{fig:cost_sparsity_extremes}. Clearly, neither of these solutions is satisfactory: on the one hand, linear regression is too simple to model the data adequately. On the other hand, the exact interpolator suffers from overfitting. Although, thanks to the sparsification procedure in Algorithm~\ref{alg:algo_constrained}, its sparsity $\sparsity{\x}{\obsr} = 20$ is smaller than the theoretical bound $M-2 = 28$ given by Theorem \ref{theo:RTweneed}, it is still clearly much larger than the desired outcome. \subsection{Sparsity \vs Data Fidelity Loss Trade-Off} Next, we show the sparsity $\sparsity{\x}{\obsr}$ \vs error $\Vert \V{y} - \obsr \Vert$ trade-off curve in Figure \ref{fig:cost_sparsity_tradeoff}. The latter was obtained by applying our algorithm with 20 values of $\lambda$ (equispaced on a logarithmic scale) within the range $[\lambda_{\min}, \lambda_{\text{max}}]$, with $\lambda_{\text{max}} = 0.1713$ (as defined in~\eqref{eq:lambda_max}) and $\lambda_{\min} \eqdef 10^{-5} \times \lambda_{\text{max}} $. We thus observe the evolution from exact interpolation to linear regression as $\lambda$ increases. Ideally, one would like to choose the value of $\lambda$ that minimizes $\Vert \V{y}_0 - \obsr \Vert$, \ie the error with respect to the noiseless data $\V{y}_0$.
However, in practice, the noiseless data is unknown, and one must use the noisy data $\V{y}$. Depending on the noise level, solely minimizing $\Vert \V{y} - \obsr \Vert$ might not be a desirable objective, since it leads to overfitting. Hence, we consider the trade-off between data fidelity loss and sparsity as a proxy for the standard universality \vs simplicity trade-off in machine learning. Note that we choose the data fidelity loss $\Vert \V{y} - \obsr \Vert$ instead of $\lambda$ as the $x$-axis metric, since the former is an increasing function of the latter and is easier to interpret. This trade-off curve does not specify a single optimal value of the regularization parameter $\lambda$. Instead, it helps the user choose an appropriate balance by giving quantitative, interpretable data about the possible trade-offs. A key observation is that this curve is not necessarily monotonic: the sparsity can increase as $\Vert \V{y} - \obsr \Vert$ increases, as shown in Figure \ref{fig:cost_sparsity_tradeoff}. This lack of monotonicity is rather counter-intuitive, since the overall trend as $\lambda$ increases is to go from sparsity $\sparsity{\x}{\V{y}} = 20$ to $\sparsity{\x}{\V{y}_{\lambda_{\text{max}}}} = 0$. Note that a similar behavior has been known to occur in the context of the homotopy method \cite{bach2011optimization}, although it is far from being systematic. However, an interesting consequence is that, in the sparsity \vs error trade-off, some values of $\lambda$ can be strictly better than others for both metrics, as illustrated by the star point versus the square point in Figure~\ref{fig:cost_sparsity_tradeoff}. Having access to the full trade-off curve such as Figure \ref{fig:cost_sparsity_tradeoff} is very helpful to judiciously select a suitable value of $\lambda$.
This holds true as well when the curve is monotonic: indeed, the user should select the value of $\lambda$ such that the data fidelity is lowest for the desired level of sparsity, \ie the leftmost point of every plateau. \subsection{Example Reconstructions} To illustrate the non-monotonicity of the sparsity \vs error curve, examples of reconstructions for two specific values of $\lambda$ are shown in Figures \ref{fig:cost_sparsity_sparse} and \ref{fig:cost_sparsity_nonsparse}. Indeed, the former reconstruction has a lower value of $\lambda$, and thus a lower data-fidelity loss. Nevertheless, the reconstruction in Figure \ref{fig:cost_sparsity_sparse} is sparser, with $\sparsity{\x}{\obsr} = 3$ \vs 6 in Figure \ref{fig:cost_sparsity_nonsparse}. Note that this gap is not a numerical artefact, since the magnitudes of the weights $\tilde{a}_k$ associated with the knots in Figure \ref{fig:cost_sparsity_nonsparse} are well above numerical precision. This indicates that the value of $\lambda$ for Figure \ref{fig:cost_sparsity_sparse} should be preferred to that of Figure~\ref{fig:cost_sparsity_nonsparse}. \begin{figure}[t] \centering \subfloat[Extreme cases.]{\includegraphics[width=0.5\linewidth, valign=t]{cost_sparsity_extremes.eps} \label{fig:cost_sparsity_extremes}} \subfloat[Sparsity \vs error trade-off.
The reconstruction corresponding to the star point is shown in Figure~\ref{fig:cost_sparsity_sparse}, and the one corresponding to the square point in Figure~\ref{fig:cost_sparsity_nonsparse}.]{\includegraphics[width=0.5\linewidth, valign=t]{cost_sparsity.eps}\label{fig:cost_sparsity_tradeoff}} \\ \subfloat[$\lambda = 1.7 \times 10^{-3}$, loss $\Vert \V{y} - \obsr \Vert = 0.0983$, sparsity $\sparsity{\x}{\obsr} = 3$.]{\includegraphics[width=0.5\linewidth, valign=t]{cost_sparsity_middle_sparse.eps}\label{fig:cost_sparsity_sparse}} \subfloat[$\lambda = 1.71 \times 10^{-2}$, loss $\Vert \V{y} - \obsr \Vert = 0.1429$, sparsity $\sparsity{\x}{\obsr} = 6$.]{\includegraphics[width=0.5\linewidth, valign=t]{cost_sparsity_middle_nonsparse.eps}\label{fig:cost_sparsity_nonsparse}} \caption{Example of reconstruction for varying regularization $0 \leq \lambda \leq \lambda_\text{max} = 0.1713$ with $M=30$ simulated data points. } \label{fig:cost_sparsity} \end{figure} \section{The Solutions of~\eqref{eq:noisyclean}}\label{sec:noisy} We now focus on the~\eqref{eq:noisyclean} problem, in which the interpolation of the data is no longer required to be exact as in Section~\ref{sec:noiseless}, but is formulated as a penalized problem with a regularization parameter $\lambda > 0$. In practice, such problems are typically formulated when we have access to noise-corrupted measurements $\V{y} = \V{y}_0 + \V{n}$ where $\V{n} \in \R^M$ is a noise term. In this case, we solve the following optimization problem \begin{equation}\label{eq:noisyclean} \Spc{V}_\lambda \eqdef \argmin_{f \in \BV} \sum_{m=1}^M E(f(x_m), y_{m}) + \lambda \lVert \Op{D}^2 f \rVert_{\mathcal{M}}, \tag{\gBLASSO} \end{equation} where $E(\cdot, y)$ is a strictly convex, coercive, and differentiable cost function (typically quadratic, \ie $E(z, y) = \frac{1}{2} (z - y)^2$) for any $y \in \R$, and $\lambda > 0$ is a regularization parameter. 
The latter controls the weight between the data fidelity term $\sum_{m=1}^M E(f(x_m), y_{m})$ and the regularization term $\Vert \Op{D}^2 f \Vert_{\Spc{M}}$, and should therefore be adapted to the noise level. \subsection{From~\eqref{eq:noiselessclean} to~\eqref{eq:noisyclean}: Reduction to the Noiseless Case} \label{sec:solutionnoisy} We now show that the~\eqref{eq:noisyclean} problem can be reduced to an optimization problem of the form~\eqref{eq:noiselessclean} (see~\cite[Theorem 5]{gupta2018continuous}), as is often done in finite-dimensional optimization problems~\cite[Lemma 1]{Tibshirani2013lasso}. \begin{proposition}[Reformulation of~\eqref{eq:noisyclean} as~\eqref{eq:noiselessclean}] \label{prop:penalized_to_constrained} Let $\x\in \R^M$ be the ordered sampling locations, and $\V{y}\in \R^M$ with $M \geq 2$. Let $E : \R\times\R \rightarrow \R^+$ be a cost function such that $E(\cdot, y)$ is strictly convex, coercive, and differentiable for every $y \in \R$. Then, there exists a unique $\V{y}_\lambda \in \R^M$ such that, for any $\fopt \in \mathcal{V}_\lambda$, $\fopt(x_m) = y_{\lambda, m}$ for all $m\in\{1, \ldots, M\}$. Moreover, we have that the~\eqref{eq:noisyclean} problem is equivalent to the~\eqref{eq:noiselessclean} problem with the measurement vector $\V{y}_0 = \V{y}_\lambda$, \ie \begin{equation} \label{eq:noisy_constrained} \mathcal{V}_{\lambda} = \argmin_{\substack{f \in \BV \\ f(x_m) = y_{\lambda, m}, \ m=1, \ldots, M}} \lVert \Op{D}^2 f \rVert_{\Spc{M}}. \end{equation} \end{proposition} The proof of Proposition~\ref{prop:penalized_to_constrained} is provided in Appendix~\ref{sec:penalized_to_constrained}. The implications of this result for our problem are far-reaching: all the results of Section~\ref{sec:noiseless}---in particular, uniqueness, the form of the solutions, and sparsest solutions---can be applied to the penalized problem~\eqref{eq:noisyclean}. The only---but crucial---catch is that the samples $\V{y}_\lambda\in\RR^M$ are unknown.
Fortunately, the following proposition enables us to compute them through a standard $\ell_1$-regularized discrete optimization. \begin{proposition} \label{prop:optizlambda} Assume that the hypotheses of Proposition~\ref{prop:penalized_to_constrained} are met. Then, the vector $\V{y}_\lambda\in\RR^M$ defined in Proposition~\ref{prop:penalized_to_constrained} is the unique solution of the discrete minimization problem \begin{equation} \label{eq:optizlambda} \V{y}_\lambda = \argmin_{\V{z}\in\R^M} \sum_{m=1}^M E(z_m, y_m) + \lambda \Vert \M{L} \V{z} \Vert_{1}, \end{equation} where $\M{L} \in \R^{(M-2)\times M}$ is given by \begin{align} \label{eq:matrixL} \setlength\arraycolsep{2pt} \M{L} \eqdef \begin{pmatrix} v_1 & - (v_1 + v_2) & v_2 & 0 & \cdots & 0 \\ 0 & v_2 & - (v_2 + v_3) & v_3 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & v_{M-2}& - (v_{M-2} + v_{M-1}) & v_{M-1} \end{pmatrix}, \end{align} and $\V{v} \eqdef (v_1, \ldots , v_{M-1}) \in \R^{M-1}$ is defined as $v_m \eqdef \frac{1}{x_{m+1} - x_m}$ for $m \in\{1,\ldots, M-1\}$. \end{proposition} \begin{proof} In this proof, we denote by $f_{\V{z}}$ the canonical interpolant (defined in Definition~\ref{def:canonicalsolutiondef}) of the~\eqref{eq:noiselessclean} problem with sampling locations $\V{x}$ and data vector $\V{y}_0 = \V{z}$. Let us first prove that if $\zopt\in \R^M$ is a solution of problem~\eqref{eq:optizlambda}, then $f_{\zopt}\in\BV$ is a solution of problem~\eqref{eq:noisyclean}. We then deduce that for all $m\in\{1,\ldots,M\}$, $\opt{z}_m=f_{\zopt}(x_m)= y_{\la,m}$ (where the last equality is true thanks to Proposition~\ref{prop:penalized_to_constrained}), which proves the desired result, \ie $\V y_\la = \zopt$ is the unique solution of problem~\eqref{eq:optizlambda}. Let $\V{z} \in \R^M$.
Using Equations~\eqref{eq:canonical_sol} and \eqref{eq:a_coefs}, we have that $\Vert \Op{D}^2 f_{\V{z}} \Vert_\Spc{M} = \sum_{m=2}^{M-1} \vert a_m \vert$, where $a_m = \frac{z_{m+1} - z_m}{x_{m+1} - x_m} - \frac{z_{m} - z_{m-1}}{x_{m} - x_{m-1}}$. Therefore, we have $\Vert \Op{D}^2 f_{\V{z}} \Vert_\Spc{M} = \Vert \M{L} \V{z} \Vert_{1}$, where $\M{L}$ is given by Equation~\eqref{eq:matrixL}. This yields $\sum_{m=1}^M E(f_{\V{z}}(x_m), y_m) + \lambda \Vert \Op{D}^2 f_{\V{z}} \Vert_\Spc{M} = \sum_{m=1}^M E(z_m, y_m) + \lambda \Vert \M{L} \V{z} \Vert_{1}$. In the particular case $\V{z} = \V{y}_\lambda$, we obtain the equality $\sum_{m=1}^M E(y_{\lambda, m}, y_m) + \lambda \Vert \M{L} \V{y}_\lambda \Vert_{1} = \Spc{J}_\lambda$, where $\Spc{J}_\lambda$ is the optimal cost of~\eqref{eq:noisyclean}, since by Proposition~\ref{prop:optimality_cano}, $f_{\V{y}_\lambda} \in \Spc{V}_\lambda$. This proves that the optimal value of problem~\eqref{eq:optizlambda} is less than or equal to $\Spc{J}_\lambda$. Next, let $\zopt$ be a solution of problem~\eqref{eq:optizlambda} (which exists due to the coercivity of $E(\cdot, y)$ for any $y \in \R$). From the above, we thus have that \begin{align} \Spc{J}_\lambda \leq \sum_{m=1}^M E(f_{\zopt}(x_m), y_m) + \lambda \Vert \Op{D}^2 f_{\zopt} \Vert_\Spc{M} = \sum_{m=1}^M E(\opt{z}_m, y_m) + \lambda \Vert \M{L} \zopt \Vert_{1} \leq \Spc{J}_\lambda, \end{align} which yields the desired result $f_{\zopt} \in \Spc{V}_\la$. \end{proof} \subsection{Algorithm for Reaching a Sparsest Solution of~\eqref{eq:noisyclean}} \label{sec:algonoisy} By combining results from the previous sections, we now formulate the following simple algorithmic pipeline to reach a sparsest solution of~\eqref{eq:noisyclean}.
\begin{proposition} \label{prop:algo_complete} Let $\x\in \R^M$ be the ordered sampling locations and $\V{y} \in \R^M$ with $M \geq 2$, and let $E: \R\times \R \to \R^+$ be a cost function such that $E(\cdot, y)$ is strictly convex, coercive, and differentiable for any $y \in \R$. Let the function $\fopt$ be obtained through the following two-step procedure: \begin{enumerate} \item Compute $\V{y}_\lambda\in\RR^M$ (defined in Proposition~\ref{prop:penalized_to_constrained}) by solving problem~\eqref{eq:optizlambda}; \item Apply Algorithm~\ref{alg:algo_constrained} with the measurement vector $\V{y}_0 = \V{y}_\lambda$ to compute a sparsest solution $\fopt$ of the~\eqref{eq:noiselessclean} problem given by Equation~\eqref{eq:noisy_constrained}. \end{enumerate} Then, $\fopt$ is one of the sparsest solutions to the~\eqref{eq:noisyclean} problem, with sparsity $\sparsity{\x}{\V{y}_\lambda}$ as defined in Theorem~\ref{thm:sparsest_sol}. \end{proposition} \begin{proof} Proposition~\ref{prop:penalized_to_constrained} guarantees that the problem~\eqref{eq:noisyclean} is equivalent to the problem~\eqref{eq:noiselessclean} with the measurement vector $\V{y}_0 = \V{y}_\lambda$. Proposition~\ref{prop:optizlambda} then specifies that $\V{y}_\lambda$ can be computed by solving problem~\eqref{eq:optizlambda}. Finally, as demonstrated in the proof of Theorem~\ref{thm:sparsest_sol}, the output $\fopt$ of Algorithm~\ref{alg:algo_constrained} reaches a sparsest solution of the corresponding~\eqref{eq:noiselessclean} problem, which thus has sparsity $\sparsity{\x}{\V{y}_\lambda}$. \end{proof} Proposition~\ref{prop:algo_complete} proposes a simple but very powerful algorithm. It reaches a sparsest solution of the problem~\eqref{eq:noisyclean}---a challenging task a priori---in two simple steps. The first consists of solving a standard $\ell_1$-regularized discrete problem, for which many off-the-shelf solvers such as ADMM~\cite{boyd2010distributed} are available.
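To make the discrete side of this first step concrete, the matrix $\M{L}$ of Proposition~\ref{prop:optizlambda} can be assembled directly from the sampling locations, and the identity $\Vert \M{L}\V{z}\Vert_1 = \sum_{m=2}^{M-1}\vert a_m\vert$ can be checked numerically. The plain-Python sketch below is illustrative only (the function names are ours, not part of any released code) and assumes the quadratic cost $E(z,y)=\frac{1}{2}(z-y)^2$; it evaluates the objective of~\eqref{eq:optizlambda} but does not implement the $\ell_1$ solver itself.

```python
def build_L(x):
    # L in R^{(M-2) x M}: (L z)_m is the slope change of the
    # piecewise-linear interpolant of the points (x_m, z_m).
    M = len(x)
    v = [1.0 / (x[m + 1] - x[m]) for m in range(M - 1)]
    L = [[0.0] * M for _ in range(M - 2)]
    for m in range(M - 2):
        L[m][m] = v[m]
        L[m][m + 1] = -(v[m] + v[m + 1])
        L[m][m + 2] = v[m + 1]
    return L

def discrete_objective(x, y, z, lam):
    # Objective of the l1-regularized problem, with quadratic cost E.
    fid = sum(0.5 * (zm - ym) ** 2 for zm, ym in zip(z, y))
    Lz = [sum(c * zk for c, zk in zip(row, z)) for row in build_L(x)]
    return fid + lam * sum(abs(t) for t in Lz)

# Sanity check: ||L z||_1 matches the total slope change of the interpolant.
x = [0.0, 1.0, 2.5, 3.0, 4.0]
z = [0.0, 1.0, 0.5, 0.5, 2.0]
slopes = [(z[m + 1] - z[m]) / (x[m + 1] - x[m]) for m in range(len(x) - 1)]
a = [slopes[m + 1] - slopes[m] for m in range(len(x) - 2)]
Lz = [sum(c * zk for c, zk in zip(row, z)) for row in build_L(x)]
assert all(abs(lzm - am) < 1e-12 for lzm, am in zip(Lz, a))
```

Any off-the-shelf $\ell_1$ solver can then be applied to this objective to obtain $\V{y}_\lambda$.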
The second is our proposed sparsifying procedure, which converges in finite time. The following remarks can be made concerning Proposition~\ref{prop:algo_complete}. \begin{remark} The bottleneck of the pipeline described in Proposition~\ref{prop:algo_complete} is item 1, as problem~\eqref{eq:optizlambda} admits no closed-form solution due to the non-differentiable $\ell_1$ term. It is thus typically solved using an iterative procedure that does not converge in finite time, such as ADMM. By contrast, as explained in Section~\ref{sec:algonoiseless}, item 2 requires a finite number $\Spc{O}(M)$ of operations to reach an exact solution. \end{remark} \begin{remark} Algorithm~\ref{alg:algo_constrained} still converges to a solution of the problem~\eqref{eq:noisyclean} when $E$ is only convex, and not strictly convex as assumed in Propositions~\ref{prop:penalized_to_constrained} and~\ref{prop:optizlambda}. The difference is that Proposition~\ref{prop:penalized_to_constrained} no longer holds: the vector of measurements $\V{y}_\lambda$ is no longer unique. The solution set of the constrained problem~\eqref{eq:noisy_constrained} is thus in general a strict subset of $\Spc{V}_\lambda$. Hence, the obtained solution is not necessarily a sparsest solution over the full solution set $\Spc{V}_\lambda$, but only over this subset. As for the assumption that $E$ is differentiable, it is not a requirement for Proposition~\ref{prop:algo_complete}. However, since it is needed later on in Proposition~\ref{prop:linear_regression}, we include it here to keep consistent assumptions on $E$ throughout the paper. \end{remark} \subsection{Range of the Regularization Parameter $\lambda$} \label{sec:range_lambda} In practice, the choice of the regularization parameter $\lambda$ is the critical element that determines the performance of our algorithm.
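For the quadratic cost $E(z,y) = \frac{1}{2}(z-y)^2$, the linear-regression regime that appears in Proposition~\ref{prop:linear_regression} below reduces to ordinary least squares, whose minimizer has a classical closed form. The following sketch is an illustration under that quadratic-cost assumption (the helper name is ours):

```python
def least_squares_line(x, y):
    # Closed-form minimizer of sum_m (alpha + beta*x_m - y_m)^2,
    # i.e. the linear-regression problem for a quadratic cost E.
    M = len(x)
    xbar = sum(x) / M
    ybar = sum(y) / M
    sxx = sum((xm - xbar) ** 2 for xm in x)
    sxy = sum((xm - xbar) * (ym - ybar) for xm, ym in zip(x, y))
    beta = sxy / sxx
    alpha = ybar - beta * xbar
    return alpha, beta

# Data lying exactly on the line y = 1 + 2x are recovered exactly.
alpha, beta = least_squares_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
assert abs(alpha - 1.0) < 1e-12 and abs(beta - 2.0) < 1e-12
```

According to the proposition, for any $\lambda \geq \lambda_{\text{max}}$ the penalized reconstruction is exactly this regression line.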
Although this choice is highly data-dependent, in this section, we show that the search can be restricted to a bounded interval. The lower bound is $\lambda \to 0$, which corresponds in the limit to exact interpolation, that is, the~\eqref{eq:noiselessclean} problem. The upper bound $\lambda \to +\infty$ corresponds to the linear regression regime, which is described in the following proposition. \begin{proposition}[Linear Regression Regime of~\eqref{eq:noisyclean}] \label{prop:linear_regression} Let $\x\in \R^M$ be the ordered sampling locations and $\V{y} \in \R^M$ with $M \geq 2$. Let $E: \R\times \R \to \R^+$ be a cost function such that $E(\cdot, y)$ is strictly convex, coercive, and differentiable for any $y \in \R$. Then, the following properties hold. \begin{enumerate} \item There is a unique solution $(\opt \alpha, \opt \beta) \in \R^2$ to the linear regression problem \begin{align} \label{eq:linear_regression} (\opt \alpha, \opt \beta) \eqdef \argmin_{(\alpha, \beta) \in \R^2} \sum_{m=1}^M E(\alpha + \beta x_m, y_m). \end{align} \end{enumerate} We can thus define the value \begin{align} \label{eq:lambda_max} \lambda_{\text{max}} \eqdef \left\Vert {\M{L}^T}^\dagger \begin{pmatrix}\partial_1 E(\opt \alpha + \opt \beta x_1, y_1) \\ \vdots \\ \partial_1 E(\opt \alpha + \opt \beta x_M, y_M) \end{pmatrix} \right\Vert_\infty, \end{align} where $\partial_1 E$ denotes the partial derivative with respect to the first variable of $E$, the matrix ${\M{L}^T}^\dagger$ denotes the pseudoinverse of $\M{L}^T$, and $\M{L}$ is defined as in~\eqref{eq:matrixL}. \begin{enumerate} \setcounter{enumi}{1} \item For any $\lambda \geq \lambda_{\text{max}}$, the solution to the discrete problem~\eqref{eq:optizlambda} is given by $\V{y}_\lambda = \opt \alpha \V{1} + \opt \beta \x$, where $\V{1} \eqdef (1, \ldots , 1) \in \R^M$.
\item For any $\lambda \geq \lambda_{\text{max}}$, the solution to the~\eqref{eq:noisyclean} problem is unique and is the linear function $f_{\text{max}}$ given by $f_{\text{max}}(x) \eqdef \opt \alpha + \opt \beta x$. \end{enumerate} \end{proposition} The proof of Proposition~\ref{prop:linear_regression} is given in Appendix~\ref{sec:linear_regression_proof}. Proposition~\ref{prop:linear_regression} guarantees that the range of $\lambda$ can be restricted to the interval $(0, \lambda_{\text{max}}]$: indeed, all values $\lambda \geq \lambda_{\text{max}}$ lead to linear regression. Moreover, the value of $\lambda_{\text{max}}$ given in \eqref{eq:lambda_max} only depends on the data $\x, \V{y} \in \R^M$ and is easy to compute numerically---the most costly step being the computation of the pseudoinverse ${\M{L}^T}^\dagger$. Note that item 2 in Proposition~\ref{prop:linear_regression}, which stems from duality theory, is a generalization of a well-known result for the LASSO problem~\cite[Proposition 1.3]{bach2011optimization}, which plays a crucial role in the homotopy method~\cite{osborne2000lasso}. The difference here is the presence of a non-invertible regularization matrix $\M{L}$ in problem~\eqref{eq:optizlambda}, which requires additional arguments in the proof. \section{The Solutions of~\eqref{eq:noiselessclean}} \label{sec:noiseless} In this section, we consider the optimization problem~\eqref{eq:noiselessclean}, where the $x_m$, $m\in\{1,\ldots,M\}$, are distinct and ordered sampling locations, and $\obs \in \R^M$ is a fixed measurement vector. This setting is especially relevant when the measurements $y_{0,m}$ are exactly the values of the input signal at locations $x_m$ (noiseless case).
The solution set is \begin{equation}\label{eq:noiselessclean} \mathcal{V}_0 \eqdef \underset{\substack{f \in \BV \\ f(x_m) = y_{0,m}, \ m\in\{1,\ldots,M\}}}{\arg \min} \lVert \Op{D}^2 f \rVert_{\mathcal{M}}, \tag{\gCBP} \end{equation} and is known to admit at least one piecewise-linear solution due to Theorem~\ref{theo:RTweneed}. \subsection{Canonical Solution and Canonical Dual Certificate} \label{sec:canonicalstuff} In what follows, we identify the complete set of solutions of~\eqref{eq:noiselessclean}. This allows us to fully determine in which cases this optimization problem admits a unique solution. Our analysis is based on the construction of a pair $(\fcano, \etacano) \in \BV \times \Spc{C}_0(\R)$ that satisfies the conditions of Proposition~\ref{prop:cns-sol-constrained}, which we call the canonical solution and the canonical dual certificate, respectively. The former is simply the function that connects the points $\P0{m} = \Point{x_m}{y_{0,m}}$. \begin{definition}[Canonical Interpolant] \label{def:canonicalsolutiondef} Let $\x \in \R^M$ be the ordered sampling locations and $\obs \in \R^M$ with $M \geq 2$. We define $\fcano$ as the unique piecewise-linear spline that interpolates the data points with the minimum number of knots, \ie such that \begin{itemize} \item $\fcano (x_m) = y_{0,m}$ for any $m \in \{1, \ldots , M \}$ and \item $\fcano$ has at most $(M-2)$ knots which form a subset of $\{x_m; 2\leq m\leq M-1\}$. \end{itemize} We refer to $\fcano$ as the \emph{canonical interpolant}. \end{definition} The existence and uniqueness of $\fcano$ in Definition \ref{def:canonicalsolutiondef} simply follow from the number of degrees of freedom of a piecewise-linear spline whose knots are known. The canonical interpolant is of the form \begin{equation} \label{eq:canonical_sol} \fcano(x) = a_1 x + a_M + \sum_{m=2}^{M-1} a_m (x - x_m)_+ \end{equation} with $\V{a} = (a_1, \ldots , a_M )\in \R^M$.
By definition, $\fcano$ is linear on the interval $(x_m, x_{m+1})$ for $m \in \{1, \ldots , M-1\}$. The interpolatory conditions $\fcano(x_m) = y_{0,m}$ and $\fcano(x_{m+1}) = y_{0,m+1}$ then imply that its slope is $s_m = \frac{y_{0,m+1}-y_{0,m}}{x_{m+1} - x_m}$. On the other hand, from~\eqref{eq:canonical_sol} we get that $s_m = a_1 + \cdots + a_m$. This implies that $a_1 = s_1$ and that $a_m = s_m - s_{m-1}$ for $m\in\{2, \ldots, M-1\}$. Finally, the equation $\fcano(x_1) = y_{0,1}$ yields $a_M = y_{0,1} - a_1 x_1$. Consequently, the vector $\V{a} \in \R^M$ in~\eqref{eq:canonical_sol} is given by \begin{align} \label{eq:a_coefs} \begin{cases} a_1 = \frac{y_{0,2}-y_{0,1}}{x_2 - x_1}, \\ a_m = \frac{y_{0,m+1}-y_{0,m}}{x_{m+1} - x_m} - \frac{y_{0,m}-y_{0,m-1}}{x_m - x_{m-1}}, \quad \forall m\in\{2, \ldots, M-1\}, \\ a_M = y_{0,1} - \frac{y_{0,2}-y_{0,1}}{x_2 - x_1} x_1. \end{cases} \end{align} In order to prove that $\fcano$ is always a solution of~\eqref{eq:noiselessclean}, we construct a particular dual pre-certificate $\etacano$. \begin{proposition}[Canonical Pre-Certificate] \label{prop:canocertif} Let $\x\in \R^M$ be the ordered sampling locations and $\obs \in \R^M$. Let $\V{a} \in \R^M$ be the vector defined by~\eqref{eq:a_coefs}. There exists a unique piecewise-linear spline $\etacano$ given by \begin{align} &\etacano \eqdef \sum_{m=1}^M \dualvarm \green{x_m-\cdot} \qwithq \dualvar=(c_1, \ldots, c_M) \in\RR^M, \label{eq:condition1etacano} \\ &\dotp{\dualvar}{\un} = \dotp{\dualvar}{\x} = 0,\\ &\forall m\in\{2,\ldots,M-1\}, \ \etacano(x_m)=\sign(a_m). \label{eq:condition3etacano} \end{align} with the convention $\sign(0)=0$. Moreover, since $\etacano(x) = 0$ for $x \leq x_1$ and $x \geq x_M$, we have $\etacano \in \Co$ and $\normi{\etacano} = 1$. Hence, $\etacano$ is a dual pre-certificate in the sense of Definition~\ref{def:dual-certif}.
\end{proposition} \begin{figure}[t] \subfloat[Canonical solution $\fcano$]{\includegraphics[width=0.5\linewidth]{can_sol.eps}} \subfloat[Canonical dual certificate $\etacano$]{\includegraphics[width=0.5\linewidth]{can_cert.eps}} \caption{Example of a canonical solution and canonical dual certificate for $M=6$ with $x_m = m-1$. We have $a_2<0$, $a_3=0$, $a_4<0$, and $a_5>0$, where the $a_m$ are defined in \eqref{eq:a_coefs}.} \label{fig:cano_example} \end{figure} \begin{proof} The existence and uniqueness of such a spline follow from the same argument as for $\fcano$, applied to the data points $(x_1-1, 0)$, $(x_1, 0)$, $(x_m, \sign(a_m))$ for $m\in\{2, \ldots, M-1\}$, $(x_M, 0)$ and $(x_M+1, 0)$. Note that the points $(x_1-1, 0)$ and $(x_M+1, 0)$ at the boundaries add two additional interpolation constraints to~\eqref{eq:condition3etacano}. Moreover, they imply that $\etacano$ does not have a linear term and is thus of the form~\eqref{eq:condition1etacano}. Next, we notice that for $x\leq x_1$, we have $\etacano(x) = \langle \V{c}, \x \rangle - \langle \V{c}, \V{1} \rangle x = 0$, due to $\langle \V{c}, \x \rangle = \langle \V{c}, \V{1} \rangle = 0$. For $x\geq x_M$, $(x_m - x)_+ = 0$ for every $m \in\{1,\ldots , M\}$, hence $\etacano(x) = 0$. Then, as a piecewise-linear spline with compact support, $\etacano$ clearly belongs to $\Co$. Since it is piecewise-linear and compactly supported, $\etacano$ attains its maximum and minimum values at its knots. In particular, $\lVert \etacano \rVert_{\infty} = \max_{m \in\{1,\ldots , M\}} \lvert \etacano(x_m) \rvert = 1$. \end{proof} We now prove that the pair $(\fcano, \etacano) \in \BV \times \Spc{C}_0(\R)$ satisfies the conditions of Proposition~\ref{prop:cns-sol-constrained}. Although the fact that $\fcano$ is a solution to~\eqref{eq:noiselessclean} is known \cite{koenker1994quantile, mammen1997locally} and is significant in its own right, the key element of this result is the construction of the dual certificate $\etacano$.
The latter will be essential to fully describe the solution set $\Spc{V}_0$. \begin{proposition} \label{prop:optimality_cano} Let $\x\in \R^M$ be the ordered sampling locations and $\obs \in \R^M$. The canonical interpolant $\fcano$ defined in Definition~\ref{def:canonicalsolutiondef} is a solution of~\eqref{eq:noiselessclean} and $\etacano$, defined in Proposition~\ref{prop:canocertif}, is a dual certificate as defined in Proposition \ref{prop:cns-sol-constrained}. \end{proposition} \begin{proof} By construction, the interpolation conditions $\fcano(x_m) = y_{0,m}$ for all $m\in\{1,\ldots,M\}$ are satisfied. Moreover, thanks to Proposition~\ref{prop:canocertif}, $\etacano$ is a dual pre-certificate. By Proposition~\ref{prop:cns-sol-constrained}, it remains to prove that \begin{align}\label{eq:oc-etacano} \ssupp{\Op D^2 \fcano}\subset\ssat\etacano, \end{align} from which we deduce both that $\fcano$ is a solution of~\eqref{eq:noiselessclean} and that $\etacano$ is a dual certificate. Since, again by construction, $\etacano(x_m)=\sign(a_m)$ for all $m\in\{2,\ldots,M-1\}$ and $\Op D^2 \fcano=\sum_{m=2}^{M-1} a_m \dirac{x_m}$, this proves~\eqref{eq:oc-etacano}. \end{proof} Due to Proposition~\ref{prop:optimality_cano}, we call $\fcano$ the \emph{canonical solution} and $\etacano$ the \emph{canonical dual certificate} of the optimization problem~\eqref{eq:noiselessclean}. We show an example of such functions for given data points $(x_m, y_{0,m})_{m\in\{1,\ldots,6\}}$ in Figure~\ref{fig:cano_example}. Notice that the points $\P0{2}$, $\P0{3}$, and $\P0{4}$ are aligned, which implies that $a_3=0$, where the $a_m$ are defined in~\eqref{eq:a_coefs}. \subsection{Characterization of the Solution Set} \label{sec:solutionnoiseless} Although identifying a solution $\fcano$ to~\eqref{eq:noiselessclean} is an important first step, this solution is not unique in general.
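The coefficients $\V{a}$ of~\eqref{eq:a_coefs} drive both the evaluation of $\fcano$ and the uniqueness test that follows. The plain-Python sketch below (illustrative only, with function names of our own) computes these coefficients, evaluates the canonical interpolant, and checks the sign criterion $a_m a_{m+1} \leq 0$ of Proposition~\ref{prop:uniqueness} below:

```python
def canonical_coefficients(x, y):
    # Vector a of the closed form: a_1 = first slope, the interior a_m
    # are slope changes, and a_M is the intercept of the first segment.
    M = len(x)
    s = [(y[m + 1] - y[m]) / (x[m + 1] - x[m]) for m in range(M - 1)]
    return [s[0]] + [s[m] - s[m - 1] for m in range(1, M - 1)] + [y[0] - s[0] * x[0]]

def f_cano(t, x, a):
    # Canonical interpolant: a_1*t + a_M + sum over interior knots of a_m (t - x_m)_+.
    M = len(x)
    return a[0] * t + a[M - 1] + sum(a[m] * max(t - x[m], 0.0) for m in range(1, M - 1))

def solution_is_unique(x, y):
    # Sign criterion: the solution is unique iff consecutive interior
    # coefficients never share a (nonzero) sign.
    interior = canonical_coefficients(x, y)[1:-1]
    return all(am * an <= 0 for am, an in zip(interior, interior[1:]))

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.0, 1.0, 0.0, 1.0, 0.0]   # zigzag: slope changes alternate in sign
a = canonical_coefficients(x, y)
assert all(abs(f_cano(xm, x, a) - ym) < 1e-12 for xm, ym in zip(x, y))
assert solution_is_unique(x, y)
# Strictly convex data: consecutive a_m share a sign, so non-unique.
assert not solution_is_unique(x, [0.0, 0.0, 1.0, 3.0, 6.0])
```

The zigzag example has alternating slope changes and hence a unique solution, whereas strictly convex data make consecutive interior coefficients share a sign, so the solution set is non-trivial.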
We characterize the case of uniqueness in Proposition~\ref{prop:uniqueness}, and then provide a complete description of the solution set when the solution is not unique in Theorem~\ref{theo:sol_set}. We shall see that the canonical dual certificate $\etacano$ plays an essential role regarding these issues. \begin{proposition}[Uniqueness Result for~\eqref{eq:noiselessclean}] \label{prop:uniqueness} Let $\x\in \R^M$ be the ordered sampling locations and $\obs \in \R^M$. Then, the following conditions are equivalent. \begin{enumerate} \item~\eqref{eq:noiselessclean} has a unique solution. \item The canonical dual certificate $\etacano$ (defined in Proposition \ref{prop:canocertif}) is non-degenerate (see Definition \ref{def:non-degen}). \item For all $m \in\{2,\ldots, M-2\}$, $a_m a_{m+1} \leq 0$, where $\V{a}\in\RR^M$ is given by~\eqref{eq:a_coefs}. \end{enumerate} \end{proposition} \begin{proof} The equivalence $2. \Leftrightarrow 3.$ comes from the fact that $\etacano$ is non-degenerate if and only if it never saturates at $1$ or $-1$ between two consecutive knots. This is equivalent to item 3 because for all $m \in\{2,\ldots, M-1\}$, $\etacano(x_{m}) = \sign(a_{m})$. The implication $2. \Rightarrow 1.$ is given by Proposition~\ref{prop:general-uniqueness}. We now prove the reverse implication $1. \Rightarrow 2.$ by contraposition. We thus assume that $\etacano$ is degenerate, and wish to prove that~\eqref{eq:noiselessclean} has multiple solutions. By the equivalence $2. \Leftrightarrow 3.$, there exists an index $m \in \{2, \ldots, M-2 \}$ such that $a_{m} a_{m+1} > 0$. We now invoke the following lemma (illustrated in Figure~\ref{fig:mountain}) that plays an important role throughout the paper. \begin{lemma} \label{lem:mountain} Let $\x\in \R^M$ be the ordered sampling locations, and $\V{y}_0 \in \R^M$ with $M \geq 4$. Let $m \in \{ 2, \ldots, M-2 \}$ be an index such that $a_m a_{m+1} > 0$, where $\V{a}\in\RR^M$ is defined as in~\eqref{eq:a_coefs}.
Then, the lines $(\P0{m-1}, \P0{m})$ and $(\P0{m+1}, \P0{m+2})$ intersect at a point $\mathrm{\widetilde{P}} = \Point{\tilde{\tau}}{\tilde{y}}$ such that $x_m < \tilde{\tau} < x_{m+1}$. Moreover, the piecewise-linear spline $\fopt$ defined by \begin{align} \label{eq:mountain} \fopt(x) \eqdef \begin{cases} \frac{y_{0,m} - y_{0,m-1}}{x_m - x_{m-1}} (x- x_{m-1})+ y_{0,m-1}, & \text{for }x_m < x \leq \tilde{\tau} \\ \frac{y_{0,m+2} - y_{0,m+1}}{x_{m+2} - x_{m+1}} (x- x_{m+1})+ y_{0,m+1}, & \text{for }\tilde{\tau} < x < x_{m+1} \\ \fcano(x) & \text{for }x\not\in (x_m, x_{m+1}), \end{cases} \end{align} which has no knots at $x_m$ or $x_{m+1}$, is a solution of~\eqref{eq:noiselessclean}. \end{lemma} \begin{proof} Let $I_0 = \{2, \ldots, M-1 \} \setminus \{ m, m + 1 \}$. We then define \begin{align} \fopt(x) \eqdef a_1 x + a_M + \sum_{m' \in I_0} a_{m'} (x - x_{m'})_+ + \tilde{a} (x - \tilde{\tau})_+, \end{align} where $\tilde{a} = a_{m} + a_{m + 1}$ and $\tilde{\tau} = \frac{a_{m}x_{m} + a_{m+1}x_{m+1}}{\tilde{a}}$. By definition, $\tilde{\tau}$ is a barycenter of $x_{m}$ and $x_{m+1}$ with weights $\frac{a_{m}}{\tilde{a}}$ and $\frac{a_{m+1}}{\tilde{a}}$. Since $a_{m}$ and $a_{m+1}$ have the same (nonzero) sign, these weights are in the interval $(0,1)$, and thus $\tilde{\tau} \in (x_{m}, x_{m+1})$. Moreover, $\fopt$ has no knot at $x_m$ or $x_{m+1}$, so it must follow the line $(\P0{m-1}, \P0{m})$ on the interval $[x_m, \tilde{\tau}]$ and the line $(\P0{m+1}, \P0{m+2})$ on the interval $[\tilde{\tau}, x_{m+1}]$, which conforms with the first two lines of~\eqref{eq:mountain}. Due to the continuity of $\fopt$, these lines therefore intersect at the point $\mathrm{\widetilde{P}} = \Point{\tilde{\tau}}{\tilde{y}} = \Point{\tilde{\tau}}{\fopt(\tilde{\tau})}$. Next, for $x \leq x_{m}$, we have $a_{m} (x - x_{m})_+ + a_{m+1}(x - x_{m+1} )_+ = \tilde{a} (x - \tilde{\tau})_+ = 0$.
Similarly, for $x \geq x_{m+1}$, we have $a_{m} (x - x_{m})_+ + a_{m+1}(x - x_{m+1} )_+ = \tilde{a} (x - \tilde{\tau})_+ = \tilde{a} (x - \tilde{\tau})$ since $x\geq \tilde{\tau}$. Therefore, for any $x \not\in (x_{m}, x_{m+1})$, we have $\fcano(x) = \fopt(x)$, which conforms with the third line in~\eqref{eq:mountain}. This also implies that $\fopt(x_{m'}) = \fcano(x_{m'}) = y_{0,m'}$ for all $m'\in\{1,\ldots,M\}$. Moreover, we have $\Vert \Op{D}^2 \fcano \Vert_\Spc{M} = \sum_{m'=2}^{M-1} \vert a_{m'} \vert = \sum_{m'\in I_0} \vert a_{m'} \vert + \vert \tilde{a} \vert = \Vert \Op{D}^2 \fopt \Vert_\Spc{M}$, where the middle equality holds because $\vert \tilde{a} \vert = \vert a_{m} \vert + \vert a_{m+1} \vert$, since $a_m$ and $a_{m+1}$ have the same sign. Therefore, $\fopt$ has the same measurements and regularization cost as $\fcano$, which implies that it is also a solution of~\eqref{eq:noiselessclean}. \end{proof} Since $\fopt$ defined in Lemma~\ref{lem:mountain} is a solution to \eqref{eq:noiselessclean} such that $\fopt \neq \fcano$, \eqref{eq:noiselessclean} has multiple solutions, which concludes the proof. \end{proof} To the best of our knowledge, Proposition~\ref{prop:uniqueness} is a new result. A similar uniqueness result is presented in \cite[Theorem 4.2]{pinkus1988smoothest}, but with more restrictive conditions than item 3. It follows from Proposition~\ref{prop:uniqueness} that when $M = 3$, the solution of~\eqref{eq:noiselessclean} is always unique because the certificate is always non-degenerate, and is given by $\fcano$. We go much further in Theorem~\ref{theo:sol_set} by providing the full characterization of the solution set when $M\geq 4$. \begin{theorem}[Characterization of the Solution Set of~\eqref{eq:noiselessclean}] \label{theo:sol_set} Let $\x\in \R^M$ be the ordered sampling locations and $\obs \in \R^M$ with $M \geq 4$, and let $\fcano$ and $\etacano$ be the functions defined in Definition \ref{def:canonicalsolutiondef} and Proposition \ref{prop:canocertif} respectively.
A function $\fopt \in \BV$ is a solution of~\eqref{eq:noiselessclean} if and only if $\fopt(x_m) = y_{0,m}$ for $m\in\{1,\ldots,M\}$, and the following conditions are satisfied for $m\in\{2,\ldots,M-2\}$: \begin{itemize} \item $\fopt = \fcano$ in $[x_m,x_{m+1}]$ if $|\etacano| < 1$ in $(x_m,x_{m+1})$; \item $\fopt$ is convex in $[x_{m-1},x_{m+2}]$ if $\etacano = 1$ in $[x_m,x_{m+1}]$; \item $\fopt$ is concave in $[x_{m-1},x_{m+2}]$ if $\etacano = -1$ in $[x_m,x_{m+1}]$; \item $\fopt = \fcano$ in $(-\infty, x_2)$ and $(x_{M-1}, + \infty)$. \end{itemize} \end{theorem} To illustrate Theorem~\ref{theo:sol_set}, a simple example with $M=4$ data points for which the solution is not unique is given in Figure~\ref{fig:mountain}. Indeed, the canonical dual certificate saturates at $-1$ in the interval $[1, 2]$. Therefore, by Theorem~\ref{theo:sol_set}, any function that coincides with $\fcano$ in $\R\setminus [1, 2]$ and that is concave in the interval $[0, 3]$ is a solution. This includes the sparsest solution (with a single knot), as well as non-sparse solutions, \eg with a quadratic regime in $[1, 2]$ as in Figure~\ref{fig:mountain}. \begin{figure}[t] \centering \subfloat[Various solutions]{\includegraphics[width=0.5\linewidth]{mountain.eps}} \subfloat[Canonical dual certificate]{\includegraphics[width=0.5\linewidth]{mountain_cert.eps}} \caption{Example with $M=4$ of a non-unique solution ($\etacano$ saturates at $-1$). An example of a non-sparse solution with a quadratic regime in $[1, 2]$ is given.} \label{fig:mountain} \end{figure} \begin{proof}[Proof of Theorem~\ref{theo:sol_set}] Let $\fopt$ be a solution of~\eqref{eq:noiselessclean}. According to Proposition~\ref{prop:optimality_cano}, $\etacano$ is a dual certificate. According to Proposition \ref{prop:cns-sol-constrained-fixed-dual-certif}, we therefore have that $ \ssupp {\mathrm{D}^2 \fopt } \subset \ssat \etacano$, meaning that $\mathrm{D}^2 \fopt = 0$ on the complement $\ssat \etacano^c$ of $\ssat \etacano$.
In particular, we have that $(-\infty, x_2] \subset \ssat \etacano^c$, hence $\fopt$ is linear on this interval. The interpolation constraints $\fopt(x_1) = \fcano (x_1)$ and $\fopt(x_2) = \fcano (x_2)$ then imply that $\fopt = \fcano$ on $(-\infty, x_2]$. The same argument holds for the interval $[x_{M-1}, + \infty)$ and any interval $(x_m, x_{m+1})$ on which $\etacano$ does not saturate. Assume now that $[x_m,x_{m+1}]\subset \satp \etacano$; that is, $\etacano = 1$ on $[x_m,x_{m+1}]$. We use the Jordan decomposition of $\mathrm{D}^2 \fopt = w = w_+ - w_{-}$ where $w_+$ and $w_-$ are positive measures. By~\eqref{eq:duality-interpolation-expanded}, we know that $w_{-} = 0$ on $[x_m,x_{m+1}]$ because its support is included in $\satn{\etacano}$. Hence, on this interval, $\mathrm{D}^2 \fopt = w = w_+$ is a positive measure, implying that $\mathrm{D} \fopt$ is increasing and therefore that $\fopt$ is convex on $[x_m,x_{m+1}]$. Now, if $(x_{m-1},x_{m})\subset \satp{\etacano}^c\cap\satn{\etacano}^c$ then, as above, $\D^2 f_{|(x_{m-1},x_{m})}^* = 0$. Otherwise, by continuity of $\etacano$, we have $(x_{m-1},x_{m}) \subset \satp\etacano$, hence $\D^2 f_{|(x_{m-1},x_{m})}^*\geq 0$. As a result, $\fopt$ is convex on $(x_{m-1},x_{m+1}]$. The same argument proves that $\fopt$ is convex on $[x_m, x_{m+2})$, and therefore on the whole interval $(x_{m-1},x_{m+2})$. Suppose conversely that $\fopt$ satisfies all the conditions of Theorem \ref{theo:sol_set}. Let us prove that it is a solution of~\eqref{eq:noiselessclean}. By Proposition~\ref{prop:cns-sol-constrained-fixed-dual-certif}, we just need to check that $\fopt$ satisfies $\ssupp{\D^2 \fopt}\subset\ssat\etacano$, since by construction, $\fopt(x_m)=y_{0,m}$. By definition of $\etacano$, we have $\D^2 \fopt = 0$ on $\satp{\etacano}^c\cap\satn{\etacano}^c$ (because $\fopt$ is equal to $\fcano$, which is linear on that set).
Moreover, $\D^2 \fopt\geq 0$ on $\satp\etacano$ (because by assumption, $\fopt$ is convex on intervals where $\etacano = 1$) and $\D^2 \fopt\leq 0$ on $\satn\etacano$ (because $\fopt$ is concave on intervals where $\etacano = -1$). This means that $\supp{w_+} \subset \satp{\etacano}$ and $\supp{w_-} \subset \satn{\etacano}$, where $\mathrm{D}^2 \fopt = w_+ - w_{-}$ is again the Jordan decomposition of $\mathrm{D}^2 \fopt$. Finally, as expected, we have that \begin{equation} \ssupp{\mathrm{D}^2 \fopt} = \supp {w_+} \times \{1\} \cup \supp{w_-} \times \{-1\} \subset \satp{\etacano} \times \{1\} \cup \satn{\etacano} \times \{-1\} = \ssat \etacano, \end{equation} hence $\fopt$ is a solution of~\eqref{eq:noiselessclean}. \end{proof} \begin{corollary} \label{coro:uncountable} If~\eqref{eq:noiselessclean} has more than one solution, then it has an uncountable number of solutions. \end{corollary} \begin{proof} If the solution is not unique, then the dual certificate $\etacano$ is degenerate, and therefore saturates over some interval $(x_m,x_{m+1})$. Then, Theorem~\ref{theo:sol_set} characterizes the whole set of solutions, which is clearly uncountably infinite. \end{proof} Corollary~\ref{coro:uncountable} is the continuous counterpart of the well-known fact that the discrete LASSO either admits a unique solution or an uncountable number of solutions~\cite[Lemma 1]{Tibshirani2013lasso}. Even with infinitely many solutions, we are able to delimit the geometric domain that contains the graphs of all solutions by exploiting the local convexity/concavity. We recall that $\mathrm{P}_{0,m} = [x_m \ y_{0,m}]^T$ for $m\in\{1,\ldots,M\}$, and that for $\mathrm{A},\mathrm{B}\in \R^2$, we denote by $(\mathrm{A},\mathrm{B})$ the line joining $\mathrm{A}$ and $\mathrm{B}$.
Then, for $M \geq 4$, we consider the set of indices \begin{equation} \label{eq:indicesnoparallel} \mathcal{X} \eqdef \mathcal{X}(\x, \obs) \eqdef \left\{m \in \{2, \ldots, M-2\}; \ a_m a_{m+1} > 0 \right\}, \end{equation} where we recall that $ a_m = \frac{y_{0,m+1}-y_{0,m}}{x_{m+1} - x_m} - \frac{y_{0,m}-y_{0,m-1}}{x_m - x_{m-1}}$ (see~\eqref{eq:a_coefs}). The slope condition $a_m a_{m+1}> 0$ in~\eqref{eq:indicesnoparallel} is equivalent to the fact that the lines $(\mathrm{P}_{0,m-1}, \mathrm{P}_{0,m})$ and $(\mathrm{P}_{0,m+1}, \mathrm{P}_{0,m+2})$ are not parallel (otherwise we would have that $a_m = -a_{m+1}$, hence $a_m a_{m+1} \leq 0$) and that their intersection point, which we denote by $\widetilde{\mathrm{P}}_m= [\tilde{\tau}_m \ \tilde{y}_m]^T$, satisfies $x_m \leq \tilde{\tau}_m \leq x_{m+1}$ according to Lemma~\ref{lem:mountain}. We can thus introduce the triangles $\Delta_m$, whose vertices are the points $\mathrm{P}_{0,m}$, $\widetilde{\mathrm{P}}_m$, and $\mathrm{P}_{0,m+1}$. Theorem~\ref{thm:limitdomain} relates the graph of any solution $\fopt\in\BV$ of~\eqref{eq:noiselessclean} to the graph of $\fcano$ and to the triangles $\Delta_m$. \begin{theorem}[Geometric Domain of the Graph of Solutions of~\eqref{eq:noiselessclean}] \label{thm:limitdomain} Let $\x\in \R^M$ be the ordered sampling locations and $\obs \in \R^M$ with $M \geq 4$. Then, we have \begin{equation} \label{eq:domain} \cup_{\fopt \in \mathcal{V}_0} \mathcal{G}(\fopt) = \mathcal{G}(\fcano) \cup \left( \cup_{m \in \mathcal{X}} \Delta_m \right), \end{equation} where $\fcano$ is defined in Definition~\ref{def:canonicalsolutiondef}, $\Spc{X}$ is defined in~\eqref{eq:indicesnoparallel}, and the triangles $\Delta_m$ are defined in the paragraph above.
\end{theorem} \begin{figure}[t] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0,0) {\includegraphics[width=0.5\linewidth]{graph_location.eps}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \draw (0.45,0.67) node {$\Delta_2$}; \draw (0.62,0.75) node {$\Delta_3$}; \end{scope} \end{tikzpicture} \caption{Example with $M=5$ of the geometric domain $\cup_{\fopt \in \mathcal{V}_0} \mathcal{G}(\fopt)$ containing all the solutions to~\eqref{eq:noiselessclean}. We have $\Spc{X} = \{ 2, 3\}$ and thus two triangles $\Delta_m$; all solutions follow $\fcano$ everywhere else.} \label{fig:graph_location} \end{figure} The relation~\eqref{eq:domain} reveals the smallest possible geometric domain containing all the graphs of the solutions of~\eqref{eq:noiselessclean}. To obtain a solution of~\eqref{eq:noiselessclean}, one just needs to follow the graph of $\fcano$ outside the triangles $\Delta_m$ and take a suitable convex or concave function inside them. An example of this domain is given in Figure~\ref{fig:graph_location} with $M=5$ and $\#\Spc{X} = 2$ triangles (this same example is revisited in Figure~\ref{fig:2_sat}). The proof of Theorem~\ref{thm:limitdomain} is given in Appendix~\ref{app:limitdomain}. Next, Section~\ref{sec:sparsestsolutions} is dedicated to the study of the sparsest piecewise-linear solutions of~\eqref{eq:noiselessclean}. \section{The Space $\BV$} \label{app:1} As a complement to the characterization of the space $\BV$ in Section~\ref{sec:BVspace}, we summarize its main properties in Proposition~\ref{prop:BV2}, revealing its Banach-space structure. The construction of the native space for a general spline-admissible operator $\mathrm{L}$ (we consider here the case $\mathrm{L} = \D^2$) is developed in \cite{Unser2019native}. \begin{proposition}[Properties of $\BV$] \label{prop:BV2} The space $\BV$ has the following properties.
\begin{enumerate} \item Any function $f\in \BV$ is continuous and satisfies $f(x) = \mathcal{O}(x)$ at infinity. Affine functions $f$ such that $f(x) = a x + b$ for $a,b \in \R$ are elements of $\BV$. \item The linear space $\BV$ is isomorphic to $\Radon \times \R^2$ via the relation \begin{equation} \label{eq:bijection} f \mapsto \left( \D^2 f , (f(0), f(1)-f(0)) \right). \end{equation} \item The space $\BV$ is a Banach space for the norm \begin{equation} \label{eq:normBV} \lVert f \rVert_{\mathrm{BV}^{(2)}} \eqdef \lVert \mathrm{D}^2 f \rVert_{\mathcal{M}} + \sqrt{f(0)^2 + (f(1) - f(0))^2}. \end{equation} \item For any $w \in \Radon$, there exists a unique $f \in \BV$ such that $\D^2 f = w$ and $f(0) = f(1) = 0$. \end{enumerate} \end{proposition} \begin{proof}[Proof of Proposition~\ref{prop:BV2}] A function in $\BV$ is the integral of a function of bounded variation and is therefore continuous. If $f$ is such that $\mathrm{D}^2 f \in \Radon$, then $\mathrm{D} f$ has bounded variation and is therefore bounded. Hence, \begin{equation} \lvert f(x) \rvert = \left\lvert f(0) + \int_{0}^x ( \mathrm{D} f ) (t) \mathrm{d} t \right\rvert \leq \lvert f(0) \rvert + \lVert \mathrm{D} f \rVert_{\infty} \lvert x \rvert, \end{equation} and $f(x) = \mathcal{O} (x)$ at infinity. Moreover, for an affine function $f$ such that $f(x) = \alpha + \beta x$, we obviously have that $\D^2 f = 0 \in \Radon$, hence $f \in \BV$. The relation~\eqref{eq:bijection} is clearly linear and is a bijection, since any $f \in \BV$ can be uniquely recovered from its second derivative via the specification of two boundary conditions, here the values of $f(0)$ and $f(1)$. Hence, \eqref{eq:bijection} is an isomorphism. Due to this isomorphism, $\BV$ inherits the Banach space structure of $\Radon\times \R^2$ for the norm $\lVert (w, (\alpha,\beta)) \rVert_{\mathcal{M}\times\R^2} = \lVert w \rVert_{\mathcal{M}} + \sqrt{\alpha^2 + \beta^2}$ and is hence a Banach space for the norm~\eqref{eq:normBV}.
For the last point, by definition, any $f\in \Schp$ such that $\D^2 f = w$ is in $\BV$. The set of solutions of $\mathrm{D}^2 f = w$ is then a two-dimensional affine space (two solutions differ by an affine function), and the solution is uniquely characterized by the specification of the two boundary conditions $f(0) = 0$ and $f(1) = 0$. \end{proof} In Section~\ref{sec:BVspace}, we have introduced the operator $\Itwo$. We now summarize its main properties. \begin{proposition}[Kernel of $\Itwo$] \label{prop:Itwo} For any $w \in \Radon$, $\Itwo\{w\}$ is given by \begin{equation} \label{eq:I2integral} \Itwo \{ w\} (x) \eqdef \int_{\R} g(x,y) \mathrm{d}w (y) = \langle w , g(x,\cdot) \rangle, \end{equation} where $g$ is the kernel defined over $\R^2$ as \begin{equation} \label{eq:kernelI2} g(x,y) \eqdef (x-y)_+ - (-y)_+ + x \left( (-y)_+ - (1-y)_+ \right), \end{equation} and is such that $g(x,\cdot)$ is a continuous and compactly supported function for any $x\in \R$. Then, the operator $\Itwo$ is linear and continuous from $\Radon$ to $\BV$ and satisfies the right-inverse and pseudo-left-inverse relations \begin{align} \forall w\in\Radon,& \quad \Op D^2\{ \Itwo \{w\} \} = w,\\ \forall f\in\BV, \ \forall x \in \R, & \quad f(x) = \Itwo\{ \mathrm{D}^2 \{f \} \}(x) + f(0) + (f(1) - f(0)) x. \label{eq:Irightleft} \end{align} In particular, $\Itwo$ is a right-inverse of the second-derivative $\D^2$. Moreover, any $f \in \BV$ can be uniquely decomposed as \begin{equation} \label{eq:fdecompose-bis} \forall x \in \R, \quad f(x) = \Itwo \{w\} (x) + \alpha + \beta x, \end{equation} where $w\in \Radon$ and $\alpha,\beta \in \R$ are given by \begin{equation} \label{eq:walphabeta} w = \D^2 f, \quad \alpha = f(0), \quad \text{and} \quad \beta = f(1) - f(0) . \end{equation} \end{proposition} \begin{proof}[Proof of Proposition~\ref{prop:Itwo}] We fix $x \in\R$. We easily verify that $g(x,y) = 0$ for $\lvert y \rvert \geq \max(1, \lvert x \rvert)$, hence $g(x,\cdot)$ is compactly supported.
The continuity of $g(x, \cdot)$ is obvious as a sum of continuous functions (remarking that $y \mapsto y_+$ is continuous). Therefore, $g(x,\cdot) \in \Co$ and the duality product $\langle w, g(x,\cdot) \rangle$ is well defined for any $w \in \Radon$ and $x\in \R$. For $w \in \Radon$ and $x \in \R $, we set $f(x) = \langle w , g(x,\cdot ) \rangle$. From the definition of $g$, we have, denoting by $\partial_x$ the partial derivative with respect to the first variable, \begin{equation} \partial_x^2 \{g \} (x ,y) = \delta(x-y). \end{equation} We therefore deduce that \begin{equation} \mathrm{D}^2 \{f\}(x) = \langle \partial_x^2 \{g \} (x ,\cdot) , w \rangle = \langle \delta(x-\cdot), w \rangle = w(x), \end{equation} where we use the slight abuse of notation of keeping the variable $x$ for measures (which may not be defined pointwise) to distinguish whether we operate over the first or the second variable. Moreover, we have that $g(0,y) = g(1,y) = 0$ for all $y\in \R$, which yields $f(0) = f(1) = 0$. From the definition of $\Itwo$, $\Itwo \{ w \}$ is the unique function satisfying these properties, proving that $\Itwo \{w\} (x) = f (x) = \langle w , g(x,\cdot ) \rangle$ for every $x\in \R$ and $w \in \Radon$. This shows \eqref{eq:I2integral}. Next, it is clear that $\Itwo$ is linear from $\Radon$ to $\BV$. The continuity of $\Itwo$ follows from the fact that \begin{equation} \lVert \Itwo \{w\} \rVert_{\mathrm{BV}^{(2)}} = \lVert \D^2 \Itwo \{w\} \rVert_\mathcal{M} + \sqrt{(\Itwo \{w\} (0))^2 + (\Itwo \{w\} (1) - \Itwo \{w\} (0))^2} = \lVert w \rVert_\mathcal{M}. \end{equation} The equality $\mathrm{D}^2 \Itwo \{w\} = w$ comes from the definition of $\Itwo \{w\}$. For the second relation in~\eqref{eq:Irightleft}, we remark that $\mathrm{D}^2 \{ \Itwo \D^2 \{f\} \} = \D^2 f$ by definition, hence $ \Itwo \D^2 \{f\} (x) = f(x) + \alpha + \beta x$ for every $x\in \R$ and some constants $\alpha, \beta \in \R$.
The equations $ \Itwo \D^2 \{f\} (0) = \Itwo \D^2 \{f\} (1) = 0$ then specify the constants $\alpha$ and $\beta$, which proves~\eqref{eq:Irightleft}. Finally,~\eqref{eq:fdecompose-bis} and~\eqref{eq:walphabeta} can be seen as reformulations of the right equality in~\eqref{eq:Irightleft}. The uniqueness follows from the simple fact that $\D^2 f = w$ determines $f$ when the values of $f(0)$ and $f(1)$ are fixed. \end{proof} \section{Proof of Proposition~\ref{prop:cns-sol-constrained}}\label{app:2} The forward operator considered in this paper is a sampling operator (the functions $f\in\BV$ are sampled at the locations $x_m\in\RR$ for $m\in\{1,\ldots,M\}$). Let us denote it, for the convenience of the proof, as a linear operator $\nuf:\BV\to\RR^M$ such that \begin{align}\label{eq:forward-op-nu} \forall f\in\BV, \quad \nuf(f)\eqdef(f(x_m))_{1\leq m\leq M}. \end{align} The proof of Proposition~\ref{prop:cns-sol-constrained} can be divided into several steps. First, we reformulate~\eqref{eq:noiseless} into an equivalent optimization problem thanks to the decomposition of any $f\in\BV$ given by~\eqref{eq:fdecompose}. This is stated in the next lemma. \begin{lemma}\label{lem:equivalentOP} The problem~\eqref{eq:noiseless} is equivalent to \begin{align}\label{eq:constrained-equi} \underset{(w,(\alpha, \beta))\in\Radon\times\RR^2}{\min} \ \iota_{\{\obs\}}(\nuu(w) + \alpha\un+\beta\x) + \mnorm{w}, \end{align} where $\iota_{\{\obs\}}$ is the indicator of the convex set $\{\obs\}$, which is zero at $\obs$ and $+\infty$ elsewhere, and \begin{align}\label{eq:def-nuu} \nuu \eqdef \nuf\circ\Itwo : \Radon \to \RR^M \end{align} is the modified forward operator. This equivalence is in the sense that the unique decomposition of any $f\in\BV$ as $f=\Itwo\{w\} + \al + \be(\cdot)$ with $(w,(\al,\be))\in\Radon\times\RR^2$ (see~\eqref{eq:fdecompose}) induces a bijection between the solution sets of both optimization problems.
\end{lemma} From now on, we consider the equivalent problem~\eqref{eq:constrained-equi} and analyze it using tools from duality theory. The search space $\Radon\times\RR^2$ of this optimization problem is endowed with the weak-* topology, which is defined in terms of its predual space $\Co\times\RR^2$. Using~\eqref{eq:I2integral}, the modified operator $\nuu$ can be expressed as $\nuu(w)=(\dotp{w}{g(x_m,\cdot)})_{1\leq m\leq M}$, where $g(x_m,\cdot)\in\Co$ for all $m\in\{1,\ldots,M\}$ by Proposition~\ref{prop:Itwo}. Since $\Radon$ is the dual of $\Co$, this implies that the linear functional $\nuu:\Radon\to\RR^M$ is weak-* continuous \cite[Theorem IV.20, p. 114]{Reed1980methods}. The adjoint $\nuu^* : \RR^M \to \Co$ of $\nuu$ is thus uniquely defined and is given by \begin{align}\label{eq:nuu-star} \forall \dualvar\in\RR^M, \quad \nuu^*(\dualvar)=\sum_{m=1}^M \dualvarm g(x_m,\cdot), \end{align} since $\dotp{w}{\nuu^*(\dualvar)}=\dotp{\nuu(w)}{\dualvar}=\dotp{\pa{\dotp{w}{g(x_m,\cdot)}}_{1\leq m\leq M}}{\dualvar}=\dotp{w}{\sum_{m=1}^M \dualvarm g(x_m,\cdot)}$, for all $w\in\Radon$ and $\dualvar\in\RR^M$. The second part of the proof consists in determining the dual problem of~\eqref{eq:constrained-equi}, proving that strong duality between the primal and dual problem holds (\ie that the optimal values of both problems are equal and finite) and then deriving the optimality conditions which characterize the solutions of problem~\eqref{eq:constrained-equi}. This is done in the next lemma. \begin{lemma} \label{lem:optimality-certificate} The dual problem of~\eqref{eq:constrained-equi} is given by \begin{align}\label{eq:constrained-dual} &\underset{\dualvar\in\Cc}{\sup} \dotp{\obs}{\dualvar}, \qwithq \Cc\eqdef\{\dualvar\in\RR^M;\dotp{\dualvar}{\un}=\dotp{\dualvar}{\x}=0,\ \normi{\nuu^*(\dualvar)}\leq1\}. \end{align} Moreover, it has at least one solution and strong duality holds between problems~\eqref{eq:constrained-equi} and~\eqref{eq:constrained-dual}. 
Finally, for any $(w,(\al,\be))\in\Radon\times\RR^2$ and $\V c\in\RR^M$, we have the equivalence between the following statements: \begin{enumerate} \item $(w,(\al,\be))$ is a solution of~\eqref{eq:constrained-equi} and $\V c$ is a solution of~\eqref{eq:constrained-dual}. \item $(w,(\al,\be))$ and $\V c$ satisfy the following conditions: \begin{align} &\nuu(w) + \alpha\un+\beta\x = \obs,\label{eq:ic}\\ &\dotp{\dualvar}{\un} = \dotp{\dualvar}{\x} = 0, \quad \mnorm{w} = \dotp{w}{\nuu^*(\dualvar)} \qandq \normi{\nuu^*(\dualvar)}\leq1. \label{eq:oc} \end{align} \end{enumerate} \end{lemma} \begin{proof} Let us first obtain the dual problem~\eqref{eq:constrained-dual}. The proof follows the technique of perturbed problems detailed in~\cite[Chapter 3]{Ekeland1976convex}. \paragraph{Dual problem.}Let us write the (primal) problem~\eqref{eq:constrained-equi} as \begin{align}\label{eq:primal-proof} &\underset{(w,(\alpha, \beta))\in\Radon\times\RR^2}{\min} F(w,(\alpha,\beta)) + G(\Lambda(w,(\alpha,\beta))),\\ &\qwhereq F(w,(\al,\be))\eqdef\mnorm{w}, \quad \forall \dualvar\in \RR^M, \ G(\dualvar)\eqdef \iota_{\{\obs\}}(\dualvar) \qandq \Lambda(w,(\al,\be)) \eqdef \nuu(w) + \alpha\un+\beta\x. \notag \end{align} The functions $F$ and $G$ are convex, lower semi-continuous, and proper (they are nonnegative and not identically equal to $+\infty$). By~\cite[Equation (4.18)]{Ekeland1976convex}, the dual problem of~\eqref{eq:primal-proof} is thus given by $\underset{\dualvar\in\RR^M}{\sup} -F^*(\Lambda^*(\dualvar)) - G^*(-\dualvar)$, where $F^*$ and $G^*$ are the Fenchel conjugates of $F$ and $G$ respectively, and $\Lambda^* : \RR^M\to\Co\times\RR^2$ is the adjoint of $\Lambda$.
One can check that: for all $\dualvar\in\RR^M$, $G^*(\dualvar)=\dotp{\dualvar}{\obs}$; for all $\eta\in\Co$ and $\al,\be\in\RR$, $F^*(\eta, (\al,\be))=\iota_{\normi{\cdot}\leq1}(\eta) + \iota_{\{(0,0)\}}((\al,\be))$ (with $\iota_{\normi{\cdot}\leq1}$ the indicator function of the closed unit ball in $\Co$ for the uniform norm); and, for all $\dualvar\in\RR^M$, $\Lambda^*(\dualvar)=\pa{\nuu^*(\dualvar),(\dotp{\dualvar}{\un},\dotp{\dualvar}{\x})}$. Therefore, the dual problem can be rewritten as \begin{align}\label{eq:dual-proof} &-\underset{\dualvar\in\RR^M}{\inf} \ \iota_\Cc (\dualvar) + \dotp{-\dualvar}{\obs}, \end{align} where $\Cc\subset\RR^M$ is the convex set defined in~\eqref{eq:constrained-dual}. Problem~\eqref{eq:dual-proof} is clearly the same as problem~\eqref{eq:constrained-dual}, which proves the first statement of the lemma. \paragraph{Strong duality.}To prove strong duality between problems~\eqref{eq:constrained-equi} and~\eqref{eq:constrained-dual} (\ie they have the same optimal value), we start by showing strong duality between \begin{align}\label{eq:dual-equi-sd} \underset{\dualvar\in\RR^M}{\inf} \ \iota_\Cc (\dualvar) + \dotp{-\dualvar}{\obs}, \end{align} and its dual problem. We then conclude by observing that the optimal value of the dual problem of~\eqref{eq:dual-equi-sd} is equal to the optimal value of problem~\eqref{eq:constrained-equi} up to a sign. Indeed, this last statement proves that both problems~\eqref{eq:constrained-equi} and~\eqref{eq:constrained-dual} have the same optimal value since problem~\eqref{eq:dual-equi-sd} is, up to a sign, the dual problem~\eqref{eq:constrained-dual} (which rewrites as in~\eqref{eq:dual-proof}). We thus begin by proving that strong duality holds between problem~\eqref{eq:dual-equi-sd} and its dual problem. The aim is to apply~\cite[Proposition~2.3, Chapter~3]{Ekeland1976convex}.
With the notations of~\cite{Ekeland1976convex}, let us denote the map $\Phi:\RR^M\times\Co\to\RR\cup\{+\infty\}$ as \begin{align}\label{eq:phi-perturbed} \forall (\dualvar,\eta)\in\RR^M\times\Co, \quad \Phi(\dualvar,\eta)\eqdef \dotp{-\dualvar}{\obs} + \iota_{\{(0,0)\}}\pa{(\dotp{\dualvar}{\un},\dotp{\dualvar}{\x})} + \iota_{\normi{\cdot}\leq1}(\nuu^*(\dualvar) - \eta). \end{align} This map $\Phi$ defines a perturbed problem to problem~\eqref{eq:dual-equi-sd}, since by definition, for all $\dualvar\in\RR^M$, \eq{ \Phi(\dualvar,0)=\iota_\Cc (\dualvar) + \dotp{-\dualvar}{\obs} } is the objective function of problem~\eqref{eq:dual-equi-sd}. Now let us check that the assumptions of~\cite[Proposition~2.3]{Ekeland1976convex} are satisfied for $\Phi$ and problem~\eqref{eq:dual-equi-sd}: \begin{itemize} \item $\Phi$ is convex, \item the optimal value of problem~\eqref{eq:dual-equi-sd} is finite due to the weak duality (primal-dual inequality given below) between problems~\eqref{eq:primal-proof} and~\eqref{eq:dual-proof}, which yields \eq{ -\infty < -\underset{\dualvar\in\RR^M}{\inf} \ \iota_\Cc (\dualvar) + \dotp{-\dualvar}{\obs} \leq \underset{(w,(\alpha, \beta))\in\Radon\times\RR^2}{\inf} \mnorm{w} + \iota_{\{\obs\}}(\nuu(w) + \alpha\un+\beta\x) < +\infty, } \item the map $\eta\in\Co\mapsto\Phi(\V 0, \eta) = \iota_{\normi{\cdot}\leq1}(-\eta)$ is finite and continuous at $\eta = 0\in\Co$. \end{itemize} Therefore, we deduce that strong duality holds between problem~\eqref{eq:dual-equi-sd} and its dual problem given by \begin{align}\label{eq:bidual} \underset{w\in\Radon}{\sup} \ -\Phi^*(\V{0},w), \end{align} and that this last optimization problem has at least one solution. 
Writing the map $\Phi$ as $\Phi(\dualvar,\eta)=\tilde F(\dualvar) + \tilde G(\tilde\Lambda(\dualvar)-\eta)$ with $\tilde F(\dualvar)\eqdef\dotp{-\dualvar}{\obs}+\iota_{V^\perp}(\dualvar)$, $V\eqdef\Span(\un,\x)\subset\RR^M$, $\tilde G\eqdef\iota_{\normi{\cdot}\leq1}(\cdot)$, and $\tilde\Lambda=\nuu^*$, problem~\eqref{eq:bidual} becomes $-\min_{w\in\Radon}\tilde{F}^*(\tilde\Lambda^*(w))+\tilde G^*(-w)$, \ie \begin{align} \label{eq:bidual-eq} -\min_{w\in\Radon} \iota_V(\nuu(w)+\obs)+\mnorm{w}. \end{align} We now verify that the optimal value of \begin{align}\label{eq:bidual-minus} \min_{w\in\Radon} \iota_V(\nuu(w)+\obs)+\mnorm{w}, \end{align} \ie minus the optimal value of the dual problem of~\eqref{eq:dual-equi-sd}, is equal to the optimal value of problem~\eqref{eq:constrained-equi} \begin{align} \underset{(w,(\alpha, \beta))\in\Radon\times\RR^2}{\min} \ \iota_{\{\obs\}}(\nuu(w) + \alpha\un+\beta\x) + \mnorm{w}. \end{align} Let $w\in\Radon$ be a solution of problem~\eqref{eq:bidual-minus} (which we know to exist by~\cite[Proposition~2.3]{Ekeland1976convex}). Since the objective function of problem~\eqref{eq:bidual-minus} is finite at $w$, we obtain that $\nuu(w)+\obs\in V$, \ie there exists $(\al,\be)\in\RR^2$ such that $\obs = \nuu(-w)+\al\un+\be\x$. Assume by contradiction that there exists $(\tilde w, (\tilde \al,\tilde \be))\in\Radon \times \RR^2$ that achieves a lower cost than $(-w,(\al,\be))$ in~\eqref{eq:constrained-equi}, \ie \eq{ \iota_{\{\obs\}}(\nuu(-w) + \al\un+\be\x) + \mnorm{-w} > \iota_{\{\obs\}}(\nuu(\tilde w) + \tilde\al \un + \tilde\be \x) + \mnorm{\tilde w}. } Since the left-hand side of this inequality is finite, we must have $\obs = \nuu(\tilde w) + \tilde\al \un + \tilde\be \x$ and \begin{align}\label{eq:ineq-mnorm} \mnorm{w} > \mnorm{-\tilde w}.
\end{align} Since $\nuu(-\tilde w)+\obs = \tilde\al \un + \tilde\be \x\in V$, we deduce thanks to~\eqref{eq:ineq-mnorm} that $-\tilde w$ achieves a lower cost than $w$ for problem~\eqref{eq:bidual-minus}, which contradicts the assumption on $w$. Hence, for all $(\tilde w,(\tilde\al,\tilde\be))\in\Radon\times\RR^2$, we have \eq{ \iota_{\{\obs\}}(\nuu(-w) + \al\un+\be\x) + \mnorm{-w} \leq \iota_{\{\obs\}}(\nuu(\tilde w) + \tilde\al \un + \tilde\be \x) + \mnorm{\tilde w}, } \ie $(-w, (\al,\be))\in\Radon\times\RR^2$ is a solution of problem~\eqref{eq:constrained-equi}. Therefore, we get that the optimal values of problems~\eqref{eq:bidual-minus} and~\eqref{eq:constrained-equi} are equal since \eq{ \iota_V(\nuu(w)+\obs)+\mnorm{w} = \iota_{\{\obs\}}(\nuu(-w) + \al\un+\be\x) + \mnorm{-w}. } \paragraph{Optimality conditions.}To derive the optimality conditions given in~\eqref{eq:ic} and~\eqref{eq:oc}, we apply~\cite[Proposition~2.4, Chapter~3]{Ekeland1976convex}. We have already proved that strong duality holds, and that the primal problem~\eqref{eq:constrained-equi} has at least one solution. To apply the proposition, it remains to prove that the dual problem~\eqref{eq:constrained-dual} also has at least one solution. This holds true due to the following facts: \begin{itemize} \item the objective function of problem~\eqref{eq:constrained-dual} is a continuous linear form over the convex set $\Cc$, \item the convex set $\Cc=V^\perp\cap\Dd \subset \R^M$ is compact as the intersection of the closed set $V^\perp$ and the compact set $\Dd\eqdef\{\dualvar\in\RR^M; \normi{\nuu^*(\dualvar)}\leq1\}$. The main argument to prove the compactness of $\Dd$ is that $\mbox{Im}(\nuu^*)\subset\Co$ is finite dimensional. Let us prove this formally. Consider the map $F:\RR^M\to\Ff$ given by \eq{ \forall \dualvar\in\RR^M, \quad F(\dualvar)\eqdef\sum_{m=1}^M \dualvarm g(x_m,\cdot)=\nuu^*(\dualvar) } (using~\eqref{eq:nuu-star} for the last equality), where $\Ff\eqdef \Span\pa{\{g(x_m,\cdot); 1\leq m\leq M\}}$.
Then, $F$ is \begin{itemize} \item linear; \item injective and thus bijective due to the linear independence of the family $(g(x_m,\cdot))_{1\leq m\leq M}$. This independence can be proved by observing that $(g(x_m,\cdot))_{1\leq m\leq M}$ is a family of piecewise-linear splines, each with finitely many knots, so that there exists a nonempty interval $I$ on which all the $g(x_m,\cdot)$ are linear functions; \item continuous with $\Ff\subset\Co$ endowed with the uniform norm $\normi{\cdot}$. \end{itemize} Therefore, by the bounded inverse theorem, $F^{-1}$ is continuous. Moreover, note that $\Ee\eqdef\{f\in\Ff; \normi{f}\leq 1\}$ is bounded and closed, and is thus compact (since $\Ff=\mbox{Im}(\nuu^*)$ is finite dimensional). This proves that $\Dd=F^{-1}(\Ee)$ is compact. \end{itemize} The convexity and the compactness of $\Cc$ imply that there is at least one extreme point of $\Cc$ that is a solution of problem~\eqref{eq:constrained-dual}. Hence, the assumptions of~\cite[Proposition~2.4, Chapter~3]{Ekeland1976convex} are satisfied, which implies that any solution $(w,(\al,\be))\in\Radon\times\RR^2$ of (the primal) problem~\eqref{eq:constrained-equi} and $\dualvar\in\RR^M$ of (the dual) problem~\eqref{eq:constrained-dual} are linked by the optimality conditions \begin{align} &\nuu(w) + \alpha\un+\beta\x = \obs,\\ &\dotp{\dualvar}{\un} = \dotp{\dualvar}{\x} = 0, \quad \mnorm{w} = \dotp{w}{\nuu^*(\dualvar)} \qandq \normi{\nuu^*(\dualvar)}\leq1. \end{align} Conversely, if $(w,(\al,\be))\in\Radon\times\RR^2$ and $\dualvar\in\RR^M$ satisfy the optimality conditions given above, then, again by~\cite[Proposition~2.4, Chapter~3]{Ekeland1976convex}, they are solutions of the primal and dual problems, respectively. This proves the last statement of the lemma.
\end{proof} The last intermediate result needed for the proof of Proposition~\ref{prop:cns-sol-constrained} is given in the next lemma, where we prove that any continuous function $\nuu^*(\dualvar)\in\Co$ with $\dualvar\in\RR^M$ satisfying the orthogonality conditions given in~\eqref{eq:oc} is a piecewise-linear spline whose knots are located at the sampling points $\x=(x_m)_{1\leq m\leq M}$. \begin{lemma}\label{lem:struct-certif} Let $\dualvar\in\RR^M$ be such that $\dotp{\dualvar}{\un} = \dotp{\dualvar}{\x} = 0$. Then, we have $\nuu^*(\dualvar) = \sum_{m=1}^M \dualvarm \green{x_m - \cdot}$. \end{lemma} \begin{proof} We know by~\eqref{eq:nuu-star} and~\eqref{eq:kernelI2} that \begin{align} \nuu^*(\dualvar) &= \dotp{\dualvar}{\pa{g(x_m,\cdot)}_{1\leq m\leq M}}, \\ &= \dotp{\dualvar}{\pa{\green{x_m-x} - (-x)_+ + x_m((-x)_+ - (1 - x)_+)}_{1\leq m\leq M}}, \\ &= \dotp{\dualvar}{\pa{\green{x_m-x}}_{1\leq m\leq M}} - (-x)_+ \underbrace{\dotp{\dualvar}{\un}}_{=0} + ((-x)_+ - (1 - x)_+)\underbrace{\dotp{\dualvar}{\x}}_{=0}, \end{align} which proves that $\nuu^*(\dualvar) = \sum_{m=1}^M \dualvarm \green{x_m - \cdot}$. \end{proof} We can now prove Proposition~\ref{prop:cns-sol-constrained}. \begin{proof}[Proof of Proposition~\ref{prop:cns-sol-constrained}] Suppose that $\fopt\in\BV$ is a solution of~\eqref{eq:noiseless}. Then, $\fopt$ satisfies the interpolation conditions $\fopt(x_m)=y_{0,m}$ for all $m\in\{1,\ldots,M\}$, and $(w,(\al,\be))\in\Radon\times\RR^2$ is a solution of problem~\eqref{eq:constrained-equi}, where $\fopt=\Itwo\{w\} + \al + \be (\cdot)$. By Lemma~\ref{lem:optimality-certificate}, there exists $\V c\in\RR^M$, a solution of problem~\eqref{eq:constrained-dual}, which then satisfies $\dotp{\dualvar}{\un} = \dotp{\dualvar}{\x} = 0$ with $\normi{\nuu^*(\dualvar)}\leq1$. Let us denote $\eta \eqdef \nuu^*(\dualvar)\in\Co$.
By Lemma~\ref{lem:struct-certif}, we have $\eta = \sum_{m=1}^M \dualvarm \green{x_m - \cdot}$, \ie $\eta$ is a dual pre-certificate (Definition~\ref{def:dual-certif}). Moreover, again by Lemma~\ref{lem:optimality-certificate}, we know that $\mnorm{w} = \dotp{w}{\eta}$, which gives the direct implication. For the reverse implication, the dual pre-certificate $\eta$ given by the statement satisfies $\eta=\nuu^*(\dualvar)$ by Lemma~\ref{lem:struct-certif}, and since $\fopt$ satisfies the interpolation conditions, we deduce that $\nuu(w) + \alpha\un+\beta\x = \obs$, where $\al$ and $\be$ are defined via the relation $\fopt=\Itwo\{w\} + \al + \be (\cdot)$. Hence, by Lemma~\ref{lem:optimality-certificate}, $(w,(\al,\be))\in\Radon\times\RR^2$ is a solution of problem~\eqref{eq:constrained-equi} (and $\V c$ is a solution of problem~\eqref{eq:constrained-dual}), \ie $\fopt$ is a solution of~\eqref{eq:noiseless}. Let us now prove that the relation $\mnorm{w} = \dotp{w}{\eta}$ is equivalent to $\ssupp w \subset \ssat \eta$ when $\eta$ is a dual pre-certificate (see Definition~\ref{def:supp-sat} for the definition of the signed support and signed saturation set). First, we have that $\mnorm{w} = \mnorm{w_{|\satp{\eta}}} + \mnorm{w_{|\satn{\eta}}} + \mnorm{w_{|S^c}}$ (see~\cite[Theorem~6.2]{rudin1986real}), where $S\eqdef\satp\eta\cup\satn\eta$, so that the relation $\mnorm{w} = \dotp{w}{\eta}$ rewrites as \begin{align} \pa{\mnorm{w_{|\satp{\eta}}} - \dotp{w_{|\satp{\eta}}}{\eta}} + \pa{\mnorm{w_{|\satn{\eta}}} - \dotp{w_{|\satn{\eta}}}{\eta}} + \pa{\mnorm{w_{|S^c}} - \dotp{w_{|S^c}}{\eta}} = 0.
\end{align} Each of the three terms in the sum is nonnegative by definition of $\mnorm{\cdot}$ and the fact that $\normi{\eta}\leq 1$, so that the equality $\mnorm{w} = \dotp{w}{\eta}$ is equivalent to \begin{align} &\mnorm{w_{|\satp{\eta}}} = \dotp{w_{|\satp{\eta}}}{\eta},\label{eq:mnorm-plus}\\ &\mnorm{w_{|\satn{\eta}}} = \dotp{w_{|\satn{\eta}}}{\eta},\label{eq:mnorm-neg}\\ &\mnorm{w_{|S^c}} = \dotp{w_{|S^c}}{\eta}\label{eq:mnorm-comp}. \end{align} Consider the Jordan decomposition of $w$: $w=w_+ - w_-$. Then $\mnorm{w_{|\satp{\eta}}}=w_+\pa{\satp\eta}+w_-\pa{\satp\eta}$ and $\dotp{w_{|\satp{\eta}}}{\eta}=\int_{\satp\eta}\d w=w_+\pa{\satp\eta}-w_-\pa{\satp\eta}$, so that~\eqref{eq:mnorm-plus} is equivalent to $w_-\pa{\satp\eta} = 0$, \ie \eq{ \supp(w_-)\cap\satp\eta=\varnothing. } Similarly, we can prove that~\eqref{eq:mnorm-neg} is equivalent to \eq{ \supp(w_+)\cap\satn\eta=\varnothing, } since $\dotp{w_{|\satn{\eta}}}{\eta}=-\int_{\satn\eta}\d w$. As a result, to obtain the desired equivalence, it remains to prove that~\eqref{eq:mnorm-comp} is equivalent to $w_{|S^c}=0$. The arguments can be found for example in~\cite{de2012exact} (see the proof of Lemma A.1), but we reproduce the reasoning here for the sake of completeness. For all $k>0$, consider the closed sets \eq{ \Omega_k \eqdef \RR \setminus \pa{ S \ + \ \left( -\frac{1}{k},\frac{1}{k} \right) }\subset S^c. } Suppose by contradiction that there exists $k>0$ such that $\mnorm{w_{|\Omega_k}}>0$. Since $|\eta|<1$ on the closed set $\Omega_k$ (because this holds on the larger open set $S^c$) and since $\eta\in\Co$ vanishes at infinity, we have $\sup_{\Omega_k}|\eta|<1$; we thus deduce that $\dotp{w_{|\Omega_k}}{\eta}<\mnorm{w_{|\Omega_k}}$ and then \eq{ \mnorm{w} = \dotp{w_{|\Omega_k}}{\eta} + \dotp{w_{|\Omega_k^c}}{\eta} < \mnorm{w_{|\Omega_k}} + \mnorm{w_{|\Omega_k^c}} = \mnorm{w}, } which is a contradiction. Hence, we have $\mnorm{w_{|\Omega_k}}=0$ for all $k>0$, which yields $\mnorm{w_{|S^c}}=0$ since $S^c = \cup_{k>0} \Omega_k$, \ie $w_{|S^c}=0$.
\end{proof} \section{Proof of Proposition~\ref{prop:cns-sol-constrained-fixed-dual-certif}}\label{app:2bis} The proof of Proposition~\ref{prop:cns-sol-constrained-fixed-dual-certif} is very similar to that of Proposition~\ref{prop:cns-sol-constrained} and is derived from the optimality conditions given in Lemma~\ref{lem:optimality-certificate}. \begin{proof}[Proof of Proposition~\ref{prop:cns-sol-constrained-fixed-dual-certif}] Let $\eta$ be a dual certificate in the sense of Proposition~\ref{prop:cns-sol-constrained-fixed-dual-certif}. By definition of $\eta$ (it is in particular a dual pre-certificate in the sense of Definition~\ref{def:dual-certif}) and by Lemma~\ref{lem:struct-certif}, there exists $\V c\in\RR^M$ such that $\eta=\nuu^*(\dualvar)$ and $\dotp{\dualvar}{\un} = \dotp{\dualvar}{\x} = 0$. Since $\eta$ is a dual certificate, Proposition~\ref{prop:cns-sol-constrained} implies that there exists $\tilde f\in\BV$ satisfying the interpolation conditions and such that $\mnorm{\Op D^2 \tilde f}=\dotp{\Op D^2 \tilde f}{\eta}$. This implies that $\V c$ and $(\tilde w, (\tilde \alpha, \tilde \beta))\in\Radon\times\RR^2$, where $\tilde f=\Itwo\{\tilde w\}+\tilde\al + \tilde\be (\cdot)$, satisfy~\eqref{eq:ic} and~\eqref{eq:oc}; in particular, $\V c$ is a solution of the dual problem~\eqref{eq:constrained-dual} by Lemma~\ref{lem:optimality-certificate}. Using this fixed vector $\V c\in\RR^M$ and the decomposition of any $f\in\BV$ as $f=\Itwo\{w\} + \al + \be (\cdot)$ (see~\eqref{eq:fdecompose}), the equivalence in Lemma~\ref{lem:optimality-certificate} directly yields that $\fopt$ is a solution of~\eqref{eq:noiseless} if and only if $\fopt$ satisfies the interpolation conditions $\fopt(x_m) = y_{0,m}$ and $\mnorm{\Op{D}^2 \fopt} = \dotp{\Op{D}^2 \fopt}{\eta}$, which concludes the proof.
\end{proof} \section{Proof of Proposition~\ref{prop:general-uniqueness}}\label{app:3} Let $\fopt\in\BV$ be a solution of problem~\eqref{eq:noiseless} given by Theorem~\ref{theo:RTweneed}. By~\eqref{eq:fdecompose}, there exist $w\in\Radon$ and $(\al,\be)\in\RR^2$ such that $\fopt=\Itwo\{w\} + \al + \be(\cdot)$. By the assumption of the proposition, there exists a nondegenerate dual certificate $\eta$, so that by applying Proposition~\ref{prop:cns-sol-constrained-fixed-dual-certif}, we obtain $\ssupp{w} \subset \ssat \eta$. Moreover, we have that $\ssat \eta \subset \{x_2,\ldots,x_{M-1}\}$ due to the following two facts: \begin{itemize} \item $\eta = \sum_{m=1}^M \dualvarm \green{x_m - \cdot}$ (as a dual pre-certificate, see Lemma~\ref{lem:struct-certif}), \item $\ssat\eta$ is a discrete set (as $\eta$ is nondegenerate). \end{itemize} Hence, the measure $w$ is supported in $\{x_2,\ldots,x_{M-1}\}$, which yields \eq{ w = \sum_{k=2}^{M-1} a_k\dirac{x_k}, } where the $a_k \in \R$ are (possibly zero) weights. In particular, this implies that $\fopt$ is a piecewise-linear spline with at most $(M-2)$ knots, which form a subset of $\{x_2,\ldots,x_{M-1}\}$. It remains to prove that the coefficients $a_2,\ldots,a_{M-1},\al,\be$ are uniquely determined to conclude that $\fopt$ is the unique solution of~\eqref{eq:noiseless}. Since $\fopt$ is a solution of~\eqref{eq:noiseless}, we have that $\nuf(\fopt) = \obs$. This implies that \eq{ \sum_{k=2}^{M-1} a_k \mathbf{g}_k + \al\un + \be\x = \obs \qwithq \mathbf{g}_k\eqdef \nuu\pa{\dirac{x_k}}=\pa{g(x_m,x_k)}_{1\leq m\leq M}\in\RR^M. } We now prove that this equation uniquely determines the coefficients $a_2,\ldots,a_{M-1},\al,\be$ by showing that the family $(\un,\x,\mathbf{g}_2,\ldots,\mathbf{g}_{M-1})$ is a basis of $\RR^M$.
Indeed, by definition of $g$ (see~\eqref{eq:kernelI2}), we have that \begin{align}\label{eq:gk} \forall k\in\{2,\ldots,M-1\}, \quad \V g_k = \pa{(x_m - x_k)_+}_{1\leq m \leq M} - (-x_k)_+\un + \pa{(-x_k)_+ - (1-x_k)_+}\x. \end{align} Hence, by writing the matrix of the family $(\un,\x,\mathbf{g}_2,\ldots,\mathbf{g}_{M-1})$ in the canonical basis of $\RR^M$, subtracting, thanks to~\eqref{eq:gk}, appropriate linear combinations of the first two columns (given by the vectors $\un$ and $\x$) from all of the other columns, and finally subtracting $x_1$ times the first column from the second one, we end up with the following matrix \eq{ \begin{pmatrix} 1 & 0 & 0 & 0 & \hdots & 0\\ 1 & (x_2 - x_1) & 0 & 0 & \hdots & 0\\ 1 & (x_3 - x_1) & (x_3 - x_2) & 0 & \hdots & 0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & (x_M - x_1) & (x_M - x_2) & (x_M - x_3) & \hdots & (x_M - x_{M-1})\\ \end{pmatrix}. } The latter is a lower triangular matrix with nonzero coefficients on the diagonal (as the sampling points $x_m$ are pairwise distinct), and is thus invertible, which proves the desired result. \section{Proof of Theorem~\ref{thm:limitdomain}} \label{app:limitdomain} Let $\fopt \in\mathcal{V}_0$. We fix $m \in \{2, \ldots, M-2\}$. First of all, as we have seen in the proof of Theorem~\ref{theo:sol_set}, if $a_m a_{m+1} \leq 0$, then $\fopt = \fcano$ on $[x_m, x_{m+1}]$, so that the graph of $\fopt$ on this interval coincides with that of $\fcano$. Assume now that $a_m a_{m+1} > 0$. We now show that $\{ (x, \fopt(x)), \ x\in [x_m, x_{m+1} ] \} \subset \Delta_m$. The slope condition $a_m a_{m+1} > 0$ implies that $\etacano$ is degenerate and that $\etacano = \pm 1$ is constant over $[x_m, x_{m+1}]$. Assume for instance that the value is $1$, in which case $\fopt$ is convex over $[x_{m-1},x_{m+2}]$ according to Theorem~\ref{theo:sol_set}. We shall use the following well-known fact on convex functions. Fix $a < b < c$ and assume that $f$ is convex over $[a,c]$.
Then, $f$ lies below its chord between $a$ and $b$ on $(a,b)$, that is, $f(x) \leq \frac{f(b)-f(a)}{b-a} (x-a) + f(a)$ for any $x \in (a,b)$. Moreover, $f$ lies above the extension of the same chord over $(b,c)$, that is, $f(x) \geq \frac{f(b)-f(a)}{b-a} (x-a) + f(a)$ for any $x \in (b,c)$. Let $x^* \in [x_m, x_{m+1}]$. By convexity, $\fopt$ lies below its chord between $x_m$ and $x_{m+1}$. Hence we have that \begin{equation}\label{eq:condition1fordetalm} \fopt(x^*) \leq \frac{y_{0,m+1} - y_{0,m}}{x_{m+1} - x_m} (x^*- x_m) + y_{0,m}. \end{equation} Moreover, the convexity over $[x_{m-1}, x^*]$ implies that $\fopt(x^*)$ lies above the extension of the chord of $\fopt$ between $x_{m-1}$ and $x_m$. This implies that \begin{equation} \label{eq:condition2fordetalm} \fopt(x^*) \geq \frac{y_{0,m} - y_{0,m-1}}{x_{m} - x_{m-1}} (x^* - x_{m-1}) + y_{0,m-1}. \end{equation} A similar argument over $[x^*, x_{m+2}]$ implies that \begin{equation} \label{eq:condition3fordetalm} \fopt(x^*) \geq \frac{y_{0,m+2} - y_{0,m+1}}{x_{m+2} - x_{m+1}} (x^*- x_{m+1}) + y_{0,m+1}. \end{equation} The conditions~\eqref{eq:condition1fordetalm},~\eqref{eq:condition2fordetalm}, and~\eqref{eq:condition3fordetalm} are precisely equivalent to $(x^*,\fopt(x^*)) \in \Delta_m$, since the three linear equations delineate this domain in this case. The same proof applies when $\etacano=-1$ over $[x_m, x_{m+1}]$ by using concavity instead of convexity. This proves that $\mathcal{G}(\fopt) \subset \mathcal{G}(\fcano) \cup \left( \cup_{m \in \mathcal{X}} \Delta_m \right)$ for every $\fopt \in \Spc{V}_0$, and hence the direct inclusion in~\eqref{eq:domain}.\\ For the reverse inclusion, we already know that $\fcano \in \mathcal{V}_0$, therefore it suffices to show that, for any $m \in \mathcal{X}$ and any $(x^*,y^*) \in \Delta_m$, there exists a solution $\fopt \in \mathcal{V}_0$ such that $\fopt(x^*) = y^*$.
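The elementary convexity fact used above (a convex function lies below its chord between two points, and above the extension of that chord beyond them) can be sanity-checked numerically; a minimal Python sketch with the hypothetical convex stand-in $f(x)=x^2$:

```python
import numpy as np

f = lambda t: t ** 2             # hypothetical convex stand-in; any convex f works
a, b, c = -1.0, 0.5, 2.0
chord = lambda t: (f(b) - f(a)) / (b - a) * (t - a) + f(a)  # chord of f between a and b

inside = np.linspace(a, b, 201)[1:-1]    # points of (a, b)
outside = np.linspace(b, c, 201)[1:-1]   # points of (b, c)
print(np.all(f(inside) <= chord(inside) + 1e-12))    # True: below the chord on (a, b)
print(np.all(f(outside) >= chord(outside) - 1e-12))  # True: above its extension on (b, c)
```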
As before, since $m\in \mathcal{X}$, we know that $\etacano =\pm 1$ on $[x_m,x_{m+1}]$ and we can assume without loss of generality that the value is $1$. Then, any solution is convex and satisfies the relations~\eqref{eq:condition1fordetalm},~\eqref{eq:condition2fordetalm}, and~\eqref{eq:condition3fordetalm}. By convexity of $\mathcal{V}_0$, it suffices to show the result for $(x^*,y^*)$ in the boundary of $\Delta_m$, which is delimited by the relations \begin{align} &\frac{y_{0,m+1} - y_{0,m}}{x_{m+1} - x_m} (x^*- x_m) + y_{0,m} = y^* \text{, or} \label{eq:line1}\\ &\frac{y_{0,m} - y_{0,m-1}}{x_{m} - x_{m-1}} (x^*- x_{m-1}) + y_{0,m-1} = y^* \text{, or} \label{eq:line2} \\ & \frac{y_{0,m+2} - y_{0,m+1}}{x_{m+2} - x_{m+1}} (x^*- x_{m+1}) + y_{0,m+1} = y^*. \label{eq:line3} \end{align} The solution $\fcano$ is such that $\fcano(x^*) = \frac{y_{0,m+1} - y_{0,m}}{x_{m+1} - x_m} (x^*- x_m) + y_{0,m} = y^*$, hence any $(x^*, y^*)$ satisfying~\eqref{eq:line1} is attained by a solution (the canonical one) in $\mathcal{V}_0$. Assume that $(x^*,y^*)$ satisfies~\eqref{eq:line2} (the case of~\eqref{eq:line3} follows the same argument). We construct $\fopt$ as follows. First, $\fopt(x) = \fcano(x)$ for any $x \notin (x_m, x_{m+1})$. Then, we set \begin{equation} \fopt(x) = \frac{y_{0,m} - y_{0,m-1}}{x_{m} - x_{m-1}} (x- x_{m-1}) + y_{0,m-1} \end{equation} for $x \in (x_m, x^*]$. In particular, $\fopt(x^*) = y^*$, and $\fopt$ is linear on $[x_{m}, x^*]$. Finally, we impose that $\fopt$ is linear on $[x^*,x_{m+1}]$, which is equivalent to the relation \begin{equation} \fopt(x) = \frac{y_{0,m+1}- y^*}{x_{m+1} - x^*} (x- x^*) + y^* \end{equation} for any $x \in [x^*,x_{m+1}]$. We then claim that $\fopt \in \mathcal{V}_0$, the argument being very similar to the one of Lemma~\ref{lem:mountain}.
Indeed, to show this, it suffices to remark that $\fopt$, which is piecewise-linear and coincides with $\fcano$ outside of $(x_m,x_{m+1})$, is convex on $[x_{m-1}, x_{m+2}]$ (this is guaranteed by the slope condition $a_m a_{m+1} > 0$ and the construction of $\fopt$). According to Theorem~\ref{theo:sol_set}, this implies that $\fopt \in \mathcal{V}_0$, with $\fopt(x^*) = y^*$. This finally shows that $(x^*,y^*) \in \cup_{\fopt \in \mathcal{V}_0} \mathcal{G}(\fopt)$, which proves~\eqref{eq:domain}. \section{Proof of Theorem~\ref{thm:sparsest_sol}} \label{sec:sparsest_solutions_proof} Using Theorem~\ref{theo:sol_set}, for any $\fopt \in \Spc{V}_0$, we have $\fopt(x) = \fcano(x)$ for any $x$ such that $\etacano(x) \neq \pm 1$. We now focus on regions where $\etacano(x) = \pm 1$. For all $n \in \{1, \ldots, N_s\}$, $\fcano$ has $\alpha_n + 1$ knots in the interval $[x_{s_n}, x_{s_n + \alpha_n}]$. In order to construct one of the sparsest solutions, we must therefore replace these $\alpha_n+1$ knots with as few knots as possible in each saturation region, since all solutions must coincide with $\fcano$ outside these regions. In order to lighten the notation, in what follows, we focus on a single saturation region determined by a fixed $n \in \{1, \ldots, N_s\}$ and we write $\alpha \eqdef \alpha_n$ and $s \eqdef s_n$. Similarly to the proof of Proposition~\ref{prop:uniqueness}, a piecewise-linear spline $f$ that coincides with $\fcano$ outside the interval $[x_{s}, x_{s + \alpha}]$ must be of the form \begin{align} \label{eq:sparsest_sol} f(x) = \fcano(x) - \sum_{n'=0}^{\alpha} a_{s + n'} (x - x_{s + n'})_+ + \sum_{p=1}^{P} \tilde{a}_{p} (x - \tilde{\tau}_{p})_+, \end{align} where $\tilde{a}_{p}\in \R$ and $\tilde{\tau}_p \in [x_{s}, x_{s + \alpha}]$ are such that $\tilde{\tau}_1 < \cdots < \tilde{\tau}_P$, and $P$ is the number of knots of $f$ in this interval. We then prove the following lemma.
\begin{lemma} \label{lem:min_sparsity} If $f$ in~\eqref{eq:sparsest_sol} satisfies the constraints $f(x_m)=y_{0, m}$ for all $m\in\{1,\ldots,M\}$, then the number of knots $P$ in $[x_{s}, x_{s + \alpha}]$ satisfies $P \geq \lceil\frac{\alpha+1}{2} \rceil$. \end{lemma} \begin{proof} Lemma~\ref{lem:min_sparsity} is trivially true for $\alpha = 0$, since we must have $f = \fcano$ and thus $P=1$. Assume now that $\alpha >0$. Firstly, we show that we must have $\tilde{\tau}_1 \in [x_{s}, x_{s + 1})$. Assume by contradiction that $\tilde{\tau}_1 \geq x_{s+1}$: then, $f$ has no knots in the interval $(x_{s-1}, x_{s + 1})$. Yet $f$ must satisfy the interpolation constraints $f(x_m)=y_{0,m}$ for all $m\in\{1,\ldots,M\}$, which implies that the points $\P0{s-1}$, $\P0{s}$, and $\P0{s+1}$ are aligned. Therefore, $\fcano$ has a weight $a_s = 0$ (defined in~\eqref{eq:a_coefs}), which implies that $\etacano(x_s)=0$; this contradicts the assumption $\etacano(x_s)= \pm 1$. We can then prove in a similar fashion that $\tilde{\tau}_P \in (x_{s+ \alpha - 1}, x_{s + \alpha}]$ when $\alpha > 1$. Next, we show that for $\alpha \geq 2$, we have \begin{align} \label{eq:min_knots_central_sat} \forall n' \in \{1 , \ldots, \alpha-1\},\ \exists p \in \{1, \ldots, P\} \text{ such that } \tilde{\tau}_p \in (x_{s+n'-1}, x_{s+n'+1}), \end{align} \ie there must be a knot in every block of two consecutive saturation intervals. We assume by contradiction that this is not the case. Similarly to above, this implies that $\P0{s+n'-1}$, $\P0{s+n'}$, and $\P0{s+n'+1}$ are aligned and thus that $\etacano(x_{s+n'})=0$, which yields a contradiction. Lemma~\ref{lem:min_sparsity} immediately follows from the constraints $\tilde{\tau}_1 \in [x_{s}, x_{s + 1})$ and $\tilde{\tau}_P \in (x_{s+ \alpha - 1}, x_{s + \alpha}]$ for $\alpha \leq 2$.
For $\alpha >2$, by the two aforementioned constraints, $f$ must have at least two knots in the first and last saturation intervals $[x_{s}, x_{s + 1})$ and $(x_{s+ \alpha - 1}, x_{s + \alpha}]$ respectively. Next, consider the interval $[x_{s+1}, x_{s +\alpha-1}]$, which consists of the central $\alpha-2$ consecutive saturation intervals. Using~\eqref{eq:min_knots_central_sat}, this interval must contain at least $\lfloor \frac{\alpha-2}{2} \rfloor$ knots, which yields the lower bound $P \geq 2 + \lfloor \frac{\alpha-2}{2} \rfloor = \lceil\frac{\alpha+1}{2} \rceil$ (the last equality can easily be verified for every $\alpha \in \N$). \end{proof} The following lemma then states that the bound in Lemma~\ref{lem:min_sparsity} is tight. \begin{lemma} \label{lem:construction_sparsest_sol} The lower bound in Lemma~\ref{lem:min_sparsity} is always reached, \ie there exists a piecewise-linear spline $\fopt \in \Spc{V}_0$ of the form~\eqref{eq:sparsest_sol} with $P = \lceil\frac{\alpha+1}{2} \rceil$ knots in $[x_s, x_{s+\alpha}]$. If $\alpha$ is odd or $\alpha=0$, then $\fopt$ is unique. If $\alpha > 0$ is even, then there are uncountably many such functions $\fopt$. \end{lemma} \begin{proof} Lemma~\ref{lem:construction_sparsest_sol} is trivially true for $\alpha = 0$, \ie when no saturation occurs. Indeed, the saturation interval is then reduced to the point $\{ x_{s} \}$, and the only solution $\fopt \in \Spc{V}_0$ of the form~\eqref{eq:sparsest_sol} is $\fopt = \fcano$ for which $P=1$. Assume now that $\alpha = 2k + 1$ is odd. The bound in Lemma~\ref{lem:min_sparsity} then reads $P\geq k+1$.
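The constructions below rest on the elementary knot-merging fact already used in the proof of Proposition~\ref{prop:uniqueness}: two knots with same-sign weights can be merged into a single knot at their barycenter without changing the function outside the interval between them. A minimal numerical sketch (with hypothetical weights and knot locations), which also verifies the ceiling identity invoked at the end of the proof of Lemma~\ref{lem:min_sparsity}:

```python
import math
import numpy as np

# Ceiling identity from the end of the proof of Lemma min_sparsity:
for alpha in range(2, 200):
    assert 2 + (alpha - 2) // 2 == math.ceil((alpha + 1) / 2)

# Knot merging: for same-sign weights, a1*(x-t1)_+ + a2*(x-t2)_+ equals
# (a1+a2)*(x-tau)_+ with tau the barycenter, at every x outside (t1, t2).
relu = lambda t: np.maximum(t, 0.0)
a1, a2, t1, t2 = 0.7, 1.3, 0.2, 0.9           # hypothetical values, same sign
tau = (a1 * t1 + a2 * t2) / (a1 + a2)         # barycenter, lies in (t1, t2)
xs = np.array([-1.0, 0.0, t1, t2, 1.5, 3.0])  # test points outside (t1, t2)
print(np.allclose(a1 * relu(xs - t1) + a2 * relu(xs - t2),
                  (a1 + a2) * relu(xs - tau)))  # True
```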
Similarly to the proof of Proposition~\ref{prop:uniqueness}, we construct a function $\fopt$ of the form~\eqref{eq:sparsest_sol} with $P=k+1$ and \begin{align} \label{eq:sparsest_sol_odd} \begin{cases} \tilde{a}_1 \eqdef a_{s} + a_{s+1} \text{ and } \tilde{\tau}_1 \eqdef \frac{a_{s}x_{s} + a_{s+1}x_{s+1}}{\tilde{a}_1}; \\ \tilde{a}_2 \eqdef a_{s+2} + a_{s+3} \text{ and } \tilde{\tau}_2 \eqdef \frac{a_{s+2}x_{s+2} + a_{s+3}x_{s+3}}{\tilde{a}_2}; \\ \vdots \\ \tilde{a}_{k+1} \eqdef a_{s+2k} + a_{s+2k+1} \text{ and } \tilde{\tau}_{k+1} \eqdef \frac{a_{s+2k}x_{s+2k} + a_{s+2k+1}x_{s+2k+1}}{\tilde{a}_{k+1}}. \end{cases} \end{align} Since the $a_s, \ldots, a_{s+\alpha}$ all have the same (nonzero) sign, the $\tilde{\tau}_{i}$, $i=1, \ldots , k+1$, are all barycenters with positive weights, which implies that $\tilde{\tau}_{i} \in (x_{s+2i-2}, x_{s+2i-1})$. Then, as in the proof of Proposition~\ref{prop:uniqueness}, replacing the knots at $x_{s+2i-2}$ and $x_{s+2i-1}$ in $\fcano$ by a single knot at $\tilde{\tau}_{i}$ does not change the expression of $\fopt$ outside the interval $(x_{s+2i-2}, x_{s+2i-1})$, which implies that all the constraints $\fopt(x_m) = y_{0, m}$ for all $m\in\{1,\ldots,M\}$ are satisfied. Next, let $I_s = \{1, \ldots, M\} \setminus \{s, \ldots, s+\alpha\}$ be the set of indices outside our interval of interest. Since $a_s, \ldots, a_{s+\alpha}$ and thus $\tilde{a}_1, \ldots, \tilde{a}_{k+1}$ all have the same sign, we have $\Vert \Op{D}^2 \fopt \Vert_{\Spc{M}} = \sum_{m \in I_s} \vert a_m \vert + \vert \sum_{i=1}^{k+1} \tilde{a}_{i} \vert= \sum_{m \in I_s} \vert a_m \vert + \vert\sum_{n=0}^{\alpha} a_{s+n}\vert = \Vert \Op{D}^2 \fcano \Vert_{\Spc{M}}$, which together with the interpolation constraints implies that $\fopt \in \Spc{V}_0$. To show the uniqueness, consider once again a function $\fopt$ of the form~\eqref{eq:sparsest_sol} with $P=k+1$ and $\tilde{\tau}_1 < \cdots < \tilde{\tau}_{k+1}$.
We then invoke Lemma~\ref{lem:min_sparsity}, which stipulates that there must be knots in the first and last saturation intervals as well as in every block of two consecutive saturation intervals. The only way to achieve this is to have $\tilde{\tau}_i \in (x_{s+2i-2}, x_{s+2i-1})$ for $i=1, \ldots, k+1$. The intervals $(x_{s+2i-1}, x_{s+2i})$ for all $i\in\{1, \ldots, k\}$ thus have no knots, which implies that in these intervals, $\fopt$ must follow the line $(\P0{s+2i-1}, \P0{s+2i})$. The knots are then necessarily the intersections of these lines, which yields the solution given in~\eqref{eq:sparsest_sol_odd}. The latter is therefore the unique function in $\Spc{V}_0$ with $P=k+1$ knots in the interval $[x_s, x_{s+\alpha}]$. An example of such a sparsest solution is shown in Figure~\ref{fig:3_sat} with $M=6$ and $\alpha = 3$ consecutive saturation intervals. Assume now that $\alpha = 2k$ is even, with $k>0$. The bound in Lemma~\ref{lem:min_sparsity} then reads $P\geq k+1$. By Lemma~\ref{lem:mountain}, the intersection $\widetilde{\mathrm{P}} = \Point{\tilde{\tau}}{\tilde{y}}$ between the lines $(\P0{s-1}, \P0{s})$ and $(\P0{s+1}, \P0{s+2})$ exists and satisfies $\tilde{\tau} \in (x_s, x_{s+1})$. Then, let $\widetilde{\mathrm{P}}_1 = \Point{\tilde{\tau}_1}{\tilde{y}_1}$ be any point on the line segment $[\P0{s}, \widetilde{\mathrm{P}}]$, \ie with $\tilde{\tau}_1 \in [x_{s}, \tilde{\tau}]$. Then, we define $\widetilde{\mathrm{P}}_{2}$ as the intersection between the lines $(\widetilde{\mathrm{P}}_{1}, \P0{s+1})$ and $(\P0{s+2}, \P0{s+3})$. Similarly, if $\alpha \geq 4$, for every $i\in \{3, \ldots , k+1\}$, we define $\widetilde{\mathrm{P}}_{i} = \Point{\tilde{\tau}_i}{\tilde{y}_i}$ as the intersection between the lines $(\P0{s+2i-4}, \P0{s+2i-3})$ and $(\P0{s+2i-2}, \P0{s+2i-1})$. By a barycenter argument similar to the one in~\eqref{eq:sparsest_sol_odd}, these intersections are well defined and satisfy $\tilde{\tau}_{i} \in (x_{s+2i-3}, x_{s+2i-2})$.
Let $\fopt$ be the piecewise-linear spline that coincides with $\fcano$ outside the interval $(x_{s}, x_{s+\alpha})$, and that connects the points $\P0{s-1}$, $\widetilde{\mathrm{P}}_1$, $\ldots$, $\widetilde{\mathrm{P}}_{k+1}$, and $\P0{s+\alpha}$ in that interval. By construction, $\fopt$ satisfies the constraints $\fopt(x_m) = y_{0, m}$, $m\in\{1,\ldots,M\}$. Moreover, once again in a similar manner to~\eqref{eq:sparsest_sol_odd}, we have that $\Vert \Op{D}^2 \fopt \Vert_{\Spc{M}} = \Vert \Op{D}^2 \fcano \Vert_{\Spc{M}}$, which implies that $\fopt \in \Spc{V}_0$. Finally, $\fopt$ is of the form~\eqref{eq:sparsest_sol} with the lowest possible sparsity $P = k+1$ in the interval $[x_s, x_{s+\alpha}]$ (by Lemma~\ref{lem:min_sparsity}). Yet there are uncountably many possible choices of $\widetilde{\mathrm{P}}_1$ (it can be any point on a non-singleton line segment). All of these choices lead to a different solution $\fopt \in \Spc{V}_0$ that is uniquely defined, since the choice of $\widetilde{\mathrm{P}}_1$ specifies $\widetilde{\mathrm{P}}_2, \ldots, \widetilde{\mathrm{P}}_{k+1}$. This proves that there are uncountably many solutions of~\eqref{eq:noiselessclean} with sparsity $k+1$ in $[x_s, x_{s+\alpha}]$, and that there is a single degree of freedom for the choice of these $k+1$ knots. An example of such a sparsest solution is shown in Figure~\ref{fig:2_sat} with $M=5$ and $\alpha = 2$ consecutive saturation intervals. In our algorithm, we simply choose $\widetilde{\mathrm{P}}_1 = \P0{s}$, which yields a function $\fopt$ of the form~\eqref{eq:sparsest_sol} with \begin{align} \label{eq:sparsest_sol_even} \begin{cases} \tilde{a}_1 \eqdef a_{s} \text{ and } \tilde{\tau}_1 \eqdef x_{s}; \\ \tilde{a}_2 \eqdef a_{s+1} + a_{s+2} \text{ and } \tilde{\tau}_2 \eqdef \frac{a_{s+1}x_{s+1} + a_{s+2}x_{s+2}}{\tilde{a}_2}; \\ \vdots \\ \tilde{a}_{k+1} \eqdef a_{s+2k-1} + a_{s+2k} \text{ and } \tilde{\tau}_{k+1} \eqdef \frac{a_{s+2k-1}x_{s+2k-1} + a_{s+2k}x_{s+2k}}{\tilde{a}_{k+1}}.
\end{cases} \end{align} \end{proof} Theorem~\ref{thm:sparsest_sol} then follows directly from Lemma~\ref{lem:construction_sparsest_sol} applied independently to each saturation interval $[x_{s_n}, x_{s_n + \alpha_n}]$ for $n\in\{1, \ldots, N_s\}$. Note that Lemma~\ref{lem:construction_sparsest_sol} also applies when no saturation occurs, \ie $\alpha_n = 0$. A sparsest solution of~\eqref{eq:noiselessclean} thus coincides with a function of the form~\eqref{eq:sparsest_sol} constructed in Lemma~\ref{lem:construction_sparsest_sol} in each of these intervals, and with $\fcano$ outside these intervals. Finally, since the behavior of a solution in each saturation interval does not affect its behavior outside of it, the number of degrees of freedom in the set of sparsest solutions to~\eqref{eq:noiselessclean} is simply the sum of the numbers of degrees of freedom in each saturation interval. Yet by Lemma~\ref{lem:construction_sparsest_sol}, there are no degrees of freedom in intervals such that $\alpha_n$ is odd or $\alpha_n = 0$ (a sparsest solution is uniquely determined on that interval), and there is one when $\alpha_n > 0$ is even. Therefore, the total number of degrees of freedom of the set of sparsest solutions to~\eqref{eq:noiselessclean} is equal to the number of positive even values of $\alpha_n$ for $n\in\{1, \ldots, N_s\}$. \section{Proof of Proposition~\ref{prop:penalized_to_constrained}} \label{sec:penalized_to_constrained} Assume by contradiction that there exist $f_1, f_2 \in \Spc{V}_\lambda$ and $m_0 \in \{1, \ldots, M\}$ such that $f_1(x_{m_0})\neq f_2(x_{m_0})$, and let $f_\alpha = \alpha f_1 + (1-\alpha) f_2$, where $0 < \alpha < 1$.
We then have \begin{align} & \sum_{m=1}^M E(f_\alpha(x_m), y_{m}) + \lambda \Vert \Op{D}^2 f_\alpha \Vert_{\Spc{M}} \nonumber \\ &< \alpha \sum_{m=1}^M E(f_1(x_m), y_{m}) + (1-\alpha) \sum_{m=1}^M E(f_2(x_m), y_{m}) + \lambda \Big(\alpha \Vert \Op{D}^2 f_1 \Vert_{\Spc{M}} + (1-\alpha) \Vert \Op{D}^2 f_2 \Vert_{\Spc{M}} \Big) \nonumber \\ &= \alpha \Spc{J}_\lambda + (1-\alpha) \Spc{J}_\lambda = \Spc{J}_\lambda, \end{align} where $\Spc{J}_\lambda$ is the optimal cost of~\eqref{eq:noisyclean}. The inequality is due to the convexity of the $\Vert \cdot \Vert_\Spc{M}$ norm and of $E(\cdot, y)$ for any $y \in \R$. The fact that it is strict is due to the strict convexity of $E(\cdot, y_{m_0})$ and the fact that $f_1(x_{m_0})\neq f_2(x_{m_0})$. Yet since $\Spc{V}_\lambda$ is a convex set, we have $f_\alpha \in \Spc{V}_\lambda$: this implies that $\Spc{J}_\lambda = \sum_{m=1}^M E(f_\alpha(x_m), y_m) + \lambda \Vert \Op{D}^2 f_\alpha \Vert_{\Spc{M}} < \Spc{J}_\lambda $, which yields a contradiction. Therefore, there exists a unique vector $\V{y}_\lambda \in \R^M$ such that for any $\fopt \in \mathcal{V}_\lambda$, $\fopt(x_m) = y_{\lambda, m}$ for all $m\in\{1,\ldots,M\}$. This implies that $\Spc{V}_\lambda \subset \{f \in \BV: \ f(x_m) = y_{\lambda, m}, \ 1\leq m\leq M \}$. Moreover, we have that for any $\fopt \in \mathcal{V}_\lambda$, $E(\fopt(x_m), y_{m}) = E(y_{\lambda, m}, y_m)$, and thus that the data fidelity is constant in the constrained space $\{f \in \BV: \ f(x_m) = y_{\lambda, m}, \ 1\leq m\leq M \}$. This proves the equality between the solution sets of problems~\eqref{eq:noisyclean} and~\eqref{eq:noisy_constrained}. \section{ Proof of Proposition~\ref{prop:linear_regression}} \label{sec:linear_regression_proof} \paragraph{Item 1.}Let $J(\alpha, \beta) = \sum_{m=1}^M E(\alpha + \beta x_m, y_m)$ be the objective function of problem~\eqref{eq:linear_regression}. 
We show that problem~\eqref{eq:linear_regression} indeed has a unique solution by proving that $J$ is strictly convex and coercive when $M \geq 2$ and the $x_m$ are pairwise distinct. Concerning the coercivity, let $\Vert (\alpha, \beta) \Vert_2 \to +\infty$. Assume by contradiction that $\alpha + \beta x_m$ is bounded for every $m\in \{1, \ldots, M \}$. Then, since $M \geq 2$, $\alpha + \beta x_1- (\alpha + \beta x_2) = \beta(x_1-x_2)$ must also be bounded, which implies that $\beta$ is bounded since the $x_m$ are pairwise distinct. Therefore, we must have $\vert \alpha \vert \to +\infty$, which implies that $\vert \alpha + \beta x_1 \vert \to +\infty$, which yields a contradiction. Therefore, there exists an $m_0 \in \{ 1, \ldots, M \}$ such that $\vert \alpha + \beta x_{m_0} \vert \to +\infty$. The coercivity of $J$ then directly follows from that of $E(\cdot, y_{m_0})$. Next, to prove the strict convexity of $J$, let $(\alpha, \beta), (\alpha', \beta') \in \R^2$ with $(\alpha, \beta) \neq (\alpha', \beta')$, and $0 < s < 1$. For any $m$, we have $s \alpha + (1-s) \alpha' + (s \beta + (1-s) \beta') x_m = s (\alpha + \beta x_m) + (1-s) (\alpha' + \beta' x_m)$. Since $(\alpha, \beta) \neq (\alpha', \beta')$ and the $x_m$ are distinct, the equation $\alpha + \beta x_m= \alpha' + \beta' x_m$ can be satisfied for at most a single $m \in \{1, \ldots, M \}$. Yet $M \geq 2$, which implies that $\exists m_0, \ \alpha + \beta x_{m_0} \neq \alpha' + \beta' x_{m_0}$. Therefore, due to the strict convexity of $E(\cdot, y_{m_0})$, we have \begin{align} E\big((s \alpha + (1-s) \alpha') + (s \beta + (1-s) \beta')x_{m_0}, y_{m_0}\big) < s E(\alpha + \beta x_{m_0}, y_{m_0}) + (1-s) E(\alpha' + \beta' x_{m_0}, y_{m_0}). \end{align} It then follows from the convexity of $E(\cdot, y_{m})$ for all $m$ that $J \big( s (\alpha, \beta) + (1-s) (\alpha', \beta') \big) < s J(\alpha, \beta) + (1-s) J(\alpha', \beta')$, which proves the strict convexity of $J$.
Together with the fact that $J$ is coercive, this proves that~\eqref{eq:linear_regression} has a unique solution. \paragraph{Item 2.}Assume that $\lambda \geq \lambda_{\text{max}}$. By Fermat's rule, a vector $\zopt$ is a solution of problem~\eqref{eq:optizlambda} if and only if the zero vector belongs to the subdifferential of the objective function evaluated at $\zopt$. We thus have $\zopt = \V{y}_\lambda$ if and only if \begin{align} \label{eq:optimality_condition_discrete} \V{0} \in \underbrace{\begin{pmatrix}\partial_1 E(z_{\mathrm{opt}, 1}, y_1) \\ \vdots \\ \partial_1 E(z_{\mathrm{opt}, M}, y_M) \end{pmatrix}}_{\eqdef \V{v}(\zopt)} + \lambda \partial \Vert \M{L} \cdot \Vert_1(\zopt), \end{align} where $\partial_1$ denotes the partial derivative with respect to the first variable, and $\partial$ the subdifferential. The chain rule for subdifferentials \cite[Theorem 23.9.]{rockafellar1970convex} yields $\partial \Vert \M{L} \cdot \Vert_1(\V{z}) = \{ \M{L}^T \V{g}: \ \V{g} \in \partial \Vert \cdot \Vert_1(\M{L} \V{z}) \subset \R^{M-2} \}$, where $\partial \Vert \cdot \Vert_1(\V{a}) = \{ \V{g} \in \R^{M-2}: \Vert \V{g} \Vert_\infty \leq 1, \ \V{a}^T \V{g} = \Vert \V{a} \Vert_1 \}$. The vector $\M{L} \zopt$ lists the weights $a_m$ associated to the knots of the canonical solution $f_{\zopt}$ (see the proof of Proposition~\ref{prop:optizlambda}). Therefore, the linear regression case (in which $f_{\zopt}$ has no knot) corresponds to $\M{L} \zopt = \V{0}$. In this case, since $\partial \Vert \cdot \Vert_1(\V{0}) = \{ \V{g} \in \R^{M-2}: \Vert \V{g} \Vert_\infty \leq 1 \}$, the optimality condition~\eqref{eq:optimality_condition_discrete} now reads \eq{ \exists \V{g} \in \R^{M-2}, \quad \Vert \V{g} \Vert_\infty \leq 1, \quad \text{s.t.} \quad \V{v}(\zopt) + \lambda \M{L}^T \V{g} = \V{0}. 
} We now prove that $\zopt = \opt \alpha \V{1} + \opt \beta \x $ satisfies the optimality conditions~\eqref{eq:optimality_condition_discrete}, and thus that $\V{y}_\lambda = \opt \alpha \V{1} + \opt \beta \x$. To achieve this, we prove that $\V{g} = - \frac{1}{\lambda} {\M{L}^T}^\dagger \V{v}(\zopt)$ satisfies $\V{v}(\zopt) + \lambda \M{L}^T \V{g} = \V{0}$. Firstly, since $\lambda \geq \lambda_{\text{max}}$, we have that $\Vert \V{g} \Vert_\infty \leq 1$ by definition of $\lambda_{\text{max}}$. Next, let $V$ be the orthogonal complement of $\ker \M{L} \subset \R^M$. A known property of the pseudoinverse operator is that $\M{L}^T {\M{L}^T}^\dagger$ is the orthogonal projection operator onto $V$. By decomposing $\V{v}(\zopt)= \V{v}_1 + \V{v}_2$, where $\V{v}_1 \in V$ and $\V{v}_2 \in \ker \M{L}$, we thus get $\V{v}(\zopt) + \lambda \M{L}^T \V{g} = \V{v}_2$. Yet $\ker \M{L} = \mathrm{span} \{ \V{1}, \x \}$, since the canonical solutions $f_{\V{1}}$ and $f_{\x}$ (that satisfy $f_{\V{1}}(x_m) = 1$ and $f_{\V{x}}(x_m) = x_m$ for every $m\in \{ 1, \ldots, M\}$ respectively) are linear functions that are thus not penalized by the regularization. The optimality conditions of problem~\eqref{eq:linear_regression} (\ie setting the gradient to zero) then yield $\V{v}(\zopt) \perp \ker \M{L}$, which implies that $\V{v}_2 = \V{0}$ and thus that $\V{v}(\zopt) + \lambda \M{L}^T \V{g} = \V{0}$. This proves that $\zopt$ satisfies the optimality condition of problem~\eqref{eq:optizlambda}, and thus that $\zopt = \V{y}_\lambda = \opt \alpha \V{1}+ \opt \beta \x$. \paragraph{Item 3.} Due to item 2, we have $\V{y}_\lambda = \opt \alpha \V{1}+ \opt \beta \x$ which implies that the points $\Point{x_m}{y_{\lambda, m}}$ are aligned. Hence, the canonical dual certificate of the constrained problem~\eqref{eq:noisy_constrained} is $\etacano = 0$, which is nondegenerate. 
By Proposition~\ref{prop:uniqueness}, this implies that the unique solution to problem~\eqref{eq:noisy_constrained} is the canonical solution $f_{\zopt} = f_{\text{max}} = \opt \alpha + \opt \beta (\cdot)$. Due to the equivalence between problems~\eqref{eq:noisy_constrained} and~\eqref{eq:noisyclean} proved in Proposition~\ref{prop:penalized_to_constrained}, this concludes the proof.
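As a closing numerical illustration of the linear-algebra facts used in Item 2, the following Python sketch builds a nonuniform second-difference matrix $\M{L}$ (row $m$ computes the change of slope at $x_{m+1}$; this normalization is an assumption on our part, and does not affect the kernel) and checks that $\ker \M{L} = \mathrm{span}\{\un, \x\}$ and that $\M{L}^T {\M{L}^T}^\dagger$ is the orthogonal projection onto the orthogonal complement of $\ker \M{L}$.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 9
x = np.sort(rng.uniform(0.0, 1.0, size=M))  # pairwise-distinct sampling points

# Row m computes the slope change of (z_m) at x_{m+1} (assumed normalization)
L = np.zeros((M - 2, M))
for m in range(M - 2):
    h0, h1 = x[m + 1] - x[m], x[m + 2] - x[m + 1]
    L[m, m], L[m, m + 1], L[m, m + 2] = 1.0 / h0, -1.0 / h0 - 1.0 / h1, 1.0 / h1

print(np.allclose(L @ np.ones(M), 0.0), np.allclose(L @ x, 0.0))  # True True
print(np.linalg.matrix_rank(L) == M - 2)  # True, so ker L = span{1, x} exactly

P = L.T @ np.linalg.pinv(L.T)  # orthogonal projector onto range(L^T) = (ker L)^perp
print(np.allclose(P @ P, P) and np.allclose(P, P.T))                 # True
print(np.allclose(P @ np.ones(M), 0.0) and np.allclose(P @ x, 0.0))  # True
```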
{ "timestamp": "2020-08-04T02:34:30", "yymm": "2003", "arxiv_id": "2003.10112", "language": "en", "url": "https://arxiv.org/abs/2003.10112", "abstract": "We study the problem of one-dimensional regression of data points with total-variation (TV) regularization (in the sense of measures) on the second derivative, which is known to promote piecewise-linear solutions with few knots. While there are efficient algorithms for determining such adaptive splines, the difficulty with TV regularization is that the solution is generally non-unique, an aspect that is often ignored in practice. In this paper, we present a systematic analysis that results in a complete description of the solution set with a clear distinction between the cases where the solution is unique and those, much more frequent, where it is not. For the latter scenario, we identify the sparsest solutions, i.e., those with the minimum number of knots, and we derive a formula to compute the minimum number of knots based solely on the data points. To achieve this, we first consider the problem of exact interpolation which leads to an easier theoretical analysis. Next, we relax the exact interpolation requirement to a regression setting, and we consider a penalized optimization problem with a strictly convex data-fidelity cost function. We show that the underlying penalized problem can be reformulated as a constrained problem, and thus that all our previous results still apply. Based on our theoretical analysis, we propose a simple and fast two-step algorithm, agnostic to uniqueness, to reach a sparsest solution of this penalized problem.", "subjects": "Optimization and Control (math.OC); Signal Processing (eess.SP)", "title": "Sparsest Piecewise-Linear Regression of One-Dimensional Data" }
https://arxiv.org/abs/1907.09628
Limit Shape of Subpartition Maximizing Partitions
This is an expository note answering a question posed to us by Richard Stanley, in which we prove a limit shape theorem for partitions of $n$ which maximize the number of subpartitions. The limit shape and the growth rate of the number of subpartitions are explicit. The key ideas are to use large deviations estimates for random walks, together with convex analysis and the Hardy-Ramanujan asymptotics. Our limit shape coincides with Vershik's limit shape for uniform random partitions.
\section{Maximizing the number of subpartitions} \label{sec:1} Given a partition $\lambda = (\lambda_1 \ge ...\ge \lambda_k)$ of $n$, we can identify it with a $1$-Lipschitz function which is a finite perturbation of $|x|$ by following the Russian convention for drawing it. Specifically, start with the English convention for the Young diagram for $\lambda$ ($\lambda_1$ boxes on the top row, then $\lambda_2$ below it and so on, all justified to line up on the left) and rotate it by $135^\circ$. Then we place this rotated picture immediately adjacent to the graph of the function $x \mapsto |x|$ so that each box has unit length. This defines a $1$-Lipschitz function $g_{\lambda}(x)$ with the property that $g_{\lambda}(x) \ge |x|$ and $g_{\lambda}(x) = |x|$ for large $x$. We also define a rescaled version of $g_{\lambda}$ as $f_{\lambda}(x):= n^{-1/2}g_{\lambda}(n^{1/2}x)$ so that each box has side length $n^{-1/2}$ and area $n^{-1}$ when depicted beneath the graph of $f_{\lambda}$. In particular $\int_{\Bbb R} (f_{\lambda}(x)-|x|)dx = 1$. \\ \\ A \textit{subpartition} of a partition $\lambda = (\lambda_1 \ge ... \ge \lambda_k)$ is a partition $\mu = (\mu_1\ge ...\ge \mu_{\ell})$ such that $\ell \le k$ and $\mu_i \leq \lambda_i$ for all $i\le \ell$. Our main result is as follows. \begin{thm}[Theorems \ref{10} and \ref{9}]\label{mr} For each $n$, let $\lambda_n$ denote a partition of $n$ which maximizes the number of subpartitions among all other partitions of $n$. Then the number of subpartitions of $\lambda_n$ grows as $e^{\pi\sqrt{2n/3}-o(\sqrt{n})}$ as $n \to \infty$. Moreover $f_{\lambda_n}$ converges uniformly as $n\to \infty$ to the function $f(x) = \frac{2\sqrt{3}}{\pi}\log\big(2\cosh(\frac{\pi}{2\sqrt{3}}x)\big). $ \end{thm} \vspace{0.1 in} The limit shape here is known as \textit{Vershik's curve} and was first described as the limit of uniformly sampled partitions of $n$ in \cite{Ver96}. 
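Both the normalization $\int_{\Bbb R}(f_{\lambda}(x)-|x|)dx = 1$ and the counting problem above lend themselves to quick numerical checks. The following Python sketch (self-contained; the grid parameters are arbitrary choices) verifies that Vershik's curve also has unit area above $|x|$, consistent with it arising as the limit of the rescaled profiles $f_{\lambda_n}$, and counts the subpartitions of a small partition directly from the definition (here the empty partition is counted as a subpartition).

```python
import numpy as np
from functools import lru_cache

# Vershik's curve f(x) = (1/c) log(2 cosh(c x)) with c = pi/(2 sqrt(3));
# for x >= 0, f(x) - x = log(1 + exp(-2 c x)) / c, a numerically stable form.
c = np.pi / (2 * np.sqrt(3))
xs = np.linspace(0.0, 60.0, 600001)
excess = np.log1p(np.exp(-2.0 * c * xs)) / c
h = xs[1] - xs[0]
area = 2.0 * np.sum((excess[:-1] + excess[1:]) / 2.0) * h  # trapezoid rule; symmetric in x
print(round(area, 6))  # 1.0

# Number of subpartitions mu of lam (mu_1 >= mu_2 >= ..., mu_i <= lam_i),
# counting the empty partition as a subpartition.
def num_subpartitions(lam):
    lam = tuple(lam)
    @lru_cache(maxsize=None)
    def rec(i, cap):
        if i == len(lam):
            return 1
        return sum(rec(i + 1, v) for v in range(min(lam[i], cap) + 1))
    return rec(0, lam[0]) if lam else 1

print(num_subpartitions((2, 1)))  # 5: (), (1), (2), (1,1), (2,1)
```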
Our result can be shown by using \textit{large-deviations estimates} for uniformly sampled partitions of $n$ which were found in the follow-up paper \cite{DVZ98}. In particular, to prove Theorem \ref{mr}, first note by the Hardy-Ramanujan asymptotics that the number of subpartitions of any partition of $n$ is bounded above (up to a factor that is polynomial in $n$) by $e^{\pi \sqrt{2n/3}}$. We let $\mu_n$ be a partition of $n$ which is closest to Vershik's curve (after normalization by $\sqrt n$), among all other partitions of $n$. Fixing $\epsilon>0$, it follows from Theorem 1 of \cite{DVZ98} that for large enough $n$, ``most'' partitions of $\lceil (1-\epsilon) n\rceil$ are going to be subpartitions of $\mu_n$, which means that the number of subpartitions of $\mu_n$ is bounded below by $\frac1n e^{\pi \sqrt{2(1-\epsilon)n/3} - o(\sqrt n)}.$ Since $\epsilon$ can be made arbitrarily small, this gives tight bounds on the exponential scale which can then be used (via elementary topological arguments) to show that the maximizing partitions $\lambda_n$ are very close to $\mu_n$ on the $\sqrt n$ scale, so that the $\lambda_n$ also converge to Vershik's curve. \\ \\ The main purpose of this note is to exposit the power of large deviations theory in this particular context of partition/subpartition problems. Specifically, we are going to give a proof of Theorem \ref{mr}, which is essentially a more rigorous version of the sketch given in the preceding paragraph. However, our exposition is more self-contained and based entirely on foundational principles (specifically, we do not use \cite{DVZ98}, but instead rely on the result of Mogulskii \cite{Mog92}, which gives a large deviations rate function for the full sample path of a random walk with iid increments, and is arguably a central result of large deviations theory). \\ \\ We also have the following similar result for $k$-chains of subpartitions, i.e., simply ordered sets of $k$ subpartitions.
The ordering may be strict or weak; our results do not depend on this convention. \begin{thm}[Section \ref{sec:6}]\label{1.2} Let $k\ge 1$, and let $\lambda_n$ denote a partition of $n$ which maximizes the number of $k$-chains of subpartitions, among all partitions of $n$. Then the number of $k$-chains of subpartitions of $\lambda_n$ grows as $e^{k\pi\sqrt{2n/3} - o(\sqrt{n})}$ as $n \to \infty$. Furthermore $f_{\lambda_n}$ converges uniformly to the same limit shape as in Theorem \ref{mr}. \end{thm} We close out this introduction by noting a few questions that may warrant further study. In some cases, there are related results though we do not attempt to make a survey of them. \\ \\ One natural question is to consider fluctuations around limit curves, as done in \cite{Yak99, VFY99, VY01, IO03} for instance. For the problem we have considered, this is a bit difficult to phrase since for each $n$ we expect only a few maximizing partitions. On the other hand, if we let $s(\lambda)$ denote the number of subpartitions of $\lambda$, then we may, for $\beta\geq 0$, define a measure on partitions of $n$ with probability of $\lambda$ proportional to $s(\lambda)^\beta$. As $\beta\to \infty$, this measure concentrates on those $\lambda$ which maximize $s(\lambda)$, hence our problem. When $\beta=0$, this measure reduces to the uniform measure on partitions considered by Vershik. While we expect (in particular, based on our arguments in this paper) that the limit shape does not depend on $\beta$, it would be interesting to probe the dependence of the fluctuations around that shape on $\beta$. It might also be interesting to obtain concentration and large deviations bounds for such measures, as established in \cite{VK85, DVZ98} for instance. \\ \\ While there are many other types of measures on partitions, one of particular importance is the Plancherel measure.
This involves defining the dimension of $\lambda$ to be the number of standard Young tableaux of that shape. In terms of subpartitions, this is the number of strictly increasing $n$-chains of subpartitions in which no subpartition equals $\lambda$ itself. The Plancherel measure is then proportional to that dimension squared. For that measure, seminal and independent works of Logan-Shepp \cite{LS77} and Vershik-Kerov \cite{VK77} established a limit shape as $n\to \infty$ now known as the Logan-Shepp-Vershik-Kerov (LSVK) curve. This limit curve is not the same as Vershik's curve. Hence, a natural question is to find a way to interpolate the model so as to find limit shapes which likewise interpolate between these two curves. \\ \\ Theorem \ref{1.2} shows that taking $k$-chains for $k$ fixed does not achieve this aim of crossing over between the Vershik and LSVK curves. However, we speculate that taking $k=k(n)=cn^{1/2}$ may result in such a crossover. In fact, this problem can be reduced to a rhombus tiling limit shape problem for which there are some methods which may be useful. Another natural question involves increasing the dimension and considering higher dimensional partitions. In three dimensions, these would correspond to plane partitions, which are also nicely interpreted as rhombus tilings. \\ \\ \textbf{Acknowledgements:} The authors are grateful to Greg Martin and Richard Stanley who initiated a conversation on this question on MathOverflow two years ago, and in particular to Richard Stanley who posed this question to the first author of this work. I. Corwin was partially supported by a Packard Foundation Science and Engineering Fellowship as well as NSF grants DMS:1811143 and DMS:1664650. S. Parekh was partially supported by the Fernholz Foundation's ``Summer Minerva Fellows'' program, as well as summer support from I. Corwin's NSF grant DMS:1811143.
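Two facts quoted in this introduction are easy to sanity-check numerically: the Hardy-Ramanujan growth of the partition function $p(n)$, and the fact that Vershik's curve has unit area above the graph of $|x|$. A minimal stdlib-only Python sketch (our own; the helper names are not from the literature):

```python
from math import cosh, exp, log, pi, sqrt

def partition_counts(N):
    """p(0), ..., p(N) via Euler's pentagonal-number recurrence."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = 1 if k % 2 else -1
            total += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
        p[n] = total
    return p

p = partition_counts(200)

def hr(n):
    """Leading-order Hardy-Ramanujan approximation to p(n)."""
    return exp(pi * sqrt(2 * n / 3)) / (4 * sqrt(3) * n)

def vershik(x):
    """Vershik's curve, the limit shape in the main theorem."""
    c = pi / (2 * sqrt(3))
    return log(2 * cosh(c * x)) / c

def area_above_abs(f, X=50.0, N=100_000):
    """Trapezoid rule for 2 * int_0^X (f(x) - x) dx; the tail beyond X is negligible here."""
    h = X / N
    s = 0.5 * (f(0.0) - 0.0) + 0.5 * (f(X) - X)
    for k in range(1, N):
        s += f(k * h) - k * h
    return 2 * h * s
```

Here $p(200)=3972999029388$, roughly $0.97$ times the Hardy-Ramanujan approximation, and the computed area above $|x|$ agrees with $1$ to within the quadrature error.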
\\ \\ \textbf{Outline:} In Section \ref{sec:2} we will derive exponentially sharp upper bounds for the number of nearest-neighbor paths which stay below a given barrier. In Section \ref{sec:3} we introduce a certain functional which will describe the limit shape and the growth rate of the maximizing partitions; this functional appears naturally from the upper bounds of Section \ref{sec:2}. In Section \ref{sec:4} we prove the limit shape theorem abstractly (without identifying the limit shape explicitly), by using nice convexity properties of the functional defined in Section \ref{sec:3}. In Section \ref{sec:5} we use Lagrange multipliers and Hardy-Ramanujan asymptotics to derive the limit shape explicitly (thus completing the proof of Theorem \ref{mr}). In Section \ref{sec:6} we prove Theorem \ref{1.2}. \section{Preliminary upper bounds}\label{sec:2} First we introduce some notation. Throughout, $I$ will denote a subinterval of $\Bbb Z$ or of $\Bbb R$; the specific type of interval will always be made clear from the context. For a (continuous) function $f:I \to \Bbb R$, we define the \textbf{lower convex envelope} of $f$ to be the supremum of all convex functions which are less than or equal to $f$. Note that this is a convex function, which is also the supremum of a countable number of linear functions which are equal (and in fact tangent, if $I=[0,1]$) to $f$ at certain special points. We also define the \textbf{\textit{decreasing} lower convex envelope} to be the supremum of all \textit{decreasing} convex functions less than or equal to $f$, which is a (weakly) decreasing convex function. \\ \\ Our first lemma is elementary (albeit tedious to state precisely) and says that the lower convex envelope necessarily optimizes a certain type of convex functional over the set of functions less than a given one. \begin{lem}\label{0} Let $\psi: \Bbb R \to \Bbb R\cup\{+\infty\}$ be a convex function. Let $I$ be the discrete interval $\{a,a+1,...,b\} \subset \Bbb Z$.
We let $C(I)$ denote the space of all functions from $I \to \Bbb R$. Define a functional $J: C(I) \to \Bbb R$ by the formula $$J(f):= \sum_{i\in I\backslash\{a\}} \psi\big(f(i)-f(i-1)\big).$$ Fix some $f\in C(I)$, and let $K_{f}:= \{g\in C(I): g\leq f, g(a)=f(a),g(b)=f(b)\}$. Then one has that $\inf_{g \in K_f} J(g) = J(h),$ where $h$ is the lower convex envelope of $f$. Similarly, if $\bar K_f:= \{g \in C(I): g\le f, g(a)=f(a)\}$, and if we also assume that $\psi$ achieves its minimum at $0$, then $\inf_{g\in \bar K_f} J(g) = J(\bar h)$, where $\bar h$ is the decreasing lower convex envelope of $f$. \end{lem} \begin{proof} We will work with $K_f$ rather than $\bar K_f$, briefly indicating the necessary modifications at the end of the proof. The argument is essentially a geometric one which proceeds in two steps. \\ \\ \textit{Step 1.} First, we show that $J(f) \geq J(h)$ whenever $f(a) = h(a)$, $f(b)=h(b)$, and $h$ is the lower convex envelope of $f$. Let $C:= \{x \in I: f(x)=h(x)\}$. The complement of $C$ is the union of some finite collection of disjoint intervals $\bigcup_{n=1}^N (a_n,b_n) \cap \Bbb Z$. On each interval $(a_n,b_n)\cap \Bbb Z$ it is clear from the definition of the lower convex envelope that $h$ is just a linear function, i.e., $h(x) = \frac{x-a_n}{b_n-a_n}f(b_n) + \frac{b_n-x}{b_n-a_n}f(a_n)$ for $x \in [a_n,b_n]$. By Jensen's inequality, one sees that $$\sum_{i=a_n+1}^{b_n}\psi(f(i)-f(i-1)) \geq (b_n-a_n) \psi\big(\frac{f(b_n)-f(a_n)}{b_n-a_n}\big) = \sum_{i=a_n+1}^{b_n} \psi(h(i)-h(i-1)).$$ This is already enough to prove Step 1, since $f$ coincides with $h$ outside of the intervals $[a_n,b_n]$. \\ \\ \textit{Step 2.} Second, we show that $J(h) \ge J(k)$ whenever $h,k$ are both convex functions with the property that $h(a) = k(a)$, $h(b)=k(b)$, and $h \le k$.
To do this, we inductively define a sequence $\{h_j\}_{j=a}^b$ of functions: $h_a=h$, and $$h_{j+1}(x) = \max\{h_j(x), (x-j+1)k(j)+(j-x)k(j-1)\}.$$ In more geometric terms, we are simply taking $h_{j+1}$ to be the maximum of $h_j$ with the ``tangent line'' to $k$ at $\{j-1,j\}.$ In particular each $h_j$ is convex, and it follows from convexity of $k$ that $h_b = k$. Thus the claim will be proved if we can show that $J(h_j) \geq J(h_{j+1})$ for all $j\in\{a,...,b-1\}$. But this is clear, because $h_j(x)$ agrees with $h_{j+1}(x)$ except for $x$ in some interval $[u,v]$ where it equals $\frac{x-u}{v-u}h_j(v) + \frac{v-x}{v-u}h_j(u).$ Hence the same argument from Step 1 (using Jensen's inequality) applies to show $J(h_j) \ge J(h_{j+1})$. This completes the proof of Step 2. \\ \\ Step 1 and Step 2 easily imply the claim because if $g \le f$ with $g(a) = f(a)$ and $g(b)=f(b)$, and if $h$ and $k$ are their respective lower convex envelopes (so that $h \le k$), then we have that $J(g) \geq J(h) \geq J(k)$, where the first inequality is from Step 1 and the second is from Step 2. \\ \\ Now suppose we replace $K_f$ by $\bar K_f$. Let $c \in \{a,...,b\}$ be a point at which $f$ achieves its minimum value. Let $h$ and $\bar h$ denote the lower convex envelope and decreasing lower convex envelope (respectively) of $f$. Note that $h = \bar h$ on $\{a,...,c\}$, and $h(c) = f(c)$, and therefore if $g \le f$ then the above argument gives $\sum_{a+1}^c \psi(g(i)-g(i-1)) \ge \sum_{a+1}^c \psi(\bar h(i)-\bar h(i-1)).$ On the other hand, note that $\bar h(x) = f(c)$ for $x \in \{c,...,b\}$, and thus by assuming that $\psi$ achieves its minimum at $0$, we get that $\sum_{c+1}^b \psi(\bar h(i)-\bar h(i-1)) = \sum_{c+1}^b \psi(0) \leq \sum_{c+1}^b \psi(g(i)-g(i-1)),$ as desired. \end{proof} \begin{lem}\label{1} Let $f:\{0,...,n\} \to \Bbb R$ with $f(0)=0$. Let $S$ denote a simple symmetric nearest-neighbor random walk on $\Bbb Z$. Also, let $g$ denote the decreasing lower convex envelope of $f$.
We also let $\Lambda^*$ be the large deviation rate function associated with $S$, which means that $\Lambda^*$ is the Legendre transform of $\lambda \mapsto \log \Bbb E[e^{\lambda S_1}]$. Then \begin{align}\label{eq:star} \Bbb P\big(S_i \leq f(i), \forall i \leq n\big) \leq e^{-\sum_{i=1}^n \Lambda^*(g(i)-g(i-1))}.\end{align} \end{lem} \begin{proof} The proof uses a standard method for obtaining LDP upper bounds \cite{DZ}. Note that for real numbers $\lambda_1,...,\lambda_n$, and any Borel set $C \subset \Bbb R^n$, \begin{align*}\inf_{ x \in C} e^{\sum_1^n \lambda_i (x_i-x_{i-1})} \Bbb P(S \in C) &\leq \Bbb E[ e^{\sum_1^n \lambda_i(S_i-S_{i-1})} ] = e^{\sum_{i=1}^n \Lambda(\lambda_i)}, \end{align*} where $\Lambda(\lambda) = \log \Bbb E[e^{\lambda S_1}]$ and we impose that $x_0:=0$ in the relevant sum. Rearranging this gives us $$\Bbb P(S \in C) \leq e^{-\inf_{x \in C} \sum_1^n \lambda_i(x_i-x_{i-1}) - \Lambda(\lambda_i)}.$$ Now we optimize over all $\lambda_1,...,\lambda_n$. If we assume that $C$ is compact and convex we can use the minimax theorem for concave-convex functions \cite{Si58} to interchange the sup over $\lambda$ with the inf over $x$, specifically \begin{align}\label{eq:SC} \Bbb P(S \in C) &\leq e^{-\sup_{\lambda \in \Bbb R^n} \inf_{x \in C} \sum_1^n \lambda_i(x_i-x_{i-1}) - \Lambda(\lambda_i)}\\ \nonumber&\le e^{-\inf_{x \in C} \sup_{\lambda \in \Bbb R^n} \sum_1^n \lambda_i(x_i-x_{i-1}) - \Lambda(\lambda_i)}\\ \nonumber& \leq e^{-\inf_{x\in C} \sum_1^n \sup_{\lambda \in \Bbb R} \big(\lambda(x_i-x_{i-1}) - \Lambda(\lambda)\big)} \\ \nonumber&= e^{-\inf_{x \in C} \sum_1^n \Lambda^*(x_i-x_{i-1})}. \end{align} Now we let $C = \{x \in \Bbb R^n: -i\leq x_i \leq f(i), \forall i\}$, which is clearly compact and convex. Note that $S\in C$ is equivalent to the left-hand side of \eqref{eq:star} (owing to the fact that $S$ only takes $\pm 1$ sized jumps). 
Applying \eqref{eq:SC} and using Lemma \ref{0} to show that $\inf_{x\in C} \sum_1^n \Lambda^*(x_i-x_{i-1}) = \sum_1^n \Lambda^*(g(i)-g(i-1))$, we arrive at \eqref{eq:star}. \end{proof} \begin{cor}\label{2} Let $f:\{0,...,n\} \to \Bbb R$ with $f(0)=f(n)=0$, and let $g$ denote the lower convex envelope of $f$ (not the decreasing one). Then the number of nearest neighbor bridges which stay below $f$ (i.e., functions $\gamma: \{0,...,n\} \to \Bbb Z$ such that $\gamma(0)=\gamma(n)=0$, and $|\gamma(i)-\gamma(i-1)|=1$, and $\gamma(i) \leq f(i)$ for all $i$) is bounded above by $ 2^ne^{-\sum_{i=1}^n \Lambda^*(g(i)-g(i-1))}.$ \end{cor} \begin{proof} Let us pick a point $k \in \{0,...,n\}$ at which $g$ attains its minimum value. Note that $g(k)=f(k)$. Note that by Lemma \ref{1}, the number of nearest neighbor paths of length $k$ starting from 0 and lying below $f|_{\{0,...,k\}}$ is less than or equal to $2^ke^{-\sum_1^k \Lambda^*(g(i)-g(i-1))}$. Similarly (after reversing time) the number of nearest neighbor paths of length $n-k$ starting from $0$ at time $n$ and lying below $f|_{\{k+1,...,n\}}$ is less than or equal to $2^{n-k} e^{-\sum_{k+1}^n \Lambda^*(g(i)-g(i-1))}$. Note that the number of bridge paths of length $n$ lying below $f$ is at most the number of pairs of paths $(\gamma,\gamma')$ where $\gamma$ is of the former type and $\gamma'$ is of the latter type. Thus the total number of such bridges is bounded above by the product of the two individual upper bounds, which equals $2^n e^{-\sum_1^n \Lambda^*(g(i)-g(i-1))}$. \end{proof} An important thing to keep in mind is that the bounds of Lemma \ref{1} and Corollary \ref{2} are actually sharp up to some subexponential decay factor (see Section \ref{sec:4}). At an intuitive level, what this says is that if we condition a random walk to stay underneath a fixed barrier, then the path which minimizes the energy of the random walk is none other than the lower convex envelope of that barrier.
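Corollary \ref{2} can be tested directly on small barriers. The sketch below (our own illustration, using the explicit formula $\Lambda^*(x)=\frac{1+x}{2}\log(1+x)+\frac{1-x}{2}\log(1-x)$ for the simple symmetric walk) compares a brute-force count of bridges below a barrier with the bound $2^n e^{-\sum_{i=1}^n \Lambda^*(g(i)-g(i-1))}$:

```python
from math import exp, log

def rate(x):
    # Lambda*(x) for the simple symmetric walk (Legendre transform of log cosh),
    # with boundary values Lambda*(+-1) = log 2
    if abs(x) == 1:
        return log(2)
    return (1 + x) / 2 * log(1 + x) + (1 - x) / 2 * log(1 - x)

def lower_convex_envelope(vals):
    # lower convex hull of the points (i, vals[i]), evaluated back at 0..len(vals)-1
    hull = [(0, vals[0])]
    for i, v in enumerate(vals[1:], start=1):
        hull.append((i, v))
        while len(hull) >= 3:
            (x0, y0), (x1, y1), (x2, y2) = hull[-3:]
            if (y1 - y0) * (x2 - x1) >= (y2 - y1) * (x1 - x0):
                del hull[-2]  # middle point is not a vertex of the lower hull
            else:
                break
    env, j = [], 0
    for i in range(len(vals)):
        while j + 1 < len(hull) and hull[j + 1][0] <= i:
            j += 1
        if hull[j][0] == i:
            env.append(hull[j][1])
        else:
            (x0, y0), (x1, y1) = hull[j], hull[j + 1]
            env.append(y0 + (y1 - y0) * (i - x0) / (x1 - x0))
    return env

def count_bridges_below(f):
    # number of gamma with gamma(0) = gamma(n) = 0, steps +-1, gamma(i) <= f(i)
    counts = {0: 1}
    for i in range(1, len(f)):
        new = {}
        for h, c in counts.items():
            for h2 in (h - 1, h + 1):
                if h2 <= f[i]:
                    new[h2] = new.get(h2, 0) + c
        counts = new
    return counts.get(0, 0)

f = [0, -1, -2, -1, -2, -1, -2, -1, 0]  # a sawtooth barrier with f(0) = f(n) = 0
g = lower_convex_envelope(f)
n = len(f) - 1
bound = 2 ** n * exp(-sum(rate(g[i] - g[i - 1]) for i in range(1, n + 1)))
```

For this barrier the exact count is $5$ while the bound evaluates to $16$; the gap reflects the subexponential factor mentioned above.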
Another thing to keep in mind is that the bounds of this section hold \textit{uniformly} over all partitions, which makes them a little bit stronger than ordinary LDP upper bounds. \section{The functional describing the limit shape}\label{sec:3} For a partition $\lambda$, one recalls the definitions of $f_{\lambda}$ and $g_{\lambda}$ given at the beginning of Section \ref{sec:1}. A $1$-Lipschitz function will always refer to a real-valued function $f$ with the property that $|f(x)-f(y)|\leq |x-y|$, or equivalently $f$ is absolutely continuous and $|f'|\leq 1$. \\ \\ Let us now estimate (or at least upper bound) the number of subpartitions of a given partition. Each subpartition of a given $\lambda$ can be interpreted as a trajectory of a simple symmetric random walk \textit{bridge} which stays below the graph of $g_{\lambda}$ (or alternatively of $f_{\lambda}$ after rescaling). By Corollary \ref{2}, the number of such bridges can be upper bounded quite easily. Specifically, let $h_{\lambda}$ denote the lower convex envelope of $f_{\lambda}$, and let $k$ denote a large enough integer so that $g_{\lambda}(x) = |x|$ whenever $|x| \ge k$. Then by Corollary \ref{2} we know that the number of subpartitions of $\lambda$ (i.e., the number of random walk bridges with unit steps which lie in between the graphs of $g_{\lambda}(x)$ and $|x|$) is upper bounded by \begin{align} &\;\;\;\;\;2^{2k} e^{-\sum_{i=-k}^{k} \Lambda^*\big(n^{1/2}\big[h_{\lambda}(n^{-1/2}i)-h_{\lambda}(n^{-1/2}(i-1))\big]\big)}\notag \\&= e^{\sum_{-k}^{k} \big[\log 2 - \Lambda^*\big(n^{1/2}\big[h_{\lambda}(n^{-1/2}i)-h_{\lambda}(n^{-1/2}(i-1))\big]\big)\big]} = e^{\sqrt{2n}\int_{\Bbb R} \phi(h_{\lambda}'(x)) dx},\label{U} \end{align} where in the final equality we are using the piecewise linearity of $h_{\lambda}$ and defining $\phi(x) := \log 2 - \Lambda^*(x)$. This function $\phi$ will be very important in the ensuing analysis.
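For the simple symmetric walk, $\Lambda^*$ has the explicit form $\Lambda^*(x)=\frac{1+x}{2}\log(1+x)+\frac{1-x}{2}\log(1-x)$ on $[-1,1]$, so $\phi = \log 2 - \Lambda^*$ is a binary entropy function. The short Python check below (our own sketch, not from the paper) numerically verifies the properties of $\phi$ recorded in the next paragraph, together with the identity $\phi'(x) = -\tanh^{-1}x$ used later:

```python
from math import atanh, log

def rate(x):
    # Lambda*(x) = ((1+x)/2) log(1+x) + ((1-x)/2) log(1-x), with Lambda*(+-1) = log 2
    if abs(x) == 1:
        return log(2)
    return (1 + x) / 2 * log(1 + x) + (1 - x) / 2 * log(1 - x)

def phi(x):
    return log(2) - rate(x)

xs = [k / 100 for k in range(-99, 100)]
# even, maximized at 0 with value log 2, vanishing at the endpoints
assert all(abs(phi(x) - phi(-x)) < 1e-12 for x in xs)
assert abs(phi(0) - log(2)) < 1e-12 and phi(1) == 0 and phi(-1) == 0
# phi'(x) = -atanh(x), checked with a centered difference
h = 1e-6
assert all(abs((phi(x + h) - phi(x - h)) / (2 * h) + atanh(x)) < 1e-5 for x in xs)
# concavity: second differences are strictly negative in the interior
assert all(phi(x + 0.01) - 2 * phi(x) + phi(x - 0.01) < 0 for x in xs[1:-1])
```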
In particular, note that $\phi(x)$ is a concave and even function defined on $[-1,1]$ which achieves its maximum value of $\log 2$ at $x=0$, and its minimum of $0$ at $x=\pm 1$. \\ \\ The functional $f \mapsto \int_{\Bbb R} \phi\circ f'$ appearing in \eqref{U} will describe the optimal rate of growth of the number of subpartitions, as we will show in the following section. Therefore the remainder of this section will be devoted to analyzing this functional. To start, we make the following important definition: \begin{defn}\label{6} We define $\mathcal X$ to be the space of all $1$-Lipschitz functions $f: \Bbb R\to \Bbb R$ such that $f(x) \ge |x|$ and furthermore $\int_{\Bbb R} (f(x)-|x|)dx \le 1$. We equip $\mathcal X$ with the topology of uniform convergence on compact sets. Furthermore, we define the functional $F: \mathcal X \to \Bbb R_+$ by $F(h):= \int_{\Bbb R}\phi \circ h'$, where $\phi := \log 2 - \Lambda^*$ and $h'$ is the derivative of $h$. \end{defn} A few remarks are in order about this definition. First, note that $\mathcal X$ is a compact space. Indeed, this is a consequence of the Arzel\`a--Ascoli theorem: equicontinuity is obvious, and pointwise boundedness follows from the integral condition on elements of $\mathcal X$ combined with the $1$-Lipschitz property (in fact any $f \in \mathcal X$ is bounded above by $x \mapsto \sqrt{x^2+2}$, since this curve is the locus of points for which the rectangle with sides parallel to the lines $y=\pm x$, having one vertex at that point and the opposite vertex at the origin, has area exactly $1$). \\ \\ Second, we remark that even though we equipped $\mathcal X$ with the topology of uniform convergence on compact sets, this convergence is actually equivalent to uniform convergence on all of $\Bbb R$. This once again follows from the fact that for all $f\in\mathcal X$ one has that $|x|\le f(x) \leq \sqrt{x^2+2}$, and also because of the fact that $\sqrt{x^2+2}-|x| \to 0$ as $|x|\to \infty$.
In particular it is true that $\mathcal X$ is a complete metric space with respect to the uniform metric $$d(f,g) = \sup_{x\in\Bbb R} |f(x)-g(x)|.$$ The completeness is a consequence of Fatou's Lemma (to ensure that the value of the integral remains $\leq 1$ after taking a limit). This metric will be used very briefly in the proof of Theorem \ref{10}. \\ \\ Third, it is not immediately clear that the integral defining the functional $F(f)$ actually converges for every $f\in \mathcal X$, but this will be taken care of by the following proposition, which also highlights the nicest and most important property of $F$, and will crucially be used later. \begin{prop}\label{3} The integral defining the functional $F$ converges for every $f \in \mathcal X$. Furthermore, $F$ is upper semicontinuous on $\mathcal X$. \end{prop} \begin{proof} We will prove that if $f_n$ is a sequence of functions in $\mathcal X$ such that $f_n \to f$ uniformly, then $\limsup_{n \to \infty} F(f_n) \leq F(f)<\infty$. The key difficulty here is that $F$ is defined on functions on the whole real line, which is not compact. The proof will therefore proceed in two steps: first we replace $\Bbb R$ with a large compact interval and prove the upper semicontinuity in this simpler case; second we prove a certain ``tightness'' property \eqref{b} for functions in $\mathcal X$ which will simultaneously also show that the integral defining $F(f)$ necessarily converges for all $f\in\mathcal X$. \\ \\ The first step is to show that for each fixed (large) $A>0$ one has that \begin{equation}\label{a}\limsup_{n \to \infty} \int_{[-A,A]} \phi \circ f_n' \leq \int_{[-A,A]} \phi \circ f'. \end{equation}The proof of this is quite standard, and purely topological (i.e., it does not rely on properties of the space $\mathcal X$). Nevertheless we include a proof of \eqref{a} for completeness.
\\ \\ For simplicity, let us replace the interval $[-A,A]$ by $[0,1]$ (the same argument works in the former case with some extra scaling factors). Let $\mathcal X[0,1]$ denote the space of $1$-Lipschitz functions on $[0,1]$ equipped with the uniform topology. We will show that the functional $G(f):= \int_0^1 \phi\circ f'$ is upper semicontinuous from $\mathcal X[0,1]\to \Bbb R$. To prove this it suffices to write $G$ as the infimum of some collection of continuous functionals. To do this, we consider partitions $\mathcal P = (0\leq t_1 \leq ... \leq t_n \leq 1)$ of $[0,1]$, and we define $G_{\mathcal P}(f):= \sum_1^n (t_i-t_{i-1}) \phi \big( \frac{f(t_i)-f(t_{i-1})}{t_i-t_{i-1}} \big)$. It is then clear that each $G_{\mathcal P}$ is continuous from $\mathcal X[0,1] \to \Bbb R$. We then claim that $G = \inf_{\mathcal P} G_{\mathcal P}$ (where the infimum is taken over all partitions of $[0,1]$) which would prove upper semicontinuity. To prove this equality, first note by Jensen's inequality and concavity of $\phi$ that for all $a<b$ and all $f$ one has $\int_a^b \phi \circ f' \leq (b-a) \phi\big( \frac{f(b)-f(a)}{b-a}\big)$, which proves that $G \leq \inf_{\mathcal P} G_{\mathcal P}$. To prove the other direction, we define the partition $\mathcal P_n$ to be the one consisting of dyadic intervals $[k2^{-n},(k+1)2^{-n})$ with $0\le k \le 2^n-1$. For a $1$-Lipschitz function $f$ let $f_n$ denote the continuous function with $f_n(0)=0$ and whose derivative $f_n'(x)$ takes the constant value $2^n\big(f((k+1)2^{-n})-f(k2^{-n})\big)$ for $x \in [k2^{-n},(k+1)2^{-n})$. Note that $f_n'$ forms a bounded martingale with respect to the dyadic filtration on the probability space $[0,1]$ (i.e., the filtration associated with the nested family of partitions $\{\mathcal P_n\}_n)$. Consequently $f_n'$ converges to $f'$ a.e, and thus $\phi\circ f_n' \to \phi\circ f'$ a.e. 
Hence by the bounded convergence theorem we conclude that $G_{\mathcal P_n}(f) = \int_0^1 \phi \circ f_n' \to \int_0^1 \phi \circ f' = G(f)$. This shows that $G \ge \inf_{\mathcal P} G_{\mathcal P}$. This proves upper semicontinuity of $G$ and in turn also proves \eqref{a}. \\ \\ Now given that \eqref{a} holds, we want to take $A \to \infty$, but this involves a nontrivial interchange of limits and this is where noncompactness of the real line gets in the way. So now we actually need to use special properties of the space $\mathcal X$. \\ \\ We will show that for every $\epsilon>0$, there exists some $A=A(\epsilon)>0$ (large) so that for all $f \in \mathcal X$ one has that \begin{equation}\label{b} \int_{\Bbb R\backslash [-A,A]} \phi \circ f' < \epsilon. \end{equation} Note that together with \eqref{a}, this is enough to complete the proof that $\limsup_n F(f_n) \leq F(f)$. The key here is, of course, that the bound of \eqref{b} is uniform over \textit{all} functions $f \in \mathcal X$. Note that \eqref{b} also shows that $F(f)<\infty$ for all $f\in\mathcal X$. \\ \\ To prove \eqref{b}, we first note that if $f$ is $1$-Lipschitz, then $f(x)-x$ is necessarily (weakly) decreasing for $x \ge 0$, thus \begin{align}\label{eq:fndiff}1-f(n)+f(n-1) = \big(f(n-1)-(n-1)\big)-\big(f(n)-n\big) \ge 0 \quad\textrm{for } n\geq 1.\end{align} The condition that $\int_{\Bbb R} (f(x)-|x|)dx \leq 1$ shows that $\sum_{n\ge 0} \big(f(n)-n\big) \leq 3$ (e.g., via an integral comparison test, since we know $f(n)-n$ is decreasing and $f(0)\le \sqrt{2}<2$). Then for all $N \ge 1$ we find that \begin{align*} &\;\;\;\;\;\sum_{n=1}^N n(1-f(n)+f(n-1))= \sum_{n=1}^N \sum_{k=1}^n (1-f(n)+f(n-1)) \\&= \sum_{k=1}^N \sum_{n=k}^N (1-f(n)+f(n-1)) = \bigg[\sum_{k=1}^N \big(f(k-1)-(k-1)\big)\bigg] - N(f(N)-N), \end{align*} where in the last line we used \eqref{eq:fndiff} so that the inner sum telescopes.
Since $N(f(N)-N)\ge 0$ we can upper bound the last expression by $\sum_{k \ge 0} (f(k)-k).$ Hence we can let $N \to \infty$ in the preceding expression and we see that \begin{align}\label{eq:fk3} \sum_{n \ge 1} n (1-f(n)+f(n-1)) \leq \sum_{k\ge 0} \big(f(k)-k\big) \leq 3.\end{align} \\ Appealing to the definition of $\phi(x)$ we see that in $(-1,1)$, $\phi^\prime(x)=-\tanh^{-1}x$, which has logarithmic singularities at $\pm 1$. Thus, it follows that $\phi$ asymptotically looks like $|x\mp 1|\log|x\mp 1|$ near $x=\pm 1$, i.e., $\lim_{x \to \pm 1} \frac{\phi(x)}{|x\mp 1|\log|x\mp 1|}$ will be a finite nonzero value. Since $|\log x| \leq Cx^{-1/3}$ near $x=0$, this implies that there exists some $C>0$ such that $\phi(x) \leq C(1-|x|)^{2/3}$ for all $x\in [-1,1]$. In particular, for all $A \ge 0$ one has \begin{align*} \sum_{n \ge A} \phi\big( f(n)-f(n-1)\big) &\leq C\sum_{n\ge A} \big(1-f(n)+f(n-1)\big)^{2/3} \\ &\leq C\bigg(\sum_{n \geq A} n^{-2}\bigg)^{1/3} \bigg( \sum_n n(1-f(n)+f(n-1)) \bigg)^{2/3} \\ &\leq C\cdot A^{-1/3}\cdot 3^{2/3}. \end{align*} For the second inequality we use the fact that if $a_n$ are nonnegative real numbers such that $\sum_n na_n<\infty$, then by H\"older's inequality $\sum_{n\ge A} a_n^{2/3} \leq \big( \sum_{n\ge A} na_n\big)^{2/3} \big(\sum_{n\ge A} n^{-2}\big)^{1/3}$. The final inequality uses the bound derived in \eqref{eq:fk3}, as well as $\sum_{n \ge A} n^{-2} \leq A^{-1}$. \\ \\ To close out our proof, observe that Jensen's inequality and the concavity of $\phi$ show that $\int_{[n-1,n]} \phi \circ f' \leq \phi(f(n)-f(n-1))$. This, together with the preceding arguments, then shows that $$\int_A^{\infty} \phi\circ f' \leq \sum_{n=A}^{\infty} \phi\big( f(n)-f(n-1)\big) \lesssim A^{-1/3},$$ independently of $f$, which finally proves \eqref{b}. \end{proof} At this point it is important to remark that Proposition \ref{3} is \textbf{not} just some technical and otherwise unimportant intermediate step.
It is really where the ``meat'' of the proof of the limit shape (Theorem \ref{mr}) lies. Specifically, the important thing here is the second half of the proof where we prove a type of ``tightness'' estimate \eqref{b} for functions in $\mathcal X$. In terms of partitions, what it shows (in an equivalent formulation) is that the sequence of partitions maximizing the number of subpartitions stays bounded on the $n^{1/2}$ scale, i.e., that the sequence $f_{\lambda_n}$ from Theorem \ref{mr} does not lose any mass in the limit (meaning that any subsequential limit $f$ of $f_{\lambda_n}$ satisfies $\int(f(x)-|x|)dx=1$). We remark that the bound $A^{-1/3}$ appearing at the end of the proof may actually be improved optimally to $\frac{\log A}{A}$, but this is slightly more difficult. \\ \\ Combining Proposition \ref{3} with the compactness of the space $\mathcal X$, we obtain the following key result. \begin{cor}\label{4} The functional $F$ from Definition \ref{6} admits a maximum $M(F)$ on the space $\mathcal X$. There is a unique function $f$ at which the maximum is attained, and this maximizer $f$ is a convex and symmetric function (i.e., $f(x) = f(-x)$) and moreover $\int_{\Bbb R} (f(x)-|x|)dx = 1$. \end{cor} \begin{proof} Any upper semicontinuous function on a compact space achieves its maximum. \\ \\ The uniqueness of the maximizer is a concavity property. Specifically we note that $\phi$ is a strictly concave function, meaning $\phi((1-t)a+tb) > (1-t)\phi(a) + t\phi(b)$ whenever $t\in(0,1)$ and $a\ne b$. This then easily implies that $F((1-t)f+tg) > (1-t)F(f)+tF(g) $ for $t\in (0,1)$ and $f \ne g$. Clearly this rules out the existence of two distinct maxima. \\ \\ Symmetry is another consequence of concavity. Specifically, if the maximizer $f$ were not symmetric, then we can define its reflection $f_s(x):=f(-x)$.
Clearly $F(f_s) = F(f)$ and thus if $f \neq f_s$ then as above we have that $F(\frac12 f+\frac12 f_s) > \frac12 F(f) + \frac12 F(f_s) = F(f),$ which is a contradiction. \\ \\ Let $f$ be the maximizer. To prove that $\int(f(x)-|x|)dx=1$, suppose that this integral took some value $\alpha<1$. Then we let $h(x) := \alpha^{-1/2}f(\alpha^{1/2}x)$. Clearly $\int (h(x)-|x|)dx = 1$, and $h$ is $1$-Lipschitz. Moreover a simple substitution reveals that $F(h) = \alpha^{-1/2}F(f) >F(f)$, which is a contradiction. \\ \\ To prove convexity of $f$, suppose (for contradiction) that $a,b$ are two points of $\Bbb R$ such that there is a linear function $\ell$ equal to $f$ at both $a$ and $b$, and such that $\ell<f$ on $(a,b)$. We define $h$ to be equal to $f$ on $\Bbb R\backslash [a,b]$, and equal to $\ell$ on $[a,b]$. Then by Jensen's inequality one has that $\int_a^b \phi \circ f' < (b-a) \phi\big( \frac{f(b)-f(a)}{b-a}\big) = \int_a^b \phi \circ h'$, which means that $F(f)<F(h)$; a contradiction. This completes the proof. \end{proof} \section{The limit shape theorem}\label{sec:4} Note that in \eqref{U} we already proved that for any sequence $\lambda_n$ of partitions of $n$, the number of subpartitions is bounded above by $e^{\sqrt{2n}M(F)}$ where $M(F)$ is the maximum value of the functional $F$ from above. A natural question is whether there exists a sequence of partitions for which the number of subpartitions actually grows at this optimal rate. It turns out that the answer is yes (up to some subexponential factor which is irrelevant), which retrospectively justifies why we performed such an in-depth analysis of the functional $F$ in the first place. \begin{prop}\label{5} There exists a sequence of partitions $\mu_n$ of $n$ such that the number of subpartitions of $\mu_n$ actually grows as $e^{\sqrt{2n}M(F) - o(\sqrt{n})}$ as $n \to \infty$.
\end{prop} The key behind proving this proposition is Mogulskii's theorem \cite{Mog92}, which is really the primary underlying idea behind this entire work. This result essentially says that the bound in \eqref{U} (and also in Lemma \ref{1} and Corollary \ref{2}) is actually \textit{sharp} (again, up to some subexponential factor which is not relevant to us). But before getting to the proof, let us first prove the following important corollary. \begin{thm}[Limit shape theorem]\label{10} Let $\lambda_n$ and $f_{\lambda_n}$ be as in Theorem \ref{mr}. As $n \to \infty$, the sequence $f_{\lambda_n}$ converges uniformly to the unique maximizer $f_{\max}$ of the functional $F$ from Definition \ref{6}. \end{thm} \begin{proof} Let $s(\lambda_n)$ denote the number of subpartitions of $\lambda_n$, and let $h_{\lambda_n}$ denote the lower convex envelope of $f_{\lambda_n}$. By Proposition \ref{5} and equation \eqref{U} we have that $$e^{\sqrt{2n}M(F) - o(\sqrt{n})} \leq s(\lambda_n) \leq e^{\sqrt{2n} F(h_{\lambda_n})} \leq e^{\sqrt{2n}M(F)},\;\;\;\;\;\;\;\;\;\text{as}\;\; n\to \infty,$$ which means that $M(F)-o(1) \leq F(h_{\lambda_n}) \leq M(F)$ as $n \to \infty$. \\ \\ Thus we see that $F(h_{\lambda_n}) \to M(F)$ as $n \to \infty$. This is already enough to imply that $h_{\lambda_n} \to f_{\max}$ uniformly on compact sets as $n \to \infty.$ Indeed it is true that for every $\epsilon>0$ there exists $\delta>0$ such that (for all $f \in \mathcal X$) $F(f)>M(F)-\delta$ implies that $d(f,f_{\max})<\epsilon$ (here $d$ denotes the metric on $\mathcal X$ which was specified following Definition \ref{6}). If this were not the case then we could choose an $\epsilon>0$ such that $\sup_{d(f_{\max},g)\ge \epsilon} F(g) = M(F)$. But the set $\mathcal A$ of functions $g \in \mathcal X$ such that $d(f_{\max},g)\ge \epsilon$ is again a compact subset of $\mathcal X$ (being a closed subset of $\mathcal X$).
Furthermore $F$ is still an upper semicontinuous function on $\mathcal A$, hence it achieves its maximum value, which we already know is $M(F)$. Then there exists some $g_{\max} \in \mathcal A$ such that $F(g_{\max}) = M(F)$, which clearly contradicts uniqueness of the maximizer since $d(f_{\max},g_{\max})\ge \epsilon$ by construction. \\ \\ So we have proved that the convex envelopes $h_{\lambda_n}$ (though not necessarily the functions $f_{\lambda_n}$ themselves) converge uniformly to $f_{\max}$. Note that since $f_{\lambda} \geq h_{\lambda}$ (by definition of the convex envelope) we have \begin{align*} \int_{\Bbb R} |f_{\lambda_n} - h_{\lambda_n}| &=\int_{\Bbb R} (f_{\lambda_n}(x)-h_{\lambda_n}(x)) dx \\ &= \int_{\Bbb R} \big((f_{\lambda_n}(x)-|x|) - (h_{\lambda_n}(x) - |x|)\big)dx \\ &= 1- \int_{\Bbb R} (h_{\lambda_n}(x)-|x|)dx. \end{align*} Now $h_{\lambda_n}$ converges to $f_{\max}$ and by Corollary \ref{4} we know that $\int(f_{\max}(x)-|x|)dx = 1$, therefore by applying the preceding calculation and then Fatou's Lemma, we see that $$\limsup_n \int_{\Bbb R} |f_{\lambda_n}-h_{\lambda_n}| = 1-\liminf_n \int_{\Bbb R} (h_{\lambda_n}(x)-|x|)dx \leq 1-\int (f_{\max}(x)-|x|)dx = 0.$$ Therefore $\|f_{\lambda_n}-h_{\lambda_n}\|_{L^1(\Bbb R)} \to 0$, and since all functions are $1$-Lipschitz, this $L^1$ convergence also implies uniform convergence. \end{proof} Although this abstractly proves convergence to some limit shape, we still do not know anything about what the limit shape looks like geometrically. For instance, is it bounded, and if so, is it a triangular shape or something more complicated? This question will be addressed in the following section. \\ \\ Let us now turn to the proof of Proposition \ref{5}. As mentioned before, the key is the following result, which essentially gives matching lower bounds to the upper bounds which we gave in Section \ref{sec:2}. A proof may be found in Theorem 5.2.1 of \cite{DZ} or in the original paper \cite{Mog92}.
\begin{thm}[Mogulskii 1992] Let $\mu_n$ denote the law on $C[0,1]$ of $(\frac1n S_{nt})_{t \in [0,1]}$ where $S$ is any random walk with i.i.d. increments (whose increment distribution has exponential moments), and the values of $S$ at non-integer points are understood to be linearly interpolated from the two nearest integer points. Then $\mu_n$ satisfies an LDP with rate $n$ and good rate function $$I(f) = \int_0^1 \Lambda^* \circ f',$$ where $\Lambda^*$ denotes the Legendre transform of $\lambda \mapsto \log \Bbb E[e^{\lambda S_1}]$, and the integral is meant to be understood as $+\infty$ if $f$ is not absolutely continuous. \end{thm} It should be noted that Mogulskii's result is a vast strengthening of Cramér's theorem (from just the endpoint of an iid sample path to its entire history), in the same way that Donsker's invariance principle for iid random walks is a strengthening of the classical central limit theorem. Finally we are ready to prove Proposition \ref{5}. \begin{proof}[Proof of Proposition \ref{5}]Let $f_{\max}$ be the maximizer from Corollary \ref{4}. We choose a sequence $\mu_n$ of partitions of $n$ such that $f_{\mu_n}$ converges uniformly to $f_{\max}$. This can be done as follows. First we construct an intermediate partition $\tilde \mu_n$ by putting boxes of side length $n^{-1/2}$ beneath the graph of $f_{\max}$ until no more boxes can be put in such a way that the graph of $f_{\tilde \mu_n}$ remains below that of $f_{\max}$. Since $f_{\tilde \mu_n} \leq f_{\max}$, one notices that $\tilde \mu_n$ will not actually be a partition of $n$ but rather of some number $k(n) \leq n$. However, it is true that $|f_{\tilde \mu_n} - f_{\max}| \leq Cn^{-1/2}$ for some constant independent of $n$ (otherwise more boxes could be added to $\tilde \mu_n$ without eclipsing the graph of $f_{\max}$). Now we can define $\mu_n$ to be equal to $\tilde \mu_n$ but with the remaining $n-k(n)$ boxes added to the first column of $\tilde \mu_n$. This will not change the limiting function $f_{\max}$.
\\ \\ We define $f_{\delta}(x):=\max\{|x|, f_{\max}(x)-\delta\}$, and we define the \textit{support} of $f_{\delta}$ to be the set of $x$ where $f_{\delta}(x)>|x|$ (this is an interval centered at $0$, by convexity and symmetry of $f_{\max}$). Note that for large enough values of $n$, the $\delta/2$ neighborhood of $f_{\delta}$ lies strictly below $f_{\mu_n}$ on the support of $f_{\delta}$ (this is because $f_{\mu_n}\to f_{\max}$ uniformly). We are now going to consider nearest-neighbor (random walk) paths of grid-size $n^{-1/2}$ which lie in between the graphs of $f_{\delta/2}$ and $f_{3\delta/2}$. Such a path will be called $(\delta,n)$-admissible. Let $k=k(n,\delta)$ denote the positive integer such that $n^{-1/2}k = \text{argmin}_{y \in \frac1{\sqrt{n}}\Bbb Z} |y-a|$ where $a=a(\delta):=\inf\{x>0: f_{\delta}(x)=x\}$. \\ \\ Note by Mogulskii's Theorem that the number of $(\delta,n)$-admissible paths terminating on the vertical axis (i.e., nearest-neighbor functions $\gamma:n^{-1/2}\Bbb Z_{\le 0} \to n^{-1/2}\Bbb Z$ where $\Bbb Z_{\le 0}$ denotes non-positive integers) is greater than or equal to $2^k e^{-\sqrt{2n}\int_{-n^{-1/2}k}^0 \Lambda^* \circ f_{\delta}' - o(\sqrt{n})} = e^{\sqrt{2n} \int_{-\infty}^0 \phi\circ f_{\delta}' - o(\sqrt{n})} $, as $n \to \infty$ (with $\delta$ fixed). \\ \\ Now we notice that two independent such random walks which are started from $(-n^{-1/2}k,n^{-1/2}k)$ and conditioned to stay between $f_{\delta/2}$ and $f_{3\delta/2}$ have probability at least $\frac{1}{\delta \sqrt n}$ of terminating at the same point.
Indeed, this is because there are at most $\delta \sqrt n$ possible points $\{x_i\}_{i=1}^{\delta\sqrt n}$ at which such a walk can terminate (because the grid size is $n^{-1/2})$, and if $p_i$ is the probability of terminating at point $x_i$, then by Cauchy-Schwarz one finds that $1 =\sum_1^{\delta \sqrt n} p_i \leq (\sum_i p_i^2)^{1/2} (\delta \sqrt n )^{1/2}$, and because the probability of two independent such walks terminating at the same point equals $\sum_i p_i^2$. \\ \\ Now, a random walk \textit{bridge} of grid size $n^{-1/2}$ which lies between $f_{\delta/2}$ and $f_{3\delta/2}$ (which defines a subpartition of $\mu_n$ for large enough $n$) can be viewed as the concatenation of a pair of these random walk paths started from $(-n^{-1/2}k,n^{-1/2}k)$ terminating at the same point on the vertical axis (note here that we are using the property that $f_{\delta}(x) = f_{\delta}(-x))$. By the observations of the preceding two paragraphs, the number of such pairs is bounded below by $\frac{1}{\delta \sqrt{n}} \big(e^{\sqrt{2n} \int_{-\infty}^0 \phi\circ f_{\delta}' - o(\sqrt{n})}\big)^2.$ The prefactor $\frac1{\delta\sqrt n}$ may be absorbed into the $o(\sqrt n)$ term in the exponent, giving a lower bound of $e^{\sqrt{2n} F(f_{\delta})-o(\sqrt{n})}.$ The $o(\sqrt n)$ term may depend on $\delta$ but this is not a problem. \\ \\ Since this lower bound holds true for arbitrary $\delta>0$, the claim now follows if we can show that $F(f_{\delta}) \to F(f_{\max})$ as $\delta \to 0$. To do this, note that $f_{\delta}' \to f_{\max}'$ pointwise (trivially by the definition of $f_{\delta}$). Thus by Fatou's Lemma and maximality of $F(f_{\max})$ it is true that $F(f_{\max}) \leq \liminf_{\delta \to 0} F(f_{\delta}) \le \limsup_{\delta \to 0} F(f_{\delta}) \leq \max_g F(g)= F(f_{\max})$, which completes the argument.
\end{proof} \section{Characterizing the limit shape}\label{sec:5} So far, many of our methods could have been used for more general types of models than the simple symmetric random walk (replacing $\phi$ with a more general concave function). We now move on to finding the limit shape $f_{\max}$ exactly, which will involve working with specific details of the function $\phi$, and thus most of the subsequent arguments and analysis will be specialized just to the case of the simple random walk. In particular we will show that $f_{\max}$ has, up to scaling and centering, the shape of the curve $x \mapsto \log \cosh x$. Consequently, it is not just the triangular function $x\mapsto \max\{1,|x|\}$, nor is it the Vershik-Kerov curve. It is, in fact, the Vershik curve, which is the limit shape of \textit{uniformly} sampled partitions of $n$ \cite{Ver96}. \\ \\ Since $f_{\max}$ is an even convex function there exists a \textit{maximal} interval $(-a_{max},a_{max})$ (which we will henceforth refer to as the \textbf{support} of $f_{\max}$) on which $f_{\max}(x)>|x|$. This interval is the interior of the largest closed interval containing the support (in the usual sense) of the second distributional derivative $f_{\max}''$ (which is a nonnegative Borel measure since $f_{\max}$ is convex). Note that it is possible that $a_{max}=+\infty$, and in a moment we will show that this is indeed the case. \\ \\ Let $\psi$ be a smooth function with support contained in $(-a_{max},a_{max})$, such that $\int_{\Bbb R} \psi = 0$. Then we claim \begin{equation}\label{7} \int_{\Bbb R} (\phi' \circ f_{\max}') \cdot \psi' = 0. \end{equation} Indeed, one easily checks that $\lim_{\epsilon \to 0} \epsilon^{-1} \big(F(f_{\max}+\epsilon\psi)-F(f_{\max})\big) = \int_{\Bbb R} (\phi'\circ f_{\max}')\cdot \psi'$.
However, since $\int \psi = 0$ and since the support of $\psi$ is contained in the support of $f_{\max}$, it follows that for $\epsilon$ in a small enough neighborhood of zero, the function $f_{\max}+\epsilon \psi$ is an element of $\mathcal X$, and thus $F(f_{\max}+\epsilon \psi)\leq F(f_{\max})$. Hence if $\lim_{\epsilon \to 0} \epsilon^{-1} \big(F(f_{\max}+\epsilon\psi)-F(f_{\max})\big)$ exists then it must equal zero, proving \eqref{7}. \\ \\ Now if $h: [-a,a] \to \Bbb R$ is any measurable function such that $\int h\cdot \psi' = 0$ for every function $\psi \in C_c^{\infty}$ with $\int \psi = 0$, then this precisely means that the distributional derivative of $h$ is orthogonal (with respect to the $L^2$ pairing) to every test function of mean zero. In particular it means that $h'$ is itself a constant function. Applying this principle to $h:= \phi' \circ f_{\max}'$, we see that $\phi'(f_{\max}'(x)) = \beta x + C$ for some $\beta, C \in \Bbb R.$ But $\phi$ and $f_{\max}$ are even functions, so $\phi'\circ f_{\max}'$ is an odd function, and thus $C=0$. Now recall that $\phi = \log 2 -\Lambda^*$ where $\Lambda^*$ is the Legendre transform of $\Lambda(x) = \log \cosh x$. This implies that $\Lambda' \circ \phi'$ is the negative of the identity function on $[-1,1]$. In particular $\phi'(f_{\max}'(x)) = \beta x$ implies that $-f_{\max}'(x) = \Lambda'(\beta x)$, which in turn implies that $f_{\max}(x) = - \frac1{\beta} \log \cosh(\beta x)+D$ for all $x$ in the support of $f_{\max}$. Here $D$ is some constant of integration. Of course, we know that $f_{\max}$ is convex, which implies $\beta \le 0$. Thus, by renaming $\beta$ to be $-\beta$ we have proved the following. \begin{prop}\label{8} There exists some $\beta_{max}\ge 0$ and some $D_{max} > 0$ such that for every $x \in (-a_{max},a_{max})$ one has that $f_{\max}(x) = \frac1{\beta_{max}}\log \cosh (\beta_{max} x) +D_{max}$.
\end{prop} In the case where $\beta_{max} = 0$, the statement of the above Proposition is of course nonsensical, but (because the condition $\int (f_{\max}(x)-|x|)dx=1$ determines $D_{max}$ uniquely from $\beta_{max}$) it is meant to be interpreted in the sense that $f_{\max}(x) = D_{max}$ on its support, meaning that the limit shape would be the triangular function $x\mapsto \max\{1,|x|\}$. We will rule out this possibility shortly. \\ \\ Our next goal is to determine whether $a_{max}<+\infty$, i.e., whether or not the limit shape is compactly supported. The next theorem shows that it is not. \begin{thm}\label{9}In the notations of Proposition \ref{8}, $a_{max} = +\infty$, $\beta_{max} = \frac{\pi}{2\sqrt{3}},$ $D_{max} = \frac1{\beta_{max}}\log 2$, and $F(f_{\max}) = \pi/\sqrt{3}$. In particular, $f_{\max}(x) = \frac{2\sqrt{3}}{\pi}\log\big(2\cosh(\frac{\pi}{2\sqrt{3}}x)\big).$ \end{thm} \begin{proof}The key will be to use the Hardy-Ramanujan asymptotics together with the identity \begin{align}\label{f}\int_0^{\infty} \log(1+e^{-2x})dx = \int_0^{\infty} \sum_{n \ge 1} (-1)^{n+1}\frac{e^{-2nx}}{n}dx = \sum_{n \ge 1} \frac{(-1)^{n+1}}{2n^2}=\frac{\pi^2}{24} .\end{align} Here we Taylor expanded the logarithm and then used the identity $\sum_{n\ge 1} n^{-2} = \frac{\pi^2}{6}$ and its corollaries: $\sum_{n\;even} n^{-2} = \frac{\pi^2}{24}$ and $\sum_{n\;odd} n^{-2} = \frac{3\pi^2}{24}$. \\ \\ We now recall the Hardy-Ramanujan asymptotics \cite{HR} for the partition numbers. Specifically, if $p(n)$ denotes the number of partitions of $n$, then $p(n) = e^{\pi\sqrt{2n/3}-o(\sqrt{n})}$ as $n \to \infty$. Notice that $p$ is an increasing function of $n$, and every subpartition of a partition of $n$ is a partition of some integer $i\le n$. Thus the number of subpartitions of any given partition $\lambda$ of $n$ is upper bounded by $\sum_{i=0}^n p(i) \leq (n+1)p(n) = (n+1) e^{\pi\sqrt{2n/3}-o(\sqrt{n})}$.
The prefactor $(n+1)$ may be absorbed into the $o(\sqrt n)$ term in the exponent, and thus by Proposition \ref{5} we conclude that $F(f_{\max}) \leq \pi/\sqrt{3}$. \\ \\ Now, let $f(x):= \alpha^{-1/2} \log(2\cosh (\alpha^{1/2} x))$, where $\alpha := \int_{-\infty}^{\infty} (\log(2\cosh x)-|x|)dx = 2\int_0^{\infty} \log(1+e^{-2x})dx = \frac{\pi^2}{12}$ by \eqref{f}. Note that $f$ is $1$-Lipschitz (because it has derivative given by $\tanh(\alpha^{1/2}x)$ which is bounded in absolute value by $1$), and also note (by substituting $u=\alpha^{1/2}x$) that $\int_{\Bbb R} (f(u)-|u|)du = 1$ so that $f \in \mathcal X$. Now we claim that $F(f) = \pi/\sqrt{3} = 2\alpha^{1/2}$, which would indeed prove that $f=f_{\max}$. To prove this, note that $F(f)= \alpha^{-1/2} \int_{\Bbb R} \phi(\tanh u)du,$ so that proving that $F(f) = 2\alpha^{1/2}$ now amounts to showing that $\int_{\Bbb R} \phi(\tanh u)du = 2\alpha$. In other words, we want to show \begin{equation}\label{g} \int_0^{\infty} \phi(\tanh u)du = 2\int_0^{\infty} \log(1+e^{-2u})du. \end{equation} One readily checks that $\phi(\tanh u) = \log(e^u+e^{-u}) -u\tanh u,$ from which proving \eqref{g} amounts to checking that $\int_0^{\infty} \big( \log(e^u+e^{-u}) -2u+u\tanh u\big)du = 0$. But the integrand here has an explicit antiderivative given by $u \log(e^u+e^{-u}) -u^2$, which is readily seen to evaluate to zero at both $u=0$ and as $u \to \infty$. This proves \eqref{g}, which finally shows that $f=f_{\max}$. \end{proof} A further direction of study is to try to gain more precise asymptotics on the exact number of subpartitions of the maximizing sequence. Specifically we would like to find precise asymptotics on the $o(\sqrt n)$ term in the optimal growth rate $e^{\pi\sqrt{2n/3}-o(\sqrt n)},$ and we believe this can be done using more precise large deviations estimates.
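The two integral computations in the proof above are easy to confirm numerically. The following stdlib-only sketch (our own check, not part of the argument; the integration cutoff and tolerances are arbitrary choices) verifies the identity \eqref{f} and the identity \eqref{g} underlying $F(f_{\max}) = \pi/\sqrt{3}$:

```python
import math

def integrate(g, a, b, steps=200000):
    # composite trapezoid rule; accuracy here is ~1e-7, plenty for the check
    h = (b - a) / steps
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, steps))
    return s * h

# identity (f): int_0^inf log(1 + e^{-2x}) dx = pi^2 / 24
lhs = integrate(lambda x: math.log1p(math.exp(-2 * x)), 0.0, 40.0)
assert abs(lhs - math.pi ** 2 / 24) < 1e-6

# phi(tanh u) = log(e^u + e^{-u}) - u*tanh(u), and identity (g) gives
# int_R phi(tanh u) du = 2*alpha with alpha = pi^2 / 12
def phi_tanh(u):
    return math.log(math.exp(u) + math.exp(-u)) - u * math.tanh(u)

assert abs(2 * integrate(phi_tanh, 0.0, 40.0) - math.pi ** 2 / 6) < 1e-5

# hence F(f_max) = alpha^{-1/2} * 2*alpha = 2*sqrt(alpha) = pi/sqrt(3)
assert abs(2 * math.sqrt(math.pi ** 2 / 12) - math.pi / math.sqrt(3)) < 1e-12
print("both integral identities check out")
```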
A similarly difficult ``local'' asymptotic problem would be to find the rate at which the side lengths go to $\infty$ (note that Theorem \ref{9} merely proves that they grow faster than $\sqrt{n}$). \section{Extension to $k$-chains of subpartitions}\label{sec:6} We now extend the limit shape theorem to the case of partitions which maximize the number of $k$-chains of subpartitions, which will prove Theorem \ref{1.2}. Since the proof is not significantly more complicated, we briefly indicate the changes which need to be made at each stage of the argument. \\ \\ First we address the necessary modifications in Section \ref{sec:2}. In the notation of Corollary \ref{2}, consider $k$-chains $\gamma_k\leq \cdots \leq \gamma_2 \leq \gamma_1 \leq f$ of nearest-neighbor bridges which stay below $f$. Then (by viewing the chain as just a $k$-tuple of paths and disregarding the ordering) the same corollary says that the number of these $k$-chains is bounded above by $$\bigg( 2^n e^{-\sum_1^n \Lambda^*(g(i)-g(i-1))} \bigg)^k.$$ Then, in equation \eqref{U} at the beginning of Section \ref{sec:3}, this bound will tell us that for a given partition $\lambda$ of $n$, the number of $k$-chains of subpartitions of $\lambda$ (i.e., $k$-chains of random walk bridges of grid size $n^{-1/2}$ which are nestled in between the graphs of $f_{\lambda}(x)$ and $|x|$) is upper-bounded by \begin{equation}\label{13}e^{k\sqrt{2n} F(h_{\lambda})}, \end{equation} where as usual $h_{\lambda}$ is the lower convex envelope of $f_{\lambda}$, and $F$ is the functional of Definition \ref{6}. \\ \\ Hence, all that is left to do is to show that the upper bound \eqref{13} is actually sharp up to the exponential scale (after replacing $F(h_{\lambda})$ with $M(F) = \pi/\sqrt{3}$ there). The way to do this is by modifying the proof of Proposition \ref{5} to lower bound the number of ensembles of $k$ distinct paths staying below the graph of $f_{\max}$.
In the notation of that proof, we consider ensembles (implicitly depending on $n$) of nearest neighbor bridges $(\gamma^i)_{i=1}^k$ from $n^{-1/2}\Bbb Z \to n^{-1/2}\Bbb Z,$ with the property that $f_{i\delta}\le \gamma^i \le f_{(i+1)\delta}$ for each $1\le i \le k$. Clearly each such ensemble defines a $k$-chain of subpartitions of $\mu_n$. Moreover the number of such $k$-chains is merely the product over $i\in\{1,...,k\}$ of the individual number of paths lying between the graphs of $f_{i\delta}$ and $f_{(i+1)\delta},$ and we already know a good individual lower bound from the proof of Proposition \ref{5}. Specifically, we can lower bound this number of $k$-chains by $$\prod_{i=1}^k \big( e^{\sqrt{2n} F(f_{(i+\frac12)\delta})-o(\sqrt n)}\big) = e^{\sqrt{2n}\sum_{i=1}^k F(f_{(i+\frac12)\delta}) - o(\sqrt n)}.$$ As we already showed in the proof of Proposition \ref{5}, $F(f_{\eta}) \to M(F)$ as $\eta \to 0$, which means (by making $\delta$ close to $0$) that we can actually lower bound the maximal number of $k$-chains of subpartitions by $e^{k\sqrt{2n}M(F) - o(\sqrt{n})}$, as $n \to \infty$. This already proves Theorem \ref{1.2}. We remark here that the proof does \textit{not} rely on whether or not the $k$-chains are strictly ordered, so the statement of Theorem \ref{1.2} does not depend on this interpretation. \\ \\ Unfortunately our proof makes it clear that we cannot easily generalize to the case of $k(n)$-chains, i.e., where $k$ grows to $+\infty$ with $n$. As stated in the introduction, we actually expect that if $k(n)$ grows slowly enough (at a rate of $o(n^{1/2})$) then one has the same limit shape. On the other hand if $n^{1/2} = o(k(n))$, then we expect the limit to be the LSVK curve \cite{LS77,VK77}.
We expect a nontrivial crossover when $k(n) \sim \alpha n^{1/2}$, because this is precisely the minimal growth rate at which the typical ensemble of sub-paths no longer has a tendency to just concentrate near the boundary of the partition, but actually distributes itself throughout the bulk of the partition according to some density (as can be shown via a random matrix argument, or alternatively using variational principles for domino tilings). This may or may not be pursued in a future work, but we believe that a similar overall approach will work.
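As a purely illustrative complement (our own code, used nowhere in the arguments above), the subpartition-maximizing partitions can be computed exactly for small $n$ by brute force. The comparison column $e^{\pi\sqrt{2n/3}}$ is only an asymptotic scale, so agreement at such small $n$ is necessarily loose; note also that the count below includes the empty partition, a convention that only shifts the count by $O(1)$:

```python
import math

def partitions(n, maxpart=None):
    # all partitions of n, as weakly decreasing tuples
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def count_subpartitions(lam):
    # number of Young diagrams contained in lam, by a bottom-up
    # prefix-sum DP over rows; the empty partition is counted too
    if not lam:
        return 1
    c = [1] * (lam[-1] + 1)          # c[v]: ways with the last row equal to v
    for bound in reversed(lam[:-1]):
        pref, s = [0] * (bound + 1), 0
        for v in range(bound + 1):
            if v < len(c):
                s += c[v]
            pref[v] = s              # rows below can take any value <= v
        c = pref
    return sum(c)

def best(n):
    # a partition of n with the most subpartitions
    return max(partitions(n), key=count_subpartitions)

for n in (10, 15, 20):
    lam = best(n)
    print(n, lam, count_subpartitions(lam),
          round(math.exp(math.pi * math.sqrt(2 * n / 3))))
```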
The preceding note is ``Limit Shape of Subpartition Maximizing Partitions'' (arXiv:1907.09628, July 2019), available at https://arxiv.org/abs/1907.09628. Its abstract: This is an expository note answering a question posed to us by Richard Stanley, in which we prove a limit shape theorem for partitions of $n$ which maximize the number of subpartitions. The limit shape and the growth rate of the number of subpartitions are explicit. The key ideas are to use large deviations estimates for random walks, together with convex analysis and the Hardy-Ramanujan asymptotics. Our limit shape coincides with Vershik's limit shape for uniform random partitions.
https://arxiv.org/abs/1510.01758
Estimating the Number Of Roots of Trinomials over Finite Fields
We show that univariate trinomials $x^n + ax^s + b \in \mathbb{F}_q[x]$ can have at most $\delta \Big\lfloor \frac{1}{2} +\sqrt{\frac{q-1}{\delta}} \Big\rfloor$ distinct roots in $\mathbb{F}_q$, where $\delta = \gcd(n, s, q - 1)$. We also derive explicit trinomials having $\sqrt{q}$ roots in $\mathbb{F}_q$ when $q$ is square and $\delta=1$, thus showing that our bound is tight for an infinite family of finite fields and trinomials. Furthermore, we present the results of a large-scale computation which suggest that an $O(\delta \log q)$ upper bound may be possible for the special case where $q$ is prime. Finally, we give a conjecture (along with some accompanying computational and theoretical support) that, if true, would imply such a bound.
\section{Introduction} For univariate polynomial equations defined over a field, it is desirable to obtain general upper bounds on the number of solutions given in simple terms of plainly available information, such as the coefficients, exponents, or number of terms. The ubiquitous example of this is the degree bound, but over non-algebraically closed fields, it is possible to considerably improve upon the degree bound for certain non-negligible families of polynomials. Over the real numbers, Descartes' Rule of Signs implies that a $t$-nomial $f$ must have fewer than $2t$ real roots. For sparse polynomials - those with a small number of nonzero terms - this can provide a remarkable improvement on the trivial upper estimate given by the degree of $f$. In \cite{canetti}, the authors establish a finite field analogue of Descartes' Rule: a sparsity-dependent upper bound on the number of roots of a $t$-nomial over $\mathbb{F}_q$. More recently, an improved upper bound was derived in \cite{sparse_polys}. Here, we investigate possible further improvements to the bound for the special case of $t = 3$. This can be considered the smallest nontrivial choice of $t$, since the zero sets of univariate binomials are easily characterized - they are simply cosets of subgroups of $\F_q^*$, possibly together with $0 \in \F_q$. \begin{thm} \label{sparse} \cite[Theorems 2.2 and 2.3]{sparse_polys} Let $$f(x) = c_1 x^{a_1} + c_2 x^{a_2} + \cdots + c_t x^{a_t} \in \F_q[x]$$ with all $c_i$ nonzero and $a_1 > a_2 > \cdots > a_t = 0$. If $f$ vanishes on an entire coset of a subgroup $H \subseteq \F_q^*$, then $$ \#H \in \set{k \in \N}{\textnormal{for each } a_i \textnormal{, there is an }a_j \textnormal{ with } j \neq i \textnormal{ and } a_i \equiv a_j \: (\bmod \: k)}. $$ \noindent Furthermore, let $R(f)$ denote the number of distinct roots of $f$ in $\F_q$, and suppose $R(f) > 0$.
If $C$ denotes the maximal cardinality of a coset on which $f$ vanishes, then $$ R(f) \leq 2 (q-1)^{1-1/(t-1)} C^{1/(t-1)}. $$ \end{thm} For a trinomial $f(x) = x^n + ax^s + b \in \F_q[x]$, with $a$ and $b$ nonzero, associate the parameter $$\delta = \gcd(n,s,q-1).$$ Suppose that $R(f) > 0$. It follows from Theorem \ref{sparse} that if $f$ vanishes on a coset of size $C$, then $n \equiv s \equiv 0 \: (\bmod \: C)$. Since $C$ must divide $\#\F_q^*$, we have that $C$ divides $\delta$. On the other hand, if $f$ vanishes at $\alpha \in \F_q$, then $\alpha \in \F_q^*$, and $f$ vanishes on the entire coset $\set{x \in \F_q^*}{x^\delta = \alpha^\delta}$ of order $\delta$. So, in the trinomial case we have explicitly that $C = \delta$, and the bound given above simplifies to $$R(f) \leq 2 \sqrt{\delta (q-1)}.$$ As pointed out in \cite{rojas2}, this bound for trinomials is also a consequence of an earlier result from \cite{rojas1} which bounds the number of cosets $S_i \subset \F_q^*$ needed to express the zero set of a sparse polynomial as a union of the form $ \bigcup_{i=1}^N S_i$. Our first result refines this upper bound. \begin{thm} \label{upper_bound} The roots of a trinomial $$f(x) = x^{n} + ax^{s} + b \in \F_q[x]$$ are the union of no more than $\Big\lfloor \frac{1}{2}+\sqrt{\frac{q-1}{\delta}} \Big\rfloor$ cosets of the subgroup $H \subseteq \F_q^*$ of size $\delta$. \end{thm} Consequently, we now have $R(f) \leq \delta \Big\lfloor \frac{1}{2} +\sqrt{\frac{q-1}{\delta}} \Big\rfloor$, improving the previous result by approximately a factor of $2$ when $\delta \ll q$. The method of proof is elementary but interesting: given a trinomial with $\delta = 1$ and $r$ roots in a field of undetermined size, we construct $r^2 - r + 1$ distinct nonzero elements in the field, giving a lower bound on its size. Additionally, we show that when $\delta=1$, this new bound is optimal for even-degree extensions of $\F_p$.
If $q$ is an even power of a prime $p$ and $\delta=1$, the bound reduces to $R(f) \leq \sqrt{q}$, and we can indeed construct trinomials with $\delta=1$ and $\sqrt{q}$ distinct roots in $\F_q$. \begin{thm} \label{june_trinomials} For any odd prime $p$, the trinomial $x^{p^k} + x - 2$ has exactly $p^k$ roots in $\F_{p^{2k}}$. \end{thm} We prove Theorem \ref{june_trinomials} via linear-algebraic techniques: the extremal examples provided are translations of linear maps with null-spaces of exactly half the dimension of $\ff{q}$ as a vector space over $\ff{\sqrt{q}}$. The optimality of the bound is somewhat murkier when $\F_q$ is not an even-degree extension. Trinomials with nearly as many roots have been found for some other cases; for example, when $q$ is a cube, the authors of \cite{rojas2} give the example $f(x) = x^{1+q^{1/3}} + x + 1$ which has $q^{1/3} + 1$ roots. Most notably, the question of optimality of the bound remains open for the prime field case. We remark that out of all examples of which we are aware, including those given in \cite{rojas2}, the only sparse polynomials which vanish at a substantial number of points do so by abusing some obvious algebraic structure of $\F_q$ - they either vanish on an entire translation of a subspace or on an entire coset of a nontrivial subgroup. Trinomials over prime fields which have $\delta = 1$ are deprived of both of these luxuries, and accordingly, finding examples with many roots seems to be difficult. Let $R_p$ denote the maximum value of $R(f)$ over all trinomials in $\F_p[x]$ having $\delta = 1$. Recall that by Fermat's little theorem, if $\tilde{n} \equiv n \: (\bmod \: p - 1)$ and $\tilde{s} \equiv s \: (\bmod \: p - 1)$, then the two polynomials $f(x) = x^n + ax^s + b$ and $\tilde{f}(x) = x^{\tilde{n}} + ax^{\tilde{s}} + b$ define the same mapping on $\F_p^*$, so it is possible to compute $R_p$ in a straightforward way by enumerating trinomials with degree less than $p - 1$ and counting their roots.
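The enumeration just described is short enough to sketch in code. The following brute-force Python illustration (our own, not the computation reported below) computes $R_p$ for a small prime by bucketing the values $-(x^n + ax^s)$ over $x \in \F_p^*$, so that bucket $b$ collects exactly the roots of $x^n + ax^s + b$:

```python
from math import gcd

def R_p(p):
    # max of R(f) over trinomials x^n + a*x^s + b in F_p[x] with a, b
    # nonzero, 0 < s < n < p - 1, and gcd(n, s, p - 1) = 1
    units = range(1, p)
    best = 0
    for n in range(2, p - 1):
        for s in range(1, n):
            if gcd(gcd(n, s), p - 1) != 1:
                continue
            for a in units:
                buckets = {}
                for x in units:
                    b = (-(pow(x, n, p) + a * pow(x, s, p))) % p
                    if b:                 # b = 0 would not give a trinomial
                        buckets[b] = buckets.get(b, 0) + 1
                best = max(best, max(buckets.values(), default=0))
    return best

print(R_p(13))   # 3, matching the cap floor(1/2 + sqrt(12)) = 3 from Theorem 2
```

For instance the maximum for $p = 13$ is attained by $x^3 + 6x + 6$, whose roots are $1$, $2$, and $10$.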
In \cite{rojas2}, $R_p$ is computed for all primes up to 16633, and they find no instances in which $R_p$ exceeds $2 \log p$. As a result of a large-scale computation, we observe that the inequality $R_p \leq 2 \log p$ continues to hold for all primes up to 139571. Therefore, it appears that the current bound, $R_p = O(\sqrt{p})$, is still far from optimal for trinomials over $\F_p$, but we have been unsuccessful in proving any stronger version of Theorem \ref{upper_bound} for prime fields. It is known that if $f$ is allowed to range over all polynomials in $\F_p[x]$, the distribution of $R(f)$ approaches a Poisson distribution with mean $1$ as $p \to \infty$ \cite{random}. That is, the proportion of $f \in \F_p[x]$ with $R(f) = r$ is approximately $e^{-1}/r!$ when $p$ is sufficiently large. Based on computational experiments, this also appears to be true when $f$ ranges over just the set of trinomials in $\F_p[x]$ with $\delta=1$. This is certainly \textit{not} the case if $f$ were to range over, for example, the set of \textit{all} trinomials, or the set of {\em tetranomials} with $\delta=1$ due to the presence of $f$ which vanish on large cosets. On the other hand, the set of trinomials with $\delta=1$ appears to behave similarly to what we would expect from an $f$ randomly selected from all of $\F_p[x]$. Apparently, restriction of $f \in \F_p[x]$ to trinomials with $\delta = 1$ provides very little statistical information about $R(f)$. \begin{heuristic} \label{behave} With respect to root number, the set of trinomials $f \in \F_p[x]$ with $\delta=1$ behaves like a uniform random sample of polynomials from $\F_p[x]$. That is, when $p$ is large enough, the values of $R(f)$ behave like they are given by random variables with distribution function $\rho(r) = e^{-1}/r!$ (a Poisson distribution with mean 1).
\end{heuristic} In Section 4, we show that Heuristic \ref{behave} allows us to make fairly accurate guesses of the actual values of $R_p$ recorded by our computations. Therefore, it may be that the observed logarithmic growth of $R_p$ is not due to any special property of trinomials with $\delta = 1$, but rather emerges as a statistical consequence of this set being so ``ordinary," together with the exponential decay of the Poisson distribution. We phrase this formally as the following conjecture that the distributions of $R(F)$ and $R(f)$, with $F$ ranging over $\F_p[x]$ and $f$ ranging over trinomials with $\delta = 1$, differ by at most a constant factor. \begin{conj} \label{con} Define \begin{align*} M_p &= \set{f \in \F_p[x]}{ \deg f < p}, \\ T_p &= \set{x^n + ax^s + b}{ (a,b) \in (\F_p^*)^2, \; 0 < s < n < p - 1, \; \textnormal{and }\gcd(n,s,p-1) = 1}. \end{align*} Let $\mu(p,r)$ denote the proportion of $f \in M_p$ with $R(f) = r$ and $t(p,r)$ denote the proportion of $f \in T_p$ with $R(f) = r$. There exists a constant $\lambda \in \mathbb{R}$ such that $$ t(p,r) \leq \lambda \mu(p,r) ,$$ for all $p$ prime and $r \in \N$. \end{conj} If these two distributions do in fact differ by at most a constant factor, then we are able to readily derive the logarithmic upper bound for $R_p$ suggested by our experiments. And, in turn, we could extend such a bound to trinomials with $\delta > 1$ by noticing that $x^n + ax^s + b$ has at most $\delta$ roots for every root of $x^{n / \delta} + ax^{s / \delta} + b$. \begin{cor} \label{cor} Suppose Conjecture \ref{con} is true. Then, we have the asymptotic bound $$ R_p = \max \set{R(f)}{f \in T_p} = O \left( \frac{\log p}{\log \log p} \right). 
$$ \end{cor} \begin{proof} Let $M_p(r) = \set{f \in M_p}{R(f) = r}$ and $T_p(r) = \set{f \in T_p}{R(f) = r}$ so that $$\mu(p,r) = \frac{\#M_p(r)}{\#M_p} \textnormal{ and } t(p,r) = \frac{\#T_p(r)}{\#T_p}.$$ \noindent We can bound $\#M_p(r)$ from above by counting polynomials of the form $$ \left( \prod_{i=1}^r (x - \alpha_i) \right) \left( \sum_{i=0}^{p-1-r} c_i x^i \right), $$ with $\alpha_i \in \F_p$ distinct, which gives $$ \mu(p,r) = \frac{\#M_p(r)}{\#M_p} = \frac{\#M_p(r)}{p^{p}} \leq \frac{\binom{p}{r} p^{p-r}}{p^{p}} = \binom{p}{r} \frac{1}{p^r} \leq \frac{1}{r!}.$$ Obviously $\#T_p \leq p^4$, so assuming the existence of $\lambda$ defined in Conjecture \ref{con}, we have $$ \#T_p(r) \leq \lambda \mu(p,r) \#T_p \leq \frac{\lambda \#T_p}{r!} \leq \frac{\lambda p^4}{r!} .$$ If $\lambda p^4 / r! < 1$ then the set $T_p(r)$ is empty, so we must have $\lambda p^4 / R_p! \geq 1$, or equivalently, $ \log (R_p!) \leq \log (\lambda p^4 ).$ By applying Stirling's approximation, we get the asymptotic bound $$ R_p \log R_p \sim \log (R_p!) \leq \log (\lambda p^4 ) = 4 \log p + \log \lambda = O(\log p). $$ By considering the growth order of the inverse function of $y = x \log x$, we obtain $$R_p = O \left( \frac{\log p}{\log \log p} \right).$$ \end{proof} We remark that in \cite{rojas2}, it is shown that under the Generalized Riemann Hypothesis, there exists an infinite sequence of primes $\left( p_k \right)_{k=1}^\infty$ satisfying the lower bound $R_{p_k} = \Omega \left( \frac{\log {p_k}}{\log \log {p_k}} \right)$. So, the truth of both Conjecture \ref{con} and GRH would imply that the bound in Corollary \ref{cor} is, up to a multiplicative constant, asymptotically optimal. Finally, we prove the following theoretical result which states that Conjecture \ref{con} is true if we consider only trinomials of bounded degree as we take $p$ to infinity. This weaker result is suggestive but certainly not sufficient to imply the bound in Corollary \ref{cor}.
In particular, Theorem \ref{poisson_density} shows the existence of $\lambda_N \in \mathbb{R}$ such that $t_N(p,r) \leq \lambda_N \mu(p,r)$, but we do not have a bound on the set $\set{\lambda_N}{N \in \N}$. \begin{thm} \label{poisson_density} Suppose $n,s \in \N$ with $0 < s < n$ and $\gcd(n,s) = 1$. As $p \to \infty$, the proportion of pairs $(a,b) \in (\F_p^*)^2$, such that $f(x) = x^n + ax^s + b$ has $R(f) = r$, converges to $$ \begin{dcases} \frac{\big[e^{-1} (n-r)! \big]}{r!\,(n-r)!} & \mbox{if } r < n \\ 1/r! & \mbox{if } r = n, \end{dcases} $$ where $[\cdot]$ denotes the ``nearest integer'' function.\\ \noindent Furthermore, fix $N \in \N$, and let $$T_{p,N} = \set{x^n + ax^s + b}{(a,b) \in (\F_p^*)^2, \; 0 < s < n \leq N, \; \textnormal{and } \gcd(n,s,p-1) = 1}.$$ Let $\mu(p, r)$ be defined as in Conjecture \ref{con}, and let $t_N(p,r)$ denote the proportion of $f \in T_{p,N}$ with $R(f) = r$. We then have $$ \limsup_{p \to \infty} \left( \max_{r \leq N} \frac{ t_N(p,r)}{\mu(p,r)} \right) \leq e .$$ \end{thm} \section{New Upper Bound and Extremal Trinomials} \begin{definition} For $n, s$ fixed, define the family of trinomials in $\ff{q}[x]$ $$C(n, s) = \set{ f_c(x) = cx^{n} - (c+1)x^{s} + 1}{c\neq -1,0}. $$ \end{definition} Observe that $C(n, s)$ is exactly the set of trinomials with \begin{itemize} \item support $\{ n, s, 0 \}$ \item constant term $1$ \item $f(1) = 0$ \end{itemize} This is clear because $f(1) = 0$ if and only if $f$'s coefficients sum to zero. We introduce this family of trinomials because they have the following useful property. \begin{lem} \label{disjoint_roots} Let $G \subseteq \F_q^*$ be the unique multiplicative subgroup of order $N$, and suppose that $\gcd(n, s, N) = 1$. The only root in $G$ shared by any two members of $C(n, s)$ is $\alpha = 1$.
\end{lem} \begin{proof} $f_c(\alpha) = 0$ is equivalent to the following linear equation in $c$: $$c(\alpha^{n} - \alpha^{s}) = \alpha^{s} - 1.$$ This has multiple solutions in $c$ if and only if both $\alpha^{n} - \alpha^{s} = 0$ and $\alpha^{s} - 1 = 0$. Since $G$ is a cyclic group of order $N$ and $\gcd(n, s, N) = 1$, the only $\alpha \in G$ such that $\alpha^{n} = \alpha^{s} = 1$ is $\alpha = 1$ itself. So 1 is the only $\alpha$ such that $f_c(\alpha) = 0$ for multiple $f_c \in C(n,s)$. \end{proof} \begin{lem} \label{gcd_one} Let $G \subseteq \F_q^*$ be the unique multiplicative subgroup of order $N$, and let $f \in \ff{q}[x]$ be a trinomial of the form $ax^{n} + bx^{s} + 1$ satisfying $\gcd(n, s, N) = 1$. The number of roots of $f$ that lie in $G$ does not exceed $$\frac{1}{2}+\sqrt{N}.$$ \end{lem} \begin{proof} Suppose $f(x) = ax^{n} + bx^{s} + 1$ has $r$ distinct roots $\zeta_1,\zeta_2, \ldots, \zeta_r$ in $G$. For each $\zeta_i$ let $g_i(x) = f(\zeta_i x) = (a\zeta_i^n)x^{n} + (b\zeta_i^s)x^{s} + 1$. Since the map $x \rightarrow \zeta x$ permutes the elements of $G$, each of these $g_i$ also has $r$ roots in $G$. Additionally, each $g_i$ is a member of $C(n,s)$, since $g_i(1) = f(\zeta_i) = 0$. We now check that the $g_i$ are pairwise distinct. Suppose $g_i = g_j$ with $i \neq j$. We then have both $\zeta_i^n = \zeta_j^n$ and $\zeta_i^s = \zeta_j^s$, or, equivalently, $(\zeta_i/\zeta_j)^n = 1$ and $(\zeta_i/ \zeta_j)^s = 1$. Once again, the only $\alpha \in G$ that satisfies $\alpha^n = \alpha^s = 1$ is $\alpha = 1$, so $\zeta_i = \zeta_j$, which contradicts the supposition that the roots $\zeta_1,\zeta_2, \ldots, \zeta_r$ are distinct. In summary, there exist $r$ distinct trinomials of $C(n,s)$ that each have $r$ roots in $G$, and by Lemma \ref{disjoint_roots}, $\zeta = 1$ is the only root shared by more than one of them. This implies that $G$ contains at least $r(r-1) + 1$ distinct elements, but we know that $G$ has size $N$ by hypothesis.
Therefore it must be that $$r^2 - r + 1 \leq N,$$ which yields the desired constraint on $r$: $$r\leq \frac{1}{2}+\sqrt{N}.$$ \end{proof} \noindent We now have everything we need to complete the proof. \begin{proof}[Proof of Theorem \ref{upper_bound}] Let $f(x) = x^n + ax^s + b \in \F_q[x]$. Obviously the roots of $f$ are not affected by re-scaling; let $$ \tilde{f}(x) = \frac{1}{b}x^n + \frac{a}{b}x^s + 1 = \alpha x^n + \beta x^s + 1. $$ The exponents may fail to satisfy $\delta = \gcd(n, s, q-1) = 1$. However, $\tilde{f}(x) = 0$ is equivalent to the system $$\alpha y^{n/\delta} + \beta y^{s/\delta} + 1= 0$$ $$y = x^{\delta}.$$ The second equation is only solvable for $x$ when $y$ lies in the subgroup of order $(q-1)/\delta$. The first equation satisfies $\gcd(n/\delta, s/\delta, (q-1)/\delta) = 1$, so we can invoke Lemma \ref{gcd_one} and find that there are at most $ \Big\lfloor \frac{1}{2} + \sqrt{\frac{q-1}{\delta}} \Big\rfloor$ such $y$. Each of these $y$ then admits one coset of $\delta$ distinct solutions for $x$. \end{proof} \begin{proof}[Proof of Theorem \ref{june_trinomials}] Observe that the function $$T(x) = x^{p^k} + x$$ is an $\ff{p^k}$-linear map from $\ff{p^{2k}}$ to $\ff{p^{2k}}$. Since $T$ is a binomial, it is easy to show that the equation $T(x) = 0$ has nonzero solutions, so $T$ has a null space of positive dimension. Since $T$ is not the zero transformation, its null space has dimension exactly 1, and therefore $T(x) = 0$ has $p^k$ roots. We see that $f(x) = T(x) - 2 = 0$ exactly when $T(x) = 2$. This has one obvious solution, $x=1$, so we conclude from the linearity of $T$ that $T(x) = 2$ has as many solutions as $T(x) = 0$. Therefore, $f$ has $p^k = \sqrt{q}$ roots, all of which are nonzero. \end{proof} \section{Proof of Theorem \ref{poisson_density}} Our proof of Theorem \ref{poisson_density} relies on the following statement from \cite{ffcheb}, which can be viewed as a Chebotarev density theorem for function fields.
At its core, this result is powered by the Lang-Weil estimate for the number of points on varieties over $\F_q$. \begin{thm} \label{ffc} \cite[Proposition 3.1]{ffcheb} Let $n, m,$ and $N$ be positive integers, and let $F \in \F_q[A_1, \ldots, A_m, x]$ be separable in $x$ and have $\deg F \leq N$ and $\deg_x F = n$. Let $\F$ be an algebraic closure of $\F_q$, and suppose that $$\textnormal{Gal}\left(F , \F(A_1,\ldots, A_m) \right) \cong S_n.$$ For a partition $\lambda$ of $n$, let $C_\lambda \subset S_n$ denote the conjugacy class of permutations $\sigma \in S_n$ with cycle type $\lambda$, and let $\mathcal{A}_\lambda$ denote the set of $(a_1, \ldots, a_m) \in \F_q^m$ such that the univariate polynomial $f(x) = F(a_1, \ldots, a_m, x)$ factorizes over $\F_q$ into irreducible factors with degree pattern $\lambda$. Then, there exists a constant $c(m,N) \in \mathbb{R}$, which depends only on $m$ and $N$, such that $$ \Big\lvert \frac{\#\mathcal{A}_\lambda}{q^m} - \frac{\#C_\lambda}{\# S_n} \Big\rvert \leq \frac{c(m,N)}{ q^{1/2}}. $$ \end{thm} Let $k$ be a field, and let $F(x) = x^n + A x^s + B$, where $A$ and $B$ are indeterminates over $k$, $0 < s < n$, and $\gcd(n,s) = 1$. It is shown by Cohen in \cite[p.\ 64 and Corollary 3]{cohen} that unless $\textnormal{char}(k)$ divides $n(n-1)$, $F$ is separable over $k(A,B)$ and $\textnormal{Gal}\left(F , k(A,B) \right) \cong S_n$. Here we take $k$ to be an algebraic closure of the prime field $\F_p$, and $n$ bounded by some fixed $N \in \N$, so $F$ satisfies the conditions of Theorem \ref{ffc} when $p > N$. Let $C(r)$ be the collection of all permutations $\sigma \in S_n$ with exactly $r$ fixed points. If $r = n$ then $C(r)$ contains only the identity permutation, so $ \frac{\#C(r)}{\#S_n} = 1/n! = 1/r! $. Otherwise, every $\sigma \in C(r)$ can be written as $\sigma = c_1 c_2 \cdots c_r \sigma_d$, where each $c_i$ is a length-one cycle and $\sigma_d$ permutes the remaining elements and has no fixed points.
Permutations that have no fixed points are called \textit{derangements}, and the proportion of permutations that are derangements is extremely well-approximated by $e^{-1}$ \cite{derang}. Specifically, the number of derangements of $n$ elements is given by $d_n = \big[ e^{-1} n! \big],$ where $[\cdot]$ denotes the ``nearest integer'' function. Therefore, to count the number of $\sigma \in C(r)$, we simply count the ways to choose $c_1, c_2, \ldots, c_r$ and multiply by the number of derangements of the remaining $n - r$ elements, so we have $$ \frac{\#C(r)}{\#S_n} = \frac{{{n}\choose{r}}d_{n-r}}{n!} = \frac{\frac{n!}{r!(n-r)!}d_{n-r}}{n!} = \frac{\big[ e^{-1}(n-r)! \big]}{r!(n-r)!}. $$ Note that $$\frac{\big[ e^{-1}(n-r)! \big]}{r!(n-r)!} \leq \frac{ e^{-1}(n-r)! + 0.5}{r!(n-r)!} \leq \frac{ e^{-1}+ 0.5}{r!} < \frac{1}{r!},$$ so in fact we have $\frac{\#C(r)}{\#S_n} \leq 1/r!$ always. Let $\mathcal{A}(r)$ denote the set of $(a,b) \in \F_p^2$ such that $F(a,b,x) = x^n + ax^s + b \in \F_p[x]$ has exactly $r$ linear factors. Since $C(r)$ is a union of conjugacy classes whose number is bounded in terms of $N$, we have $$ \Big\lvert \frac{\#\mathcal{A}(r)}{p^2} - \frac{\#C(r)}{\# S_n} \Big\rvert \leq \sum_{C_\lambda \subseteq C(r)} \Big\lvert \frac{\#\mathcal{A}_\lambda}{p^2} - \frac{\#C_\lambda}{\# S_n} \Big\rvert = O_N \left( \frac{1}{ p^{1/2}} \right), $$ as $p \to \infty$, where the $O$-constant depends only on $N$.
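The fixed-point count above can be checked directly on a small symmetric group. The following sketch brute-forces $S_6$ and compares the result with $\binom{n}{r}d_{n-r}$; the choice $n = 6$ is arbitrary and any small $n$ works.

```python
# Brute-force check: the number of sigma in S_n with exactly r fixed
# points equals C(n, r) * d_{n-r} for r < n, where d_m = [m!/e] is the
# derangement number (nearest integer to m!/e).
from itertools import permutations
from math import comb, factorial, e

n = 6
d = lambda m: round(factorial(m) / e)  # nearest integer to m!/e

counts = [0] * (n + 1)
for sigma in permutations(range(n)):
    counts[sum(sigma[i] == i for i in range(n))] += 1

for r in range(n):
    assert counts[r] == comb(n, r) * d(n - r)
assert counts[n] == 1  # r = n: only the identity, matching the split into cases
print(counts)  # [265, 264, 135, 40, 15, 0, 1]
```

Note that the $r = n - 1$ count is $0$, since a permutation cannot fix all but one element.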
Now define $$\mathcal{A}^*(r) = \set{(a,b) \in (\F_p^*)^2}{x^n + a x^s + b \textnormal{ has exactly } r \textnormal{ distinct linear factors}}.$$ $\mathcal{A}^*(r)$ differs negligibly from $\mathcal{A}(r)$: there are fewer than $2p$ elements in $\F_p^2 \setminus (\F_p^*)^2$, and by \cite[Proof of Proposition 3.1]{ffcheb}, the number of $(a,b) \in \F_p^2$ such that $x^n + ax^s + b$ has a root of multiplicity greater than one is bounded asymptotically by $O_N(p)$, so $$ \Big\lvert \frac{ \#\mathcal{A}^*(r)}{ p^2} - \frac{ \#\mathcal{A}(r)}{ p^2} \Big\rvert = O_N\left( \frac{1}{p} \right) . $$ Finally, note that $$ \Big\lvert \frac{ \#\mathcal{A}^*(r)}{ (p-1)^2} - \frac{ \#\mathcal{A}^*(r)}{ p^2} \Big\rvert \leq 1 - \frac{(p-1)^2}{p^2} < \frac{2}{p}.$$ Therefore we have $$\Big\lvert \frac{\#\mathcal{A}^*(r)}{\#(\F_p^*)^2} - \frac{\#C(r)}{\#S_n} \Big\rvert = O_N \left(\frac{1}{p^{1/2}}\right), $$ which proves the first claim, concerning trinomials with $\gcd(n,s) = 1$.\\ Recall the definitions \begin{align*} M_p &= \set{f \in \F_p[x]}{ \deg f < p} \\ T_{p,N} &= \set{x^n + ax^s + b}{(a,b) \in (\F_p^*)^2, \; 0 < s < n \leq N, \; \textnormal{and } \gcd(n,s,p-1) = 1}, \end{align*} and recall that $\mu(p,r)$ denotes the proportion of $f \in M_p$ with $R(f) = r$, and that $t_N(p,r)$ denotes the proportion of $f \in T_{p,N}$ with $R(f) = r$. It is clear that $t_N(p,r)$ is equal to the average value across all fractions $$ \frac{\#\mathcal{A}^*(r)}{\#(\F_p^*)^2} $$ which are associated to a trinomial $x^n + Ax^s + B$ with $0 < s < n \leq N$ and $\gcd(n,s,p-1)= 1$. It remains to study trinomials with $\gcd(n,s,p-1) = 1$ but $\gcd(n,s) > 1$. \pagebreak Suppose $k = \gcd(n,s) > 1$, and write $n = kn'$ and $s = ks'$ so that $\gcd(n',s') = 1$. If $\gcd(n,s,p-1) = 1$, then we must have $\gcd(k,p-1) = 1$, so the map $x \rightarrow x^k$ permutes $\F_p$.
\linebreak Since $x^n = (x^k)^{n'}$ and $x^s = (x^k)^{s'}$, it follows that the trinomials $x^n + ax^s + b$ and $x^{n'} + ax^{s'} + b$ have the same number of distinct roots in $\F_p$. Thus, $t_N(p,r)$ is equal to the average of a collection of fractions which all satisfy $$ \frac{\#\mathcal{A}^*(r)}{\#(\F_p^*)^2} \leq \frac{1}{r!} + \frac{C_N}{p^{1/2}},$$ where $C_N \in \mathbb{R}$ is a constant which depends only on $N$. It follows immediately that $$ \limsup_{p \to \infty} t_N(p,r) \leq 1/r! $$ for each $r \leq N$, and so $$ \limsup_{p \to \infty} \left( \max_{r \leq N} t_N(p,r) r! \right) \leq 1 .$$ In \cite{random}, Leont'ev studies the generating function $\phi(x) = \sum_{r=0}^\infty \mu(p,r) x^r$, and shows that $\phi(x)$ converges to $e^{x-1}$ for $x \in (0,1]$. Using the continuity theorem for generating functions \cite[Section 1.1.6]{cont_thm}, he then concludes that $\mu(p,r) \to e^{-1}/r!$ as $p \to \infty$ for all $r \in \N$. Since we are only interested in the finitely many $r \in \{0,1,\ldots,N\}$, we can also be assured that $$ \lim_{p \to \infty} \left( \min_{r \leq N} \mu(p,r) r! \right) = e^{-1}. $$ Therefore, we have \begin{align*} \limsup_{p \to \infty} \left( \max_{r \leq N} \frac{t_N(p,r)}{\mu(p,r)} \right) &= \limsup_{p \to \infty} \left( \max_{r \leq N} \frac{t_N(p,r) }{\mu(p,r) } \frac{r!}{r!} \right) \\ &\leq \limsup_{p \to \infty} \left( \frac{ \max_{r \leq N} t_N(p,r)r! }{ \min_{r \leq N} \mu(p,r)r! } \right) \\ &= \frac{\limsup_{p \to \infty} \left( \max_{r \leq N} t_N(p,r) r! \right)}{\lim_{p \to \infty} \left( \min_{r \leq N} \mu(p,r) r! \right)} \\ &\leq \frac{1}{e^{-1}} \\ &= e. \end{align*} \qed \section{Poisson Heuristic and Computational Data for $\F_p$} First, we attempt to establish some basic plausibility for the Poisson Heuristic. As before, let $t(p,r)$ denote the proportion of trinomials over $\F_p$ with $\delta = 1$ that have $r$ distinct roots. 
The following table gives the statistical distance between $t$ and a Poisson distribution with mean $1$ for a few fields of various sizes. \\ \begin{center} \begin{tabular}{l|c} $\F_p$ & $\sum_{r = 0}^{\infty} \lvert t(p,r) - e^{-1}/r! \rvert$ \\ \hline $\F_{101}$ & 0.0367266 \\ $\F_{1009}$ & 0.0112061 \\ $\F_{10007}$ & 0.0007107 \\ $\F_{100003}$ & 0.0000834 \\ \end{tabular} \captionof{table}{Deviation of $t(p,r)$ from a Poisson distribution.} \end{center} \vspace{0.5cm} Recall that $T_p$ denotes the set of trinomials over $\F_p$ with $\delta = 1$ and degree less than $p - 1$. We have computed $R_p$, the maximum number of roots attained by any $f \in T_p$, for primes up to $p = 139571$. In this section, we show that the values of $R_p$ that we would expect by Heuristic \ref{behave} are quite close to what we actually observe. That is, we consider the expected values of $$R_p = \max \set{R(f)}{f \in T_p}$$ under the model that the values of $R(f)$ are given by random variables with distribution $\rho(r) = {e^{-1}/r!}$, and we compare these expected values with the actual values of $R_p$. More generally, let $M_N$ be the maximum of $N$ independent variables all with distribution $\rho(r) = {e^{-1}/r!}$. It is known that $M_N$ becomes very predictable when $N$ is large. Specifically, it is shown in \cite{poisson_exist} that there exists an integer sequence $\widehat{M}_N$ such that, as $N \to \infty$, $$\textnormal{Prob}(\lvert \widehat{M}_N - M_N \rvert \leq 1) \to 1.$$ In \cite{poisson_weak}, a nice asymptotic formula is given for $\widehat{M}_N$: $$\widehat{M}_N \sim \frac{\log{N}}{\log{\log{N}}}. $$ As an initial estimate, there are slightly fewer than $p^4$ trinomials $x^n + ax^s + b \in T_p$: there are $(p-1)^2$ pairs $(a, b)$ and almost $(p-1)^2$ pairs $(n,s)$. So, assuming Heuristic \ref{behave}, a reasonable conservative prediction would be $$ R_p \approx \frac{4 \log{p}}{\log{\log{p}}}.
$$ However, to make an accurate prediction for $R_p$ we need to be more precise in two ways. Firstly, there are actually far fewer than $p^4$ independent values of $R(f)$. For any $f \in \F_p[x]$, we have that $$R(f(x)) = R( f(\gamma x^e)),$$ as long as $\gcd(e, p - 1) = 1$ and $\gamma \in \F_p^*$, because the maps $x \rightarrow \gamma x$ and $x \rightarrow x^e$ are both bijections on $\F_p$. As a result, knowing the number of roots of one trinomial immediately determines the number of roots of a significant chunk of trinomials. Therefore, we would like to find an appropriate, effective value for $N$ that better models the number of independent random values. To do this, we count exactly the number of trinomials with $\delta = 1$ and then quotient out by the size of these equivalence classes. The exact number of pairs $(n, s)$ with $\gcd(n, s, p - 1) = 1$ is given by the \textit{Jordan totient function}, $J_2(p - 1)$ \cite[p.\ 147]{jordan}. We must subtract $\varphi(p-1)$ to avoid counting pairs with $n = s$, and we divide by $2$ to avoid counting both $(n,s)$ and $(s,n)$. There are $(p-1)^2$ choices for the two coefficients, so overall we have $$\#T_p = \left( 1/2 \right) \left(p-1\right)^2 \left(J_2(p - 1) - \varphi(p - 1) \right) .$$ As discussed in Section 2, $\gcd(n, s, p - 1) = 1$ implies that every pair $(\gamma^n, \gamma^s)$ is unique, so we divide by $(p - 1)$ to account for trinomials of the form $f(\gamma x)$. To account for the transformation $x \rightarrow x^e$, we divide by the number of $e$ with $\gcd(e,p-1) = 1$, which is given by $\varphi(p - 1)$.
So, we take our effective number of independent Poisson variables to be $$ N(p) = \left(\frac{p-1}{2} \right) \left( \frac{J_2(p-1)}{\varphi(p - 1)} - 1 \right).$$ This number is approximately equal to $p^2$; for primes in the range $11 \leq p \leq 139571$, we have $$ \frac{1}{2} < \frac{N(p)}{p^2} < 2.$$ Secondly, it is beneficial to consider the less elegant but more precise asymptotic formula for $\widehat{M}_N$ given in \cite{poisson_strong}. Below, $W$ is the \textit{Lambert W function}. $$ \widehat{M}_N \sim E_N := \frac{\log N}{W(\log (N) / e)} - \frac{1 + \log 2\pi }{2 \log \left( \frac{\log N}{W(\log (N) / e)}\right) } - 1.5.\\ $$ In summary, by Heuristic \ref{behave} we expect that $ R_p \approx E_{N(p)} $ when $p$ is sufficiently large. The following plot displays the ratios $R_p / E_{N(p)}$ for all primes $p \leq 139571$. \noindent \includegraphics[width=\textwidth]{Poisson_Tilde2} \captionof{figure}{The ratios $R_p / E_{N(p)}$ for all primes $p \leq 139571$.} \vspace{0.5cm} The visibly distinct bands correspond to primes that share the same value for $R_p$. The apparent upper and lower bounding monotonic subsequences are traced by dotted curves. The average over all ratios is 1.0429 and the standard deviation is 0.05587. For all $p \leq 139571$, we have $R_p \leq 2 \log p$. The largest recorded value of $R_p$ is $R_p = 16$, which is witnessed at $p = 8581, 43943, 107351,$ and $133877$; the associated ratios $16/E_{N(p)}$ lie visibly on the upper dotted line. The values of $R_p = \max \set{R(f)}{f \in T_p}$ were computed in a straightforward way (i.e.\ by enumerating trinomials and counting their roots) by parallel C++ code which ran on Texas A\&M's Ada supercomputing cluster for $5000$ CPU hours. The program takes advantage of the fact that $R(f(x)) = R(f(\gamma x^e))$ when $\gcd(e,p-1) = 1$ and $\gamma \in \F_p^*$ to reduce the enumeration space. 
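As a sanity check on the bookkeeping above, the following sketch recomputes $J_2(p-1)$, $\varphi(p-1)$, and $N(p)$ for the record prime $p = 8581$ (one of the witnesses of $R_p = 16$) and confirms that $N(p)/p^2$ lands in the interval $(1/2, 2)$ quoted in the text. The naive trial-division factorization is a choice of this sketch, adequate at this size.

```python
# Recompute the effective sample size N(p) for p = 8581 and check
# 1/2 < N(p)/p^2 < 2, as claimed for primes in the computed range.

def prime_factors(m):
    qs, q = [], 2
    while q * q <= m:
        if m % q == 0:
            qs.append(q)
            while m % q == 0:
                m //= q
        q += 1
    if m > 1:
        qs.append(m)
    return qs

def phi(m):  # Euler totient: m * prod over prime q | m of (1 - 1/q)
    r = m
    for q in prime_factors(m):
        r = r // q * (q - 1)
    return r

def jordan2(m):  # Jordan totient J_2: m^2 * prod over prime q | m of (1 - 1/q^2)
    r = m * m
    for q in prime_factors(m):
        r = r // (q * q) * (q * q - 1)
    return r

p = 8581
# J_2(p-1)/phi(p-1) is an integer (it equals the Dedekind psi function)
N = (p - 1) // 2 * (jordan2(p - 1) // phi(p - 1) - 1)
print(N, N / p**2)  # 103779390, ratio about 1.41
assert 0.5 < N / p**2 < 2
```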
The values $E_{N(p)}$ were computed separately by a small Matlab program, which in particular makes use of Matlab's built-in \verb|lambertw| function. \section*{Acknowledgments} We would like to thank the Texas A\&M Supercomputing Facility for providing us with computational resources, and our advisor, J.\ Maurice Rojas, for his indispensable guidance.
https://arxiv.org/abs/0908.3305
Cycles are determined by their domination polynomials
Let $G$ be a simple graph of order $n$. A dominating set of $G$ is a set $S$ of vertices of $G$ so that every vertex of $G$ is either in $S$ or adjacent to a vertex in $S$. The domination polynomial of $G$ is the polynomial $D(G,x)=\sum_{i=1}^{n} d(G,i) x^{i}$, where $d(G,i)$ is the number of dominating sets of $G$ of size $i$. In this paper we show that cycles are determined by their domination polynomials.
\begin{abstract} Let $G$ be a simple graph of order $n$. A dominating set of $G$ is a set $S$ of vertices of $G$ so that every vertex of $G$ is either in $S$ or adjacent to a vertex in $S$. The domination polynomial of $G$ is the polynomial $D(G,x)=\sum_{i=1}^{n} d(G,i) x^{i}$, where $d(G,i)$ is the number of dominating sets of $G$ of size $i$. In this paper we show that cycles are determined by their domination polynomials. \end{abstract} \noindent{\small {\it AMS Classification}: 05C38, 05C69. \noindent\vspace{0.1mm}{\it Keywords}: Cycle; Dominating set; Domination polynomial; Equivalence class.} \section{Introduction} Throughout this paper we will consider only simple graphs. Let $G=(V,E)$ be a simple graph. The {\it order} of $G$ denotes the number of vertices of $G$. For every vertex $v\in V$, the {\it closed neighborhood} of $v$ is the set $N[v]=\{u \in V|uv\in E\}\cup \{v\}$. For a set $S\subseteq V$, the closed neighborhood of $S$ is $N[S]=\bigcup_{v\in S} N[v]$. A set $S\subseteq V$ is a {\it dominating set} if $N[S]=V$, or equivalently, every vertex in $V\backslash S$ is adjacent to at least one vertex in $S$. The {\it domination number} $\gamma(G)$ is the minimum cardinality of a dominating set in $G$. A dominating set with cardinality $\gamma(G)$ is called a {\it $\gamma$-set}. For a detailed treatment of this parameter, the reader is referred to~\cite{hhs}. Let ${\cal D}(G,i)$ be the family of dominating sets of a graph $G$ with cardinality $i$ and let $d(G,i)=|{\cal D}(G,i)|$. The {\it domination polynomial} $D(G,x)$ of $G$ is defined as $D(G,x)=\sum_{i=1}^{|V(G)|} d(G,i) x^{i}$ (see \cite{a}). Two graphs $G$ and $H$ are said to be {\it${\cal D}$-equivalent}, written $G\sim H$, if $D(G,x)=D(H,x)$. The {\it${\cal D}$-equivalence class} of $G$ is defined as $[G]=\{H: H\sim G\}$. A graph $G$ is said to be {\it${\cal D}$-unique}, if $[G]=\{G\}$.
For two graphs $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$, the {\it join} of $G_1$ and $G_2$, denoted by $G_1\vee G_2$, is the graph with vertex set $V(G_1)\cup V(G_2)$ and edge set $E(G_1)\cup E(G_2)\cup \{uv| u\in V(G_1)$ and $v\in V(G_2)\}$. We denote the complete graph of order $n$, the cycle of order $n$, and the path of order $n$, by $K_n$, $C_n$, and $P_n$, respectively. Also we denote $K_1\vee C_{n-1}$ by $W_n$ and call it the {\it wheel} of order $n$. Let $n\in \mathbb{Z}$ and let $p$ be a prime number. If $n$ is nonzero, then there is a unique nonnegative integer $a$ such that $p^a\mid n$ but $p^{a+1}\nmid n$; we let ${\rm ord}_p\,n=a$. In \cite{aap}, the following problem was posed: \begin{quote} {\it For every natural number n, $C_n$ is ${\cal D}$-unique}. \end{quote} Also in \cite{aap} it was proved that $P_n$ for $n\equiv 0\pmod3$ has a ${\cal D}$-equivalence class of size two, containing $P_n$ and the graph obtained by adding two new vertices joined to two adjacent vertices of $C_{n-2}$. In this paper we show that for every positive integer $n$, $C_n$ is ${\cal D}$-unique. \section{${\cal D}$-uniqueness of cycles} In this section we will prove that $C_n$ is ${\cal D}$-unique. This answers in the affirmative a problem proposed in \cite{aap} on the ${\cal D}$-equivalence class of $C_n$. As a consequence we obtain that $W_n$ is ${\cal D}$-unique. We let $C_1=K_1$ and $C_2=K_2$. We begin with the following lemmas. \begin{lem} {\rm\cite{aap}}\label{regular} Let $H$ be a $k$-regular graph with $N[u]\neq N[v]$ for all distinct $u,v\in V(H)$. If $D(G,x)=D(H,x)$, then $G$ is also a $k$-regular graph. \end{lem} \begin{lem} {\rm\cite[Theorem 2.2.3]{a}}\label{union} If $G$ has $m$ connected components $G_1,\dots,G_m$, then $D(G,x)=\prod_{i=1}^m D(G_i,x)$. \end{lem} The next lemma gives a recursive formula for the domination polynomial of cycles.
\begin{lem}\label{cycle} {\rm\cite[Theorem 4.3.6]{a}} For every $n\geq 4$, $$D(C_n,x)=x(D(C_{n-1},x)+D(C_{n-2},x)+D(C_{n-3},x)).$$ \end{lem} \begin{lem}\label{Zhang} {\rm\cite[Theorem 1]{fhrv}} For every $n\geq 1$, $\gamma(C_n)=\lceil\frac{n}{3}\rceil$. \end{lem} \begin{lem}\label{-1} If $n$ is a positive integer and $\alpha_n:=D(C_n,-1)$, then the following holds: $$\alpha_n=\left\{ \begin{array}{ll} 3, & \hbox {if~ $n\equiv0\pmod4$;} \\ -1, & \hbox {otherwise}. \end{array} \right.$$ \end{lem} \begin{proof}{By Lemma \ref{cycle}, for every $n\geq4$, $\alpha_n=-(\alpha_{n-1}+\alpha_{n-2}+\alpha_{n-3})$. Now, by induction on $n$ the proof is complete.} \end{proof} \begin{lem}\label{ord} For every positive integer $n$, $${\rm ord}_3\,D(C_n,-3)=\left\{ \begin{array}{ll} \lceil\frac{n}{3}\rceil+1, & \hbox {if~ $n\equiv0\pmod3$;} \\ \lceil\frac{n}{3}\rceil \hbox{~or~} \lceil\frac{n}{3}\rceil+1, & \hbox {if ~$n\equiv1\pmod3$;} \\ \lceil\frac{n}{3}\rceil, & \hbox {if ~$n\equiv2\pmod3$}. \end{array} \right.$$ \end{lem} \begin{proof}{Let $a_n:=D(C_n,-3)$. By Lemma \ref{cycle}, for any $n\geq 4$, $a_n=-3(a_{n-1}+a_{n-2}+a_{n-3})$. Since $D(C_1,x)=x$, $D(C_2,x)=x^2+2x$, and $D(C_3,x)=x^3+3x^2+3x$, one has $a_1=-3$, $a_2=3$, $a_3=-9$. Now, by induction on $n$ one can easily see that ${\rm ord}_3\,a_n\ge \lceil\frac{n}{3}\rceil$. Suppose that $a_n=(-1)^n3^{{\lceil\frac{n}{3}\rceil}}b_n$. By Lemma \ref{cycle}, it follows that for every $n\geq 4$ the following holds: \begin{equation}\label{bn}b_n=\left\{ \begin{array}{ll} 3b_{n-1}-3b_{n-2}+b_{n-3}, & \hbox {if~ $n\equiv0\pmod3$;} \\ b_{n-1}-b_{n-2}+b_{n-3}, & \hbox {if ~$n\equiv1\pmod3$;}\\ 3b_{n-1}-b_{n-2}+b_{n-3}, & \hbox {if ~$n\equiv2\pmod3$}. \end{array} \right.\end{equation} The values of $b_i$ modulo 9, for $1\leq i\leq 30$, are as follows:\\ 1, 1, 3, 3, 7, 6, 2, 7, 3, 7, 7, 3, 3, 4, 6, 5, 4, 3, 4, 4, 3, 3, 1, 6, 8, 1, 3, 1, 1, 3. So for every $n$, $1\leq n\leq 30$, $9\nmid b_n$.
By (\ref{bn}) and induction on $t$, it is easily seen that for every $t\geq1$, $b_{t+27}\equiv b_t\pmod9$. Hence for any positive integer $n$, $9\nmid b_n$, or equivalently, ${\rm ord}_3\,b_n\leq1$. By induction on $n$, and using (\ref{bn}), we find that ${\rm ord}_3\,b_n=1$ if $n\equiv 0\pmod3$, ${\rm ord}_3\,b_n\in\{0,1\}$ if $n\equiv1\pmod3$, and ${\rm ord}_3\,b_n=0$ if $n\equiv2\pmod3$. This completes the proof.} \end{proof} \begin{remark} Since for every $t\geq1$, $b_{t+27}\equiv b_t\pmod9$, by considering $b_n$ for $1\leq n\leq 30$, we conclude that in the case $n\equiv 1\pmod3$, $${\rm ord}_3\,a_n=\left\{ \begin{array}{ll} \lceil\frac{n}{3}\rceil+1,& \hbox{if ~$n\equiv 4, 13, 22\pmod{27}$}; \\ \lceil\frac{n}{3}\rceil, & \hbox{otherwise.} \end{array} \right.$$ \end{remark} Now, we prove our main result. \begin{thm}\label{theorem5} For every positive integer $n$, the cycle $C_n$ is ${\cal D}$-unique. \end{thm} \begin{proof}{The assertion is trivial for $n=1, 2, 3$, so let $n\geq 4$. Let $G$ be a simple graph with $D(G,x)=D(C_n,x)$. By Lemma \ref{regular}, $G$ is 2-regular and so it is a disjoint union of cycles $C_{n_1},\ldots,C_{n_k}$. Hence, by Lemma \ref{union}, $D(G,x)=\prod_{i=1}^k D(C_{n_i},x)$. Thus $n=n_1+\cdots+n_k$ and, by Lemma \ref{Zhang}, ${\lceil\frac{n}{3}\rceil}={\lceil\frac{n_1}{3}\rceil}+\cdots+{\lceil\frac{n_k}{3}\rceil}.$ Therefore at least $k-2$ of the numbers $n_1, \ldots, n_k$ are divisible by 3. On the other hand, $${\rm ord}_3\,D(C_n,-3)=\sum_{i=1}^k {\rm ord}_3\,D(C_{n_i},-3).$$ Now, by Lemma \ref{ord} it is easily seen that $k\leq3$. Now, let $\alpha_n:=D(C_n,-1)$, for every positive integer $n$. Since $D(C_n,x)=\prod_{i=1}^k D(C_{n_i},x)$, we have $\alpha_n=\prod_{i=1}^k \alpha_{n_i}$. By Lemma \ref{-1}, $\alpha_n\in\{-1,3\}$. If $\alpha_n=3$, then exactly one of the numbers $n_1, \ldots, n_k$ is divisible by 4, and therefore $k$ is an odd number.
If $\alpha_n=-1$, then for every $i$, $1\leq i\leq k$, $\alpha_{n_i}=-1$, and thus $k$ is an odd number. Since $k\leq3$, then $k\in\{1,3\}$. It remains to show that $k\ne3$. Let $\beta_n:=D'(C_n,-1)$, for every $n\geq 1$, where $D'(C_n,x)$ is the derivative of $D(C_n,x)$ with respect to $x$. Then by the recursive formula given in Lemma \ref{cycle} we conclude that for every $n$, $n\geq 4$, $\beta_n=-(\alpha_n+\beta_{n-1}+\beta_{n-2}+\beta_{n-3})$. Now, by induction on $n$ and using Lemma \ref{-1}, we have: \begin{equation}\label{lemma5} \beta_n=\left\{ \begin{array}{ll} -n, & \hbox{if $n\equiv0\pmod4$;} \\ n, & \hbox{if $n\equiv1\pmod4$;} \\ 0, & \hbox{otherwise.} \end{array} \right. \end{equation} Let $\theta_n:=D''(C_n,-1)$, for every $n$, $n\geq 1$. By Lemma \ref{cycle}, we conclude that for every $n\geq 4$, $\theta_n=-2\alpha_n-2\beta_{n}-(\theta_{n-1}+\theta_{n-2}+\theta_{n-3}).$ Now, by induction on $n$, using Lemma \ref{-1} and relation (\ref{lemma5}), we obtain the following: \begin{equation}\label{lemma6} \theta_n=\left\{ \begin{array}{ll} n(n-4)/2, & \hbox{if $n\equiv0\pmod4$;} \\ -n(n-1)/2, & \hbox{if $n\equiv1\pmod4$;} \\ n(n+2)/4, & \hbox{if $n\equiv2\pmod4$;}\\ 0, & \hbox{if $n\equiv3\pmod4$.} \end{array} \right. \end{equation} Now, let $k=3$. Thus \begin{equation}\label{k=3} D(C_n,x)=\prod_{i=1}^3 D(C_{n_i},x). \end{equation} By putting $x=-1$ in relation (\ref{k=3}), we find that $\alpha_n=\alpha_{n_1}\alpha_{n_2}\alpha_{n_3}$. 
Since $n=n_1+n_2+n_3$, by Lemma \ref{-1}, ten cases can be considered: \begin{enumerate} \item[1)] ~ $n\equiv 0\pmod4$, $n_1\equiv 0\pmod4$, $n_2\equiv 1\pmod4$, $n_3\equiv 3\pmod4$; \item[2)] ~ $n\equiv 0\pmod4$, $n_1\equiv 0\pmod4$, $n_2\equiv 2\pmod4$, $n_3\equiv 2\pmod4$; \item[3)] ~ $n\equiv 1\pmod4$, $n_1\equiv 1\pmod4$, $n_2\equiv 1\pmod4$, $n_3\equiv 3\pmod4$; \item[4)] ~ $n\equiv 1\pmod4$, $n_1\equiv 1\pmod4$, $n_2\equiv 2\pmod4$, $n_3\equiv 2\pmod4$; \item[5)] ~ $n\equiv 1\pmod4$, $n_1\equiv 3\pmod4$, $n_2\equiv 3\pmod4$, $n_3\equiv 3\pmod4$; \item[6)] ~ $n\equiv 2\pmod4$, $n_1\equiv 1\pmod4$, $n_2\equiv 2\pmod4$, $n_3\equiv 3\pmod4$; \item[7)] ~ $n\equiv 2\pmod4$, $n_1\equiv 2\pmod4$, $n_2\equiv 2\pmod4$, $n_3\equiv 2\pmod4$; \item[8)] ~ $n\equiv 3\pmod4$, $n_1\equiv 1\pmod4$, $n_2\equiv 1\pmod4$, $n_3\equiv 1\pmod4$; \item[9)] ~ $n\equiv 3\pmod4$, $n_1\equiv 1\pmod4$, $n_2\equiv 3\pmod4$, $n_3\equiv 3\pmod4$; \item[10)] ~ $n\equiv 3\pmod4$, $n_1\equiv 2\pmod4$, $n_2\equiv 2\pmod4$, $n_3\equiv 3\pmod4$. \end{enumerate} For instance, if Case 1 occurs, by differentiating both sides of equality (\ref{k=3}) and putting $x=-1$, we find that $\beta_n=\beta_{n_1}\alpha_{n_2}\alpha_{n_3}+\alpha_{n_1}\beta_{n_2}\alpha_{n_3}+\alpha_{n_1}\alpha_{n_2}\beta_{n_3}$. Now, by Lemma \ref{-1} and relation (\ref{lemma5}) we obtain that $n_3=2n_2$, which is impossible. Similarly, in Cases 2, 3, 4, 5, 6, 8, and 9 we obtain a contradiction. If Case 7 occurs, then by differentiating both sides of equality (\ref{k=3}) twice and putting $x=-1$, we conclude that: $$\theta_n=\theta_{n_1}\alpha_{n_2}\alpha_{n_3}+\alpha_{n_1}\theta_{n_2}\alpha_{n_3}+\alpha_{n_1}\alpha_{n_2}\theta_{n_3} +2\beta_{n_1}\beta_{n_2}\alpha_{n_3}+2\beta_{n_1}\beta_{n_3}\alpha_{n_2}+2\beta_{n_2}\beta_{n_3}\alpha_{n_1}.$$ Now, by Lemma \ref{-1}, and using relations (\ref{lemma5}) and (\ref{lemma6}), we find that $n_1n_2+n_1n_3+n_2n_3=0$, which is impossible. Similarly, for Case 10 we get a contradiction.
Thus $k=1$ and the proof is complete.} \end{proof} By the following lemma and Theorem \ref{theorem5}, the next corollary follows immediately. \begin{lem} {\rm\cite[Corollary 2]{aap}} If $G$ is ${\cal D}$-unique, then for every $m$, $m\ge1$, $G\vee K_m$ is ${\cal D}$-unique. \end{lem} \begin{cor} For every two positive integers $m$ and $n$, $K_m\vee C_n$ is ${\cal D}$-unique. In particular $W_n$ is ${\cal D}$-unique. \end{cor} \noindent{\bf Acknowledgements.} The research of the first author was in part supported by a grant (No. 87050212) from school of Mathematics, Institute for Research in Fundamental Sciences (IPM).
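The recursion of Lemma \ref{cycle} and the values $\alpha_n$ of Lemma \ref{-1} can be verified numerically. The following sketch builds $D(C_n,x)$ both from the definition (brute force over vertex subsets) and from the three-term recursion, then evaluates at $x=-1$; the range $n\leq 12$ is an arbitrary choice for illustration.

```python
# Check Lemma (cycle) against the definition of D(C_n, x), then check
# Lemma (-1): D(C_n, -1) is 3 when n ≡ 0 (mod 4) and -1 otherwise.
from itertools import combinations

def dom_poly_bruteforce(n):
    """Coefficients [d(C_n,0), ..., d(C_n,n)] straight from the definition."""
    coeffs = [0] * (n + 1)
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            closed = set()
            for v in S:
                closed.update({(v - 1) % n, v, (v + 1) % n})
            if len(closed) == n:  # S is a dominating set of C_n
                coeffs[size] += 1
    return coeffs

def dom_polys_recursive(N):
    """Coefficient lists for D(C_1), ..., D(C_N) via the recursion."""
    polys = [[0, 1], [0, 2, 1], [0, 3, 3, 1]]  # D(C_1), D(C_2), D(C_3)
    for n in range(4, N + 1):
        s = [0] * n
        for q in polys[-3:]:
            for i, c in enumerate(q):
                s[i] += c
        polys.append([0] + s)  # multiply the three-term sum by x
    return polys

polys = dom_polys_recursive(12)
for n in range(1, 9):  # recursion agrees with the definition
    assert polys[n - 1] == dom_poly_bruteforce(n)

alpha = [sum(c * (-1) ** i for i, c in enumerate(q)) for q in polys]
print(alpha)  # [-1, -1, -1, 3, -1, -1, -1, 3, -1, -1, -1, 3]
```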
https://arxiv.org/abs/2209.08164
Solutions of the Variational Equation for an nth Order Boundary Value Problem with an Integral Boundary Condition
In this paper, we discuss differentiation of solutions to the boundary value problem $y^{(n)} = f(x, y, y^{'}, y^{''}, \ldots, y^{(n-1)}), \; a<x<b,\; y^{(i)}(x_j) = y_{ij},\; 0\leq i \leq m_j, \; 1 \leq j \leq k-1$, and $y^{(i)}(x_k) + \int_c^d p y(x)\;dx = y_{ik}, \;0 \leq i \leq m_k,\;\sum_{i=1}^km_i=n$ with respect to the boundary data. We show that under certain conditions, partial derivatives of the solution $y(x)$ of the boundary value problem with respect to the various boundary data exist and solve the associated variational equation along $y(x)$.
\section{Introduction} Our concern is characterizing partial derivatives with respect to the boundary data of solutions to the $n$th order nonlocal boundary value problem \begin{equation}\label{eq1} y^{(n)} = f\left(x, y, y^{'}, y^{''}, \ldots, y^{(n-1)}\right), \; a<x<b \end{equation} satisfying \begin{equation}\label{eq2} \begin{array}{c} y^{(i)}\left(x_j\right) = y_{ij},\; 0\leq i \leq m_j-1, \; 1 \leq j \leq k-1, \\ y^{(i)}\left(x_k\right) + \displaystyle\int_c^d p y(x)\;dx = y_{ik}, \;0 \leq i \leq m_k-1 \end{array} \end{equation} where, throughout, $k,n\in\mathbb{N}$ with $2 \leq k \leq n$, $m_1, \ldots, m_k\in\mathbb{Z}^+$ are such that $\sum_{i=1}^km_i=n$, $a < x_1 < x_2 < \cdots < x_k < c < d < b$, and $p\in\mathbb{R}$. Differentiation of solutions of initial value problems with respect to initial conditions is a classical result in the theory of differential equations. In his book \cite{Hartman64}, Hartman attributes the theorem and proof to Peano; hence, the result is commonly referred to as a theorem of Peano. These derivatives solve the variational equation associated with the differential equation. Subsequently, similar results were obtained for boundary value problems; they rely heavily upon the continuous dependence of solutions of boundary value problems on boundary conditions. The continuous dependence result utilizes a map of initial conditions to boundary conditions and the Brouwer Invariance of Domain Theorem. Results for boundary value problems on differential equations with standard boundary conditions may be found in \cite{Henderson84, Peterson76, Peterson78, Spencer75, Sukup75}. Direct analogues also exist for difference equations \cite{HendersonLee91} and dynamic equations on time scales \cite{BaxterLyonsNeugebauer16}. Results have also been obtained for problems in which the nonlinearity depends on a parameter \cite{HendersonHornHoward94, HendersonJiang15, LyonsMiller15}. 
Researchers have also produced results for various types of boundary conditions including nonlocal \cite{BenchohraHamaniHendersonNtouyas07, EhrkeHendersonKunkelSheng07, HendersonTisdell04, HendersonHopkinsKimLyons08, HopkinsKimLyonsSpeer09, Lawrence02, Lyons11, Lyons14}, functional \cite{Datta98, Ehme93, EhmeEloeHenderson93, EhmeHenderson96, EhmeLawrence00}, and integral \cite{BenchohraHendersonLucaOuahab14,LyonsMajorSeabrook18}. In this paper, we extend the results of \cite{JansonJumanLyons14} to an $n$th order differential equation using the procedure outlined in \cite{Henderson87}. The general idea is to use continuous dependence to write the solution of the boundary value problem as the solution to an initial value problem. After multiple applications of the Mean Value Theorem, we can apply Peano's theorem directly to the problem at hand. The remainder of this paper is organized as follows. In section two, we present the boundary value problem and define its associated variational equation. We also introduce five hypotheses that are imposed upon the differential equation along with Peano's Theorem and the continuous dependence result. Our analogue of Peano's theorem for the boundary value problem with integral boundary condition is found in section three. \section{Assumptions and Background Theorems} We establish a few conditions that are imposed upon (\ref{eq1}): \begin{enumerate} \item[(i)] $f\left(x,y_1, \ldots,y_n\right): (a,b) \times \mathbb{R}^n \to \mathbb{R}$ is continuous, \item[(ii)] $\frac{\partial f}{\partial y_i} \left(x,y_1, \ldots,y_n\right): (a,b) \times \mathbb{R}^n \to\mathbb{R}$ is continuous, $i = 1,\ldots,n,$ \item[(iii)] solutions of initial value problems for (\ref{eq1}) extend to $(a,b).$ \end{enumerate} \begin{remark} Note that \textup{(iii)} is not a necessary condition but lets us avoid continually making statements about maximal intervals of existence inside $(a,b)$. 
\end{remark} \noindent Next, the results discussed rely upon the definition of the variational equation, which we present here. \begin{definition} Given a solution $y(x)$ of \textup{(\ref{eq1})} and for $i=1,2,\ldots,n,$ we define the \textit{variational equation along $y(x)$} by \begin{equation}\label{var} z^{(n)} = \sum_{i=1}^n\frac{\partial f}{\partial y_i} \left(x,y,y',\ldots,y^{(n-1)}\right) z^{(i-1)}. \end{equation} \end{definition} Our aim is an analogue of the following theorem that Hartman \cite{Hartman64} attributes to Peano for (\ref{eq1}), (\ref{eq2}). \begin{theorem}[A Peano Theorem]\label{peano} Assume that, with respect to \textup{(\ref{eq1})}, conditions \textup{(i)-(iii)} are satisfied. Let $x_0 \in (a,b)$ and $$y(x) := y\left(x, x_0, c_0 ,c_1,\ldots,c_{n-1}\right)$$ denote the solution of \textup{(\ref{eq1})} satisfying the initial conditions $y^{(i)}\left(x_0\right) = c_i,\; 0\leq i\leq n-1.$ Then, \begin{enumerate} \item[(a)] for each $0\leq j\leq n-1$, $\alpha_j(x) := \frac{\partial y}{\partial c_j}(x)$ exists on $(a,b)$ and is the solution of the variational equation \textup{(\ref{var})} along $y(x)$ satisfying the initial conditions $$\alpha_j^{(i)}\left(x_0\right) = \delta_{ij}, \; 0\leq i \leq n-1.$$ \item[(b)] $\beta(x) := \frac{\partial y}{\partial x_0}(x)$ exists on $(a,b)$ and is the solution of the variational equation \textup{(\ref{var})} along $y(x)$ satisfying the initial conditions $$\beta^{(i)}\left(x_0\right) = -y^{(i+1)}\left(x_0\right), \; 0\leq i \leq n-1.$$ \item[(c)] $\frac{\partial y}{\partial x_0} (x) = -\sum_{i=0}^{n-1} y^{(i+1)}(x_0) \frac{\partial y}{\partial c_i} (x).$ \end{enumerate} \end{theorem} The next condition guarantees uniqueness of solutions of (\ref{eq1}), (\ref{eq2}) and is a nonlocal analogue of $(m_1,\ldots,m_k)$-disconjugacy. 
\begin{enumerate} \item[(iv)] If, for $0\leq i\leq m_j-1, \; 1\leq j\leq k-1,$ $$y^{(i)}\left(x_j\right) = z^{(i)}\left(x_j\right),$$ and, for $0\leq i\leq m_k-1,$ $$y^{(i)}\left(x_k\right) + \int_c^d p y(x)\;dx = z^{(i)}\left(x_k\right) + \int_c^d pz(x)\;dx,$$ where $y(x)$ and $z(x)$ are solutions of (\ref{eq1}), then, on $(a,b),$ $$y(x) \equiv z(x).$$ \end{enumerate} The last condition provides uniqueness of solutions of (\ref{var}) along all solutions of (\ref{eq1}) and again is a nonlocal analogue of $(m_1,\ldots,m_k)$-disconjugacy. \begin{enumerate} \item[(v)] Given a solution $y(x)$ of (\ref{eq1}), if, for $0\leq i\leq m_j-1, \; 1\leq j\leq k-1,$ $$u^{(i)}\left(x_j\right)=0,$$ and, for $0\leq i\leq m_k-1,$ $$u^{(i)}\left(x_k\right) + \int_c^d p u(x)\;dx = 0,$$ where $u(x)$ is a solution of (\ref{var}) along $y(x),$ then, on $(a,b)$, $$u(x) \equiv 0.$$ \end{enumerate} We also make use of the following continuous dependence result for boundary value problems. A typical proof may be found in \cite{HendersonKarnaTisdell05}. \begin{theorem}[Continuous Dependence on Boundary Conditions]\label{contdep} Assume \textup{(i)-(iv)} are satisfied with respect to \textup{(\ref{eq1})}. Let $y(x)$ be a solution of \textup{(\ref{eq1})} on $(a,b)$. 
Then, there exists a $\delta > 0$ such that, for $$\left|x_j - t_j\right| < \delta,\; 1\leq j\leq k,$$ $$\left|c-\xi\right|<\delta,\; \left|d-\Delta\right|<\delta,\;\left|p-\rho\right|<\delta,$$ $$\left|y^{(i)}\left(x_j\right) - y_{ij} \right| < \delta,\; 0\leq i\leq m_j-1, \; 1\leq j\leq k-1,$$ and $$\left|y^{(i)}\left(x_k\right)+ \int_c^d p y(x)\;dx - y_{ik}\right| < \delta, \; 0\leq i\leq m_k-1,$$ there exists a unique solution $y_{\delta} (x)$ of \textup{(\ref{eq1})} such that $$y^{(i)}_{\delta}\left(t_j\right) = y_{ij},\; 0\leq i\leq m_j-1, \; 1\leq j\leq k-1,$$ $$y_{\delta}^{(i)}\left(t_k\right) + \int_{\xi}^{\Delta} \rho y_{\delta}(x)\;dx = y_{ik},\; 0\leq i\leq m_k-1,$$ and, for $0\leq i\leq n-1,$ $\{y_\delta^{(i)}(x)\}$ converges uniformly to $y^{(i)}(x)$ as $\delta \to 0$ on each compact subinterval $[\alpha,\beta]\subset(a,b)$. \end{theorem} \section{Analogue of Peano's Theorem} \indent In this section, we present our analogue of Theorem \ref{peano} stated in five parts. \begin{theorem}\label{mainresult} Assume conditions \textup{(i)-(v)} are satisfied. 
Let $u(x)=\\u(x, x_1,\ldots,x_k,y_{01},\ldots,y_{m_k-1,k},p,c,d)$ be the solution of \textup{(\ref{eq1})} on $(a,b)$ satisfying $$u^{(i)}\left(x_j\right)=y_{ij}, \; 0\leq i\leq m_j-1, \; 1\leq j\leq k-1,$$ and $$u^{(i)}\left(x_k\right)+\int_c^d pu(x)dx=y_{ik},\; 0\leq i\leq m_k-1.$$ Then, \begin{enumerate} \item [$(a)$] for each $1\leq l\leq k-1$ and $0\leq r\leq m_l-1,\; Y_{rl}(x):=\frac{\partial u}{\partial y_{rl}}(x)$ exists on $(a,b)$ and is the solution of the variational equation \textup{(\ref{var})} along $u(x)$ satisfying the boundary conditions \begin{align*} &Y^{(i)}_{rl}\left(x_j\right) = 0, \; 0 \leq i \leq m_j -1,\; 1 \leq j \leq k-1,\; j \neq l \\ &Y^{(i)}_{rl}\left(x_l\right) = 0,\;0 \leq i \leq m_l - 1,\; i \neq r \\ &Y^{(r)}_{rl}\left(x_l\right) = 1 \\ &Y^{(i)}_{rl}\left(x_k\right) + \int_c^dpY_{rl}(x)dx = 0,\; 0 \leq i \leq m_k - 1, \end{align*} and for $0\le r \le m_k-1,\; Y_{rk}:=\frac{\partial u}{\partial y_{rk}}(x)$ exists on $(a,b)$ and is the solution of the variational equation \textup{(\ref{var})} along $u(x)$ satisfying the boundary conditions \begin{align*} &Y^{(i)}_{rk}\left(x_j\right) = 0,\; 0 \leq i \leq m_j -1,\; 1 \leq j \leq k-1, \\ &Y^{(i)}_{rk}\left(x_k\right) + \int_c^dpY_{rk}(x)dx = 0,\; 0 \leq i \leq m_k - 1, \; i\neq r,\\ &Y^{(r)}_{rk}\left(x_k\right) + \int_c^dpY_{rk}(x)dx = 1, \end{align*} \item [(b)] for each $1\leq l\leq k-1,\; X_{l}(x):=\frac{\partial u}{\partial x_{l}}(x)$ exists on $(a,b)$ and is the solution of the variational equation \textup{(\ref{var})} along $u(x)$ satisfying the boundary conditions \begin{align*} &X^{(i)}_{l}\left(x_j\right) = 0,\;0\le i\le m_j-1,\; 1 \leq j \leq k-1,\; j \neq l \\ &X^{(i)}_{l}\left(x_l\right) = -u^{(i+1)}(x_l),\; 0 \leq i \leq m_l - 1, \\ &X^{(i)}_{l}\left(x_k\right) + \int_c^dpX_{l}(x)dx = 0,\; 0 \leq i \leq m_k - 1, \end{align*} and $X_k:=\frac{\partial u}{\partial x_k}(x)$ exists on $(a,b)$ and is the solution of the variational equation \textup{(\ref{var})} along $u(x)$ 
satisfying the boundary conditions \begin{align*} &X^{(i)}_{k}\left(x_j\right) = 0,\; 0\le i\le m_j-1,\;1 \leq j \leq k-1, \\ &X^{(i)}_{k}\left(x_k\right) + \int_c^dpX_{k}(x)dx = -u^{(i+1)}\left(x_k\right),\; 0 \leq i \leq m_k - 1. \end{align*} \item [(c)] $C(x):=\frac{\partial u}{\partial c}(x)$ exists on $(a,b)$ and is the solution of the variational equation \textup{(\ref{var})} along $u(x)$ satisfying the boundary conditions \begin{align*} &C^{(i)}\left(x_j\right) = 0,\; 0\le i\le m_j-1,\;1 \leq j \leq k-1, \\ &C^{(i)}\left(x_k\right) + \int_c^dpC(x)dx = pu(c),\; 0 \leq i \leq m_k - 1. \end{align*} \item [(d)] $D(x):=\frac{\partial u}{\partial d}(x)$ exists on $(a,b)$ and is the solution of the variational equation \textup{(\ref{var})} along $u(x)$ satisfying the boundary conditions \begin{align*} &D^{(i)}\left(x_j\right) = 0,\; 0\le i\le m_j-1,\;1 \leq j \leq k-1, \\ &D^{(i)}\left(x_k\right) + \int_c^dpD(x)dx = -pu(d),\; 0 \leq i \leq m_k - 1. \end{align*} \item [(e)] $P(x):=\frac{\partial u}{\partial p}(x)$ exists on $(a,b)$ and is the solution of the variational equation \textup{(\ref{var})} along $u(x)$ satisfying the boundary conditions \begin{align*} &P^{(i)}\left(x_j\right) = 0,\; 0\le i\le m_j-1,\;1 \leq j \leq k-1, \\ &P^{(i)}\left(x_k\right) + \int_c^dpP(x)dx = -\int_c^du(x)dx,\; 0 \leq i \leq m_k - 1. \end{align*} \end{enumerate} \end{theorem} \bigskip \begin{proof} We only prove part (a) as the proofs of (b)-(e) follow similarly. Fix integers $1 \leq l\leq k-1$ and $0\le r\le m_l-1.$ We consider $Y_{rl}(x)=\frac{\partial u}{\partial y_{rl}}(x).$ Since the argument for the case of $Y_{ik}(x)=\frac{\partial u}{\partial y_{ik}}(x),\;\;0\le i\le m_k-1,$ is similar, we omit its proof. To ease the burdensome notation, and since all boundary data other than $y_{rl}$ are fixed, we denote $u(x, x_1,\ldots, x_k, y_{01}, \ldots, y_{rl}, \ldots, y_{m_k -1, k},p,c,d)$ by $u(x, y_{rl})$. 
Let $\delta > 0$ be as in Theorem \ref{contdep} with $0 < |h| < \delta$, and define the difference quotient for $y_{rl}$ by \begin{displaymath} Y_{rlh}(x) = \frac{1}{h}\left[u\left(x, y_{rl} + h\right) - u\left(x, y_{rl}\right)\right]. \end{displaymath} First, we inspect the boundary conditions for $Y_{rlh}$. Note that for every $h \neq 0$ and $0 \leq i \leq m_j - 1,\; $ $1 \leq j \leq k-1,\; j\neq l,$ \begin{align*} Y_{rlh}^{(i)}\left(x_j\right) &= \frac{1}{h}\left[u^{(i)}\left(x_j, y_{rl} + h\right) - u^{(i)}\left(x_j, y_{rl}\right)\right] \\ &= \frac{1}{h}\left[y_{ij} - y_{ij}\right] \\ &= 0, \end{align*} for every $0 \leq i \leq m_l - 1,\;i \neq r$ \begin{align*} Y_{rlh}^{(i)}\left(x_l\right) &= \frac{1}{h}\left[u^{(i)}\left(x_l, y_{rl} + h\right) - u^{(i)}\left(x_l, y_{rl}\right)\right]\\ &= \frac{1}{h}\left[y_{il} - y_{il}\right]\\ &= 0, \end{align*} and \begin{align*} Y_{rlh}^{(r)}\left(x_l\right) &= \frac{1}{h}\left[u^{(r)}\left(x_l, y_{rl} + h\right) - u^{(r)}\left(x_l, y_{rl}\right)\right]\\ &= \frac{1}{h}\left[y_{rl} + h - y_{rl}\right]\\ &= 1. \end{align*} Finally, for every $0 \leq i \leq m_k -1$ \begin{align*} Y_{rlh}^{(i)}\left(x_k\right) + \int_c^d pY_{rlh}(x)dx &= \frac{1}{h}\left[u^{(i)}\left(x_k, y_{rl} + h\right) - u^{(i)}\left(x_k, y_{rl}\right)\right.\\ &+\left.\int_c^d p\left(u(x,y_{rl}+h)- u(x,y_{rl})\right)dx\right]\\ &= \frac{1}{h}\left[y_{ik} - y_{ik}\right] \\ &= 0. \end{align*} Next, we show that $Y_{rlh}(x)$ is a solution of the variational equation. 
To that end, for $m_l \leq i \leq n-1$, let \begin{displaymath} \mu_i = u^{(i)}\left(x_l, y_{rl}\right) \end{displaymath} and \begin{displaymath} \nu_i = \nu_i(h) = u^{(i)}\left(x_l, y_{rl} + h\right) - \mu_i. \end{displaymath} Note that, by Theorem \ref{contdep}, for $m_l\leq i\leq n-1, \;\nu_i = \nu_i (h) \to 0$ as $h\to 0.$ Using the notation of Theorem \ref{peano} for solutions of initial value problems for (\ref{eq1}) and viewing $u(x)$ as the solution of an initial value problem at $x_l$, i.e. $u(x) = y\left(x,x_l,y_{0l},\ldots,y_{m_l-1,l},\mu_{m_l},\ldots,\mu_{n-1}\right)$, we have \begin{align*} Y_{rlh}(x) = \frac{1}{h}[ &y(x, x_l, y_{0l}, \ldots, y_{rl} + h, \ldots, y_{m_l -1,l}, \mu_{m_l} + \nu_{m_l}, \mu_{m_l + 1} + \nu_{m_l + 1}, \ldots, \mu_{n-1} + \nu_{n-1}) \\&- y(x, x_l, y_{0l}, \ldots, y_{rl}, \ldots, y_{m_l -1,l}, \mu_{m_l}, \mu_{m_l + 1}, \ldots, \mu_{n-1})]. \end{align*} Next, by utilizing telescoping sums to vary only one component at a time, we have \begin{align*} Y_{rlh}(x) =& \frac{1}{h}[y(x, x_l, y_{0l},\ldots,y_{rl} + h, \ldots,\mu_{m_l} + \nu_{m_l}, \mu_{m_l + 1} + \nu_{m_l + 1}, \ldots,\mu_{n-1} + \nu_{n-1})\\&- y(x, x_l, y_{0l},\ldots,y_{rl},\ldots, \mu_{m_l} + \nu_{m_l}, \mu_{m_l + 1} + \nu_{m_l + 1}, \ldots, \mu_{n-1} + \nu_{n-1})\\&+ y(x, x_l, y_{0l},\ldots,y_{rl},\ldots, \mu_{m_l} + \nu_{m_l}, \mu_{m_l + 1} + \nu_{m_l + 1}, \ldots, \mu_{n-1} + \nu_{n-1}) \\& - y(x, x_l, y_{0l},\ldots,y_{rl}, \ldots, \mu_{m_l}, \mu_{m_l + 1} + \nu_{m_l + 1}, \ldots, \mu_{n-1} + \nu_{n-1}) \\ &\quad\vdots \\ &+ y(x, x_l, y_{0l},\ldots,y_{rl},\ldots, \mu_{m_l}, \mu_{m_l + 1}, \ldots, \mu_{n-1} + \nu_{n-1}) \\ &- y(x, x_l, y_{0l},\ldots, y_{rl},\ldots, \mu_{m_l}, \ldots, \mu_{n- 1})]. 
\end{align*} By Theorem \ref{peano} and the Mean Value Theorem, we obtain \begin{align*} Y_{rlh}(x) &= \alpha_r(x; y(x, x_l, y_{0l},\ldots,y_{rl} + \bar{h}, \ldots,\mu_{m_l} + \nu_{m_l}, \ldots , \mu_{n - 1} + \nu_{n-1}))\\ &+ \frac{\nu_{m_l}}{h}\alpha_{m_l}(x;y(x, x_l, y_{0l},\ldots,y_{rl},\ldots, \mu_{m_l} + \bar{\nu}_{m_l},\mu_{m_l + 1} + \nu_{m_l + 1}, \ldots, \mu_{n-1} + \nu_{n-1}))\\ &+ \cdots \\&+ \frac{\nu_{n-1}}{h}\alpha_{n-1}(x;y(x, x_l, y_{0l}, \ldots,\mu_{m_l}, \mu_{m_l + 1}, \ldots, \mu_{n-1} + \bar{\nu}_{n-1})), \end{align*} where for $0 \leq j \leq n-1,$ $\alpha_j(x; y(\cdot))$ is the solution of the variational equation (\ref{var}) along $y(\cdot)$ satisfying \begin{displaymath} \alpha^{(i)}_j\left(x_l\right) = \delta_{ij}, \; 0 \leq i \leq n-1. \end{displaymath} Furthermore, $y_{rl} + \bar{h}$ is between $y_{rl}$ and $y_{rl} + h$, and for each $m_l\le i\le n-1$, $\mu_i + \bar{\nu}_i$ is between $\mu_i$ and $\mu_i + \nu_i$. Note that we use $y(\cdot)$ to simplify the notation. Thus, to show $\displaystyle\lim_{h \rightarrow 0} Y_{rlh}$ exists, it suffices to show, for $m_l \leq i \leq n-1,$ that $\displaystyle\lim_{h \rightarrow 0}\frac{\nu_i}{h}$ exists. Recall that \begin{align*} &Y_{rlh}^{(i)}\left(x_j\right) = 0, \; 0 \leq i \leq m_j - 1,\; 1 \leq j \leq k-1, \;j \neq l, \\ &Y_{rlh}^{(i)}\left(x_k\right) + \int_c^d pY_{rlh}(x)dx = 0, \;0 \leq i \leq m_k - 1. 
\end{align*} Hence, by substituting into the equations above and isolating the terms involving $\alpha_r,$ we create a system of $n-m_l$ equations in the $n-m_l$ unknowns $\frac{\nu_{m_l}}{h},\ldots,\frac{\nu_{n-1}}{h}$: \begin{align*} -\alpha_r^{(i)}\left(x_j; y(\cdot)\right) = \frac{\nu_{m_l}}{h}\alpha^{(i)}_{m_l}\left(x_j; y(\cdot)\right)& + \cdots + \frac{\nu_{n-1}}{h}\alpha^{(i)}_{n-1}\left(x_j; y(\cdot)\right),\\& 0\le i\le m_j-1,\;1\le j\le k-1,\;j\ne l \end{align*} and \begin{align*} -\alpha_r^{(i)}\left(x_k; y(\cdot)\right) &- \int_c^d p\alpha_r\left(x;y(\cdot)\right)dx= \frac{\nu_{m_l}}{h}\left[\alpha^{(i)}_{m_l}\left(x_k; y(\cdot)\right)+ \int_c^d p\alpha_{m_l}(x;y(\cdot))dx\right] \\&+ \cdots + \frac{\nu_{n-1}}{h}\left[\alpha^{(i)}_{n-1}\left(x_k; y(\cdot)\right) + \int_c^d p\alpha_{n-1}(x;y(\cdot))dx\right],\;0\le i\le m_k-1. \end{align*} In the system of equations above, we notice that $y(\cdot)$ is not always the same. Therefore, we consider the corresponding coefficient matrix with each $\alpha_i$ evaluated along $y(x).$ $$ M := \begin{pmatrix} \alpha_{m_l}(x_1; y(x)) & \alpha_{m_l+1}(x_1;y(x)) & \cdots & \alpha_{n-1}(x_1; y(x))\\ \alpha^{'}_{m_l}(x_1; y(x)) & \alpha^{'}_{m_l+1}(x_1; y(x)) & \cdots & \alpha^{'}_{n-1}(x_1; y(x))\\ \vdots & \vdots & \ddots & \vdots\\ \alpha_{m_l}^{(m_1 -1)}(x_1; y(x)) & \alpha^{(m_1 -1)}_{m_l+1}(x_1; y(x)) & \cdots & \alpha^{(m_1 -1)}_{n-1}(x_1; y(x))\\ \vdots & \vdots & \ddots & \vdots\\ \alpha_{m_l}^{(m_{l-1} -1)}(x_{l-1}; y(x)) & \alpha^{(m_{l-1} -1)}_{m_l+1}(x_{l-1}; y(x)) & \cdots & \alpha^{(m_{l-1} -1)}_{n-1}(x_{l-1}; y(x))\\ \alpha_{m_l}(x_{l+1}; y(x)) & \alpha_{m_l + 1}(x_{l+1}; y(x)) & \cdots & \alpha_{n-1}(x_{l+1}; y(x))\\ \vdots & \vdots & \ddots & \vdots\\ \alpha_{m_l}(x_k; y(x)) & \alpha_{m_l+1}(x_k; y(x)) & \cdots & \alpha_{n-1}(x_k; y(x))\\ + \int_c^d p\alpha_{m_l}(x;y(x))dx & +\int_c^d p\alpha_{m_l + 1}(x;y(x))dx & \cdots & +\int_c^d p\alpha_{n-1}(x;y(x))dx\\ \vdots & \vdots & \ddots & \vdots\\ \alpha_{m_l}^{(m_k -1)}(x_k; y(x)) & \alpha_{m_l + 1}^{(m_k -1)}(x_k; y(x)) & \cdots & \alpha_{n-1}^{(m_k -1)}(x_k; y(x)) \\ + \int_c^d p\alpha_{m_l} 
(x;y(x))dx & + \int_c^d p\alpha_{m_l + 1}(x;y(x))dx & \cdots & + \int_c^d p\alpha_{n-1}(x;y(x))dx \end{pmatrix}$$ We claim that $\det(M) \neq 0$. Suppose to the contrary that $\det(M) = 0$. Then the column vectors of $M$ are linearly dependent; that is, there exist scalars $p_i\in\mathbb{R},\;m_l\le i\le n-1,$ not all zero, such that $$p_{m_l} \begin{pmatrix} \alpha_{m_l}(x_1; y(x))\\ \alpha^{'}_{m_l}(x_1; y(x))\\ \vdots\\ \alpha_{m_l}^{(m_{l-1} -1)}(x_{l-1}; y(x))\\ \alpha_{m_l}(x_{l+1}; y(x))\\ \vdots\\ \alpha_{m_l}^{(m_k -1)}(x_k; y(x)) \\ + \int_c^d p\alpha_{m_l}(x;y(x))dx \end{pmatrix} + \cdots + p_{n-1} \begin{pmatrix} \alpha_{n-1}(x_1; y(x))\\ \alpha^{'}_{n-1}(x_1; y(x))\\ \vdots\\ \alpha_{n-1}^{(m_{l-1} -1)}(x_{l-1}; y(x))\\ \alpha_{n-1}(x_{l+1}; y(x))\\ \vdots\\ \alpha_{n-1}^{(m_k -1)}(x_k; y(x)) \\ + \int_c^d p\alpha_{n-1}(x;y(x))dx \end{pmatrix} = \begin{pmatrix} 0 \\ 0\\ \vdots \\ 0\\ 0\\ \vdots\\ 0\\ \end{pmatrix}.$$ Set \begin{displaymath} w(x; y(x)) := p_{m_l}\alpha_{m_l}(x; y(x)) + \cdots + p_{n-1}\alpha_{n-1}(x;y(x)). \end{displaymath} Then, by Theorem \ref{peano} and the linearity of (\ref{var}), $w(x; y(x))$ is a solution of (\ref{var}) along $y(x)$ satisfying \begin{displaymath} w^{(i)}(x_j; y(x)) = 0,\; 0 \leq i \leq m_j -1, \;1 \leq j \leq k-1,\; j\neq l \end{displaymath} and \begin{displaymath} w^{(i)}(x_k; y(x)) + \int_c^d pw(x; y(x))dx= 0,\;0\le i\le m_k-1. \end{displaymath} When coupled with hypothesis (v), we have $w(x; y(x)) \equiv 0$. Since the initial conditions $\alpha_j^{(i)}(x_l) = \delta_{ij}$ show that the functions $\alpha_{m_l},\ldots,\alpha_{n-1}$ are linearly independent, it follows that $p_{m_l} = p_{m_l+1} = \cdots = p_{n-1} = 0$, which contradicts the choice of the $p_i$'s. Hence, $\det(M) \neq 0$, implying that $M$ is invertible and, by Theorem \ref{contdep}, so is $M(h)$ for $h$ sufficiently small. Here, $M(h)$ is the appropriately defined matrix from the system of equations using the correct $y(\cdot)$. Therefore, for each $m_l \leq i \leq n-1$, we can solve for $\frac{\nu_i}{h}$ by using Cramer's Rule. 
Suppressing the arguments of each $\alpha$, we obtain \begin{align*}\frac{\nu_i}{h} &=\frac{1}{\det(M(h))} \times\\ &\begin{vmatrix} \alpha_{m_l} & \cdots & \alpha_{i-1} & -\alpha_r & \alpha_{i+1} & \cdots & \alpha_{n-1} \\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \alpha_{m_l} + \int p\alpha_{m_l}& \cdots & \alpha_{i-1} + \int p\alpha_{i - 1} & -\alpha_r - \int p\alpha_{r}& \alpha_{i+1} + \int p\alpha_{i + 1} & \cdots & \alpha_{n-1} + \int p\alpha_{n-1} \end{vmatrix}\end{align*} Note as $h \rightarrow 0$, $\det(M(h)) \rightarrow \det(M),$ and so, for $m_l \leq i \leq n-1$, $\nu_i(h)/h \rightarrow \det(M_i) / \det(M) := B_i$ as $h \rightarrow 0$, where $M_i$ is the $(n- m_l) \times (n- m_l)$ matrix found by replacing the appropriate column of the matrix $M$ by \begin{align*} \textup{col}\Big[&-\alpha_r(x_1;y(x)),\ldots,-\alpha_r^{(m_1-1)}(x_1;y(x)),\ldots,-\alpha_r(x_{l-1};y(x)),\ldots,-\alpha_r^{(m_{l-1}-1)}(x_{l-1};y(x)),\\&-\alpha_r(x_{l+1};y(x)),\ldots,-\alpha_r^{(m_{l+1}-1)}(x_{l+1};y(x)), \ldots,-\alpha_r(x_k;y(x))-\int_c^d p\alpha_r(x;y(x))dx,\\&\ldots,-\alpha_r^{(m_k-1)}(x_k;y(x))-\int_c^d p\alpha_r(x;y(x))dx\Big]. \end{align*} \par Now, let $Y_{rl}(x) = \displaystyle\lim_{h \rightarrow 0}Y_{rlh}(x)$, and note by construction $$Y_{rl}(x) = \frac{\partial y}{\partial y_{rl}}(x)=\frac{\partial u}{\partial y_{rl}}(x).$$ Furthermore, $$Y_{rl}(x) = \lim_{h \rightarrow 0} Y_{rlh}(x) =\alpha_r\left(x;u(x)\right)+\sum_{i=m_l}^{n-1}B_i\alpha_i\left(x;u(x)\right)$$ which is a solution of the variational equation (\ref{var}) along $u(x)$. 
In addition, \begin{align*} &Y_{rl}^{(i)}\left(x_j\right) = \lim_{h \rightarrow 0}Y^{(i)}_{rlh}\left(x_j\right) = 0, \; 0 \leq i \leq m_j-1, \; 1 \leq j \leq k-1,\;j\ne l,\\ &Y_{rl}^{(i)}\left(x_l\right) = \lim_{h \rightarrow 0}Y_{rlh}^{(i)}\left(x_l\right) = 0, \; 0 \leq i \leq m_l-1,\; i\ne r,\\ &Y_{rl}^{(r)}\left(x_l\right) = \lim_{h \rightarrow 0}Y^{(r)}_{rlh}\left(x_l\right) = 1,\\ &Y_{rl}^{(i)}\left(x_k\right) + \int_c^d p Y_{rl}(x)\;dx = \lim_{h \rightarrow 0} \left[Y_{rlh}^{(i)}\left(x_k\right) + \int_c^d p Y_{rlh}(x)\;dx\right] = 0,\;0 \leq i \leq m_k -1. \end{align*} \end{proof} Finally, we note that, similar to part (c) of Peano's theorem, the solutions found in (a)-(e) of the main result may be written as various combinations of one another due to the dimensionality of the solution space. We refer the reader to Corollary 4.1 in \cite{Lyons11} for an example. \bibliographystyle{amsplain}
https://arxiv.org/abs/2112.11997
Bohr sets in sumsets I: Compact groups
Let $G$ be a compact abelian group and $\phi_1, \phi_2, \phi_3$ be continuous endomorphisms on $G$. Under certain natural assumptions on the $\phi_i$'s, we prove the existence of Bohr sets in the sumset $\phi_1(A) + \phi_2(A) + \phi_3(A)$, where $A$ is either a set of positive Haar measure, or comes from a finite partition of $G$. The first result generalizes theorems of Bogolyubov and Bergelson-Ruzsa. As a variant of the second result, we show that for any partition $\mathbb{Z} = \bigcup_{i=1}^r A_i$, there exists an $i$ such that $A_i - A_i + sA_i$ contains a Bohr set for any $s \in \mathbb{Z} \setminus \{ 0 \}$. The latter is a step toward an open question of Katznelson and Ruzsa.
\section{Introduction and statements of results} Let $G$ be an abelian topological group. For a finite set $\Lambda$ of characters (i.e. continuous homomorphisms from $G$ to $S^1 := \{ z \in \mathbb{C}: |z|=1\}$) and $\eta > 0$, the set \begin{equation*} \label{eq:bohr1} B(\Lambda; \eta) := \{ x \in G : | \gamma(x)-1 | < \eta \textup{ for any } \gamma \in \Lambda\} \end{equation*} is called a \textit{Bohr set} or a \emph{Bohr neighborhood of $0$}. We refer to $\eta$ as the \textit{radius} and $|\Lambda|$ as the \textit{rank} (or \textit{dimension}) of the Bohr set. The set $B(\Lambda; \eta)$ is also called a Bohr-$(|\Lambda|, \eta)$ set. If $A, B \subset G$, the sumset and difference set of $A$ and $B$ are $A\pm B: = \{a\pm b: a \in A, b \in B \}$. If $c \in \mathbb{Z}$, we define $cA:=\{ ca: a \in A\}$. The study of Bohr sets in sumsets started with the following important theorem of Bogolyubov \cite{bogo}\footnote{This is reminiscent of Steinhaus' theorem, which says that if $A \subset \mathbb{R}$ has positive Lebesgue measure, then $A-A$ contains an open interval around 0.}. \begin{theorem}[Bogolyubov \cite{bogo}] If $A \subset \mathbb{Z}$ has positive upper Banach density, i.e. \[ d^*(A) : = \lim_{N \rightarrow \infty} \sup_{M \in \mathbb{Z}} \frac{|A \cap [M+1,M+N]|}{N} >0, \] then $A + A - A - A$ contains a Bohr set whose rank and radius depend only on $d^*(A)$. \end{theorem} While it originated from the study of almost periodic functions, Bogolyubov's theorem is now a standard tool in additive combinatorics. It was used in Ruzsa's proof of Freiman's theorem \cite{ruzsa6} and in Gowers' proof of Szemer\'edi's theorem \cite{gowers}. See \cite{bienvenu-le1, gm1} for a recent variant of Bogolyubov's theorem and its applications. The more copies of $A$ are involved, the more structured the sumset is. This reflects the fact that more convolutions result in smoother functions. 
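For concreteness, we record the standard (and easily verified) description of Bohr sets in $\mathbb{Z}$. \begin{remark} Every character of the discrete group $\mathbb{Z}$ has the form $\gamma_{\alpha}(x) = e^{2\pi i \alpha x}$ for some $\alpha \in [0,1)$. Since $|e^{2\pi i \theta} - 1| = 2|\sin(\pi \theta)| \leq 2\pi \|\theta\|$, where $\|\cdot\|$ denotes the distance to the nearest integer, the Bohr set $B(\{\gamma_{\alpha_1}, \ldots, \gamma_{\alpha_k}\}; \eta)$ contains the set $\{x \in \mathbb{Z} : \|\alpha_j x\| < \eta/(2\pi) \textup{ for } 1 \leq j \leq k\}$. In particular, by simultaneous Diophantine approximation, every Bohr set in $\mathbb{Z}$ is syndetic. \end{remark}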
Thus, a natural question is: What is the smallest number of copies of $A$ that will guarantee the existence of a Bohr set? In $\mathbb{Z}$, it is known that $A-A$ does not necessarily contain a Bohr set, which is a result of Kriz \cite{kriz}. On the other hand, F\o lner \cite{folner1} proved that there is a Bohr set $B$ such that $(A-A) \setminus B$ has density 0. Regarding three copies of $A$, Bergelson and Ruzsa \cite{br} proved the following: \begin{theorem}[Bergelson-Ruzsa \cite{br}] \label{th:br} Let $r, s, t$ be non-zero integers satisfying $r +s+t = 0$. If $A \subset \mathbb{Z}$ has positive upper Banach density, then $r A+s A+t A$ contains a Bohr set whose rank and radius depend only on $r, s, t$ and $d^*(A)$. \end{theorem} The condition $r + s + t = 0$ is easily seen to be necessary, by taking $A = M \mathbb{Z} + 1$ for some $M > |r| + |s| + |t|$, since any Bohr set must necessarily contain 0. In particular, one cannot expect $A + A - A$ to contain a Bohr set. When $(r, s, t) = (1, 1, -2)$, Bergelson-Ruzsa's theorem generalizes Bogolyubov's, since $A+A-2 A \subset A+A-A-A$. \subsection{Partition results in $\mathbb{Z}$} While the problem of finding Bohr sets in sumsets of sets having positive density has attracted much attention, the analogous question concerning partitions of $\mathbb{Z}$ was little studied until recently. Regarding the latter, there is a well-known problem in additive combinatorics and dynamical systems, which was popularized by Ruzsa \cite[Chapter 5]{ruzsabook} and Katznelson \cite{katznelson}. \begin{question} \label{q:kr} If $\mathbb{Z} = \bigcup_{i=1}^r A_i$, must there exist $i \in \{1, 2, \ldots, r\}$ such that $A_i - A_i$ contains a Bohr set? \end{question} In terms of dynamical systems, Question \ref{q:kr} asks if any set of recurrence for minimal \textit{topological systems} is also a set of recurrence for minimal \textit{isometries} (also known as a set of \textit{Bohr recurrence}). 
See \cite{gkr} for a detailed account of the history of \cref{q:kr}, as well as its many equivalent formulations.\footnote{In \cite{gkr}, what we call ``Bohr set'' is referred to as a ``Bohr neighborhood of $0$.'' Their Bohr sets are our Bohr sets translated by any element.} While Question \ref{q:kr} remains open at the moment and only some partial results were obtained \cite{gkr, griesmer-k}, we do have a positive answer when three copies of $A_i$ are involved. \begin{theorem} \label{th:main-kr} Let $\mathbb{Z} = \bigcup_{i=1}^r A_i$ be a partition. \begin{enumerate}[label=(\alph*), leftmargin=*] \item For any $s_1, s_2 \in \mathbb{Z} \setminus \{0\}$, there exists $i \in \{1, 2, \ldots, r\}$ such that the set $s_1 A_i -s_1 A_i + s_2A_i$ contains a Bohr set whose rank and radius depend only on $r$ and $s_1, s_2$. \item There exists $i \in \{1, 2, \ldots, r\}$ such that for any $s \in \mathbb{Z} \setminus \{0\}$, the set $A_i - A_i + s A_i$ contains a Bohr set. \end{enumerate} \end{theorem} \cref{th:main-kr} highlights the difference between partition and density since, as we mentioned earlier, there is a set $A \subseteq \mathbb{Z}$ of positive density such that $A - A + A$ does not contain a Bohr set. The expression $s_1 A_i - s_1 A_i + s_2 A_i$ is related to Rado's condition on partition regularity \cite{rado}. Recall that an equation $s_1x_1 + s_2x_2 + \cdots + s_{\ell} x_{\ell} =0$ with coefficients in $\mathbb{Z} \setminus \{0\}$ is \textit{partition regular} if under any finite partition (or coloring) of $\mathbb{Z} \setminus \{0\}$, there exists a monochromatic solution $(x_1,x_2,\ldots, x_{\ell})$. Rado's theorem says that the equation $s_1 x_1 + s_2 x_2 + \cdots + s_{\ell} x_{\ell} =0$ is partition regular if and only if $\{s_1, \ldots, s_{\ell}\}$ satisfies the following condition: There exists a nonempty set $J \subset \{1, \ldots, \ell\}$ such that $\sum_{i \in J} s_i =0$. 
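To illustrate Rado's condition, we mention a few standard examples. \begin{remark} Schur's equation $x + y - z = 0$ is partition regular: the coefficients $1$ and $-1$ form a zero-sum subset. The equation $x + y - 2z = 0$, whose solutions $(z-d, z+d, z)$ are precisely the triples in which $z$ is the average of $x$ and $y$, is also partition regular; here one may take $J = \{1,2,3\}$. On the other hand, no nonempty subset of $\{1, 1, -3\}$ has zero sum, so $x + y - 3z = 0$ is not partition regular. \end{remark}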
Using the facts that $(s_1 + \ldots + s_{\ell}) A \subseteq s_1 A + \ldots + s_{\ell} A$, and a Bohr set must contain $0$, \cref{th:main-kr}(a) implies that for $\ell \geq 3$ and $s_1, \ldots, s_{\ell} \in \mathbb{Z} \setminus \{0\}$, the following are equivalent: \begin{enumerate} \item For any partition $\mathbb{Z} = \bigcup_{i=1}^r A_i$, there exists $i \in \{1, \ldots, r\}$ such that $s_1 A_i + \ldots + s_{\ell} A_i$ contains a Bohr set. \item $\{s_1, \ldots, s_{\ell}\}$ satisfies Rado's condition above. \end{enumerate} A novelty of \cref{th:main-kr}(b) is that it guarantees a single set $A_i$ that works for every coefficient $s$ (on the other hand, we do lose control on the rank and radius of the Bohr set). When $s$ is very large, the set $sA_i$ is small and so its contribution to the sum diminishes. While there is no consensus on what the answer to \cref{q:kr} should be, \cref{th:main-kr}(b) provides evidence that the answer to \cref{q:kr} is either positive or very delicate. In \cite[Table 1, p. 8]{gkr}, Glasscock-Koutsogiannis-Richter summarized results on Bohr sets in sumsets, pertaining to both density and partition. Our \cref{th:main-kr} fills in the blank on the Syndeticity\footnote{A subset $A$ of a group $G$ is called \textit{syndetic} if $G$ can be covered by finitely many translates of $A$.} column and $rA + sA+ tA$ row of their table. \subsection{Results in compact groups} Bogolyubov's theorem has been generalized to other groups as well (in more general groups, the upper Banach density $d^*$ can be defined in terms of \textit{F\o lner sequences} or \textit{invariant means}). F\o lner \cite{folner1, folner2} extended Bogolyubov's theorem to all abelian groups. Answering a question of Hegyv\'ari-Ruzsa \cite{hr}, Bj\"orlund-Griesmer \cite{bg} proved that in any countable discrete abelian group $G$, for any $A \subset G$ with $d^*(A) >0$, for ``many'' $a \in A$, the set $A+A-A-a$ contains a Bohr set whose rank and radius depend only on $d^*(A)$. 
Very recently, Griesmer \cite{griesmer-br} generalized Theorem \ref{th:br} to all countable discrete abelian groups, though his proof does not give effective bounds for the rank and radius of the Bohr set in question. Bergelson-Ruzsa and Bogolyubov first proved their theorems in the cyclic group $\mathbb{Z}_N$, and the statements in $\mathbb{Z}$ follow from a compactness argument. Likewise, in Bj\"orlund-Griesmer \cite{bg} and Griesmer \cite{griesmer-br}, certain compact groups (namely \textit{Bohr compactifications} and \textit{Kronecker factors}) play a prominent role. In view of this ``compact first'' strategy, the main goal of this paper is, in fact, to study the existence of Bohr sets in sumsets of compact groups. In the course of this investigation, \cref{th:main-kr} arises as an application of our general method. In a subsequent paper, we will study the existence of Bohr sets in arbitrary discrete groups by transferring our results from compact groups. Another feature of our work is the consideration of continuous homomorphisms $\phi:G \rightarrow G$ and the image $\phi(A)$ rather than just dilations $c A$. This point of view leads to a wider range of applications, for example, linear maps on vector spaces and multiplication by an element in a ring (see Theorems \ref{th:nf} and \ref{th:ff} below). This new perspective was also adopted in recent work of Ackelsberg-Bergelson-Best \cite{abb} on Khintchine-type recurrence for actions of an abelian group (Theorem \ref{th:abb} below). Our main result on Bohr sets in sumsets arising from partitions is as follows. \begin{theorem} \label{th:main-partition} Let $G$ be a compact abelian group with normalized Haar measure $\mu$ and let $\phi_1, \phi_2: G \rightarrow G$ be continuous homomorphisms satisfying \begin{enumerate}[label=(\alph*), leftmargin=*] \item $\phi_1, \phi_2$ are commuting, and \item $\phi_1(G), \phi_2(G)$ have finite indices in $G$.
\end{enumerate} Let $G = \bigcup_{i=1}^r A_i$ be a partition of $G$ into measurable sets. Then for some $1 \leq i \leq r$, \[ \phi_1(A_i) - \phi_1(A_i) + \phi_2(A_i) \] contains a Bohr-$(k, \eta)$ set, where $k$ and $\eta$ depend only on $r$, $[G:\phi_1(G)]$ and $[G:\phi_2(G)]$. \end{theorem} \begin{remark} \ \begin{itemize}[leftmargin=*] \item If $\mu(A) >0$, then $A + A - A$ is not guaranteed to contain a Bohr set. For a counterexample, take $G = \mathbb{Z}_{2N}$ for some large $N$ and $A=\{a \in \mathbb{Z}_{2N}: a \textup{ is odd} \}$. In particular, the analogous version of \cref{th:main-partition} for sets of positive measure fails. \item We do not know if the commuting condition can be removed entirely, though it can be slightly relaxed (see \cref{th:main-partition-true}). For example, commutativity is not required when $\phi_1$ or $\phi_2$ is an automorphism (see Remark \ref{rk:auto2}). \item The finite index condition on $\phi_1(G)$ cannot be removed, by taking for example $\phi_1 = 0$ and $\phi_2(x) = x$. On the other hand, we do not know whether the finite index condition on $\phi_2(G)$ can be removed. If we let $\phi_2 = 0$, then the situation amounts to \cref{q:kr} itself. \end{itemize} \end{remark} We now turn our attention to density results. In compact abelian groups, the Haar measure plays the role of the upper Banach density. \begin{theorem} \label{th:main-density} Let $G$ be a compact abelian group with normalized Haar measure $\mu$ and $\phi_1, \phi_2, \phi_3: G \rightarrow G$ be continuous homomorphisms satisfying \begin{enumerate}[label=(\alph*), leftmargin=*] \item $\phi_1 + \phi_2 + \phi_3 = 0$, \item $\phi_1, \phi_2, \phi_3$ are commuting, \item $\phi_1(G), \phi_2(G), \phi_3(G)$ have finite indices in $G$. \end{enumerate} Let $A \subseteq G$ be a measurable subset with $\mu(A) = \delta >0$. 
Then \[ \phi_1(A) + \phi_2(A) + \phi_3(A) \] contains a Bohr-$(k, \eta)$ set, where $k$ and $\eta$ depend only on $\delta$ and the indices $[G:\phi_i(G)]$ $(1 \leq i \leq 3)$. \end{theorem} \begin{remark}\ \begin{itemize}[leftmargin=*] \item The condition $\phi_1 + \phi_2 + \phi_3 = 0$ cannot be removed. For a counterexample, take $G=\mathbb{Z}_N$ for some large $N$ and $A = \{1, \cdots, \lfloor N/10 \rfloor\}$. Then $A+A+A$ does not contain $0$, and hence does not contain a Bohr set. \item We do not know if the condition on commutativity can be removed entirely, though it can be weakened (see Theorem \ref{th:main-density-true}). For example, commutativity is not required when one of $\phi_1, \phi_2$ and $\phi_3$ is an automorphism (see Remark \ref{rk:auto}). \item The finite index condition cannot be removed. Indeed, we can take $G = \mathbb{F}_2^n$ for some large $n$, $\phi_1(x)=\phi_2(x)=x, \phi_3(x)=0$. In this setting, Bohr sets are simply vector subspaces. A construction of Green \cite[Theorem 9.4]{green-ff} gives a set $A$ of size $\geq |G|/4$ such that any subspace contained in $A-A$ must have codimension $\geq \sqrt{n}$. \end{itemize} \end{remark} As an application, \cref{th:main-density} can be used to obtain an effective version of the aforementioned result of Griesmer \cite{griesmer-br}. We plan to pursue this idea in a subsequent paper. \subsection{Number-theoretic consequences} As mentioned earlier, the fact that we accommodate homomorphisms in \cref{th:main-partition} and \cref{th:main-density} enables us to generalize \cref{th:br} and \cref{th:main-kr} to number fields and function fields. In the following, for a subset $A$ of a ring $R$ and $c \in R$, we write \begin{equation} \label{eq:ring1} cA = \{ca: a \in A\} \end{equation} and \begin{equation} \label{eq:ring2} A/c = \{ b\in A: bc \in A\}. \end{equation} The next theorem is true for any number field, but we state it only for $\mathbb{Z}[i]$ for simplicity.
\begin{theorem} \label{th:nf} Let $s_1, s_2, s_3 \in \mathbb{Z}[i] \setminus \{0\}$ such that $s_1 + s_2 + s_3 = 0$. \begin{enumerate}[label=(\alph*), leftmargin=*] \item If a set $A \subseteq \mathbb{Z}[i]$ has positive upper density, i.e. \[ \overline{d}(A) := \limsup_{N \to \infty} \frac{|A \cap \{a + bi : |a|, |b| \leq N\}|}{(2N + 1)^2} = \delta > 0, \] then $s_1 A + s_2 A + s_3 A$ contains a Bohr-$(k, \eta)$ set in $\mathbb{Z}[i]$, where $k$ and $\eta$ depend only on $s_1, s_2, s_3$ and $\delta$. \item If $\mathbb{Z}[i] = \bigcup_{j=1}^r A_j$, then for some $j \in \{1, 2, \ldots, r\}$, $s_1 A_j - s_1 A_j + s_2 A_j$ contains a Bohr-$(k, \eta)$ set in $\mathbb{Z}[i]$, where $k$ and $\eta$ depend only on $s_1, s_2$ and $r$. \item If $\mathbb{Z}[i] = \bigcup_{j=1}^r A_j$, then there exists $j \in \{1, 2, \ldots, r\}$ such that $A_j - A_j + s A_j$ contains a Bohr set for any $s \in \mathbb{Z}[i] \setminus \{0\}$. \end{enumerate} Here, as a group, we identify $\mathbb{Z}[i]$ with $\mathbb{Z}^2$. \end{theorem} Our next result deals with the ring $\mathbb{F}_q[t]$ of polynomials over a finite field $\Fq$. \begin{theorem} \label{th:ff} Let $s_1, s_2, s_3 \in \Fq[t] \setminus \{0\}$ such that $s_1 + s_2 + s_3 = 0$. \begin{enumerate}[label=(\alph*), leftmargin=*] \item If a set $A \subseteq \Fq[t]$ has positive upper density, i.e. \[ \overline{d}(A) := \limsup_{N \to \infty} \frac{|\{x \in A: \deg x < N \}|}{q^N} = \delta > 0, \] then $s_1 A + s_2 A + s_3 A$ contains an $\Fq$-vector subspace of finite codimension in $\mathbb{F}_q[t]$, where the codimension depends only on $s_1, s_2, s_3$ and $\delta$. \item If $\Fq[t] = \bigcup_{i=1}^r A_i$, then for some $i \in \{1, \ldots, r\}$, $s_1 A_i - s_1 A_i + s_2 A_i$ contains an $\Fq$-vector subspace of finite codimension of $\mathbb{F}_q[t]$, where the codimension depends only on $s_1, s_2$ and $r$.
\item If $\mathbb{F}_q[t] = \bigcup_{i=1}^r A_i$, then there exists $i \in \{1, 2, \ldots, r\}$ such that $A_i - A_i + s A_i$ contains an $\Fq$-vector subspace of finite codimension of $\mathbb{F}_q[t]$ for any $s \in \Fq[t] \setminus \{0\}$. \end{enumerate} \end{theorem} We remark that the special case $s_1, s_2, s_3 \in \Fq \setminus\{0\}$ of \cref{th:ff}(a) is essentially Corollary 1.4 in \cite{griesmer-br}. \subsection{Counting linear patterns} As in the proofs of the theorems of Bogolyubov \cite{bogo} and Bergelson-Ruzsa \cite{br}, we deduce Theorem \ref{th:main-density} from a lower bound (of the correct order of magnitude) for the number of certain linear patterns in $G$. This is straightforward in Bogolyubov's case, but less so in Bergelson and Ruzsa's. Bergelson and Ruzsa had to count the number of generalized Roth patterns $\{ x, x+ry, x+sy \}$ (where $r, s \in \mathbb{Z}$), and they deduced such a bound from Szemer\'edi's theorem \cite{szemeredi} and Varnavides' argument \cite{var}. For us, we need to count the number of patterns $\{ x, x+ \phi(y), x+ \psi(y) \}$ (where $\phi$ and $\psi$ are homomorphisms). This is accomplished by generalizing a Fourier-analytic argument of Bourgain \cite{bourgain-roth}. Bourgain's argument, in essence an arithmetic regularity lemma, allows us to obtain the following Khintchine-type result. \begin{theorem}[Khintchine-Roth theorem in compact abelian groups] \label{th:roth-khintchine} Let $G$ be a compact abelian group with probability Haar measure $\mu$ and $\phi, \psi: G \to G$ be continuous homomorphisms such that $[G: \phi(G)], [G: \psi(G)]$ and $[G: (\phi-\psi)(G)]$ are finite. Let $f: G \to [0, 1]$ be a measurable function with $\int_G f \, d \mu = \delta >0$.
Then for any $\epsilon > 0$, there exists a constant $c_1 >0$ that depends only on $\delta, \epsilon$ and the indices above such that the set \[ B = \left\{y \in G: \int_G f(x) f(x+ \phi(y)) f(x + \psi(y)) \, d \mu(x) > \delta^3 - \epsilon \right\} \] has measure at least $c_1$. Consequently, \begin{equation} \label{eq:counting-roth} \iint_{G^2} f(x) f(x+ \phi(y)) f(x + \psi(y)) \, d \mu(x) d\mu(y) \geq c_2 \end{equation} for some positive constant $c_2$ depending only on $\delta$ and the indices above. \end{theorem} Theorem \ref{th:roth-khintchine} was proved independently by Berger-Sah-Sawhney-Tidor \cite{bsst}, under the hypothesis that $\phi, \psi$ and $\phi-\psi$ are automorphisms, using a very similar argument. Our execution is slightly different from theirs, in that we follow Bergelson-Host-McCutcheon-Parreau \cite{bhmp}'s elaboration of Bourgain's argument, while they follow Tao \cite{tao}'s. Theorem \ref{th:roth-khintchine} is markedly similar to the following result of Ackelsberg, Bergelson and Best: \begin{theorem}[{\cite[Theorem 1.10]{abb}}] \label{th:abb} Let $G$ be a countable discrete abelian group, and $\phi, \psi: G \to G$ be homomorphisms such that $[G: \phi(G)], [G: \psi(G)]$ and $[G: (\phi-\psi)(G)]$ are finite. For any ergodic system $(X,\mathcal{B}, \mu, (T_g)_{g \in G})$, any $\epsilon > 0$, and any $A \in \mathcal{B}$, the set \[ B = \left\{ g \in G : \mu ( A \cap T^{-1}_{\phi(g)} A \cap T^{-1}_{\psi(g)} A ) > \mu(A)^3 - \epsilon \right\} \] is syndetic in $G$. \end{theorem} As discussed in \cite[Section 10]{abb}, the finite index condition in Theorem \ref{th:abb} is necessary. The following result of Fox-Sah-Sawhney-Stoner-Zhao \cite{fsssz}, improving on an earlier result of Mandache \cite{mandache}, shows that the finite index condition is also necessary in Theorem \ref{th:roth-khintchine}. \begin{example} \label{ex:roth-khintchine} Let $\ell <4$ be arbitrary and $\delta >0$ be sufficiently small in terms of $\ell$.
Let $G = \mathbb{F}_2^n \times \mathbb{F}_2^n$ where $n$ is sufficiently large, $\phi(u,v)=(u,0), \psi(u,v)=(0,u)$. Then the left hand side of \eqref{eq:counting-roth} counts the number of ``corners'' $\{ (a, b) , (a+u, b), (a, b+u) \}$ in $\mathbb{F}_2^n \times \mathbb{F}_2^n$. \cite[Corollary 1.3]{fsssz} states that there exists a set $A \subset G$ of size $\geq \delta |G|$ such that for any $u \in \mathbb{F}_2^n \setminus \{0\}$, we have \[ \# \{ (a, b) \in G: (a, b) , (a+u, b), (a, b+u) \in A \} < \delta^\ell |G|. \] Hence, the set $B$ in Theorem \ref{th:roth-khintchine} has to be contained in $\{ 0 \} \times \mathbb{F}_2^n$. But the measure of this set in $G$ goes to $0$ as $n$ goes to infinity. \end{example} We deduce Theorem \ref{th:main-partition} from the following result, which counts the number of monochromatic configurations under finite partitions of $G$. \begin{theorem} \label{th:counting-partition} Let $G$ be a compact abelian group with probability Haar measure $\mu$ and let $\psi, \phi_1, \ldots, \phi_k: G \to G$ be continuous homomorphisms satisfying: \begin{enumerate}[label=(\alph*), leftmargin=*] \item $\psi, \phi_1, \ldots, \phi_k$ are commuting, and \item $\psi(G), \phi_1(G), \ldots, \phi_k(G)$ have finite indices in $G$. \end{enumerate} Suppose $G = \bigcup_{i=1}^r A_i$ is a partition of $G$ into measurable sets. Then \begin{equation} \label{eq:counting-brauer} \sum_{i=1}^r \iint_{G^2} 1_{A_i}(\psi(y)) 1_{A_i}(x) 1_{A_i}(x + \phi_1(y)) \cdots 1_{A_i}(x + \phi_k(y)) \, d \mu(x) d\mu(y) \geq c_3 \end{equation} for some positive constant $c_3$ depending only on $r, k$ and the indices above. \end{theorem} \begin{remark}\ \begin{itemize}[leftmargin=*] \item By taking $\psi =0$, we see that the condition that $[G:\psi(G)]$ be finite cannot be removed. However, we do not know whether the condition $[G: \phi_i(G)] < \infty$ is necessary or not. \item Our proof relies heavily on the commuting condition and we do not know if it can be removed.
\end{itemize} \end{remark} When $\psi(y) = y$ and $\phi_i(y) = iy$ for $1 \leq i \leq k$, the configuration $\{ \psi(y), x, x + \phi_1(y), \ldots, x + \phi_k(y) \}$ becomes the \textit{Brauer configuration} $\{y, x, x+y, \ldots, x + ky\}$. Results on counting such monochromatic configurations have been established by Serra-Vena \cite[Theorem 1.3]{sv} for finite abelian groups of bounded torsion. Thus, besides the fact that it allows for more general homomorphisms, Theorem \ref{th:counting-partition} has the advantage of being uniform over all groups. On the other hand, our finite index condition is certainly related, and in a sense dual, to Serra-Vena's bounded exponent condition \cite{sv}. We remark that despite the apparent similarity between \eqref{eq:counting-roth} and \eqref{eq:counting-brauer}, their proofs are very different. The proof of \cref{th:counting-partition} is ``Fourier-free'' and its main ingredient is the Hales-Jewett theorem. Thus, our approach in proving this theorem is also genuinely different from Serra-Vena's, which relies on a removal lemma for groups. On the quantitative side, our bounds leave much to be desired. Since the proof of \cref{th:roth-khintchine} relies on the regularity lemma (\cref{prop:regularity_lemma}), in \cref{th:main-density}, the dependence of $k$ and $\eta$ on $\delta$ and $[G:\phi_i(G)]$ is of tower type. Likewise, since the proof of \cref{th:counting-partition} uses the Hales-Jewett theorem, the bounds for $k$ and $\eta$ in \cref{th:main-partition} are even worse. It is an interesting problem to obtain good bounds for Theorems \ref{th:main-density} and \ref{th:main-partition}, even in special classes of groups such as $\mathbb{F}_p^n$. Indeed, Sanders \cite[Theorem A.1]{sanders} obtained a near-optimal bound for Bogolyubov's theorem in $\mathbb{F}_p^n$. \textbf{Outline of the paper.} In \cref{sec:prelim}, we set up notation and collect some basic facts about Bohr sets, kernels and homomorphisms in compact abelian groups.
\cref{sec:partition} is devoted to proving results involving partitions, especially, Theorems \ref{th:main-partition} and \ref{th:counting-partition}. Theorems \ref{th:main-density}, \ref{th:roth-khintchine} and related density results will be proved in \cref{sec:density}. \cref{sec:z_and_field} contains proofs of results in $\mathbb{Z}$, number fields and function fields, i.e. Theorems \ref{th:main-kr}, \ref{th:nf} and \ref{th:ff}. Lastly, we present some related open questions in \cref{sec:open_question}. \textbf{Acknowledgement.} We thank Vitaly Bergelson and John Griesmer for many helpful conversations on sets of recurrence, Bohr sets and related topics. The second author was partially supported by National Science Foundation Grant DMS-1702296. \section{Preliminaries} \label{sec:prelim} In this section, we gather some background on Bohr sets, kernels and homomorphisms in compact abelian groups. Most of the results are well-known or resemble known theorems. We include proofs for the results that we cannot pinpoint precisely in the literature. \subsection{Notation} We write $[N]$ for the set $\{ 1, \ldots, N\}$. If $A$ and $B$ are two quantities, we write $A = O(B)$ or $A \ll B$ if there is a constant $C$ such that $|A| \leq CB$. We write $e(x)$ for $e^{2 \pi i x}$. Throughout this paper, $G$ is a Hausdorff compact abelian group with probability Haar measure $\mu$ and $\Gamma$ is the dual of $G$, written additively. The relevance of homomorphisms is that if $\gamma \in \Gamma$ and $\phi:G \rightarrow G$ is a continuous homomorphism, then $\gamma \circ \phi$ is also an element of $\Gamma$. If $f: G \rightarrow \mathbb{C}$ is a function, for $t \in G$ we define the function $f_t(x) = f(x+t)$. For $f \in L^1(G)$, the Fourier transform of $f$ is the function \[ \widehat{f} (\gamma) = \int_{G} f(x) \overline{\gamma (x)} \, d\mu(x) \qquad \textup{ for } \gamma \in \Gamma. 
\] For $f, g \in L^2(G)$, we then have Parseval's formula \[ \int_{G} f(x) \overline{g(x)} \, d\mu(x) = \sum_{\gamma \in \Gamma} \widehat{f} (\gamma) \overline{ \widehat{g} (\gamma) } \] and Plancherel's formula \[ \int_{G} |f(x)|^2 \, d\mu(x) = \sum_{\gamma \in \Gamma} \left| \widehat{f} (\gamma) \right|^2. \] \subsection{Bohr sets} For $\Lambda, \Lambda_1, \Lambda_2 \subseteq \Gamma$ and $\eta_1, \eta_2 > 0$, it follows from the definition of Bohr sets that \[ B(\Lambda_1; \eta_1) \cap B(\Lambda_2; \eta_2) \supset B(\Lambda_1 \cup \Lambda_2; \min(\eta_1, \eta_2)) \] and \[ B(\Lambda; \eta_1) + B(\Lambda; \eta_2) \subset B(\Lambda; \eta_1 + \eta_2). \] \begin{lemma} \label{lem:translate} Suppose $f_1, \ldots, f_k \in L^\infty(G)$, $\| f_i \|_{\infty} \leq 1$ for all $i=1, \ldots, k$. Let $\phi_1, \ldots, \phi_k$ be continuous homomorphisms $G \rightarrow G$. Then for any $\eta >0$, the set \[ B = \{ t \in G: \| \widehat{f_i} - \widehat{f_{i, \phi_i(t)}} \|_{\infty} < \eta \textup{ for }i=1, \ldots, k \} \] contains a Bohr set $B(\Lambda; \eta)$ where $|\Lambda| \leq \frac{4k}{\eta^2}$. \end{lemma} \begin{proof} Note that if $\| \widehat{f_i} - \widehat{f_{i, \phi_i(t)}} \|_\infty \geq \eta$, then for some $\gamma \in \Gamma$, \[ |\widehat{f_i}(\gamma) - \widehat{f_{i, \phi_i(t)}}(\gamma)| = |1 - \gamma(\phi_i(t))| |\widehat{f_i}(\gamma)| \geq \eta. \] This implies that $|1 - \gamma(\phi_i(t))| \geq \eta$ and $\gamma \in \Lambda_i := \{ \lambda \in \Gamma : |\widehat{f_i}(\lambda)| \geq \eta /2 \}$. We have thus shown that \[ B( \bigcup_{i=1}^k \Lambda_i \circ \phi_i; \eta) \subset B, \] where $\Lambda_i \circ \phi_i: = \{ \gamma \circ \phi_i: \gamma \in \Lambda_i\} \subset \Gamma$. By Plancherel's formula, \[ \left( \frac{\eta}{2} \right)^2 |\Lambda_i| \leq \sum_{\lambda \in \Lambda_i} \left|\widehat{f_i}(\lambda)\right|^2 \leq 1. \] Therefore, $|\Lambda_i| \leq \frac{4}{\eta^2}$ and $|\bigcup_{i=1}^k \Lambda_i \circ \phi_i| \leq \frac{4k}{\eta^2}$.
\end{proof} The next lemma is needed in \cref{sec:z_and_field}. \begin{lemma} \label{lem:subgroup-bohr} Let $H$ be a locally compact abelian group, $K$ be a closed subgroup of finite index $m$. Then $K$ is a Bohr-$(m, |e(1/m)-1|)$ set in $H$. \end{lemma} \begin{proof} Let $\chi_1, \ldots, \chi_m$ be all characters on $H/K$. For any $x \in H/K$ and $1 \leq i \leq m$, we have $\chi_i(x)^m = 1$, so either $\chi_i(x) = 1$ or $|\chi_i(x) -1| \geq |e(1/m)-1|$. If $\chi_i(x) = 1$ for all $i$ then $x=0$. Hence \[ \{ 0 \} = B(\chi_1, \ldots, \chi_m; |e(1/m)-1|). \] The characters $\chi_i$ lift to characters $\tilde{\chi_i}$ on $H$ by $\tilde{\chi_i} (h) = \chi_i(h+K)$. Therefore, \[ K = B(\tilde{\chi_1}, \ldots, \tilde{\chi_m}; |e(1/m)-1|), \] as desired. \end{proof} We will also need Bogolyubov's theorem for compact abelian groups. \begin{lemma}[Bogolyubov for compact abelian groups, see {\cite[Lemma 2.1]{ruzsa6}}] \label{lem:bogolyubov_compact_group} Let $G$ be a compact abelian group with Haar measure $\mu$ and let $A \subseteq G$ be a set of positive measure. Then $A - A + A - A$ contains a Bohr-$(k, \eta)$ set where $k$ and $\eta$ depend only on $\mu(A)$. \end{lemma} \subsection{Kernels} A kernel on $G$ is a non-negative continuous function that satisfies $\int_G K \, d\mu =1$. Specifically, we will utilize kernels that are supported on given Bohr sets and whose Fourier transforms are also non-negative. For a kernel $K$, we write $\lVert \widehat{K} \rVert_1$ to denote $\sum_{\gamma \in \Gamma} | \widehat{K}(\gamma) |$. \begin{lemma}[cf.
{\cite[Lemma 4.3]{bhmp}}] \label{lem:kernel_a} Given a finite set $\Lambda \subset \Gamma$ and $\eta \in (0, 1/2]$, there exists a kernel $K$ satisfying the following: \begin{enumerate} \item $K \geq 0, \widehat{K} \geq 0$ and $\int_G K \, d \mu = \lVert K \rVert_1 = 1$, \item $\lVert \widehat{K} \rVert_1 = \lVert K \rVert_{\infty} \leq 1/(C_0\eta)^{|\Lambda|}$ for some absolute constant $C_0 >0$, and \item $K$ vanishes outside the Bohr set $B( \Lambda; \eta)$. \end{enumerate} Consequently, \begin{equation} \label{eq:mub} \mu(B( \Lambda; \eta)) \geq (C_0\eta)^{|\Lambda|}. \end{equation} \end{lemma} We remark that the bound \eqref{eq:mub} can also be obtained from an elementary covering argument (see \cite{tao}). \begin{proof} First, for each $\lambda \in \Lambda$, there exists a kernel $K_{\lambda}: G \to [0, \infty)$ satisfying the following properties: \begin{enumerate} \item $\lVert K_{\lambda} \rVert_1 = 1$, \item $\widehat{K}_{\lambda} \geq 0$, \item $K_{\lambda}$ is supported on $B(\{\lambda\}; \eta) = \{x \in G: |\lambda(x) - 1| < \eta\}$, \item $\|K_\lambda\|_{\infty} = K_{\lambda}(0) \leq 1/(C_0 \eta)$ \text{ for some absolute constant $C_0$}. \end{enumerate} Indeed, let $B = B(\{\lambda\}; \frac{\eta}{2})$ and let $K_{\lambda} = \frac{1_B}{\mu(B)}*\frac{1_B}{\mu(B)}$. Clearly the first and second properties are satisfied. Additionally, $K_{\lambda}$ is supported on $B(\{\lambda\}; \frac{\eta}{2}) + B(\{\lambda\}; \frac{\eta}{2}) \subset B(\{\lambda\}; \eta)$. Concerning the last property, we have for every $x \in G$, \[ K_{\lambda}(x) = \sum_{\gamma \in \Gamma} \widehat{K}_{\lambda}(\gamma) \gamma(x) \] and so \[ \left| K_{\lambda}(x) \right| \leq \sum_{\gamma \in \Gamma} \widehat{K}_{\lambda}(\gamma) = K_{\lambda}(0). \] Therefore, $\lVert K_{\lambda} \rVert_{\infty} = K_{\lambda}(0) = \frac{1}{\mu(B)}$. 
Since $\lambda$ is continuous, its image $\lambda(G)$ is a closed subgroup of $S^1 = \{ z \in \mathbb{C}: |z|=1\}$, and so it is either $S^1$ or $\{ z \in \mathbb{C} : z^q=1\}$ for some $q \in \mathbb{N}$. Since $\lambda$ is a homomorphism, it is measure-preserving (see Lemma \ref{lem:pushforward} below). Hence $\mu(B)$ is equal to the normalized Haar measure of the set \[ \left\{ z \in S^1 : |z-1| < \frac{\eta}{2} \right\} \] in the group $\lambda(G)$. In either case, where $\lambda(G)=S^1$ or $\{ z \in \mathbb{C} : z^q=1\}$, we find that $\mu(B) \geq C_0 \eta$ for some absolute constant $C_0$. Therefore, $\lVert K_{\lambda} \rVert_{\infty} \leq 1/(C_0 \eta)$. We now define \[ \widetilde{K} = \prod_{\lambda \in \Lambda} K_{\lambda}. \] It follows that $\widetilde{K} \geq 0$ and $\widetilde{K}$ is supported on $B(\Lambda; \eta)$. Repeatedly using the fact that $\widehat{fg}(\gamma) = \sum_{\lambda \in \Gamma} \widehat{f}(\lambda) \widehat{g}(\gamma - \lambda)$ for all $f, g \in L^{\infty}(G)$, we have $\widehat{\widetilde{K}} \geq 0$. Likewise, since $\widehat{K}_{\lambda}(0) = \lVert K_{\lambda} \rVert_1 = 1$, we have $\lVert \widetilde{K} \rVert_1 = \widehat{\widetilde{K}}(0) \geq 1$. Upon defining \[ K = \widetilde{K}/\lVert \widetilde{K} \rVert_{1} \] we obtain the desired kernel. \end{proof} \subsection{Homomorphisms} We will often make use of the following facts about homomorphisms $G \rightarrow G$. \begin{lemma}\label{lem:homo-solutions} Let $\phi: G \rightarrow G$ be a continuous homomorphism such that $[G:\phi(G)] = m$ is finite. Then for any $\gamma \in \Gamma$, there are at most $m$ elements $\chi \in \Gamma$ such that $\gamma = \chi \circ \phi$. \end{lemma} \begin{proof} It is easy to see that for each $\gamma \in \Gamma$, the set $S_\gamma := \{ \chi \in \Gamma : \gamma = \chi \circ \phi\}$ is either empty, or a coset of the group $S_0$.
On the other hand, $S_0$ is the annihilator of the group $\phi(G)$, so by \cite[Theorem 2.1.2]{rudin}, it is isomorphic to the dual of $G/\phi(G)$, and hence has cardinality $m$. \end{proof} \begin{lemma} \label{lem:composition} Let $\phi, \psi: G \rightarrow G$ be homomorphisms such that $[G:\phi(G)] = m$ and $[G:\psi(G)] = \ell$ are finite. Then $[G: \phi ( \psi (G))] \leq m\ell$ is finite. \end{lemma} \begin{proof} We have $[G:\phi(\psi(G))] = [G:\phi(G)] [\phi(G): \phi(\psi(G))]$. It suffices to show that $[\phi(G): \phi(\psi(G))] \leq \ell$. Let $x_1 + \psi(G), \ldots, x_\ell + \psi(G)$ be all cosets of $\psi(G)$ in $G$. Then $\phi(x_1) + \phi(\psi(G)), \ldots, \phi(x_\ell) + \phi(\psi(G))$ are all cosets of $\phi(\psi(G))$ in $\phi(G)$ (these are not necessarily distinct, so the actual number of cosets may be less than $\ell$), proving the desired claim. \end{proof} \begin{lemma} \label{lem:pushforward} Let $G, H$ be compact abelian groups and $\mu, \nu$ be the normalized Haar measures of $G$ and $H$, respectively. Suppose $\phi: G \to H$ is a continuous surjective homomorphism. Then $\phi_* \mu = \nu$ (i.e. $\nu(B) = \mu( \phi^{-1}(B))$ for any Borel set $B \subset H$). \end{lemma} \begin{proof} Let $\nu_0 = \phi_* \mu$. By the uniqueness of the normalized Haar measure, it suffices to show that $\nu_0$ is a translation-invariant probability measure on $H$. First, $\nu_0$ is a probability measure because $\nu_0(H) = \mu(\phi^{-1}(H)) = \mu(G) = 1$. Now let $B \subset H$ be a Borel set and $h_0 \in H$ be arbitrary. Since $\phi$ is surjective, there exists $g_0 \in G$ such that $\phi(g_0) = h_0$. For any $g \in \phi^{-1}(B + h_0)$, we have \[ \phi(g - g_0) = \phi(g) - \phi(g_0) \in B + h_0 - h_0 = B. \] Therefore, $\phi^{-1}(B + h_0) \subseteq \phi^{-1}(B) + g_0.$ On the other hand, \[ \phi(\phi^{-1}(B) + g_0) \subseteq B + h_0 \] and so $\phi^{-1}(B + h_0) = \phi^{-1}(B) + g_0$.
Since $\mu$ is translation-invariant on $G$, it follows that \[ \nu_0(B + h_0) = \mu(\phi^{-1}(B + h_0)) = \mu(\phi^{-1}(B) + g_0) = \mu(\phi^{-1}(B)) = \nu_0(B). \] Thus $\nu_0$ is translation-invariant on $H$ and so $\nu_0 = \nu$. \end{proof} \begin{lemma} \label{lem:finite-index} Let $\phi:G \rightarrow G$ be a continuous homomorphism such that $[G:\phi(G)]=m$ is finite. Then for any measurable set $A \subset G$, we have \begin{equation} \mu( A ) \leq m \mu( \phi(A) ) \end{equation} and \begin{equation} \mu( \phi^{-1}(A) ) \leq m \mu( A ). \end{equation} Consequently, if $f \in L^1(G)$ is nonnegative, then \[ \int_G f(x) \, d\mu(x) \geq \frac{1}{m} \int_G f( \phi(x) ) \, d\mu(x). \] \end{lemma} \begin{proof} First, since $\phi$ is continuous and $G$ is compact, $\phi(G)$ is a compact subgroup of $G$. Since $G$ is Hausdorff, $\phi(G)$ is closed. In other words, $\phi(G)$ is a closed subgroup of $G$. For each Borel set $B \subset \phi(G)$, $\lambda(B) = m \mu(B)$ defines a probability measure on $\phi(G)$. Since this measure is translation invariant, it is equal to the normalized Haar measure on $\phi(G)$. By \cref{lem:pushforward}, $\lambda = \phi_* \mu$. This means that for any Borel set $B \subset \phi(G)$, we have $\mu(\phi^{-1}(B)) = m \mu(B)$. Let $A$ be any Borel set in $G$. Since $A \subset \phi^{-1}(\phi(A))$, we have $\mu(A) \leq \mu(\phi^{-1}(\phi(A))) = m \mu(\phi(A))$, and the first assertion is proved. Applying the first assertion to the set $\phi^{-1}(A)$, we get the second assertion. The third assertion follows from the second one, and the fact that $f$ can be approximated by functions of the form $\sum_{i=1}^n c_i 1_{A_i}$ for Borel sets $A_i$ and $c_i \geq 0$. \end{proof} The next lemmas deal with images and preimages of Bohr sets under homomorphisms. \begin{lemma} \label{lem:bohr-homo1} Let $B \subset G$ be a Bohr-$(k, \eta)$ set and $\phi: G \rightarrow G$ be a continuous homomorphism. Then $\phi^{-1}(B)$ is also a Bohr-$(k, \eta)$ set.
\end{lemma} \begin{proof} If $B = \{ x \in G: |\gamma_i(x) -1| < \eta \textup{ for }i=1, \ldots, k\}$ is a Bohr-$(k, \eta)$ set, then $\phi^{-1}(B) = \{ x \in G: |\gamma_i \circ \phi (x) -1| < \eta \textup{ for }i=1, \ldots, k\}$ is also a Bohr-$(k, \eta)$ set. \end{proof} The next lemma is more surprising. \begin{lemma}[cf. Griesmer {\cite[Lemma 1.7] {griesmer-br}}] \label{lem:bohr-homo2} Let $B \subset G$ be a Bohr-$(k, \eta)$ set and $\phi: G \rightarrow G$ be a continuous homomorphism such that $[G:\phi(G)] = m < \infty$. Then $\phi(B)$ contains a Bohr-$(k', \eta')$ set, where $k', \eta'$ depend on $k, \eta$ and $m$. \end{lemma} \begin{proof} Suppose $B = \{x \in G: |\gamma_i(x) - 1| < \eta \text{ for } 1 \leq i \leq k\}$ where $\gamma_i \in \Gamma$. Then \[ A = \{x \in G: |\gamma_i(x) - 1| < \eta/4 \text{ for } 1 \leq i \leq k\} \] satisfies $A - A + A - A \subseteq B$. The bound \eqref{eq:mub} implies that $\mu(A) \geq (C_0\eta/4)^k$ for some absolute constant $C_0>0$. In view of \cref{lem:finite-index}, $\mu(\phi(A)) \geq \mu(A)/m \geq \frac{(C_0 \eta)^k}{4^k m}$. Therefore, by \cref{lem:bogolyubov_compact_group}, the set $\phi(B) \supseteq \phi(A) - \phi(A) + \phi(A) - \phi(A)$ contains a Bohr-$(k', \eta')$ set where $k', \eta'$ depend only on $\mu(\phi(A))$, which is bounded below by $\frac{(C_0 \eta)^k}{4^k m}$. \end{proof} \subsection{Counting lemmas} \begin{lemma}[cf. {\cite[Lemma 2]{bourgain-roth}}] \label{lem:bourgain_lemma_2} Let $\phi, \psi: G \rightarrow G$ be continuous homomorphisms such that $\phi(G), \psi(G)$ have finite indices in $G$. Then for $f_1, f_2, f_3 \in L^\infty(G)$ and $K \in L^1(G)$ such that $\widehat{K} \in L^1(\Gamma)$, we have \begin{equation} \label{eq:k} \left| \iint_{G^2} f_1(x)f_2(x+ \phi(y))f_3(x+ \psi(y)) K(y) \, d\mu(x) d\mu(y) \right| \ll \| \widehat{f_1} \|_{\infty} \|f_2 \|_2 \| f_3\|_2 \| \widehat{K} \|_1 \end{equation} where the implied constant depends only on the indices of $\phi(G)$ and $\psi(G)$ in $G$.
\end{lemma} \begin{proof} Since linear combinations of characters are dense in $L^2(G)$, without loss of generality, we can assume $f_1, f_2, f_3$ and $K$ are equal to their Fourier series. For $x \in G$, write $g(x) = \int_{G} f_2(x+ \phi(y) )f_3(x+ \psi(y) ) K(y) \, d\mu(y)$. By Plancherel's formula, \begin{equation*} \left| \int_G f_1(x) g(x) \, d\mu(x) \right|= \left| \sum_{\gamma \in \Gamma} \widehat{f_1}(\gamma) \widehat{g} (\overline{\gamma}) \right| \leq \| \widehat{f_1} \|_{\infty} \cdot \| \widehat{g} \|_1. \end{equation*} On the other hand, \begin{eqnarray*} g(x) &=& \int_{G} f_2(x+ \phi(y)) f_3(x+ \psi(y)) K(y) \, d\mu(y) \nonumber \\ &=& \int_G \left( \sum_{\gamma_2, \gamma_3, \gamma_0 \in \Gamma} \widehat{f_2}(\gamma_2) \gamma_2(x+ \phi(y)) \widehat{f_3}(\gamma_3) \gamma_3(x+ \psi(y)) \widehat{K}(\gamma_0) \gamma_0(y) \right) \, d\mu(y) \nonumber \\ &=& \int_G \left( \sum_{\gamma_2, \gamma_3, \gamma_0 \in \Gamma} \widehat{f_2}(\gamma_2) \widehat{f_3}(\gamma_3) \widehat{K}(\gamma_0) (\gamma_2 + \gamma_3)(x) (\gamma_2 \circ \phi + \gamma_3 \circ \psi + \gamma_0 ) (y)\right) \, d\mu(y) \nonumber \\ &=& \sum_{\substack{ \gamma_2, \gamma_3, \gamma_0 \in \Gamma,\\ \gamma_2 \circ \phi + \gamma_3 \circ \psi + \gamma_0=0}} \widehat{f_2}(\gamma_2) \widehat{f_3}(\gamma_3) \widehat{K}(\gamma_0) (\gamma_2 + \gamma_3)(x). \end{eqnarray*} Consequently, \begin{equation*} \widehat{g}(\gamma) = \sum_{\substack{ \gamma_2, \gamma_3, \gamma_0 \in \Gamma,\\ \gamma_2 \circ \phi + \gamma_3 \circ \psi + \gamma_0=0, \\ \gamma_2 + \gamma_3 = \gamma}} \widehat{f_2}(\gamma_2) \widehat{f_3}(\gamma_3) \widehat{K}(\gamma_0) \end{equation*} and \begin{equation*} \lVert \widehat{g} \rVert_1 \leq \sum_{\substack{ \gamma_2, \gamma_3, \gamma_0 \in \Gamma,\\ \gamma_2 \circ \phi + \gamma_3 \circ \psi + \gamma_0=0}} |\widehat{f_2}(\gamma_2) | \cdot | \widehat{f_3}(\gamma_3) | \cdot | \widehat{K}(\gamma_0) |.
\end{equation*} Therefore, it suffices to show that for each $\gamma_0 \in \Gamma$, we have \[ \sum_{\substack{ \gamma_2, \gamma_3 \in \Gamma,\\ \gamma_2 \circ \phi + \gamma_3 \circ \psi + \gamma_0=0}} |\widehat{f_2}(\gamma_2) | \cdot | \widehat{f_3}(\gamma_3) | \ll \|f_2 \|_2 \cdot \| f_3 \|_2. \] By the Cauchy-Schwarz inequality and Plancherel's formula, the left-hand side is at most \begin{eqnarray} & & \| f_2 \|_2 \cdot \left( \sum_{\gamma_2} \left( \sum_{ \gamma_3 \circ \psi = -\gamma_0 - \gamma_2 \circ \phi} | \widehat{f_3}(\gamma_3) | \right)^2 \right)^{1/2} \nonumber \\ & \ll & \| f_2 \|_2 \cdot \left( \sum_{\gamma_2} \sum_{ \gamma_3 \circ \psi =- \gamma_0 - \gamma_2 \circ \phi} | \widehat{f_3}(\gamma_3) |^2 \right)^{1/2} \label{eq:index1} \\ & \ll & \| f_2 \|_2 \cdot \left( \sum_{\gamma_3} | \widehat{f_3}(\gamma_3) |^2 \right)^{1/2} \label{eq:index2} \\ &=& \| f_2 \|_2 \cdot \| f_3 \|_2. \nonumber \end{eqnarray} In \eqref{eq:index1}, we use the fact that for each $\xi \in \Gamma$, there are at most $[G: \psi(G)]$ values of $\gamma_3$ such that $\gamma_3 \circ \psi = \xi$. Likewise, in \eqref{eq:index2}, we use the fact that for each $\xi \in \Gamma$, there are at most $[G: \phi(G)]$ values of $\gamma_2$ such that $\gamma_2 \circ \phi = \xi$. Both of these facts follow from \cref{lem:homo-solutions}. \end{proof} \begin{remark} Lemma \ref{lem:bourgain_lemma_2} is not true without the assumption on finite indices. As a counterexample, we let $\phi(x) = x, \psi(x) = 2x$ and $G=\mathbb{F}_2 ^k$ for some large $k$. Let $n=|G| = 2^k$. For each $i=1, \ldots, n$, define \begin{itemize} \item $\widehat{f_1}(\gamma_i) = \widehat{f_3}(\gamma_i) =1$, so $f_1(x) = f_3(x) = n \cdot 1_{x=0}$. \item $\widehat{f_2}(\gamma_i) = a_i$, where $a_i \geq 0$. \item $\widehat{K}(\gamma_i) = b_i$, where $b_i \geq 0$. \end{itemize} Then \eqref{eq:k} says that \[ n(a_1 b_1 + \cdots + a_n b_n)^2 \ll (a_1^2 + \cdots + a_n^2) (b_1 + \cdots + b_n)^2.
\] This is false by taking $a_1=b_1=1$ and $a_i = b_i =0$ for $i \neq 1$. \end{remark} While the previous lemma involves the configuration $x, x + \phi(y), x + \psi(y)$, the next one is concerned with $x, x + \phi(y)$ and $\psi(y)$. Its proof is almost identical and so we only highlight the differences. \begin{lemma} \label{lem:counting2} Let $\phi, \psi: G \rightarrow G$ be continuous homomorphisms such that $\phi(G), \psi(G)$ have finite indices in $G$. Then for $f_1, f_2, f_3 \in L^\infty(G)$, we have \begin{equation} \left| \iint_{G^2} f_1(x) f_2(x+ \phi(y))f_3(\psi(y)) \, d\mu(x) d\mu(y) \right| \ll \| \widehat{f_1} \|_{\infty} \|f_2 \|_2 \| f_3\|_2 \end{equation} where the implicit constant depends only on the indices of $\phi(G)$ and $\psi(G)$ in $G$. \end{lemma} \begin{proof} Similar to the proof of \cref{lem:bourgain_lemma_2}, without loss of generality, we can assume $f_1, f_2, f_3$ are equal to their Fourier series. For $x \in G$, write $g(x) = \int_{G} f_2(x+ \phi(y) )f_3(\psi(y) ) \, d\mu(y)$ and then by Plancherel's formula, \begin{equation*} \left| \int_G f_1(x) g(x) \, d \mu(x) \right|= \left| \sum_{\gamma \in \Gamma} \widehat{f_1}(\gamma) \widehat{g} (\overline{\gamma}) \right| \leq \| \widehat{f_1} \|_{\infty} \cdot \| \widehat{g} \|_{1}. \end{equation*} Moreover, we also have \begin{eqnarray*} g(x) &=& \int_G \left( \sum_{\gamma_2, \gamma_3 \in \Gamma} \widehat{f_2}(\gamma_2) \gamma_2(x+ \phi(y)) \widehat{f_3}(\gamma_3) \gamma_3(\psi(y)) \right) \, d\mu(y) \nonumber \\ &=& \sum_{\substack{ \gamma_2, \gamma_3 \in \Gamma,\\ \gamma_2 \circ \phi + \gamma_3 \circ \psi = 0}} \widehat{f_2}(\gamma_2) \widehat{f_3}(\gamma_3) \gamma_2(x).
\end{eqnarray*} As a consequence, \begin{equation*} \widehat{g}(\gamma) = \sum_{\substack{ \gamma_2, \gamma_3 \in \Gamma,\\ \gamma_2 \circ \phi + \gamma_3 \circ \psi = 0, \\ \gamma_2 = \gamma}} \widehat{f_2}(\gamma_2) \widehat{f_3}(\gamma_3) = \widehat{f_2}(\gamma) \sum_{\substack{\gamma_3 \in \Gamma, \\ \gamma \circ \phi + \gamma_3 \circ \psi = 0}} \widehat{f_3}(\gamma_3) \end{equation*} and so \begin{equation*} \| \widehat{g} \|_{1} \leq \sum_{\substack{\gamma_2, \gamma_3 \in \Gamma, \\ \gamma_2 \circ \phi + \gamma_3 \circ \psi = 0}} \left| \widehat{f_2}(\gamma_2) \right| \left| \widehat{f_3}(\gamma_3) \right|. \end{equation*} On the other hand, we have \begin{eqnarray} \sum_{\substack{\gamma_2, \gamma_3 \in \Gamma, \\ \gamma_2 \circ \phi + \gamma_3 \circ \psi = 0}} \left| \widehat{f_2}(\gamma_2) \right| \left| \widehat{f_3}(\gamma_3) \right| &=& \sum_{\gamma_2 \in \Gamma} \left( \left| \widehat{f_2}(\gamma_2) \right| \sum_{\gamma_3 \in \Gamma, \atop{\gamma_3 \circ \psi = - \gamma_2 \circ \phi}} \left| \widehat{f_3}(\gamma_3) \right|\right) \nonumber\\ &\leq& \left( \sum_{\gamma_2 \in \Gamma} \left| \widehat{f_2}(\gamma_2) \right|^2 \right)^{1/2} \left( \sum_{\gamma_2 \in \Gamma} \left( \sum_{\gamma_3 \in \Gamma, \atop{\gamma_3 \circ \psi = - \gamma_2 \circ \phi}} \left| \widehat{f_3}(\gamma_3) \right| \right)^2 \right)^{1/2} \nonumber \\ &\ll& \lVert f_2 \rVert_2 \left( \sum_{\gamma_2 \in \Gamma} \sum_{\gamma_3 \in \Gamma,\atop{\gamma_3 \circ \psi = - \gamma_2 \circ \phi}} \left| \widehat{f_3}(\gamma_3) \right|^2 \right)^{1/2} \label{eq:new_index1} \\ &\ll& \lVert f_2 \rVert_2 \left( \sum_{\gamma_3 \in \Gamma} \left| \widehat{f_3}(\gamma_3) \right|^2 \right)^{1/2} \label{eq:new_index2} \\ &=& \lVert f_2 \rVert_2 \lVert f_3 \rVert_2. 
\nonumber \end{eqnarray} In \eqref{eq:new_index1}, we use the fact that for each $\xi \in \Gamma$, there are at most $[G: \psi(G)]$ values of $\gamma_3$ such that $\gamma_3 \circ \psi = \xi$, while \eqref{eq:new_index2} follows from the fact that there are at most $[G: \phi(G)]$ values of $\gamma_2$ such that $\gamma_2 \circ \phi = \xi$. \end{proof} \section{Bohr sets and partitions} \label{sec:partition} \subsection{Monochromatic configurations} We make some preparations before the proof of \cref{th:counting-partition}. In this section, we only need $G$ to be a commutative semigroup with neutral element. Fix $k+1$ commuting (semigroup) homomorphisms $\psi, \phi_1, \ldots, \phi_k: G \rightarrow G$. We write \[ \Phi_m = \{ \psi^{i_0} \circ \phi_1^{i_1} \circ \cdots \circ \phi_k^{i_k} : 0 \leq i_0, i_1, \ldots, i_k \leq m \} \cup \{ 0 \} \] (where $\phi^i$ denotes the $i$-fold composition of $\phi$ with itself). For \textit{formal variables} $x_1, \ldots, x_n$, we write \[ S_m(x_1,\ldots, x_n) = \left\{ \sum_{i=1}^n \xi_i (x_i) : \xi_i \in \Phi_m \right\} \] and we refer to $S_m(x_1,\ldots, x_n)$ as the $S_{m,n}$-\textit{set} with \textit{generators} $x_1, \ldots, x_n$. For an element $x = \sum_{i=1}^n \xi_i (x_i) \in S_m(x_1,\ldots, x_n)$, by the \textit{support} of $x$ we mean the set $\{ i \in [n]: \xi_i \neq 0\}$. The goal of this section is to prove the following: \begin{theorem} \label{th:mpc-hom} For any $r > 0$, there exist $n$ and $m$ such that under any $r$-coloring of $S_m(x_1,\ldots, x_n)$, there is a monochromatic configuration \[ \{ \psi(y), x, x + \phi_1(y), \ldots, x+ \phi_k(y) \}, \] where $x, y$ have nonempty and disjoint supports. \end{theorem} The fact that the supports of $x$ and $y$ are nonempty and disjoint will be crucial in our applications (\cref{th:counting-partition} and \cref{prop:counting-brauer}).
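To make the definitions of $\Phi_m$ and $S_m(x_1, \ldots, x_n)$ concrete, here is a toy illustration (entirely ours, not part of the argument): take $G = \mathbb{Z}$ with $k = 1$, $\psi(y) = 2y$ and $\phi_1(y) = y$, so that every element of $\Phi_m$ acts as multiplication by an integer.

```python
from itertools import product

# Toy instance: G = Z, k = 1, psi(y) = 2y, phi1(y) = y.
# A composition psi^{i0} o phi1^{i1} acts as multiplication by 2^{i0} * 1^{i1}.
m, k = 1, 1
psi, phi1 = 2, 1

# Phi_m = { psi^{i0} o phi1^{i1} : 0 <= i0, i1 <= m } u {0}, as integer multipliers.
Phi = sorted({psi ** i0 * phi1 ** i1
              for i0, i1 in product(range(m + 1), repeat=k + 1)} | {0})

def S(*generators):
    """The S_{m,n}-set: all sums sum_i xi_i(x_i) with each xi_i in Phi_m."""
    return {sum(c * x for c, x in zip(coeffs, generators))
            for coeffs in product(Phi, repeat=len(generators))}
```

For instance, $\Phi_1$ consists of the multipliers $\{0, 1, 2\}$, and $S_1(x_1, x_2)$ has at most nine elements; the support of an element records the generators whose multiplier is nonzero.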
\cref{th:mpc-hom} follows from \cref{prop:mpc} below whose proof requires the multidimensional Hales-Jewett theorem (for a reference, see \cite[Theorem 7, p.40]{ramsey-book}). We recall the theorem here for the reader's convenience. The set $[t]^N = \{(x_1, \ldots, x_N): x_i \in [t] \}$ is called a \textit{cube} of dimension $N$ over $t$ elements. Let $[N] = A_0 \cup A_1 \cup \cdots \cup A_m$ be any disjoint partition of $[N]$, where $A_i \neq \varnothing$ for $i \neq 0$ ($A_0$ may be empty), and $f: A_0 \rightarrow [t]$ be any map. Define a map $g: [t]^m \rightarrow [t]^N$ by assigning to each $(y_1, \ldots, y_m) \in [t]^m$ the element $(x_1, \ldots, x_N) \in [t]^N$, where \begin{equation} x_i = \begin{cases} f(i), \quad &\textup{if}\ i \in A_0 \\ y_j, \quad &\textup{if}\ i \in A_j \text{ for }j \in [m]. \end{cases} \end{equation} A \textit{combinatorial space} of dimension $m$ is the image of $g$ for some choice of $A_0, A_1, \ldots, A_m$ and $f$. We can now state: \begin{theorem}[Multidimensional Hales-Jewett] For any $r, t, m$, there exists a number $N=HJ(t, m;r)$ such that whenever $[t]^N$ is $r$-colored, there must be a monochromatic combinatorial space of dimension $m$. \end{theorem} Using this, we can prove the following proposition: \begin{proposition} \label{prop:mpc} For any $r>0$ and $\ell >0$, there exist $n=n(k,\ell, r)$ and $m=m(k,\ell, r)$ such that under any $r$-coloring of $S_{m}(x_1,\ldots, x_n)$, there are elements $y_1, \ldots, y_\ell \in S_m(x_1,\ldots, x_n)$ with nonempty and disjoint supports, such that for each $i \in [\ell]$, the elements \[ \psi(y_i) + \sum_{1 \leq j \leq i-1} \xi_j (y_j) \qquad \textup{ where } \xi_j \in \{ 0, \psi, \phi_1, \ldots, \phi_k \} \] have the same color (i.e. their color depends only on $i$). \end{proposition} \begin{proof} The number of colors $r$ will be fixed throughout. We will proceed by induction on $\ell$. When $\ell=1$ the statement is obvious.
Suppose the statement is true for $\ell$; we will prove it is true for $\ell+1$. Write $n' = n(k,\ell, r)$ and $m'=m(k,\ell, r)$. We define $m = m(k,\ell+1, r):=|\Phi_{m'+1}| +1$, $N:=HJ( |\Phi_{m'+1}|, n';r)$ and $n = n(k, \ell+1,r) := 1 + N$. Consider an arbitrary $r$-coloring of $S_{m}(x_1,\ldots, x_n)$. It induces an $r$-coloring of $(\Phi_{m'+1})^{N}$ by assigning to $(a_1, \ldots, a_{N}) \in (\Phi_{m'+1})^{N}$ the color of \[ \psi (x_n) + \sum_{i=1}^{N} \psi \circ a_i (x_i). \] Since $N=HJ( |\Phi_{m'+1}|, n';r)$, there is a disjoint partition \[ [N] = A_0 \cup A_1 \cup \cdots \cup A_{n'}, \quad A_i \neq \varnothing \, \forall i \neq 0 \] and elements $f_i \in \Phi_{m' + 1}$ for $i \in A_0$ such that when $\zeta_1, \zeta_2, \ldots, \zeta_{n'}$ range over $\Phi_{m'+1}$, all the elements \[ \psi(x_n) + \sum_{i=1}^{N} \psi \circ a_i (x_i), \] with \begin{equation*} a_i = \begin{cases} f_i, \quad &\textup{if}\ i \in A_0 \\ \zeta_j, \quad &\textup{if}\ i \in A_j \text{ for }j \in [n'] \end{cases} \end{equation*} have the same color. Write $z_j = \sum_{i \in A_j} \psi(x_i)$ for $1 \leq j \leq n'$, and $z_{n'+1} = x_n + \sum_{i \in A_0} f_i(x_i)$. Then all the $z_j$ have nonempty and disjoint supports, and (since the homomorphisms commute) all elements of the form \[ \psi(z_{n'+1}) + \sum_{j=1}^{n'} \zeta_j (z_j), \qquad \zeta_j \in \Phi_{m'+1}, \] have the same color. By the inductive hypothesis, there exists a sequence $y_1, \ldots, y_\ell \in S_{m'} (z_1, \ldots, z_{n'})$ having nonempty and disjoint supports such that for each $i=1, \ldots, \ell$, the elements \[ \psi(y_i) + \sum_{1 \leq j \leq i-1} \xi_j (y_j) \qquad \textup{ where } \xi_j \in \{ 0, \psi, \phi_1, \ldots, \phi_k \} \] have the same color. We now set $y_{\ell+1} = z_{n'+1}$.
Clearly the elements \[ \psi(y_{\ell+1}) + \sum_{1 \leq j \leq \ell} \xi_j (y_j) \qquad \textup{ where } \xi_j \in \{ 0, \psi, \phi_1, \ldots, \phi_k \} \] are of the form \[ \psi(z_{n'+1}) + \sum_{j=1}^{n'} \zeta_j (z_j), \qquad \zeta_j \in \Phi_{m'+1}, \] and so they have the same color. Thus Proposition \ref{prop:mpc} is proved. \end{proof} \begin{proof}[Proof of \cref{th:mpc-hom}] Applying Proposition \ref{prop:mpc} with $\ell=r+1$, we can find a sequence $y_1, \ldots, y_{r+1}$ satisfying the conclusion of that proposition. Let $c(i)$ be the color of \[ \psi(y_i) + \sum_{1 \leq j \leq i-1} \xi_j (y_j) \qquad \textup{ where } \xi_j \in \{ 0, \psi, \phi_1, \ldots, \phi_k \}. \] Then there exist $1 \leq u < v \leq r+1$ such that $c(u) = c(v)$. Hence the elements \[ \psi(y_u), \psi(y_v), \psi(y_v) + \phi_1(y_u), \ldots, \psi(y_v) + \phi_k(y_u), \] have the same color, and we are done (with $x=\psi(y_v), y=y_u$). \end{proof} \subsection{Proofs of \cref{th:counting-partition} and \cref{th:main-partition}} Using Theorem \ref{th:mpc-hom} we can now prove \cref{th:counting-partition}, which we recall for convenience: \begin{theorem*} Suppose $\psi, \phi_1, \ldots, \phi_k: G \to G$ are continuous homomorphisms satisfying: \begin{enumerate} \item $\psi(G), \phi_1(G), \ldots, \phi_k(G)$ have finite indices in $G$, and \item $\psi, \phi_1, \ldots, \phi_k$ pairwise commute. \end{enumerate} Suppose $G = \bigcup_{i=1}^r A_i$ is a partition of $G$ into measurable sets. Then \begin{equation*} \sum_{i=1}^r \iint_{G^2} 1_{A_i}(\psi(t)) 1_{A_i}(x) 1_{A_i}(x + \phi_1(t)) \cdots 1_{A_i}(x + \phi_k(t)) \, d \mu(x) d\mu(t) \geq c_3 \end{equation*} for some positive constant $c_3$ depending only on $r, k$ and the aforementioned indices. \end{theorem*} \begin{proof} Consider the set $S_m(x_1, \ldots, x_n)$ given by Theorem \ref{th:mpc-hom}, where we now let $x_1, \ldots, x_n$ vary over $G$. Note that for any $\xi \in \Phi_m$, we have $[G:\xi(G)] < \infty$ by \cref{lem:composition}.
Let $R$ be the set of all pairs $(z, t)$ where $z, t \in S_m(x_1, \ldots, x_n)$ have nonempty and disjoint supports. Suppose $G = \bigcup_{i=1}^r A_i$. For $i \in [r]$, we define \[ T_i := \iint_{G^2} 1_{A_i}( \psi(y) ) 1_{A_i}(x) 1_{A_i}(x+ \phi_1(y)) \cdots 1_{A_i}(x+ \phi_k(y)) \, d\mu(x) d\mu(y). \] Let $(z,t) \in R$ be arbitrary, and suppose \[ z = \sum_{u \in U} \zeta_u(x_u) \qquad \textup{and} \qquad t = \sum_{v \in V} \xi_v (x_v) \] where $U, V \subset [n]$ are nonempty and disjoint and $\zeta_u, \xi_v \in \Phi_m \setminus \{ 0\}$. We have \begin{eqnarray*} & & \int_{G^n} 1_{A_i}( \psi(t)) 1_{A_i}(z) 1_{A_i}(z+ \phi_1(t)) \cdots 1_{A_i}(z+ \phi_k(t)) \, d\mu(x_1) \cdots d\mu(x_n) \\ & = & \int_{G^n} 1_{A_i} \left( \sum_{v \in V} \psi( \xi_v (x_v) ) \right) 1_{A_i} \left( \sum_{u \in U} \zeta_u(x_u)\right) 1_{A_i} \left( \sum_{u \in U} \zeta_u(x_u) + \phi_1 \left( \sum_{v \in V} \xi_v (x_v) \right) \right) \cdots \\ & & \qquad 1_{A_i} \left( \sum_{u \in U} \zeta_u(x_u) + \phi_k \left( \sum_{v \in V} \xi_v (x_v) \right) \right) \, d\mu(x_1) \cdots d \mu(x_n) \\ & \ll & \int_{G^n} 1_{A_i} \left( \sum_{v \in V} \psi( x_v ) \right) 1_{A_i} \left( \sum_{u \in U} x_u \right) 1_{A_i} \left( \sum_{u \in U} x_u + \phi_1 \left( \sum_{v \in V} x_v \right) \right) \cdots \\ & & \qquad 1_{A_i} \left( \sum_{u \in U} x_u + \phi_k \left( \sum_{v \in V} x_v \right) \right) \, d\mu(x_1) \cdots d \mu(x_n) \\ & = & \iint_{G^2} 1_{A_i}( \psi(y) ) 1_{A_i}(x) 1_{A_i}(x + \phi_1(y) ) \cdots 1_{A_i}(x + \phi_k(y) ) \, d\mu(x) \, d\mu(y) \\ &=& T_i, \end{eqnarray*} by $|U|+|V|$ applications of Lemma \ref{lem:finite-index}.
Now \cref{th:mpc-hom} implies that \begin{eqnarray*} 1 & \leq & \int_{G^n} \sum_{i=1}^r \sum_{(z, t) \in R } 1_{A_i}( \psi(t)) 1_{A_i}(z) 1_{A_i}(z+ \phi_1(t)) \cdots 1_{A_i}(z+ \phi_k(t)) \, d\mu(x_1) \cdots d\mu(x_n) \\ &\leq& \sum_{i=1}^r \int_{G^n} \sum_{(z, t) \in R } 1_{A_i}( \psi(t)) 1_{A_i}(z) 1_{A_i}(z+ \phi_1(t)) \cdots 1_{A_i}(z+ \phi_k(t)) \, d\mu(x_1) \cdots d\mu(x_n) \\ & \ll & \sum_{i=1}^r T_i, \end{eqnarray*} thus finishing the proof. \end{proof} To prove \cref{th:main-partition}, we will need the following proposition. With an eye to potential applications, we state and prove a slightly stronger version than what is needed. \begin{proposition} \label{prop:partition-bohr-stronger} Let $\phi, \psi: G \to G$ be commuting continuous homomorphisms with images having finite indices. Suppose $f_1, \ldots, f_r: G \to [0,1]$ are measurable functions such that $\sum_{i=1}^r f_i \geq 1$ pointwise. For $w \in G$, define \[ R_i(w) = \iint_{G^2} f_i(\psi(y)) f_i(x+w) f_i(x + \phi(y)) \ d\mu(x) d\mu(y). \] Then there are $c, k, \eta >0$ depending only on $r$ and the indices above such that for some $i \in [r]$, the set $\{ w \in G: R_i(w) > c\}$ contains a Bohr-$(k,\eta)$ set. \end{proposition} \begin{proof} For $i \in [r]$, let $A_i = \{x \in G: f_i(x) \geq 1/r\}$. Since $\sum_{i=1}^r f_i \geq 1$ pointwise, $G = \bigcup_{i=1}^r A_i$. In light of \cref{th:counting-partition}, there exists a constant $c$ depending only on $r$ and the indices and an $i \in [r]$ such that \[ \iint_{G^2} 1_{A_i}(x) 1_{A_i}(x + \phi(y)) 1_{A_i} (\psi(y)) \ d\mu(x) d\mu(y) > c. \] It then follows that \[ R_i(0) \geq \frac{c}{r^3}. \] On the other hand, by \cref{lem:counting2}, for every $w \in G$, \begin{equation*} \label{eq:partition_R_i} |R_i(w) - R_i(0)| \ll \lVert \widehat{f_i} - \widehat{f_{i,w}} \rVert_{\infty}, \end{equation*} where the implicit constant depends only on the indices of $\phi(G)$ and $\psi(G)$ in $G$.
Hence, there exists a constant $c'$ such that $R_{i}(w) \geq \frac{c}{2r^3}$ if \begin{equation*} \lVert \widehat{f_{i}} - \widehat{f_{i,w}} \rVert_{\infty} < c'. \end{equation*} By \cref{lem:translate}, the set of such $w$ contains a Bohr-$(k,\eta)$ set, where $k$ and $\eta$ depend only on $c'$. \end{proof} \cref{th:main-partition} is now a special case of the next theorem with $\psi_1 = \phi_2$ and $\psi_2 = \phi_1$. \begin{theorem}\label{th:main-partition-true} Let $G=\bigcup_{i=1}^r A_i$ be a partition into measurable sets. Let $\phi_1, \phi_2, \psi_1, \psi_2: G \rightarrow G$ be continuous homomorphisms satisfying the following: \begin{enumerate} \item $\phi_2 \circ \psi_2 = \phi_1 \circ \psi_1$, \item $\psi_1 \circ \psi_2 = \psi_2 \circ \psi_1$, \item $\phi_1(G), \psi_1(G), \psi_2(G)$ have finite indices in $G$. \end{enumerate} Then for some $1 \leq i \leq r$, the set $\phi_1(A_i) - \phi_1(A_i) + \phi_2(A_i)$ contains a Bohr-$(k, \eta)$ set, where $k$ and $\eta$ depend only on $r$ and the indices of $\phi_1(G), \psi_1(G), \psi_2(G)$ in $G$. \end{theorem} \begin{proof} We apply \cref{prop:partition-bohr-stronger} with $f_i = 1_{A_i}$ and $(\psi_1, \psi_2)$ in place of $(\psi, \phi)$. Then for some $i$, the set $\{ w \in G: R_i(w) > c\}$ contains a Bohr-$(k,\eta)$ set $B$. This means that for $w \in B$, there exist $x, y \in G$ such that \[ x+w, \psi_2(y), x + \psi_1(y) \in A_i. \] Since \[ \phi_1(x+w) + \phi_2( \psi_2(y)) - \phi_1( x + \psi_1(y)) = \phi_1(w), \] we conclude that $\phi_1(B) \subset \phi_1(A_i) + \phi_2(A_i) - \phi_1(A_i)$. Our theorem now follows from \cref{lem:bohr-homo2}. \end{proof} \begin{remark} \label{rk:auto2} If $\phi_1$ is an automorphism and $[G:\phi_2(G)] < \infty$ then the hypothesis of \cref{th:main-partition-true} is also satisfied. Indeed, we let $\psi_1 = \phi_1^{-1} \circ \phi_2$ and $\psi_2 = \textup{Id}$. Then the first two conditions of \cref{th:main-partition-true} are satisfied.
As for the third condition, we have $\psi_1(G) = \phi_1^{-1} \circ \phi_2(G),$ which has finite index in $G$ by \cref{lem:composition}. Similarly, we see that if $\phi_2$ is an automorphism and $[G:\phi_1(G)] < \infty$, then the conditions of \cref{th:main-partition-true} are also satisfied. \end{remark} \section{Bohr sets and sets of positive measure} \label{sec:density} \subsection{A regularity lemma} The goal of this section is to prove \cref{prop:regularity_lemma}. As mentioned in the introduction, this argument has its genesis in Bourgain \cite{bourgain-roth}. Bourgain's ideas were elaborated by Tao \cite{tao}, who proved Roth's theorem in compact abelian groups that are 2-divisible; and by Bergelson-Host-McCutcheon-Parreau \cite[Theorem 4.1]{bhmp}, who proved Roth's theorem for dilations on the torus $\mathbb{R} / \mathbb{Z}$. We streamline and generalize Bergelson-Host-McCutcheon-Parreau's argument to deal with homomorphisms on arbitrary compact abelian groups. This generalization requires non-trivial modifications; in particular, we will make use of \cref{lem:translate} and \cref{lem:kernel_a}. \begin{lemma}[cf. {\cite[Lemma 4.2]{bhmp}}] \label{lem:J} Let $\phi, \psi: G \rightarrow G$ be continuous homomorphisms such that $\phi(G), \psi(G)$ and $(\phi-\psi)(G)$ have finite indices in $G$. For $f \in L^\infty(G)$, define \[ J(f) = \iint_{G^2} f(x)f(x+ \phi(y))f(x+ \psi(y)) \, d\mu(x) d\mu(y). \] Then for any measurable functions $f, g: G \to [0,1]$, \begin{equation*} \label{eq:c} |J(f) - J(g)| \ll \| \widehat{f} - \widehat{g} \|_\infty, \end{equation*} where the implicit constant depends only on the aforementioned indices. \end{lemma} \begin{proof} We have \begin{eqnarray*} J(f) - J(g) &=& \iint_{G^2} (f-g)(x) \cdot f(x+ \phi(y)) \cdot f(x+ \psi(y)) \, d\mu(x) d\mu(y) \\ & & + \iint_{G^2} g(x) \cdot ( f-g) (x+ \phi(y)) \cdot f(x+ \psi(y)) \, d\mu(x) d\mu(y) \\ & & + \iint_{G^2} g(x) \cdot g(x+ \phi(y)) \cdot (f - g) (x + \psi(y)) \, d\mu(x) d\mu(y).
\end{eqnarray*} The lemma now follows from Lemma \ref{lem:bourgain_lemma_2} (applied to each term, after a change of variables in the second and third) and the assumptions on $\phi$ and $\psi$. \end{proof} \begin{proposition}[Regularity Lemma] \label{prop:regularity_lemma} Let $f: G \to [0, 1]$ be a measurable function with $\int_G f \, d \mu = \delta >0$. Let $\phi, \psi: G \to G$ be continuous homomorphisms such that $\phi(G), \psi(G)$ and $(\phi- \psi)(G)$ have finite indices in $G$. Then for every $\epsilon > 0$, there exist a constant $C$ that depends only on $\delta, \epsilon$ and the indices above, a kernel $K: G \to \mathbb{R}_{\geq 0}$, and a decomposition $f = f_{st} + f_{er} + f_{un}$ such that \begin{enumerate} \item $\lVert K \rVert_{\infty} < C$, \item $\lVert f_{st} \rVert_{\infty} \leq 1$, $\lVert f_{er} \rVert_{\infty} \leq 2$ and $\lVert f_{un} \rVert_{\infty} \leq 2$, \item $J'(f_{st}) := \displaystyle \iint_{G^2} f_{st}(x) f_{st}(x+ \phi(t)) f_{st}(x+ \psi(t)) K(t) \, d\mu(x) d\mu(t) > \delta^3 - \epsilon$, \item $\lVert f_{er} \rVert_2 < \epsilon$, \item $\lVert \widehat{f}_{un} \rVert_{\infty} \lVert \widehat{K} \rVert_1 < \epsilon$. \end{enumerate} \end{proposition} \begin{proof} For $t \in G$, let \[ d(t) := \max \left( \| \widehat{f} - \widehat{f_{t}} \|_\infty, \| \widehat{f} - \widehat{f_{\phi(t)}} \|_\infty, \| \widehat{f} - \widehat{f_{\psi(t)}} \|_\infty \right). \] Fixing $\epsilon > 0$, we define sequences $\eta_n \in (0, 1/2]$, $\kappa_n \in (0, \infty)$ and finite sets $\Lambda_n \subseteq \Gamma$ recursively as follows: First set $\eta_0 = 1/2$. For $n \geq 0$, \cref{lem:translate} implies that there exists a set $\Lambda_n \subseteq \Gamma$ with $|\Lambda_n| \leq 12/\eta_n^2$ such that $D(\eta_n) := \{t \in G: d(t) \leq \eta_n\}$ contains a Bohr set $B(\Lambda_n; \eta_n)$. For $\eta \in (0, 1/2]$, define $\nu(\eta) = (C_0 \eta)^{12/\eta^2}$ where $C_0$ is the constant found in \cref{lem:kernel_a}; in particular, $\nu(\eta_n) = (C_0 \eta_n)^{12/\eta_n^2} \leq (C_0 \eta_n)^{|\Lambda_n|}$.
Put \[ \kappa_n = \nu(\eta_n)^{-1/2} \text{ and } \eta_{n+1} = \min \left \{\eta_n, \frac{\epsilon^2}{4 \kappa_n^2}, \epsilon \nu \left( \frac{\epsilon}{2 \kappa_n} \right) \right\}. \] In view of \cref{lem:kernel_a}, for $n \geq 0$, there is a kernel $K_n: G \to [0, \infty)$ such that \[ \widehat{K}_n \geq 0, \| \widehat{K_n}\|_1 = \lVert K_n \rVert_{\infty} \leq 1/\nu(\eta_n) \] and $K_n$ is supported on $B(\Lambda_n; \eta_n) \subseteq D(\eta_n)$. We define \[ f_n = f * K_n. \] \noindent \textbf{Claim 1:} \[ \| \widehat{f} - \widehat{f_n} \|_{\infty} = \sup_{\gamma \in \Gamma} \left| \widehat{f}(\gamma) (1-\widehat{K_n}(\gamma)) \right| \leq \eta_n. \] Indeed, by construction, $K_n$ is supported on $D(\eta_n)$; and every $t \in D(\eta_n)$ satisfies $\left| \widehat{f}(\gamma) (1 - \gamma(t)) \right| \leq \eta_n$ for all $\gamma \in \Gamma$. Therefore, for all $\gamma \in \Gamma$, \begin{eqnarray*} \left| \widehat{f}(\gamma) \right| \left| 1-\widehat{K_n}(\gamma) \right| &\leq& \left| \widehat{f}(\gamma) \right| \int_{G} K_n(x) \left| 1-\overline{\gamma(x)} \right| \, d\mu(x) \\ &=& \left| \widehat{f}(\gamma) \right| \int_{D(\eta_n)} K_n(x) \left| 1-\overline{\gamma(x)} \right| \, d\mu(x) \\ &=& \int_{D(\eta_n)} K_n(x) \left| \widehat{f}(\gamma) \right| \left| 1-\overline{\gamma(x)} \right| \, d\mu(x)\\ &\leq& \eta_n \int_{D(\eta_n)} K_n(x) \, d\mu(x) \leq \eta_n. \end{eqnarray*} \noindent \textbf{Claim 2:} \[ \lVert f_{n+1} - f_n \rVert_2^2 \leq \lVert f_{n+1} \rVert_2^2 - \lVert f_n \rVert_2^2 + 2 \eta_{n+1} \kappa_n^2. 
\] Indeed, we have \begin{eqnarray*} \lVert f_{n+1} - f_n \rVert_2^2 &= & \lVert \widehat{f_{n+1}} - \widehat{f_n} \rVert_2^2 \\ &= & \lVert \widehat{f_{n+1}} \rVert_2^2 + \lVert \widehat{f_n} \rVert_2^2 - \sum_{\gamma \in \Gamma} \left( \widehat{f_{n+1}}(\gamma) \overline{\widehat{f_{n}}(\gamma)} + \overline{\widehat{f_{n+1}}(\gamma)} \widehat{f_{n}}(\gamma) \right)\\ &= & \lVert \widehat{f_{n+1}} \rVert_2^2 - \lVert \widehat{f_n} \rVert_2^2 + 2 \sum_{\gamma \in \Gamma} \left| \widehat{f}(\gamma) \right|^2 \widehat{K_n}(\gamma) \left( \widehat{K_n}(\gamma) - \widehat{K_{n+1}}(\gamma) \right)\\ & \leq & \lVert \widehat{f_{n+1}} \rVert_2^2 - \lVert \widehat{f_n} \rVert_2^2 + 2 \sum_{\gamma \in \Gamma} \left| \widehat{f}(\gamma) \right|^2 \widehat{K_n}(\gamma) \left(1 - \widehat{K_{n+1}}(\gamma) \right) \\ & \leq & \lVert \widehat{f_{n+1}} \rVert_2^2 - \lVert \widehat{f_n} \rVert_2^2 + 2 \sup_{\gamma \in \Gamma} |\widehat{f}(\gamma)| \left( 1 - \widehat{K_{n+1}}(\gamma) \right) \cdot \| \widehat{K_n} \|_1 \\ & \leq & \lVert \widehat{f_{n+1}} \rVert_2^2 - \lVert \widehat{f_n} \rVert_2^2 + 2 \eta_{n+1} \kappa_n^2 \end{eqnarray*} and the claim is proved. Since $\eta_{n+1} \leq \epsilon^2/(4 \kappa_n^2)$, we have \[ \lVert f_{n+1} - f_n \rVert_2^2 \leq \lVert f_{n+1} \rVert_2^2 - \lVert f_n \rVert_2^2 + \epsilon^2/2. \] Let $M$ be the smallest integer such that $M \geq 2/\epsilon^2$. Then \[ \sum_{n=0}^{M-1} \lVert f_{n+1} - f_n \rVert_2^2 \leq \lVert f_M \rVert_2^2 - \lVert f_0 \rVert_2^2 + M \epsilon^2/2 \leq 1 + M \epsilon^2/2 \leq M \epsilon^2. \] Therefore there exists $0 \leq n \leq M - 1$ such that \[ \lVert f_{n+1} - f_n \rVert_2 \leq \epsilon. \] From now on, we fix this $n$. Next consider the expression \[ I_n(t) = \int_G f_n(x) f_n(x+\phi(t)) f_n(x + \psi(t)) \, d \mu(x) \; \; \text{ for } t \in G. \] We have \[ |I_n(0) - I_n(t)| \leq \| f_n - (f_n)_{\phi(t)} \|_1 + \| f_n - (f_n)_{\psi(t)} \|_1.
\] Note that \begin{eqnarray*} \| f_n - (f_n)_{\phi(t)} \|^2_1 &\leq & \| f_n - (f_n)_{\phi(t)} \|_2^2 = \| (f - f_{\phi(t)})*K_n \|_2^2 \\ &=& \sum_{\gamma \in \Gamma} \left| \widehat{K_n}(\gamma) \right|^2 \left| \widehat{f}(\gamma) \right|^2 \left| 1 -\gamma(\phi(t)) \right|^2 \\ &\leq & \| \widehat{K_n} \|_1 d(t)^2 \leq \kappa_n^2 d(t)^2. \end{eqnarray*} The same estimate holds for $\| f_n - (f_n)_{\psi(t)} \|_1^2$. Hence $|I_n(0) - I_n(t)| \leq 2 \kappa_n d(t)$ for any $t \in G$. Since $I_n(0) = \int_G f_n^3 \, d\mu \geq \lVert f_n \rVert_1^3 = \lVert f \rVert_1^3 \geq \delta^3$ (by Jensen's inequality), it follows that \[ I_n(t) \geq \delta^3 - 2 \kappa_n d(t) \text{ for all } t \in G. \] Note that $d(t) \leq \epsilon/(2\kappa_n)$ for $t$ in the set $D(\epsilon/(2 \kappa_n))$ and so $I_n(t) \geq \delta^3 - \epsilon$ in this set. Let $\eta = \epsilon/(2 \kappa_n)$. In view of \cref{lem:translate} and \cref{lem:kernel_a}, there exists a kernel $K$ supported on $D(\eta)$ such that $\lVert K \rVert_{\infty} \leq 1/\nu(\eta)$. We then have \[ J'(f_{n}) := \int_G I_{n}(t) K(t) \, d \mu(t) \geq (\delta^3 - \epsilon) \int_{D(\epsilon/(2\kappa_n))} K(t) \, d\mu(t) \geq \delta^3 - \epsilon. \] Letting $f_{st} = f_n$, $f_{er} = f_{n+1} - f_n$ and $f_{un} = f - f_{n+1}$, we obtain \begin{enumerate} \item $\lVert K \rVert_{\infty} \leq 1/\nu(\epsilon/(2\kappa_n)) \leq 1/\nu(\epsilon/(2 \kappa_M))$, \item $\lVert f_{er} \rVert_2 = \lVert f_{n+1} - f_n \rVert_2 < \epsilon$, \item $\lVert \hat{f}_{un} \rVert_{\infty} \lVert \widehat{K} \rVert_{1} = \lVert \hat{f} - \hat{f}_{n+1} \rVert_{\infty} \lVert K \rVert_{\infty} < \eta_{n+1}/ \nu(\eta) < \epsilon$ because $\lVert \widehat{K} \rVert_1 = \lVert K \rVert_{\infty}$ and $\eta_{n+1} \leq \epsilon \nu(\epsilon/(2\kappa_n)) = \epsilon \nu(\eta)$, \item $J'(f_{st}) = J'(f_n) \geq \delta^3 - \epsilon$. \end{enumerate} This finishes the proof. \end{proof} \subsection{Proof of density results} The goal of this section is to prove \cref{th:main-density} and \cref{th:roth-khintchine}. First we recall \cref{th:roth-khintchine} for the reader's convenience.
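As a concrete aside, the kernel properties used repeatedly above can be checked numerically in the toy case $G = \mathbb{Z}_N$. The sketch below is our own illustration (the interval $A$, the parameter $a$ and all variable names are ours, not part of the argument): it builds an autocorrelation kernel $K = \varphi * \widetilde{\varphi}$ with $\varphi = 1_A/\mu(A)$, for which $K \geq 0$, $\int_G K \, d\mu = 1$, $\widehat{K} = |\widehat{\varphi}|^2 \geq 0$ and $\lVert \widehat{K} \rVert_1 = \lVert K \rVert_\infty$.

```python
import cmath

# Toy case G = Z_N with normalized Haar measure mu({x}) = 1/N.
N, a = 64, 5
A = [1.0 if (x <= a or x >= N - a) else 0.0 for x in range(N)]   # A = {-a, ..., a}
muA = sum(A) / N
phi = [v / muA for v in A]                                       # phi = 1_A / mu(A)

# Autocorrelation kernel K(x) = int phi(y) phi(y + x) dmu(y);
# K >= 0, K is supported on A - A, and int K dmu = 1.
K = [sum(phi[y] * phi[(y + x) % N] for y in range(N)) / N for x in range(N)]

def fhat(f, k):
    """k-th Fourier coefficient with respect to the normalized measure."""
    return sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / N) for x in range(N)) / N

Khat = [fhat(K, k) for k in range(N)]   # Khat[k] = |phihat(k)|^2 >= 0
```

Here $\lVert \widehat{K} \rVert_1 = \sum_k |\widehat{\varphi}(k)|^2 = \lVert \varphi \rVert_2^2 = 1/\mu(A) = K(0) = \lVert K \rVert_\infty$, matching the identity $\lVert \widehat{K_n} \rVert_1 = \lVert K_n \rVert_\infty$ used in the proof.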
\begin{theorem*}[Khintchine-Roth theorem for compact abelian groups] Let $f: G \to [0, 1]$ be a measurable function with $\int_G f \, d \mu > \delta$. Let $\phi, \psi: G \to G$ be continuous homomorphisms such that $[G: \phi(G)], [G: \psi(G)]$ and $[G:(\phi-\psi)(G)]$ are finite. Then for every $\epsilon > 0$, there exists a constant $c_1$ that depends only on $\delta, \epsilon$ and the indices above such that the set \[ B = \left\{t \in G: \int_G f(x) f(x+ \phi(t)) f(x + \psi(t)) \, d\mu(x) > \delta^3 - \epsilon \right\} \] has measure greater than $c_1$. As a consequence, there exists a constant $c_2$ that depends only on $\delta$ and the indices above such that \[ J(f) := \iint_{G^2} f(x) f(x+ \phi(t)) f(x+ \psi(t)) d\mu(x) d\mu(t) > c_2. \] \end{theorem*} \begin{proof} Fix $\epsilon > 0$ and let the constant $C$, the kernel $K$ and the decomposition $f = f_{st} + f_{er} + f_{un}$ be as found in \cref{prop:regularity_lemma}. Define \[ J'(f) := \iint_{G^2} f(x) f(x+ \phi(t)) f(x+ \psi(t)) K(t) \ d\mu(x) d\mu(t) \] and \[ J'(f_{st}) := \iint_{G^2} f_{st}(x) f_{st}(x + \phi(t)) f_{st}(x + \psi(t)) K(t)\ d\mu(x) d\mu(t). \] Applying the decomposition $f = f_{st} + f_{er} + f_{un}$ and expanding $J'(f)$, we see that the difference $J'(f) - J'(f_{st})$ will have $26$ terms. The terms that contain $f_{er}$ can be bounded by $4 \epsilon$ each, since for $f_1, f_2, f_3 \in L^{\infty}(G)$, \begin{equation*} \iint_{G^2} f_1(x) f_2(x + \phi(t)) f_3(x + \psi(t)) K(t) \, d\mu(x) d\mu(t) \leq \max_{i} \lVert f_i \rVert_{\infty}^2 \min_{i} \lVert f_i \rVert_1 \lVert K \rVert_1, \end{equation*} while $\lVert f_{er} \rVert_1 \leq \lVert f_{er} \rVert_2 < \epsilon$ and $\lVert K \rVert_1 = 1$. On the other hand, in view of \cref{lem:bourgain_lemma_2}, the terms containing $f_{un}$ are bounded by $O(\lVert \hat{f}_{un} \rVert_{\infty} \lVert \widehat{K} \rVert_{1})$ which is $O(\epsilon)$ thanks to the properties of the decomposition.
Therefore, \begin{equation} \label{eq:J_f'_1} J'(f) > J'(f_{st}) - O(\epsilon) > \delta^3 - c_0 \epsilon \end{equation} where the constant $c_0$ depends only on the indices of $\phi(G), \psi(G)$ and $(\phi - \psi)(G)$ in $G$. Define \begin{equation*} I_f(t) = \int_G f(x) f(x+ \phi(t)) f(x+ \psi(t)) \, d\mu(x) \end{equation*} and \begin{equation*} B = \{t \in G: I_f(t) > \delta^3 - 2 c_0 \epsilon \}. \end{equation*} We then have \begin{multline} \label{eq:J_f'_2} J'(f) = \int_G I_f(t) K(t) \, d\mu(t) = \int_B I_f(t) K(t) \, d\mu(t) + \int_{G \setminus B} I_f(t) K(t) \, d\mu(t) \leq \\ \int_B K(t) \, d\mu(t) + (\delta^3 - 2 c_0 \epsilon) \int_{G \setminus B} K(t) \, d\mu(t) \leq \lVert K \rVert_{\infty} \mu(B) + (\delta^3 - 2 c_0 \epsilon). \end{multline} Combining \eqref{eq:J_f'_1} and \eqref{eq:J_f'_2}, we deduce that \begin{equation*} \mu(B) > c_0 \epsilon/\lVert K \rVert_{\infty}. \end{equation*} Letting $c_1 = c_0 \epsilon/\lVert K \rVert_{\infty}$, we obtain the first part of the theorem. Now we have \begin{equation*} J(f) = \int_G I_f(t) \, d\mu(t) \geq \int_B I_f(t) \, d\mu(t) > (\delta^3 - 2 c_0 \epsilon) c_1. \end{equation*} Taking, say, $\epsilon = \delta^3/(4c_0)$ and letting $c_2 = c_1(\delta^3 - 2c_0 \epsilon) > 0$, we obtain the second part of the theorem. \end{proof} In order to prove \cref{th:main-density}, we need the following proposition. For our future applications, we will state and prove a slightly more general version than what is necessary. \begin{proposition} \label{prop:density-bohr-stronger} Suppose $\phi, \psi: G \rightarrow G$ are continuous homomorphisms such that $\phi(G), \psi(G), (\phi-\psi)(G)$ have finite indices in $G$. Let $f : G \rightarrow [0,1]$ be a measurable function with $\int_G f \, d\mu = \delta >0$. For $w \in G$, define \[ R(w) = \iint_{G^2} f(x + w) f(x + \phi(y)) f(x+\psi(y)) \, d\mu(x) d\mu(y). \] Then there are $c, k, \eta >0$ depending only on $\delta$ and the indices above such that the set $\{ w \in G: R(w) > c\}$ contains a Bohr-$(k,\eta)$ set.
\end{proposition} \begin{proof} By \cref{lem:bourgain_lemma_2}, we have \[ |R(w) - R(0)| \ll \lVert \hat{f} - \widehat{f_w} \rVert_{\infty} \] where the implicit constant depends only on the indices of $\phi(G), \psi(G), (\phi-\psi)(G)$ in $G$. By \cref{th:roth-khintchine}, we know that $R(0) > c$ for some constant $c>0$ depending on these indices and $\delta$. It follows that there exists a constant $c'$ such that $R(w) > c/2$ if \begin{equation} \lVert \hat{f} - \widehat{f_w} \rVert_{\infty} < c'. \end{equation} \cref{lem:translate} implies that the set of such $w$ contains a Bohr-$(k,\eta)$ set, where $k$ and $\eta$ depend only on $c'$. \end{proof} We can now formulate and prove our main theorem for sets of positive measure. \begin{theorem} \label{th:main-density-true} Let $\phi_1, \phi_2, \phi_3, \psi_1, \psi_2: G \rightarrow G$ be continuous homomorphisms satisfying the following: \begin{enumerate} \item $\phi_1 + \phi_2 + \phi_3=0$, \item $\phi_1 \circ \psi_1 = \phi_2 \circ \psi_2$, \item $\phi_3(G), \psi_1(G), \psi_2(G), (\psi_1 + \psi_2)(G)$ have finite indices in $G$. \end{enumerate} Then for any measurable set $A \subset G$ with $\mu(A) = \delta >0$, the set $\phi_1(A) + \phi_2(A) + \phi_3(A)$ contains a Bohr-$(k, \eta)$ set, where $k$ and $\eta$ depend only on $\delta$ and the indices above. \end{theorem} \begin{proof} Applying \cref{prop:density-bohr-stronger} for $f=1_A$ and $\psi_1, -\psi_2$ in place of $\phi$ and $\psi$, we see that there exists a Bohr-$(k,\eta)$ set $B$ such that for all $w \in B$, there are $x, y \in G$ such that \[ x+w , x+\psi_1(y) \textup{ and } x-\psi_{2}(y) \in A. \] Note that \[ \phi_3(x + w) + \phi_1(x+\psi_1(y)) + \phi_2(x-\psi_2(y)) = \phi_3(w) \] and so $\phi_1(A) + \phi_2(A) + \phi_3(A) \supseteq \phi_3(B)$. Our theorem then follows from \cref{lem:bohr-homo2}. \end{proof} \cref{th:main-density} is now a special case of \cref{th:main-density-true} when $\psi_1 = \phi_2$ and $\psi_2 = \phi_1$.
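As a sanity check of the conclusion in a very special case (a toy example of ours, not part of the proof): take $G = \mathbb{Z}_N$, $\phi_1 = \phi_2 = \mathrm{Id}$ and $\phi_3 = -2\,\mathrm{Id}$, so that $\phi_1 + \phi_2 + \phi_3 = 0$ and the theorem asserts that $A + A - 2A$ contains a Bohr set. For an interval $A$ the containment can be verified directly.

```python
# Toy instance on G = Z_N: phi1 = phi2 = Id, phi3 = -2*Id, so
# phi1 + phi2 + phi3 = 0 and the theorem asserts that A + A - 2A
# contains a Bohr set.
N = 101
A = range(N // 4)                                  # interval of density ~1/4
sumset = {(a + b - 2 * c) % N for a in A for b in A for c in A}

# In Z_N a small interval around 0 plays the role of a Bohr-(1, eta) set
# (for the character x -> e(x/N)); here it is visibly contained in the sumset.
bohr = {x % N for x in range(-3, 4)}
```

Of course, for an interval this is immediate by hand ($A + A - 2A \supseteq \{-2|A|+2, \ldots, 2|A|-2\}$); the content of the theorem is that the same conclusion holds uniformly over all measurable $A$ of density $\delta$.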
\begin{remark} \label{rk:auto} If $\phi_1$ is an automorphism and $[G:\phi_2(G)], [G:\phi_3(G)] < \infty$ then the hypothesis of \cref{th:main-density-true} is also satisfied. Indeed, we let $\psi_1 = \phi_1^{-1} \circ \phi_2$ and $\psi_2 = \textup{Id}$. Then the first two conditions of \cref{th:main-density-true} are satisfied. As for the third condition, we have $\psi_1(G) = \phi_1^{-1} \circ \phi_2(G)$ and $(\psi_1 + \psi_2)(G) = \phi_1^{-1} \circ (\phi_2 + \phi_1) (G)$. Both of these have finite indices in $G$ by \cref{lem:composition}. \end{remark} \section{Bohr sets in sumsets in number fields and function fields} \label{sec:z_and_field} In this section we prove Theorems \ref{th:main-kr}, \ref{th:nf} and \ref{th:ff} using a strategy similar to Bergelson and Ruzsa's proof of \cref{th:br}. To prove \cref{th:br}, one could embed $A \cap [N]$ naturally in $\mathbb{Z}_N$, and invoke the counting result (for example, \cref{th:roth-khintchine}) in $\mathbb{Z}_N$. However, one has to deal with the ``wraparound effect'': A solution to $s_1 x + s_2 y + s_3z =0$ in $\mathbb{Z}_N$ does not necessarily correspond to a solution in $\mathbb{Z}$. To overcome this issue, Bergelson and Ruzsa embedded $A \cap [N]$ in $\mathbb{Z}_{N'}$ for some $N' \gg N$. Then $A \cap [N]$ remains dense in $\mathbb{Z}_{N'}$ and a solution in $\mathbb{Z}_{N'}$ found in $A \cap [N]$ is now a solution in $\mathbb{Z}$. For partitions, the corresponding counting result would be \cref{th:counting-partition}. However, if this theorem were applied directly, we would have a partition of the whole group $\mathbb{Z}_{N'}$ which again causes the wrap-around effect. To avoid this problem, we need to modify \cref{th:counting-partition} so that it allows for partitions of a subset $[-N, N] \subset \mathbb{Z}_{N'}$ instead of the whole group. 
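The wraparound effect is easy to see on a toy example. In the following Python sketch (the set $A$ and the moduli are our own choices), every solution of $x + y \equiv z$ modulo $N = 10$ inside $A = \{7, 8, 9\}$ is spurious, i.e. does not satisfy $x + y = z$ in $\mathbb{Z}$, while no spurious solutions survive once the modulus exceeds $2N$.

```python
# Wraparound effect: solutions of x + y = z found modulo N inside A ⊂ [-N, N]
# need not be genuine solutions in Z, but they are once the modulus exceeds 2N.
def triples(A, modulus):
    S = sorted(A)
    return [(x, y, z) for x in S for y in S for z in S
            if (x + y - z) % modulus == 0]

N = 10
A = {7, 8, 9}

print(triples(A, N))          # [(8, 9, 7), (9, 8, 7), (9, 9, 8)] -- all spurious
print(triples(A, 2 * N + 1))  # []: with modulus 21 > 2N there is no wraparound
```

With modulus $21$, any solution with $0 \leq x, y, z \leq 10$ satisfies $|x + y - z| \leq 20 < 21$, so a congruence forces a genuine equality in $\mathbb{Z}$; this is precisely the reason for passing to $\mathbb{Z}_{N'}$ with $N' \gg N$.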
\begin{proposition}\label{prop:counting-brauer} For any $k, \ell, r>0$, there is a constant $c(k, \ell, r) > 0$ such that the following holds: For sufficiently large $N$, if $[-N, N] = \bigcup_{i=1}^r A_i$, then for some $1 \leq i \leq r$, we have \[ \sum_{|x|, |y| \leq N} 1_{A_i}(\ell y) 1_{A_i}(x) 1_{A_i}(x+y) \cdots 1_{A_i}(x + ky) \geq c(k, \ell, r) N^2. \] \end{proposition} \begin{remark} \cref{prop:counting-brauer} also follows from Frankl-Graham-R\"odl \cite[Theorem 1]{fgr}, but our proof shows that it is directly in line with \cref{th:counting-partition}. Furthermore, our proof easily generalizes to other rings such as $\mathbb{Z}[i]$ and $\Fq[t]$. \end{remark} \begin{proof} We apply \cref{th:mpc-hom} with $\psi(y) = \ell y$ and $\phi_j(y) = jy$ for $1 \leq j \leq k$. Then there exist $m$ and $n$ depending only on $r$ and $k$ such that for any $r$-coloring of $S_m(x_1, \ldots, x_n)$, there are $x$ and $y$ of nonempty and disjoint support such that the configuration \[ \{ \ell y, x, x+y, \ldots, x+ky\} \] is monochromatic. Note that elements of $S_m(x_1, \ldots, x_n)$ are all linear forms in $x_1, \ldots, x_n$ with bounded integer coefficients. We now let $x_1, \ldots, x_n$ vary over $[-cN, cN]$ where $c$ is a small constant. Then $S_m(x_1, \ldots, x_n) \subset [-N, N]$. Under the partition $[-N, N] = \bigcup_{i=1}^r A_i$, each set $S_m(x_1, \ldots, x_n)$ contains a monochromatic configuration $\{ \ell y, x, x+y, \ldots, x+ky\}$. There are $\gg N^n$ monochromatic configurations arising in this way. However, a configuration may come from many different sets $S_m(x_1, \ldots, x_n)$. We will show that the number of tuples $(x_1, \ldots, x_n)$ giving rise to the same configuration $\{ \ell y, x, x+y, \ldots, x+ky\}$ is $\ll N^{n-2}$. Indeed, let $I, J$ be disjoint nonempty subsets of $[n]$ such that $x$ and $y$ are linear combinations with bounded coefficients of $(x_i)_{i \in I}$ and $(x_j)_{j \in J}$, respectively. 
For fixed $I$ and $J$, the number of choices for $(x_i)_{i \in I}$ is $\ll N^{|I|-1}$, since any choice of $(|I|-1)$ of the $x_i$'s gives at most one value for the remaining $x_i$. For the same reason, the number of choices for $(x_j)_{j \in J}$ is $\ll N^{|J|-1}$. Since there are finitely many pairs $(I, J)$, we see that the number of $(x_1, \ldots, x_n)$ that give rise to $\{ \ell y, x, x+y, \ldots, x+ky\}$ is $\ll N^{n-2}$. Hence the number of monochromatic configurations in $[-N, N]$ is $\gg N^2$, and we are done. \end{proof} Our next statement is essentially a diagonalization argument. \begin{proposition} \label{prop:counting-diagonal} Let $\mathcal{P}$ denote an arbitrary partition $\mathbb{Z} = \bigcup_{i=1}^r A_i$. Then there exists some $1 \leq i \leq r$ with the following property: For every $\ell \geq 0$, there is a constant $c(\ell, \mathcal{P})$ such that \[ \sum_{|x|, |y| \leq N} 1_{A_i}(y) 1_{A_i}(x) 1_{A_i}(x+ \ell y) \geq c(\ell, \mathcal{P}) N^2 \] for infinitely many $N \in \mathbb{N}$. \end{proposition} \begin{proof} Invoking \cref{prop:counting-brauer}, for each $k \in \mathbb{N}$, there is $i=f(k)$ such that for infinitely many $N$, we have \[ \sum_{|x|, |y| \leq N} 1_{A_i}(y) 1_{A_i}(x) 1_{A_i}(x+y) \cdots 1_{A_i}(x + ky) \geq c(k,1,r) N^2. \] Hence there exist an $i \in \{1, \ldots, r\}$ and an infinite set $K$ such that $f(k) = i$ for all $k \in K$. Let $\ell$ be arbitrary and pick $k \in K, k\geq \ell$. We have, for infinitely many $N$, \begin{multline*} \sum_{|x|, |y| \leq N} 1_{A_{i}}(y) 1_{A_{i}}(x) 1_{A_{i}}(x+ \ell y) \\ \geq \sum_{|x|, |y| \leq N} 1_{A_{i}}(y) 1_{A_{i}}(x) 1_{A_{i}}(x+y) \cdots 1_{A_{i}}(x + ky) \geq c(k,1,r) N^2, \end{multline*} thus proving the desired claim. \end{proof} \begin{remark} In the proof above, we do not have any control on $c(k,1, r)$ since we do not have control on $k$. As a result, the constant $c(\ell, \mathcal{P})$ above depends on the partition. 
It would be interesting to know whether this dependence is really necessary. \end{remark} We can now prove \cref{th:main-kr}. \begin{proof}[Proof of \cref{th:main-kr}(a)] Let $\mathbb{Z} = \bigcup_{i=1}^r A_i$ be an arbitrary partition and $s_1, s_2 \in \mathbb{Z} \setminus \{ 0 \}$. Without loss of generality, we assume $s_1, s_2 >0$. For a set $A \subset \mathbb{Z}$ and $N >0$, we write $A^{(N)}$ to denote $A \cap [-N,N]$. By Proposition \ref{prop:counting-brauer}, there exist $i \in [r]$ and an infinite set $\mathcal{N}$ such that \begin{equation}\label{eq:schur1} \sum_{|x|, |y| \leq N} 1_{A_i^{(N)}}(s_1 y) 1_{A_i^{(N)}}(x) 1_{A_i^{(N)}}(x+ s_2y) \geq c N^2 \end{equation} for any $N \in \mathcal{N}$, where $c>0$ is a constant independent of $N$. Let $N'$ be the smallest odd integer greater than $(3s_1 + s_2)N$. We identify $\mathbb{Z}_{N'}$ with $[ - \frac{N'-1}{2}, \frac{N'-1}{2}]$. Then \eqref{eq:schur1} implies that \begin{equation}\label{eq:schur2} \sum_{x, y \in \mathbb{Z}_{N'}} 1_{A_i^{(N)}}(s_1 y) 1_{A_i^{(N)}}(x) 1_{A_i^{(N)}}(x+ s_2y) \geq c' N'^2 \end{equation} for some constant $c'>0$ independent of $N$. Define \[ R(w) = \sum_{x, y \in \mathbb{Z}_{N'}} 1_{A^{(N)}_i}(s_1 y) 1_{A^{(N)}_i}(x+w) 1_{A^{(N)}_i}(x+ s_2 y). \] Then by the same argument as in the proof of \cref{prop:partition-bohr-stronger}, the set $\{w \in \mathbb{Z}_{N'}: R(w) > 0 \}$ contains a Bohr-$(k, \eta)$ set in $\mathbb{Z}_{N'}$, where $k$ and $\eta$ are independent of $N$. Note that $R(w)>0$ implies that there are $a, a', a'' \in A^{(N)}_i$ and $x, y \in \mathbb{Z}_{N'}$ such that \[ s_1 y \equiv a, \quad x+w \equiv a', \quad \textup{and} \quad x+s_2 y \equiv a'' \pmod{N'}. \] Therefore, \[ s_1 w = s_1 (x+w) - s_1 (x + s_2 y) + s_2 (s_1 y) \equiv s_1 a' - s_1 a'' + s_2 a \pmod{N'}. \] If $|w| \leq N$ then both sides differ by at most $(3s_1+s_2)N < N'$, so this congruence is an equality in $\mathbb{Z}$ due to the way we chose $N'$ and the fact that $|a|, |a'|, |a''| \leq N$. 
We have thus proved that, for each $N \in \mathcal{N}$, there exist $x_1, \ldots, x_k \in [-\frac{N'-1}{2}, \frac{N'-1}{2}]$ such that \[ (s_1 A_i - s_1 A_i + s_2 A_i)/s_1 \supset [-N, N] \cap \left\{ w \in \mathbb{Z}: \left| e \left(\frac{ x_j w}{N'} \right) -1 \right| < \eta \quad \forall j=1, \ldots, k\right\}. \] Here we are using the notation $A/c$ defined in \eqref{eq:ring2}. Letting $N \to \infty$ along $\mathcal{N}$ and passing to a subsequence if necessary, the sequence $(\frac{x_1}{N'}, \ldots, \frac{x_k}{N'})$ converges to a point $(\alpha_1, \ldots, \alpha_k)$ in $(\mathbb{R}/\mathbb{Z})^k$. Hence, \begin{equation*} \label{eq:bohrset1} (s_1 A_i - s_1 A_i + s_2 A_i)/s_1 \supset \left\{ w \in \mathbb{Z}: \left| e\left( \alpha_j w \right) -1 \right| < \frac{\eta}{2} \quad \forall j=1, \ldots, k\right\}. \end{equation*} This implies that \begin{equation*} \label{eq:bohrset2} s_1 A_i - s_1 A_i + s_2 A_i \supset \left\{ n \in \mathbb{Z}: \left| e\left( \frac{\alpha_j n}{s_1} \right) -1 \right| < \frac{\eta}{2} \quad \forall j=1, \ldots, k \right\} \cap s_1 \mathbb{Z}. \end{equation*} Since $s_1 \mathbb{Z}$ is a Bohr set and the intersection of two Bohr sets is a Bohr set, the proof is complete. \end{proof} \begin{proof}[Proof of \cref{th:main-kr}(b)] We proceed similarly to part (a), using \cref{prop:counting-diagonal} instead of \cref{prop:counting-brauer}. Let $\mathcal{P}$ be an arbitrary partition $\mathbb{Z} = \bigcup_{i=1}^r A_i$. Let $i$ be given by \cref{prop:counting-diagonal}. Let $s \in \mathbb{N}$ be arbitrary. Then there is an infinite set $\mathcal{N}_s \subset \mathbb{N}$ such that for any $N \in \mathcal{N}_s$, we have \begin{equation} \sum_{|x|, |y| \leq N} 1_{A_i^{(N)}}(y) 1_{A_i^{(N)}}(x) 1_{A_i^{(N)}}(x+ s y) \geq c(s, \mathcal{P}) N^2 \end{equation} for some constant $c(s, \mathcal{P})>0$ independent of $N$. Note that \[ w = (w+x) - (w+sy) + sy. \] The rest is identical to part (a). 
\end{proof} \subsection{Sumsets in $\mathbb{Z}[i]$} Even though the corresponding tori in the cases of $\mathbb{Z}[i]$ and $\Fq[t]$ differ slightly from that of $\mathbb{Z}$, the general approaches are very similar. Therefore, we will be brief and highlight only the differences. The following proposition is needed for the proof of \cref{th:nf}(b,c). We omit its proof since it is identical to those of Propositions \ref{prop:counting-brauer} and \ref{prop:counting-diagonal}. \begin{proposition}\label{prop:counting-brauer-nf}\ \begin{enumerate}[label=(\alph*)] \item Let $b, a_1, \ldots, a_k \in \mathbb{Z}[i]$ and $r>0$. There is a constant $c = c(b, a_1, \ldots, a_k, r) > 0$ such that the following holds: For $N$ sufficiently large, if $[-N, N]^2 = \bigcup_{j=1}^r A_j$, then for some $1 \leq j \leq r$, we have \[ \sum_{x, y \in [-N, N]^2} 1_{A_j}(by) 1_{A_j}(x) 1_{A_j}(x+a_1 y) \cdots 1_{A_j}(x + a_k y) \geq c N^4. \] \item Let $\mathcal{P}$ denote an arbitrary partition $\mathbb{Z}[i] = \bigcup_{j=1}^r A_j$. Then there exists some $1 \leq j \leq r$ with the following property: For each $\ell \in \mathbb{Z}[i]$, there is a constant $c(\ell, \mathcal{P})$ such that \[ \sum_{x, y \in [-N, N]^2} 1_{A_j}(y) 1_{A_j}(x) 1_{A_j}(x+ \ell y) \geq c(\ell, \mathcal{P}) N^4 \] for infinitely many $N \in \mathbb{N}$. \end{enumerate} \end{proposition} \begin{proof}[Proof of Theorem \ref{th:nf} (a)] Suppose $A \subset \mathbb{Z}[i]$ has $\overline{d}(A) = \delta >0$. Then for infinitely many $N$, we have $|A^{(N)}| \geq \delta N^2$, where $A^{(N)} = A \cap [-N, N]^2$. Let $N' = 2(|s_1| + |s_2| + |s_3|) N + 1$. We identify $[-\frac{N'-1}{2}, \frac{N'-1}{2}]^2$ with $\mathbb{Z}_{N'} \times \mathbb{Z}_{N'}$. 
By \cref{th:main-density}, the set $s_1 A + s_2 A + s_3 A$ contains a Bohr set in $\mathbb{Z}_{N'} \times \mathbb{Z}_{N'}$, which is of the form \[ \left\{ (w,v) \in \mathbb{Z}_{N'} \times \mathbb{Z}_{N'} : \left| e\left( \frac{wx_j + vy_j}{N'} \right) -1 \right| < \eta \quad \forall j=1, \ldots, k\right\} \] for some $x_1, \ldots, x_k, y_1, \ldots, y_k \in [-\frac{N'-1}{2}, \frac{N'-1}{2}]$, where $k$ and $\eta$ depend only on $\delta$ and $s_1, s_2, s_3$. If $(w, v)$ is in the Bohr set above and $|w|, |v| \leq N$, then there exist $a, a', a'' \in A^{(N)}$ such that \[ (w,v) = s_1 a + s_2 a' + s_3 a'', \] where the equality is in $\mathbb{Z}[i]$ and not just in $\mathbb{Z}_{N'} \times \mathbb{Z}_{N'}$. Hence, \[ s_1 A + s_2 A + s_3 A \supset [-N, N]^2 \cap \left\{ (w,v) \in \mathbb{Z}[i] : \left| e\left( \frac{wx_j + vy_j}{N'} \right) -1 \right| < \eta \quad \forall j=1, \ldots, k\right\}. \] Letting $N$ go to infinity along some subsequence, we have that \[ s_1 A + s_2 A + s_3 A \supset \left\{ (w,v) \in \mathbb{Z}[i] : \left| e\left( w \alpha_j + v\beta_j \right) -1 \right| < \frac{\eta}{2} \quad \forall j=1, \ldots, k\right\}, \] where $(\alpha_1, \ldots, \alpha_k, \beta_1, \ldots, \beta_k)$ is a limit point of $( \frac{x_1}{N'}, \ldots, \frac{x_k}{N'}, \frac{y_1}{N'}, \ldots, \frac{y_k}{N'} )$, and we are done. 
\end{proof} \begin{proof}[Proof of Theorem \ref{th:nf}(b)] Using Proposition \ref{prop:counting-brauer-nf}(a) and arguing similarly to the proof of Theorem \ref{th:main-kr}(a), we see that for some $1 \leq i \leq r$ and for infinitely many $N$, we have \[ (s_1 A_i - s_1 A_i + s_2 A_i)/s_1 \supset [-N, N]^2 \cap \left\{ (w,v) \in \mathbb{Z}[i] : \left| e\left( \frac{wx_j + vy_j}{N'} \right) -1 \right| < \eta \quad \forall j=1, \ldots, k\right\}. \] Letting $N$ go to infinity, we have \[ (s_1 A_i - s_1 A_i + s_2 A_i)/s_1 \supset \left\{ (w,v) \in \mathbb{Z}[i] : \left| e\left( w \alpha_j + v\beta_j \right) -1 \right| < \frac{\eta}{2} \quad \forall j=1, \ldots, k\right\}, \] where $(\alpha_1, \ldots, \alpha_k, \beta_1, \ldots, \beta_k)$ is a limit point of $( \frac{x_1}{N'}, \ldots, \frac{x_k}{N'}, \frac{y_1}{N'}, \ldots, \frac{y_k}{N'} )$. Note that \[ w \alpha_j + v \beta_j = \Re( (w + iv) (\alpha_j -i\beta_j)) \] and hence, \[ s_1 A_i - s_1 A_i + s_2 A_i \supset \left\{ z \in \mathbb{Z}[i] : \left| e\left( \Re \left( z \frac{\alpha_j - i \beta_j}{s_1} \right) \right) -1 \right| < \frac{\eta}{2} \quad \forall j=1, \ldots, k \right\} \cap s_1 \mathbb{Z}[i], \] which is a Bohr set by Lemma \ref{lem:subgroup-bohr}. \end{proof} The proof of Theorem \ref{th:nf}(c) is similar to part (b), using Proposition \ref{prop:counting-brauer-nf}(b) instead of Proposition \ref{prop:counting-brauer-nf}(a). \subsection{Sumsets in $\Fq[t]$} Let $p$ be a prime and $q$ be a power of $p$. First, let us introduce some standard facts about $\Fq[t]$. Let $\K=\Fq(t)$ be the field of fractions of $\Fq[t]$. For $f/g \in \K$ we define $|f/g|=q^{\deg(f)-\deg(g)}$ and $|0| =0$. The completion of $\K$ with respect to $|\cdot|$ is $\Kinf=\Fqt = \left\{ \sum_{i=-\infty}^n a_i t^i: a_i \in \Fq, n \in \mathbb{Z} \right\}$. Let $\mathbb{T}_q = \left\{ \sum_{i=-\infty}^{-1} a_i t^i: a_i \in \Fq \right\}$. 
Then $\Fq[t], \K, \Kinf, \mathbb{T}_q$ are the analogs of $\mathbb{Z}, \mathbb{Q}, \mathbb{R}$ and $\mathbb{R}/\mathbb{Z}$, respectively. For $x \in \Fq$, we write $e_q(x) = e\left( \frac{\textup{Tr} (x)}{p} \right)$, where $\textup{Tr}: \Fq \rightarrow \Fp$ is the trace map.\footnote{That is, $\textup{Tr}(x)$ is the trace of the $\mathbb{F}_p$-linear map $y \mapsto xy$ from $\Fq$ to $\Fq$, when $\Fq$ is viewed as a $\mathbb{F}_p$-vector space. In particular, $\textup{Tr}(x) \in \mathbb{F}_p$.} It can be checked that $x \mapsto e_q(ax)$ (where $a \in \Fq$) are all the additive characters of $\Fq$. If $\alpha = \sum_{i=-\infty}^n a_i t^i \in \Kinf$, we write $(\alpha)_{-1} = a_{-1}$ and $E(\alpha) = e_q(a_{-1})$. It can be checked that $f \mapsto E( f \alpha)$, where $\alpha \in \mathbb{T}_q$, are all the continuous characters of $\Fq[t]$. This also shows that $\mathbb{T}_q$ is the dual of $\Fq[t]$. Any Bohr set $B$ in $\Fq[t]$ is of the form \[ B = \left\{ f \in \Fq[t]: \left| E(f \alpha_i) -1 \right| < \eta \textup{ for } i=1, \ldots, k \right\}, \] where $\alpha_1, \ldots, \alpha_k \in \mathbb{T}_q$. If $\eta < |e(1/p)-1|$ then \[ B = \left\{ f \in \Fq[t]: \textup{Tr}((f \alpha_i)_{-1}) =0 \textup{ for } i=1, \ldots, k \right\}. \] This is an $\Fp$-subspace and not necessarily an $\Fq$-subspace. However, it contains the $\Fq$-subspace \[ \left\{ f \in \Fq[t]: (f \alpha_i)_{-1} =0 \textup{ for } i=1, \ldots, k \right\}. \] We write $G_N = \{ f\in \Fq[t]: \deg(f) < N\}$. For a set $A \subset \Fq[t]$, we write $A^{(N)}$ for $A \cap G_N$. Moreover, for each $N$, we fix a polynomial $P_N \in \Fq[t]$ of degree $N$. Then $G_N \cong \Fq[t]/(P_N)$. While $G_N$ is already a group, we work with $\Fq[t]/(P_N)$ since the multiplication $f \mapsto sf$ is a homomorphism on the latter. 
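To make this setup concrete, here is a small Python sketch of arithmetic in $\mathbb{F}_2[t]$ (the bitmask encoding of polynomials and the choice of modulus $P_3 = t^3 + t + 1$ are our own illustrative assumptions): it checks on an example that $f \mapsto sf$ is additive on $\mathbb{F}_2[t]/(P_N)$, which is the reason we work with $\mathbb{F}_q[t]/(P_N)$ rather than with $G_N$ under truncation.

```python
# Toy model of F_2[t]: the polynomial sum a_i t^i is stored as the integer
# whose i-th bit is a_i.  Reduction mod a degree-N polynomial P_N identifies
# G_N = {deg < N} with F_2[t]/(P_N).

def pmul(f, g):                      # carry-less (XOR) multiplication over F_2
    out = 0
    while g:
        if g & 1:
            out ^= f
        f <<= 1
        g >>= 1
    return out

def pmod(f, p):                      # remainder of f on division by p over F_2
    dp = p.bit_length() - 1
    while f and f.bit_length() - 1 >= dp:
        f ^= p << (f.bit_length() - 1 - dp)
    return f

P = 0b1011                           # P_3 = t^3 + t + 1
s = 0b10                             # s = t
f, g = 0b101, 0b110                  # f = t^2 + 1, g = t^2 + t

lhs = pmod(pmul(s, f ^ g), P)                      # s*(f + g) mod P_3
rhs = pmod(pmul(s, f), P) ^ pmod(pmul(s, g), P)    # s*f + s*g mod P_3
print(lhs == rhs)                    # True: f -> s f is a homomorphism mod P_3
```

Note that addition in $\mathbb{F}_2[t]$ is bitwise XOR, so no carries occur anywhere in this model.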
Using the same arguments as in Propositions \ref{prop:counting-brauer} and \ref{prop:counting-diagonal}, we can prove the following: \begin{proposition}\label{prop:counting-brauer-ff}\ \begin{enumerate}[label=(\alph*)] \item Let $b, a_1, \ldots, a_k \in \Fq[t]$ and $r>0$. There is a number $c=c(q, b, a_1, \ldots, a_k, r) >0$ such that the following holds. For $N$ sufficiently large, if $G_N = \bigcup_{i=1}^r A_i$, then for some $1 \leq i \leq r$, we have \[ \sum_{x, y \in G_N} 1_{A_i}(by) 1_{A_i}(x) 1_{A_i}(x+a_1 y) \cdots 1_{A_i}(x + a_k y) \geq c q^{2N}. \] \item Let $\mathcal{P}$ denote an arbitrary partition $\Fq[t] = \bigcup_{i=1}^r A_i$. Then there exists some $1 \leq i \leq r$ with the following property: For each $\ell \in \Fq[t]$, there is a constant $c(\ell, \mathcal{P})$ such that \[ \sum_{x, y \in G_N} 1_{A_i}(y) 1_{A_i}(x) 1_{A_i}(x+ \ell y) \geq c(\ell, \mathcal{P}) q^{2N} \] for infinitely many $N \in \mathbb{N}$. \end{enumerate} \end{proposition} \begin{proof}[Proof of Theorem \ref{th:ff}] We will sketch the proof of \cref{th:ff}(b). Parts (a) and (c) can be proved along the same lines. Let $\Fq[t] = \bigcup_{i=1}^r A_i$ be an arbitrary partition and $s_1, s_2 \in \Fq[t] \setminus \{ 0 \}$. By Proposition \ref{prop:counting-brauer-ff}(a), we know that there exist $1 \leq i \leq r$ and an infinite set $\mathcal{N}$ such that \begin{equation}\label{eq:schur3} \sum_{x, y \in G_N} 1_{A_i^{(N)}}(s_1 y) 1_{A_i^{(N)}}(x) 1_{A_i^{(N)}}(x+ s_2y) \gg q^{2N} \end{equation} for each $N \in \mathcal{N}$. Let $N'= \max(\deg s_1, \deg s_2) + N$. We identify $G_{N'}$ with $\Fq[t] / (P_{N'})$. Arguing similarly to the proof of Theorem \ref{th:main-kr}(a) and using \eqref{eq:schur3}, we find that \[ (s_1 A_i - s_1 A_i + s_2 A_i)/s_1 \supset G_N \cap \left\{ w \in \Fq[t]: (w \frac{x_j}{P_{N'}})_{-1} = 0 \quad \forall j=1, \ldots, k\right\}, \] for some $x_1, \ldots, x_k \in G_{N'}$. 
Letting $N \rightarrow \infty$ and using compactness of $\mathbb{T}_q$, we have \begin{equation*} (s_1 A_i - s_1 A_i + s_2 A_i)/s_1 \supset \left\{ w \in \Fq[t]: (w \alpha_j)_{-1} = 0\quad \forall j=1, \ldots, k\right\} \end{equation*} for some $\alpha_1, \ldots, \alpha_k \in \mathbb{T}_q$. Therefore, \[ s_1 A_i - s_1 A_i + s_2 A_i \supset \left\{ f \in \Fq[t]: (f \frac{\alpha_j}{s_1})_{-1} = 0\quad \forall j=1, \ldots, k\right\} \cap s_1 \Fq[t], \] which is clearly an $\Fq$-subspace of bounded codimension. \end{proof} \section{Open questions} \label{sec:open_question} \cref{th:main-kr}(b) says that in any partition $\mathbb{Z} = \bigcup_{i=1}^r A_i$, there exists an $i \in \{1, \ldots, r\}$ such that $A_i - A_i + sA_i$ contains a Bohr set for every $s \in \mathbb{Z} \setminus \{0\}$. In view of Katznelson and Ruzsa's question, \cref{th:main-kr}(b) naturally gives rise to the following question. \begin{question} \label{ques:B+sA} Suppose $A \subseteq \mathbb{Z}$ does not contain a Bohr set and $B \subseteq \mathbb{Z}$ is such that $B + sA$ contains a Bohr set for every $s \in \mathbb{Z} \setminus \{0\}$. Must it be true that $B$ contains a Bohr set? \end{question} A positive answer to \cref{ques:B+sA} would lead to a resolution of Katznelson-Ruzsa's question. However, it is likely that the answer to \cref{ques:B+sA} is negative. As mentioned in the introduction, we do not know whether the commuting conditions in \cref{th:main-partition} and \cref{th:main-density} can be removed entirely. It would be interesting to answer the following. \begin{question} Can the commuting conditions in \cref{th:main-partition} and \cref{th:main-density} be removed? \end{question}
https://arxiv.org/abs/2112.11997
Bohr sets in sumsets I: Compact groups
Let $G$ be a compact abelian group and $\phi_1, \phi_2, \phi_3$ be continuous endomorphisms on $G$. Under certain natural assumptions on the $\phi_i$'s, we prove the existence of Bohr sets in the sumset $\phi_1(A) + \phi_2(A) + \phi_3(A)$, where $A$ is either a set of positive Haar measure, or comes from a finite partition of $G$. The first result generalizes theorems of Bogolyubov and Bergelson-Ruzsa. As a variant of the second result, we show that for any partition $\mathbb{Z} = \bigcup_{i=1}^r A_i$, there exists an $i$ such that $A_i - A_i + sA_i$ contains a Bohr set for any $s \in \mathbb{Z} \setminus \{ 0 \}$. The latter is a step toward an open question of Katznelson and Ruzsa.
https://arxiv.org/abs/math/0702432
Exceptional points for Lebesgue's density theorem on the real line
For a nontrivial measurable set on the real line, there are always exceptional points, where the lower and upper densities of the set are neither zero nor one. We quantify this statement, following work by V. Kolyada, and obtain the unexpected result that there is always a point where the upper and the lower densities are closer to 1/2 than to zero or one. The method of proof uses a combinatorial restatement of the problem.
\section{Introduction and notation} \label{intro} \subsection{Formulation of the problem} Denote by $\lambda$ the Lebesgue measure on the real line. We will call a measurable set $S\subset\R$ {\em nontrivial} if neither $S$ nor $\R\setminus S$ is of measure zero. A point $p\in\R$ is called a {\em density point} of $S$ if \[ \lim_{\epsilon\ar0}\frac{\lambda(I_\epsilon(p)\cap S)}{2\epsilon}=1, \] where $I_\epsilon(p)$ is the interval $(p-\epsilon,p+\epsilon)$. The well-known Lebesgue density theorem, in a somewhat weakened form, states that \smallskip {\em For any measurable set $S\subset\R$, almost all points $p\in\R$ are either density points of $S$ or density points of $\R\setminus S$.} \smallskip It is a natural problem to investigate the set of what we will call {\em exceptional points} for $S$, i.e. points which are neither density points of $S$, nor those of $\R\setminus S$. Note that this is a topological notion, since as far as measure theory is concerned, there are no such exceptional points. First, we quantify the notion of exceptional point: given a measurable $S\subset\R$ and $0\leq \delta\leq1/2$, we will call $p\in \R$ a $\delta${\em-exceptional point} for $S$ if \[ \delta\leq\liminf_{\epsilon\ar0}\frac{\lambda(I_\epsilon(p)\cap S)} {2\epsilon}\leq\limsup_{\epsilon\ar0}\frac{\lambda(I_\epsilon(p) \cap S)}{2\epsilon}\leq1-\delta. \] Let $0\leq\delta\leq1/2$. In this article, we will be studying the statement \[ \hd\delta:\;\text{\bf There is a }\delta\text{\bf -exceptional point for every nontrivial }S\subset\R. \] Clearly, if $\delta_1>\delta_2$ then $\hd{\delta_1}$ implies $\hd{\delta_2}$. 
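The definition of a $\delta$-exceptional point can be illustrated numerically. The following Python sketch (the set $S$ and the scales are our own choices, not taken from the paper) takes $S = \bigcup_{n \geq 1} (4^{-n}, 2\cdot 4^{-n})$ and evaluates $\lambda(I_\epsilon(0)\cap S)/2\epsilon$ exactly at the scales $\epsilon = 4^{-k}$ and $\epsilon = 2\cdot 4^{-k}$; the ratio oscillates between $1/6$ and $1/3$, so the point $0$ is a $1/6$-exceptional point for this $S$.

```python
# Numerical illustration (our own example): for S = union of (4^-n, 2*4^-n),
# the relative measure of S in I_eps(0) oscillates between 1/6 and 1/3 as
# eps -> 0, so 0 is a 1/6-exceptional point for S.
from fractions import Fraction as F

def rel_measure(eps, depth=60):
    """lambda(I_eps(0) ∩ S) / (2 eps), computed with exact rationals."""
    total = F(0)
    for n in range(1, depth):          # truncate S at a negligible tail
        lo = max(F(1, 4**n), -eps)
        hi = min(F(2, 4**n), eps)
        if hi > lo:
            total += hi - lo
    return total / (2 * eps)

ratios = [rel_measure(F(j, 4**k)) for k in range(5, 15) for j in (1, 2)]
print(float(min(ratios)), float(max(ratios)))   # approximately 1/6 and 1/3
```

At $\epsilon = 4^{-k}$ the intersection consists of the intervals with $n > k$, of total measure $4^{-k}/3$, giving ratio $1/6$; at $\epsilon = 2\cdot 4^{-k}$ the interval $n = k$ is included as well, giving ratio $1/3$.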
The central problem we are addressing is finding the universal constant $\delta_{\mathcal{H}}$: \[ \delta_{\mathcal{H}}=\sup\{\delta|\; \hd\delta\text{ is true}\}.\] \subsection{The history of the problem} The problem of determining the constant $\delta_{\mathcal{H}}$ was introduced and studied in \cite[\S4]{vik}; in this paper Victor Kolyada showed that \[ 1/4\leq\delta_{\mathcal{H}}\leq(\sqrt{17}-3)/4\sim0.2807764 \] On his suggestion, the question of proving the inequality $1/4\leq\delta_{\mathcal{H}}$ became one of the problems in the 1983 Schweitzer competition (cf. \cite[Problem 9, 1983]{schw}), a contest for mathematics undergraduates in Hungary. As it turned out, the author could not solve this problem at the time, and, as a result, failed to win the first prize in the competition. Probably, to some extent motivated by this disappointment, the author undertook a thorough study of the problem after the competition, and this led to the result obtained in 1984, which, with apologies for the considerable delay, we submit in the present paper. \subsection{Results, and contents of the paper} There is a simple analytic proof of the fact that $\delta_{\mathcal{H}}\geq 1/4$; we recall this proof in \S1. In \S2 we describe a combinatorial restatement of our problem, and using this combinatorial approach, in \S3, we give an upper bound on $\delta_{\mathcal{H}}$. We conjecture that this upper bound, which is a solution of a cubic equation, and is approximately 0.272, is, in fact, the value of $\delta_{\mathcal{H}}$. The main result of the paper is described in the last section, where we prove a lower bound on $\delta_{\mathcal{H}}$. This lower bound is also a solution of a cubic equation; its value is about 0.263. {\sc Notation and conventions}: In this article, every set is assumed to be measurable. All intervals will be considered open. The length of an interval $J$ will be denoted by $|J|$. 
We denote by $I_\epsilon(p)$ the $\epsilon$-neighborhood of the point $p\in\R$, i.e. the interval $(p-\epsilon, p+\epsilon)$. Given an interval $I\subset\R$ and a subset $H\subset\R$, denote by $\lambda(H|I)$ the relative measure of $H$ in I, i.e. \[ \lambda(H|I) = \frac{\lambda(H\cap I)}{|I|}. \] Given a set $S\subset\R$ and a number $a\in\R$ we denote by $a+S$ the set $\{a+x;\;x\in S\}$ and by $a-S$ the set $\{a-x;\;x\in S\}$. {\sc Acknowledgment.} We would like to thank Victor Kolyada for useful comments and references, and extend our gratitude to Mikl\'os Laczkovich for his help and encouragement. \section{The solution of the Schweitzer problem} \begin{prop} \label{quarter} The statement $\hd{1/4}$ is true. \end{prop} Let us see the proof. We are given a nontrivial $S\subset\R$, and we are looking for a 1/4-exceptional point for $S$. Let $a$ be a density point for $S$ and $b$ be a density point for the complement of $S$. Without loss of generality we may assume that $a=0$ and $b=1$. Denote by $\ws$ the truncated set $\ws=(-\infty,0)\cup S\setminus(1,\infty)$ and let $d_{\ws}(x) = \lambda(\ws\cap(x,\infty))$. The function \[f(x)=d_{\ws}(x)+x/2\] goes to infinity linearly as $x\ar\pm\infty$, and its derivative is negative at 0, and positive at 1. This implies that $f(x)$ has a global minimum at a point $p$ in the {\em interior} of the interval $(0,1)$. Now, given an arbitrary $\epsilon>0$, we have \begin{multline} \lambda(\ws|I_\epsilon(p))\geq\frac{\lambda((p-\epsilon,p) \cap \ws)}{2\epsilon}=\frac{d_{\ws}(p-\epsilon)-d_{\ws}(p)}{2\epsilon} \\ =\frac{(d_{\ws}(p-\epsilon)+(p-\epsilon)/2)-(d_{\ws}(p)+p/2)}{2\epsilon} +\frac14=\frac{f(p-\epsilon)-f(p)}{2\epsilon}+\frac14\geq\frac14; \end{multline} similarly, one sees that \[ \lambda(\ws|I_\epsilon(p)) \leq\frac34.\] As $0<p<1$, the sets $S$ and $\ws$ coincide near $p$, and thus $p$ is a $1/4$-exceptional point for $S$. 
This proves that $\hd{1/4}$ holds.\qed It does not appear that this proof can be improved upon easily, so it seems natural to conjecture that, in fact, $\delta_{\mathcal{H}}=1/4$. Thus we were very surprised to discover otherwise. To explain the reasons behind this phenomenon, we first recast the problem in a discrete form. \section{Combinatorial restatement} Based on an idea of Mikl\'os Laczkovich, we formulate a combinatorial problem, which turns out to be equivalent to determining whether $\hd\delta$ is true (also cf. \cite[\S4]{vik}). Given a finite, increasing sequence of positive real numbers, \[ 0<a_1<b_1<\dots<a_r<b_r,\] we call the union of intervals \[ C=(-\infty,0)\cup\bigcup_{i=1}^r(a_i,b_i) \] a {\em configuration}, and the elements of the sequence, including 0, the {\em endpoints} of $C$. Given $\delta$, $0\leq\delta\leq1/2$, we denote by $\kd \delta$ the following statement: \\ $\kd\delta$:\; {\bf For every configuration $C$, there is an endpoint $c$ such that \[ \delta\leq \lambda(C|I_\omega(c))\leq1-\delta\text{ for all } \omega>0.\] } \smallskip For the convenience of the reader, we write down the {\bf negation} of $\kd\delta$ as well:\\ {\em There exists a configuration $C$ such that for every endpoint $c$ of $C$ there is a positive radius $\omega(c)$ such that $\lambda(C|I_{\omega(c)}(c))\notin[\delta,1-\delta]$.} \smallskip Again, clearly $\kd{\delta_1}$ implies $\kd{\delta_2}$ if $\delta_1>\delta_2$. Set $\delta_{\mathcal{K}}=\sup\{\delta>0;\,\kd{\delta}\text{ true}\}$. \begin{prop}\label{hequalk} We have $\delta_{\mathcal{H}}=\delta_{\mathcal{K}}$. \end{prop} \begin{proof} First we show that if $\hd \delta$ is false, then so is $\kd{\delta+\tau}$ for any $\tau>0$. Assume that $S$ is a counterexample to $\hd{\delta}$. Using the cut-off construction from the beginning of the proof of Proposition \ref{quarter}, without loss of generality, we can assume that $(1,\infty)\cap S=\emptyset$ and $(-\infty,0)\subset S$. 
Then for every $x$ in the closed interval $[0,1]$, there exists a radius $\epsilon(x)$ such that $\lambda(S|I_{\epsilon(x)}(x))\notin[\delta,1-\delta]$. At the cost of increasing $\delta$, one may put a uniform lower bound on $\epsilon(x)$. Indeed, fix a small $t>0$. It is easy to check that for $y\in I_{t\epsilon(x)}(x)$ we have $\lambda(S|I_{\epsilon(x)}(y))\notin[\delta+t,1-\delta-t]$. Since $[0,1]$ is compact, it is covered by finitely many of the intervals $I_{t\epsilon(x)}(x)$. Pick such a finite cover and denote by $\eta$ the least of the radii $\epsilon(x)$ in it. Then for each $y\in [0,1]$ there is an $x\in[0,1]$ such that $y\in I_{t\epsilon(x)}(x)$, $\epsilon(x)\geq\eta$ and \[\lambda(S|I_{\epsilon(x)}(y))\notin[\delta+t,1-\delta-t]. \] Finally, by approximating $S$ with a finite union of intervals, we can find a configuration $C$ such that for any interval $I$ we have \[ |\lambda(I\cap C)-\lambda(I\cap S)|<t\eta. \] Then by applying to each endpoint of $C$ the last two inequalities, we can convince ourselves that $C$ provides a counterexample to $\kd{\delta+2t}$. This clearly shows that $\delta_\mathcal{K} \leq \delta_\mathcal{H}$. Now we prove the opposite inequality. Assume that the configuration $C$ is a counterexample to $\kd \delta$. This means that for each endpoint $c$ of $C$ there is a radius $\omega(c)>0$ such that $\lambda(C|I_{\omega(c)}(c))\notin[\delta,1-\delta]$. Denote the least and greatest among the positive numbers $\omega(c)$ by $\omega_{\min}$ and $\omega_{\max}$ respectively. Without loss of generality, we can assume that $C\subset(-\infty,1)$; let $\tilde C=C\cap(0,1)$. Fix a small $\epsilon>0$ and let $H_1=\tilde C$. We define a finite disjoint union of intervals $H_n$ by induction as follows: write $H_n=\cup_{j=1}^{r(n)}(a_j(n),b_j(n))$ and let \[ H_{n+1} = \bigcup_{j=1}^{r(n)}\left([a_j(n)-\epsilon^n\tilde C] \cup(a_j(n),b_j(n))\cup[b_j(n)+\epsilon^n\tilde C]\right). \] In particular, $H_n\subset H_{n+1}$. 
Finally, let $H=\cup_{n=1}^\infty H_n$. We will now show that for any $\tau>0$ one can choose a sufficiently small $\epsilon>0$ such that $H=H(\epsilon)$ is a counterexample to $\hd {\delta+\tau}$. Pick an arbitrary point $x\in\R$. We need to compute $\liminf/\limsup$ of the density of the set $H$ around $x$. Clearly, we can assume that $x$ is a boundary point of $H$, otherwise the density is 0 or 1. Pick a positive integer $n$ and denote by $v=v_n$ the endpoint of $H_n$ closest to $x$. Since $C$ is a counterexample to $\kd \delta$, there is a radius $\omega=\omega_n$, $\omega_{\min}\leq\omega\leq\omega_{\max}$ such that \begin{equation} \label{hn} \lambda(H_n|I_{\epsilon^{n-1}\omega}(v))<\delta. \end{equation} For simplicity of notation, we will suppress the other possibility: $>1-\delta$. We would like to estimate $\lambda(H|I_{\epsilon^{n-1} \omega}(x))$. First, using the trivial bound $\lambda(\tilde C)\leq1$, we obtain \begin{equation} \label{hhn} \lambda((H\setminus H_n)\cap I_{\epsilon^{n-1}\omega}(v)) <\epsilon^{n-1} \frac{ M\epsilon}{1-M\epsilon}, \end{equation} where $M$ is the number of endpoints of $C$. Next, we can estimate the distance between $x$ and $v$ as \begin{equation} \label{xv} |x-v|\leq \epsilon^n. \end{equation} Combining the inequalities \eqref{hn}, \eqref{hhn} and \eqref{xv}, a short computation shows that \[ \lambda(H|I_{\epsilon^{n-1}\omega}(x))<\delta+ \frac\epsilon{2\omega}\left(1+\frac M{1-M\epsilon}\right). \] Thus given any $\tau>0$, we can choose $\epsilon$ sufficiently small, so that we have \[\lambda(H|I_{\epsilon^{n-1}\omega_n}(x))\notin(\delta+\tau,1-\delta-\tau)\] for the sequence of intervals constructed above. Since clearly $\epsilon^{n-1}\omega_n\rightarrow0$, we can conclude that $\delta_\mathcal{K} \geq \delta_\mathcal{H}$, and this completes the proof. \end{proof} \section{An upper bound}\label{upper} The main goal of this article is to estimate the constant $\delta_\mathcal{H}$ introduced in \S\ref{intro}. 
The rather ``natural'' proof of Proposition \ref{quarter} seems to suggest that $\delta_\mathcal{H}=1/4$. In the next section, we will prove, however, that $\delta_\mathcal{H}>1/4$! Proposition \ref{hequalk} shows that we can study the constant $\delta_\mathcal{K}$ instead of $\delta_\mathcal{H}$. The following statement provides an upper bound for $\delta_\mathcal{K}$. \begin{prop} \label{counter} If $(2\delta)^3+(2\delta)^2+2\delta>1$, then there is a counterexample to $\kd \delta$. \end{prop} \begin{remark} This provides the bound $\delta_\mathcal{K}<0.2719$. \end{remark} \begin{proof} We construct a configuration $C(m,s,N)\subset(-\infty,1)$ depending on 2 parameters, $0<m,s<1$ and a large integer $N$. The construction goes as follows. We consider the interval $(1-m,1)$, and divide it into $N$ equal parts. Next we break each of these parts into two: an initial piece proportional to $s$ and a final piece, proportional to $1-s$, and then take the union of these initial pieces: \[ C(m,s,N)\setminus(-\infty,0)= \left\{x\in(1-m,1);\;0<\left\{\frac{N(x+m-1)}{m}\right\}<s\right\}, \] where $\{y\}$ stands for the fractional part of the real number $y$. Then we can compile the following table: the first column lists the endpoints of $C(m,s,N)$, the second a certain chosen radius, and the last one twice the corresponding density. \begin{center} \begin{tabular}{l|l|l} endpoint $v$ & radius $r$ & $2\lambda(C(m,s,N)|I_r(v))$\\ \hline 0 & 1 & $sm+1$\\ $1-m$ & $m$ & $2-(1/m-s)$\\ $\sim1$ & 1 & $\sim sm$\\ all other & $sm/N$ & $2-(1/s-1)$ \end{tabular} \end{center} The third line of the table represents the last endpoint of $C(m,s,N)$; it approaches 1 as $N\rightarrow\infty$ and the corresponding density has been computed in this limit as well. It is clear that all but this endpoint give densities $>1/2$, and that the first density: $sm+1$, is always greater than the second: $2-(1/m-s)$. 
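The table entries are straightforward to verify numerically. The following pure-Python sketch (with illustrative, non-optimal parameter values $m$, $s$, $N$ satisfying $m>1/2$ and $s>1/2$, as the second and fourth rows of the table require) measures twice the density of $C(m,s,N)$ in the listed intervals; the window $(-1,0)$ stands in for $(-\infty,0)$, which is harmless since every radius used is at most 1:

```python
# Numerical check of the density table for C(m, s, N).  The configuration,
# restricted to (-1, 1), is the interval (-1, 0) together with the N
# "initial pieces" of the subdivided interval (1-m, 1).

def measure(pieces, lo, hi):
    """Lebesgue measure of (union of pieces) intersected with (lo, hi)."""
    return sum(max(0.0, min(b, hi) - max(a, lo)) for a, b in pieces)

def two_lambda(pieces, v, r):
    """Twice the density of the configuration in I_r(v) = (v-r, v+r)."""
    return measure(pieces, v - r, v + r) / r

m, s, N = 0.8, 0.6, 1000                     # illustrative, not optimal
pieces = [(-1.0, 0.0)]                       # stands in for (-infty, 0)
for j in range(N):
    a = 1 - m + j * m / N                    # left end of the j-th part
    pieces.append((a, a + s * m / N))        # keep its initial piece

tol = 1e-6
# endpoint 0, radius 1: twice the density is s*m + 1
assert abs(two_lambda(pieces, 0.0, 1.0) - (s * m + 1)) < tol
# endpoint 1-m, radius m: twice the density is 2 - (1/m - s)
assert abs(two_lambda(pieces, 1 - m, m) - (2 - (1 / m - s))) < tol
# last endpoint (~1), radius 1: twice the density tends to s*m as N grows
v_last = 1 - m + (N - 1) * m / N + s * m / N
assert abs(two_lambda(pieces, v_last, 1.0) - s * m) < 1e-2
# a generic interior endpoint, radius s*m/N: twice the density is 2 - (1/s - 1)
v_int = 1 - m + m / N                        # left end of the second piece
assert abs(two_lambda(pieces, v_int, s * m / N) - (2 - (1 / s - 1))) < tol
```

As the table indicates, the only density below $1/2$ is the one at the last endpoint, and it is the quantity to be maximized in the optimization that follows.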
Then a simple argument shows that the optimal configuration (in the limit when $N\rightarrow\infty$) is achieved when \begin{equation} \label{triple} \frac1m-s=sm=\frac1s-1. \end{equation} Indeed, it is sufficient to check that the gradients of the three two-variable functions which appear here are never collinear. Eliminating $m$ from \eqref{triple} we obtain \[ 2s^3-2s^2+2s=1.\] This quickly leads to the equation \[ q^3+q^2+q=1\] for the parameter $q=1/s-1$, which represents twice the density. This completes the proof. \end{proof} We conjecture that this is, in fact, an optimal construction. \begin{conj} \label{conj} The universal constant $\delta_\mathcal{K}$ is the only real root of the cubic equation \[ (2\delta)^3+(2\delta)^2+2\delta=1.\] \end{conj} We have not been able to prove this conjecture; see, however, Remark \ref{last}. \section{The Main Result} \begin{theorem} $\kd \delta$ is true if $4\delta^3+2\delta^2+3\delta<1$. \end{theorem} \begin{remark} The theorem provides the lower bound $\delta_\mathcal{K}>0.2629$. \end{remark} We start with a simple Lemma. \begin{lemma} \label{first} Suppose that an interval $I$ is represented as a not necessarily disjoint union of intervals: $I=\cup_{j=1}^nI_j$. Assume that $0<\delta<1$, and let $B$ be a measurable set such that $\lambda(B|I_j)\geq 1-\delta$ for $j=1,\dots,n$. Then \[ \lambda(B|I)\geq \frac{1-\delta}{1+\delta}. \] \end{lemma} \begin{proof} Without loss of generality we can assume that $I=(0,1)$, and that our system of intervals $I_j=(a_j,b_j)$, $j=1,\dots,n$, satisfies \begin{enumerate} \item $a_j<a_{j+1}$, for $j=1,\dots,n-1$, i.e. the left endpoints form an increasing sequence, and \item $I_j\cap I_{j+2}=\emptyset$ for $j=1,\dots,n-2$. \end{enumerate} Indeed, the first condition can be satisfied by renumbering the intervals, and the second by eliminating intervals which are contained in the union of the rest of the system. 
Introduce the following parameters of the system: setting $I_0=I_{n+1}=\emptyset$, for $1\leq j\leq n$ let \begin{eqnarray*} & x_j=\lambda(I_j\cap I_{j+1}),&x_j^B=\lambda(I_j\cap I_{j+1}\cap B),\\ & y_j = \lambda(I_j\setminus(I_{j-1}\cup I_{j+1})),& y^B_j = \lambda((B\cap I_j)\setminus(I_{j-1}\cup I_{j+1})). \end{eqnarray*} Using these parameters, we can rewrite the inequality $\lambda(B|I_j)\geq1-\delta$ as \[ x^B_{j-1}+y^B_j+x^B_j\geq(1-\delta)(x_{j-1}+y_j+x_j). \] Summing these inequalities for $j=1,\dots,n$, we obtain \[ \frac{2x^B+y^B}{2x+y}\geq1-\delta, \] where \[ x=\sumj x_j,\,y=\sumj y_j,\, x^B=\sumj x_j^B,\,y^B=\sumj y_j^B. \] Now using the fact that $x+y=1$, and that $x^B\leq x$, we can conclude that \[ \frac{2x^B+y^B}{1+x^B}\geq1-\delta. \] Hence \[ (1+\delta)x^B+y^B\geq 1-\delta, \] which implies that \[ x^B+y^B\geq\frac{1-\delta}{1+\delta}. \] This last inequality is exactly the statement of the Lemma. \end{proof} Now we begin the proof of the Theorem. Assume that $\kd \delta$ does not hold for some $0<\delta<\frac12$. Our results so far show that in this case $1/4< \delta$. Then let \[ C=(-\infty,0)\cup(a_1,b_1)\cup\dots\cup(a_r,b_r=1) \] be a configuration which is a counterexample to $\kd \delta$ with the {\em least} possible number $r$ of intervals in it. For each endpoint $p$ of $C$, introduce the set \[ D_p = \{\omega\in \R_{\geq0};\; \lambda(C|I_\omega(p))\notin(\delta,1-\delta)\}, \] and let $\omega(p)=\sup D_p$. Note that, by our assumption, $D_p$ is nonempty for every endpoint $p$ of $C$. \begin{definition} We will call an endpoint $p$ {\em black} if $\lambda(C|I_{\omega(p)}(p))\geq1-\delta$, and {\em white} if $\lambda(C|I_{\omega(p)}(p))\leq \delta$. Denote the set of black endpoints by $\B=\B(C)$, and the set of white endpoints by $\W=\W(C)$. \end{definition} Notice that $0$ is a black, while $1$ is a white endpoint. \begin{lemma} \label{second} If $p$ is a black endpoint and $p\leq1/2$, then either $\omega(p)<p$ or $\omega(p)\geq1-p$. 
Similarly, for $p\in\W$ and $p\geq1/2$, we have $\omega(p)< 1-p$ or $\omega(p)\geq p$. \end{lemma} \begin{proof} Assume that, contrary to the statement of the Lemma, there is a $p\in\B$ such that $\omega(p)\geq p$ and $p+\omega(p)<1$. We will arrive at a contradiction from these assumptions. First we observe that we must have $b_i\leq p+\omega(p)\leq a_{i+1}$ for some $i<r$. Indeed, if $p+\omega(p)$ were an interior point of an interval in $C$, then for a sufficiently small $\epsilon>0$, the density $\lambda(C|I_{\omega(p)+\epsilon}(p))$ would be strictly greater than the density $\lambda(C|I_{\omega(p)}(p))$; this contradicts the definition of $\omega(p)$ as the maximal radius $\omega$ for which $\lambda(C|I_{\omega}(p))\geq1-\delta$. Now we claim that the configuration \[ C_{p+\omega(p)} = C\setminus(p+\omega(p),\infty) \] is a counterexample to $\kd \delta$. For every vertex $v$ of $C_{p+\omega(p)}$, we need to find an appropriate radius $\ot(v)$ such that \begin{equation} \label{toprove} \lambda(C_{p+\omega(p)}|I_{\ot(v)}(v))\notin[\delta,1-\delta]. \end{equation} It follows from our observation above that the vertices of $C_{p+\omega(p)}$ form a subset of the vertices of $C$. If $v\in\W(C)$, or if $v\in\B(C)$ and $v+\omega(v)\leq p+\omega(p)$, then \eqref{toprove} is easy to satisfy: one chooses $\ot(v)=\omega(v)$. Now pick a black vertex $v\in\B(C)$ with $v+\omega(v)> p+\omega(p)$, the remaining case. To show that $C_{p+\omega(p)}$ is a counterexample to $\kd \delta$ we prove that \[ \lambda(C_{p+\omega(p)}|I_{p+\omega(p)-v}(v))>1-\delta.\] Indeed, the definition of $\omega(p)$ implies that $\lambda(C|(p+\omega(p),v+\omega(v)))<1-2\delta$. This, in turn, means that \[ 1-\delta = \lambda(C|I_{\omega(v)}(v))<\lambda(C|I_{p+\omega(p)-v}(v)).\] Now observe that the configuration $C_{p+\omega(p)}$ has fewer intervals than $C$. The fact that it provides a counterexample to $\kd \delta$ contradicts $C$ being a counterexample with the fewest possible number of intervals in it.
This completes the proof of the Lemma. \end{proof} We can divide the set $\{v\in\B;\; v\leq\frac12\}$ into two groups: in the first group we collect the endpoints which satisfy $\omega(v)<v$; the second group will contain the endpoints for which $\omega(v)\geq v$, in which case $\omega(v)\geq1-v$ according to Lemma \ref{second}. This second group is always nonempty since 0 is in it. Introduce a special notation for the largest endpoint from the second group: \[ v_\B=\max\{v\in\B;\; v\leq1/2,\,\omega(v)\geq1-v\},\] and also let \[ v_\W=\min\{v\in\W;\; v\geq1/2,\, \omega(v)\geq v\}.\] In addition, set $\rho=\lambda(C\cap(0,1))$ and $I_\circ=(v_\B,v_\W)$. \begin{lemma} \label{third} In the notation introduced above, we have \[ \frac{1-\rho}{2(1-v_\B)}\leq \delta\quad\text{and}\quad\frac\rho{2v_\W}\leq \delta.\] \end{lemma} \begin{proof} It is easy to see that if for a black endpoint $v$ between $0$ and $1/2$ we have $\omega(v)\geq1-v$, then $\lambda(C|I_{1-v}(v))\geq1-\delta$. This implies the first inequality. The second one is proved similarly. \end{proof} The following statement is the heart of our argument. Its proof will take up most of the remainder of the paper. \begin{prop} \label{fourth} \[ \rho\geq\frac{1-\delta}{1+\delta}|I_\circ|\quad\text{or}\quad \rho\leq\left(1-\frac{1-\delta}{1+\delta}\right)|I_\circ|. \] \end{prop} \begin{proof} If $C$ has no endpoints inside $I_\circ$, then the statement of the Proposition is satisfied trivially. We can thus assume that the set $F$ of endpoints of $C$ inside $I_\circ$ is non-empty: \[ F = \{v\in\B\cup\W;\;v_\B<v<v_\W\}\neq\emptyset.\] Now for $v\in\B$ denote by $\mu(v)$ the radius of the interval around $v$ in which the density of $C$ is maximal. Thus for any $\omega>0$, we have \[ \lambda(C|I_{\mu(v)}(v))\geq \lambda(C|I_{\omega}(v)).\] Similarly, for $v\in\W$, we denote by $\mu(v)$ the radius of the interval around $v$ in which the density of $C$ is minimal.
\begin{lemma}\label{cimp} If $p\in F$, then $I_{\mu(p)}(p)\subset(0,1)$. \end{lemma} \begin{proof} Assume first that $p\leq\frac12$. If $p\in\B$, then $\mu(p)\leq p$ because of the definition of $v_\B$. If $p\in\W$ and $\lambda(C|I_p(p))\leq\frac12$, then $\lambda(C|I_{\omega}(p))$ will increase with $\omega$ for $\omega>p$. This implies that in this case, again, $\mu(p)\leq p$. The proof in the case when $p>\frac12$ is analogous. \end{proof} Now we construct two subsets $\fb$ and $\fw$ of the interval $(0,1)$ as follows. Let \begin{eqnarray*} &\fb_1 = \cup\{I_{\mu(p)}(p);\;p\in F\cap\B\}, & \fw_1 = \cup\{I_{\mu(p)}(p);\;p\in F\cap\W\}\\ &\fb_2 = \cup\{(a_i,b_i);\;\fb_1\cap(a_i,b_i)\neq\emptyset\}, &\fw_2 = \cup\{(b_i,a_{i+1});\;\fw_1\cap(b_i,a_{i+1})\neq\emptyset\}\\ &\fb=\fb_1\cup\fb_2,&\fw=\fw_1\cup\fw_2. \end{eqnarray*} Clearly, all these sets are unions of intervals. \begin{lemma}\label{icircin} \[ I_\circ\subset \fb\cup\fw\subset(0,1).\] \end{lemma} \begin{proof} The fact that $\fb,\fw\subset(0,1)$ easily follows from Lemma \ref{cimp}. Now let $(a_i,b_i)\subset I_\circ$. Then either $a_i$ or $b_i$ is an element of $F$, i.e. lies in the interior of $I_\circ$. Assume that $a_i\in F$. If $a_i\in\W$, then $(a_i,b_i)\subset I_{\mu(a_i)}(a_i)$, and thus $(a_i,b_i)\subset\fw_1$. On the other hand, if $a_i\in\B$, then obviously $(a_i,b_i)\subset \fb_2$. The other case, $b_i\in F$, is similar. It is not hard to see that the same method of proof works for the intervals of the form $(b_j,a_{j+1})$. This completes the proof of the Lemma. \end{proof} \begin{lemma}\label{mainlemma} \begin{enumerate} \item The set $\fb$ is a union of intervals of the form $(a_i,b_j)$, $i\leq j$, while the set $\fw$ is a union of intervals of the form $(b_i,a_j)$, $i<j$. \item Let the intervals $J_\B$ and $J_\W$ be connected components of the sets $\fb$ and $\fw$, respectively.
Then exactly one of the following 3 possibilities takes place: \[ J_\B\cap J_\W=\emptyset\;\text{ or }\;J_\B\subset J_\W\; \text{ or }\;J_\W\subset J_\B.\] \end{enumerate} \end{lemma} \begin{proof} To prove the first statement, observe that for $p\in F\cap\B$, the interval $I_{\mu(p)}(p)$ has to have its two boundary points in the closure of $C$ in order to conform with the definition of $\mu(p)$. The two intervals of $C$ containing these boundary points are subsets of $\fb_2$ by construction, and this completes the proof for $\fb$. The proof is similar for $\fw$. Now we turn to the second statement, which is the key to our whole argument. It follows from (1) that $J_\B=(a_i,b_j)$ and $J_\W=(b_k,a_l)$ for some indices $0\leq i,j,k,l\leq r$. If the two intervals $J_\B$ and $J_\W$ were not situated as described in the statement, then we would have one of the following two remaining possibilities: \begin{equation} \label{twocases} a_i<b_k<b_j<a_l\quad\text{ or }\quad b_k<a_i<a_l<b_j. \end{equation} Consider the first of these two cases. We claim that if it were to take place, then the configuration \[ \tc=\left[(-\infty,b_k)\cup [C\cap(b_k,b_j)]\right]-b_k \] would be a counterexample to $\kd \delta$. As $\tc$ has fewer intervals than $C$, this would contradict the minimality of $C$. Indeed, consider first a black endpoint $p$ of $C$ between $b_k$ and $b_j$: $b_k\leq p\leq b_j$, $p\in\B$. We can conclude from the definition of $J_\B$ that $p+\mu(p)\leq b_j$. Then clearly \[ \lambda(\tc|I_{\mu(p)}(p))\geq \lambda(C|I_{\mu(p)}(p))\geq1-\delta.\] The proof is analogous when $b_k\leq p\leq b_j$ and $p\in\W$. The second case of \eqref{twocases} is symmetric to the first one. In this case \[ \tc = a_l-[(a_l,\infty)\cup[C\cap(a_i,a_l)]],\] and the argument is the same as above. \end{proof} \begin{cor} Either $I_\circ\subset\fb$ or $I_\circ\subset\fw$.
\end{cor} This immediately follows from Lemmas \ref{icircin} and \ref{mainlemma}: if an interval $I$ is contained in the union of a system of intervals, any two of which are either disjoint or nested, then, in fact, $I$ is already contained in one of the intervals of the system. Now we are ready to finish the proof of Proposition \ref{fourth}. Because of the symmetry of the problem, without loss of generality, we can assume that $I_\circ\subset J$, where the interval $J$ is a connected component of $\fb$. By our construction, the interval $J$ is a subset of $(0,1)$, and it is a union of intervals of the form $(a_i,b_i)$ and $I_{\mu(p)}(p)$ with $p\in\B$. Thus it satisfies the conditions of Lemma \ref{first}, and we can conclude that \[ \lambda(C\cap J) \geq \frac{1-\delta}{1+\delta}|J|.\] As $|I_\circ|\leq|J|$ and $\lambda(C\cap J)\leq\lambda(C\cap(0,1))$, this implies the statement of the Proposition, and the proof is complete. \end{proof} To prove our main Theorem, all that is left is to make a little calculation. According to Lemma \ref{third}, we have \begin{equation} \label{onerho} 1-\rho\leq 2\delta(1-v_\B)\quad\text{and} \quad\rho\leq2\delta v_\W. \end{equation} Adding up the two inequalities we obtain $1\leq2\delta(1+v_\W-v_\B)$, which can also be written as \begin{equation} \label{penult} |I_\circ| \geq \frac1{2\delta}-1. \end{equation} In addition, the second inequality of \eqref{onerho} implies that \begin{equation} \label{small} \rho\leq2\delta. \end{equation} Substituting \eqref{penult} and \eqref{small} into the inequality of Proposition \ref{fourth}, we obtain \[ 2\delta\geq\frac{1-\delta}{1+\delta}\left(\frac1{2\delta}-1\right).\] Expanding this inequality leads to \[ 4\delta^3+2\delta^2+3\delta\geq1,\] which completes the proof of the Theorem.
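The two numerical bounds quoted in the remarks can be confirmed by locating the roots of the corresponding cubics; both polynomials are increasing in $\delta$ on $(0,1/2)$, so each has a unique root there. A small pure-Python bisection sketch:

```python
# Bisection for the roots of the two cubics bounding delta_K.

def bisect(f, lo, hi, iters=200):
    """Root of an increasing function f on [lo, hi] with f(lo)<0<f(hi)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# lower bound (the Theorem): root of 4d^3 + 2d^2 + 3d - 1
lower = bisect(lambda d: 4 * d**3 + 2 * d**2 + 3 * d - 1, 0.0, 0.5)
# upper bound (Proposition of Section "An upper bound", and the Conjecture):
# root of (2d)^3 + (2d)^2 + 2d - 1
upper = bisect(lambda d: (2 * d)**3 + (2 * d)**2 + 2 * d - 1, 0.0, 0.5)

assert 0.2629 < lower < 0.2630      # delta_K > 0.2629
assert 0.2718 < upper < 0.2719      # delta_K < 0.2719
assert lower < upper
```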
\begin{remark}\label{last} If we could replace $\frac{1-\delta}{1+\delta}$ by $\frac1{1+2\delta}$ in Proposition \ref{fourth}, then the same calculation would lead to the inequality $8\delta^3+4\delta^2+2\delta\geq1$. This does not seem impossible, because in Lemma \ref{first} we did not use the fact that we are dealing with a special system of intervals. This would confirm our conjecture, made at the end of \S\ref{upper}. \end{remark}
https://arxiv.org/abs/math/0702432
Exceptional points for Lebesgue's density theorem on the real line
For a nontrivial measurable set on the real line, there are always exceptional points, where the lower and upper densities of the set are neither zero nor one. We quantify this statement, following work by V. Kolyada, and obtain the unexpected result that there is always a point where the upper and the lower densities are closer to 1/2 than to zero or one. The method of proof uses a combinatorial restatement of the problem.
https://arxiv.org/abs/1510.00747
Elementary triangular matrices and inverses of $k$-Hessenberg and triangular matrices
We use elementary triangular matrices to obtain some factorization, multiplication, and inversion properties of triangular matrices. We also obtain explicit expressions for the inverses of strict $k$-Hessenberg matrices and banded matrices. Our results can be extended to the cases of block triangular and block Hessenberg matrices.
\section{Introduction} The importance of triangular, Hessenberg, and banded matrices is well-known. Many problems in linear algebra and matrix theory are solved by some kind of reduction to problems involving such types of matrices. This occurs, for example, with the $LU$ factorizations and the $QR$ algorithms. In this paper, we study first some simple properties of triangular matrices using a particular class of such matrices that we call elementary. Using elementary matrices we obtain factorization and inversion properties and a formula for powers of triangular matrices. Some of our results may be useful to develop parallel algorithms to compute powers and inverses of triangular matrices and also of block-triangular matrices. In the second part of the paper we obtain an explicit formula for the inverse of a strict $k$-Hessenberg matrix in terms of the inverse of an associated triangular matrix. Our formula is obtained by extending $n \times n$ $k$-Hessenberg matrices to $(n+k) \times (n+k)$ invertible triangular matrices and using some natural block decompositions. Our formula can be applied to find the inverses of tridiagonal and banded matrices. The problem of finding the inverses of Hessenberg and banded matrices has been studied by several authors, using different approaches, such as determinants and recursive algorithms. See \cite{Elouafi}, \cite{Ikebe}, \cite{Piff}, \cite{Xu}, and \cite{Yama}. \section{Elementary triangular matrices} In this section $n$ denotes a fixed positive integer, $N=\{1,2,\ldots,n\}$, and $\mathcal T$ denotes the set of lower triangular $n \times n$ matrices with complex entries. An element of $\mathcal T$ is called {\em elementary} if it is of the form $I +C_k$, for some $k \le n $, where $I$ is the identity matrix and $C_k$ is lower triangular and has all of its nonzero entries in the $k$-th column. Let $A=[a_{j,k}] \in \mathcal T$. 
For each $ k \in N$ we define $E_k$ as the matrix obtained from the identity matrix $I_n$ by replacing its $k$-th column with the $k$-th column of $A$, that is, $(E_k)_{j,k}=a_{j,k}$ for $j=k,k+1,\ldots, n$, $(E_k)_{j,j}=1 $ for $ j \ne k$, and all the other entries of $E_k$ are zero. The matrices $E_k$ are called the {\em elementary factors} of $A$ because $$ A = E_1 E_2 \cdots E_n . \eqno(2.1)$$ Let us note that performing the multiplications in (2.1) does not require any arithmetical operations. It is just putting the columns of $A$ in their proper places. If for some $k$ we have $a_{k,k}\ne 0$ then $E_k$ is invertible and it is easy to verify that $$ (E_k)^{-1} = I - \frac{1}{a_{k,k}} (E_k -I). \eqno(2.2)$$ Note that $(E_k)^{-1}$ is also an elementary lower triangular matrix. If $A$ is invertible then all of its elementary factors are invertible and from (2.1) we obtain $$ A^{-1} = (E_n)^{-1} (E_{n-1})^{-1} \cdots (E_2)^{-1} (E_1)^{-1}. \eqno(2.3) $$ Therefore $A^{-1}$ is the product of the elementary factors of the matrix $B= (E_1)^{-1} (E_2)^{-1}\cdots (E_n)^{-1}$, but in reverse order. Notice that $B= I + (I -A) D^{-1},$ where $D=\hbox{\, Diag}(a_{1,1},a_{2,2},\ldots,a_{n,n})$. Therefore $$B^{-1}=E_n E_{n-1} \cdots E_2 E_1. \eqno(2.4)$$ If we are only interested in computing the inverse of $A$ we can find the inverse of $\tilde A=A D^{-1}, $ which has all its diagonal entries equal to one, and then we have $A^{-1}= D^{-1} {\tilde A}^{-1}$. This means that it would be enough to consider matrices with diagonal entries equal to one to study the construction of inverses of triangular matrices. If we consider general triangular matrices we can also obtain results about the computation of powers of such matrices. We will obtain next some results about products of elementary factors. We define $C_k= E_k - I$ for $k \in N$. 
Note that $(C_k)_{k,k}= a_{k,k} -1$, $(C_k)_{j,k}= a_{j,k} $ for $j=k+1,\ldots,n$, and all the other entries are zero, that is, all the nonzero entries of $C_k$ lie in the $k$-th column. It is clear that $A$ has the following additive decomposition $$A= I + C_1+C_2+ \cdots + C_n. \eqno(2.5)$$ Let $L$ be the $n \times n$ matrix such that $L_{k+1,k}=1$ for $k=1,2,\ldots,n-1$ and all its other entries are zero. We call $L$ the shift matrix, since $M L$ is $M$ shifted one column to the left. Note that $\mathcal T$ is invariant under the maps $M \to M L$ and $M \to L M$. In order to simplify the notation we will write the entries in the main diagonal of $A$ as $a_{k,k}=x_k$ for $k=1,2,\ldots,n.$ The matrices $C_k$ have simple multiplication properties that we list in the following theorem. \begin{thm} \begin{enumerate} \item{ If $ j < k $ then $C_j C_k=0$.} \item{ For $m \ge 1$ we have $C_k^m=(x_k -1)^{m-1} C_k$.} \item{ If $ k > j $ then $ C_k C_j= a_{k,j} C_k L^{k-j}.$} \end{enumerate} \end{thm} The proofs are straightforward computations. Notice that all the nonzero entries of $C_k L^{k-j}$ are in the $j$-th column. From part 3 of the previous theorem we obtain immediately the following multiplication formula. If $r\ge 2$ and $k_1 >k_2 > \cdots > k_r$ then $$ C_{k_1} C_{k_2} \cdots C_{k_r}=a_{k_1,k_2} a_{k_2,k_3} \cdots a_{k_{r-1},k_r} C_{k_1} L^{k_1 -k_r}. \eqno(2.6)$$ If $K$ is a subset of $N$ with at least two elements we define the matrix $$ G(K)= a_{k_1,k_2} a_{k_2,k_3} \cdots a_{k_{r-1},k_r} C_{k_1} L^{k_1 -k_r}, \eqno(2.7)$$ where $K=\{k_1,k_2,\ldots, k_r\}$ and $k_1 >k_2 > \cdots > k_r.$ Let us note that all the nonzero entries of $G(K) $ are in its $k_r$-th column. If $K$ contains only one element, that is, $K=\{ j \}$ then we put $G(K)=C_j$. Since $ E_k= I + C_k$, the multiplication properties of the $C_k$ can be used to obtain some corresponding properties of the $E_k$.
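The identities of Theorem 2.1 are easy to confirm numerically; the following pure-Python sketch (the $3\times3$ matrix and its entries are illustrative) checks all three parts:

```python
# Numerical check of the identities in Theorem 2.1.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def zeros(n):
    return [[0.0] * n for _ in range(n)]

A = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [4.0, 5.0, 0.5]]
n = len(A)

def C(k):                     # k is 1-based, as in the text
    M = zeros(n)
    M[k - 1][k - 1] = A[k - 1][k - 1] - 1.0   # (C_k)_{k,k} = a_{k,k} - 1
    for i in range(k, n):
        M[i][k - 1] = A[i][k - 1]             # (C_k)_{j,k} = a_{j,k}, j > k
    return M

L = zeros(n)                  # the shift matrix
for i in range(n - 1):
    L[i + 1][i] = 1.0

def close(X, Y):
    return all(abs(X[i][j] - Y[i][j]) < 1e-12
               for i in range(n) for j in range(n))

# (1) C_j C_k = 0 for j < k
assert close(matmul(C(1), C(2)), zeros(n))
# (2) C_k^2 = (x_k - 1) C_k
x2 = A[1][1]
assert close(matmul(C(2), C(2)),
             [[(x2 - 1.0) * v for v in row] for row in C(2)])
# (3) C_k C_j = a_{k,j} C_k L^{k-j} for k > j (here k = 3, j = 1)
lhs = matmul(C(3), C(1))
rhs = matmul(C(3), matmul(L, L))
assert close(lhs, [[A[2][0] * v for v in row] for row in rhs])
```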
\begin{thm} \begin{enumerate} \item{ If $ k > j $ then $E_k E_j = I + C_k +C_j + G(\{k,j\}).$} \item{ For $m \ge 1$ we have $$E_k^m=I + \frac{ x_k^m -1}{x_k -1}\, C_k, $$ where the quotient is interpreted as $m$ when $x_k=1$.} \item{ If $K=\{k_1,k_2,\ldots, k_r\}$ and $k_1 >k_2 > \cdots > k_r$ then $$ E_{k_1} E_{k_2} \cdots E_{k_r} = I + \sum_{\emptyset \ne J \subseteq K} G(J). $$ } \end{enumerate} \end{thm} Proof: The proof of the first part is trivial. For the second part, use part 2 of Theorem 2.1 and the binomial formula. For part 3, write each factor in the form $E_{k_j}=I + C_{k_j}$, expand the product, collect terms and use the definition of the function $G$. Alternatively, we can use part 1 repeatedly and then use the definition of $G$. \eop Observe that the number of summands in part 3 is at most equal to the number of nonempty subsets of $K$. If some of the entries $a_{j,k}$ are equal to zero then some of the matrices $G(J)$ are zero. Taking $K=N$ in part 3 of Theorem 2.2 we obtain $$ E_n E_{n-1} \cdots E_2 E_1 = I + \sum_{\emptyset \ne J\subseteq N} G(J) . \eqno(2.8) $$ Let $k \in N $. Then the summands in (2.8) that may have nonzero entries in the $k$-th column (other than $I$) are the matrices $G(\{k\}\cup J)$ where $J \subseteq \{k+1,k+2,\ldots,n\}$, and thus the number of such matrices is at most equal to $2^{n-k}$. For $k=n$ there is only one, which is $C_n$, for $k=n-1$ they are $C_{n-1}$ and $G(\{n-1,n\})$, and so on. Since $G(\{k\}\cup J)$ is a scalar multiple of $ C_m L^{m-k} $ where $m$ is the largest element of $J$, we can group together all the terms having the same largest element $m$ and therefore the $k$-th column of the product in (2.8) is a linear combination of $C_k, C_{k+1}L, C_{k+2}L^2, \ldots, C_n L^{n-k}$, and the $k$-th column of the identity matrix. Therefore, if we have computed $E_n E_{n-1} \cdots E_{k}$ then the columns with indices $k,k+1,\ldots,n$ are determined and do not change when we multiply by $E_{k-1}, E_{k-2}$, etc.
Thus $E_n E_{n-1} \cdots E_{k+1}$ and $E_n E_{n-1} \cdots E_{k+1} E_{k}$ only differ in their $k$-th columns. This means that computing the sequence $E_n E_{n-1} \cdots E_{k}$, for $k=n, n-1,\ldots,2,1$ we obtain $B^{-1}$ column by column, starting from the last one. This procedure may be useful to develop parallel algorithms for the computation of inverses of triangular matrices. We consider now explicit expressions for the positive powers of a triangular matrix $A$ in terms of its elementary factors. We start with $A^2$. Using (2.5) and part 1 of Theorem 2.1 we get $$A^2=(I + C_1+C_2+\cdots + C_n)^2=I+ 2 \sum_{k=1}^n C_k +\sum_{k=1}^n C_k^2 + \sum_{j<k} C_k C_j. \eqno(2.9)$$ Therefore, by Theorem 2.1 we have $$A^2=I + \sum_{k=1}^n (1+ x_k) C_k +\sum_{j<k} a_{k,j} C_k L^{k-j} , \eqno(2.10)$$ where the last sum runs over all pairs $(j,k)$ such that $1 \le j < k \le n$. Let $K=\{k_1,k_2,\ldots, k_r\}\subseteq N$, where $k_1 >k_2 > \cdots > k_r, $ and let $m$ be a positive integer. We define the scalar valued function $g(K,m)$ as follows $$g(K,m)=\Delta[1, x_{k_1},x_{k_2},\ldots, x_{k_r}] t^{m+r}, \eqno(2.11)$$ where $\Delta[1, x_{k_1},x_{k_2},\ldots, x_{k_r}]$ denotes the divided differences functional with respect to the numbers $1, x_{k_1},x_{k_2},\ldots, x_{k_r}$. $g(K,m)$ is a symmetric polynomial in the $x_j$. For the properties of divided differences see \cite{ddlrs}. Using induction and some basic properties of divided differences we can obtain an expression for $A^m$ in terms of matrices of the form $G(J)$. \begin{thm} For $m \ge 1$ we have $$A^m= I + \sum_{j=1}^n g(\{j\},m-1) C_j +\sum_{J\subset N,\ |J|=2} g(J,m-2) G(J)+ \qquad \qquad \qquad $$ $$\qquad \qquad \qquad \sum_{J\subset N,\ |J|=3} g(J,m-3) G(J)+ \cdots + \sum_{J\subset N,\ |J|=m} g(J,0) G(J). \eqno(2.12)$$ \end{thm} Note that the numbers $g(J,0)$ in the last sum are all equal to 1. A similar result for triangular matrices with distinct diagonal entries $x_j$ was obtained by Shur \cite{Shur}. 
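The identities (2.1)--(2.3) are easy to exercise on a small example. The following pure-Python sketch (the $3\times3$ matrix and its entries are illustrative) builds the elementary factors of a lower triangular $A$, checks the factorization, and assembles $A^{-1}$ from the inverses of the factors in reverse order:

```python
# Elementary factorization (2.1) and inversion (2.2)-(2.3) of a
# lower triangular matrix, on a small example.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

A = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [4.0, 5.0, 0.5]]
n = len(A)

# (2.1): E_k agrees with A in column k and with I elsewhere; A = E_1 E_2 E_3
E = []
for k in range(n):
    Ek = identity(n)
    for i in range(k, n):
        Ek[i][k] = A[i][k]
    E.append(Ek)
P = E[0]
for Ek in E[1:]:
    P = matmul(P, Ek)
assert all(abs(P[i][j] - A[i][j]) < 1e-12 for i in range(n) for j in range(n))

# (2.2): the inverse of an elementary factor is again elementary
def inv_elem(k):
    Inv = identity(n)
    Inv[k][k] = 1.0 / A[k][k]
    for i in range(k + 1, n):
        Inv[i][k] = -A[i][k] / A[k][k]
    return Inv

# (2.3): A^{-1} = (E_n)^{-1} (E_{n-1})^{-1} ... (E_1)^{-1}
Ainv = inv_elem(n - 1)
for k in range(n - 2, -1, -1):
    Ainv = matmul(Ainv, inv_elem(k))
I = identity(n)
prod = matmul(A, Ainv)
assert all(abs(prod[i][j] - I[i][j]) < 1e-12
           for i in range(n) for j in range(n))
```

Printing the partial products $(E_n)^{-1}\cdots(E_k)^{-1}$ in the loop shows the column-by-column behavior described above: each multiplication fills in one more column of the inverse, from the last column to the first.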
Other formulas for powers of general square matrices appear in \cite{FoM}. \section{Inverses of Hessenberg matrices} In this section we use the fact that a $k$-Hessenberg matrix $H$ is a submatrix of a larger triangular matrix to obtain a formula for the inverse of $H$, in case it exists. We also characterize the invertible $k$-Hessenberg matrices in terms of properties of the blocks in a natural block decomposition. We call an $n \times n$ matrix $H=[h_{i,j}]$ lower $k$-Hessenberg if $h_{i,j}=0$ for $i < j-k$. We say that $H$ is strict lower $k$-Hessenberg if $h_{i,j}\ne 0$ for $i=j-k$. Any lower $k$-Hessenberg $n \times n$ matrix $H$ has the following block decomposition. Let $m=n-k$. Then $$H= \left[ \begin{matrix} B & A \cr D & C \cr \end{matrix} \right], \eqno(3.1)$$ where $A$ is $m \times m$, $B$ is $m \times k$, $C$ is $k \times m$, and $D$ is $k \times k$. Note that $A$ is lower triangular and it is invertible if $H$ is strict $k$-Hessenberg. Extending the block decomposition of equation (3.1) to form a lower triangular matrix we obtain the following result. \begin{thm} Let $H$ be strict $k$-Hessenberg with the block decomposition given in (3.1). Then $H$ is invertible if and only if $C A^{-1} B -D$ is invertible and if $H$ is invertible we have $$ H^{-1} = \left[ \begin{matrix} 0 & 0 \cr A^{-1} & 0 \cr \end{matrix} \right] - \left[ \begin{matrix} I_k \cr E \cr \end{matrix} \right] G^{-1} \left[ \begin{matrix} F & I_k \cr \end{matrix} \right], \eqno(3.2) $$ where $I_k$ is the $k \times k$ identity matrix, $E=-A^{-1}B,$ \ \ $F=-C A^{-1}$, and $G=C A^{-1} B -D$. \end{thm} Proof: Suppose that $G=C A^{-1} B -D$ is invertible. Define the lower triangular $(n+k) \times (n+k)$ matrix $T$ by $$ T= \left[ \begin{matrix} I_k & 0 & 0 \cr B & A & 0 \cr D & C & I_k \cr \end{matrix} \right]. 
\eqno(3.3) $$ It is easy to verify that $$ T^{-1} = \left[ \begin{matrix} I_k & 0 & 0 \cr E & A^{-1} & 0 \cr G & F & I_k \cr \end{matrix} \right], \eqno(3.4)$$ where $E, F, $ and $G$ are as previously defined. Consider now the block decomposition $$ T= \left[ \begin{matrix} R & 0 \cr H & S \cr \end{matrix} \right], $$ where $$ R= \left[ \begin{matrix} I_k & 0 \end{matrix} \right], \qquad S= \left[ \begin{matrix} 0 \cr I_k \end{matrix} \right]. $$ From $T T^{-1} = I_{n+k}$ we obtain the equations $$ H \left[ \begin{matrix} 0 & 0 \cr A^{-1} & 0 \cr \end{matrix} \right] + S \left[ \begin{matrix} F & I_k \cr \end{matrix} \right] = I_n, \eqno(3.5)$$ and $$ H \left[ \begin{matrix} I_k \cr E \cr \end{matrix} \right] +S G = 0. \eqno(3.6)$$ Since $G$ is invertible by hypothesis, we can solve for $S$ in the last equation and substitute the resulting expression for $S$ in equation (3.5). In this way we obtain $$ H \left[ \begin{matrix} 0 & 0 \cr A^{-1} & 0 \cr \end{matrix} \right] - H \left[ \begin{matrix} I_k \cr E \cr \end{matrix} \right] G^{-1} \left[ \begin{matrix} F & I_k \cr \end{matrix} \right] = I_n , $$ which implies that $H$ is invertible and (3.2) holds. Now suppose that $H$ is invertible and let $$ H^{-1}= \left[ \begin{matrix} U & V \cr W & Y \cr \end{matrix} \right] , $$ where the block decomposition is compatible with the decomposition of $H$ given in (3.1). Then, from the equation $H^{-1} H =I_n$ we obtain $$ UB +VD =I_k, \qquad UA + VC =0. \eqno(3.7)$$ Since $A$ is invertible the second equation in (3.7) yields $U + VC A^{-1}=0$, and multiplying by $B$ on the right we obtain $ UB + V C A^{-1}B =0$. Combining this last equation with the first one in (3.7) we get $ V ( C A^{-1}B -D) = -I_k$ and therefore $G$ is invertible and $G^{-1}=-V$. \eop Let us observe that an important part of the computation of $H^{-1}$ is the computation of the inverse of the triangular matrix $A$. 
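Formula (3.2) can be checked numerically on a small example. The sketch below (pure Python, with an illustrative $4\times4$ strict $1$-Hessenberg matrix) extracts the blocks of (3.1), inverts the triangular block $A$ by forward substitution, assembles the right-hand side of (3.2), and verifies that the result is indeed $H^{-1}$:

```python
# Numerical check of formula (3.2) with n = 4, k = 1.

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def tri_inv(T):
    """Invert a lower triangular matrix by forward substitution."""
    m = len(T)
    Inv = [[0.0] * m for _ in range(m)]
    for j in range(m):
        Inv[j][j] = 1.0 / T[j][j]
        for i in range(j + 1, m):
            Inv[i][j] = -sum(T[i][t] * Inv[t][j]
                             for t in range(j, i)) / T[i][i]
    return Inv

n, k = 4, 1
m = n - k
H = [[1.0, 2.0, 0.0, 0.0],   # strict lower 1-Hessenberg: h_{i,i+1} != 0
     [3.0, 1.0, 4.0, 0.0],
     [1.0, 5.0, 2.0, 6.0],
     [2.0, 1.0, 3.0, 1.0]]

# the blocks of (3.1)
B = [[H[i][j] for j in range(k)] for i in range(m)]           # m x k
A = [[H[i][j + k] for j in range(m)] for i in range(m)]       # m x m, lower tri
D = [[H[i + m][j] for j in range(k)] for i in range(k)]       # k x k
C = [[H[i + m][j + k] for j in range(m)] for i in range(k)]   # k x m

Ainv = tri_inv(A)
E = [[-x for x in row] for row in matmul(Ainv, B)]            # E = -A^{-1}B
F = [[-x for x in row] for row in matmul(C, Ainv)]            # F = -CA^{-1}
CAB = matmul(matmul(C, Ainv), B)
G = [[CAB[i][j] - D[i][j] for j in range(k)] for i in range(k)]
Ginv = [[1.0 / G[0][0]]]                                      # k = 1: G is a number

Ik = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(k)]
# first term of (3.2): k zero rows on top, then [A^{-1}  0]
T1 = [[0.0] * n for _ in range(k)] + [Ainv[i] + [0.0] * k for i in range(m)]
# second term: [[I_k],[E]] G^{-1} [F  I_k]
col = Ik + E
row = [F[i] + Ik[i] for i in range(k)]
T2 = matmul(matmul(col, Ginv), row)

Hinv = [[T1[i][j] - T2[i][j] for j in range(n)] for i in range(n)]
P = matmul(H, Hinv)
assert all(abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(n) for j in range(n))
```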
Note also that any given strict $k$-Hessenberg matrix can be modified to become invertible by changing the $k \times k$ block $D$ of (3.1) in a suitable way. In the case of $k=1$ the matrix $G$ reduces to a number and then the second term in the right-hand side of (3.2) is the product of a column vector times a row vector. See \cite{Ikebe}. Note that Theorem 3.1 holds for tridiagonal matrices and also for banded matrices. Suppose that $k=1$, $H$ is tridiagonal, and $n \ge 3$. Then the matrices in the block decomposition of $H$ are $B=[ h_{1,1}\ h_{2,1} \ 0 \ 0 \ \cdots \ 0]^{\mathsf {T}}$, \ $C=[0 \ 0 \ \cdots \ 0\ h_{n,n-1}\ h_{n,n}],$ and $D=0$. In this case $A$ is lower triangular and tridiagonal. Using the row version of the theory of elementary triangular matrices, that we describe in the next section, it is easy to construct a recursive algorithm to compute the inverses of tridiagonal matrices as the size $n$ increases. The proof of Theorem 3.1 only uses the hypothesis that the block $A$ is invertible. Therefore the theorem holds also for other types of matrices, such as block Hessenberg matrices. \section{Row and block versions of the theory of elementary triangular matrices} In this section we present a brief description of two variations on the theory of elementary triangular matrices presented in section 2. The first one is obtained when we consider lower triangular matrices that are the sum of the identity matrix plus a matrix that has all of its nonzero entries in a single {\em row.} In the second one we consider block lower triangular matrices with square blocks along the diagonal that may have different sizes. Recall that $\mathcal T$ denotes the $n \times n$ lower triangular matrices. An element of $\mathcal T$ is called {\em row elementary} if it is of the form $I +R_k$, for some $k \le n $, where $I$ is the identity matrix and $R_k$ is lower triangular and has all of its nonzero entries in the $k$-th row. Let $A=[a_{k,j}] \in \mathcal T$. 
For $ k \in N$ we define $F_k$ as the matrix obtained from the identity matrix $I$ by replacing its $k$-th row with the $k$-th row of $A$, that is, $(F_k)_{k, j}=a_{k, j}$ for $j=1,2,\ldots, k$, $(F_k)_{j,j}=1 $ for $ j \ne k$, and all the other entries of $F_k$ are zero. The matrices $F_k$ are called the {\em elementary factors by rows } of $A$ because $$ A = F_1 F_2 \cdots F_n . \eqno(4.1)$$ If for some $k$ we have $a_{k,k}\ne 0$ then $F_k$ is invertible and $$ (F_k)^{-1} = I - \frac{1}{a_{k,k}} (F_k -I). \eqno(4.2)$$ Therefore, if $A$ is invertible then $$ A^{-1}= (F_n)^{-1} (F_{n-1})^{-1} \cdots (F_2)^{-1} (F_1)^{-1}. \eqno(4.3)$$ The matrices $(F_j)^{-1}$ are the elementary factors by rows of the matrix $$B= I - D^{-1} (A -I), \qquad D=\hbox{\,Diag}(a_{1,1},a_{2,2},\ldots,a_{n,n}), $$ and $$B^{-1}=F_n F_{n-1} \cdots F_2 F_1. \eqno(4.4)$$ Define $ R_k =F_k -I$ for $k \in N$. Then $A=I + R_1+R_2+\cdots +R_n$. It is easy to see that $F_k F_{k-1} \cdots F_2 F_1$ and $F_{k-1} \cdots F_2 F_1$ only differ in the $k$-th row, and the difference is a linear combination of translates of $R_1, R_2, \ldots, R_k$. Note that $F_k F_{k-1} \cdots F_2 F_1$ is the inverse of the submatrix of $B$ obtained by deleting the rows and columns with indices $k+1,k+2, \ldots, n$, which is often called the $k \times k$ section of $B$. Therefore, computing the sequence of matrices $F_k F_{k-1} \cdots F_2 F_1$ for $k=1,2,3,\ldots $ yields a recursive algorithm that gives the inverse of $B$ row by row. That algorithm can also be used to find the inverses of the sections of infinite lower triangular matrices such as the ones considered in \cite{LVS}. The inversion algorithm introduced in \cite{LVS} can be combined with the computation of inverses of diagonal blocks of a triangular matrix, using multiplication of elementary matrices, by rows or by columns. The concept of elementary triangular matrices (by columns or by rows) can be generalized to the case of block triangular matrices in a natural way. 
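The row factorization (4.1) and the inverse formulas (4.2)--(4.3) can be checked mechanically. Here is a small Python sketch (ours, not from the paper; the example matrix and helper names are illustrative) over exact rationals.

```python
# Numerical illustration (ours) of (4.1)-(4.3): a lower triangular A with
# nonzero diagonal factors as A = F_1 F_2 ... F_n, where F_k is the identity
# with row k replaced by row k of A, and each (F_k)^{-1} comes from (4.2).
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def row_factor(A, k):
    """F_k: the identity with row k replaced by row k of A (0-indexed)."""
    F = identity(len(A))
    F[k] = [Fraction(x) for x in A[k]]
    return F

def row_factor_inv(F, k):
    """(F_k)^{-1} = I - (1/a_kk)(F_k - I), formula (4.2)."""
    n, a = len(F), F[k][k]
    I = identity(n)
    return [[I[i][j] - (F[i][j] - I[i][j]) / a for j in range(n)]
            for i in range(n)]

A = [[2, 0, 0, 0],
     [1, 3, 0, 0],
     [4, 0, 1, 0],
     [0, 2, 5, 2]]
n = len(A)
Fs = [row_factor(A, k) for k in range(n)]

P = Fs[0]
for F in Fs[1:]:
    P = matmul(P, F)
assert P == A                        # (4.1): A = F_1 F_2 ... F_n

Ainv = identity(n)
for k in reversed(range(n)):         # (4.3): A^{-1} = (F_n)^{-1} ... (F_1)^{-1}
    Ainv = matmul(Ainv, row_factor_inv(Fs[k], k))
assert matmul(A, Ainv) == identity(n)
```

The reverse order of the factors in the second loop mirrors (4.3); multiplying them in the forward order would not invert $A$.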
We describe next how it is done in the case of column block elementary matrices. Let $k_1,k_2,\ldots,k_r$ be positive integers such that $n=k_1+k_2+\cdots +k_r$. Let $X_j$ be a $k_j \times k_j$ matrix for $1 \le j \le r$ and let $A$ be an $n \times n$ block matrix that has the matrices $X_j$ along the diagonal and all its other nonzero entries below the diagonal blocks. For $j \in \{1,2,\ldots,r\}$ let $E_j$, called the block elementary factor of $A$ by columns, be the matrix that coincides with $A$ in all the columns corresponding to the diagonal block $X_j$, that is, the columns with indices between $k_1+k_2+\cdots +k_{j-1}+1$ and $k_1+k_2+\cdots +k_{j}$, and coincides with the identity matrix in the rest of the columns. Then we have $A= E_1 E_2 \cdots E_r$. If the block $X_j$ is invertible then $E_j$ is also invertible and $$(E_j)^{-1}= I - (E_j-I) \hbox{\,Diag}(I_{m_j}, (X_j)^{-1}, I_{p_j}),\eqno(4.5)$$ where $ \hbox{\,Diag}(I_{m_j}, (X_j)^{-1}, I_{p_j})$ is the block diagonal matrix that coincides with the block diagonal of $A$ in the $j$-th block and with the identity matrix in the rest of the blocks. Note that $m_j=k_1+k_2+ \cdots+ k_{j-1}$ and $p_j = n- m_j - k_j$. If all the diagonal blocks $X_j$ are invertible then $$A^{-1}= (E_r)^{-1} (E_{r-1})^{-1}\cdots (E_2)^{-1} (E_1)^{-1}. \eqno(4.6)$$
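Analogously, the block column factorization $A = E_1 E_2 \cdots E_r$ and formulas (4.5)--(4.6) can be verified on a small example. The following Python sketch (ours; the block sizes $k_1=2$, $k_2=1$ and the sample matrix are illustrative assumptions) does this over exact rationals.

```python
# Sketch (ours) checking A = E_1 E_2 ... E_r and (4.5)-(4.6) for a block lower
# triangular matrix with block sizes k_1 = 2, k_2 = 1 (so n = 3).
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

A = [[2, 1, 0],   # diagonal blocks X_1 = [[2,1],[1,1]], X_2 = [5]
     [1, 1, 0],
     [3, 4, 5]]

# E_1 takes columns 1..2 from A, E_2 takes column 3 from A.
E1 = [[Fraction(A[i][j]) if j < 2 else Fraction(int(i == j)) for j in range(3)]
      for i in range(3)]
E2 = [[Fraction(A[i][j]) if j >= 2 else Fraction(int(i == j)) for j in range(3)]
      for i in range(3)]
assert matmul(E1, E2) == A           # A = E_1 E_2

# (4.5): (E_j)^{-1} = I - (E_j - I) Diag(I_{m_j}, X_j^{-1}, I_{p_j}).
X1inv = [[1, -1], [-1, 2]]           # inverse of X_1 = [[2,1],[1,1]]
D1 = [[Fraction(X1inv[i][j]) if i < 2 and j < 2 else Fraction(int(i == j))
       for j in range(3)] for i in range(3)]
D2 = [[Fraction(1, 5) if i == j == 2 else Fraction(int(i == j))
       for j in range(3)] for i in range(3)]
I3 = identity(3)

def elem_inv(E, Dg):
    EmI = [[E[i][j] - I3[i][j] for j in range(3)] for i in range(3)]
    P = matmul(EmI, Dg)
    return [[I3[i][j] - P[i][j] for j in range(3)] for i in range(3)]

E1inv, E2inv = elem_inv(E1, D1), elem_inv(E2, D2)
Ainv = matmul(E2inv, E1inv)          # (4.6): A^{-1} = (E_r)^{-1} ... (E_1)^{-1}
assert matmul(A, Ainv) == I3
```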
https://arxiv.org/abs/1208.2920
Fooling sets and rank
An $n\times n$ matrix $M$ is called a \textit{fooling-set matrix of size $n$} if its diagonal entries are nonzero and $M_{k,\ell} M_{\ell,k} = 0$ for every $k\ne \ell$. Dietzfelbinger, Hromkovič, and Schnitger (1996) showed that $n \le (\mbox{rk} M)^2$, regardless of over which field the rank is computed, and asked whether the exponent on $\mbox{rk} M$ can be improved. We settle this question. In characteristic zero, we construct an infinite family of rational fooling-set matrices with size $n = \binom{\mbox{rk} M+1}{2}$. In nonzero characteristic, we construct an infinite family of matrices with $n= (1+o(1))(\mbox{rk} M)^2$.
\section{Introduction} An $n\times n$ matrix~$M$ over a field~$\mathbb{k}$ is called a \textit{fooling-set matrix of size~$n$} if \begin{subequations}\label{eq:def-fool} \begin{align} M_{kk} &\ne 0 &&\text{ for all~$k$ (its diagonal entries are all nonzero), and} \label{eq:def-fool:diag}\\ M_{k,\ell} \, M_{\ell,k} &= 0 &&\text{ for all $k\ne \ell$.} \label{eq:def-fool:off-diag} \end{align} \end{subequations} Note that the definition depends only on the zero-nonzero pattern of~$M$. The word ``fooling set'' originates from Communication Complexity, but the concept is used under different names in other contexts (see Section~\ref{sec:connect}). In Communication Complexity and Combinatorial Optimization fooling-set matrices are used to show lower bounds on other numerical properties of interest. To do this, one wants to find a large fooling-set (sub-)matrix contained in a given matrix~$A$, where permutation of rows and columns is allowed. Since large fooling-set submatrices are typically difficult to identify (deciding whether a fooling-set submatrix of given size exists in a given matrix was recently shown to be NP-hard~\cite{Shitov13fool}), it is desirable to upper-bound the size of a fooling-set matrix one may possibly hope for in terms of easily computable properties of~$A$. Dietzfelbinger, Hromkovi{\v{c}}, and Schnitger (\cite[Thm.~1.4]{DietzfelbingerHromkovicSchnitger96}, or see~\cite[Lemma~4.15]{KushilevitzNisan97}; cf.~\cite{KlauckDewolf13,FioriniKaibelPashkovichTheis13}) proved that the rank of a fooling-set matrix of size~$n$ is at least $\sqrt n$, i.e., \begin{equation}\label{eq:rk-of-fool} n \le (\rk_{\mathbb{k}} M)^2. \end{equation} This bound follows as $\rk_{\mathbb{k}} I_n = \rk_{\mathbb{k}} M \circ M^T \le (\rk_{\mathbb{k}} M)^2$, where $I_n$ is the identity matrix of size $n$ and $\circ$ denotes entrywise product. This inequality gives such an upper bound on the largest fooling-set submatrix in terms of the easily computable rank of~$A$. 
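As a concrete toy illustration (ours, not from the paper) of the definition and of the rank bound just derived, the following Python snippet checks the fooling-set conditions for a small $3\times 3$ matrix and verifies $n = \rk(M\circ M^T) \le (\rk M)^2$ over $\mathbb{Q}$.

```python
# A toy fooling-set matrix (ours) and the rank bound: M o M^T is a diagonal
# matrix with nonzero diagonal, so n = rk(M o M^T) <= (rk M)^2.
from fractions import Fraction

M = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
n = len(M)

# Fooling-set conditions: nonzero diagonal, M_ij * M_ji = 0 off the diagonal.
assert all(M[i][i] != 0 for i in range(n))
assert all(M[i][j] * M[j][i] == 0 for i in range(n) for j in range(n) if i != j)

def rank(rows):
    """Row-echelon rank over the rationals."""
    A = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    for c in range(len(A[0])):
        piv = next((r for r in range(rk, len(A)) if A[r][c] != 0), None)
        if piv is None:
            continue
        A[rk], A[piv] = A[piv], A[rk]
        for r in range(rk + 1, len(A)):
            if A[r][c] != 0:
                f = A[r][c] / A[rk][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[rk])]
        rk += 1
    return rk

hadamard = [[M[i][j] * M[j][i] for j in range(n)] for i in range(n)]
assert rank(hadamard) == n       # the entrywise product is nonsingular diagonal
assert n <= rank(M) ** 2         # the upper bound; here 3 <= 3^2
```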
Dietzfelbinger et al.\ asked the question whether the exponent on the rank in the right-hand side of~\eqref{eq:rk-of-fool} can be improved or not \cite[Open Problem~2]{DietzfelbingerHromkovicSchnitger96}. This problem is stated specifically for 0/1-matrices in their paper, mirroring the particular Communication Complexity situation studied there. Klauck and de Wolf~\cite{KlauckDewolf13}, however, gave applications and pointed out the importance for Communication Complexity of the question regarding general (i.e., not 0/1) matrices. For applications in Combinatorial Optimization, 0/1 matrices play no special role. Currently, the examples (attributed to M.~H\"uhne in~\cite{DietzfelbingerHromkovicSchnitger96}) of fooling-set matrices~$M$ with smallest rank are such that $n \approx (\rk_{\mathbb{F}_2} M)^{\log_4 6}$ ($\log_4 6 = 1.292\dots$); for general matrices, Klauck and de Wolf~\cite{KlauckDewolf13} have given examples with $n \approx (\rk_\mathbb{Q} M)^{\log_3 6}$ ($\log_3 6 = 1.63\dots$). \paragraph{\bf In this paper, we settle this question.} Firstly, for the case that $\mathbb{k}$ has nonzero characteristic, we prove that the inequality~\eqref{eq:rk-of-fool} is asymptotically tight. Notably, not only is the exponent on the rank in inequality~\eqref{eq:rk-of-fool} best possible, but so is the constant (one) in front of the rank. We do this by constructing an infinite family of fooling-set matrices~$M$ over $\mathbb{k}=\mathbb{F}_p$ of size~$n$, with $n= (1+o(1))(\rk M)^2$. The construction is based on a periodic sequence involving binomial coefficients.\footnote{% An extended abstract of this part of the current paper appeared in the EuroComb'13 proceedings~\cite{FriesenTheis13}. } % Secondly, in characteristic zero, we prove that the inequality is best possible up to a multiplicative constant, by constructing, for infinitely many~$n$, fooling-set matrices~$M$ over $\mathbb{k}=\mathbb{Q}$ of size~$n$, with $n = \binom{\rk M+1}{2}$. 
This construction is inspired by the relations between binomial coefficients which we used in the nonzero-characteristic case. The method used in all the \emph{earlier} examples mentioned above of fooling-set matrices with small rank was the following: One conjures up a single, small fooling-set matrix~$M^0$ (of size, say, 6), determines its rank (say, 3), and then uses the tensor-powers of~$M^0$ (which are fooling-set matrices, too). With these numerical values, from~$M^0$, one obtains $\log_3 6$ as a lower bound on the exponent on the rank in~\eqref{eq:rk-of-fool}. Our constructions are departures from this approach. In the nonzero-characteristic case our matrices are circulant. In the characteristic-zero case, the matrices have a more complicated block structure, but each block is Toeplitz. \paragraph{\bf Organization of this paper.} % In the next section we will explain some of the connections of the fooling-set vs.\ rank problem with Combinatorial Optimization and Graph Theory concepts. In Section~\ref{sec:construction}, we prove our result for nonzero characteristic, and in Section~\ref{sec:chzero}, we prove the result for characteristic zero. In the final section, we discuss some consequences and point to some questions which remain open. \section{Some Remarks on the Importance of Fooling-Set Matrices}\label{sec:connect} While the fooling-set size vs.\ rank problem is of interest in its own right as a minimum-rank type problem in Combinatorial Matrix Theory, fooling-set matrices are connected to other areas of Mathematics and Computer Science. \paragraph{\bf In Polytope Theory,} % given a polytope~$P$, sizes of fooling-set submatrices of appropriately defined matrices provide lower bounds to the number of facets of any polytope~$Q$ which can be mapped onto~$P$ by a projective mapping. We sketch the connection (see~\cite{FioriniKaibelPashkovichTheis13} for the details). Let~$P$ be a polytope. 
Let $A=A(P)$ be a matrix whose rows are indexed by the facets of~$P$ and whose columns are indexed by the vertices of~$P$, and which satisfies $A_{F,v} = 0$, if $v \in F$, and $A_{F,v} \ne 0$, if $v \notin F$. The following was first observed by Yannakakis (see~\cite{FioriniKaibelPashkovichTheis13} for a direct proof). \begin{theorem}[\cite{Yannakakis91}] If~$A$ has a fooling-set submatrix of size~$n$, then every polytope~$Q$ which can be mapped onto~$P$ by a projective mapping has at least~$n$ facets. \end{theorem} Since for any fooling-set submatrix of size~$n$ of~$A$, the inequality \begin{equation}\label{eq:dietz:ptp} n \le (\dim P+1)^2 \end{equation} follows from~\eqref{eq:rk-of-fool} (cf.~\cite{FioriniKaibelPashkovichTheis13}), the following variant of Dietzfelbinger et al.'s question is of pertinence in Polytope Theory: \textit{Can the fooling-set size vs.\ dimension inequality~\eqref{eq:dietz:ptp} be improved for polytopes?} Our Theorem~\ref{thm:main:0} below yields the following corollary. \begin{corollary} For infinitely many~$d$, there is a polytope~$P$ of dimension~$d$ such that the matrix~$A(P)$ contains a fooling-set submatrix of size $\Omega(\sqrt d)$. \end{corollary} We do not prove this corollary in this paper, because it would require a considerable amount of polytope theory overhead to arrive at a comparatively easy consequence of Theorem~\ref{thm:main:0}. As a quick sketch, let the following suffice. From a given matrix~$A$, one derives a pointed convex polyhedral cone by taking a rank factorization of~$A'$. Intersecting the cone with a hyperplane gives the desired polytope~$P$. The presence of rows/columns in~$A'$ which do not correspond to facets/vertices of~$P$ is not a problem by Proposition~5.4 in \cite{FioriniKaibelPashkovichTheis13}. In \textbf{Combinatorial Optimization,} the polytope theoretic situation occurs for particular families of polytopes which arise from combinatorial optimization problems. 
Sizes of fooling-set matrices then yield lower bounds to the minimum sizes of Linear Programs for combinatorial optimization problems~\cite{Yannakakis91}. See~\cite{FioriniKaibelPashkovichTheis13} for bounds based on fooling sets for a number of combinatorial optimization problems, including bipartite matching. In the Polytope Theory / Combinatorial Optimization applications, we typically have $\mathbb{k}=\mathbb{Q}$, and the rank of the large matrix~$A$ is known. However, since the definition of a fooling-set matrix depends only on the zero-nonzero pattern, changing the field from $\mathbb{Q}$ to $\mathbb{k}'$ and replacing the nonzero rational entries of~$A$ by nonzero numbers in~$\mathbb{k}'$ may yield a matrix with lower rank and hence a better upper bound on the size of a fooling-set matrix. \paragraph{\bf In Computational Complexity,} % fooling-set matrices provide lower bounds for the communication complexity of Boolean functions (see, e.g., \cite{AroraBarak09,KushilevitzNisan97,LovaszSaks88Moeb,DietzfelbingerHromkovicSchnitger96,KlauckDewolf13}), and for the number of states of an automaton accepting a given language (e.g., \cite{GruberHolzer06}). An example from Communication Complexity where the ``fooling-set method'' yields a poor lower bound is the inner product function \begin{equation*} f(x,y) = \sum_{j=1}^n x_jy_j,\qquad\text{for $x,y\in \mathbb{Z}_2^n$.} \end{equation*} The rank of the associated $2^n\times 2^n$-matrix is~$n$, hence, by~\eqref{eq:rk-of-fool}, there is no fooling-set submatrix larger than $n^2$. \paragraph{\bf In Graph Theory,} % a fooling-set matrix (up to permutation of rows and columns) can be understood as the incidence matrix of a bipartite graph containing a perfect cross-free matching. Recall that a matching in a bipartite graph~$H$ is called \textit{cross-free} if no two matching edges induce a~$C_4$-subgraph of~$H$. 
Cross-free matchings are best known as a lower bound on the size of biclique coverings of graphs (e.g.\ \cite{Dawande03,JuknaKulikov09}). A \textit{biclique covering} of a graph~$G$ is a collection of complete bipartite subgraphs of~$G$ such that each edge of~$G$ is contained in at least one of these bipartite subgraphs. If a cross-free matching of size~$n$ is contained as a subgraph in~$G$, then at least~$n$ bicliques are needed to cover all edges of~$G$. For some classes of graphs, this is a sharp lower bound on the biclique covering number~\cite{Dawande03,SotoTelha11}. \paragraph{\bf In Matrix Theory,} the maximum size of a fooling-set submatrix is known under a couple of different names, e.g.~as independence number \cite[Lemma 2.4]{CohenRothblum93}, or as intersection number. For some semirings, this number provides a lower bound for the factorization rank of the matrix over the semiring. \paragraph{\bf In each of these areas,} % fooling-set matrices are used as lower bounds. Upon embarking on a search for a big fooling-set matrix in a large, complicated matrix~$A$, one is interested in an \textit{a priori} upper bound on their sizes and thus the potential usefulness of the lower bound method. \section{Preliminaries} We will make use of binomial coefficients and a few of their standard properties. As multiple extensions of binomial coefficients to negative arguments are possible, we fix here the definition we use (following~\cite{KnuthGrahamPatashnik94}). For integers $n,k$, let \begin{equation*} \binom{n}{k} := \begin{cases} \dfrac{n(n-1) \cdots (n-k+1)}{k(k-1) \cdots 1}, &\text{if } k \ge 0, \\ 0, &\text{if } k <0. \end{cases} \end{equation*} Note that the \textit{symmetry identity} \begin{equation*} \binom{n}{k}=\binom{n}{n-k}, \qquad\text{ for all $n\ge 0$ and all integers $k$,} \end{equation*} and the \textit{addition formula} \begin{equation*} \binom{n}{k}=\binom{n-1}{k} + \binom{n-1}{k-1}, \qquad\text{ for all integers $n,k$,} \end{equation*} hold. 
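For readers who want to experiment, the extended binomial coefficient above translates directly into code. The following Python sketch (ours, not from the paper) implements the definition and spot-checks the symmetry identity and the addition formula, including negative upper arguments.

```python
# The extended binomial coefficient of the Preliminaries, with spot checks
# (ours) of the symmetry identity and the addition formula.
from math import factorial

def binom(n, k):
    """n over k for integer n (possibly negative); 0 for k < 0."""
    if k < 0:
        return 0
    num = 1
    for i in range(k):
        num *= n - i
    return num // factorial(k)   # the product is always divisible by k!

# The addition formula holds for ALL integers n, k ...
for n in range(-6, 7):
    for k in range(-3, 10):
        assert binom(n, k) == binom(n - 1, k) + binom(n - 1, k - 1)

# ... while the symmetry identity needs n >= 0.
for n in range(0, 8):
    for k in range(-3, 12):
        assert binom(n, k) == binom(n, n - k)

print(binom(-1, 2), binom(4, 2))   # -> 1 6
```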
\numberwithin{theorem}{section} \section{Characteristic $p>0$: Fooling-Set Matrices from Sequences}\label{sec:construction} For a prime number~$p$, we denote by $\mathbb{F}_p$ the finite field with~$p$ elements. The following is the accurate statement of our result. \begin{theorem}\label{thm:main:nz} For every prime number~$p$, there is a family of fooling-set matrices $M^{\scriptscriptstyle(t)}$ over $\mathbb{F}_p$ of size~$n^{\scriptscriptstyle(t)}$, $t=1,2,3,\dots$, such that $n^{\scriptscriptstyle(t)} \to \infty$, and \begin{equation*} \frac{ n^{\scriptscriptstyle(t)} }{ (\rk_{\mathbb{F}_p} M^{\scriptscriptstyle(t)})^2 } \;\longrightarrow 1. \end{equation*} \end{theorem} As noted above, we use linear recurring sequences. For every~$t$, we construct an $n^{\scriptscriptstyle(t)}$-periodic function, which gives us a fooling-set matrix of size~$n^{\scriptscriptstyle(t)}$. \paragraph{\bf We now describe that construction.} % Let~$p$ be a prime number and $r \ge 2$ an integer. Define the function $f\colon \mathbb{Z}\to \mathbb{F}_p$ by the recurrence relation \begin{subequations}\label{eq:def-f} \begin{equation}\label{eq:def-f:recrel} f(k+r) = -f(k) - f(k+1) \quad\text{for all $k\in \mathbb{Z}$} \end{equation} and the initial conditions \begin{equation}\label{eq:def-f:initial} f(0) = 1\text{, and } f(1) = \ldots = f(r-1) = 0. \end{equation} \end{subequations} Fix an integer $n > r$. From the sequence, we define an $n\times n$ matrix as follows. For ease of notation, the matrix indices are taken to be in $\{0,\dots,n-1\}\times \{0,\dots,n-1\}$. We let \begin{equation}\label{eq:def-M} M_{k,\ell} := f(k-\ell). \end{equation} It is fairly easy to see that $\rk M \le r$. \begin{lemma}\label{lem:rk-M} The rank of~$M$ is at most~$r$. \end{lemma} \begin{proof} From~\eqref{eq:def-f:recrel}, for $k \ge r$, we deduce the equation $M_{k,\star} = -M_{k-r,\star} - M_{k-r+1,\star}$. 
Hence, each of the rows $M_{k,\star}$, $k \ge r$, is a linear combination of the first~$r$ rows of~$M$. \end{proof} It can be seen that the rank is, in fact, equal to~$r$: The top-left $r\times r$ submatrix is non-singular because it is upper-triangular with nonzeros along the diagonal. \paragraph{\bf In the remainder of the section, we derive the fooling-set property.} First, we reduce the fooling-set property~\eqref{eq:def-fool} of~$M$ to a property of the function~$f$. \begin{lemma}\label{lem:fool-eq}\mbox{} The matrix~$M$ defined in~\eqref{eq:def-M} is a fooling-set matrix, if and only if, \begin{equation}\label{eq:cross-symmetry} f(k) f(-k) = 0 \quad\text{ for all $k \in \{1,\dots,n-1\}$.} \end{equation} \end{lemma} \begin{proof} It is clear from \eqref{eq:def-f:initial} and~\eqref{eq:def-M} that $M_{j,j} = f(0) = 1$ for all $j=0,\dots,n-1$, so it remains to verify~\eqref{eq:def-fool:off-diag}. Since \begin{equation*} M_{i,j} M_{j,i} = f(i-j) f(j-i) = f(i-j) f(-(i-j)), \end{equation*} if $f(k) f(-k) = 0$ for all $k=1,\dots,n-1$, then $M_{i,j} M_{j,i}$ is zero whenever $i\ne j$. This proves~\eqref{eq:def-fool:off-diag}. \end{proof} Given appropriate conditions on $r$ and~$n$ (depending on~$p$), this condition on~$f$ can indeed be verified: \begin{lemma}\label{lem:key-lemma} For all integers $t \ge 1$, if we let $r := p^t+1$ and $n := r(r-1)+1$, then $f(k)f(-k) = 0$ for all $k\in \mathbb{Z}\setminus n\mathbb{Z}$. \end{lemma} Combining the above three lemmas, we can complete the proof of Theorem~\ref{thm:main:nz}. \begin{proof}[Proof of Theorem~\ref{thm:main:nz}.] Let~$p$ be a prime number. For every integer $t\ge 1$, let $r := p^t+1$ and $n^{\scriptscriptstyle(t)} := r(r-1) +1$, and define the matrix $M^{\scriptscriptstyle(t)} := M$ over $\mathbb{F}_p$ as in~\eqref{eq:def-M}. 
By Lemma~\ref{lem:rk-M}, the rank of $M^{\scriptscriptstyle(t)}$ is at most~$r$, and from Lemmas \ref{lem:fool-eq} and~\ref{lem:key-lemma} we conclude that $M^{\scriptscriptstyle(t)}$ is a fooling-set matrix. Hence, we have \begin{equation*} 1 \ge \frac{ n^{\scriptscriptstyle(t)} }{ (\rk_{\mathbb{F}_p} M^{\scriptscriptstyle(t)})^2 } \ge \frac{ r^2-r+1 }{r^2} \ge 1 - p^{-t} \xrightarrow{t\to\infty} 1, \end{equation*} where the left-most inequality is from~\eqref{eq:rk-of-fool}. \end{proof} To prove Lemma~\ref{lem:key-lemma}, we need two more lemmas. The first one states that in every section $\{jr,\dots,(j+1)r-1\}$, $j=0,1,\dots$, there is a block of zeros whose length decreases with~$j$. \begin{lemma}\label{lem:zero-blocks} For $j=0,\dots,r-2$, we have \begin{equation}\label{eq:zero-block} f(jr + i ) = 0 \quad\text{for $i = 1,\dots, r-1-j$.} \end{equation} \end{lemma} \begin{proof} Equation~\eqref{eq:zero-block} is true for $j=0$ by~\eqref{eq:def-f:initial}. Suppose~\eqref{eq:zero-block} holds for some $j<r-2$. Then $f((j+1)r + i ) = 0$ for $i = 1,\dots, r-1-(j+1)$, because, by~\eqref{eq:def-f:recrel}, \begin{equation*} f((j+1)r + i) = f(jr + i + r) = -f(jr + i) - f(jr + (i+1)) = -0 - 0 \end{equation*} holds. \end{proof} Every function on~$\mathbb{Z}$ with values in a finite field which is defined by a (reversible) linear recurrence relation is periodic (cf.\ e.g.~\cite{LidlNiederreiter94}). The second lemma establishes that a specific number~$n$ is a period of~$f$ as defined in~\eqref{eq:def-f}. \begin{lemma}\label{lem:periodicity} If $r = p^t+1$ for some integer $t\ge1$, then $n := r(r-1) +1$ is a period of the function~$f$. \end{lemma} \begin{proof} In this proof, for convenience, we identify $\mathbb{F}_p$ with the integers modulo~$p$. Consider $h(j,i) := f((j+1)r-i)$ for $i,j\in\mathbb{Z}$. We have to show that \begin{subequations}\label{eq:periodicity:zZ} \begin{align} \label{eq:periodicity:zZ:0} h(r-1,0) &= 0. 
\\ \label{eq:periodicity:zZ:middle} h(r-1,1) = \ldots = h(r-1,r-2) &= 0, \text{ and }\\ \label{eq:periodicity:zZ:rminus1} h(r-1,r-1) &= 1. \end{align} \end{subequations} We will first prove the following claims. \begin{enumerate}[\it {Claim}~(a).] \item\label{claim:periodicity:rec} For all $i,j\in \mathbb{Z}$, \begin{equation*} h(j+1,i) = -h(j,i) -h(j,i-1). \end{equation*} \item \label{claim:periodicity:bd} For $j=0,\dots,r-3$ \begin{equation*} h(j,-1) = 0, \ h(j,j+1) = 0. \end{equation*} \item\label{claim:periodicity:binom} For $j=0,\dots,r-2$ and $0\le i \le j$ \begin{equation*} h(j,i) = (-1)^{j+1} \binom{j}{i} \mod p. \end{equation*} \end{enumerate} Before we prove the claims, we show how they imply~\eqref{eq:periodicity:zZ}. Recalling the well-known fact that \begin{equation*} \binom{p^t}{i} = 0 \mod p \end{equation*} for every integer~$t\ge 1$ and for all $i=1,\dots,p^t-1$ (cf.\ e.g.~\cite{LidlNiederreiter94}), the equations~\eqref{eq:periodicity:zZ:middle} follow by applying Claims \ref{claim:periodicity:rec} and~\ref{claim:periodicity:binom} with $j:=r-2$: For $i=1,\dots,r-2 = p^t-1$, since \begin{multline*} h(r-1,i) = -h(r-2,i) - h(r-2,i-1) =\\ = - (-1)^{r-1}\binom{r-2}{i} - (-1)^{r-1}\binom{r-2}{i-1} \mod p, \end{multline*} it follows that \begin{alignat*}{2} h(r-1,i) &= - \binom{r-1}{i} &\;&\mod p \\ &= - \binom{p^t}{i} &&\mod p \\ &= 0 &&\mod p. \end{alignat*} To prove~\eqref{eq:periodicity:zZ:rminus1}, we infer from the claims that \begin{multline*} h(r-1,r-1) = -h(r-2,r-1) -h(r-2,r-2) = \\ - f( (r-1)r - r+1 ) - (-1)^{r-1} \binom{r-2}{r-2} = \\ -f( (r-2)r + 1 ) -(-1)^{p} = 1, \end{multline*} where the last equation follows from Lemma~\ref{lem:zero-blocks} and the fact that $-(-1)^p=1$ even for $p=2$. 
Finally, for~\eqref{eq:periodicity:zZ:0}, we conclude that \begin{multline*} h(r-1,0) = -h(r-2,0) - h(r-2,-1) = \\ -(-1)^{r-1} \binom{r-2}{0} - f( r^2 -(r-1) ) = -(-1)^p - h(r-1,r-1) = \\ -(-1)^p -1 = 0, \end{multline*} where the last-but-one equation follows from \eqref{eq:periodicity:zZ:rminus1}. \smallskip% \subparagraph{\it Proof of Claim~(\ref{claim:periodicity:rec}).} This is a straightforward computation. For all $j,i$, we compute \begin{multline*} h(j+1,i) = f((j+2)r-i) = \\ f((j+1)r-i+r) = -f((j+1)r-i) - f((j+1)r-(i-1)) = \\ -h(j,i) -h(j,i-1). \end{multline*} \qed \smallskip% \subparagraph{\it Proof of Claim~(\ref{claim:periodicity:bd}).} This claim follows from Lemma~\ref{lem:zero-blocks}. We have \begin{align*} h(j,-1) &= f((j+1)r+1) = 0 &&\text{for $j=0,\dots,r-3$,}\\ \intertext{and} h(j,j+1) &= f((j+1)r-j-1) = f(jr+r-1-j) = 0 && \text{for $j=0,\dots,r-2$.} \end{align*} \qed \smallskip% \subparagraph{\it Proof of Claim~(\ref{claim:periodicity:binom}).} Since $h(0,0)=-1$, Claim~(\ref{claim:periodicity:binom}) follows from Claims (\ref{claim:periodicity:rec}) and~(\ref{claim:periodicity:bd}). \qed \medskip% \subparagraph{This completes the proof of Lemma~\ref{lem:periodicity}.} \end{proof} \begin{remark} As seen in the proof, not surprisingly, our recurrence relation~\eqref{eq:def-f:recrel} produces binomial coefficients. However, it would be interesting to know whether there are other linear recurrence relations, $f(k+r) = \sum_{j=0}^{r-1} \alpha_j f(k+j)$, which define circulant fooling-set matrices with the appropriate relation between size and rank. Since all such sequences are periodic, only the conclusion of Lemma~\ref{lem:key-lemma} must be satisfied, and the period must be asymptotic to~$r^2$. \end{remark} Lemmas \ref{lem:zero-blocks} and~\ref{lem:periodicity} allow us to prove Lemma~\ref{lem:key-lemma}. \begin{proof}[Proof of Lemma~\ref{lem:key-lemma}.] We need to show $f(k) f(-k) = 0$ whenever $n \nmid k$. 
By Lemma~\ref{lem:periodicity}, this is equivalent to showing $f(k) f(n-k) = 0$ for $k=1,\dots,n-1$. Given such a~$k$, let $j,i$ be such that $k = jr +i$ and $0\le i \le r-1$. If $i \le r-1-j$, then $f(k)=0$ by Lemma~\ref{lem:zero-blocks}, and we are done. If, on the other hand, $i > r-1-j$, then \begin{equation*} n-k = r^2 - r +1 - jr - i = (r-1-(j+1))r + (r-i+1), \end{equation*} and $r-i+1 \le j+1$, so, by Lemma~\ref{lem:zero-blocks}, we have $f(n-k) = 0$. \end{proof} \section{Characteristic Zero: Fooling-Set Matrices from Binomial Coefficients}\label{sec:chzero} We now prove the result in characteristic zero. \begin{theorem}\label{thm:main:0} For each~$r \ge 1$, there is a fooling-set matrix $\Mr$ over~$\mathbb{Q}$ of size $\binom{r+1}{2}$ and rank~$r$. \end{theorem} The entries of~$\Mr$ are binomial coefficients, up to sign. As in the previous section, the low rank property will follow from the binomial addition identity. Whereas the matrix in the previous section is circulant, this matrix has a more complicated block structure but each block is Toeplitz. \paragraph{\bf We now describe the construction of the matrices $\Mr$.} To get some feeling for these matrices, here are the first few examples \begin{equation*} \Mr[1]=\begin{pmatrix} 1 \end{pmatrix}\!,\ \Mr[2] = \begin{pmatrix} 1 & 0 & 1\\ -1 & 1 & 0 \\ 0 & 1 & 1\\ \end{pmatrix}\!,\ \Mr[3] = \begin{pmatrix} 1 & 0 & 0 & 1 & -1 & 1\\ -1 & 1 & 0 & 0 & 1 & 0\\ 1 & -1 & 1 & -1 & 0 & 0\\ 0 & 1 & 0 & 1 & 0 & 1\\ 0 & 0 & 1 & -1 & 1 & 0\\ 0 & 1 & 1 & 0 & 1 & 1 \end{pmatrix}. \end{equation*} The recursive structure of $\Mr$ can be seen from these examples\footnote{% If the reader wants to see larger examples, Matlab code to construct $\Mr$ can be found at \url{https://github.com/troyjlee/hadamard_factorization}.% }. % In general, the top left $r\times r$ principal submatrix of $\Mr$ will be lower triangular with ones of alternating sign, and the bottom right $\binom{r}{2}$-sized principal submatrix will be $\Mr[r-1]$. 
We now give the details of the construction. First we define, for each integer~$t$, a function $f_t\colon \mathbb{N}\times\mathbb{N} \to \mathbb{Z}$ (with $\mathbb{N}:= \{1,2,3,\dots\}$). These functions will be used in the construction. They can be thought of as infinite matrices, and we will use the notation $F_t^{r,s}$ to specify the $r\times s$ matrix \begin{equation*} \bigl( F_t^{r,s} \bigr)_{i,j} := f_t(i,j), \quad\text{ for $i=1,\dots,r$ and $j=1,\dots,s$.} \end{equation*} Let $t \in \mathbb{Z}$ and $i,j \in \mathbb{N}$. The function $f_t$ is defined as \begin{equation*} f_t(i,j) := \begin{cases} \displaystyle \binom{t-1}{j-i-1}, & \text{ if $t>0$,}\\[2ex] \displaystyle (-1)^{j-i}\binom{-t-1+j-i}{-t-1}, & \text{ if $t \le 0$ and $i < j$,}\\[2ex] \displaystyle (-1)^{i-j-t} \binom{i-j-1}{-t}, & \text{ if $t \le 0$ and $i \ge j$.} \end{cases} \end{equation*} Note that in each case, $f_t(i,j)$ depends on the difference $i-j$ only, thus each $f_t$ is Toeplitz. When $t>0$, we see that $f_t(i,j)=0$ whenever $i \ge j$ meaning that these~$f_t$ are upper triangular. When $t=0$, the definition simplifies to $f_0(i,j)=\binom{-1}{i-j}$, thus $f_0$ is lower triangular with ones on the main diagonal. To get a better idea where the $f_t$ come from, consider an extended Pascal's triangle where the upper and lower indices begin from $-1$. In the following table, the entries are binomial coefficients where upper indices label the rows, lower indices label the columns. \begin{center} \begin{tabular}{c|cccccc} &-1 &0&1&2&3&4 \\ \hline -1 & 0 &1 & -1 & 1 & -1&1\\ 0 & 0 &1 & 0 & 0 & 0&0\\ 1 & 0 &1& 1 &0 & 0&0 \\ 2& 0 & 1 & 2 & 1& 0&0\\ 3& 0 & 1 &3 & 3&1&0 \\ 4& 0 & 1 & 4 & 6 & 4 & 1 \end{tabular} \end{center} The matrix $f_t$ for $t >0$ is the infinite Toeplitz matrix whose first row is given by the row of Pascal's triangle indexed by $t-1$, and whose first column is all zero. 
For $t < 0$, up to signs, $f_t$ is the infinite Toeplitz matrix whose first column is given by the column of Pascal's triangle indexed by $-t$ and whose first row is given by the $-t-1$ column of Pascal's triangle, starting from the row indexed by $-t-1$. Using the $f_t$ we can now construct the fooling-set matrices~$\Mr$. For $r \ge 1$, let $\Mr$ be a matrix of size $\binom{r+1}{2}$ defined as \begin{equation*} \Mr = \begin{pmatrix} F_0^{r,r} & F_{-1}^{r,r-1} & F_{-2}^{r,r-2} & \cdots & F_{-r+1}^{r,1} \\[.75ex] F_1^{r-1,r} & F_0^{r-1,r-1} & F_{-1}^{r-1,r-2} & & F_{-r+2}^{r-1,1} \\[.75ex] F_2^{r-2,r} & F_1^{r-2,r-1} & F_0^{r-2,r-2} & & F_{-r+3}^{r-2,1} \\[1ex] \vdots & & & \ddots & \vdots \\[1ex] F_{r-1}^{1,r} & F_{r-2}^{1,r-1} &\cdots & \cdots & F_0^{1,1} \\ \end{pmatrix}. \end{equation*} The size of $\Mr$ is clearly $\binom{r+1}{2}$. That $\Mr$ is a fooling-set matrix and has rank~$r$ will be shown in the next lemmas. \paragraph{\bf We first show that $\Mr$ is a fooling-set matrix.} This follows from the fact that~$f_0$ is lower triangular and that in the above extended Pascal's triangle for $t > 0$ the row indexed by $t-1$ and column indexed by~$t$ are disjoint. \begin{lemma}\label{lem:0:fool} $\Mr$ is a fooling-set matrix. \end{lemma} \begin{proof} The diagonal entries of $\Mr$ are~$1$ as desired. To show that $\Mr(i,j)\Mr(j,i) = 0$ for $i \ne j$, it suffices to show that $f_t(i,j) f_{-t}(j,i)=0$ for each~$t$. This clearly holds for $t=0$ as $f_0$ is lower triangular. Now suppose $t >0$. If $i \ge j$ then $f_t(i,j)=0$ thus in this case we are also fine. In the case $j > i$ we have \begin{equation*} \abs{ f_t(i,j)} \abs{ f_{-t}(j,i) } = \binom{t-1}{j-i-1} \binom{j-i-1}{t}=0. \end{equation*} The second term is zero for $j-i \le t$ while the first term is zero for $j-i \ge t+1$, thus the product is always zero. \end{proof} In fact, $\Mr$ has the stronger property that exactly one of $\Mr(i,j), \Mr(j,i)$ is zero for $i \ne j$. 
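The construction lends itself to a direct computational check. The following Python sketch (ours; all function and variable names are illustrative assumptions) implements $f_t$, assembles $\Mr$ block by block, reproduces the $r=3$ matrix displayed earlier, and verifies the fooling-set conditions for small~$r$.

```python
# Computational check (ours) of the characteristic-zero construction: build
# M^{(r)} from the functions f_t and verify the fooling-set conditions.
from math import factorial

def binom(n, k):
    # extended binomial coefficient from the Preliminaries (0 for k < 0)
    if k < 0:
        return 0
    num = 1
    for i in range(k):
        num *= n - i
    return num // factorial(k)

def f(t, i, j):
    if t > 0:
        return binom(t - 1, j - i - 1)
    if i < j:
        return (-1) ** (j - i) * binom(-t - 1 + j - i, -t - 1)
    return (-1) ** (i - j - t) * binom(i - j - 1, -t)

def build_M(r):
    """Assemble the block matrix: block (a, b) is F_{a-b} of size (r-a, r-b)."""
    sizes = list(range(r, 0, -1))
    n = sum(sizes)
    M = [[0] * n for _ in range(n)]
    ro = 0
    for a, ra in enumerate(sizes):
        co = 0
        for b, cb in enumerate(sizes):
            for i in range(ra):
                for j in range(cb):
                    M[ro + i][co + j] = f(a - b, i + 1, j + 1)
            co += cb
        ro += ra
    return M

M3 = build_M(3)
assert M3 == [[ 1,  0, 0,  1, -1, 1],   # matches the displayed matrix
              [-1,  1, 0,  0,  1, 0],
              [ 1, -1, 1, -1,  0, 0],
              [ 0,  1, 0,  1,  0, 1],
              [ 0,  0, 1, -1,  1, 0],
              [ 0,  1, 1,  0,  1, 1]]

for r in range(1, 6):                   # fooling-set conditions for small r
    M = build_M(r)
    n = len(M)
    assert all(M[i][i] == 1 for i in range(n))
    assert all(M[i][j] * M[j][i] == 0
               for i in range(n) for j in range(n) if i != j)
```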
\paragraph{\bf We now come to the rank of~$\Mr$.} The following claim is the key to prove $\rk(\Mr) \le r$. \begin{lemma}\label{lem:recurrence} For any $t \in \mathbb{Z}$ and $i,j \in \mathbb{N}$ \begin{equation*} f_t(i,j)=f_{t-1}(i,j) + f_{t-1}(i+1,j). \end{equation*} \end{lemma} \begin{proof} We break the proof into three cases depending on the value of $t$. \subparagraph{Case 1: $t >1$} This case follows from the binomial addition formula \begin{align*} f_t(i,j)=\binom{t-1}{j-i-1}&=\binom{t-2}{j-i-1} + \binom{t-2}{j-i-2} \\ &=f_{t-1}(i,j) + f_{t-1}(i+1,j) \enspace. \end{align*} \subparagraph{Case 2: $t=1$} In this case we use the symmetry identity together with binomial addition formula. \begin{align*} f_1(i,j)=\binom{0}{j-i-1} = \binom{0}{i-j+1}&=\binom{-1}{i-j} + \binom{-1}{i-j+1} \\ & = f_0(i,j)+f_0(i+1,j) \enspace. \end{align*} \subparagraph{Case 3: $t \le 0$} First consider the case $i \ge j$. Then again by the binomial addition formula \begin{align*} f_{t}(i,j)&= (-1)^{i-j-t}\binom{i-j-1}{-t} \\ &=(-1)^{i-j-t} \Biggl(-\binom{i-j-1}{-t+1}+\binom{i-j}{-t+1} \Biggr) \\ &=(-1)^{i-j-t+1}\binom{i-j-1}{-t+1}+(-1)^{i-j-t+2}\binom{i-j}{-t+1} \\ &=f_{t-1}(i,j) + f_{t-1}(i+1,j) \enspace . \end{align*} Finally, consider the case $i<j$. This case requires some care as it could be that $i+1=j$. For $t<0$, however, notice that the two formulas defining $f_t$ agree when $i=j$. The first gives $(-1)^{j-i}$ and the second $(-1)^{i-j-t} (-1)^{-t}=(-1)^{j-i}$. Thus when $t<0$ and $i=j$ the two formulas in the definition are consistent. As we are in Case 3, we are safe expressing $f_{t-1}(i+1,j)$ using the formula for $i < j$ as $t \le 0$. \begin{align*} f_t(i,j)&=(-1)^{j-i}\binom{-t-1+j-i}{-t-1} \\ &=(-1)^{j-i} \Biggl( \binom{-t+j-i}{-t} - \binom{-t+j-i-1}{-t} \Biggr) \\ &=(-1)^{j-i} \binom{-t+j-i}{-t} + (-1)^{j-i-1}\binom{-t+j-i-1}{-t} \\ &=f_{t-1}(i,j) + f_{t-1}(i+1,j) \enspace. \\ \end{align*} \end{proof} \begin{lemma}\label{lem:0:rk} The rank of $\Mr$ is $r$. 
\end{lemma} \begin{proof} The rank of $\Mr$ is at least~$r$, because the submatrix $f_0^{r,r}$ has rank $r$. Lemma~\ref{lem:recurrence} shows that all rows of $\Mr$ can be expressed as linear combinations of the first~$r$ rows, so also $\rk(\Mr) \le r$. \end{proof} \paragraph{\bf Putting it all together,} Theorem~\ref{thm:main:0} is obtained from Lemmas \ref{lem:0:fool} and~\ref{lem:0:rk}. \section{Conclusion}\label{sec:conclusio} We conclude by discussing some questions which remain open. First of all, in characteristic zero, it would be interesting to know whether inequality~\eqref{eq:rk-of-fool} is asymptotically tight, or, more generally: \begin{question} What is the smallest constant~$C$ such that $n \le C\, (\rk_\mathbb{k} M)^2$ for all $n\times n$ fooling-set matrices~$M$ over a field~$\mathbb{k}$ of characteristic zero? \end{question} There is a possibility that, in characteristic zero, the minimum achievable rank on the right-hand side of inequality~\eqref{eq:rk-of-fool} may depend not only on the characteristic, but on the field~$\mathbb{k}$ itself. Indeed, there are examples of zero-nonzero patterns for which the minimum rank of a matrix with that zero-nonzero pattern differs between $\mathbb{k} = \mathbb{Q}$ and $\mathbb{k} = \mathbb{R}$; see e.g.~\cite{KoppartyBhaskararao08}. Secondly, while the construction in Section~\ref{sec:construction} for nonzero characteristic gives circulant matrices, the matrices in Section~\ref{sec:chzero} are not circulant. \begin{question} Can the exponent on the rank in inequality~\eqref{eq:rk-of-fool} be improved for {\em circulant} fooling-set matrices over $\mathbb{k}$ with characteristic zero? \end{question} \input{full_paper.bbl} \end{document}
{ "timestamp": "2014-01-17T02:11:23", "yymm": "1208", "arxiv_id": "1208.2920", "language": "en", "url": "https://arxiv.org/abs/1208.2920", "abstract": "An $n\\times n$ matrix $M$ is called a \\textit{fooling-set matrix of size $n$} if its diagonal entries are nonzero and $M_{k,\\ell} M_{\\ell,k} = 0$ for every $k\\ne \\ell$. Dietzfelbinger, Hromkovi{č}, and Schnitger (1996) showed that $n \\le (\\mbox{rk} M)^2$, regardless of over which field the rank is computed, and asked whether the exponent on $\\mbox{rk} M$ can be improved. We settle this question. In characteristic zero, we construct an infinite family of rational fooling-set matrices with size $n = \\binom{\\mbox{rk} M+1}{2}$. In nonzero characteristic, we construct an infinite family of matrices with $n= (1+o(1))(\\mbox{rk} M)^2$.", "subjects": "Combinatorics (math.CO); Computational Complexity (cs.CC)", "title": "Fooling sets and rank" }
https://arxiv.org/abs/1806.10220
Genus of The Hypercube Graph And Real Moment-Angle Complexes
In this paper we demonstrate a calculation to find the genus of the hypercube graph $Q_n$ using real moment-angle complex $\mathcal{Z}_\mathcal{K}(D^1,S^0)$ where $\mathcal{K}$ is the boundary of an $n$-gon. We also calculate an upper bound for the genus of the quotient graph $Q_n/C_n$, where $C_n$ represents the cyclic group with $n$ elements.
\section*{Introduction} In graph theory, the hypercube graph is defined as the 1-skeleton of the $n$-dimensional cube. The graph-theoretical properties of this graph have been studied extensively by Harary et al.\ in \cite{Harary}. It is well known that this graph has genus $1+(n-4)2^{n-3}$. This fact was proved by Ringel in \cite{Ringel} and by Beineke and Harary in \cite{MR0175805}. The moment-angle complex, or polyhedral product, has been studied recently in the works of Buchstaber and Panov \cite{BP}, Denham and Suciu \cite{DS}, and Bahri et al.\ \cite{BBCG}. In this paper, we give an embedding of the hypercube graph in the real moment-angle complex and calculate the genus of the hypercube graph. This demonstrates an interesting relationship between the geometry of the hypercube graph and the real moment-angle complex. \section{Genus of a Graph} \begin{definition} The \textit{hypercube graph} $Q_n$ for $n\geq 1$ is defined with the following vertex and edge sets. \begin{align*} V &= \{(a_1,\cdots, a_n)\ |\ a_i = 0 \text{ or } 1\}\\ &= \textrm{ the set of all ordered binary } n\textrm{-tuples with entries of 0 and 1 }\\ E &= \{(u,v)\in V\times V\ |\ u \text{ and } v \text{ differ at exactly one place}\} \end{align*} \end{definition} It is straightforward to see that the hypercube graph can also be defined recursively as a Cartesian product \cite[p.~22]{HararyBook}. $$ Q_1 = K_2, \quad Q_n = K_2 \square Q_{n-1}. $$ Now we will define the genus of a graph. In this paper, a `surface' will mean a closed compact orientable manifold of dimension 2. A graph embedding in a surface means a continuous one-to-one mapping from the topological representation of the graph into the surface. More explanation about graph embeddings can be found in \cite{TopGraph}. \begin{definition} The \emph{genus} $\gamma(G)$ of a graph $G$ is the minimal integer $n$ such that the graph can be embedded in the surface of genus $n$.
In other words, it is the minimum number of handles which need to be added to the 2-sphere so that the graph can be embedded in the surface without any edges crossing each other. \end{definition} \begin{figure} \centering \includegraphics[scale=0.6]{toroidal_graph.eps} \caption{$K_{3,3}$ and $K_5$ embedded in a torus.} \label{toroidal} \end{figure} \begin{example} All planar graphs have genus 0. The complete graph on 5 vertices, denoted $K_5$, and the complete bipartite graph on 6 vertices, denoted $K_{3,3}$, both have genus 1 (Figure~\ref{toroidal}). The non-planarity of the graphs $K_n$ and $K_{m,n}$ is explained in \cite[chapter~6]{West}. \end{example} \begin{definition}[2-cell embedding] Assume that $G$ is a graph embedded in a surface. Each region of the complement of the graph is called a face. If each face is homeomorphic to an open disk, this embedding is called a 2-cell embedding. \end{definition} In this paper we will restrict our attention to 2-cell embeddings of graphs, because the embedding of the hypercube graph in a real moment-angle complex is a 2-cell embedding; we describe it in the next section. Now, restricting our attention to 2-cell embeddings of a graph $G$ in a surface with genus $g$, we can see that $$\gamma_M(G) = \max \{g\ |\ G \textrm{ has a 2-cell embedding on a surface with genus }g\}$$ must exist. This is true because if a graph has a 2-cell embedding in a surface $S_g$, then each handle of the surface must contain at least one edge, so we have the loose upper bound $\gamma_M(G)\leq e$, where $e$ is the number of edges (see \cite{perez} for further explanation). So we can define the maximum genus of a finite connected graph as follows. \begin{definition}[Maximum genus] The maximum genus $\gamma_M(G)$ of a connected finite graph $G$ is the maximal integer $m$ such that $G$ has a 2-cell embedding on the surface of genus $m$. \end{definition} Two theorems which are important tools in the analysis of graph embeddings follow next.
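As a brief computational aside before those theorems, the vertex and edge counts of $Q_n$ that enter the genus bound below ($|V| = 2^n$ and $|E| = n2^{n-1}$) can be checked directly against the definition; the helper name \texttt{hypercube} is ours:

```python
from itertools import product

def hypercube(n):
    """Vertex and edge sets of Q_n, straight from the definition:
    vertices are binary n-tuples, edges join tuples differing in one place."""
    V = list(product((0, 1), repeat=n))
    E = [(u, v) for k, u in enumerate(V) for v in V[k + 1:]
         if sum(a != b for a, b in zip(u, v)) == 1]
    return V, E

for n in range(1, 8):
    V, E = hypercube(n)
    assert len(V) == 2 ** n
    assert len(E) == n * 2 ** (n - 1)
    # every vertex has degree n, since each of the n coordinates can be flipped
    degree = {v: 0 for v in V}
    for u, v in E:
        degree[u] += 1
        degree[v] += 1
    assert all(d == n for d in degree.values())
```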
\begin{theorem}[Euler's Formula] Let a graph $G$ have a 2-cell embedding in a surface $S_g$ of genus $g$, with the usual parameters $V, E, F$. Then \begin{equation} |V|-|E|+|F|=2-2g \end{equation} \end{theorem} \begin{proof} See \cite[Chapter~3]{TopGraph}. \end{proof} \begin{theorem}[Duke \cite{Duke}] A graph $G$ has a 2-cell embedding in a surface $S_g$ of genus $g$ if and only if $\gamma(G)\leq g \leq \gamma_M(G)$. \end{theorem} The last theorem tells us that if there exist 2-cell embeddings of a graph in surfaces of genera $m$ and $n$ with $m\leq n$, then for any integer $k$ with $m\leq k \leq n$, there exists a 2-cell embedding of the graph in a surface with genus $k$. A detailed explanation and proof of this theorem can be found in Richard A. Duke's original paper \cite{Duke}. Using these theorems, we can find a lower bound for the genus of the hypercube graph. Let a graph $G$ be embedded in a surface, and let $f_i$ denote the number of faces which have $i$ edges on their boundary. Then we have $$ 2|E| = \sum_{i} if_i $$ For the hypercube graph, each face must have at least 4 edges on its boundary, since $Q_n$ is simple and bipartite and hence contains no cycles of length less than 4. Therefore, $$ 2|E|= \sum_{i\geq4}if_i\geq \sum_{i} 4 f_i=4|F| $$ which implies $|F|\leq\frac{|E|}{2}$. Now using Euler's formula, \begin{align*} g=\ & 1-\frac{|V|}{2}+\frac{|E|}{2}-\frac{|F|}{2}\\ &\geq1-\frac{|V|}{2}+\frac{|E|}{2}-\frac{|E|}{4} = 1-\frac{|V|}{2}+\frac{|E|}{4} \end{align*} For the hypercube graph $Q_n$, we have $|V|=2^n$ and $|E|=n2^{n-1}$. So using the above inequality we get a lower bound\footnote{See \cite{HararyIneq} for more detail on the inequalities involving the genus of a graph.} for the genus of the hypercube graph, \begin{equation} \gamma(Q_n)\geq 1- 2^{n-1}+n2^{n-3}= 1+(n-4)2^{n-3} \end{equation} To show that this lower bound can be achieved, we will use the real moment-angle complex. In fact, we prove the following theorem. \begin{theorem}\label{main} For $n\geq 3$, the hypercube graph can be embedded in a surface with genus $1+(n-4)2^{n-3}$.
Moreover, this embedding is a 2-cell embedding. \end{theorem} \section{Moment-Angle Complex} \begin{definition} Let $(X,A)$ be a pair of topological spaces and $\mathcal{K}$ be a finite simplicial complex on a set $[m]=\{1,\cdots,m\}$. For each face $\sigma\in\mathcal{K}$, define $$ (X,A)^\sigma = Y_1\times \cdots \times Y_m $$ where $$ Y_i = \begin{cases} X & \mathrm{if }\quad i\in \sigma\\ A & \mathrm{if }\quad i\notin \sigma \end{cases} $$ The moment-angle complex $\mathcal{Z}_\mathcal{K}(X,A)$ corresponding to the pair $(X,A)$ and the simplicial complex $\mathcal{K}$ is defined as the following subspace of the Cartesian product $X^m$: $$ \mathcal{Z}_\mathcal{K}(X,A)=\bigcup_{\sigma\in \mathcal{K}} (X,A)^\sigma $$ \end{definition} For our calculation we will use the pair $(D^1,S^0)$. The space $\mathcal{Z}_\mathcal{K}(D^1,S^0)$ is called the real moment-angle complex corresponding to $\mathcal{K}$. \begin{example} Let $\mathcal{L}_n$ denote the simplicial complex with $n$ discrete points. Then by the above definition \begin{gather*} \mathcal{Z}_{\mathcal{L}_n}(D^1,S^0) = (D^1\times S^0\times \cdots \times S^0)\cup (S^0\times D^1\times\\ \cdots \times S^0)\cup \cdots\cup (S^0\times S^0\times \cdots \times D^1) \end{gather*} It is easy to see that $\mathcal{Z}_{\mathcal{L}_n}(D^1,S^0)$ is homeomorphic to the hypercube graph $Q_n$. \end{example} From the definition of the moment-angle complex, we can prove the following lemma. \begin{lemma} Let $f:\mathcal{L}\hookrightarrow \mathcal{K}$ be an inclusion map of simplicial complexes, where $\mathcal{L}$ and $\mathcal{K}$ both have the same number of vertices. Then there exists an inclusion map of moment-angle complexes, $\mathcal{Z}_f: \mathcal{Z}_{\mathcal{L}}(X,A)\hookrightarrow \mathcal{Z}_\mathcal{K}(X,A)$. \begin{proof} We can consider $\mathcal{L}$ as a subcomplex of $\mathcal{K}$. So any face $\sigma$ of $\mathcal{L}$ is also a face of $\mathcal{K}$.
From this we can conclude that $$(X,A)^\sigma\subset \bigcup_{\tau\in\mathcal{K}}(X,A)^\tau$$ This implies that $\mathcal{Z}_{\mathcal{L}}(X,A)\subset\mathcal{Z}_\mathcal{K}(X,A)$. \end{proof} \end{lemma} \begin{example} Let $\mathcal{K}_n$ be the boundary of an $n$-gon and $\mathcal{L}_n$ be the $n$ vertices of $\mathcal{K}_n$. Using the above lemma, we can conclude that $\mathcal{Z}_{\mathcal{L}_n}(D^1,S^0)=Q_n$ is embedded in $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$. Also, if we consider the complement of $\mathcal{Z}_{\mathcal{L}_n}(D^1,S^0)$ in $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$, we get a collection of open discs $(D^1\times D^1)^\mathrm{o}$, which is straightforward from the definitions. So this embedding of $Q_n$ in $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$ is clearly a 2-cell embedding. \end{example} It is interesting to note that $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$ is a closed compact surface with genus $1+(n-4)2^{n-3}$. This fact was proved by Coxeter in \cite{Cox}. We will give an inductive proof here. \begin{proposition} For $n\geq 3$, $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$ is a closed, compact and orientable surface with genus $1+(n-4)2^{n-3}$. \begin{proof} For brevity, we write $\mathcal{Z}_{\mathcal{K}_n}$ to denote $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$.\\ If $n = 3$, it is straightforward that $\mathcal{Z}_{\mathcal{K}_n}=\partial(D^1\times D^1\times D^1)\approx S^2$. Assume the statement is true for an integer $n\geq 3$, so $\mathcal Z_{\mathcal{K}_n}$ is an orientable surface of genus $1+(n-4)2^{n-3}$.
Also note that \begin{align*} \mathcal Z_{\mathcal{K}_n} = &\ \ \overbrace {D^1\times D^1\times S^0\times \cdots \cdots \cdots\times S^0}^{n\ \mathrm{ factors}} \\ & \cup S^0\times D^1\times D^1\times S^0\times \cdots \times S^0 \\ & \ \ \vdots\\ & \cup S^0\times \cdots \cdots \cdots \times S^0\times D^1\times D^1 \\ & \cup D^1\times S^0 \cdots \cdots \cdots \cdots \times S^0\times D^1 \\ \end{align*} Let $B$ be the last term in the union, that is, $ B = D^1\times S^0 \times \cdots \times S^0\times D^1 \subset \mathcal{Z}_{\mathcal{K}_n}$. So $B$ is $2^{n-2}$ copies of $D^1\times D^1$ on the surface $\mathcal{Z}_{\mathcal{K}_n}$. Now note that $$\partial (B) = (S^0\times S^0 \times\cdots \times S^0\times D^1) \cup (D^1\times S^0 \times\cdots \times S^0\times S^0) $$ and $$ \mathcal{Z}_{\mathcal{K}_{n+1}} = ((\mathcal{Z}_{\mathcal{K}_n}-B)\times S^0)\cup (\partial B\times D^1) $$ This means that to construct $\mathcal{Z}_{\mathcal{K}_{n+1}}$, we first delete the $2^{n-2}$ copies of $D^1\times D^1$ forming $B$ from $\mathcal{Z}_{\mathcal{K}_n}$, then take two copies of $\mathcal{Z}_{\mathcal{K}_n}-B$ and glue in $2^{n-2}$ $1$-handles along the boundary of $B$. Therefore, $$ \mathcal{Z}_{\mathcal{K}_{n+1}} = \mathcal{Z}_{\mathcal{K}_n}\#\mathcal{Z}_{\mathcal{K}_n}\#(2^{n-2}-1) S^1\times S^1 $$ Here one of the $2^{n-2}$ handles is used to form the connected sum $\mathcal{Z}_{\mathcal{K}_n}\#\mathcal{Z}_{\mathcal{K}_n}$, and the remaining $2^{n-2}-1$ handles contribute $2^{n-2}-1$ torus summands. So $\mathcal{Z}_{\mathcal{K}_{n+1}}$ is a closed compact orientable surface with genus $$2(1+(n-4)2^{n-3})+2^{n-2}-1= 1+((n+1)-4)2^{(n+1)-3}.$$ \end{proof} \end{proposition} In the above discussion, we have proved that the hypercube graph $Q_n$ can be embedded in the real moment-angle complex $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$, which is a surface of genus $1+(n-4)2^{n-3}$. Hence Theorem \ref{main} is proved.
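The genus computation above can be cross-checked arithmetically: the connected-sum decomposition gives the recursion $g(n+1) = 2g(n) + 2^{n-2} - 1$, which must reproduce the closed form, and the closed form must be consistent with Euler's formula for the 2-cell embedding of $Q_n$, whose complement consists of $2^{n-2}$ square faces for each of the $n$ edges of the $n$-gon. A minimal sketch:

```python
def genus(n):
    """Closed-form genus of Z_{K_n}(D^1, S^0) for n >= 3."""
    return 1 + (n - 4) * 2 ** (n - 3)

for n in range(3, 20):
    # recursion from Z_{K_{n+1}} = Z_{K_n} # Z_{K_n} # (2^{n-2}-1)(S^1 x S^1)
    assert genus(n + 1) == 2 * genus(n) + 2 ** (n - 2) - 1
    # Euler's formula for the 2-cell embedding of Q_n:
    # |V| = 2^n vertices, |E| = n*2^(n-1) edges, |F| = n*2^(n-2) square faces
    V, E, F = 2 ** n, n * 2 ** (n - 1), n * 2 ** (n - 2)
    assert V - E + F == 2 - 2 * genus(n)

assert genus(3) == 0   # the sphere, matching the base case of the induction
```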
\section{Action of \texorpdfstring{$C_n$}{Cn} on \texorpdfstring{$Q_n$}{Qn} and \texorpdfstring{$\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$}{Zk}} Let $C_n$ denote the cyclic group with $n$ elements generated by $\sigma$. Since $\mathcal{K}_n$ can be considered as the boundary of a regular $n$-gon, we can define an action of $C_n$ on $\mathcal{K}_n$ by rotating the $n$-gon by $2\pi/n$ radians about the centre. If $(i,i+1)$ represents an edge, then this action takes it to the edge $(i+1,i+2)$ (here the vertices are considered modulo $n$). So we can define an action of $C_n$ on $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$ by $\sigma (x_1,\cdots, x_n) = (x_{\sigma(1)},\cdots,x_{\sigma(n)})=(x_2, \cdots, x_n,x_1)$ where $(x_1,\cdots, x_n) \in (D^1,S^0)^\tau$ for a maximal face $\tau\in \mathcal{K}_n$. So $\sigma$ rotates the coordinates of a point in the moment-angle complex $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$. We can define a similar action of $C_n$ on the hypercube graph $Q_n$ by rotating the coordinates of a point. It is straightforward to check that the following diagram commutes. $$ \begin{tikzcd} Q_n \arrow{d} \arrow[r, hook] & \Z_{\K_n}(D^1,S^0) \arrow{d} \\ Q_n/C_n \arrow[r,hook] & \Z_{\K_n}(D^1,S^0)/C_n \end{tikzcd} $$ Therefore the quotient graph $Q_n/C_n$ is embedded in the quotient space $\Z_{\K_n}(D^1,S^0)/C_n$. We will show that the quotient space $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)/C_n$ is also a closed connected orientable surface. Therefore, calculating the genus of the surface $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)/C_n$ suffices to give an upper bound for the genus of the quotient graph $Q_n/C_n$. First we will prove that $\mathcal{Z}_{\mathcal{K}_n}/C_n$ is a closed, connected, compact and orientable surface. Then we will calculate the genus of this surface. Indeed, the following theorem can be found in Ali's thesis \cite{Ali}, Theorem 4.2.2. \begin{theorem} Let $C_m$ be a subgroup of $C_n$, i.e.
$m \mid n$. Then $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)/C_m$ is a closed surface. \end{theorem} It is not surprising that the quotient surface $\mathcal{Z}_{\mathcal{K}}(D^1,S^0)/C_n$ must be an orientable surface. We can prove this by giving a $\Delta$-complex structure on the surface and checking that all the triangles can be oriented so that the induced orientations on any shared edge of two neighboring triangles are compatible. \begin{lemma} The surface $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)/C_n$ is an orientable surface. \begin{proof} First note that the action of $C_n$ permutes the coordinates of a point in a cyclic manner. So we only need to consider the space $D^1\times D^1\times S^0\times \dots \times S^0$, which is actually $2^{n-2}$ copies of $D^1\times D^1$. For each of these squares, we draw a diagonal from the lower left corner to the top right, obtaining a triangulation of the surface, and give a suitable orientation to each triangle. Then we glue these squares along their boundaries under the identification generated by $C_n$. Let $\epsilon_1\epsilon_2\dots \epsilon_n$ represent the coordinate $(\epsilon_1,\dots, \epsilon_n )$ where $\epsilon_i = 0 \text{ or } 1$. For abbreviation, we write directed edges as $(000, 010)$, which represents the directed edge from $(0,0,0)$ to $(0,1,0)$. \textbf{Case 1} ($n=3$). We have two copies of $D^1\times D^1$ (Figure \ref{fig:my_label5}a). Under the action of $C_n$, we have the following identification of edges on the boundary of these two squares. $$ (000,010) \sim (000,100),\quad (001,011) \sim (010,110), $$ $$ (100,110) \sim (001,101),\quad (101,111) \sim (011,111) $$ As shown in Figure \ref{fig:my_label5}, we can orient each of the triangles so that the orientations of the identified edges fit together. \begin{figure} \centering \includegraphics[scale = 0.7]{orientable_3.eps} \caption{Quotient space $\mathcal{Z}_{\mathcal{K}_3}(D^1,S^0)/C_3$.} \label{fig:my_label5} \end{figure} \textbf{Case 2} ($n=4$).
In this case, we have four copies of $D^1\times D^1$, as shown in Figure \ref{fig:my_label6}b. We have the following identification of edges: $$ (0000,0100) \sim (0000,1000),\quad (0010,0110) \sim (0100,1100) , $$ $$ (1000,1100) \sim (0001,1001),\quad (1011,1111) \sim (0111,1111) $$ $$ (1001,1101) \sim (0011,1011),\quad (0011,0111) \sim (0110,1110), $$ $$ (1010,1110) \sim (0101,1101),\quad (0001,0101) \sim (0010,1010) $$ We can see from the figure that the orientations of the triangles are compatible with each other. \begin{figure} \centering \includegraphics[scale = 0.8]{orientable_4.eps} \caption{Quotient space $\mathcal{Z}_{\mathcal{K}_4}(D^1,S^0)/C_4$.} \label{fig:my_label6} \end{figure} \textbf{Case 3} ($n\geq 5$). For $n\geq 5$, we have the identification of edges as follows: $$ (0000x,1000x) \sim (000x0,010x0),\quad (0010x,0110x) \sim (010x0,110x0), $$ $$ (100x0,110x0) \sim (00x01,10x01),\quad (101x1,111x1) \sim (01x11,1x111) $$ $$ (1001x,1101x) \sim (001x1,101x1),\quad (001x1,011x1) \sim (01x10,11x10), $$ $$ (1010x,1110x) \sim (010x1,110x1),\quad (000x1,010x1) \sim (00x10,10x10) $$ Here $x$ represents a string of length $n-4$ whose characters can be $0$ or $1$. So we can see that all these identifications preserve the orientation of the surface. Therefore, $\mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n$ is an orientable surface. \end{proof} \end{lemma} Since the quotient of a compact and connected space is also compact and connected, we have proved the following theorem. \begin{theorem} \label{theorem_quotient} Let $\mathcal{K}$ be the boundary of an $n$-gon. Then $\mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n$ is a closed, compact and orientable surface. \end{theorem} \subsection{Branched covering and genus of \texorpdfstring{$\mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n$}{ZK}} Next, we will prove the following lemma, which gives a formula for the genus of the quotient space.
\begin{lemma} \label{my_lemma1} The genus of $\mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n$ is given by the following formula: \begin{equation} g(\mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n) = 1 + 2^{n-3} - \frac{1}{2n}\sum_{d|n} \phi(d)2^{n/d} \end{equation} where $\phi$ is the Euler totient function. \end{lemma} To prove this lemma, we will use the Riemann-Hurwitz formula for branched coverings. Note that the quotient map $\mathcal{Z}_\mathcal{K}(D^1,S^0) \to \mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n$ would be a covering map if we removed a finite number of points (the corners of each $D^1\times D^1$). So this quotient map is a branched covering. We use the following definition from \cite{Turner}. \begin{definition} Let $X$ and $Y$ be two surfaces. A map $p: X\to Y$ is called a \textit{branched covering} if there exists a codimension 2 subset $S\subset Y$ such that $p: X\setminus p^{-1}(S) \to Y\setminus S$ is a covering map. The set $S$ is called the branch set and the preimage $p^{-1}(S)$ is called the singular set. \end{definition} \begin{definition} Let $p: X\to Y$ be a branched covering of two surfaces where $Y$ is connected. The degree of this branched covering is the number of sheets of the induced covering after removing the branch points and singular points. \end{definition} In \cite{Ali}, it is proved that $\mathcal{Z}_\mathcal{K}(D^1,S^0) \to \mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n$ is a branched covering of degree $n$. Since the quotient is closed, compact and orientable, we can apply the classical Riemann-Hurwitz formula for branched coverings. \begin{theorem}[Riemann-Hurwitz Formula] Let $G$ be a finite group acting on a surface $X$ such that the map $p:X \to X/G $ is a branched covering with branch set $S\subset X/G$. Let $G_y$ represent the isotropy subgroup for a point $y\in X$ and $\chi(X)$ be the Euler characteristic of $X$.
Then \begin{equation} \chi (X) = |G|\cdot \chi(X/G) - \sum_{x\in S} \left(|G|-\frac{|G|}{n_x}\right) \end{equation} with $n_x = |G_y|$ for $y\in p^{-1}(x)$ and $x\in S$. \end{theorem} To apply this formula we need to calculate the cardinality of the isotropy subgroup for each of the singular points in $\mathcal{Z}_\mathcal{K}(D^1,S^0)$. It is straightforward that the action of $C_n$ on a point permutes its coordinates in a cyclic manner. The only points in $\mathcal{Z}_\mathcal{K}(D^1,S^0)$ which have a nontrivial isotropy group have all coordinates equal to $0$ or $1$. Therefore, the cardinality of the isotropy subgroup is related to the number of aperiodic necklaces with 2 colors. We will give some necessary definitions related to aperiodic necklaces and then compute the Euler characteristic of $\mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n$ by using the Riemann-Hurwitz formula. \begin{definition} Let $W$ represent a word of length $n$ over an alphabet of size $k$. We define an action of the cyclic group $C_n=\langle \sigma \rangle$ on $W$ by rotating its characters. For example, if $W = a_1a_2\cdots a_n$ where each $a_i$ is a character from the alphabet, then $\sigma(W) = a_2a_3\cdots a_n a_1$. A word $W$ of length $n$ is called an \textit{aperiodic word} if $W$ has $n$ distinct rotations. \end{definition} \begin{definition} An equivalence class of an aperiodic word under rotation is called a \textit{primitive necklace}. \end{definition} The total number of primitive $n$-necklaces on an alphabet of size $k$, denoted by $M(k,n)$, is given by Moreau's formula \cite{moreau1872permutations}, $$ M(k,n) = \frac{1}{n} \sum_{d|n} \mu (d) k^{n/d} $$ Note that we can deduce Moreau's formula by using the M{\"o}bius inversion formula and the fact that $$ k^n = \sum_{d|n} d M(k,d) $$ \textbf{Total number of necklaces of length $n$ with $k$ colors}: Note that this number is the same as $\sum_{d|n} M(k,d)$, since $M(k,d)$ gives the number of primitive necklaces of length $d$ for each divisor $d$ of $n$.
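As an aside, both counting formulas can be checked against brute-force enumeration of words and their rotations for small parameters (all helper names are ours):

```python
from itertools import product
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    """Moebius function by trial division."""
    if n == 1:
        return 1
    result, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # repeated prime factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def phi(n):
    """Euler totient, by direct count (fine for small n)."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def moreau(k, n):
    """Moreau's formula: number of primitive n-necklaces on k letters."""
    return sum(mobius(d) * k ** (n // d) for d in divisors(n)) // n

def necklaces(k, n):
    """Total number of necklaces: (1/n) * sum_{d|n} phi(d) k^(n/d)."""
    return sum(phi(d) * k ** (n // d) for d in divisors(n)) // n

def brute_counts(k, n):
    """Count rotation orbits (necklaces) and aperiodic orbits by brute force."""
    seen, orbits, primitive = set(), 0, 0
    for w in product(range(k), repeat=n):
        if w in seen:
            continue
        rots = {w[i:] + w[:i] for i in range(n)}
        seen |= rots
        orbits += 1
        if len(rots) == n:        # aperiodic word: all n rotations distinct
            primitive += 1
    return orbits, primitive

for k in (2, 3):
    for n in range(1, 8):
        orbits, primitive = brute_counts(k, n)
        assert moreau(k, n) == primitive
        assert necklaces(k, n) == orbits
        assert k ** n == sum(d * moreau(k, d) for d in divisors(n))
```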
So we have $$ \begin{aligned} \sum_{d|n} M(k,d) &= \sum_{d|n} \frac{1}{d} \sum_{c|d} \mu (c) k^{d/c} \\ &=\frac{1}{n} \sum_{d|n} \sum_{c|d} \frac{n}{d} \mu (d/c) k^{c} \\ &= \frac{1}{n} \sum_{c|n} \sum_{\substack{b|\frac{n}{c}\\ d = bc}} \frac{n}{d} \mu (d/c) k^{c} \\ &= \frac{1}{n}\sum_{c|n} k^c \sum_{b|\frac{n}{c}}\mu(b)\left(\frac{n}{c}\right) \left(\frac{1}{b}\right)\\ &= \frac{1}{n}\sum_{d|n} \phi(d) k^{n/d}\\ \end{aligned} $$ The last line follows since $\sum_{b|\frac{n}{c}}\mu(b)\left(\frac{n}{c}\right) \left(\frac{1}{b}\right) = \phi(n/c)$, where $\phi$ is the Euler totient function. For our calculation, $k=2$, since we are only concerned with words over 2 letters, i.e., necklaces with 2 colors. We abbreviate Moreau's formula in this case as $$ M(n) = \frac{1}{n} \sum_{d|n} \mu (d) 2^{n/d} $$ So we have $\sum_{d|n} M(d)= \frac{1}{n}\sum_{d|n} \phi(d) 2^{n/d}$. \begin{proof}[\textbf{Proof of lemma \ref{my_lemma1}}] Note that $C_n$ acts on a point of $\mathcal{Z}_\mathcal{K}(D^1,S^0)$ by cyclically permuting the coordinates. So $C_n$ acts freely on all but finitely many points, and the coordinates of those exceptional points are all $0$ or $1$. Each point in the branch set can be considered a primitive necklace of length $d$ for some $d \mid n$. Clearly, there are $M(d)$ points in the branch set which have isotropy group $C_{n/d}$. So the summation in the Riemann-Hurwitz formula becomes $$\sum_{x\in S} \left(|G|-\frac{|G|}{n_x}\right) = \sum_{d|n} M(d) (n- \frac{n}{n/d})$$ Now using the Riemann-Hurwitz formula, $$ \begin{aligned} &\chi(\mathcal{Z}_\mathcal{K}(D^1,S^0)) = n \cdot \chi(X/G) - \sum_{d|n} M(d) (n- \frac{n}{n/d})\\ &\implies (4-n)2^{n-2} = n \cdot \chi(X/G) - \sum_{d|n} n M(d) + \sum_{d|n} d M(d)\\ &\implies 2^n -n 2^{n-2} = n \cdot
\chi(X/G) - n \sum_{d|n} M(d) + 2^n\\ &\implies \chi(\mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n) = \sum_{d|n} M(d) - 2^{n-2}\\ &\implies \chi(\mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n) = \frac{1}{n}\sum_{d|n} \phi(d)2^{n/d} - 2^{n-2}\\ &\implies g(\mathcal{Z}_\mathcal{K}(D^1,S^0)/C_n) = 1 + 2^{n-3} - \frac{1}{2n}\sum_{d|n} \phi(d)2^{n/d} \end{aligned} $$ So the quotient space $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)/C_n$ has genus equal to $$ 1 + 2^{n-3} - \frac{1}{2}(\#\textit{ of necklaces of length } n \textit{ with 2 colors}) $$ \end{proof} \begin{example} For $n = 6$, $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)$ is a surface with genus $1+ (6-4)2^{6-3} = 17$. By the above formula, the genus of the quotient space $\mathcal{Z}_{\mathcal{K}_n}(D^1,S^0)/C_n$ is $$ 1+ 2^3 - \frac{1}{2}(\#\textit{ of necklaces of length 6 with 2 colors}) $$ The number of necklaces of length $6$ with 2 colors is exactly $$ \frac{1}{6}\sum_{d|6} \phi(d)2^{6/d} = \frac{1}{6}(1\cdot 2^6+1\cdot 2^3+2\cdot 2^2+2\cdot 2) = 14 $$ So $\mathcal{Z}_{\mathcal{K}_6}(D^1,S^0)/C_6$ has genus $9-14/2 = 2$. \end{example} \subsection{An upper bound for the genus of the quotient graph \texorpdfstring{$Q_n/C_n$}{Qn/Cn}} From the above discussion, we have proved the following lemma. \begin{lemma} The genus of the quotient graph $Q_n/C_n$ has the upper bound \begin{equation} \gamma(Q_n/C_n) \leq 1 + 2^{n-3} - \frac{1}{2n}\sum_{d|n} \phi(d)2^{n/d} \end{equation} \end{lemma} \begin{remark} It can be shown that Theorem \ref{theorem_quotient} is also true for a subgroup $C_m\subset C_n$ where $m\mid n$. In this paper we are not using that result, but I will add a proof of this fact in my PhD thesis. \end{remark} \section*{Acknowledgements} The author is thankful to Professor Frederick Cohen for his insightful discussions on the real moment-angle complex and related topics. Also many thanks to the reviewer, whose careful reading and suggestions have improved the paper. \bibliographystyle{model1-num-names}
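The genus formula of Lemma~\ref{my_lemma1} is easy to evaluate; the following sketch recomputes the $n = 6$ example and checks that the formula yields a non-negative integer for a range of $n$, as it must for a closed orientable surface (helper names are ours):

```python
from math import gcd

def phi(n):
    """Euler totient, by direct count (fine for small n)."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def necklaces2(n):
    """Number of binary necklaces of length n: (1/n) sum_{d|n} phi(d) 2^(n/d)."""
    total = sum(phi(d) * 2 ** (n // d) for d in range(1, n + 1) if n % d == 0)
    assert total % n == 0
    return total // n

def quotient_genus(n):
    """Genus of Z_{K_n}(D^1,S^0)/C_n for n >= 3, per the lemma above.
    Working with 2g = 2 + 2^(n-2) - necklaces2(n) keeps everything integral."""
    twice = 2 ** (n - 2) + 2 - necklaces2(n)
    assert twice % 2 == 0 and twice >= 0
    return twice // 2

assert necklaces2(6) == 14
assert quotient_genus(6) == 2          # matches the example above
assert all(quotient_genus(n) >= 0 for n in range(3, 16))
```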
{ "timestamp": "2019-04-05T02:04:27", "yymm": "1806", "arxiv_id": "1806.10220", "language": "en", "url": "https://arxiv.org/abs/1806.10220", "abstract": "In this paper we demonstrate a calculation to find the genus of the hypercube graph $Q_n$ using real moment-angle complex $\\mathcal{Z}_\\mathcal{K}(D^1,S^0)$ where $\\mathcal{K}$ is the boundary of an $n$-gon. We also calculate an upper bound for the genus of the quotient graph $Q_n/C_n$, where $C_n$ represents the cyclic group with $n$ elements.", "subjects": "Algebraic Topology (math.AT)", "title": "Genus of The Hypercube Graph And Real Moment-Angle Complexes" }
https://arxiv.org/abs/1711.10112
Heuristics for the arithmetic of elliptic curves
This is an introduction to a probabilistic model for the arithmetic of elliptic curves, a model developed in a series of articles of the author with Bhargava, Kane, Lenstra, Park, Rains, Voight, and Wood. We discuss the theoretical evidence for the model, and we make predictions about elliptic curves based on corresponding theorems proved about the model. In particular, the model suggests that all but finitely many elliptic curves over $\mathbb{Q}$ have rank $\le 21$, which would imply that the rank is uniformly bounded.
\section{Introduction}\label{S:introduction} Let $E$ be an elliptic curve over $\mathbb{Q}$ (see \cite{SilvermanAEC2009} for basic definitions). Let $E(\mathbb{Q})$ be the set of rational points on $E$. The group law on $E$ gives $E(\mathbb{Q})$ the structure of an abelian group, and Mordell proved that $E(\mathbb{Q})$ is finitely generated \cite{Mordell1922}; let $\rk E(\mathbb{Q})$ denote its rank. The present survey article, based primarily on articles of the author with Eric Rains~\cite{Poonen-Rains2012-selmer}, with Manjul Bhargava, Daniel M. Kane, Hendrik Lenstra, and Eric Rains~\cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}, and with Jennifer Park, John Voight, and Melanie Matchett Wood~\cite{Park-Poonen-Voight-Wood-preprint}, is concerned with the following question: \begin{question} \label{Q:bounded rank} Is $\rk E(\mathbb{Q})$ bounded as $E$ varies over all elliptic curves over $\mathbb{Q}$? \end{question} Question~\ref{Q:bounded rank} was implicitly asked by Poincar\'e in 1901~\cite{Poincare1901}*{p.~173}, even before $E(\mathbb{Q})$ was known to be finitely generated! Since then, many authors have put forth guesses, and the folklore expectation has flip-flopped at least once; see \cite{Poincare1950-Oeuvres5}*{p.~495, end of footnote~(${}^3$)}, \cite{Honda1960}*{p.~98}, \cite{Cassels1966-diophantine}*{p.~257}, Tate~\cite{Tate1974}*{p.~194}, \cite{Mestre1982}, \cite{Mestre1986}*{II.1.1 and II.1.2}, \cite{Brumer1992}*{Section~1}, \cite{Ulmer2002}*{Conjecture~10.5}, and \cite{Farmer-Gonek-Hughes2007}*{(5.20)}, or see \cite{Park-Poonen-Voight-Wood-preprint}*{Section~3.1} for a summary. The present survey describes a probabilistic model for the arithmetic of elliptic curves, and presents theorems about the model that suggest that $\rk E(\mathbb{Q}) \le 21$ for all but finitely many elliptic curves $E$, and hence that $\rk E(\mathbb{Q})$ is bounded.
Ours is not the first heuristic for boundedness: there is one by Rubin and Silverberg for a family of quadratic twists \cite{Rubin-Silverberg2000}*{Remarks 5.1 and~5.2}, and another by Granville, discussed in \cite{Watkins-et-al2014}*{Section~11} and developed further in \cite{Watkins-discursus}. Interestingly, the latter also suggests a bound of $21$. Modeling ranks directly is challenging because there are few theorems about the distribution of ranks. Also, although there exists extensive computational data that suggests answers to some questions (e.g., \cite{Balakrishnan-Ho-Kaplan-Spicer-Stein-Weigandt2016}), it seems that far more data would be needed to suggest answers to others. Therefore, instead of modeling ranks in isolation, we model ranks, Selmer groups, and Shafarevich--Tate groups simultaneously, so that we can calibrate and corroborate the model using a diverse collection of known results. \section{The arithmetic of elliptic curves} \subsection{Counting elliptic curves by height} Every elliptic curve $E$ over $\mathbb{Q}$ is isomorphic to the projective closure of a unique curve $y^2=x^3+Ax+B$ in which $A$ and $B$ are integers with $4A^3+27B^2 \ne 0$ (the smoothness condition) such that there is no prime $p$ such that $p^4|A$ and $p^6|B$. Let $\mathscr{E}$ be the set of elliptic curves of this form, so $\mathscr{E}$ contains one curve in each isomorphism class. Define the \defi{height} of $E \in \mathscr{E}$ by \[ \height E \colonequals \max(|4A^3|,|27B^2|). \] (This definition is specific to the ground field $\mathbb{Q}$, but it has analogues over other number fields.) Define \[ \mathscr{E}_{\le H} \colonequals \{E \in \mathscr{E} : \height E \le H\}. \] Ignoring constant factors, we have about $H^{1/3}$ integers $A$ with $|4A^3| \le H$, and $H^{1/2}$ integers $B$ with $|27B^2| \le H$. 
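This heuristic count is easy to check empirically: the sketch below enumerates the pairs $(A,B)$ with $\max(|4A^3|,|27B^2|) \le H$ directly from the definition of $\mathscr{E}$, discards singular and non-minimal ones, and compares the total with $H^{5/6}$. The function names are ours, and the fixed list of small primes in the minimality test is an assumption that suffices for the heights used here.

```python
from math import isqrt

def non_minimal(A, B):
    """True if some prime p has p^4 | A and p^6 | B (0 is divisible by everything).
    Checking p in (2, 3, 5, 7) suffices while |A| < 11^4 and |B| < 11^6."""
    for p in (2, 3, 5, 7):
        if (A == 0 or A % p ** 4 == 0) and (B == 0 or B % p ** 6 == 0):
            return True
    return False

def count_curves(H):
    """#E_{<=H}: count y^2 = x^3 + Ax + B with max(|4A^3|, |27B^2|) <= H,
    4A^3 + 27B^2 != 0, and no prime p with p^4 | A and p^6 | B."""
    Amax = 0
    while 4 * (Amax + 1) ** 3 <= H:
        Amax += 1
    Bmax = isqrt(H // 27)
    total = 0
    for A in range(-Amax, Amax + 1):
        for B in range(-Bmax, Bmax + 1):
            if 4 * A ** 3 + 27 * B ** 2 == 0 or non_minimal(A, B):
                continue
            total += 1
    return total

H = 10 ** 6
# the limiting ratio is 2^(4/3) * 3^(-3/2) / zeta(10), roughly 0.4845
print(count_curves(H) / H ** (5 / 6))
```

Since the error term is only $o(1)$, the ratio at moderate $H$ agrees with the constant loosely rather than to many digits.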
A positive fraction of such pairs $(A,B)$ satisfy the smoothness condition and divisibility conditions, so one should expect $\#\mathscr{E}_{\le H}$ to be about $H^{1/3} H^{1/2} = H^{5/6}$. In fact, an elementary sieve argument \cite{Brumer1992}*{Lemma~4.3} proves the following: \begin{proposition} \label{P:elliptic curves of bounded height} We have \[ \#\mathscr{E}_{\le H} = (2^{4/3} 3^{-3/2} \zeta(10)^{-1} + o(1)) \; H^{5/6} \] as $H \to \infty$. \end{proposition} Define the \defi{density} of a subset $S \subseteq \mathscr{E}$ as \[ \lim_{H\rightarrow\infty} \frac{\#(S\cap \mathscr{E}_{\le H})}{\# \mathscr{E}_{\le H}}, \] if the limit exists. For example, it is a theorem that 100\% of elliptic curves $E$ over $\mathbb{Q}$ have no nontrivial rational torsion points; this statement is to be interpreted as saying that the density of the set $S \colonequals \{E \in \mathscr{E}: E(\mathbb{Q})_{{\operatorname{tors}}} = 0\}$ is $1$ (even though there do exist $E$ with $E(\mathbb{Q})_{{\operatorname{tors}}} \ne 0$). \subsection{Elliptic curves over local fields} Our model will be inspired by theorems and conjectures about the arithmetic of elliptic curves over $\mathbb{Q}$. But before studying elliptic curves over $\mathbb{Q}$, we should thoroughly understand elliptic curves over local fields. Let $\mathbb{Q}_v$ be the completion of $\mathbb{Q}$ at a place $v$. There is a natural injective homomorphism $\inv \colon {\operatorname{H}}^2(\mathbb{Q}_v,\mathbb{G}_m) \to \mathbb{Q}/\mathbb{Z}$ that is an isomorphism if $v$ is nonarchimedean. Let $E$ be an elliptic curve over $\mathbb{Q}_v$. Fix $n \ge 1$. Taking Galois cohomology in the exact sequence \[ 0 \longrightarrow E[n] \longrightarrow E \stackrel{n}\longrightarrow E \longrightarrow 0 \] yields a homomorphism $E(\mathbb{Q}_v)/n E(\mathbb{Q}_v) \to {\operatorname{H}}^1(\mathbb{Q}_v,E[n])$. Let $W_v$ be its image. 
If $v$ is a nonarchimedean place not dividing $n$ and $E$ has good reduction, then $W_v$ equals the subgroup of unramified classes in ${\operatorname{H}}^1(\mathbb{Q}_v,E[n])$ \cite{Poonen-Rains2012-selmer}*{Proposition~4.13}. The theory of the Heisenberg group \cite{MumfordTheta3}*{pp.~44--46} yields an exact sequence \[ 1 \longrightarrow \mathbb{G}_m \longrightarrow \mathcal{H} \longrightarrow E[n] \longrightarrow 1, \] which induces a map of sets \[ q_v \colon {\operatorname{H}}^1(\mathbb{Q}_v,E[n]) \longrightarrow {\operatorname{H}}^2(\mathbb{Q}_v,\mathbb{G}_m) \stackrel{\inv}\hookrightarrow \mathbb{Q}/\mathbb{Z}. \] It turns out that $q_v$ is a quadratic form in the sense that $q_v(x+y)-q_v(x)-q_v(y)$ is bi-additive~\cite{Zarhin1974}*{\S2}. Moreover, $q_v|_{W_v}=0$ \cite{O'Neil2002}*{Proposition~2.3}. In fact, using Tate local duality one can show that $W_v$ is a maximal isotropic subgroup of ${\operatorname{H}}^1(\mathbb{Q}_v,E[n])$ with respect to $q_v$ \cite{Poonen-Rains2012-selmer}*{Proposition~4.11}. \subsection{Selmer groups and Shafarevich--Tate groups} Now let $E$ be an elliptic curve over $\mathbb{Q}$. Let $\mathbf{A} = \prod'_v (\mathbb{Q}_v,\mathbb{Z}_v)$ be the ad\`ele ring of $\mathbb{Q}$; here $v$ ranges over the nontrivial places of $\mathbb{Q}$. Write $E(\mathbf{A})/nE(\mathbf{A})$ for $\prod_v E(\mathbb{Q}_v)/nE(\mathbb{Q}_v)$, and write ${\operatorname{H}}^1(\mathbf{A},E[n])$ for the restricted product $\prod'_v ({\operatorname{H}}^1(\mathbb{Q}_v,E[n]),W_v)$. We have a commutative diagram \[ \xymatrix{ E(\mathbb{Q})/nE(\mathbb{Q}) \ar[r] \ar[d] & {\operatorname{H}}^1(\mathbb{Q},E[n]) \ar[d]^{\beta} \\ E(\mathbf{A})/nE(\mathbf{A}) \ar[r]^{\alpha} & {\operatorname{H}}^1(\mathbf{A},E[n]). \\ } \] The \defi{$n$-Selmer group} is defined by $\Sel_n E \colonequals \beta^{-1}(\im \alpha) \subseteq {\operatorname{H}}^1(\mathbb{Q},E[n])$. 
(This is equivalent to the classical definition; we have only replaced $\prod_v {\operatorname{H}}^1(\mathbb{Q}_v,E[n])$ with a subgroup ${\operatorname{H}}^1(\mathbf{A},E[n])$ into which $\alpha$ and $\beta$ map.) The reason for defining $\Sel_n E$ is that it is a computable finite upper bound for (the image of) $E(\mathbb{Q})/nE(\mathbb{Q})$. On the other hand, the \defi{Shafarevich--Tate group} is defined by \[ \Sha = \Sha(E) \colonequals \ker\left( {\operatorname{H}}^1(\mathbb{Q},E) \to \prod_v {\operatorname{H}}^1(\mathbb{Q}_v,E) \right). \] It is a torsion abelian group with an alternating pairing \[ [\;,\;] \colon \Sha \times \Sha \to \mathbb{Q}/\mathbb{Z} \] defined by Cassels. Conjecturally, $\Sha$ is finite; in this case, $[\;,\;]$ is nondegenerate and $\#\Sha$ is a square \cite{Cassels1962-IV}. The definitions easily yield an exact sequence \begin{equation} \label{E:Selmer-Sha} 0 \longrightarrow \frac{E(\mathbb{Q})}{nE(\mathbb{Q})} \longrightarrow \Sel_n E \longrightarrow \Sha[n] \longrightarrow 0, \end{equation} so $\Sha[n]$ is measuring the difference between $\Sel_n E$ and the group $E(\mathbb{Q})/nE(\mathbb{Q})$ it is trying to approximate. Each group in \eqref{E:Selmer-Sha} decomposes according to the factorization of $n$ into powers of distinct primes, so let us restrict to the case in which $n=p^e$ for some prime $p$ and nonnegative integer $e$. Taking the direct limit over $e$ yields an exact sequence \[ 0 \longrightarrow E(\mathbb{Q}) \tensor \frac{\mathbb{Q}_p}{\mathbb{Z}_p} \longrightarrow \Sel_{p^\infty} E \longrightarrow \Sha[p^\infty] \longrightarrow 0 \] of $\mathbb{Z}_p$-modules in which $\Sel_{p^\infty} E \colonequals \varinjlim \Sel_{p^e} E$ and $\Sha[p^\infty] \colonequals \Union_{e \ge 0} \Sha[p^e]$. 
Moreover, one can show that if $E(\mathbb{Q})[p]=0$ (as holds for 100\% of curves), then $\Sel_{p^e} E \to (\Sel_{p^\infty} E)[p^e]$ is an isomorphism (cf.~\cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Proposition~5.9(b)}), so no information about the individual $p^e$-Selmer groups has been lost in passing to the limit. \subsection{The Selmer group as an intersection of maximal isotropic direct summands} \label{S:Selmer group is intersection} If $\xi = (\xi_v) \in {\operatorname{H}}^1(\mathbf{A},E[n])$, then for all but finitely many $v$ we have $\xi_v \in W_v$ and hence $q_v(\xi_v)=0$, so we may define $Q(\xi) \colonequals \sum_v q_v(\xi_v)$. This defines a quadratic form $Q \colon {\operatorname{H}}^1(\mathbf{A},E[n]) \to \mathbb{Q}/\mathbb{Z}$. \begin{theorem} \label{T:intersection of maximal isotropic subgroups} \hfill \begin{enumerate}[\upshape (a)] \item Each of $\im \alpha$ and $\im \beta$ is a maximal isotropic subgroup of ${\operatorname{H}}^1(\mathbf{A},E[n])$ with respect to $Q$ \cite{Poonen-Rains2012-selmer}*{Theorem~4.14(a)}. \item\label{I:beta injective} If $n$ is prime or $G_\mathbb{Q} \to \operatorname{GL}_2(\mathbb{Z}/n\mathbb{Z})$ is surjective then $\beta$ is injective. (See \cite{Poonen-Rains2012-selmer}*{Proposition~3.3(e)} and \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Proposition~6.1}.) \end{enumerate} \end{theorem} By definition, $\beta(\Sel_n E) = (\im \alpha) \intersect (\im \beta)$. Thus, under either hypothesis in~\eqref{I:beta injective}, $\Sel_n E$ is isomorphic to an intersection of maximal isotropic subgroups of ${\operatorname{H}}^1(\mathbf{A},E[n])$. Moreover, $\im \alpha$ is a direct summand of ${\operatorname{H}}^1(\mathbf{A},E[n])$ \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Corollary~6.8}. It is conjectured that $\im \beta$ is a direct summand as well, at least for asymptotically 100\% of elliptic curves over $\mathbb{Q}$ \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Conjecture~6.9}, and it could hold for all of them. 
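The finite-dimensional picture underlying this intersection model is concrete enough to explore by brute force. The following sketch (an illustration only, not part of any cited argument) enumerates the maximal isotropic subspaces of the hyperbolic quadratic space $(\mathbb{F}_2^4,\, x_1y_1+x_2y_2)$, i.e.\ the case $n=2$, $p=2$, and tabulates the dimension of the intersection of an ordered pair of them:

```python
from collections import Counter
from itertools import product

# Hyperbolic quadratic space V = F_2^4, coordinates v = (x1, x2, y1, y2),
# with Q(v) = x1*y1 + x2*y2 (mod 2).
def Q(v):
    return (v[0] * v[2] + v[1] * v[3]) % 2

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

vectors = list(product((0, 1), repeat=4))  # starts with the zero vector

# A 2-dimensional subspace {0, u, v, u+v} is maximal isotropic iff Q
# vanishes on all of its elements: Q(u+v) - Q(u) - Q(v) is the associated
# bilinear pairing, so total isotropy of the subspace comes for free.
lagrangians = set()
for u in vectors[1:]:
    for v in vectors[1:]:
        if v != u:
            W = frozenset({vectors[0], u, v, xor(u, v)})
            if all(Q(w) == 0 for w in W):
                lagrangians.add(W)
print(len(lagrangians))  # 6

# Dimension of Z ∩ W over all ordered pairs of maximal isotropics.
dist = Counter()
for Z in lagrangians:
    for W in lagrangians:
        size = len(Z & W)                 # the intersection is a subspace
        dist[size.bit_length() - 1] += 1  # |Z ∩ W| = 2^dim
print(dict(dist))  # {0: 12, 1: 18, 2: 6} (up to key order)
```

There are $6$ maximal isotropic subspaces (the two rulings of a quadric surface), and the intersection has dimension $0,1,2$ with probabilities $\tfrac13,\tfrac12,\tfrac16$; as $n$ grows, the analogous distributions converge to the conjectural distribution of $\dim_{\mathbb{F}_p} \Sel_p E$ discussed in the next section.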
\subsection{The Birch and Swinnerton-Dyer conjecture} See \cite{Wiles2006} for an introduction to the Birch and Swinnerton-Dyer conjecture more detailed than what we present here, and see \cite{Stein-Wuthrich2013}*{Section~8} for some more recent advances towards it. Let $E \in \mathscr{E}$. To $E$ one can associate its \defi{$L$-function} $L(E,s)$, a holomorphic function initially defined when $\re s$ is sufficiently large, but known to extend to a holomorphic function on $\mathbb{C}$ (this is proved using the modularity of $E$). Just as the Dirichlet analytic class number formula expresses the residue at $s=1$ of the Dedekind zeta function of a number field $k$ in terms of the arithmetic of $k$, the Birch and Swinnerton-Dyer conjecture expresses the leading term in the Taylor expansion of $L(E,s)$ around $s=1$ in terms of the arithmetic of $E$. We will state it only in the case that $\rk E(\mathbb{Q})=0$ since that is all that we will need. In addition to the quantities previously associated to $E$, we need \begin{itemize} \item the \defi{real period} $\Omega$, defined as the integral over $E(\mathbb{R})$ of a certain $1$-form; and \item the \defi{Tamagawa number} $c_p$ for each finite prime $p$, a $p$-adic volume analogous to the real period. \end{itemize} Also define \[ \Sha_0(E) \colonequals \begin{cases} \#\Sha(E), &\textup{if $\rk E(\mathbb{Q}) = 0$;} \\ 0, &\textup{if $\rk E(\mathbb{Q}) > 0$.} \end{cases} \] \begin{conjecture}[The rank~0 part of the Birch and Swinnerton-Dyer conjecture] \label{C:BSD} If $E \in \mathscr{E}$, then \begin{equation} \label{E:BSD} L(E,1) = \frac{\Sha_0 \; \Omega \; \prod_p c_p}{\#E(\mathbb{Q})_{{\operatorname{tors}}}^2}. 
\end{equation} \end{conjecture} \begin{remark} In the case where the rank $r \colonequals \rk E(\mathbb{Q})$ is greater than $0$, Conjecture~\ref{C:BSD} states only that $L(E,1)=0$, whereas the full Birch and Swinnerton-Dyer conjecture predicts that $\ord_{s=1} L(E,s)=r$ and predicts the leading coefficient in the Taylor expansion of $L(E,s)$ at $s=1$. \end{remark} Let $H = \height E$. Following Lang~\cite{Lang1983} (see also \cite{Goldfeld-Szpiro1995}, \cite{deWeger1998}, \cite{Hindry2007}, \cite{Watkins2008-ExpMath}, and \cite{Hindry-Pacheco2016}), we estimate the typical size of $\Sha_0$ by estimating all the other quantities in \eqref{E:BSD} as $H \to \infty$; see \cite{Park-Poonen-Voight-Wood-preprint}*{Section~6} for details. The upshot is that if we average over $E$ and ignore factors that are $H^{o(1)}$, then \eqref{E:BSD} simplifies to $1 \sim \Sha_0 \, \Omega$ and we obtain $\Sha_0 \sim \Omega^{-1} \sim H^{1/12}$. More precisely: \begin{itemize} \item $\prod_p c_p = H^{o(1)}$ \cite{deWeger1998}*{Theorem~3}, \cite{Hindry2007}*{Lemma~3.5}, \cite{Watkins2008-ExpMath}*{pp.~114--115}, \cite{Park-Poonen-Voight-Wood-preprint}*{Lemma~6.2.1}; \item $\#E(\mathbb{Q})_{{\operatorname{tors}}} \le 16$ \cite{Mazur1977}; \item $\Omega = H^{-1/12 + o(1)}$ \cite{Hindry2007}*{Lemma 3.7}, \cite{Park-Poonen-Voight-Wood-preprint}*{Corollary~6.1.3}; and \item the Riemann hypothesis for $L(E,s)$ implies that $L(E,1) \le H^{o(1)}$ \cite{Iwaniec-Sarnak2000}*{p.~713}. In fact, it is reasonable to expect $\underset{E \in \mathscr{E}_{\le H}}{\Average}\; L(E,1) \asymp 1$. (The symbol $\asymp$ means that the left side is bounded above and below by positive constants times the right side.) \end{itemize} Thus we expect \begin{equation} \label{E:average Sha0} \underset{E \in \mathscr{E}_{\le H}}{\Average}\; \Sha_0(E) = H^{1/12+o(1)} \end{equation} as $H \to \infty$. 
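As an aside, Proposition~\ref{P:elliptic curves of bounded height} is easy to corroborate numerically. The sketch below (an illustration; the hard-coded prime list is an assumption that suffices for the small height bound used here) counts $\mathscr{E}_{\le H}$ directly and compares it with the leading term $2^{4/3} 3^{-3/2} \zeta(10)^{-1} H^{5/6}$:

```python
from math import pi, isqrt

def count_curves(H):
    """Count minimal curves y^2 = x^3 + Ax + B with
    height max(|4A^3|, |27B^2|) <= H."""
    small_primes = [2, 3, 5, 7, 11, 13]  # enough for the bounds below
    A_max = round((H / 4) ** (1 / 3))
    while 4 * A_max ** 3 > H:            # guard against float rounding
        A_max -= 1
    B_max = isqrt(H // 27)
    count = 0
    for A in range(-A_max, A_max + 1):
        for B in range(-B_max, B_max + 1):
            if 4 * A ** 3 + 27 * B ** 2 == 0:   # smoothness fails
                continue
            if any(A % p ** 4 == 0 and B % p ** 6 == 0
                   for p in small_primes):      # not minimal
                continue
            count += 1
    return count

H = 10 ** 6
zeta10 = pi ** 10 / 93555                # zeta(10) in closed form
predicted = 2 ** (4 / 3) * 3 ** (-3 / 2) / zeta10 * H ** (5 / 6)
ratio = count_curves(H) / predicted
print(ratio)  # close to 1
```

Already at the modest height $H = 10^6$ the ratio is within a few percent of $1$.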
\section{Modeling elliptic curves over \texorpdfstring{$\mathbb{Q}$}{Q}} \subsection{Modeling the \texorpdfstring{$p$}{p}-Selmer group} According to Theorem~\ref{T:intersection of maximal isotropic subgroups}, $\Sel_p E$ is isomorphic to an intersection of maximal isotropic subspaces in an infinite-dimensional quadratic space over $\mathbb{F}_p$. So one might ask whether one could make sense of choosing maximal isotropic subspaces in an infinite-dimensional quadratic space at random, so that one could intersect two of them to obtain a space whose distribution is conjectured to be that of $\Sel_p E$. This can be done by equipping an infinite-dimensional quadratic space with a locally compact topology \cite{Poonen-Rains2012-selmer}*{Section~2}, but the resulting distribution can be obtained more simply by working within a $2n$-dimensional quadratic space and taking the limit as $n \to \infty$. Now every nondegenerate $2n$-dimensional quadratic space with a maximal isotropic subspace is isomorphic to the quadratic space $V_n=(\mathbb{F}_p^{2n},Q)$, where $Q$ is the quadratic form \[ Q(x_1,\ldots,x_n,y_1,\ldots,y_n) \colonequals x_1 y_1 + \cdots + x_n y_n. \] Therefore we conjecture that the distribution of $\dim_{\mathbb{F}_p} \Sel_p E$ as $E$ varies over $\mathscr{E}$ equals the limit as $n \to \infty$ of the distribution of the dimension of the intersection of two maximal isotropic subspaces in $V_n$ chosen uniformly at random from the finitely many possibilities. The limit exists and can be computed explicitly; this yields the formula on the right in the following: \begin{conjecture}[\cite{Poonen-Rains2012-selmer}*{Conjecture~1.1}] For each $s \ge 0$, the density of $\{E \in \mathscr{E}: \dim_{\mathbb{F}_p} \Sel_p E = s\}$ equals \begin{equation} \label{E:Sel_p distribution} \prod_{j \ge 0} (1+p^{-j})^{-1} \prod_{j=1}^s \frac{p}{p^j-1}. \end{equation} \end{conjecture} \begin{remark} Let $E_d$ be the elliptic curve $dy^2=x^3-x$ over $\mathbb{Q}$. 
Heath-Brown proved that the density of integers $d$ such that $\dim_{\mathbb{F}_2} \Sel_2 E_d - 2 = s$ equals \[ \prod_{j \ge 0} (1+2^{-j})^{-1} \prod_{j=1}^s \frac{2}{2^j-1}, \] matching~\eqref{E:Sel_p distribution} for $p=2$ \cites{Heath-Brown1993,Heath-Brown1994}. (The $-2$ is there to remove the ``causal'' contribution to $\dim \Sel_2 E_d$ coming from $E_d(\mathbb{Q})[2]$.) As we have explained, this result is a natural consequence of the theory of Section~\ref{S:Selmer group is intersection}, but in fact Heath-Brown's result came first and the theory was reverse engineered from it \cite{Poonen-Rains2012-selmer}! Heath-Brown's result was extended by Swinnerton-Dyer \cite{Swinnerton-Dyer2008} and Kane \cite{Kane2013} to the family of quadratic twists of any $E \in \mathscr{E}$ with $E[2] \subseteq E(\mathbb{Q})$ and no cyclic $4$-isogeny. \end{remark} \subsection{Modeling the \texorpdfstring{$p^e$}{p to the e}-Selmer group} If $p$ is replaced by $p^e$, then we should replace $\mathbb{F}_p^{2n}$ by $V_n \colonequals ((\mathbb{Z}/p^e\mathbb{Z})^{2n},Q)$. But now there are different types of maximal isotropic subgroups up to isomorphism. For example, if $e=2$, then $(\mathbb{Z}/p^2\mathbb{Z})^n \times \{0\}^n$ and $(p\mathbb{Z}/p^2\mathbb{Z})^{2n}$ are both maximal isotropic subgroups; of these, only the first is a direct summand of $V_n$. In what follows, we will use only direct summands, for reasons to be explained at the end of this section. \begin{conjecture} \label{C:Sel p^e} If we intersect two random maximal isotropic direct summands of $V_n \colonequals ((\mathbb{Z}/p^e\mathbb{Z})^{2n},Q)$ and take the limit as $n \to \infty$ of the resulting distribution, we obtain the distribution of $\Sel_{p^e} E$ as $E$ varies over $\mathscr{E}$. \end{conjecture} For $m \ge 1$, let $\sigma(m)$ denote the sum of the positive divisors of $m$. 
One can prove that the limit as $n \to \infty$ of the average size of the random intersection equals $\sigma(p^e)$, and there is an analogous result for positive integers $m$ not of the form $p^e$ \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Proposition~5.20}. This suggests the following: \begin{conjecture}[\cite{Poonen-Rains2012-selmer}*{Conjecture~1(b)}, \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Section~5.7}, \cite{Bhargava-Shankar-4selmer}*{Conjecture~4}] \label{C:average Sel_m} For each positive integer $m$, \[ \underset{E \in \mathscr{E}}\Average\; \#\Sel_m E = \sigma(m). \] (The average is interpreted as the limit as $H \to \infty$ of the average over $\mathscr{E}_{\le H}$.) \end{conjecture} One could similarly compute the higher moments of the conjectural distribution; see \cite{Poonen-Rains2012-selmer}*{Proposition~2.22(a)} and \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Section~5.5}. There are several reasons why insisting upon direct summands in Conjecture~\ref{C:Sel p^e} seems right: \begin{itemize} \item Conjecturally, both of the maximal isotropic subgroups arising in the arithmetic of the elliptic curve \emph{are} direct summands: see the last paragraph of Section~\ref{S:Selmer group is intersection}. \item Requiring direct summands is essentially the only way to make the model for $\Sel_{p^e} E$ consistent with the model for $\Sel_p E$, given that $\Sel_p E \simeq (\Sel_{p^e} E)[p]$ for 100\% of curves \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Remark~6.12}. \item It leads to Conjecture~\ref{C:average Sel_m}, which has been proved for $m \le 5$ \cites{Bhargava-Shankar-2selmer,Bhargava-Shankar-3selmer,Bhargava-Shankar-4selmer,Bhargava-Shankar-5selmer}. 
\end{itemize} \subsection{Modeling the \texorpdfstring{$p^\infty$}{p to the infty}-Selmer group and the Shafarevich--Tate group} Choosing a maximal isotropic direct summand of $((\mathbb{Z}/p^e\mathbb{Z})^{2n},Q)$ compatibly for all $e$ is equivalent to choosing a maximal isotropic direct summand of the quadratic $\mathbb{Z}_p$-module $V_n \colonequals (\mathbb{Z}_p^{2n},Q)$. This observation will lead us to a process that models $\Sel_{p^e} E$ for all $e$ simultaneously, or equivalently, that models $\Sel_{p^\infty} E$ directly. To simplify notation, for any $\mathbb{Z}_p$-module $X$, let $X'$ denote $X \tensor \frac{\mathbb{Q}_p}{\mathbb{Z}_p}$; if $X$ is a $\mathbb{Z}_p$-submodule of $V_n$, then $X'$ is a $\mathbb{Z}_p$-submodule of $V_n'$. Now choose maximal isotropic direct summands $Z$ and $W$ of $V_n$ with respect to the measure arising from taking the inverse limit over $e$ of the uniform measure on the set of maximal isotropic direct summands of $(\mathbb{Z}/p^e\mathbb{Z})^{2n}$ \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Sections 2 and~4}; then we conjecture that the limiting distribution of $Z' \intersect W'$ as $n \to \infty$ equals the distribution of $\Sel_{p^\infty} E$ as $E$ varies over $\mathscr{E}$. Again, the point is that this limiting distribution is compatible with the previously conjectured distribution for $\Sel_{p^e} E$ for each nonnegative integer $e$, and the conjecture for $\Sel_{p^e} E$ was based on \emph{theorems} about Selmer groups of elliptic curves (see Section~\ref{S:Selmer group is intersection}). 
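These conjectural distributions admit quick numerical consistency checks. The following sketch evaluates the limiting distribution \eqref{E:Sel_p distribution} of $s = \dim_{\mathbb{F}_p} \Sel_p E$ and verifies that its total mass is $1$ and that the resulting average of $\#\Sel_p E = p^s$ is $p+1 = \sigma(p)$, as Conjecture~\ref{C:average Sel_m} requires:

```python
def selmer_dist(p, smax=40, terms=200):
    """Limiting distribution of s = dim_{F_p} Sel_p E:
    P(s) = c * prod_{j=1}^{s} p/(p^j - 1), where
    c = prod_{j>=0} (1 + p^{-j})^{-1} (truncated at `terms` factors)."""
    c = 1.0
    for j in range(terms):
        c /= 1 + p ** (-j)
    probs = []
    P = c
    for s in range(smax + 1):
        probs.append(P)
        P *= p / (p ** (s + 1) - 1)  # pass from P(s) to P(s + 1)
    return probs

for p in (2, 3, 5):
    probs = selmer_dist(p)
    total = sum(probs)
    mean_size = sum(pr * p ** s for s, pr in enumerate(probs))
    print(p, round(total, 6), round(mean_size, 6))
    # numerically, total is 1 and mean_size is p + 1 = sigma(p)
```

Truncating at $s \le 40$ is harmless, since $P(s)$ decays roughly like $p^{-s^2/2}$.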
Even better, using the same ingredients, we can model $\rk E(\mathbb{Q})$ and $\Sha[p^\infty]$ at the same time: \begin{conjecture}[\cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Conjecture~1.3}] \label{C:RST} If we choose maximal isotropic direct summands $Z$ and $W$ of $(\mathbb{Z}_p^{2n},Q)$ at random as above, and we define \[ R \colonequals (Z \intersect W)', \qquad S \colonequals Z' \intersect W', \qquad T \colonequals S/R, \] then the limit as $n \to \infty$ of the distribution of the exact sequence \[ 0 \longrightarrow R \longrightarrow S \longrightarrow T \longrightarrow 0 \] equals the distribution of the sequence \[ 0 \longrightarrow E(\mathbb{Q}) \tensor \frac{\mathbb{Q}_p}{\mathbb{Z}_p} \longrightarrow \Sel_{p^\infty} E \longrightarrow \Sha[p^\infty] \longrightarrow 0 \] as $E$ varies over $\mathscr{E}$. \end{conjecture} There are several pieces of indirect evidence for the rank and $\Sha$ predictions of Conjecture~\ref{C:RST}: \begin{itemize} \item Each of $R$ and $E(\mathbb{Q}) \tensor \frac{\mathbb{Q}_p}{\mathbb{Z}_p}$ is isomorphic to $(\mathbb{Q}_p/\mathbb{Z}_p)^r$ for some nonnegative integer $r$, called the \defi{$\mathbb{Z}_p$-corank} of the module. \item The $\mathbb{Z}_p$-corank of $R$ is $0$ or $1$, with probability $1/2$ each \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Proposition~5.6}. Likewise, a variant of a conjecture of Goldfeld (see \cite{Goldfeld1979}*{Conjecture~B} and \cites{Katz-Sarnak1999a,Katz-Sarnak1999b}) predicts that $\rk E(\mathbb{Q})$ (which equals the $\mathbb{Z}_p$-corank of $E(\mathbb{Q}) \tensor \frac{\mathbb{Q}_p}{\mathbb{Z}_p}$) is $0$, $1$, $\ge 2$ with densities $1/2$, $1/2$, $0$, respectively. \item The group $T$ is finite and carries a nondegenerate alternating pairing with values in $\mathbb{Q}_p/\mathbb{Z}_p$, just as $\Sha[p^\infty]$ conjecturally does (the $p$-part of the Cassels pairing). In particular, $\#T$ is a square. 
\item Smith has proved a result analogous to Conjecture~\ref{C:RST} for the family of quadratic twists of any $E \in \mathscr{E}$ with $E[2] \subseteq E(\mathbb{Q})$ and no cyclic $4$-isogeny \cite{Smith-preprint}. \end{itemize} Further evidence is that there are in fact \emph{three} distributions that have been conjectured to be the distribution of $\Sha[p^\infty]$ as $E$ varies over rank~$r$ elliptic curves, and these three distributions coincide \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Theorems 1.6(c) and~1.10(b)}. This is so even in the cases with $r \ge 2$, which conjecturally occur with density $0$. For a fixed nonnegative integer $r$, the three distributions are as follows: \begin{enumerate}[\upshape 1.] \item A distribution defined by Delaunay \cites{Delaunay2001,Delaunay2007,Delaunay-Jouhet2014a}, who adapted the Cohen--Lenstra heuristics for class groups \cite{Cohen-Lenstra1984}. \item The limit as $n \to \infty$ of the distribution of $T \colonequals (Z' \intersect W')/(Z \intersect W)'$ when $(Z,W)$ is sampled from the set of pairs of maximal isotropic direct summands of $(\mathbb{Z}_p^{2n},Q)$ satisfying $\rk_{\mathbb{Z}_p}(Z \intersect W)=r$. (This set of pairs is the set of $\mathbb{Z}_p$-points of a scheme of finite type, so it carries a natural measure \cite{Bhargava-Kane-Lenstra-Poonen-Rains2015}*{Sections 2 and~4}.) \item The limit as $n \to \infty$ through integers of the same parity as $r$ of the distribution of $(\coker A)_{{\operatorname{tors}}}$ when $A$ is sampled from the space of matrices in $\operatorname{M}_n(\mathbb{Z}_p)$ satisfying $A^T=-A$ and $\rk_{\mathbb{Z}_p}(\ker A)=r$; here $\ker A$ and $\coker A$ are defined by viewing $A$ as a $\mathbb{Z}_p$-linear homomorphism $\mathbb{Z}_p^n \to \mathbb{Z}_p^n$. 
\end{enumerate} The last of these is inspired by the theorem of Friedman and Washington \cite{Friedman-Washington1989} that for each odd prime $p$, the limit as $n \to \infty$ of the distribution of $\coker A$ for $A \in \operatorname{M}_n(\mathbb{Z}_p)$ chosen at random with respect to Haar measure equals the distribution conjectured by Cohen and Lenstra to be the distribution of the $p$-primary part of the class group of a varying imaginary quadratic field. \subsection{Modeling the rank of an elliptic curve} In the previous section, we saw in the third construction that conditioning on $\rk_{\mathbb{Z}_p}(\ker A)=r$ yields the conjectural distribution of $\Sha[p^\infty]$ for rank~$r$ curves. The simplest possible explanation for this would be that sampling $A$ at random from $\operatorname{M}_n(\mathbb{Z}_p)_{\alt} \colonequals \{ A \in \operatorname{M}_n(\mathbb{Z}_p) : A^T=-A\}$ \emph{without} conditioning on $\rk_{\mathbb{Z}_p}(\ker A)$ causes $\rk_{\mathbb{Z}_p}(\ker A)$ to be distributed like the rank of an elliptic curve. What is the distribution of $\rk_{\mathbb{Z}_p}(\ker A)$? If $n$ is even, then the locus in $\operatorname{M}_n(\mathbb{Z}_p)_{\alt}$ defined by $\det A=0$ is the set of $\mathbb{Z}_p$-points of a hypersurface, which has Haar measure~$0$, so $\rk_{\mathbb{Z}_p}(\ker A) = 0$ with probability~$1$. If $n$ is odd, however, then $\rk_{\mathbb{Z}_p}(\ker A)$ cannot be $0$, because $n-\rk_{\mathbb{Z}_p}(\ker A)$ is the rank of $A$, which is even for an alternating matrix. For $n$ odd, it turns out that $\rk_{\mathbb{Z}_p}(\ker A)=1$ with probability~$1$. If we imagine that $n$ was chosen large and with random parity, then the result is that $\rk_{\mathbb{Z}_p}(\ker A)$ is $0$ or $1$, with probability $1/2$ each. This result agrees with the variant of Goldfeld's conjecture mentioned above. 
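The parity phenomenon just described is easy to observe experimentally. The sketch below (a Monte Carlo illustration; it samples over $\mathbb{Z}$ rather than $\mathbb{Z}_p$, which is harmless here because the rank of an integer matrix is the same over $\mathbb{Q}$ and over $\mathbb{Q}_p$) draws random alternating integer matrices and records the corank $n - \rk A$:

```python
import random
from collections import Counter
from fractions import Fraction

def rank(M):
    """Rank of a matrix with integer entries, by exact Gaussian
    elimination over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def random_alternating(n, X, rng):
    """Uniform random A in M_n(Z)_alt with entries of absolute value <= X."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            A[i][j] = rng.randint(-X, X)
            A[j][i] = -A[i][j]
    return A

rng = random.Random(0)
for n in (5, 6):
    coranks = Counter(n - rank(random_alternating(n, 50, rng))
                      for _ in range(200))
    print(n, dict(coranks))
# An alternating matrix has even rank, so the corank always has the same
# parity as n: for n = 5 it is at least 1, while for n = 6 it is 0 for
# essentially every sample.
```

The refined model described next replaces $\mathbb{Z}_p$ by integer matrices of bounded size precisely so that the rare samples of higher corank occur with a small but positive probability.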
This model cannot, however, distinguish the relative frequencies of curves of various ranks $\ge 2$, because in the model the event $\rk_{\mathbb{Z}_p}(\ker A) \ge 2$ occurs with probability~$0$. Therefore we propose a refined model in which instead of sampling $A$ from $\operatorname{M}_n(\mathbb{Z}_p)_{\alt}$, we sample $A$ from the set $\operatorname{M}_n(\mathbb{Z})_{\alt,\le X}$ of matrices in $\operatorname{M}_n(\mathbb{Z})_{\alt}$ with entries bounded by a number $X$ \emph{depending on the height $H$ of the elliptic curve being modeled}, tending to $\infty$ as $H \to \infty$. This way, for elliptic curves of a given height $H$, the model predicts a potentially positive but diminishing probability of each rank $\ge 2$ (the probability that an integer point in a box lies on a certain subvariety), and we can quantify the rate at which this probability tends to $0$ as $H \to \infty$ in order to count curves of height up to $H$ having each given rank. In fact, we let $n$ grow with $H$ as well. Here, more precisely, is the refined model. To model an elliptic curve $E$ of height $H$, using functions $\eta(H)$ and $X(H)$ to be specified later, we do the following: \begin{enumerate}[\upshape 1.] \item Choose $n$ to be an integer of size about $\eta(H)$ of random parity (e.g., we could choose $n$ uniformly at random from $\{\lceil \eta(H) \rceil,\lceil \eta(H) \rceil+1\}$). \item Choose $A_E \in \operatorname{M}_n(\mathbb{Z})_{\alt, \le X(H)}$ uniformly at random, independently for each $E$. \item Define random variables $\Sha_E' \colonequals (\coker A)_{{\operatorname{tors}}}$ and $\rk_E' \colonequals \rk_\mathbb{Z}(\ker A)$. \end{enumerate} Think of $\Sha_E'$ as the ``pseudo-Shafarevich--Tate group'' of $E$ and $\rk_E'$ as the ``pseudo-rank'' of $E$; their behavior is intended to model the actual $\Sha$ and rank. To complete the description of the model, we must specify the functions $\eta(H)$ and $X(H)$. 
We do this by asking ``How large is $\Sha_0$ on average?'', both in the model and in reality. Recall from \eqref{E:average Sha0} that we expect \begin{equation} \label{E:average Sha0 redux} \underset{E \in \mathscr{E}_{\le H}}{\Average}\; \Sha_0(E) = H^{1/12+o(1)}. \end{equation} Define \[ \Sha_{E,0}' \colonequals \begin{cases} \#\Sha_E', &\textup{if $\rk'_E = 0$;} \\ 0, &\textup{if $\rk'_E > 0$.} \end{cases} \] Using that the determinant of an $n \times n$ matrix is given by a polynomial of degree~$n$ in the entries, we can prove that \begin{equation} \label{E:average pseudo-Sha} \underset{E \in \mathscr{E}_{\le H}}{\Average}\; \Sha_{E,0}' = X(H)^{\eta(H)(1+o(1))}, \end{equation} assuming that $\eta(H)$ does not grow too quickly with $H$. Comparing \eqref{E:average Sha0 redux} and \eqref{E:average pseudo-Sha} suggests choosing $\eta(H)$ and $X(H)$ so that $X(H)^{\eta(H)} = H^{1/12+o(1)}$. We assume this from now on. It turns out that we will not need to know any more about $\eta(H)$ and $X(H)$ than this. \subsection{Consequences of the model} To see what distribution of ranks is predicted by the refined model, we must calculate the distribution of ranks of alternating matrices whose entries are integers with bounded absolute value; the relevant result, whose proof is adapted from \cite{Eskin-Katznelson1995}, is the following: \begin{theorem}[cf.~\cite{Park-Poonen-Voight-Wood-preprint}*{Theorem~9.1.1}] \label{thm:EskinKatznelsonAlternating} If $1 \leq r \leq n$ and $n-r$ is even, and $A \in \operatorname{M}_n(\mathbb{Z})_{\alt, \le X}$ is chosen uniformly at random, then \[ \Prob(\rk(\ker A ) \ge r) \asymp_n (X^n)^{-(r-1)/2}. \] (The subscript $n$ on the symbol $\asymp$ means that the implied constants depend on $n$.) 
\end{theorem} Theorem~\ref{thm:EskinKatznelsonAlternating} implies that for fixed $r \ge 1$ and $E \in \mathscr{E}$ of height $H$, \begin{equation} \label{E:Prob of rank >=r} \Prob(\rk_E' \ge r) = (X(H)^{\eta(H)})^{-(r-1)/2 + o(1)} = H^{-(r-1)/24 + o(1)}. \end{equation} Using this, and the fact that $\#\mathscr{E}_{\le H} \asymp H^{5/6} = H^{20/24}$ (Proposition~\ref{P:elliptic curves of bounded height}), we can now sum \eqref{E:Prob of rank >=r} over $E \in \mathscr{E}_{\le H}$ to prove the following theorem about our model: \begin{theorem}[\cite{Park-Poonen-Voight-Wood-preprint}*{Theorem~7.3.3}] \label{T:rank 21} The following hold with probability~$1$: \begin{align*} \#\{ E \in \mathscr{E}_{\le H} : \rk'_E = 0 \} &= H^{20/24+o(1)} \\ \#\{ E \in \mathscr{E}_{\le H} : \rk'_E = 1 \} &= H^{20/24+o(1)} \\ \#\{ E \in \mathscr{E}_{\le H} : \rk'_E \ge 2 \} &= H^{19/24+o(1)} \\ \#\{ E \in \mathscr{E}_{\le H} : \rk'_E \ge 3 \} &= H^{18/24+o(1)} \\ &\vdots \\ \#\{ E \in \mathscr{E}_{\le H} : \rk'_E \ge 20 \} &= H^{1/24+o(1)} \\ \#\{ E \in \mathscr{E}_{\le H} : \rk'_E \ge 21 \} &\le H^{o(1)}, \\ \#\{ E \in \mathscr{E} : \rk'_E > 21 \} &\textup{ is finite}. \end{align*} \end{theorem} This suggests the conjecture that the same statements hold for the \emph{actual} ranks of elliptic curves over $\mathbb{Q}$. In particular, we conjecture that $\rk E(\mathbb{Q})$ is uniformly bounded: by the maximum of $21$ and the ranks of the conjecturally finitely many elliptic curves of rank $>21$. \begin{remark} Elkies has found infinitely many elliptic curves over $\mathbb{Q}$ of rank $\ge 19$, and one of rank $\ge 28$; these have remained the records since 2006 \cite{Elkies2006}. \end{remark} \section{Generalizations} \subsection{Elliptic curves over global fields} What about elliptic curves over other global fields $K$? Let $\mathscr{E}_K$ be a set of representatives for the isomorphism classes of elliptic curves over $K$. Let $B_K \colonequals \limsup_{E \in \mathscr{E}_K} \rk E(K)$. 
For example, the conjecture suggested by Theorem~\ref{T:rank 21} predicts that $20 \le B_{\mathbb{Q}} \le 21$. \begin{theorem}[\cite{Tate-Shafarevich1967}, \cite{Ulmer2002}] \label{T:global function field} If $K$ is a global function field, then $B_K = \infty$. \end{theorem} Even for number fields, $B_K$ can be arbitrarily large (but maybe still always finite): \begin{theorem}[\cite{Park-Poonen-Voight-Wood-preprint}*{Theorem~12.4.2}] \label{T:number fields} There exist number fields $K$ of arbitrarily high degree such that $B_K \ge [K:\mathbb{Q}]$. \end{theorem} Examples of number fields $K$ for which $B_K$ is large include fields in anticyclotomic towers and certain multiquadratic fields; see \cite{Park-Poonen-Voight-Wood-preprint}*{Section~12.4}. A naive adaptation of our heuristic (see \cite{Park-Poonen-Voight-Wood-preprint}*{Sections 12.2 and~12.3}) would suggest $20 \le B_K \le 21$ for every global field $K$, but Theorems \ref{T:global function field} and~\ref{T:number fields} contradict this conclusion. Our rationalization of this is that the elliptic curves of high rank in Theorems \ref{T:global function field} and~\ref{T:number fields} are special in that they are definable over a proper subfield of $K$, and these special curves exhibit arithmetic phenomena that our model does not take into account. To exclude these curves, let $\mathscr{E}_K^\circ$ be the set of $E \in \mathscr{E}_K$ such that $E$ is not a base change of a curve from a proper subfield, and let $B_K^\circ \colonequals \limsup_{E \in \mathscr{E}_K^\circ} \rk E(K)$. It is possible that $B_K^\circ < \infty$ for every global field $K$. \begin{remark} On the other hand, it is not true that $B_K^\circ \le 21$ for all number fields, as we now explain. Shioda proved that $y^2=x^3+t^{360}+1$ has rank $68$ over $\mathbb{C}(t)$ \cite{Shioda1992}. In fact, it has rank $68$ also over $K(t)$ for a suitable number field $K$. 
For this $K$, specialization yields infinitely many elliptic curves in $\mathscr{E}_K^{\circ}$ of rank $\ge 68$. Thus $B_K^\circ \ge 68$. See \cite{Park-Poonen-Voight-Wood-preprint}*{Remark~12.3.1} for details. \end{remark} \subsection{Abelian varieties} \begin{question} \label{Q:abelian varieties} For abelian varieties $A$ over number fields $K$, is there a bound on $\rk A(K)$ depending only on $\dim A$ and $[K:\mathbb{Q}]$? \end{question} By restriction of scalars, we can reduce to the case $K=\mathbb{Q}$ at the expense of increasing the dimension. By ``Zarhin's trick'' that $A^4 \times (A^\vee)^4$ is principally polarized \cite{Zarhin1974-trick}, we can reduce to the case that $A$ is principally polarized, again at the expense of increasing the dimension. For fixed $g \ge 0$, one can write down a family of projective varieties including all $g$-dimensional principally polarized abelian varieties over $\mathbb{Q}$, probably with each isomorphism class represented infinitely many times. We can assume that each abelian variety $A$ is defined by a system of homogeneous polynomials with integer coefficients, in which the number of variables, the number of polynomials, and their degrees are bounded in terms of $g$. Define the height of $A$ to be the maximum of the absolute values of the coefficients. Then the number of $g$-dimensional principally polarized abelian varieties over $\mathbb{Q}$ of height $\le H$ is bounded by a polynomial in $H$. If there is a model involving a pseudo-rank $\rk_A'$ whose probability of exceeding $r$ gets divided by at least a fixed fractional power of $H$ each time $r$ is incremented by $1$, as we had for elliptic curves, then the pseudo-ranks are bounded with probability~$1$. This might suggest a positive answer to Question~\ref{Q:abelian varieties}, though the evidence is much flimsier than in the case of elliptic curves. 
\section*{Acknowledgments} I thank Nicolas Billerey, Serge Cantat, Andrew Granville, Eric Rains, Michael Stoll, and John Voight for comments. \begin{bibdiv} \begin{biblist} \bib{Balakrishnan-Ho-Kaplan-Spicer-Stein-Weigandt2016}{article}{ author={Balakrishnan, Jennifer S.}, author={Ho, Wei}, author={Kaplan, Nathan}, author={Spicer, Simon}, author={Stein, William}, author={Weigandt, James}, title={Databases of elliptic curves ordered by height and distributions of Selmer groups and ranks}, journal={LMS J. Comput. Math.}, volume={19}, date={2016}, number={suppl. A}, pages={351--370}, issn={1461-1570}, review={\MR {3540965}}, } \bib{Bhargava-Kane-Lenstra-Poonen-Rains2015}{article}{ author={Bhargava, Manjul}, author={Kane, Daniel M.}, author={Lenstra, Hendrik W., Jr.}, author={Poonen, Bjorn}, author={Rains, Eric}, title={Modeling the distribution of ranks, Selmer groups, and Shafarevich-Tate groups of elliptic curves}, journal={Camb. J. Math.}, volume={3}, date={2015}, number={3}, pages={275--321}, issn={2168-0930}, review={\MR {3393023}}, label={BKLPR15}, } \bib{Bhargava-Shankar-2selmer}{article}{ author={Bhargava, Manjul}, author={Shankar, Arul}, title={Binary quartic forms having bounded invariants, and the boundedness of the average rank of elliptic curves}, journal={Ann. of Math. (2)}, volume={181}, date={2015}, number={1}, pages={191--242}, issn={0003-486X}, review={\MR {3272925}}, doi={10.4007/annals.2015.181.1.3}, } \bib{Bhargava-Shankar-3selmer}{article}{ author={Bhargava, Manjul}, author={Shankar, Arul}, title={Ternary cubic forms having bounded invariants, and the existence of a positive proportion of elliptic curves having rank 0}, journal={Ann. of Math. 
(2)}, volume={181}, date={2015}, number={2}, pages={587--621}, issn={0003-486X}, review={\MR {3275847}}, doi={10.4007/annals.2015.181.2.4}, } \bib{Bhargava-Shankar-4selmer}{misc}{ author={Bhargava, Manjul}, author={Shankar, Arul}, title={The average number of elements in the 4-Selmer groups of elliptic curves is 7}, date={2013-12-27}, note={Preprint, \texttt {arXiv:1312.7333v1}\phantom {i}}, } \bib{Bhargava-Shankar-5selmer}{misc}{ author={Bhargava, Manjul}, author={Shankar, Arul}, title={The average size of the 5-Selmer group of elliptic curves is 6, and the average rank is less than 1}, date={2013-12-30}, note={Preprint, \texttt {arXiv:1312.7859v1}\phantom {i}}, } \bib{Brumer1992}{article}{ author={Brumer, Armand}, title={The average rank of elliptic curves.~I}, journal={Invent. Math.}, volume={109}, date={1992}, number={3}, pages={445--472}, issn={0020-9910}, review={\MR {1176198 (93g:11057)}}, doi={10.1007/BF01232033}, } \bib{Cassels1962-IV}{article}{ author={Cassels, J. W. S.}, title={Arithmetic on curves of genus $1$. IV. Proof of the Hauptvermutung}, journal={J. Reine Angew. Math.}, volume={211}, date={1962}, pages={95--112}, issn={0075-4102}, review={\MR {0163915 (29 \#1214)}}, } \bib{Cassels1966-diophantine}{article}{ author={Cassels, J. W. S.}, title={Diophantine equations with special reference to elliptic curves}, journal={J. London Math. Soc.}, volume={41}, date={1966}, pages={193--291}, issn={0024-6107}, review={\MR {0199150 (33 \#7299)}}, } \bib{Cohen-Lenstra1984}{article}{ author={Cohen, H.}, author={Lenstra, H. 
W., Jr.}, title={Heuristics on class groups of number fields}, conference={ title={Number theory, Noordwijkerhout 1983}, address={Noordwijkerhout}, date={1983}, }, book={ series={Lecture Notes in Math.}, volume={1068}, publisher={Springer}, place={Berlin}, }, date={1984}, pages={33--62}, review={\MR {756082 (85j:11144)}}, doi={10.1007/BFb0099440}, } \bib{Delaunay2001}{article}{ author={Delaunay, Christophe}, title={Heuristics on Tate-Shafarevitch groups of elliptic curves defined over $\mathbb {Q}$}, journal={Experiment. Math.}, volume={10}, date={2001}, number={2}, pages={191--196}, issn={1058-6458}, review={\MR {1837670 (2003a:11065)}}, } \bib{Delaunay2007}{article}{ author={Delaunay, Christophe}, title={Heuristics on class groups and on Tate-Shafarevich groups: the magic of the Cohen-Lenstra heuristics}, conference={ title={Ranks of elliptic curves and random matrix theory}, }, book={ series={London Math. Soc. Lecture Note Ser.}, volume={341}, publisher={Cambridge Univ. Press}, place={Cambridge}, }, date={2007}, pages={323--340}, review={\MR {2322355 (2008i:11089)}}, } \bib{Delaunay-Jouhet2014a}{article}{ author={Delaunay, Christophe}, author={Jouhet, Fr{\'e}d{\'e}ric}, title={$p^\ell $-torsion points in finite abelian groups and combinatorial identities}, journal={Adv. Math.}, volume={258}, date={2014}, pages={13--45}, issn={0001-8708}, review={\ \MR {3190422}}, doi={10.1016/j.aim.2014.02.033}, } \bib{Elkies2006}{misc}{ author={Elkies, Noam D.}, title={$\mathbb {Z}^{28}$ in $E(\mathbb {Q})$, etc.}, date={2006-05-03}, note={Email to the \texttt {NMBRTHRY@LISTSERV.NODAK.EDU} mailing list}, } \bib{Eskin-Katznelson1995}{article}{ author={Eskin, Alex}, author={Katznelson, Yonatan R.}, title={Singular symmetric matrices}, journal={Duke Math. 
J.}, volume={79}, date={1995}, number={2}, pages={515--547}, issn={0012-7094}, review={\MR {1344769 (96h:11099)}}, doi={10.1215/S0012-7094-95-07913-7}, } \bib{Farmer-Gonek-Hughes2007}{article}{ author={Farmer, David W.}, author={Gonek, S. M.}, author={Hughes, C. P.}, title={The maximum size of {$L$}-functions}, journal={J. Reine Angew. Math.}, volume={609}, year={2007}, pages={215--236}, issn={0075-4102}, review={\MR {2350784 (2009b:11140)}}, doi={10.1515/CRELLE.2007.064}, } \bib{Friedman-Washington1989}{article}{ author={Friedman, Eduardo}, author={Washington, Lawrence C.}, title={On the distribution of divisor class groups of curves over a finite field}, conference={ title={Th\'eorie des nombres}, address={Quebec, PQ}, date={1987}, }, book={ publisher={de Gruyter}, place={Berlin}, }, date={1989}, pages={227--239}, review={\MR {1024565 (91e:11138)}}, } \bib{Goldfeld1979}{article}{ author={Goldfeld, Dorian}, title={Conjectures on elliptic curves over quadratic fields}, conference={ title={Number theory, Carbondale 1979 (Proc. Southern Illinois Conf., Southern Illinois Univ., Carbondale, Ill., 1979)}, }, book={ series={Lecture Notes in Math.}, volume={751}, publisher={Springer}, place={Berlin}, }, date={1979}, pages={108--118}, review={\MR {564926 (81i:12014)}}, } \bib{Goldfeld-Szpiro1995}{article}{ author={Goldfeld, Dorian}, author={Szpiro, Lucien}, title={Bounds for the order of the Tate-Shafarevich group}, note={Special issue in honour of Frans Oort}, journal={Compositio Math.}, volume={97}, date={1995}, number={1-2}, pages={71--87}, issn={0010-437X}, review={\MR {1355118 (97a:11102)}}, } \bib{Heath-Brown1993}{article}{ author={Heath-Brown, D. R.}, title={The size of Selmer groups for the congruent number problem}, journal={Invent. Math.}, volume={111}, date={1993}, number={1}, pages={171--195}, issn={0020-9910}, review={\MR {1193603 (93j:11038)}}, doi={10.1007/BF01231285}, } \bib{Heath-Brown1994}{article}{ author={Heath-Brown, D. 
R.}, title={The size of Selmer groups for the congruent number problem. II}, note={With an appendix by P. Monsky}, journal={Invent. Math.}, volume={118}, date={1994}, number={2}, pages={331--370}, issn={0020-9910}, review={\MR {1292115 (95h:11064)}}, doi={10.1007/BF01231536}, } \bib{Hindry2007}{article}{ author={Hindry, Marc}, title={Why is it difficult to compute the Mordell-Weil group?}, conference={ title={Diophantine geometry}, }, book={ series={CRM Series}, volume={4}, publisher={Ed. Norm., Pisa}, }, date={2007}, pages={197--219}, review={\MR {2349656 (2008i:11074)}}, } \bib{Hindry-Pacheco2016}{article}{ author={Hindry, Marc}, author={Pacheco, Am\'\i lcar}, title={An analogue of the Brauer--Siegel theorem for abelian varieties in positive characteristic}, journal={Mosc. Math. J.}, volume={16}, date={2016}, number={1}, pages={45--93}, issn={1609-3321}, review={\MR {3470576}}, } \bib{Honda1960}{article}{ author={Honda, Taira}, title={Isogenies, rational points and section points of group varieties}, journal={Japan. J. Math.}, volume={30}, date={1960}, pages={84--101}, review={\MR {0155828 (27 \#5762)}}, } \bib{Iwaniec-Sarnak2000}{article}{ author={Iwaniec, H.}, author={Sarnak, P.}, title={Perspectives on the analytic theory of $L$-functions}, note={GAFA 2000 (Tel Aviv, 1999)}, journal={Geom. Funct. 
Anal.}, date={2000}, number={Special Volume}, pages={705--741}, issn={1016-443X}, review={\MR {1826269 (2002b:11117)}}, } \bib{Kane2013}{article}{ author={Kane, Daniel}, title={On the ranks of the 2-Selmer groups of twists of a given elliptic curve}, journal={Algebra Number Theory}, volume={7}, date={2013}, number={5}, pages={1253--1279}, issn={1937-0652}, review={\MR {3101079}}, doi={10.2140/ant.2013.7.1253}, } \bib{Katz-Sarnak1999a}{book}{ author={Katz, Nicholas M.}, author={Sarnak, Peter}, title={Random matrices, Frobenius eigenvalues, and monodromy}, series={American Mathematical Society Colloquium Publications}, volume={45}, publisher={Amer.\ Math.\ Soc.}, place={Providence, RI}, date={1999}, pages={xii+419}, isbn={0-8218-1017-0}, review={\MR { 2000b:11070}}, } \bib{Katz-Sarnak1999b}{article}{ author={Katz, Nicholas M.}, author={Sarnak, Peter}, title={Zeroes of zeta functions and symmetry}, journal={Bull. Amer. Math. Soc. (N.S.)}, volume={36}, date={1999}, number={1}, pages={1--26}, issn={0273-0979}, review={\MR {1640151 (2000f:11114)}}, doi={10.1090/S0273-0979-99-00766-1}, } \bib{Lang1983}{article}{ author={Lang, William E.}, title={On Enriques surfaces in characteristic $p$. I}, journal={Math. Ann.}, volume={265}, date={1983}, number={1}, pages={45--65}, issn={0025-5831}, review={\MR {719350 (86c:14031)}}, } \bib{Mazur1977}{article}{ author={Mazur, B.}, title={Modular curves and the Eisenstein ideal}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, number={47}, date={1977}, pages={33--186 (1978)}, issn={0073-8301}, review={\MR {488287 (80c:14015)}}, } \bib{Mestre1982}{article}{ author={Mestre, Jean-Fran{\c {c}}ois}, title={Construction d'une courbe elliptique de rang $\geq 12$}, language={French, with English summary}, journal={C. R. Acad. Sci. Paris S\'er. 
I Math.}, volume={295}, date={1982}, number={12}, pages={643--644}, issn={0249-6321}, review={\MR {688896 (84b:14019)}}, } \bib{Mestre1986}{article}{ author={Mestre, Jean-Fran{\c {c}}ois}, title={Formules explicites et minorations de conducteurs de vari\'et\'es alg\'ebriques}, language={French}, journal={Compositio Math.}, volume={58}, date={1986}, number={2}, pages={209--232}, issn={0010-437X}, review={\MR {844410 (87j:11059)}}, } \bib{Mordell1922}{article}{ author={Mordell, L. J.}, title={On the rational solutions of the indeterminate equations of the third and fourth degrees}, journal={Proc. Cambridge Phil. Soc.}, volume={21}, date={1922}, pages={179--192}, } \bib{MumfordTheta3}{book}{ author={Mumford, David}, title={Tata lectures on theta. III}, series={Progress in Mathematics}, volume={97}, note={With the collaboration of Madhav Nori and Peter Norman}, publisher={Birkh\"auser Boston Inc.}, place={Boston, MA}, date={1991}, pages={viii+202}, isbn={0-8176-3440-1}, review={\MR {1116553 (93d:14065)}}, } \bib{O'Neil2002}{article}{ author={O'Neil, Catherine}, title={The period-index obstruction for elliptic curves}, journal={J. Number Theory}, volume={95}, date={2002}, number={2}, pages={329--339}, issn={0022-314X}, review={\MR {1924106 (2003f:11079)}}, doi={10.1016/S0022-314X(01)92770-2}, note={Erratum in {\em J. 
Number Theory} \textbf {109} (2004), no.~2, 390}, } \bib{Park-Poonen-Voight-Wood-preprint}{misc}{ author={Park, Jennifer}, author={Poonen, Bjorn}, author={Voight, John}, author={Wood, Melanie Matchett}, title={A heuristic for boundedness of ranks of elliptic curves}, date={2016-02-03}, note={Preprint, \texttt {arXiv:1602.01431v1}, to appear in \emph {J.\ Europ.\ Math.\ Soc.}}, } \bib{Poincare1901}{article}{ author={Poincar\'e, H.}, title={Sur les propri\'et\'es arithm\'etiques des courbes alg\'ebriques}, journal={J.\ Pures Appl.\ Math.\ (5)}, volume={7}, date={1901}, pages={161--234}, } \bib{Poincare1950-Oeuvres5}{book}{ author={Poincar\'e, Henri}, title={{\OE }uvres d'Henri Poincar\'e, Volume~5}, editor={Ch\^atelet, Albert}, publisher={Gauthier-Villars}, address={Paris}, date={1950}, } \bib{Poonen-Rains2012-selmer}{article}{ author={Poonen, Bjorn}, author={Rains, Eric}, title={Random maximal isotropic subspaces and Selmer groups}, journal={J. Amer. Math. Soc.}, volume={25}, date={2012}, number={1}, pages={245--269}, issn={0894-0347}, review={\MR {2833483}}, doi={10.1090/S0894-0347-2011-00710-8}, } \bib{Rubin-Silverberg2000}{article}{ author={Rubin, Karl}, author={Silverberg, Alice}, title={Ranks of elliptic curves in families of quadratic twists}, journal={Experiment. 
Math.}, volume={9}, date={2000}, number={4}, pages={583--590}, issn={1058-6458}, review={\MR {1806293 (2001k:11105)}}, } \bib{Shioda1992}{article}{ author={Shioda, Tetsuji}, title={Some remarks on elliptic curves over function fields}, note={Journ\'ees Arithm\'etiques, 1991 (Geneva)}, journal={Ast\'erisque}, number={209}, date={1992}, pages={12, 99--114}, issn={0303-1179}, review={\MR {1211006 (94d:11046)}}, } \bib{SilvermanAEC2009}{book}{ author={Silverman, Joseph H.}, title={The arithmetic of elliptic curves}, series={Graduate Texts in Mathematics}, volume={106}, edition={2}, publisher={Springer, Dordrecht}, date={2009}, pages={xx+513}, isbn={978-0-387-09493-9}, review={\MR {2514094 (2010i:11005)}}, doi={10.1007/978-0-387-09494-6}, } \bib{Smith-preprint}{misc}{ author={Smith, Alexander}, title={$2^\infty $-Selmer groups, $2^\infty $-class groups, and Goldfeld's conjecture}, date={2017-06-07}, note={Preprint, \texttt {arXiv:1702.02325v2}}, } \bib{Stein-Wuthrich2013}{article}{ author={Stein, William}, author={Wuthrich, Christian}, title={Algorithms for the arithmetic of elliptic curves using Iwasawa theory}, journal={Math. Comp.}, volume={82}, date={2013}, number={283}, pages={1757--1792}, issn={0025-5718}, review={\MR {3042584}}, doi={10.1090/S0025-5718-2012-02649-4}, } \bib{Swinnerton-Dyer2008}{article}{ author={Swinnerton-Dyer, Peter}, title={The effect of twisting on the 2-Selmer group}, journal={Math. Proc. Cambridge Philos. Soc.}, volume={145}, date={2008}, number={3}, pages={513--526}, issn={0305-0041}, review={\MR {2464773 (2010d:11059)}}, doi={10.1017/S0305004108001588}, } \bib{Tate1974}{article}{ author={Tate, John T.}, title={The arithmetic of elliptic curves}, journal={Invent. Math.}, volume={23}, date={1974}, pages={179--206}, issn={0020-9910}, review={\MR {0419359 (54 \#7380)}}, } \bib{Tate-Shafarevich1967}{article}{ author={T{\`e}{\u \i }t, D. T.}, author={{\v {S}}afarevi{\v {c}}, I. 
R.}, title={The rank of elliptic curves}, language={Russian}, journal={Dokl. Akad. Nauk SSSR}, volume={175}, date={1967}, pages={770--773}, issn={0002-3264}, review={\MR {0237508 (38 \#5790)}}, } \bib{Ulmer2002}{article}{ author={Ulmer, Douglas}, title={Elliptic curves with large rank over function fields}, journal={Ann. of Math. (2)}, volume={155}, date={2002}, number={1}, pages={295--315}, issn={0003-486X}, review={\MR {1888802 (2003b:11059)}}, doi={10.2307/3062158}, } \bib{Watkins2008-ExpMath}{article}{ author={Watkins, Mark}, title={Some heuristics about elliptic curves}, journal={Experiment. Math.}, volume={17}, date={2008}, number={1}, pages={105--125}, issn={1058-6458}, review={\MR {2410120 (2009g:11076)}}, } \bib{Watkins-discursus}{misc}{ author={Watkins, Mark}, title={A discursus on $21$ as a bound for ranks of elliptic curves over $\mathbf {Q}$, and sundry related topics}, date={2015-08-20}, note={Available at \url {http://magma.maths.usyd.edu.au/~watkins/papers/DISCURSUS.pdf}\phantom {i}}, } \bib{Watkins-et-al2014}{article}{ author={Watkins, Mark}, author={Donnelly, Stephen}, author={Elkies, Noam D.}, author={Fisher, Tom}, author={Granville, Andrew}, author={Rogers, Nicholas F.}, title={Ranks of quadratic twists of elliptic curves}, language={English, with English and French summaries}, journal={Publ.\ math.\ de Besan\c {c}on}, volume={2014/2}, date={2014}, pages={63--98}, label={Wat${}^+$14}, } \bib{deWeger1998}{article}{ author={de Weger, Benjamin M.~M.}, title={$A+B=C$ and big $\Sha $'s}, language={English}, journal={Quart.\ J.\ Math.\ Oxford Ser.\ (2)}, volume={49}, date={1998}, number={193}, pages={105--128}, issn={0033-5606}, review={\MR {1617347 (99j:11065)}}, doi={10.1093/qjmath/49.193.105}, } \bib{Wiles2006}{article}{ author={Wiles, Andrew}, title={The Birch and Swinnerton-Dyer conjecture}, conference={ title={The millennium prize problems}, }, book={ publisher={Clay Math. 
Inst., Cambridge, MA}, }, date={2006}, pages={31--41}, review={\MR {2238272}}, } \bib{Zarhin1974-trick}{article}{ author={Zarhin, Ju. G.}, title={A remark on endomorphisms of abelian varieties over function fields of finite characteristic}, language={Russian}, journal={Izv. Akad. Nauk SSSR Ser. Mat.}, volume={38}, date={1974}, pages={471--474}, issn={0373-2436}, review={\MR {0354689 (50 \#7166)}}, } \bib{Zarhin1974}{article}{ author={Zarhin, Ju. G.}, title={Noncommutative cohomology and Mumford groups}, language={Russian}, journal={Mat. Zametki}, volume={15}, date={1974}, pages={415--419}, issn={0025-567X}, review={\MR {0354612 (50 \#7090)}}, } \end{biblist} \end{bibdiv} \end{document}
https://arxiv.org/abs/2004.10038
On the spectral gap and the diameter of Cayley graphs
We obtain a new bound connecting the first non--trivial eigenvalue of the Laplace operator of a graph and the diameter of the graph, which is effective for graphs with small diameter or for graphs whose number of maximal paths is comparable to the expected value.
\section{Introduction} \label{sec:introduction} Expander graphs were first introduced by Bassalygo and Pinsker \cite{BP}, and their existence was first proved by Pinsker \cite{Pinsker_expander} (also, see \cite{Margulis}). The property of a graph being an expander is significant in many mathematical and computational contexts, see, e.g., \cite{Kowalski_exp}, \cite{Lubotzky}, \cite{Saloff}. It is well--known that the expansion property of a graph is controlled by the spectral gap of the Laplace operator $\Delta$, namely, by the first non--trivial eigenvalue $\la_1$ of $\Delta$, see \cite{Lubotzky} (all required definitions can be found in Section \ref{sec:diameter} below). In this paper we study the connection between $\la_1$ and the diameter of a graph, concentrating on {\it Cayley graphs} (although some generalizations are possible as well, see Theorem \ref{t:basis_graph}). In \cite{DS-C} the following result was obtained (also, see \cite[Corollary 3.2.7]{Saloff}). \begin{theorem} Let $\Gr$ be a finite group. Let $S\subseteq \Gr$ be a set and $d$ be the diameter of its Cayley graph $\Cay (S)$. Then \[ \la_1 (\Cay (S)) \ge \frac{1}{2d^2 |S|} \,. \] \label{t:DS-C} \end{theorem} Now we formulate our first main result. \begin{theorem} Let $\Gr$ be a finite group. Let $S\subseteq \Gr$ be a set and $d$ be the diameter of its Cayley graph $\Cay (S)$. Then \[ \la_1 (\Cay (S)) \ge \frac{|\Gr|}{d |S|^d} \,. \] \label{t:basis} \end{theorem} A set $S \subseteq \Gr$ is called a {\it basis} of order $d$ if $S^d = \Gr$. It follows that Theorem \ref{t:basis} is better than Theorem \ref{t:DS-C} in the case when a basis $S$ of order $d$ satisfies $|S|^{d-1} < 2d |\Gr|$. In particular, our result is better for all possible $S$ in the case $d=2$. On the other hand, if $d$ is the diameter of $\Cay (S)$, then $|S|^d \ge |\Gr|$.
Thus our result is better than Theorem \ref{t:DS-C} for an "economical"\, basis $S$, i.e., in the case when $\Gr$ has no elements that require a large number of multiplications of elements of $S$ to be represented. For example, assuming the condition $|S|^d \ll_d |\Gr|$, we have $\la_1 (\Cay (S)) \gg_d 1$. The same bound holds if the number of representations of any $x\in \Gr$ as $x=s_1 \dots s_d$, $s_j \in S$ is $\Omega (|S|^d/|\Gr|)$. Other examples of effective use of Theorem \ref{t:basis} are contained in Remark \ref{r:other_g} and in Section \ref{sec:examples} below. Here we show that our new bound for the gap of the Laplace operator allows us to say something new about non--commutative sets having no solutions of linear equations, Sidon sets, as well as about the famous Erd\H{o}s--Tur\'an conjecture. Actually, the methods from \cite[Chapter 3]{Saloff} are rather general and one can obtain an analogue of Theorem \ref{t:DS-C} for almost arbitrary graphs. In this direction we prove \begin{theorem} Let $G = G(V,E)$ be a finite graph with the valency $\mathcal{V}$ and the diameter $d$. Then \[ \la_1 (G) \ge \frac{|V|}{d\mathcal{V}^d} \,. \] \label{t:basis_graph} \end{theorem} In Sections \ref{sec:Z_N}, \ref{sec:non-abelian} we concentrate on the case of Cayley graphs and obtain a characterisation of the spectral gap in terms of the intersection of our set $S$ with arithmetic progressions and (non--abelian) Bohr sets. Let us formulate a result from these Sections (see Corollaries \ref{c:basis_ab}, \ref{c:basis_nab}). \begin{theorem} Let $\Gr$ be a finite group, $\eps \in (0,1)$ be a real number, $d \ge 2$ be an integer and $B, \Omega \subseteq \Gr$, $|\Omega| = (1-\eps) |\Gr|$ be sets such that any element of $\Gr \setminus \Omega$ can be represented as a product of $d$ elements of $B B^{-1}$ or $B^{-1} B$ in at least $g$ ways. Suppose that $\Gr$ has no normal proper subgroups of index at most $2/\eps$.
Then \begin{equation}\label{f:basis_nab+_intr} \la_1 (\Cay (B)) \ge \frac{g \eps^{\log_{3/2} 3} |\Gr|}{16d^2 |B|^{2d}} \,. \end{equation} \label{t:basis_nab_intr} \end{theorem} In the abelian case the dependence on the parameters in \eqref{f:basis_nab+_intr} is better, see Corollary \ref{c:basis_ab} below. Thus Theorem \ref{t:basis_nab_intr} shows that in the case of Cayley graphs one can have a relatively large exceptional set $\Omega$ and nevertheless obtain a rather good lower bound for $\la_1 (\Cay (B))$. Finally, in the Appendix we collect some simple properties of non--abelian Bohr sets. \section{Definitions} \label{sec:definitions} Here and throughout this paper $\Gr$ is a finite group with the identity $e$. Given two sets $A,B\subset \Gr$, define the \textit{product set} of $A$ and $B$ as $$AB:=\{ab ~:~ a\in{A},\,b\in{B}\}\,.$$ In a similar way we define the higher product sets, e.g., $A^3$ is $AAA$. Let $A^{-1} := \{a^{-1} ~:~ a\in A \}$. Given an element $g\in \Gr$ and a positive integer $k$, we write $g^{1/k}$ for the set $\{x \in \Gr ~:~ x^k = g \}$. Further, if $A\subseteq \Gr$ is a set, then $A^{1/k}$ equals $\{ a^{1/k} ~:~ a\in A\}$. In this paper we use the same letter to denote a set $A\subseteq \Gr$ and its characteristic function $A: \Gr \to \{0,1 \}$. Given a function $f: \Gr \to \C$, we write $\langle f \rangle$ for $\sum_{x \in \Gr} f(x)$. Now we recall some notions and simple facts from representation theory, see, e.g., \cite{Naimark} or \cite{Serr_representations}. For a finite group $\Gr$ let $\FF{\Gr}$ be the set of all irreducible unitary representations of $\Gr$. It is well--known that the size of $\FF{\Gr}$ coincides with the number of conjugacy classes of $\Gr$. For $\rho \in \FF{\Gr}$ denote by $d_\rho$ the dimension of this representation. By $d_{\min} (\Gr)$ denote the quantity $\min_{\rho \neq 1} d_\rho$.
We write $\langle \cdot, \cdot \rangle$ for the corresponding Hilbert--Schmidt scalar product $\langle A, B \rangle = \langle A, B \rangle_{HS}:= \tr (AB^*)$, where $A,B$ are any two matrices of the same sizes. Put $\| A\|_{HS} = \sqrt{\langle A, A \rangle}$. Clearly, $\langle \rho(g) A, \rho(g) B \rangle = \langle A, B \rangle$ and $\langle AX, Y\rangle = \langle X, A^* Y\rangle$. Also, we have $\sum_{\rho \in \FF{\Gr}} d^2_\rho = |\Gr|$. For any function $f:\Gr \to \mathbb{C}$ and $\rho \in \FF{\Gr}$ define the matrix $\FF{f} (\rho)$, called the Fourier transform of $f$ at $\rho$, by the formula \begin{equation}\label{f:Fourier_representations} \FF{f} (\rho) = \sum_{g\in \Gr} f(g) \rho (g) \,. \end{equation} Then the inversion formula holds \begin{equation}\label{f:inverse_representations} f(g) = \frac{1}{|\Gr|} \sum_{\rho \in \FF{\Gr}} d_\rho \langle \FF{f} (\rho), \rho (g^{-1}) \rangle \,, \end{equation} and the Parseval identity is \begin{equation}\label{f:Parseval_representations} \sum_{g\in \Gr} |f(g)|^2 = \frac{1}{|\Gr|} \sum_{\rho \in \FF{\Gr}} d_\rho \| \FF{f} (\rho) \|^2_{HS} \,. \end{equation} The main property of the Fourier transform is the convolution formula \begin{equation}\label{f:convolution_representations} \FF{f*g} (\rho) = \FF{f} (\rho) \FF{g} (\rho) \,, \end{equation} where the convolution of two functions $f,g : \Gr \to \mathbb{C}$ is defined as \[ (f*g) (x) = \sum_{y\in \Gr} f(y) g(y^{-1}x) \,. \] Given a function $f : \Gr \to \mathbb{C}$ and a positive integer $k$, we write $f^{(k)} = (f^{(k-1)} * f)$ for the $k$th convolution of $f$. Finally, it is easy to check that for any matrices $A,B$ one has $\| AB\|_{HS} \le \| A\| \| B\|_{HS}$ and $\| A\| \le \| A \|_{HS}$, where $\| \cdot \|$ is the operator $l^2$--norm of $A$, which is just the maximal singular value of $A$. In particular, this shows that $\| \cdot \|_{HS}$ is indeed a matrix norm.
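In the abelian case every irreducible representation is a one--dimensional character (all $d_\rho = 1$), and identities \eqref{f:Parseval_representations} and \eqref{f:convolution_representations} can be verified numerically. The following sketch (purely illustrative, not part of the text's arguments) checks both for the cyclic group of order $7$ with characters $\rho_k(x) = e^{2\pi i k x/7}$; the test functions are arbitrary choices of ours:

```python
import cmath

# Z_N with characters rho_k(x) = exp(2*pi*i*k*x/N); all d_rho = 1 here
N = 7
f = [complex(x * x % N) for x in range(N)]        # arbitrary test functions
g = [complex((3 * x + 1) % N) for x in range(N)]

def fourier(h):
    # \hat{h}(k) = sum_x h(x) rho_k(x), matching the definition above
    return [sum(h[x] * cmath.exp(2j * cmath.pi * k * x / N) for x in range(N))
            for k in range(N)]

fhat, ghat = fourier(f), fourier(g)

# Parseval: sum_x |f(x)|^2 = (1/|G|) sum_k |fhat(k)|^2
lhs = sum(abs(v) ** 2 for v in f)
rhs = sum(abs(v) ** 2 for v in fhat) / N
assert abs(lhs - rhs) < 1e-6

# convolution (f*g)(x) = sum_y f(y) g(y^{-1} x) = sum_y f(y) g(x - y mod N);
# its Fourier transform is the pointwise product fhat * ghat
conv = [sum(f[y] * g[(x - y) % N] for y in range(N)) for x in range(N)]
convhat = fourier(conv)
assert all(abs(convhat[k] - fhat[k] * ghat[k]) < 1e-6 for k in range(N))
```

For a non--abelian group the same checks go through with matrices $\FF{f}(\rho)$ in place of scalars, with the Hilbert--Schmidt norm weighted by $d_\rho$ as in \eqref{f:Parseval_representations}.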
Also, given a set $S\subseteq \Gr$, we denote $\min_{\rho \in \FF{\Gr},\, \rho \neq 1} \| \FF{S} (\rho) \|$ as $\| S\|$. The signs $\ll$ and $\gg$ are the usual Vinogradov symbols. All logarithms are to base $2$. \section{On the diameter of Cayley graphs} \label{sec:diameter} Let $S\subseteq \Gr$ be a set and let $\Cay (S)$ be the corresponding {\it Cayley graph} of $S$ defined as $\Cay(S) = (V,E)$ with the vertex set $V=\Gr$ and the set of edges $$ E = \{ (g,gs) ~:~ g\in \Gr,\, s\in S\} \,.$$ Clearly, $\Cay(S)$ is a regular graph and its diameter equals the minimal $d$ such that $S^d = \Gr$. As usual we consider the (oriented) {\it Laplace operator} of $\Cay (S)$ defined for an arbitrary function $f:\Gr \to \C$ as \begin{equation}\label{f:Delta_f} (\Delta f)(x) = f(x) - |S|^{-1} \sum_{s\in S} f(xs) \,. \end{equation} In other words, the matrix of $\Delta$ is $I-|S|^{-1} M(x,y)$, where $I$ is the identity operator and $M(x,y)$ is the adjacency matrix of the graph $\Cay (S)$ (the {\it Markov operator} of $\Cay (S)$), $M(x,y) = S(x^{-1} y)$. Actually, formula \eqref{f:Delta_f} defines an operator for an arbitrary function $F(x)$ in place of $S(x)$, provided one replaces $|S|$ by $\| F\|_1$. Further, the Laplace operator has the spectrum $$0 = \lambda_0 (\Cay (S)) \le \lambda_1 (\Cay (S)) \le |\lambda_2 (\Cay (S))| \le \dots \le |\lambda_{|\Gr|-1} (\Cay (S))|$$ and there is a variational description of $\lambda_1 (\Cay (S))$, namely, $$ \lambda_1 (\Cay (S)) = \min_{\langle f \rangle = 0,\, \| f\|_2=1} \langle \Delta f, f \rangle \,. $$ The quantity $\lambda_1$ is closely connected with the expansion properties of the considered graph, see, e.g., \cite{Lubotzky}. Below we write $\la_j$ for $\la_j (\Cay (S))$. Also, we will consider the corresponding eigenvalues $0 = \lambda^*_0 \le \lambda^*_1 \le \lambda^*_2 \le \dots \le \lambda^*_{|\Gr|-1}$ of the operator $I-|S|^{-2} MM^*$ (one can think of these numbers as squares of "singular"\, values of $\D$).
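As a quick numerical sanity check of Theorems \ref{t:DS-C} and \ref{t:basis} (illustrative only; the generating set below is an ad hoc choice of ours), note that for a cyclic group of order $N$ the eigenvalues of $\Delta$ are indexed by the characters, so $\la_1$ can be computed directly and compared with both bounds:

```python
import math

# cyclic group of order 11 with the symmetric set S = -S (mod 11),
# so the eigenvalues of Delta are real
N = 11
S = [0, 1, 3, 8, 10]

# diameter: minimal d with S^d = G; since 0 is in S, the powers S^d are nested
level, d = set(S), 1
while len(level) < N:
    level = {(a + s) % N for a in level for s in S}
    d += 1

# eigenvalues of Delta: 1 - (1/|S|) sum_s cos(2 pi k s / N), one per character k;
# k = 0 gives lambda_0 = 0, and lambda_1 is the minimum over k != 0
eig = [1 - sum(math.cos(2 * math.pi * k * s / N) for s in S) / len(S)
       for k in range(N)]
la1 = min(eig[k] for k in range(1, N))

assert la1 >= N / (d * len(S) ** d)      # the new bound |G| / (d |S|^d)
assert la1 >= 1 / (2 * d * d * len(S))   # the bound 1 / (2 d^2 |S|)
```

Here $d = 2$, so the new bound $|\Gr|/(d|S|^d) = 11/50$ is indeed stronger than $1/(2d^2|S|) = 1/40$, in line with the discussion after Theorem \ref{t:basis}.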
The same can be defined for an arbitrary graph $G=G(V,E)$, see \cite{Lubotzky}, namely, assuming for simplicity that the valency of $G$ is a constant, say, $\mathcal{V}$, we write \begin{equation}\label{f:Delta_f_any_graph} (\Delta f)(x) = f(x) - \mathcal{V}^{-1} \sum_{(x,y) \in E} f(y) \,. \end{equation} \bigskip The spectrum of the Cayley graph $\Cay(S)$ is closely connected with the Fourier transform of the characteristic function of $S$. For example, it is well--known, see \cite{SX} or \cite[Proposition 6.2.4]{Kowalski_exp}, that the multiplicity of any $\la_j$, $j\neq 0$ is at least $d_{\min} (\Gr)$ because each eigenspace of the Markov operator $M$ is a subrepresentation of the regular representation. We collect a series of simple results on the spectrum of $\Cay (S)$, which will be required later, in the following \begin{lemma} Let $\Gr$ be a finite group, let $S\subseteq \Gr$ be a set. Then $1-\lambda_j$, $1-\lambda^*_j$ belong to the spectra of matrices $|S|^{-1} \FF{S} (\rho)$, $|S|^{-2} \FF{S} (\rho) \FF{S} (\rho)^*$, respectively, where $\rho$ runs over $\FF{\Gr}$. Further $\la_1 \ge 1 - |S|^{-1} \| S \|$ and \begin{equation}\label{f:Laplace&representations_la^*} \la_1^* = 1-|S|^{-2} \| S\|^2 \,. \end{equation} \label{l:Laplace&representations} \end{lemma} \begin{proof} Let $f'(x) = f(x^{-1})$. The required inclusion follows from the formula $(\Delta f) (x) = f(x) - |S|^{-1} (S * f')(x^{-1})$ and similarly for $I-|S|^{-2} MM^*$. Let $f$ be an eigenfunction of $\Delta$, that is, $(\Delta f) (x) = \mu f(x)$, $\mu \in \mathbb{C}$. Taking the Fourier transform, we derive \[ \mu \FF{f} (\rho) = \FF{f} (\rho) - |S|^{-1} \FF{f} (\rho) \FF{S} (\rho)^* \,. \] In other words, \[ 0 = \FF{f} (\rho) ((1-\mu)I - |S|^{-1} \FF{S} (\rho)^*) \,. \] In view of \eqref{f:inverse_representations} we know that there is $\rho \in \FF{\Gr}$ such that $\FF{f} (\rho) \neq 0$ because otherwise $f\equiv 0$.
Hence the matrix $(1-\mu)I - |S|^{-1} \FF{S} (\rho)^*$ cannot be invertible for this $\rho$ and thus $1-\mu$ belongs to the spectrum of $|S|^{-1} \FF{S} (\rho)^*$, which coincides with the spectrum of $|S|^{-1} \FF{S} (\rho)$. Further, applying the Cauchy--Schwarz inequality, formula \eqref{f:Parseval_representations} twice, as well as identity \eqref{f:convolution_representations}, we have for any function $f: \Gr \to \C$, $\| f\|_2 =1$, $\langle f\rangle =0$ that \[ \langle \Delta f, f \rangle = 1- |S|^{-1} \sum_x (S * f')(x^{-1}) \overline{f(x)} = 1- (|S| |\Gr|)^{-1} \sum_{\rho \in \FF{\Gr}} d_\rho \langle \FF{S} (\rho) \FF{f'} (\rho), \FF{\overline{f}} (\rho) \rangle \ge \] \[ \ge 1- (|S| |\Gr|)^{-1} \| S\| \sum_{\rho \in \FF{\Gr}} d_\rho \| \FF{f} (\rho) \|^2 = 1- \| S\| / |S| \,. \] Finally, to get \eqref{f:Laplace&representations_la^*} we first notice that by the same calculations with $S$ replaced by $S*S^{-1}$, we have $\la^*_1 \ge 1-|S|^{-2} \| S\|^2$. Let us obtain the reverse inequality. Find a certain $\rho \in \FF{\Gr}$, $\rho \neq 1$ and a vector $\phi \in \C^{d_\rho}$, $\|\phi\|_2 = 1$ such that $\| S\|^2 = \langle \FF{S} (\rho) \FF{S} (\rho)^* \phi, \phi \rangle$. Using the definition of the Fourier transform, we get \begin{equation}\label{f:30.03_1} \| S\|^2 = \langle \FF{S} (\rho) \FF{S} (\rho)^* \phi, \phi \rangle = \sum_{g\in \Gr} (S*S^{-1}) (g) \langle \rho(g) \phi, \phi \rangle := \sum_{g\in \Gr} (S*S^{-1}) (g) F(g) \,. \end{equation} Let us calculate the Fourier transform of $F$. Applying the orthogonality relations for any $\pi \in \FF{\Gr}$, see, e.g., \cite[Theorem 1, page 67]{Naimark}, we obtain \[ \FF{F} (\pi) = \sum_{i,j} \phi(i) \overline{\phi(j)} \sum_{g\in \Gr} \rho(g)_{ij} \pi (g) = \frac{|\Gr|}{d_\rho} \left|\sum_k \phi(k) \right|^2 \ge 0\,. \] Hence the Fourier transform of $F$ is non--negative and thus $F$ can be written as $f' * f$ for a certain function $f$.
Since $\rho \neq 1$, it follows that $\sum_g F(g) = 0$ (here we have used the orthogonality relations again). This implies $\langle f \rangle = 0$. But by the definition of the Laplace operator, we have for any function $f$ that \begin{equation}\label{f:TT^*_action} \langle M M^* f, f \rangle = |S|^{-2} \sum_{x\in \Gr} (S*S^{-1}) (x) (f' * f) (x) \,. \end{equation} Returning to \eqref{f:30.03_1}, using the fact that $\langle f \rangle = 0$ and the variational property of the singular values of $M$, we derive \[ \| S\|^2 = |S|^2 \langle M M^* f, f \rangle \le |S|^2 (1-\la^*_1) \] or, in other words, $\la^*_1 \le 1-|S|^{-2} \| S\|^2$ as required. $\hfill\Box$ \end{proof} \bigskip The proof of our first main result, Theorem \ref{t:basis}, is based on an idea from \cite{Mosh_digits_Che}. We formulate our result in a slightly more general form. \begin{theorem} Let $\Gr$ be a finite group, $\Omega \subset \Gr$ be a set, let $g$ be a positive real, $d \ge 2$ be an integer, and let $B\subseteq \Gr$ be a set such that any element of $\Gr \setminus \Omega$ can be represented as a product of $d$ elements of $B$ in at least $g$ ways. Then \begin{equation}\label{f:basis_g_lambda} \la_1 (\Cay (B)) \ge \frac{g|\Gr|}{d (|B|+ g|\Omega|)^d} - \frac{g|\Omega|}{|B|}\,. \end{equation} Suppose that for sets $B_1,B_2 \subseteq \Gr$ one has $(B_1 * B_2)^{(d)} (x) \ge g$ outside $\Omega$. Then \begin{equation}\label{f:basis_g_lambda*-} \la_1 (\Cay (B_1 *B_2)) \ge \frac{g|\Gr|}{d (|B_1||B_2|+g|\Omega|)^{d}} - \frac{g|\Omega|}{|B_1||B_2|} \,. \end{equation} In particular, \begin{equation}\label{f:basis_g_lambda*} \la^*_1 (\Cay (B)) \ge \frac{g|\Gr|}{d (|B|^2+g|\Omega|)^{d}} - \frac{g|\Omega|}{|B|^2} \,. \end{equation} \label{t:basis_g} \end{theorem} \begin{proof} We first assume that $\Omega = \emptyset$. Let $f(x) = f_B (x) = B(x) - |B|/|\Gr|$ be the balanced function of the set $B$.
Clearly, we have $\sum_{x\in \Gr} f(x) = 0$; further, for an arbitrary $j$ one has $f^{(j)} (x) = B^{(j)} (x) - |B|^j/|\Gr|$ and hence $\sum_{x\in \Gr} f^{(j)}(x) = 0$. For any $k\ge 1$ consider \[ \T_k (f) = \sum_{x\in \Gr} f^{(k)} (x)^2 = \sum_{x\in \Gr} B^{(k)} (x)^2 - \frac{|B|^{2k}}{|\Gr|} \,. \] Using the definition of the Laplace operator and counting the number of cycles of length $2k$ in $\Cay(B)$, we obtain \begin{equation}\label{f:28.03_0} |\Gr| \T_k (f) = |B|^{2k} \sum_{j=0}^{|\Gr|-1} |1-\la_j|^{2k} - |B|^{2k} = |B|^{2k} \sum_{j=1}^{|\Gr|-1} |1-\la_j|^{2k} \,. \end{equation} Notice that $\T_1(f) < |B|$. We have \begin{equation}\label{f:28.03_1} \T_k (f) = \sum_y \sum_{z_1, z_2} f^{(k-d)} (yz^{-1}_1) f^{(k-d)} (y z^{-1}_2) (B^{(d)} (z_1) - g) (B^{(d)} (z_2) - g) \end{equation} and by the Cauchy--Schwarz inequality for any $z_1,z_2 \in \Gr$, we obtain \begin{equation}\label{f:28.03_2} \sum_y f^{(k-d)} (yz^{-1}_1) f^{(k-d)} (y z^{-1}_2) \le \T_{k-d} (f) \,. \end{equation} For any $x\in \Gr$, we know that $B^{(d)} (x) \ge g$. Combining the last inequality with \eqref{f:28.03_1}, \eqref{f:28.03_2}, we derive \[ \T_k (f) \le \sum_{z_1, z_2} \T_{k-d} (f) (B^{(d)} (z_1) - g) (B^{(d)} (z_2) - g) = \T_{k-d} (f) (|B|^d - g |\Gr|)^2 \,. \] By induction we see that for any $l$ the following holds \[ \T_{dl+1} (f) \le \T_{1} (f) (|B|^d - g |\Gr|)^{2l} < |B| (|B|^d - g |\Gr|)^{2l} \,. \] Substituting the last bound into \eqref{f:28.03_0}, we obtain \[ (1-\la_1)^{2dl+2} |B|^{2ld+2} |\Gr|^{-1} \le \T_{dl+1} (f) < |B| (|B|^d - g |\Gr|)^{2l} = |B|^{2ld+1} \left( 1- \frac{g|\Gr|}{|B|^d} \right)^{2l} \,. \] Taking $l$ sufficiently large, we get \[ 1-\la_1 \le \left( 1- \frac{g|\Gr|}{|B|^d} \right)^{1/d} \le 1- \frac{g|\Gr|}{d |B|^d} \] as required. Now if $\Omega \neq \emptyset$, then replace the characteristic function of $B$ by $\tilde{B} (x) = B(x) + g\Omega (bx)$, where $b$ is an arbitrary element of $B^{d-1}$. 
Then for any $x\in \Gr$ one has $\tilde{B}^{(d)} (x) \ge g$ and we can apply the arguments above. This gives us \[ \la_1 (\Cay (B)) + \frac{g|\Omega|}{|B|} \ge \la_1 (\Cay (\tilde{B})) \ge \frac{g|\Gr|}{d \|\tilde{B} \|_1^d} = \frac{g|\Gr|}{d (|B|+ g|\Omega|)^d} \] and we have \eqref{f:basis_g_lambda}. It remains to obtain \eqref{f:basis_g_lambda*-}, and we again first consider the case $\Omega = \emptyset$. Let us apply the same arguments with a new function $F(x) = (f_1 * f_2) (x)$ instead of $f$, where $f_1 = f_{B_1}$ and $f_2 = f_{B_2}$. One has $$ \T_1 (F) = \sum_{x\in \Gr} F^2 (x) = \sum_{x\in \Gr} (f_1 * f_2)^2 (x) = \sum_{x\in \Gr} (B_1 * B_2)^2 (x) - \frac{|B_1|^2 |B_2|^2}{|\Gr|} < |B_1| |B_2| \min\{ |B_1|, |B_2| \} $$ and we can repeat the arguments above. For non--empty $\Omega$ consider the function $(B_1 * B_2) (x) + g \Omega (bx)$, where $b$ is an arbitrary element of $(B_1 B_2)^{d-1}$ and apply the arguments as before. To obtain \eqref{f:basis_g_lambda*} we just use \eqref{f:basis_g_lambda*-} with $B_1=B$, $B_2 = B^{-1}$ or vice versa. This completes the proof. $\hfill\Box$ \end{proof} \begin{remark} Using the well--known Pl\"unnecke inequality \cite{TV} in the case of the symmetric (for simplicity) basis $B \subseteq \Gr$ of order $d$ and an abelian group $\Gr$ one has that for any $A\subseteq \Gr$ the following holds \[ |A| \cdot \left( \frac{|\Gr|}{|A|} \right)^{1/d} \le |A| \cdot \left( \frac{|B^d|}{|A|} \right)^{1/d} \le |AB| \,. \] This shows that $\Cay (B)$ has an expansion property and, in principle, one can obtain some lower estimates for $\la_1$ in terms of the expansion constant $h(\Cay(B))$, see, e.g., \cite[Proposition 3.4.3]{Kowalski_exp}. 
Similarly, notice that there is another well--known general bound for the spectrum of a strictly positive matrix $A = (a_{ij})_{i,j=1}^n$, namely, $|\mu_2 (A)| \le \mu_1 (A) \cdot \frac{M-m}{M+m}$, where $M=\max_{i,j} a_{ij}$, $m=\min_{i,j} a_{ij}$ and $\mu_1 (A) \ge |\mu_2 (A)| \ge \dots $ are eigenvalues of the matrix $A$. Nevertheless, Theorem \ref{t:DS-C} and our Theorem \ref{t:basis_g} give better bounds than both of these estimates. \end{remark} \begin{remark} \label{r:other_g} If for any $x\in \Gr$ one has $B^{(d)} (x) \ge 1$, then, clearly, for an arbitrary integer $l\ge d$ the following holds $B^{(l)} (x) \ge |B|^{l-d}$. Thus one can improve bounds \eqref{f:basis_g_lambda}, \eqref{f:basis_g_lambda*} of Theorem \ref{t:basis_g} taking larger $l$ in the case when we know some better lower estimates for $B^{(l)} (x)$. Now suppose that $B^{(d)} (x) \gg |B|^d/|\Gr|$ for any $x\in \Gr$, that is, the number of representations is comparable with its expectation. Then $\la_1 (\Cay (B)) \gg 1/d$ and hence the bound for $\la_1 (\Cay (B))$ does not depend on $|\Gr|$ and $|B|$. \end{remark} Combining Theorem \ref{t:basis_g} and Lemma \ref{l:Laplace&representations}, we obtain \begin{corollary} Let $\Gr$ be a finite group, $g$ be a positive real, $d\ge 2$ be an integer, and let $B\subseteq \Gr$ be a set such that for any $x\in \Gr$ one has $(B * B^{-1})^{(d)} (x) \ge g$ or $(B^{-1} *B)^{(d)} (x) \ge g$. Then for an arbitrary non--trivial representation $\rho$ one has \[ \| \FF{B} (\rho) \| \le |B| \left( 1-\frac{g|\Gr|}{d |B|^{2d}} \right)^{1/2} \,. \] \label{c:basis_g} \end{corollary} The next Corollary shows that basis properties of a set $B$ imply the uniform distribution of products of $k$ elements of $B$ for large $k$. \begin{corollary} Let $\Gr$ be a finite group, $g$ be a positive real, $d\ge 2$ be an integer, and let $B\subseteq \Gr$ be a set such that $(B * B^{-1})^{(d)}$ or $(B^{-1} * B)^{(d)}$ is at least one on $\Gr$. 
Suppose that $k$ grows to infinity faster than $$ \frac{d |B|^{2d}}{|\Gr|} \cdot \log \left( \frac{|\Gr|}{|B|} \right) \,. $$ Then for any $x\in \Gr$ one has \begin{equation}\label{f:UD_basis} B^{(k)} (x) = \frac{|B|^k}{|\Gr|} (1+o(1)) \,. \end{equation} \label{c:UD_basis} \end{corollary} \begin{proof} Without loss of generality, we consider the case $B*B^{-1}$. Using formula \eqref{f:inverse_representations}, we get \begin{equation}\label{tmp:01.04_2} B^{(k+2)} (x) = \frac{1}{|\Gr|} \sum_{\rho \in \FF{\Gr}} d_\rho \langle \FF{B}^{k+2} (\rho), \rho (x^{-1}) \rangle = \frac{|B|^{k+2}}{|\Gr|} + \mathcal{E} \,, \end{equation} and our task is to estimate the error term $\mathcal{E}$. By Corollary \ref{c:basis_g}, we have $\| \FF{B} (\rho) \| \le |B| \left( 1-\frac{|\Gr|}{d|B|^{2d}} \right)^{1/2}$ and thus in view of \eqref{f:Parseval_representations}, we get \begin{equation}\label{tmp:01.04_3} |\mathcal{E}| \le \left(|B| \left( 1 - \frac{|\Gr|}{d|B|^{2d}} \right)^{1/2} \right)^k \cdot \frac{1}{|\Gr|} \sum_{\rho \in \FF{\Gr}} d_\rho \| \FF{B} (\rho) \|^2_{HS} \le \left( 1 - \frac{|\Gr|}{d|B|^{2d}} \right)^{k/2} |B|^{k+1} \,. \end{equation} Comparing \eqref{tmp:01.04_2}, \eqref{tmp:01.04_3}, we obtain the result. $\hfill\Box$ \end{proof} \bigskip The same arguments work in the general case, in the proof of Theorem \ref{t:basis_graph}. We leave to the reader the task of inserting the exceptional set $\Omega$ into Theorem \ref{t:basis_graph'} below. \begin{theorem} Let $G = G(V,E)$ be a graph with the valency $\mathcal{V}$. Suppose that there are at least $g$ paths of length $d$ between any two vertices of $G$. Then \begin{equation}\label{f:basis_graph'} \la_1 (G) \ge \frac{g|V|}{d\mathcal{V}^d} \,. 
\end{equation} \label{t:basis_graph'} \end{theorem} \begin{proof} Let $F(x,y) = I - \mathcal{V}^{-1} M(x,y)$ be the matrix of the operator from \eqref{f:Delta_f_any_graph}, $M$ is the adjacency matrix of the graph $G$ and denote by $F^{(j)}$, $M^{(j)}$ the powers of these matrices. Clearly, we have $\sum_{x,y} F(x,y) = 0$ and, moreover, by the definition of the valency, one has $\sum_{a} F(a,y) = \sum_{b} F(x,b) = 0$ for any $x$ and $y$. Hence for an arbitrary $j$ and any $x$, $y$ the following holds \begin{equation}\label{f:28.03_-1} \sum_{a} F^{(j)} (a,y) = \sum_{b} F^{(j)} (x,b) = 0 \,. \end{equation} For any $k\ge 1$ consider \begin{equation}\label{f:28.03_0'} \T_k = \sum_{x,y} F^{(k)} (x,y)^2 = \tr (F^{(k)} (F^{(k)})^*) = \mathcal{V}^{2k} \sum_{j=0}^{|V|-1} |1-\la_j|^{2k} = \mathcal{V}^{2k} \sum_{j=1}^{|V|-1} |1-\la_j|^{2k} \,. \end{equation} Notice that \begin{equation}\label{f:28.03_0+} \T_1 = |V| - 2\mathcal{V}^{-1} \tr (M) + \mathcal{V}^{-2} |E| \le |V| + \mathcal{V}^{-1} |V| \le 2|V| \,. \end{equation} Using formula \eqref{f:28.03_-1}, we obtain \begin{equation}\label{f:28.03_1'} \T_k = \sum_{x,y} \sum_{a,b} F^{(k-d)} (x,a) F^{(k-d)} (x,b) (M^{(d)} (a,y) - g) (M^{(d)} (b,y) - g) \end{equation} and by the Cauchy--Schwarz inequality for any $a,b \in V$, we have $$ \sum_{x} F^{(k-d)} (x,a) F^{(k-d)} (x,b) \le \left( \sum_{x} F^{(k-d)} (x,a)^2 \right)^{1/2} \left( \sum_{x} F^{(k-d)} (x,b)^2 \right)^{1/2} $$ \begin{equation}\label{f:28.03_2'} := q^{1/2}(a) q^{1/2}(b) \,. \end{equation} Clearly, $\| q^{1/2} \|^2_2 = \sum_a q(a) = \T_{k-d}$. For any $x,y$, we know that $M^{(d)} (x,y) \ge g$. Combining the last inequality with \eqref{f:28.03_1'}, \eqref{f:28.03_2'}, we derive \[ \T_k \le \sum_{a,b} q^{1/2} (a) q^{1/2} (b) ((M^{(d)} (M^{(d)})^* ) (a,b) - 2g \mathcal{V}^d + g^2 |V|) \le \] \[ \le \| M^{(d)} q^{1/2} \|^2_2 + (g^2 |V| - 2g \mathcal{V}^d) \left( \sum_a q^{1/2} (a) \right)^2 \le (\mathcal{V}^{2d} + (g^2 |V| - 2g \mathcal{V}^d) |V| ) \T_{k-d} \,. 
\] By induction and estimate \eqref{f:28.03_0+} we see that \[ \T_{dl+1} \le \T_{1} (\mathcal{V}^{d} - g |V| )^{2l} \le 2|V| (\mathcal{V}^{d} - g |V| )^{2l} \,. \] Substituting the last bound into \eqref{f:28.03_0'}, we obtain \[ (1-\la_1)^{2dl+2} \mathcal{V}^{2ld+2} \le \T_{dl+1} \le 2|V| (\mathcal{V}^{d} - g |V| )^{2l} = 2\mathcal{V}^{2ld} |V| \left( 1 - \frac{g|V|}{\mathcal{V}^d} \right)^{2l} \,. \] Taking $l$ sufficiently large, we get \begin{equation*}\label{tmp:13.04_1} 1-\la_1 \le \left( 1- \frac{g|V|}{\mathcal{V}^d} \right)^{1/d} \le 1 - \frac{g|V|}{d\mathcal{V}^d} \end{equation*} as required. $\hfill\Box$ \end{proof} \section{On $\Z/N\Z$--case} \label{sec:Z_N} Now we consider the case of an abelian group $\Gr$, and for simplicity we often take $\Gr$ to be $\Z/N\Z$ with a prime $N$ (bounds for spectral gaps of Cayley graphs in general abelian groups can be found in \cite{Sanders_Ab_lectures}, say). In this case we show that the results of the previous Section can be obtained via another tool (namely, see Theorem \ref{t:Lev_1-eps} below) and moreover one can characterise the existence of the spectral gap in combinatorial terms. It is easy to see (or consult Lemma \ref{l:Laplace&representations}) that in the abelian case for any set $S \subseteq \Gr$ we have the identity $\la_1 (\Cay (S)) = 1-|S|^{-1} \| S\|$. In other words, for any non--trivial character $\chi$ \begin{equation}\label{f:la1_abelian} \left|\sum_{s\in S} \chi(s) \right| \le (1-\la_1 (\Cay (S))) |S| \end{equation} and the estimate is attained for a certain $\chi$. Thus estimating exponential sums from above and finding non--trivial lower bounds for the quantity $\la_1$ is one and the same problem for abelian $\Gr$. \bigskip In this Section our basic tool is \cite[Theorem 1]{Lev_1-eps}. \begin{theorem} Let $A\subseteq \Z/N\Z$ be a set, $\eps \in (0,1)$, $\delta \in (0,1/2)$ be real numbers and $|\FF{A} (1)| \ge (1-2\eps (1-\cos \pi \delta)) |A|$. 
Then there is $a\in \Z/N\Z$ and $l< \delta N$ such that \[ |A\setminus [a,a+l]| < \eps |A| \,. \] \label{t:Lev_1-eps} \end{theorem} Given a positive integer $d$, a set $P \subseteq \Gr$ and a non--negative function $f$ on $\Gr$, put \begin{equation}\label{f:sigma_d} \sigma^{(d)}_P (f) := \| f\|_1^{-d} \sum_{x \in P} f^{(d)} (x) \le 1 \,. \end{equation} We characterise the spectral gap of $\Cay (B)$ in terms of the purely combinatorial quantity \eqref{f:sigma_d}. \begin{theorem} Let $N$ be a prime number, $d$ be a positive integer and $\alpha, \delta \in (0,1)$ be real numbers. Suppose that for any arithmetic progression $P$, $|P| \le \delta N$, $\delta < d/2$ one has $\sigma^{(d)}_P (B) \le 1-\alpha$. Then $$\la_1 (\Cay (B)) \ge \frac{2\alpha}{d} \left(1 - \cos \frac{\pi \delta}{d} \right) \,. $$ In the opposite direction for any arithmetic progression $P$, $|P| \le \delta N$ one has $\sigma^{(d)}_P (B) \le 1-\alpha$, where $\alpha = ( 1-(1-\la_1 (\Cay (B)))^d - \pi \d )/2$. \label{t:Lev_appl} \end{theorem} \begin{proof} To obtain the first part of the required result we apply Theorem \ref{t:Lev_1-eps} with the parameters $\delta/d$ and $\eps = \la_1 /(2(1-\cos (\pi \delta/d)))$. In view of \eqref{f:la1_abelian}, we have the decomposition $B=B_* \bigsqcup E$, where $B_* = B\cap [a,a+l]$, $a\in \Z/N\Z$, $l<\delta N/d$ and $|E|< \eps |B|$. Let $P =[a,a+l]$. Then $dP$ is another arithmetic progression of length at most $\delta N$. Further \[ |B|^d = \sum_x B^{(d)} (x) \le \sum_{x} B^{(d)}_* (x) + d |E| |B|^{d-1} < \sum_{x\in dP} B^{(d)}_* (x) + \eps d |B|^{d} \le \] \[ \le |B|^d \sigma^{(d)}_{dP} (B) + \eps d |B|^{d} \le |B|^d (1-\alpha + \eps d) \] or, equivalently, \[ \la_1 \ge \frac{2\alpha}{d} \left(1 - \cos \frac{\pi \delta}{d} \right) \] as required. To get the second part of our Theorem take any arithmetic progression $P$ such that $\sigma^{(d)}_P (B) > 1-\alpha$, where $\alpha$ will be chosen later. 
Then for any nonzero $r\in \Z/N\Z$, we have \begin{equation}\label{f:31.03_1} \FF{B}^d (r) = \sum_x B^{(d)} (x) e^{-2 \pi irx/N} = \sum_{x\in P} B^{(d)} (x) e^{-2 \pi irx/N} + \theta \alpha |B|^d \,, \end{equation} where $|\theta| \le 1$ is a certain number. By the assumption $N$ is a prime number. Shifting and choosing $r$ in an appropriate way, one can assume that $r=1$ and $P$ is a symmetric progression with the step one, i.e., $P = \{ x\in \Z/N\Z ~:~ |x|\le \delta N/2 \}$. Returning to \eqref{f:31.03_1} and applying formula \eqref{f:la1_abelian} to estimate the left--hand side of \eqref{f:31.03_1}, we obtain \[ (1-\alpha) |B|^d < \sum_{x\in P} B^{(d)} (x) \le |B|^d ((1-\la_1)^d + \alpha) + \sum_{x\in P} B^{(d)} (x) |e^{-2 \pi ix/N} - 1| \le |B|^d ((1-\la_1)^d + \alpha + \pi \delta) \] or, in other words, \[ \a \ge 2^{-1} ( 1-(1-\la_1)^d - \pi \d ) \,. \] This completes the proof. $\hfill\Box$ \end{proof} \bigskip Theorem \ref{t:Lev_appl} has a consequence concerning the Laplace operator of an arbitrary basis of order $d$. \begin{corollary} Let $N$ be a prime number, $d \ge 2$ be an integer and $B, \Omega \subseteq \Z/N\Z$ be sets such that any element of $\Z/N\Z \setminus \Omega$ can be represented as a sum of $d$ elements of $B$ in at least $g\ge 1$ ways. Then \begin{equation}\label{f:basis_ab} \la_1 (\Cay (B)) \ge \frac{g(N - 2|\Omega|)}{d|B|^d} \left( 1 - \cos \left( \frac{\pi}{2d} \right) \right) \,. \end{equation} If $|\Omega| = (1-\eps)N$, then \begin{equation}\label{f:basis_ab+} \la_1 (\Cay (B)) \ge \frac{\eps gN}{d|B|^d} \left( 1 - \cos \left( \frac{\eps \pi}{2d} \right) \right) \,. \end{equation} \label{c:basis_ab} \end{corollary} \begin{proof} Let $\d \in (0,1)$ be a number which we will choose later, let $P$ be an arbitrary arithmetic progression with $|P| \le \delta N$ and $P^c := (\Z/N\Z) \setminus P$. 
Since $B^{(d)} (x) \ge g$ for any $x\in \Gr \setminus \Omega$, we see that \begin{equation}\label{f:sigma_basis_P^c} \sigma^{(d)}_P (B) = 1 - \sigma^{(d)}_{P^c} (B) \le 1 - \frac{g(|P^c|-|\Omega|)}{|B|^d} \le 1 - \frac{gN(1-\d) - g|\Omega|}{|B|^d} \,. \end{equation} Applying Theorem \ref{t:Lev_appl} with $\a = \frac{gN(1-\d) - g|\Omega|}{|B|^d}$ and $\d=1/2$, we derive \[ \la_1 (\Cay (B)) \ge \frac{g(N - 2|\Omega|)}{d|B|^d} \left( 1- \cos \left( \frac{\pi}{2d} \right) \right) \] as required. To obtain \eqref{f:basis_ab+} use Theorem \ref{t:Lev_appl} with the parameters $\d = \frac{\eps}{2}$ and $\a = \frac{\eps gN}{2|B|^d}$. \begin{comment} Now applying Theorem \ref{t:Lev_1-eps} with the parameters $\delta = 1/(2d)$ and $\eps = \la_1 /(2(1-\cos \pi/2d)))$, we have the decomposition $B=B_* \bigsqcup E$, where $B_* = B\cap P$, $P$ is an arithmetic progression with $|P| < N/2d$ and $|E|< \eps |B|$. By commutativity, we have $(d-1)B + E \supseteq \Gr \setminus dP$ and hence $|E| \ge \frac{N}{2|(d-1)B|}$. Since $dP$ can be covered by $d$ translations $T$ of $P$, it follows that for $\tilde{E} = E \cup T$, we have $(d-1)B + \tilde{E} = \Gr$. Applying formula \eqref{f:basis_g_lambda*-} of Theorem \ref{t:basis_g} which we use in the abelian case, we get \[ \la_1 (\Cay(B*E)) + \frac{d}{|E|} \ge \la_1 (\Cay(B*\tilde{E})) \ge \frac{N}{2d(|B|(|E|+d))^d} \,. \] It follows that \[ \la_1 (\Cay(B*E)) \ge \frac{N}{|B|^d} \cdot \min \left\{ \frac{1}{(2d)^{d+1}}, \frac{(1-\cos \frac{\pi}{2d})^d}{2d \la^d_1} \right\}- \frac{2d |(d-1) B|}{N} \,. \] \end{comment} This completes the proof. $\hfill\Box$ \end{proof} \bigskip Thus the bound of Corollary \ref{c:basis_ab} is comparable with the estimate from Theorem \ref{t:basis}. The main advantage of using Theorem \ref{t:Lev_appl} is the reformulation of the problem of estimating $\la_1$ in terms of the purely combinatorial quantity \eqref{f:sigma_d}. 
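In the model case $\Gr = \Z/N\Z$ both sides of this correspondence are easy to compute numerically. The following Python sketch (purely illustrative and not part of any argument; the test set and the parameters below are our arbitrary choices) evaluates the quantity \eqref{f:sigma_d} and the spectral gap $\la_1(\Cay(B))$ via the fast Fourier transform, using the identity \eqref{f:la1_abelian}.

```python
import numpy as np

def lambda_1(B, N):
    # Abelian spectral gap: 1 - max_{r != 0} |\hat{B}(r)| / |B|, cf. (f:la1_abelian).
    ind = np.zeros(N)
    ind[list(B)] = 1.0
    return 1.0 - np.abs(np.fft.fft(ind))[1:].max() / len(B)

def sigma(B, N, d, P):
    # sigma^{(d)}_P(B) = |B|^{-d} * sum_{x in P} B^{(d)}(x), cf. (f:sigma_d),
    # where B^{(d)} is the d-fold cyclic convolution of the indicator of B.
    ind = np.zeros(N)
    ind[list(B)] = 1.0
    conv = np.fft.ifft(np.fft.fft(ind) ** d).real
    return float(sum(conv[x % N] for x in P)) / len(B) ** d

# Example (arbitrary choice): the quadratic residues modulo 7.
B, N = {1, 2, 4}, 7
gap = lambda_1(B, N)
mass = sigma(B, N, 2, range(N))  # the whole group carries all of B^{(2)}
```

For instance, $\sigma^{(d)}_{\Gr}(B) = 1$ always, while small $\sigma^{(d)}_P(B)$ on short progressions $P$ forces a positive gap, in accordance with Theorem \ref{t:Lev_appl}.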
Also, the dependence on $\Omega$ in \eqref{f:basis_ab} is better than in Theorem \ref{t:basis_g}. \begin{example} Put $S=\Lambda \cup P \subseteq \Z/N\Z$, where $\Lambda$ is a randomly chosen set such that $2\Lambda = \Z/N\Z$ (or such that $2\Lambda$ is close to $\Z/N\Z$; this is not important), $|\Lambda| = c_1 \sqrt{N}$ for an absolute constant $c_1 >0$, and $P$ is an arithmetic progression with step one, $|P| = C \sqrt{N}$, where $C>0$ is a large parameter. One can easily show that the largest non--zero Fourier coefficient of $S$ coincides with the largest non--zero Fourier coefficient of $P$. The latter is $|P| (1+o(1))$ and hence $$ \la_1(\Cay (S)) \ge 1 - \frac{|P|(1+o(1))}{|S|} \ge \frac{c_1}{c_1+C} + o(1) \gg \frac{1}{C} \,. $$ On the other hand, Corollary \ref{c:basis_ab} gives us $\la_1 (\Cay (S)) \gg \frac{1}{(c_1+C)^2} \gg \frac{1}{C^2}$. Thus for a fixed large $C$ these bounds have comparable quality. \end{example} \section{The general case} \label{sec:non-abelian} In this Section we generalise the results from Section \ref{sec:Z_N} to the non--abelian case. Following \cite[Section 17]{Sanders_A(G)}, we define Bohr sets in a (non--abelian) group $\Gr$. \begin{definition} Let $\G$ be a collection of unitary representations of $\Gr$ and $\delta \in (0,2]$ be a real number. Put \[ \Bohr (\G,\delta) = \{ g\in \Gr ~:~ \| \gamma(g) - I \| \le \delta\,, \forall \gamma \in \Gamma \} \,. \] \end{definition} Clearly, $e\in \Bohr (\G,\delta)$, and $\Bohr (\G,\delta) = \Bohr^{-1} (\G,\delta) = \Bohr (\G^*,\delta)$. Also, notice that (see, e.g., formula \eqref{f:1-xy} below) \begin{equation}\label{f:Bohr_sums} \Bohr (\G,\delta_1) \Bohr (\G,\delta_2) \subseteq \Bohr (\G,\delta_1+\delta_2) \,. \end{equation} By left/right invariance of $\| \cdot \|$ one can easily show (or consult \cite[Lemma 4.1]{Sanders_doubling_metrics}) the normality of Bohr sets, i.e., the identity $x \Bohr (\G,\delta) x^{-1} = \Bohr (\G,\delta)$, which holds for any $x\in \Gr$. 
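In the abelian model case $\Gr = \Z/N\Z$ every irreducible representation is a character, so these sets are one-line computations; the following Python sketch (illustrative only, with arbitrary parameters) produces them and lets one verify, for instance, the containment \eqref{f:Bohr_sums} directly.

```python
import cmath

def bohr(freqs, delta, N):
    # Bohr({chi_r : r in freqs}, delta) in Z/NZ, where chi_r(g) = exp(2*pi*i*r*g/N);
    # for one-dimensional representations the operator norm ||chi_r(g) - I||
    # is simply the modulus |exp(2*pi*i*r*g/N) - 1|.
    return {g for g in range(N)
            if all(abs(cmath.exp(2j * cmath.pi * r * g / N) - 1.0) <= delta
                   for r in freqs)}
```

One checks directly that the sets produced this way contain the identity, are symmetric, and satisfy $\Bohr(\G,\delta_1) + \Bohr(\G,\delta_2) \subseteq \Bohr(\G,\delta_1+\delta_2)$, the additive form of \eqref{f:Bohr_sums}.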
If $\G = \{ \rho \}$, then we write just $\Bohr (\rho,\delta)$ for $\Bohr (\G,\delta)$ (a lower bound for the size of $\Bohr (\rho,\delta)$ can be found in \cite[Lemma 17.3]{Sanders_A(G)}). Further properties of Bohr sets are contained in the Appendix. \begin{lemma} Let $A\subseteq \Gr$ be a set, $\eps, \delta \in (0,1)$ be real numbers. Suppose that for a certain unitary representation $\rho$ one has $\| \FF{A} (\rho) \| \ge (1-\eps) |A|$. Then $\sum_{g\notin \Bohr (\rho,\delta)} (A * A^{-1}) (g) \le \frac{2\eps}{\d} |A|^2$. \label{l:vlF} \end{lemma} \begin{proof} By the assumption $\| \FF{A} (\rho) \| \ge (1-\eps) |A|$. This means that \[ \| |A|^2 I - \sum_{g\in \Gr} (A * A^{-1}) (g) (I - \rho (g)) \| = \|\sum_{g\in \Gr} (A * A^{-1}) (g) \rho (g) \| \ge (1-\eps)^2 |A|^2 \,. \] For any $g\in \Gr$ each operator $I - \rho (g)$ is normal and non--negative definite. Moreover, the operator $\frac{1}{2} ( (A * A^{-1}) (g) (I - \rho (g)) + (A * A^{-1}) (g^{-1}) (I - \rho (g^{-1})) )$ is Hermitian because $(A * A^{-1}) (g^{-1}) = (A * A^{-1}) (g)$. Hence an arbitrary combination of such operators with non--negative coefficients is Hermitian and non--negative definite as well. This gives \begin{equation}\label{tmp:01.04_1} \| \sum_{g\notin \Bohr (\rho,\delta)} (A * A^{-1}) (g) (I - \rho (g)) \| \le \| \sum_{g\in \Gr} (A * A^{-1}) (g) (I - \rho (g)) \| \le (2\eps - \eps^2) |A|^2 \end{equation} because $\Bohr (\rho,\delta)$ is a symmetric set. Again, for an arbitrary $g\notin \Bohr (\rho,\delta)$ each operator $I - \rho^* (g)$ is normal and positive definite and, moreover, any such operator has all its singular values at least $\delta$ in view of the definition of Bohr sets. Also, $A (g) \ge 0$ for any $g\in \Gr$. Thus by the variational principle we derive from \eqref{tmp:01.04_1} that \[ \d \sum_{g\notin \Bohr (\rho,\delta)} (A * A^{-1}) (g) \le (2\eps - \eps^2) |A|^2 \le 2\eps |A|^2 \] as required. 
$\hfill\Box$ \end{proof} \bigskip Now we are ready to obtain a non--abelian analogue of Theorem \ref{t:Lev_appl}. \begin{theorem} Let $d$ be a positive integer and $\alpha, \delta \in (0,1)$ be real numbers.\\ Suppose that for any Bohr set $P = \Bohr (\rho, \delta)$, $\rho \neq 1$ one has $\sigma^{(d)}_P (B* B^{-1}) \le 1-\alpha$. Then $$\la_1 (\Cay (B)) \ge \frac{\alpha \delta }{2d^2} \,. $$ In the opposite direction for any Bohr set $P = \Bohr (\rho, \delta)$, $\rho \neq 1$ one has $\sigma^{(d)}_P (B* B^{-1}) \le 1-\alpha$, where $$ \alpha = \frac{1-(1-\la^*_1 (\Cay (B)))^{d} - \d}{2} \,. $$ \label{t:Lev_appl_non-abelian} \end{theorem} \begin{proof} By the first part of Lemma \ref{l:Laplace&representations} one has $\|B\| \ge |B|(1-\la_1)$. In other words, for a certain $\rho \neq 1$, we have $\| \FF{B} (\rho) \| \ge |B| (1-\la_1)$. To obtain the first statement of the required result we apply Lemma \ref{l:vlF} with $\delta$ replaced by $\delta/d$ and with $\eps = \la_1$. We have the decomposition of the function $f(x) = (B * B^{-1}) (x)$ as $f(x) = f_1 (x) + f_2 (x)$, where the function $f_1$ is supported on the Bohr set $P_* = \Bohr (\rho, \d/d)$, the function $f_2$ is supported outside $P_*$ and $\|f_2 \|_1 \le \frac{2\eps d}{\d} |B|^2$. Further \[ |B|^{2d} = \sum_x f^{(d)} (x) \le \sum_{x} f^{(d)}_1 (x) + \frac{2d^2 \eps}{\d} |B|^{2d} \le |B|^{2d} \sigma^{(d)}_{P^d_*} (B * B^{-1}) + \frac{2d^2 \eps}{\d} |B|^{2d} \le \] \[ \le |B|^{2d} \left( 1-\alpha + \frac{2d^2 \eps}{\d} \right) \] or, equivalently, \[ \la_1 \ge \frac{\alpha \delta }{2d^2} \,. \] To get the second part of our Theorem take any Bohr set $P = \Bohr (\rho, \delta)$, $\rho \neq 1$ such that $\sigma^{(d)}_P (B* B^{-1}) > 1-\alpha$, where $\alpha$ will be chosen later. We have \begin{equation}\label{f:31.03_1'} \FF{f}^d (\rho) = \sum_x (B* B^{-1})^{(d)} (x) \rho (x) = \sum_{x\in P} (B* B^{-1})^{(d)} (x) \rho (x) + \theta \alpha |B|^{2d} \,, \end{equation} where $\theta$ is a certain operator with $\| \theta \| \le 1$. 
Further, in view of the second part of Lemma \ref{l:Laplace&representations}, we can estimate $\| \FF{f}^d\|$ by $(1-\la^*_1)^d |B|^{2d}$. This gives \[ (1-\alpha) |B|^{2d} < \sum_{x\in P} (B* B^{-1})^{(d)} (x) \le |B|^{2d} ((1-\la^*_1)^{d} + \alpha) + \sum_{x\in P} (B* B^{-1})^{(d)} (x) \|\rho (x) - I \| \le \] \[ \le |B|^{2d} ((1-\la^*_1)^{d} + \alpha + \delta) \] or, in other words, \[ \a \ge 2^{-1} ( 1-(1-\la^*_1)^{d} - \d ) \,. \] This completes the proof. $\hfill\Box$ \end{proof} \begin{remark} Clearly, if for any $x\in \Gr$ and any Bohr set $P$ one can estimate from above the intersections $|B\cap Px|$ or $|B\cap xP|$ by $(1-\alpha) |B|$, then for an arbitrary $d$ the following holds $\sigma^{(d)}_{P} (B*B^{-1}) \le 1-\alpha$. \end{remark} We need upper bounds for the sizes of Bohr sets. \begin{lemma} Let $\Gr$ be a finite group and $\rho$ be an irreducible representation, $\rho \neq 1$. Then for \begin{eqnarray}\label{f:B/2} \delta \le \frac{1}{\sqrt{2}} \left( 1-\frac{1}{d_\rho} \right)^{1/2} \,, \quad d_\rho > 1 \quad \quad \mbox{and} \quad \quad \delta \le \frac{\sqrt{3}}{2} \,, \quad d_\rho = 1 \end{eqnarray} the following holds \begin{equation}\label{f:B/2_concl} |\Bohr (\rho, \delta)| \le |\Gr|/2 \,. \end{equation} Moreover, if $\Gr$ has no normal proper subgroups of index at most $1/\eps$, $\eps \le 1/2$, then \begin{equation}\label{f:B/2_concl2} |\Bohr (\rho, \delta_\eps)| \le \eps |\Gr| \,, \end{equation} where \[ \delta_\eps \le \left( 2-\frac{2}{d_\rho} \right)^{1/2} \cdot \eps^{\log_{3/2}2} \,, \quad d_\rho > 1 \quad \quad \mbox{and} \quad \quad \delta_\eps \le \sqrt{3} \cdot \eps^{\log_{3/2}2} \,, \quad d_\rho = 1 \,. \] \label{l:B/2} \end{lemma} \begin{proof} Take $\d$ as in \eqref{f:B/2}. If $|\Bohr (\rho, \delta)| > |\Gr|/2$, then $\Bohr (\rho, \delta)^2 = \Gr$ and hence $\Bohr (\rho, 2\delta) = \Gr$. In other words, for any $g\in \Gr$ one has $\| \rho(g) - I \| \le 2\d$. 
But \begin{equation}\label{tmp:18.04_1} 2d_\rho - 2 \tr (\rho(g)) = \| \rho(g) - I \|^2_{HS} \le d_\rho \| \rho(g) - I \|^2 \end{equation} and, on the other hand, by the orthogonality relations and the irreducibility of $\rho$ one has \[ \sum_{g\in \Gr} |\tr (\rho(g))|^2 = |\Gr| \,. \] Hence there is $g$ such that $|\tr (\rho(g))| \le 1$ and in view of \eqref{tmp:18.04_1}, we obtain \[ 2d_\rho -2 \le d_\rho (2\d)^2 \] as required. Finally, if $d_\rho =1$, then $\rho$ is just a non--trivial character on $\Gr$ and \[ \max_{g\in \Gr} \| \rho(g) - I \| \ge \min_{1<k \,|\, |\Gr|} \max_n |e^{2\pi i n/k} - 1| \ge \sqrt{3} \,, \] where the minimum is taken over all divisors of $|\Gr|$. It remains to obtain \eqref{f:B/2_concl2}. Suppose that $|\Bohr (\rho, \delta_\eps)| > \eps |\Gr|$. We know that any Bohr set is normal. Also, it is well--known (see, e.g., \cite{TV}) that for any set $A \subseteq \Gr$ one has either $|AA| \ge 3|A|/2$ or $AA^{-1}$ is a subgroup of $\Gr$. By the assumption $\Gr$ has no normal proper subgroups of index at most $1/\eps$. Thus for an integer $k \ge (1/2\eps)^{\log_{3/2} 2}+1$ one has $\Bohr^k (\rho, \delta_\eps) = \Gr$ and hence $\Bohr (\rho, k\delta_\eps) = \Gr$. It follows that $k \delta_\eps$ is greater than $(2-2/d_\rho)^{1/2}$ for $d_\rho >1$ and $\sqrt{3}$ for $d_\rho = 1$. This completes the proof. $\hfill\Box$ \end{proof} \bigskip Clearly, estimate \eqref{f:B/2_concl} is tight, as the case $\Gr = \F_2^n$ shows. Finally, notice the well--known fact that for any $H<\Gr$ one has $|\Gr /H| \ge d_{\min} (\Gr) +1$. Thus $d_{\min} (\Gr) \ge 1/\eps$ guarantees that $\Gr$ has no proper subgroups of index at most $1/\eps$. Another sufficient condition for avoiding normal subgroups of index at most $1/\eps$ is, of course, the simplicity of $\Gr$. \bigskip Let us now obtain an analogue of Corollary \ref{c:basis_ab}. 
\begin{corollary} Let $\Gr$ be a finite group, $d \ge 2$ be an integer and $B, \Omega \subseteq \Gr$ be sets such that any element of $\Gr \setminus \Omega$ can be represented as a product of $d$ elements of $B B^{-1}$ or $B^{-1} B$ in at least $g\ge 1$ ways. Then \begin{equation}\label{f:basis_nab} \la_1 (\Cay (B)) \ge \frac{g(|\Gr| - 2|\Omega|)}{8 d^2|B|^{2d}} \,. \end{equation} If $|\Omega| = (1-\eps) |\Gr|$ and $\Gr$ has no normal proper subgroups of index at most $2/\eps$, then \begin{equation}\label{f:basis_nab+} \la_1 (\Cay (B)) \ge \frac{\eps^{\log_{3/2} 3} g |\Gr|}{16d^2 |B|^{2d}} \,. \end{equation} \label{c:basis_nab} \end{corollary} \begin{proof} Without loss of generality we consider the case $B B^{-1}$. Let $\d$ be as in formula \eqref{f:B/2} of Lemma \ref{l:B/2}; in any case one can take $\d = \frac{1}{2}$. Also, let $P = \Bohr(\rho, \d)$ be a Bohr set with $\rho \neq 1$ and let $P^c := \Gr \setminus P$. By Lemma \ref{l:B/2} we know that $|P| \le |\Gr|/2$ and hence $|P^c| \ge |\Gr|/2$. Since $(B * B^{-1})^{(d)} (x) \ge g$ for any $x\in \Gr \setminus \Omega$, we see that \begin{equation}\label{f:sigma_basis_P^c_nab} \sigma^{(d)}_P (B*B^{-1}) = 1 - \sigma^{(d)}_{P^c} (B*B^{-1}) \le 1 - \frac{g(|P^c|-|\Omega|)}{|B|^{2d}} \le 1 - \frac{g |\Gr|/2 - g|\Omega|}{|B|^{2d}} \,. \end{equation} Applying the first part of Theorem \ref{t:Lev_appl_non-abelian} with $\a = \frac{g |\Gr| - 2g|\Omega|}{2|B|^{2d}}$ and $\d$ as before, we derive \[ \la_1 (\Cay (B)) \ge \frac{g(|\Gr| - 2|\Omega|)}{8 d^2|B|^{2d}} \] as required. To obtain \eqref{f:basis_nab+} use the first part of Theorem \ref{t:Lev_appl_non-abelian} with the parameters $\d=\d_{\eps/2} \ge (\eps/2)^{\log_{3/2} 2}$ and $\a = \frac{\eps g |\Gr|}{2|B|^{2d}}$. This completes the proof. $\hfill\Box$ \end{proof} \section{Examples} \label{sec:examples} Our first example of using the results from the previous Sections concerns maximal sets in non--abelian groups avoiding non--affine equations. 
For simplicity we consider just an equation with three variables. \begin{corollary} Let $\Gr$ be a finite group with the identity $e$ and $A\subseteq \Gr$ be a maximal set such that $e\notin A^3$. Then \begin{equation}\label{f:A^3_1} \la_1 (\Cay(A)) \ge \frac{|\Gr|}{2(|A|+|\sqrt{A^{-1}}| + |{e}^{1/3}|)^2} - \frac{1+|\sqrt{A^{-1}}| + |{e}^{1/3}|}{|A|} \,, \end{equation} and \begin{equation}\label{f:A^3_2} \la_1 (\Cay(A \cup \sqrt{A^{-1}})) \ge \frac{|\Gr|}{2(|A|+ |{e}^{1/3}|)^2} - \frac{1+|e^{1/3}|}{|A|} \,. \end{equation} \label{c:A^3} \end{corollary} \begin{proof} Indeed, by maximality of $A$ we see that any $x\notin A$ either belongs to $A^{-1} A^{-1}$ or $x\in \sqrt{A^{-1}}$ or $x\in e^{1/3}$. In other words, the set $(A\cup \{e\})^{2}$ covers the group $\Gr$, except for a set of size at most $|\sqrt{A^{-1}} \cup e^{1/3}|$, and the set $(A\cup \sqrt{A^{-1}} \cup \{e\})^{2}$ covers the group $\Gr$, except for a set of size at most $|e^{1/3}|$. Applying Theorem \ref{t:basis_g}, we obtain \eqref{f:A^3_1}, \eqref{f:A^3_2}. This completes the proof. $\hfill\Box$ \end{proof} \bigskip In the next example we consider the family of so--called {\it $B_k$--sets}, see, e.g., \cite{O'Bryant}. Recall that $A \subseteq \mathbb{N}$ is called a $B_k$--set, $k\ge 2$, if all sums $a_1+\dots+a_k$, $a_1, \dots, a_k \in A$ are distinct. \begin{corollary} Let $A \subseteq \{1,2,\dots, N\}$ be a $B_k$--set and $N$ be a prime. Suppose that $|A| \gg_k N^{1/k}$. Then there is a constant $c = c (k) >0$ such that for all $r\neq 0$ one has \begin{equation} \left| \sum_{x\in A} e^{2\pi i rx/N} \right| \le (1-c) |A| \,. \end{equation} \end{corollary} \begin{proof} Since $A$ is a $B_k$--set and $|A| \gg_k N^{1/k}$, it follows that $$|kA| = \binom{|A|+k-1}{k} \gg_k |A|^k \gg_k N \,,$$ say, $|kA| \ge \eps(k) N$ for a certain constant $\eps(k) > 0$, and all elements of $kA$ belong to $\{1, \dots, kN \}$. Consider the set $A$ modulo $N$. Then modulo $N$ the set $kA \subseteq \Z/N\Z$ has at least $\eps(k) N/k$ elements. 
Applying Corollary \ref{c:basis_ab} with $g=1$, $d=k$ and $|\Omega| = (1- \eps(k)/k)N$, we obtain \[ \la_1 (\Cay(A)) \gg_k \frac{N}{|A|^k} \gg_k 1 \,. \] This completes the proof. $\hfill\Box$ \end{proof} \bigskip Our third example concerns the well--known problem of Erd\H{o}s and Tur\'an (see \cite{ET}) on the quantity $\limsup_n A^{(2)} (n)$ for an arbitrary basis $A\subseteq \mathbb{N}$ of order two. It was conjectured that the $\limsup$ equals infinity for any such $A$. We show that any basis of order two has a certain expansion property. Given a set $A \subseteq \mathbb{N}$, denote by $A_N$ the intersection of $A$ with $\{1,\dots, N\}$. Notice that if $\limsup_N |A_N|/N^{1/2} = \infty$, then, obviously, $\limsup_n A^{(2)} (n) = \infty$. \begin{corollary} Let $A \subseteq \mathbb{N}$ be a set such that $A+A$ equals $\mathbb{N}$ up to a finite number of exceptions. Suppose that $|A_N| \le K N^{1/2}$ for all sufficiently large primes $N$. Then there is a constant $c = c (K) >0$ such that for all sufficiently large $N$ and any $r\neq 0$ one has \begin{equation} \left| \sum_{x=1}^N A_N (x) e^{2\pi i rx/N} \right| \le (1-c) |A_N| \,. \end{equation} \label{c:ET} \end{corollary} \begin{proof} By the assumption there is a number $M$ such that $A^{(2)} (x) \ge 1$ for all $x\ge M$. Take a sufficiently large prime $N\ge 4M$ and consider the set $A_N$ modulo $N$. Obviously, $2A_N$ contains at least three quarters of $\Z/N\Z$. Applying Corollary \ref{c:basis_ab} with $g=1$, $d=2$ and $|\Omega| \le N/4$, we obtain \[ \la_1 (\Cay(A_N)) \gg \frac{N}{|A_N|^2} \ge \frac{1}{K^2} \,. \] This completes the proof. $\hfill\Box$ \end{proof} \bigskip Corollary \ref{c:ET} shows, in particular, that for a basis $A\subseteq \mathbb{N}$ the function $A^{(k)} (x)$ becomes more and more uniform as $k$ tends to infinity (see Corollary \ref{c:UD_basis}). 
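This smoothing effect is easy to observe numerically. The following Python sketch (purely illustrative; the choice of the non-zero squares modulo a prime as a test basis of order two is ours) computes $B^{(k)}(x)$ via the FFT and measures the relative deviation of the counts from the mean $|B|^k/N$, which decays as $k$ grows, in the spirit of \eqref{f:UD_basis}.

```python
import numpy as np

def rep_counts(B, N, k):
    # B^{(k)}(x): the number of ordered representations of x as a sum of
    # k elements of B modulo N, computed as a k-fold cyclic convolution.
    ind = np.zeros(N)
    ind[list(B)] = 1.0
    return np.rint(np.fft.ifft(np.fft.fft(ind) ** k).real)

def relative_spread(B, N, k):
    # max_x |B^{(k)}(x) - |B|^k / N| divided by the mean |B|^k / N.
    counts = rep_counts(B, N, k)
    mean = len(B) ** k / N
    return float(np.abs(counts - mean).max() / mean)

# Test basis (our arbitrary choice): the non-zero squares modulo N = 101,
# which form an additive basis of order two for this prime.
N = 101
B = {(i * i) % N for i in range(1, N)}
```

Comparing `relative_spread(B, N, k)` for increasing $k$ exhibits the geometric decay of the error term, matching the mechanism in the proof of Corollary \ref{c:UD_basis}.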
\section{Appendix} \label{sec:appendix} In this Section we collect further natural properties of Bohr sets and related notions, which have well--known abelian analogues. We do this for the convenience of the reader who is interested in this particular form of Bohr sets; most of these results are more or less contained in the papers \cite{Bourgain_AP3}, \cite{Sanders_A(G)}, \cite{Sanders_doubling_metrics} and others. In Section \ref{sec:non-abelian} we have used the connection of the Bohr sets with the set of unitary representations $\rho$ such that $\| \FF{A} (\rho) \| \ge (1-\eps) |A|$ for a given set $A\subseteq \Gr$. Thus it is natural to give a more general definition. \begin{definition} Let $A\subseteq \Gr$ be a set, $\eps \in [0,1]$ be a real number. The {\it spectrum} $\Spec_\eps (A)$ of $A$ is the set of unitary representations \[ \Spec_\eps (A) = \{ \rho ~:~ \| \FF{A} (\rho) \| \ge \eps |A| \} \,. \] \end{definition} Using the arguments of the proof of Lemma \ref{l:vlF}, we obtain a non--abelian analogue of the well--known result of Yudin \cite{Yudin}. \begin{proposition} Let $A\subseteq \Gr$ be a set, and $\eps_1, \eps_2 \in [0,1]$ be real numbers. Then \[ \Spec_{1-\eps_1} (A) \cdot \Spec_{1-\eps_2} (A) \subseteq \Spec_{1-\eps_1 - \eps_2} (A) \,. \] \end{proposition} \begin{proof} From the arguments of the proof of Lemma \ref{l:vlF} (see estimate \eqref{tmp:01.04_1}), we have that a unitary representation $\rho$ belongs to $\Spec_{1-\eps} (A)$ iff \[ \| \sum_{g\in \Gr} (A * A^{-1}) (g) (I - \rho (g)) \| \le (2 \eps - \eps^2) |A|^2 = (1-(1-\eps)^2) |A|^2 \,. \] But \begin{equation}\label{f:1-xy} I-\rho_1 (g) \rho_2 (g) = (I-\rho_1 (g)) \rho_2 (g) + I - \rho_2 (g) \end{equation} and hence by the triangle inequality for the operator norm, we get \[ \| \sum_{g\in \Gr} (A * A^{-1}) (g) (I - \rho_1 (g) \rho_2 (g)) \| \le (2 \eps_1 - \eps^2_1 + 2 \eps_2 - \eps^2_2) |A|^2 = (1-(1- \eps_1-\eps_2)^2) |A|^2 \] as required.
$\hfill\Box$ \end{proof} \bigskip Our next result shows that $\Bohr(\rho, \delta)$ has small product and hence it is possible to check the condition of smallness of the quantity $\sigma^{(d)}_P (B) \le 1-\alpha$ from Theorem \ref{t:Lev_appl_non-abelian} just for sets with small product. \begin{proposition} Let $\delta \in [0,2/5]$ be a real number and $\rho$ be a unitary representation. Then $$ |\Bohr(\rho, \delta) \cdot \Bohr(\rho, \delta)| \le 2^{\frac{21 d_\rho^2}{2}} |\Bohr(\rho, \delta)| $$ and there are sets $X, Y \subseteq \Gr$, $|X|, |Y| < 2^{25 d_\rho^2}$ such that \[ \Bohr(\rho, \delta) \subseteq \Bohr(\rho, \delta/2) X\,, \quad \quad \Bohr(\rho, \delta) \subseteq Y \Bohr(\rho, \delta/2) \,. \] \label{p:Bohr_doubling} \end{proposition} \begin{proof} Write $k=d_\rho$. In view of \eqref{f:Bohr_sums} it is enough to compare sizes of $|\Bohr(\rho, \delta)|$ and $|\Bohr(\rho, 2\delta)|$. Further, one can check that $2(1-\cos \theta) \le \theta^2$ and $2(1-\cos \theta) \ge \theta^2/2$ for $|\theta|\le \sqrt{6}$. Put \[ \eta := \eta (\delta) = \frac{1}{2\pi} \arccos \left(1 - \frac{\delta^2}{2} \right) \,. \] We have $ \frac{\delta}{2\pi} \le \eta(\delta) \le \frac{\delta}{\pi} $. Let $U(\delta)$ be the set of the unitary matrices $U$ such that $\| U - I\| \le \delta$. In \cite[Lemma 17.4]{Sanders_A(G)} it was proved that the Haar measure $\mu$ of $U(\delta)$ equals \[ \mu (U(\delta)) = \frac{1}{k!} \int_{-\eta}^{\eta} \dots \int_{-\eta}^{\eta}\, \prod_{1\le n<m \le k} |e^{2\pi i \theta_n} - e^{2\pi i \theta_m}|^2 \,d\theta_1 \dots d\theta_k = \] \begin{equation}\label{tmp:03.04_1} = \frac{1}{k!} \int_{-\eta}^{\eta} \dots \int_{-\eta}^{\eta}\, \prod_{1\le n<m \le k} 2(1-\cos(2\pi (\theta_n-\theta_m))) \,d\theta_1 \dots d\theta_k \,. \end{equation} Put \[ F(k) = \frac{(2\pi)^{2\binom{k}{2}}}{k!} \int_{-1}^{1} \dots \int_{-1}^{1}\, \prod_{1\le n<m \le k} (\theta_n-\theta_m)^2 \,d\theta_1 \dots d\theta_k \,. 
\] From \eqref{tmp:03.04_1} and our bounds for $2(1-\cos \theta)$, it follows that \[ 2^{-\binom{k}{2}} \eta^{k^2} F(k) \le \mu (U(\delta)) \le \eta^{k^2} F(k) \] because $4\pi \eta(1) =2\pi/3 \le \sqrt{6}$. Using the assumption $\delta \le 2/5$ and the previous formula, we obtain \begin{equation}\label{tmp:18.04_2} \frac{\mu (U(5\delta/2))}{\mu (U(\delta/2))} \le 2^{\binom{k}{2}} \cdot \frac{\eta(5\delta/2)^{k^2}}{\eta(\delta/2)^{k^2}} \le 2^{\binom{k}{2}} 2^{10k^2} < 2^{\frac{21k^2}{2}} \,. \end{equation} Now let $V = \rho (\Gr)$. It is easy to see that for any unitary matrix $u$ one has $|\Bohr(\rho, \delta)| \ge |V\cap U(\d/2) u|$ because $U(\d/2) u (U(\d/2) u)^{-1} \subseteq U(\delta)$. Further, integrating over the Haar measure, we get in view of \eqref{tmp:18.04_2} \[ |\Bohr(\rho, 2\delta)| = (\mu (U(\d/2)) )^{-1} \int_{} |\rho (\Bohr(\rho, 2\delta)) \cap U(\d/2) u| \,d u \le \] \[ \le |\Bohr(\rho, \delta)| \frac{\mu (U(5\d/2))}{\mu (U(\d/2))} < 2^{\frac{21k^2}{2}} |\Bohr(\rho, \delta)| \,. \] Now by the Ruzsa covering lemma (see, e.g., \cite{TV}) one finds $X$ (and similarly $Y$) such that \[ \Bohr(\rho, \delta) \subseteq \Bohr(\rho, \delta/4) \cdot \Bohr^{-1}(\rho, \delta/4) \cdot X \subseteq \Bohr(\rho, \delta/2) \cdot X \,, \] where as above \[ |X| \le \frac{|\Bohr(\rho, 5\delta/4)|}{|\Bohr(\rho, \delta/4)|} \le \frac{\mu (\Bohr(\rho, 11\delta/8))}{\mu (\Bohr(\rho, \delta/8))} \le 2^{\binom{k}{2}} \frac{\eta(11\delta/8)^{k^2}}{\eta(\delta/8)^{k^2}} < 2^{25 k^2} \,. \] This completes the proof. $\hfill\Box$ \end{proof} \bigskip Having a lower bound for the size of a one--dimensional Bohr set (see \cite[Lemma 17.3]{Sanders_A(G)} or the Proposition above), one can obtain a lower bound for the size of Bohr sets with an arbitrary $\G$. \begin{proposition} Let $\Bohr (\rho_j, \delta_j)$, $j=1,\dots, k$ be Bohr sets such that $\d_1 \le \d_2 \le \dots \le \d_k$. Then \[ |\Bohr (\{\rho_1, \dots, \rho_k \}, \delta_k)| \ge |\Gr|^{-1} \prod_{j=1}^k |\Bohr (\rho_j, \delta_j/2)| \,.
\] \end{proposition} \begin{proof} Let $B = \Bohr (\{\rho_1, \dots, \rho_k \}, \delta_k)$ and $B_j = \Bohr (\rho_j, \delta_j/2)$, $j=1,2,\dots, k$. Clearly, one has $\bigcap_{j=1}^{k} B_j B^{-1}_j \subseteq B$. Hence \begin{equation}\label{tmp:20.04_2} \sigma:= \sum_{x\in \Gr} (B_1 * B^{-1}_1) (x) \dots (B_k * B^{-1}_k) (x) = \sum_{x\in B} (B_1 * B^{-1}_1) (x) \dots (B_k * B^{-1}_k) (x) \le |B| |B_1| \dots |B_k| \,. \end{equation} On the other hand, in view of formulae \eqref{f:Parseval_representations}, \eqref{f:convolution_representations}, we get \begin{equation}\label{tmp:20.04_3} \sigma = \frac{1}{|\Gr|} \sum_{\rho} d_\rho \langle \FF{B}_1 (\rho) \FF{B}^*_1 (\rho) \dots \FF{B}_{k-1} (\rho) \FF{B}^*_{k-1} (\rho), \FF{B}_k (\rho) \FF{B}^*_k (\rho) \rangle \ge \frac{|B_1|^2 \dots |B_k|^2}{|\Gr|} \end{equation} because the operators $\FF{B}_1 (\rho) \FF{B}^*_1 (\rho)$ are Hermitian and non--negatively definite. Comparing \eqref{tmp:20.04_2}, \eqref{tmp:20.04_3}, we obtain the result. $\hfill\Box$ \end{proof} \bigskip A Bohr set $\Bohr (\rho, \delta)$ is said to be {\it regular} if \[ \left| |\Bohr (\rho, (1+\kappa)\delta)| - |\Bohr (\rho, \delta)| \right| \le 100 d^2_\rho |\kappa| \cdot |\Bohr (\rho, \delta)| \,, \] whenever $|\kappa| \le 1/(100 d^2_\rho)$. Even in the abelian case it is easy to see that not every Bohr set is regular. Nevertheless, it was shown in \cite{Bourgain_AP3} that for $\Gr = \Z/N\Z$ one can find a regular Bohr set after decreasing the parameter $\delta$ slightly. We show the same for general groups, repeating the arguments from \cite[Lemma 4.25]{TV} (also, see \cite[Lemma 9.3]{Sanders_A(G)}). \begin{proposition} Let $\delta \in [0,1/2]$ be a real number and $\rho$ be a unitary representation. Then there is $\delta_1 \in [\delta,2\delta]$ such that $\Bohr (\rho, \delta_1)$ is regular. \end{proposition} \begin{proof} Consider the non--decreasing function $f : [0,1] \to \mathbb{R}$ defined as $$ f(a) := d^{-2}_\rho \log \mu(\Bohr (\rho, 2^a \delta)) \,.
$$ By the first part of Proposition \ref{p:Bohr_doubling}, we have $f(1)- f(0) \le \log (2^{21/2})$. Clearly, if we can find $a\in [0.1, 0.9]$ such that $|f(a) - f(a')| \le 25|a-a'|$ for all $|a'-a| \le 0.1$, then the set $\Bohr (\rho, 2^a \delta)$ is regular. If not, then for every such $a$ there is an interval $I_a$, $a\in I_a$, $|I_a| \le 0.1$ with $\int_{I_a} df > 25 |I_a|$. Obviously, these intervals cover $[0.1, 0.9]$ and by the Vitali covering lemma one can find a finite subcollection of disjoint intervals of total measure at least $0.8/5$, say. But then $$ \log (2^{21/2}) \ge \int_{0}^{1} df \ge 25 \cdot 0.8/5 = 4 $$ and this is a contradiction. $\hfill\Box$ \end{proof} \bigskip Finally, let us say something non--trivial about the spectrum of regular Bohr sets. \begin{proposition} Let $B=\Bohr(\rho, \delta)$ be a regular Bohr set, and $B' = \Bohr(\rho, \delta')$, where $\delta' \le \kappa \delta/(100 d_\rho^2)$ and $\kappa \in (0,1)$ is a real number. Then $$ \Spec_{\eps} (B) \subseteq \Spec_{1-\frac{2\kappa}{\eps}} (B') \,. $$ \end{proposition} \begin{proof} Let $\pi \in \Spec_{\eps} (B)$. Also, let $B^{+} = \Bohr(\rho, \delta+\delta')$, $B^{-} = \Bohr(\rho, \delta-\delta')$. We have \[ \eps |B| \le \| \FF{B} (\pi) \| \le |B'|^{-1} \| \FF{B} (\pi)\| \| \FF{B}' (\pi)\| + \| \sum_x (B(x) - |B'|^{-1} (B*B')(x)) \pi (x) \| \le \] \begin{equation}\label{tmp:20.04_1} \le |B'|^{-1} \| \FF{B} (\pi)\| \| \FF{B}' (\pi)\| + \sum_x |B(x) - |B'|^{-1} (B*B')(x)| \,. \end{equation} It is easy to see that the summation in \eqref{tmp:20.04_1} is taken over $B^{+}\setminus B^{-}$. By the regularity of $B$ one can bound this sum by $2\kappa |B|$. Hence \[ \| \FF{B}' (\pi)\| \ge |B'| (1-2\kappa |B|\| \FF{B} (\pi)\|^{-1}) \ge |B'| (1-2\kappa \eps^{-1}) \] or, in other words, $\pi \in \Spec_{1-2\kappa\eps^{-1}} (B')$. This completes the proof. $\hfill\Box$ \end{proof}
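For $\Gr = \Z/N\Z$ every irreducible unitary representation is a one--dimensional character $\chi_r(x) = e^{2\pi i r x/N}$, and the small--product phenomenon of Proposition \ref{p:Bohr_doubling} reduces to the triangle inequality $|\chi(x+y)-1| \le |\chi(x)-1| + |\chi(y)-1|$. The following Python sketch is our own toy sanity check of this abelian specialization; the parameters $N$, $r$, $\delta$ are illustrative choices.

```python
# Our own toy check (abelian case G = Z/NZ, not from the paper) of the
# small-product property: for a one-dimensional character chi_r the triangle
# inequality |chi(x+y) - 1| <= |chi(x) - 1| + |chi(y) - 1| forces
# Bohr(chi_r, delta) + Bohr(chi_r, delta) into Bohr(chi_r, 2*delta).
import cmath

N, r, delta = 101, 1, 0.2   # illustrative parameters, our choice

def bohr(d):
    # Bohr(chi_r, d) = { x in Z/NZ : |chi_r(x) - 1| <= d }
    return {x for x in range(N)
            if abs(cmath.exp(2j * cmath.pi * r * x / N) - 1) <= d}

B, B2 = bohr(delta), bohr(2 * delta)
sumset = {(x + y) % N for x in B for y in B}
assert sumset <= B2             # B + B is contained in Bohr(chi_r, 2*delta)
assert len(B2) <= 3 * len(B)    # and the doubling here is small
print(len(B), len(B2))
```

In this one--dimensional abelian case the doubling constant is absolute, which is the $d_\rho = 1$ instance of the $2^{O(d_\rho^2)}$ bound in the proposition.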
https://arxiv.org/abs/2004.10038
On the spectral gap and the diameter of Cayley graphs
We obtain a new bound connecting the first non--trivial eigenvalue of the Laplace operator of a graph and the diameter of the graph, which is effective for graphs with small diameter or for graphs having the number of maximal paths comparable to the expectation.
https://arxiv.org/abs/2302.03579
Unshuffling a deck of cards
We investigate the mathematics behind unshuffles, a type of card shuffle closely related to classical perfect shuffles. To perform an unshuffle, deal all the cards alternately into two piles and then stack the one pile on top of the other. There are two ways this stacking can be done (left stack on top or right stack on top), giving rise to the terms left shuffle ($L$) and right shuffle ($R$), respectively. We give a solution to a generalization of Elmsley's Problem (a classic mathematical card trick) using unshuffles for decks with $2^k$ cards. We also find the structure of the permutation groups $\langle L, R \rangle$ for a deck of $2n$ cards for all values of $n$. We prove that the group coincides with the perfect shuffle group unless $n\equiv 3 \pmod 4$, in which case the group $\langle L, R \rangle$ is equal to $B_n$, the group of centrally symmetric permutations of $2n$ elements, while the perfect shuffle group is an index 2 subgroup of $B_n$.
\section{Introduction} A well-known card shuffling technique in the world of magicians is the so-called {\em perfect shuffle} (also known as the {\em faro shuffle}). Not only are perfect shuffles used by magicians to do card tricks, but gamblers have used these shuffles to cheat at card games since the 1800's~\cite{expose,braue,Green, Jordan}. As with all card shuffling techniques, perfect shuffles can be considered from a mathematical perspective as permutations of the set of cards. From this point of view, we can better understand the characteristics, limitations, and properties of these shuffles. Here is how to do a perfect shuffle. Take a deck of $2n$ cards and split the deck exactly in half. Next, perfectly interlace the cards from the two stacks together. There are two ways that this interlacing can be done. An {\em out shuffle} interlaces the stacks in such a way that the top and bottom cards from the original deck remain on the top and bottom, respectively. An {\em in shuffle} interlaces the cards so that the top and bottom cards from the original deck become the second and second-to-last card, respectively. See Figure~\ref{perfect} for an example. While perfect shuffles are the basis for many flashy card tricks today, Persi Diaconis (a professional magician and mathematician) and Ron Graham estimated in 2011 that it takes ``a few hundred hours, certainly thousands of repetitions'' for a person to learn to do a perfect shuffle. They also added, ``We estimate that there are fewer than a hundred people in the world who can do eight perfect shuffles in under a minute.''~\cite{diaconis2011} Under the circumstances, it is natural to wonder if an easier shuffling technique could provide the same opportunities for impressive tricks.
With this motivation, Doug Ensley introduced so-called {\em unshuffles} in his article ``Unshuffling for the imperfect magician."~\cite{Ensley} \begin{figure} \begin{center} \includegraphics[width=11cm]{Perfectshuffle.pdf} \end{center} \caption{Perfect shuffles on a deck of 6 cards.}\label{perfect} \end{figure} Unshuffles are not nearly so difficult. Take a deck of $2n$ cards, and deal the cards from the top of the deck alternately into two piles, starting with one pile on the left and then one on the right. Reassemble the deck by stacking one pile on top of the other. There are two ways that this can be done. The {\em right shuffle} (denoted $R$) stacks the right pile on top of the left. The {\em left shuffle} ($L$) stacks the left pile on top of the right. After a right shuffle, notice that the top and bottom cards have now swapped places, while after a left shuffle, the original top and bottom cards are now the center two cards of the deck. See Figure~\ref{unshuffle} for an example with 6 cards. Ensley investigated unshuffles and described several tricks that can be performed via unshuffles by capitalizing on their mathematical properties~\cite{Ensley}. Our purpose here is to continue Ensley's investigation of unshuffles. We have two main results. First, we consider Elmsley's Problem, a classic mathematical card trick where one moves the top card in the deck to any given position. We solve a generalization of Elmsley's Problem using unshuffles on a deck of $2^k$ cards (see Theorem~\ref{elmsley}). For our second main result, we determine the structure of the permutation groups generated by left and right shuffles $\langle L, R \rangle$ on $2n$ cards. We prove that the group coincides with that of perfect shuffles for all $n$ such that $n\not\equiv 3\pmod{4}$. If $n\equiv 3\pmod{4}$, then the group $\langle L, R \rangle$ with $2n$ cards is isomorphic to $B_n$, the group of all centrally symmetric permutations. 
Meanwhile, $\langle I, O \rangle$ is an index 2 subgroup of $B_n$ in this case. The full result is as follows. \begin{theorem}\label{introtheorem} Suppose a deck has $2n$ cards. \begin{enumerate}[(a)] \item If $2n = 12$, then $\langle L, R\rangle = \langle I, O\rangle$, which is isomorphic to the semi-direct product $\mathbb{Z}^6_2 \rtimes S_5$, where $S_5$ is the symmetric group on $5$ elements. \item If $2n=24$, then $\langle L, R\rangle=\langle I, O\rangle$, which is isomorphic to the semi-direct product $\mathbb{Z}^{11}_2 \rtimes M_{12}$, where the group $M_{12}$ is the Mathieu group of degree 12. The group has order $2^{11}\cdot 95040.$ \item If $2n = 2^k$, then $\langle L, R\rangle=\langle I, O\rangle$, which is isomorphic to the semi-direct product $\mathbb{Z}^k_2 \rtimes \mathbb{Z}_k$. \item If $n \equiv 0\pmod{4}$, $n > 12$, and $n$ is not a power of 2, then $\langle L, R\rangle = \langle I, O\rangle$, which is the intersection of the kernels of $sgn$ and $\overline{sgn}$ and has order $n!2^{n-2}$. (We define $sgn$ and $\overline{sgn}$ in Section~\ref{groups}.) \item If $n \equiv 1 \pmod{4}$ and $n>1$, then $\langle L, R\rangle=\langle I, O\rangle$, which is the kernel of $\overline{sgn}$ and has order $n!2^{n-1}$. \item If $n \equiv 2\pmod{4}$ and $n > 6$, then $\langle L, R\rangle = \langle I, O\rangle=B_n$, where $B_n$ is the group of centrally symmetric permutations in $S_{2n}$ and has order $n!2^{n}$. \item If $n \equiv 3\pmod{4}$, then $\langle L, R\rangle = B_n$ and has order $n!2^n$. \end{enumerate} \end{theorem} From this result, we see that limiting oneself to unshuffles will not limit the card arrangements that one can reach as compared to using perfect shuffles. For $n\not\equiv 3 \pmod{4}$, unshuffles on a deck of $2n$ cards yield the exact same set of arrangements as perfect shuffles.
When $n\equiv 3 \pmod{4}$, unshuffles allow a person to reach all the arrangements as perfect shuffles plus another set of arrangements of equal size which perfect shuffles do not reach. We begin in Section~\ref{properties} by investigating the mathematical properties of unshuffles. In Section~\ref{elmsleysection}, we provide a solution to a generalization of Elmsley's Problem on a deck of $2^k$ cards. We prove Theorem~\ref{introtheorem} in Section~\ref{groups} as follows. Theorem~\ref{special} proves parts (a) and (b) of Theorem~\ref{introtheorem}, Theorem~\ref{power} proves (c), and Theorem~\ref{main} proves (d)--(g). \begin{figure} \begin{center} \includegraphics[width=11cm]{unshuffle.pdf} \end{center} \caption{Unshuffles on a deck of 6 cards.}\label{unshuffle} \end{figure} \section{Properties of unshuffles}\label{properties} Throughout our discussion, we consider decks with an even number of cards, denoted by $2n$. We index the cards by their distance from the top of the deck. So the top card has index 0, the second card from the top has index 1, and so on. The bottom card has index $2n-1$. We treat shuffles as functions and hence read their products from right to left. A formula for the location of the $i^{th}$ card after an in or out perfect shuffle is known (see, for example,~\cite{Diaconis, Ensley}). These formulas can be stated in terms of modular arithmetic as follows: $$I(i) = 2i + 1 \!\!\!\pmod{2n+1},\quad\text{ and } \quad O(i) = \begin{cases} 2i \!\!\! \pmod{2n-1} & \text{if } i\neq 2n-1\\ 2n-1 & \text{if } i =2n-1. \end{cases} $$ In the same way, we can find formulas to express the action of the left and right shuffles on a deck of cards. Notice that the formulas have similarities with those given above for in and out shuffles. \begin{lemma}\label{formula} Suppose a deck has $2n$ cards. The index of the $i^{th}$ card after a left or right shuffle is given by the following formulas: $$ L(i) = ni + n-1\!\!\!\! 
\pmod{2n+1},\quad\text{ and } \quad R(i) = \begin{cases} (n-1)i \!\!\!\! \pmod{2n-1} & \text{if } i\neq 0\\ 2n-1 & \text{if } i =0. \end{cases} $$ \end{lemma} \begin{proof} After we deal the deck of cards from the top of the deck alternately into two piles starting with the first card on the left, the cards are in the following array: \begin{align*}~\label{array} \begin{array}{cccc} 2n-2 &&& 2n-1 \\ 2n-4 &&& 2n-3\\ \vdots &&& \vdots\\ 2 &&& 3\\ 0 && & 1 \end{array} \end{align*} Stacking the piles together with a left shuffle puts the cards in the following order: $$2n-2, 2n-4, \ldots, 2, 0, 2n-1, 2n-3, \ldots, 3, 1.$$ The index of the $i^{th}$ card after a left shuffle is then given by the equation: \begin{equation}\label{L} L(i) = \begin{cases} -\tfrac{1}{2}i + n-1 & \text{if $i$ is even }\\ -\tfrac{1}{2}(i+1) + 2n & \text{if $i$ is odd} \end{cases} \end{equation} We want to write $L(i)$ in terms of an expression in $\mathbb{Z}_{2n+1}$. Observe that in $\mathbb{Z}_{2n+1}^*$, we have $(-2)^{-1} \equiv n \pmod{2n+1}$ and also $2n \equiv -1\pmod{2n+1}$. Therefore, no matter whether $i$ is even or odd, we can express $L(i)$ as follows: $L(i)= ni + (n-1) \pmod{2n+1}.$ Now let's consider right shuffles. After stacking the cards with the right stack on top, the cards are in the following order: $$2n-1, 2n-3, \ldots, 3, 1,2n-2, 2n-4, \ldots, 2, 0.$$ So then for even $i$, the index $R(i)$ is given simply by adding $n$ to $L(i)$. For odd $i$, $R(i)$ is given by subtracting $n$ from $L(i)$. Thus: $$ R(i) = \begin{cases} -\tfrac{1}{2}i + 2n-1 & \text{if $i$ is even }\\ -\tfrac{1}{2}(i+1) + n & \text{if $i$ is odd} \end{cases} $$ We want to write $R(i)$ in terms of an expression in $\mathbb{Z}_{2n-1}$. This cannot be done for $i=0$, since $R(0) = 2n-1$, so that is a special case. But for $i>0$, we proceed as follows. Observe that in $\mathbb{Z}_{2n-1}^*$, we have $(-2)^{-1} \equiv n-1 \pmod{2n-1}$ and also $n \equiv 1-n \pmod{2n-1}$.
Making the appropriate substitutions, we find that for all $i>0$, we can express the above formula for $R$ as: $R(i)= (n-1)i \pmod{2n-1}.$ \end{proof} We note that a right shuffle swaps the places of the outermost cards (cards $0$ and $2n-1$) and performs a left shuffle on the inner $2n-2$ cards. See Figure~\ref{unshuffle} for an example. Using the above formulas, we can now determine how many times one must do a left (or right) shuffle so that the deck returns to its original arrangement. In mathematical terms, we are finding the {\em order} of the shuffle. \begin{proposition} Suppose a deck has $2n$ cards. The order of the left shuffle is the order of $-2$ in $\mathbb{Z}_{2n+1}^{*}$. Denote the order of $-2$ in $\mathbb{Z}_{2n-1}^{*}$ by $r$. If $r$ is even, then the right shuffle has order $r$. If $r$ is odd, the right shuffle has order $2r$. \end{proposition} \begin{proof} If we repeat the left shuffle $k$ times, we have: $L^k(i) = n^ki + n^k - 1\pmod{2n+1}$. If $k$ is the order of $L$, then $n^ki + n^k - 1 \equiv i \pmod{2n+1}$ for all $i$. This implies $n^k \equiv 1\pmod{2n+1}$. In other words, the order of $L$ is the order of $n$ in $\mathbb{Z}_{2n+1}^{*}$. Equivalently, the order of $L$ is the order of the inverse of $n$ in $\mathbb{Z}_{2n+1}^{*}$, which is $-2$. Next consider the right shuffle. This takes slightly more work because we must consider the inner cards of the deck separately from the top and bottom cards (cards 0 and $2n-1$). Let's begin with the cards with indices $0<i<2n-1$. The right shuffle formula for these cards is $R(i) = (n-1)i$, which is equivalent to $R(i) = -ni$, because $n-1 \equiv -n\pmod{2n-1}$. Composing $R$ with itself $k$ times, we have $R^k(i) = (-n)^ki$ for $0<i<2n-1$. We have $R^k(i) \equiv (-n)^ki \equiv i\pmod{2n-1}$ for all $0<i<2n-1$ if and only if $k$ is a multiple of the order of $-n$. Equivalently, $k$ is a multiple of the order of $-2$ in $\mathbb{Z}_{2n-1}^{*}$ (which is the inverse of $-n$ in $\mathbb{Z}_{2n-1}^{*}$).
Now we turn to the top and bottom cards of the deck (cards 0 and $2n-1$). Since the top and bottom cards swap places after one right shuffle, $k$ must be an even integer in order to have $R^k(0)=0$ and $R^k(2n-1) = 2n-1$. It follows that the order of $R$ is the order of $-2$ in $\mathbb{Z}_{2n-1}^{*}$ if that value is even. Otherwise, the order of $R$ is twice the order of $-2$ in $\mathbb{Z}_{2n-1}^{*}$. \end{proof} \begin{example} Suppose we have 52 cards. The order of the left shuffle will be the order of $-2$ in $\mathbb{Z}_{53}^{*}$, which is $52$. The order of $-2$ in $\mathbb{Z}_{51}^{*}$ is 8, so the order of the right shuffle is 8. \end{example} \begin{figure} \begin{center} \includegraphics[width=5cm]{Vshuffle.pdf} \end{center} \caption{The shuffle $V$ on a deck of 6 cards.}\label{V} \end{figure} We now introduce a third type of shuffle, denoted by $V$. This shuffle simply reverses the order of the cards. See Figure~\ref{V} for an example. The shuffle $V$ changes the index of the $i^{th}$ card as follows: $$V(i) = 2n-1-i.$$ Observe that $V$ has order 2. This new shuffle $V$ helps us describe the connection between left and right shuffles and perfect shuffles $I$ and $O$. This connection is discussed in Proposition 2 of~\cite{Ensley}, as well. \begin{proposition}\label{connection} Suppose a deck has 2n cards. Left and right shuffles are related to in and out shuffles and $V$ as follows: $$L = V I^{-1} = I^{-1} V $$ $$R = V O^{-1} = O^{-1} V$$ \end{proposition} \begin{proof} We first prove that $L I = I L$. Using the formulas for the left shuffle and the in shuffle, we compute $L(I(i)) = n(2i+1) + n - 1 = 2ni + 2n - 1 \pmod{2n+1}$ and $I(L(i)) = 2(ni + n - 1) + 1 = 2ni + 2n - 1 \pmod{2n+1}$. The expressions for $LI$ and $IL$ are equal, as desired. Next, observe that: $$L(I(i))=2ni + 2n - 1 \equiv 2ni + i + 2n - 1 - i \equiv 2n - 1 - i =V(i)\pmod{2n+1}.$$ Thus, we have proved that $L I = I L = V$. Equivalently, $L = V I^{-1} = I^{-1} V.
$ Using a similar process, we can also prove $R = V O^{-1} = O^{-1} V.$ We leave this to the reader. \end{proof} Observe from the above proposition that a left shuffle is ``close'' to being the inverse of an in shuffle (up to reversing the order of the deck), and the right shuffle is similarly close to being the inverse of an out shuffle. In Section~\ref{groups}, we will be interested in writing the shuffle $V$ either as a product of perfect shuffles or as a product of unshuffles. The following lemma describes a special case where this can be done easily, which will be useful to us in Theorem~\ref{special}. \begin{lemma}\label{relation} If $V = I^y$ where $y$ is even, then $V = L^y$. \end{lemma} \begin{proof} By Proposition~\ref{connection}, $L = I^{-1}V=VI^{-1}$; in particular, $I$ and $V$ commute. Using this and the facts that $V = I^y$, $y$ is even, and $V^2$ is the identity, we compute: $L^y = (I^{-1}V)^y = (I^y)^{-1}V^y = V^{-1}V^y = V^{y-1} = V,$ since $y$ is even. \end{proof} A valuable property of perfect shuffles goes as follows. Take two cards that are equidistant from the center of the deck. After an in or out shuffle, the two cards remain equidistant from the center of the deck. We make this precise in the following definition. \begin{definition} A permutation $\sigma$ of $\{0,1,\ldots, 2n-1\}$ {\bf preserves central symmetry} if for every $i, j\in \{0,1,\ldots, 2n-1\}$ such that $i + j = 2n-1$, we have $\sigma(i) + \sigma(j) = 2n-1$. \end{definition} Among magicians, this principle of preserving central symmetry is termed {\em stay-stack}. The fact that perfect shuffles preserve central symmetry was first pointed out in 1957 by Russell Duck, a Pennsylvania policeman (see Section 3 of \cite{Diaconis}), and the observation can be exploited to create all sorts of card tricks. It is straightforward to show that unshuffles have this property, as well. \begin{proposition} Left and right shuffles preserve central symmetry. \end{proposition} \begin{proof} Perfect shuffles $I$ and $O$ preserve central symmetry.
Hence so do their inverses $I^{-1}$ and $O^{-1}$. The shuffle $V$ has the effect of swapping the cards in each centrally symmetric pair, and so $V$ also preserves central symmetry. Left and right shuffles are compositions of $V, I^{-1}$, and $O^{-1}$ (Proposition~\ref{connection}), so $L$ and $R$ must also preserve central symmetry. \end{proof} \section{Elmsley's Problem with unshuffles}\label{elmsleysection} In the 1950's, Scottish computer programmer and magician Alex Elmsley posed a problem which grew popular and has since been given his name: \begin{quote}{\bf Elmsley's Problem.} Is it possible to move the top card in the deck to any given position via perfect shuffles? \end{quote} Elmsley himself found a clever solution, which he published in the card magazine {\em Ibidem} (No. 11, September 1957, see also~\cite{diaconis2011, Diaconis, Morris}). The solution is as follows. \begin{theorem}[Solution to Elmsley's Problem] Begin with a deck of $2n$ cards. To move the top card to position $i$, express $i$ in binary, and then, reading the binary expression from left to right, let 1 represent an in shuffle and let 0 represent an out shuffle. Performing this indicated sequence of in and out shuffles in order from left to right will move card $0$ to position $i$. \end{theorem} This slick solution to Elmsley's Problem encouraged quite a bit of related work. Paul Swinford discovered that in the special case where the deck has $2^k$ cards, the sequence of shuffles described in the above theorem swaps the places of the top card and the $i^{th}$ card. More generally, he observed that a sequence of shuffles that brings card $i$ to position $j$ will also move card $j$ to position $i$~\cite{swinford1,swinford2}. Later, Ramnath and Scully described a way to move card $i$ to position $j$ using perfect shuffles for any deck of $2n$ cards~\cite{Ramnath}.
Diaconis and Graham found a solution to the inverse of Elmsley's Problem (that is, bringing card $i$ to the top of the deck via perfect shuffles)~\cite{DiaconisGraham}. Elmsley's Problem has also been solved for so-called generalized perfect shuffles~\cite{Medvedoff}. Performing this trick with perfect shuffles is an impressive feat.\footnote{See the trick performed and discussed here: \url{https://youtu.be/Y2lXsxmBx7E}} Having the option to use unshuffles will make it easier. We give a solution to Elmsley's Problem for unshuffles in the special case where the deck has $2^k$ cards. We first prove a lemma that determines the location of the $i^{th}$ card after a left or right shuffle in terms of the binary representation of $i$. \begin{lemma}\label{binary} Suppose we have a deck of $2^k$ cards for some $k \geq 1$. Write $i$, where $0\leq i\leq 2^k-1$, in binary notation: $i = x_{k-1}~x_{k-2}\cdots x_1~x_0$, and let $\overline{x}_j = 1-x_j$. Using the binary expansion, left and right shuffles move the $i^{th}$ card to a new position as follows: \vspace{-1cm} \begin{align*} &L(x_{k-1}~x_{k-2}\cdots x_1~x_0) = x_{0}~\overline{x}_{k-1}~\overline{x}_{k-2} \cdots \overline{x}_{2}~\overline{x}_{1}, \\ &R(x_{k-1}~x_{k-2}\cdots x_1~x_0)= \overline{x}_{0}~\overline{x}_{k-1}~\overline{x}_{k-2} \cdots \overline{x}_{2}~\overline{x}_{1}. \end{align*} \end{lemma} \begin{proof} Using our left shuffle formula (Lemma~\ref{formula}) and the fact that $2n = 2^k$, we have $L(i) = 2^{k-1}i + (2^{k-1}-1) \pmod {2^k+1}.$ Plug in the value $i=x_{k-1}\cdot2^{k-1} + x_{k-2}\cdot2^{k-2} + \cdots + x_{0}\cdot2^{0}$: $$L(i) = x_{k-1}\cdot2^{2k-2} + x_{k-2}\cdot2^{2k-3} + \cdots + x_{0}\cdot2^{k-1} + 2^{k-1} - 1 \pmod {2^k+1}.$$ The above expression can be simplified in two ways. First note that we can substitute $2^{k-1}-1 = 1\cdot2^{k-2}+1\cdot2^{k-3}+\cdots+1\cdot2^{0}$. Secondly, since $2^k \equiv -1\pmod{2^k+1}$, we can replace $x_{j}\cdot2^{j+(k-1)}$ with $-x_{j}\cdot2^{j-1}$.
\begin{align*} L(i)&\equiv -x_{k-1}\cdot2^{k-2}-x_{k-2}\cdot2^{k-3}-\cdots-x_{1}\cdot2^{0}+x_{0}\cdot2^{k-1}+2^{k-1}-1\\ &\equiv x_{0}\cdot2^{k-1}+ (1-x_{k-1})\cdot2^{k-2}+(1-x_{k-2})\cdot2^{k-3}+\cdots +(1-x_{1})2^{0}\\ &\equiv x_{0}\cdot2^{k-1}+ \overline{x}_{k-1}\cdot2^{k-2}+\overline{x}_{k-2}\cdot2^{k-3}+\cdots +\overline{x}_{1}2^{0} \pmod{2^k+1} \end{align*} where $\overline{x}_j = 1-x_j$. We have reached our desired conclusion: $$L(x_{k-1}~x_{k-2}\cdots x_1~x_0) = x_{0}~\overline{x}_{k-1}~\overline{x}_{k-2} \cdots \overline{x}_{2}~\overline{x}_{1}.$$ We use the same method for right shuffles. The right shuffle formula for $i>0$ is: $R(i) = (2^{k-1}-1)i \pmod {2^k-1}$. Since $2^{k-1}-1 \equiv -2^{k-1} \pmod{2^k-1},$ we have $$R(i) \equiv -2^{k-1}i \pmod {2^k-1}$$ for $i>0.$ Plugging in $i =x_{k-1}\cdot2^{k-1} + x_{k-2}\cdot2^{k-2} + \cdots + x_{0}\cdot2^{0}$ and using the fact that $2^k \equiv 1\pmod{2^k-1}$, we have: \begin{align*} R(i) &\equiv (-2^{k-1})(x_{k-1}\cdot2^{k-1} + x_{k-2}\cdot2^{k-2} + \cdots +x_1\cdot2^1 + x_{0}\cdot2^{0})\\ &\equiv -x_{k-1}\cdot2^{2k-2} -x_{k-2}\cdot2^{2k-3} - \cdots - x_1\cdot 2^{k}-x_{0}\cdot2^{k-1} \\ &\equiv -x_{k-1}\cdot2^{k-2} -x_{k-2}\cdot2^{k-3} -\cdots -x_{1}\cdot2^{0}-x_{0}\cdot2^{k-1} \pmod{2^k-1} \end{align*} Now note that $1\cdot2^{k-1}+1\cdot2^{k-2}+\cdots+1\cdot2^{0}=2^{k}-1 \equiv 0\pmod{2^k-1}$. So we can add this to the above expression without changing its value: \begin{align*} R(i) &\equiv (1-x_{0})\cdot2^{k-1} +(1-x_{k-1})\cdot2^{k-2}+\cdots +(1-x_{1})\cdot2^{0} \\ &\equiv \overline{x}_{0}\cdot2^{k-1} +\overline{x}_{k-1}\cdot2^{k-2}+\cdots +\overline{x}_{1}\cdot2^{0} \pmod{2^k-1}, \end{align*} where $\overline{x}_j = 1-x_j$. This proves our result for $i>0$. 
For the special case $i=0$, observe that we have the following, as desired: $$R(0) = 2^k-1 = 1 \cdot 2^{k-1} + 1 \cdot 2^{k-2} + \cdots + 1 \cdot 2^{0}.$$ \end{proof} \begin{figure} \begin{center} \includegraphics[width=11cm]{Elmsley8.pdf} \end{center} \caption{Unshuffles that move the top card to position 5.}\label{Elmsely8} \end{figure} We now give and prove our solution to Elmsley's Problem using unshuffles in the case that there are $2^k$ cards. In fact, we solve a more general problem. The process that we describe {\em swaps the places of card $i$ and card $j$ in $k$ shuffles for any $i$ and $j$}. Thus, we can specialize to the case $i=0$ to recover the solution to Elmsley's Problem. \begin{theorem}\label{elmsley} Suppose there are $2^k$ cards for some $k\geq 1$. To swap the cards in positions $i$ and $j$ using unshuffles, write $i$ and $j$ in binary, and compute $i\oplus j = x_{k-1}~x_{k-2}\cdots x_1~x_0$ (where $\oplus$ denotes the xor operation). Shuffle the deck with $k$ shuffles denoted in order by $S_0, S_1, \ldots, S_{k-1}$, where $S_r$ is defined as follows: \begin{enumerate} \item If $k$ is odd, $$ S_r = \begin{cases} L & \text{if } x_r=0\\ R & \text{if } x_r =1 \end{cases} $$ \item If $k$ is even, $$ S_r = \begin{cases} R & \text{if } x_r=0\\ L & \text{if } x_r =1 \end{cases} $$ \end{enumerate} This sequence of shuffles will swap the cards in positions $i$ and $j$. \end{theorem} \begin{proof} Lemma~\ref{binary} tells us that both a left shuffle and right shuffle have the effect of sliding the bits of the card index to the right by 1 position and moving the last bit to the front. Since there are $k$ bits total, it follows that performing $k$ shuffles (left or right or a mixture of both) will return all bits back to their original position (with some of the bits flipped, perhaps). Moreover, after these $k$ shuffles are done, each bit will have flipped a total of either $k$ or $k-1$ times. 
The exact number of times a given bit $x_r$ is flipped is determined by the type of the $r^{th}$ shuffle. The bit $x_r$ will have flipped $k$ times if $S_r$ is a right shuffle, and $x_r$ will have flipped $k-1$ times if $S_r$ is a left shuffle. Using this, we can design a sequence of $k$ shuffles $S_0, S_1, \ldots, S_{k-1}$ that turns the binary expansion for $j$ into that of $i$ and vice versa. First, identify the bits where $j$ and $i$ differ by computing $i\oplus j= x_{k-1}~x_{k-2}\cdots x_1~x_0$. If $x_r = 0$, then we need no change to the $r^{th}$ bit of $j$, and we can plan our shuffles according to the parity of $k$ so that the $r^{th}$ bit is unchanged when the shuffles are finished. Namely, if $k$ is odd, then set $S_r = L$ because that will cause the $r^{th}$ bit of $j$ to flip $k-1$ times and hence remain unchanged. If $k$ is even, set $S_r = R$, and the $r^{th}$ bit of $j$ will flip $k$ times and hence remain unchanged. Similarly, if $x_r = 1$, then the $r^{th}$ bits of $i$ and $j$ differ. We plan our shuffles according to the parity of $k$ so that the $r^{th}$ bit is flipped. If $k$ is odd, then set $S_r = R$, and the $r^{th}$ bit of $j$ will flip. If $k$ is even, set $S_r = L$. This defines a sequence of $k$ shuffles $S_0, S_1, \ldots, S_{k-1}$. Performing these shuffles in order will change all the bits of $j$ to coincide with the bits of $i$ and vice versa. Therefore, the cards in positions $i$ and $j$ will trade positions after the sequence of shuffles. \end{proof} \begin{example} We illustrate this solution to Elmsley's Problem with two examples, one with $k$ odd and the other with $k$ even. Suppose there are 8 cards and we want the top card to trade positions with the $5^{th}$ card. We compute $000_2 \oplus 101_2 = 101_2$. The shuffles should be done in the order of $R, L, R$. See Figure~\ref{Elmsely8}. Now suppose there are 16 cards and we want to swap the cards in the $6^{th}$ position and the $11^{th}$ position.
In binary, we write $6 = 0110_2$ and $11=1011_2$. We compute $0110_2 \oplus 1011_2 = 1101_2$. Theorem~\ref{elmsley} tells us the shuffles $L, R, L, L$ in that order will swap these two cards. See Figure~\ref{Elmsely16}. \end{example} \begin{figure} \begin{center} \includegraphics[width=11cm]{Elmsley16.pdf} \end{center} \caption{Unshuffles that swap the positions of card 6 and card 11.}\label{Elmsely16} \end{figure} \section{The permutation groups of unshuffles}\label{groups} We now determine the structure of the permutation group $G = \langle L, R\rangle$. Through this, we will know exactly what card arrangements can be reached using unshuffles. In some (but not all) cases, the permutation group $\langle L, R\rangle$ coincides with $\langle I, O\rangle$. The fact that the groups often coincide is not so surprising, given the close relationship between perfect shuffles and unshuffles (Proposition~\ref{connection}). However, the details require careful work. We prove Theorem~\ref{introtheorem} in a sequence of three theorems (Theorems~\ref{special},~\ref{power}, and~\ref{main}). Let's begin by reviewing the permutation groups of perfect shuffles. Let $B_n$ be the group of all centrally symmetric permutations of $2n$ elements. This group can also be identified as the group of all signed $n\times n$ permutation matrices. Because $L, R, I$, and $O$ are all themselves centrally symmetric permutations, it follows that both $\langle L, R\rangle$ and $\langle I, O\rangle$ are subgroups of $B_n$. We define several homomorphisms on $B_n$. First, let $$sgn: B_n\longrightarrow \{\pm 1\}$$ be the homomorphism such that $sgn(g)$ is the sign (or parity) of the permutation $g$. Next, observe that while each permutation $g \in B_n$ is a permutation on $2n$ elements, $g$ induces a permutation on the $n$ centrally symmetric pairs. We define a homomorphism $$\phi:B_n \longrightarrow S_n$$ by assigning $\phi(g)$ to be the permutation that $g$ induces on the $n$ centrally symmetric pairs.
Next we define $$\overline{sgn}: B_n\longrightarrow \{\pm 1\}$$ by assigning $\overline{sgn}(g)$ to be the sign of the permutation $\phi(g)\in S_n$. We easily pick up a final homomorphism as follows. Let $sgn\overline{sgn}: B_n\longrightarrow \{\pm 1\}$ be the homomorphism that sends $g$ to the product $sgn(g)\overline{sgn}(g)$. Now we are ready to state the group structure of $\langle I, O\rangle$. The first three items are special cases (when the deck has size 12, 24, or a power of 2, respectively). Beyond that, the group structure is determined by the congruence class of $n$ modulo 4. For a proof and full details, see~\cite{Diaconis}. \begin{theorem}\cite{Diaconis}\label{perfectgroups} The structure of the permutation group $\langle I, O\rangle$ on $2n$ cards is as follows: \begin{enumerate} \item If $2n = 12$, then $\langle I, O\rangle$ is isomorphic to the semi-direct product $\mathbb{Z}^6_2 \rtimes S_5$, where $S_5$ is the symmetric group on $5$ elements. \item If $2n=24$, then $\langle I, O\rangle$ is isomorphic to the semi-direct product $\mathbb{Z}^{11}_2 \rtimes M_{12}$, where $M_{12}$ is the Mathieu group of degree 12. \item If $2n = 2^k$, then $\langle I, O\rangle$ is isomorphic to the semi-direct product $\mathbb{Z}^k_2 \rtimes \mathbb{Z}_k$. \item If $n \equiv 0 \pmod{4}$, $n> 12$, and $n$ is not a power of 2, then $\langle I, O\rangle$ is the intersection of the kernels of $sgn$ and $\overline{sgn}$ and has order $n!2^{n-2}$. \item If $n \equiv 1 \pmod{4}$ and $n>1$, then $\langle I, O\rangle$ is the kernel of $\overline{sgn}$ and has order $n!2^{n-1}$. \item If $n \equiv 2 \pmod{4}$ and $n>6$, then $\langle I, O\rangle = B_n$ and has order $n!2^{n}$. \item If $n \equiv 3 \pmod{4}$, then $\langle I, O\rangle$ is equal to the kernel of $sgn\overline{sgn}$ and has order $n!2^{n-1}$. \end{enumerate} \end{theorem} Now we turn to consider the group $G = \langle L,R \rangle$.
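Before doing so, the group orders above can be checked by brute force for small decks. The following Python sketch (our own illustration, not part of the proofs) builds each shuffle from its modular formula (Lemma~\ref{formula} and the in/out shuffle formulas), generates the group it spans by closure under composition, and confirms, for instance, that on 6 cards the two groups differ (24 versus 48 elements) while on 8 cards they coincide:

```python
def shuffle_perms(n2):
    """The four shuffles on a deck of n2 = 2n cards, each as a tuple p
    with p[i] = new position of the card at position i."""
    n = n2 // 2
    I = tuple((2 * i + 1) % (n2 + 1) for i in range(n2))
    O = tuple(i if i == n2 - 1 else (2 * i) % (n2 - 1) for i in range(n2))
    L = tuple((n * i + n - 1) % (n2 + 1) for i in range(n2))
    R = tuple(n2 - 1 if i == 0 else ((n - 1) * i) % (n2 - 1) for i in range(n2))
    return I, O, L, R

def compose(q, p):
    # The product QP applies p first and then q (products read right to left).
    return tuple(q[p[i]] for i in range(len(p)))

def generated_group(gens):
    """Closure of the generators under composition; since the generators
    are permutations of a finite set, this is the group they generate."""
    group, frontier = set(gens), list(gens)
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(g, s)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

I, O, L, R = shuffle_perms(6)   # n = 3 is congruent to 3 mod 4: the groups differ
assert len(generated_group([I, O])) == 24
assert len(generated_group([L, R])) == 48

I, O, L, R = shuffle_perms(8)   # 2n = 2^3: the groups coincide
assert len(generated_group([I, O])) == 24 == len(generated_group([L, R]))
```

Since the groups grow like $n!2^{n-2}$ or faster, this brute-force check is feasible only for very small decks; the theorems below handle the general case.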
We begin by proving $G = \langle L,R \rangle$ coincides with the perfect shuffle group in the cases where $2n$ is 12 or 24 (Theorem~\ref{special}) and in the case where $2n$ is a power of 2 (Theorem~\ref{power}). \newpage \begin{theorem}\label{special} If $2n = 12$ or $2n = 24$, then $\langle L, R\rangle = \langle I, O\rangle$. \end{theorem} \begin{proof} We first observe that the formula for $I^r$ is given by: $$I^r(i) \equiv 2^ri + 2^{r-1} + 2^{r-2} + \cdots + 1 \equiv 2^ri + 2^r - 1 \pmod{2n+1}.$$ Now suppose that we have $2n=12$ cards. Observe that $$I^6(i) = 2^6i + 2^6-1 \equiv 11-i =V(i)\pmod{13}.$$ Therefore $V = I^6$. Lemma~\ref{relation} now implies that $V = L^6$. Because we can express $V$ in terms of $I$, Proposition~\ref{connection} implies that $\langle L, R\rangle \subseteq \langle I, O\rangle$. Moreover, because we can express $V$ in terms of $L$, it implies $\langle I,O\rangle \subseteq \langle L, R\rangle$. Therefore $\langle L, R\rangle = \langle I, O\rangle$. Similarly, for $2n=24$ cards we have $$I^{10}(i) = 2^{10}i + 2^{10}-1 \equiv 23-i = V(i)\pmod{25}.$$ Because $V = I^{10}$, Lemma~\ref{relation} implies $V = L^{10}$. The same argument for $12$ cards applies to $24$ cards, so $\langle L, R\rangle = \langle I, O\rangle$ for the $2n = 24$ case, as well. \end{proof} \begin{theorem}\label{power} Suppose $2n=2^k$ for some positive integer $k$. Then $\langle L,R \rangle = \langle I, O\rangle$. \end{theorem} \begin{proof} Using the formula for $I^r(i)$ mentioned in the proof of Theorem~\ref{special} and setting $r=k$, we observe: $$I^k(i) \equiv 2^ki + 2^k - 1 \equiv - i + 2^k - 1 =V(i) \pmod{2^k+1}.$$ Therefore $V = I^k.$ This implies that $L$ and $R$ can be written as combinations of $I$ and $O$ by Proposition~\ref{connection}. So then $\langle L, R\rangle \subseteq \langle I, O\rangle.$ Now we prove $\langle I, O\rangle \subseteq \langle L, R\rangle$, splitting into two cases according to the parity of $k$. Suppose that $k$ is even.
Begin with the equation $L = VI^{-1}$ from Proposition~\ref{connection}. Composing both sides with themselves $k$ times, we have $$L^k = (VI^{-1})(VI^{-1})\cdots(VI^{-1}) = V^k I^{-k} = I^{-k},$$ because $V$ and $I^{-1}$ commute and $k$ is even. Plugging in $V=I^k$ and remembering that $V=V^{-1}$, we find that $L^k = V.$ This implies that $I$ and $O$ can be written in terms of $L$ and $R$ for $k$ even. Therefore $\langle I, O\rangle \subseteq \langle L, R\rangle$, which proves $\langle I, O\rangle = \langle L, R\rangle$ for $k$ even, as desired. Suppose $k$ is odd. Begin with $R = V O^{-1}$ (from Proposition~\ref{connection}) and compose both sides with themselves $k$ times. Remembering that $O^{-1}$ and $V$ commute and using the fact that $O^{-k}$ is the identity (because $O^k(i) = 2^ki \equiv i \pmod{2^k-1}$), we find $$R^k = (VO^{-1})(VO^{-1})\cdots(VO^{-1}) = V^k O^{-k} = V^k = V.$$ Therefore when $k$ is odd, $V = R^k.$ As before, we conclude $\langle I, O\rangle \subseteq \langle L, R\rangle$, which proves $\langle I, O\rangle = \langle L, R\rangle$ for $k$ odd, as desired. \end{proof} Now that we have settled the above special cases, we have the more difficult task of considering what happens in general. We must establish some preliminary definitions and two lemmas first. Let $G^*$ denote the subgroup of $G=\langle L,R\rangle$ consisting of all shuffles in $G$ that leave the set $\{0, 1, \ldots, n-1\}$ invariant. Since all shuffles in $G^*$ also must preserve central symmetry, it follows that elements in $G^*$ can be expressed as $\sigma\sigma'$ where $\sigma$ is a permutation of $\{0, 1, \ldots, n-1\}$ and $\sigma'$ represents the corresponding permutation of the elements $\{0', 1', \ldots, (n-1)'\}$ where $r' = 2n-1-r$. So an element $\sigma\sigma'$ in $G^*$ is completely determined by $\sigma$. Using this notation throughout, we now prove two lemmas. \begin{lemma}\label{G*} Let $n> 1$ be such that $n$ is not a power of 2 and $n\neq 6, 12$.
The group $G^*$ contains all permutations of the form $\sigma\sigma'$ where $\sigma$ is any even permutation of $\{0, 1, \ldots, n-1\}$ and $\sigma'$ is as defined above. \end{lemma} \begin{proof} In~\cite{Diaconis}, the authors introduce several permutations, built from perfect shuffles, that will be useful in our context as well. Let $k$ and $r$ be positive integers and define: \begin{align*} c &= O(I^{-1}OIO^{-1})^2O^{-1}\\ w &= O^{-1}Ic^{-1}O^{-1}Ic^2I^{-1}Oc^{-1}I^{-1}O\\ b &= (I^kO^{-k}I^{-1}O)^{-2}\\ c' &= ObO^{-1}\\ h(r) &= O^{-r}I^r \end{align*} All of these permutations can be performed using left and right shuffles, as well. To see this, substitute $I = VL^{-1}$ and $O = VR^{-1}$ into the above expressions. Because $V$ commutes with both $L$ and $R$ and because $V^2$ is the identity, we are left with a product of left and right shuffles. In Lemma 9 of~\cite{Diaconis}, the authors prove that when $n$ is odd, $c$ and $w$ generate all permutations $\sigma\sigma'$ where $\sigma$ is an even permutation of $\{0, 1, \ldots, n-1\}$. Therefore our result is proved for $n$ odd. Now suppose $n$ is even, $n$ is not a power of 2, and $n\neq 6,12$. Write $2n = 2^kv$ where $v>1$ and $v$ is odd. In Lemmas 15, 16, 17, and 18 of~\cite{Diaconis}, the authors prove that under the stated conditions on $n$, the shuffles $b, c', h(1), h(2), \ldots, h(k-1)$ generate all permutations $\sigma\sigma'$ where $\sigma$ is an even permutation of $\{0, 1, \ldots, n-1\}$. Therefore our result for left and right shuffles follows, as well. \end{proof} We must prove one more preliminary lemma, but first we establish some notation and make observations. Recall the homomorphism $\phi:B_n\longrightarrow S_n$ is defined by assigning $\phi(g)$ to be the permutation that $g$ induces on the $n$ centrally symmetric pairs.
In~\cite{Diaconis}, the authors found that the parities of the permutations $I$, $O$, $\phi(I),$ and $\phi(O)$ depended on the congruence class of $n$ modulo $4$ (see Table 3 in~\cite{Diaconis}). Also remember that $V$ switches the card in position $i$ with the card in position $i'$ for $0 \leq i \leq n-1$, so $V$ is the product of $n$ transpositions: $V = (0, 0')(1, 1')(2,2')\cdots(n-1,(n-1)').$ Therefore the parity of $V$ is $(-1)^n$. Also $\phi(V) = (1)$, the identity permutation. Hence $\overline{sgn}(V)=1$. These parities, along with the relation between perfect shuffles and unshuffles found in Proposition~\ref{connection}, help us construct Table~\ref{signs} for the parities of $L, R, \phi(L),$ and $\phi(R)$, which we will use frequently. \begin{table} \begin{center} \setlength{\arrayrulewidth}{0.5mm} \setlength{\tabcolsep}{10pt} \renewcommand{\arraystretch}{1.5} \begin{tabular}{ |p{2.7cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}| } \hline & $L$& $R$& $\phi(L)$& $\phi(R)$ \\ \hline $n \equiv 0\pmod{4}$ & $~~1$ & $~~1$ & $~~1$ & $~~1$ \\ $n \equiv 1\pmod{4}$ & $~~1$ & $-1$ & $~~1$ & $~~1$ \\ $n \equiv 2\pmod{4}$ & $-1$ & $-1$ & $-1$ & $~~1$ \\ $n \equiv 3\pmod{4}$ & $-1$ & $~~1$ & $~~1$ & $-1$ \\ \hline \end{tabular} \end{center} \caption{Parities of $L$, $R$, $\phi(L)$, $\phi(R)$ for congruence classes of $n$ modulo $4$.}\label{signs} \end{table} The kernel of the homomorphism $\phi:B_n \longrightarrow S_n$ is generated by all transpositions $(x,x')$ where $x \in \{0, 1, \ldots, n-1\}$ and $x' = 2n-1 - x$. These transpositions commute with each other. Hence $ker(\phi)$ is a group of order $2^n$. We denote the restriction of the homomorphism $\phi$ to $G$ by $\phi|_G$, and we denote the kernel of $\phi|_G$ by $K$. The following lemma finds the order of $K$. \begin{lemma}\label{K} Let $n> 1$ be such that $n$ is not a power of 2 and $n\neq 6, 12$, and let $K$ denote the kernel of the homomorphism $\phi|_G$. If $n\equiv 0 \pmod{4}$, then $|K| = 2^{n-1}$. 
Otherwise, $|K| = 2^{n}$. \end{lemma} \begin{proof} We first construct a permutation $f\in B_n$ as follows: $$ f(i) = \begin{cases} L(i) & \text{if $i$ is even and $0\leq i \leq n-1$}\\ & \text{ \quad or if $i$ is odd and $n\leq i \leq 2n-1$ }\\ L(i)' & \text{if $i$ is odd and $0\leq i \leq n-1$}\\ &\text{ \quad or if $i$ is even and $n\leq i \leq 2n-1$ } \end{cases} $$ Our motivation for creating this permutation is that $f$ induces the same permutation as $L$ on the centrally symmetric pairs, so $\phi(f) = \phi(L)$, but unlike $L$, the permutation $f$ leaves the set $\{0, 1, \ldots, n-1\}$ invariant (hence, of course, $f$ also leaves the set $\{n, n+1, \ldots, 2n-1\}$ invariant). To verify this, consult Equation~\ref{L} for $L(i)$. We will make use of $f$ throughout. Now suppose that $n \equiv 0, 1,$ or $ 3 \pmod{4}$. In this case, Table~\ref{signs} tells us $\phi(L)$ is an {\em even} permutation. Denote this permutation by $\sigma\in S_n$. It follows that $f = \sigma\sigma'$. By Lemma~\ref{G*}, since $\sigma$ is an even permutation, we know that $f$ is an element of $G^*$. Therefore $f$ can be realized using left and right shuffles. Now consider the permutation $f^{-1}L$. Written in cycle notation, we have: $f^{-1}L = (11')(33')\cdots(kk')$, where $k=n-1$ if $n$ is even and $k=n-2$ if $n$ is odd. In the special case that $n=3,4,$ or 5, we have all the ingredients we need to proceed, but for $n>5$, we need to define one extra ingredient: $g = (0,1)(3,5)(0',1')(3',5')$. Notice that by Lemma~\ref{G*}, we know $g \in G^*$, so $g$ can be realized with left and right shuffles, as well. With these ingredients ready, we construct the permutation $h$: $$ h = \begin{cases} f^{-1}L=(1 1') &\textrm{ if } n=3\\ f^{-1}L=(1 1')(3 3') &\textrm{ if } n=4, 5\\ (f^{-1}L)g(f^{-1}L)g =(0 0')(1 1') &\textrm{ if }n> 5 \end{cases} $$ Observe that because $f, L, g \in G$, this permutation $h$ is also an element of $G$.
Moreover, $h\in K$ because $h$ is a product of transpositions of the form $(xx')$. Now suppose $n=3$. Conjugating $h=(1 1')$ by elements of $G^*$, we generate $(0 0'), (1 1'),$ and $(2 2')$, all of which are in $K$. Together these three elements will generate a set of 8 elements. Therefore $|K|\geq 2^3.$ But on the other hand, we know $|K|\leq 2^3$ since $K$ is a subgroup of the kernel of $\phi$, so then $|K|=2^3$. This concludes the argument for $n=3.$ Next suppose $n\geq 4$ (and we continue to assume $n \equiv 0, 1,$ or $ 3 \pmod{4}$). Conjugating $h$ by elements of $G^*$, it follows that all elements of the form $(yy')(zz')$ where $y,z \in \{0,1,\ldots, n-1\}$ are in $K$. Multiply these elements of the form $(yy')(zz')$ together in all possible ways, and we produce any product of an {\em even} number of these transpositions. The total number of such products (all of which belong to $K$) is: $$\binom{n}{0}+\binom{n}{2}+ \binom{n}{4}+\cdots = 2^{n-1}. $$ Hence $|K|\geq 2^{n-1}$ if $n \equiv 0, 1, 3 \pmod{4}$ and $n\geq 4$. If $n \equiv 2 \pmod{4}$, we can use a similar argument to the one above to prove that $|K|\geq 2^{n-1}$. In this case, $\phi(R)$ is an even permutation instead of $\phi(L)$, and so the argument proceeds by switching $L$ to $R$ and making related adjustments. We leave these details to the reader and move forward assuming that $|K|\geq 2^{n-1}$ for all $n$. If $n \equiv 0 \pmod{4}$, then $L$ and $R$ are both even permutations, so all permutations in $K$ are even. Since $K$ is a subgroup of the kernel of $\phi$, which contains odd permutations and has order $2^n$, we know $|K| < 2^n$. On the other hand, as we exhibited above, $|K|\geq 2^{n-1}$. Therefore $|K|=2^{n-1}$, as desired. If $n \equiv 3 \pmod{4}$, then $L$ is an odd permutation and $f^{-1}L$ is an odd permutation in $K$. Multiplying $f^{-1}L$ together with the elements in the set of even permutations in $K$ we constructed above, we generate another $2^{n-1}$ unique elements in $K$.
Therefore $|K|\geq 2^{n-1}+2^{n-1} = 2^n$. On the other hand, we know $|K| \leq 2^n$ because $K$ is a subgroup of the kernel of $\phi$, so the result follows. Finally, suppose that $n \equiv 1 \text{ or } 2 \pmod{4}$. In this case, $R$ is an odd permutation and $\phi(R)$ is even. As before, let $u\in G^*$ be such that $\phi(u) = \phi(R)$. Then $uR^{-1}$ is an odd permutation in $K$. As in the previous case, it follows that $|K|=2^n.$ \end{proof} We are now ready to determine the group structure of $G = \langle L, R\rangle$ for all $n>1$ such that $n$ is not a power of 2 and $n\neq 6, 12$. \begin{theorem}\label{main} Suppose a deck has $2n$ cards and let $G = \langle L, R\rangle$. \begin{enumerate}[(a)] \item If $n \equiv 0\pmod{4}$, $n > 12$, and $n$ is not a power of 2, then $G = \langle I, O\rangle$. \item If $n \equiv 1\pmod{4}$ and $n>1$, then $G = \langle I, O\rangle$. \item If $n \equiv 2\pmod{4}$ and $n > 6$, then $G = \langle I, O\rangle = B_n$. \item If $n \equiv 3\pmod{4}$, then $G = B_n$. \end{enumerate} \end{theorem} \begin{proof} In all cases, we know that $A_n \subseteq \phi(G^{*})\subseteq \phi(G)$ by Lemma~\ref{G*}, and we will frequently use this fact. We begin with (a). Note that both $\phi(L)$ and $\phi(R)$ are even permutations for $n \equiv 0\pmod{4}$ (Table~\ref{signs}). So, we conclude that $\phi(G) = A_n$ since $A_n \subseteq \phi(G^{*})\subseteq \phi(G)$. By the First Isomorphism Theorem and Lemma~\ref{K}, we have that $\lvert G \rvert = \lvert A_n \rvert \lvert K \rvert=n!\cdot2^{n-2}$. We next prove that $G=\langle L, R\rangle \subseteq \langle I, O\rangle$ when $n \equiv 0\pmod{4}$. Because $L = VI^{-1}$ and $R = VO^{-1}$, it suffices to show that $V \in \langle I, O\rangle$. By Theorem~\ref{perfectgroups}, $\langle I, O\rangle$ is the intersection of the kernels of $sgn$ and $\overline{sgn}$. We already discussed that the parity of $V$ is $(-1)^{n}$, and since $n$ is even, $sgn(V) = 1$. We also know $\overline{sgn}(V)=1$.
Therefore it follows that $V \in \langle I, O\rangle$, and so $\langle L, R\rangle \subseteq \langle I, O\rangle$. Because the two groups have the same order, we conclude $G = \langle L, R\rangle= \langle I, O\rangle$. For (b), we can use a similar argument to that of (a) to show that $\lvert G \rvert = \lvert A_n \rvert \lvert K \rvert$. By Lemma~\ref{K}, $\lvert G \rvert = n!\cdot2^{n-1}$. Now Theorem~\ref{perfectgroups} tells us that for $n\equiv 1\pmod{4}$, the group $\langle I, O\rangle$ is equal to the kernel of $\overline{sgn}$. Since $\overline{sgn}(V)=1$, we know $V \in \langle I, O\rangle$. So, $G=\langle L, R\rangle \subseteq \langle I, O\rangle$. Because the two groups have the same order, it must be the case that $G = \langle L, R\rangle= \langle I, O\rangle$. We now prove (c). Note that $\phi(L)$ is odd when $n \equiv 2\pmod{4}$. So, we can conclude that $\phi(G) = S_n$ since $A_n$ together with $\phi(L)$ will generate all permutations of $S_n$. By the First Isomorphism Theorem and Lemma~\ref{K}, we have that $\lvert G \rvert = \lvert S_n \rvert \lvert K \rvert= n!\cdot2^{n}$. Now $\langle L, R\rangle$ is a subgroup of $B_n$, but since the two groups have the same order, we conclude $G = \langle L, R\rangle= \langle I, O\rangle = B_n$. Our proof for (d) is similar to that of (c). In this case, it is $\phi(R)$ that is odd instead of $\phi(L)$ (Table~\ref{signs}), but in the same way we can conclude that $\phi(G) = S_n$. The First Isomorphism Theorem and Lemma~\ref{K} tell us that $\lvert G \rvert = \lvert S_n \rvert \lvert K \rvert= n!\cdot2^{n}$. We know that $G=\langle L, R\rangle \subseteq B_n$, but since the orders of the two groups are equal, we get our desired conclusion of $G = B_n$. \end{proof} \begin{example} The smallest $n$ where the perfect shuffle group and unshuffle group differ is when $n=3$ (a deck of 6 cards). A deck of 6 cards has $6! = 720$ possible arrangements total.
Perfect shuffles and unshuffles can realize only 24 and 48 card arrangements, respectively. In particular, Theorem~\ref{perfectgroups} tells us that the group $\langle I, O\rangle$ is equal to the kernel of $sgn\overline{sgn}$, which is a group of order 24 isomorphic to $S_4$, the symmetric group on 4 elements. On the other hand, Theorem~\ref{main} tells us the group $\langle L, R\rangle$ is $B_3$, the group of all centrally symmetric permutations of 6 elements. This group has order 48 and is isomorphic to the direct product $S_4 \times \mathbb{Z}_2$. \end{example} We have restricted our focus to perfect shuffles and unshuffles here, but a wide variety of different shuffling techniques exist which provide mathematical diversion~\cite{Bayer, Butler, Johnson, Ledet, Medvedoff, MorrisHartwig}. Three great expository books that discuss these ideas in depth and describe fun mathematical card tricks to impress your friends and family are~\cite{diaconis2011,Morris2, mulcahy}. \bibliographystyle{plain} \section{Introduction} A well-known card shuffling technique in the world of magicians is the so-called {\em perfect shuffle} (also known as the {\em faro shuffle}). Not only are perfect shuffles used by magicians to do card tricks, but gamblers have used these shuffles to cheat at card games since the 1800's~\cite{expose,braue,Green, Jordan}. As with all card shuffling techniques, perfect shuffles can be considered from a mathematical perspective as permutations of the set of cards. From this point of view, we can better understand the characteristics, limitations, and properties of these shuffles. Here is how to do a perfect shuffle. Take a deck of $2n$ cards and split the deck exactly in half. Next, perfectly interlace the cards from the two stacks together. There are two ways that this interlacing can be done. An {\em out shuffle} interlaces the stacks in such a way that the top and bottom cards from the original deck remain on the top and bottom, respectively.
An {\em in shuffle} interlaces the cards so that the top and bottom cards from the original deck become the second and second-to-last card, respectively. See Figure~\ref{perfect} for an example. While perfect shuffles are the basis for many flashy card tricks today, Persi Diaconis (a professional magician and mathematician) and Ron Graham estimated in 2011 that it takes ``a few hundred hours, certainly thousands of repetitions'' for a person to learn to do a perfect shuffle. They also added, ``We estimate that there are fewer than a hundred people in the world who can do eight perfect shuffles in under a minute.''~\cite{diaconis2011} Under the circumstances, it is natural to wonder if an easier shuffling technique could provide the same opportunities for impressive tricks. With this motivation, Doug Ensley introduced so-called {\em unshuffles} in his article ``Unshuffling for the imperfect magician''~\cite{Ensley}. \begin{figure} \begin{center} \includegraphics[width=11cm]{Perfectshuffle.pdf} \end{center} \caption{Perfect shuffles on a deck of 6 cards.}\label{perfect} \end{figure} Unshuffles are not nearly so difficult. Take a deck of $2n$ cards, and deal the cards from the top of the deck alternately into two piles, starting with one pile on the left and then one on the right. Reassemble the deck by stacking one pile on top of the other. There are two ways that this can be done. The {\em right shuffle} (denoted $R$) stacks the right pile on top of the left. The {\em left shuffle} ($L$) stacks the left pile on top of the right. After a right shuffle, notice that the top and bottom cards have now swapped places, while after a left shuffle, the original top and bottom cards are now the center two cards of the deck. See Figure~\ref{unshuffle} for an example with 6 cards. Ensley investigated unshuffles and described several tricks that can be performed via unshuffles by capitalizing on their mathematical properties~\cite{Ensley}.
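The dealing procedure just described is easy to simulate. As a quick illustration (our own sketch, not from~\cite{Ensley}), the following Python function deals the deck into two piles and restacks them; on 6 cards it reproduces the arrangements shown in Figure~\ref{unshuffle}:

```python
def unshuffle(deck, left_on_top=True):
    """Deal the deck (listed top to bottom) alternately into a left and a
    right pile, then restack; dealing onto a pile reverses the cards."""
    left = deck[0::2][::-1]    # cards 0, 2, 4, ... pile up in reverse order
    right = deck[1::2][::-1]   # cards 1, 3, 5, ... pile up in reverse order
    return left + right if left_on_top else right + left

deck = list(range(6))            # positions 0 (top) through 5 (bottom)
print(unshuffle(deck))           # left shuffle:  [4, 2, 0, 5, 3, 1]
print(unshuffle(deck, False))    # right shuffle: [5, 3, 1, 4, 2, 0]
```

Note how the original top and bottom cards (0 and 5) swap places under the right shuffle and land in the two center positions under the left shuffle, exactly as described above.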
Our purpose here is to continue Ensley's investigation of unshuffles. We have two main results. First, we consider Elmsley's Problem, a classic mathematical card trick where one moves the top card in the deck to any given position. We solve a generalization of Elmsley's Problem using unshuffles on a deck of $2^k$ cards (see Theorem~\ref{elmsley}). For our second main result, we determine the structure of the permutation groups generated by left and right shuffles $\langle L, R \rangle$ on $2n$ cards. We prove that the group coincides with that of perfect shuffles for all $n$ such that $n\not\equiv 3\pmod{4}$. If $n\equiv 3\pmod{4}$, then the group $\langle L, R \rangle$ with $2n$ cards is isomorphic to $B_n$, the group of all centrally symmetric permutations. Meanwhile, $\langle I, O \rangle$ is an index 2 subgroup of $B_n$ in this case. The full result is as follows. \begin{theorem}\label{introtheorem} Suppose a deck has $2n$ cards. \begin{enumerate}[(a)] \item If $2n = 12$, then $\langle L, R\rangle = \langle I, O\rangle$, which is isomorphic to the semi-direct product $\mathbb{Z}^6_2 \rtimes S_5$, where $S_5$ is the symmetric group on $5$ elements. \item If $2n=24$, then $\langle L, R\rangle=\langle I, O\rangle$, which is isomorphic to the semi-direct product $\mathbb{Z}^{11}_2 \rtimes M_{12}$, where the group $M_{12}$ is the Mathieu group of degree 12. The group has order $2^{11}\cdot 95040.$ \item If $2n = 2^k$, then $\langle L, R\rangle=\langle I, O\rangle$, which is isomorphic to the semi-direct product $\mathbb{Z}^k_2 \rtimes \mathbb{Z}_k$. \item If $n \equiv 0\pmod{4}$, $n > 12$, and $n$ is not a power of 2, then $\langle L, R\rangle = \langle I, O\rangle$, which is the intersection of the kernels of $sgn$ and $\overline{sgn}$ and has order $n!2^{n-2}$. (We define $sgn$ and $\overline{sgn}$ in Section~\ref{groups}.)
\item If $n \equiv 1 \pmod{4}$ and $n>1$, then $\langle L, R\rangle=\langle I, O\rangle$, which is the kernel of $\overline{sgn}$ and has order $n!2^{n-1}$. \item If $n \equiv 2\pmod{4}$ and $n > 6$, then $\langle L, R\rangle = \langle I, O\rangle=B_n$, where $B_n$ is the group of centrally symmetric permutations in $S_{2n}$; this group has order $n!2^{n}$. \item If $n \equiv 3\pmod{4}$, then $\langle L, R\rangle = B_n$ and has order $n!2^n$. \end{enumerate} \end{theorem} From this result, we see that limiting oneself to unshuffles will not limit the card arrangements that one can reach as compared to using perfect shuffles. For $n\not\equiv 3 \pmod{4}$, unshuffles on a deck of $2n$ cards yield the exact same set of arrangements as perfect shuffles. When $n\equiv 3 \pmod{4}$, unshuffles allow a person to reach all the same arrangements as perfect shuffles, plus another set of arrangements of equal size that perfect shuffles do not reach. We begin in Section~\ref{properties} by investigating the mathematical properties of unshuffles. In Section~\ref{elmsleysection}, we provide a solution to a generalization of Elmsley's Problem on a deck of $2^k$ cards. We prove Theorem~\ref{introtheorem} in Section~\ref{groups} as follows. Theorem~\ref{special} proves parts (a) and (b) of Theorem~\ref{introtheorem}, Theorem~\ref{power} proves (c), and Theorem~\ref{main} proves (d)--(g). \begin{figure} \begin{center} \includegraphics[width=11cm]{unshuffle.pdf} \end{center} \caption{Unshuffles on a deck of 6 cards.}\label{unshuffle} \end{figure} \section{Properties of unshuffles}\label{properties} Throughout our discussion, we consider decks with an even number of cards, denoted by $2n$. We index the cards by their distance from the top of the deck. So the top card has index 0, the second card from the top has index 1, and so on. The bottom card has index $2n-1$. We treat shuffles as functions and hence read their products from right to left.
A formula for the location of the $i^{th}$ card after an in or out perfect shuffle is known (see, for example,~\cite{Diaconis, Ensley}). These formulas can be stated in terms of modular arithmetic as follows: $$I(i) = 2i + 1 \!\!\!\pmod{2n+1},\quad\text{ and } \quad O(i) = \begin{cases} 2i \!\!\! \pmod{2n-1} & \text{if } i\neq 2n-1\\ 2n-1 & \text{if } i =2n-1. \end{cases} $$ In the same way, we can find formulas to express the action of the left and right shuffles on a deck of cards. Notice that the formulas have similarities with those given above for in and out shuffles. \begin{lemma}\label{formula} Suppose a deck has $2n$ cards. The index of the $i^{th}$ card after a left or right shuffle is given by the following formulas: $$ L(i) = ni + n-1\!\!\!\! \pmod{2n+1},\quad\text{ and } \quad R(i) = \begin{cases} (n-1)i \!\!\!\! \pmod{2n-1} & \text{if } i\neq 0\\ 2n-1 & \text{if } i =0. \end{cases} $$ \end{lemma} \begin{proof} After we deal the deck of cards from the top of the deck alternately into two piles starting with the first card on the left, the cards are in the following array: \begin{align*} \begin{array}{cccc} 2n-2 &&& 2n-1 \\ 2n-4 &&& 2n-3\\ \vdots &&& \vdots\\ 2 &&& 3\\ 0 && & 1 \end{array} \end{align*} Stacking the piles together with a left shuffle puts the cards in the following order: $$2n-2, 2n-4, \ldots, 2, 0, 2n-1, 2n-3, \ldots, 3, 1.$$ The index of the $i^{th}$ card after a left shuffle is then given by the equation: \begin{equation}\label{L} L(i) = \begin{cases} -\tfrac{1}{2}i + n-1 & \text{if $i$ is even }\\ -\tfrac{1}{2}(i+1) + 2n & \text{if $i$ is odd} \end{cases} \end{equation} We want to write $L(i)$ in terms of an expression in $\mathbb{Z}_{2n+1}$. Observe that in $\mathbb{Z}_{2n+1}^*$, we have $(-2)^{-1} \equiv n \pmod{2n+1}$ and also $2n \equiv -1\pmod{2n+1}$. Therefore, no matter whether $i$ is even or odd, we can express $L(i)$ as follows: $L(i)= ni + (n-1) \pmod{2n+1}.$ Now let's consider right shuffles.
After stacking the cards with the right stack on top, the cards are in the following order: $$2n-1, 2n-3, \ldots, 3, 1, 2n-2, 2n-4, \ldots, 2, 0.$$ So then for even $i$, the index $R(i)$ is given simply by adding $n$ to $L(i)$. For odd $i$, $R(i)$ is given by subtracting $n$ from $L(i)$. Thus: $$ R(i) = \begin{cases} -\tfrac{1}{2}i + 2n-1 & \text{if $i$ is even }\\ -\tfrac{1}{2}(i+1) + n & \text{if $i$ is odd} \end{cases} $$ We want to write $R(i)$ in terms of an expression in $\mathbb{Z}_{2n-1}$. This cannot be done for $i=0$, since $R(0) = 2n-1$, so that is a special case. But for $i>0$, we proceed as follows. Observe that in $\mathbb{Z}_{2n-1}^*$, we have $(-2)^{-1} \equiv n-1 \pmod{2n-1}$ and also $n \equiv 1-n \pmod{2n-1}$. Making the appropriate substitutions, we find that for all $i>0$, we can express the above formula for $R$ as: $R(i)= (n-1)i \pmod{2n-1}.$ \end{proof} We note that a right shuffle swaps the places of the outermost cards (cards $0$ and $2n-1$) and performs a left shuffle on the inner $2n-2$ cards. See Figure~\ref{unshuffle} for an example. Using the above formulas, we can now determine how many times one must do a left (or right) shuffle so that the deck returns to its original arrangement. In mathematical terms, we are finding the {\em order} of the shuffle. \begin{proposition} Suppose a deck has $2n$ cards. The order of the left shuffle is the order of $-2$ in $\mathbb{Z}_{2n+1}^{*}$. Denote the order of $-2$ in $\mathbb{Z}_{2n-1}^{*}$ by $r$. If $r$ is even, then the right shuffle has order $r$. If $r$ is odd, the right shuffle has order $2r$. \end{proposition} \begin{proof} If we repeat the left shuffle $k$ times, we have: $L^k(i) = n^ki + n^k - 1\pmod{2n+1}$. If $k$ is the order of $L$, then $n^ki + n^k - 1 \equiv i \pmod{2n+1}$ for all $i$. This implies $n^k \equiv 1\pmod{2n+1}$. In other words, the order of $L$ is the order of $n$ in $\mathbb{Z}_{2n+1}^{*}$.
Equivalently, the order of $L$ is the order of the inverse of $n$ in $\mathbb{Z}_{2n+1}^{*}$, which is $-2$. Next consider the right shuffle. This takes slightly more work because we must consider the inner cards of the deck separately from the top and bottom cards (cards 0 and $2n-1$). Let's begin with the cards with indices $0<i<2n-1$. The right shuffle formula for these cards is $R(i) = (n-1)i$, which is equivalent to $R(i) = -ni$, because $n-1 \equiv -n\pmod{2n-1}$. Composing $R$ with itself $k$ times, we have $R^k(i) = (-n)^ki$ for $0<i<2n-1$. We have $R^k(i) \equiv (-n)^ki \equiv i\pmod{2n-1}$ for all $0<i<2n-1$ if and only if $k$ is a multiple of the order of $-n$. Equivalently, $k$ is a multiple of the order of $-2$ in $\mathbb{Z}_{2n-1}^{*}$ (which is the inverse of $-n$ in $\mathbb{Z}_{2n-1}^{*}$). Now we turn to the top and bottom cards of the deck (cards 0 and $2n-1$). Since the top and bottom cards swap places after one right shuffle, $k$ must be an even integer in order to have $R^k(0)=0$ and $R^k(2n-1) = 2n-1$. It follows that the order of $R$ is the order of $-2$ in $\mathbb{Z}_{2n-1}^{*}$ if that value is even. Otherwise, the order of $R$ is twice the order of $-2$ in $\mathbb{Z}_{2n-1}^{*}$. \end{proof} \begin{example} Suppose we have 52 cards. The order of the left shuffle will be the order of $-2$ in $\mathbb{Z}_{53}^{*}$, which is $52$. The order of $-2$ in $\mathbb{Z}_{51}^{*}$ is 8, so the order of the right shuffle is 8. \end{example} \begin{figure} \begin{center} \includegraphics[width=5cm]{Vshuffle.pdf} \end{center} \caption{The shuffle $V$ on a deck of 6 cards.}\label{V} \end{figure} We now introduce a third type of shuffle, denoted by $V$. This shuffle simply reverses the order of the cards. See Figure~\ref{V} for an example. The shuffle $V$ changes the index of the $i^{th}$ card as follows: $$V(i) = 2n-1-i.$$ Observe that $V$ has order 2.
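Both the proposition and the example are easy to confirm numerically. The Python sketch below (helper names are ours, not the paper's) builds the two permutations from the index formulas of Lemma~\ref{formula} and computes their orders for a 52-card deck.

```python
def L(i, n):
    # Lemma formula: L(i) = ni + n - 1 (mod 2n+1)
    return (n * i + n - 1) % (2 * n + 1)

def R(i, n):
    # Lemma formula: R(0) = 2n-1, and R(i) = (n-1)i (mod 2n-1) for i > 0
    return 2 * n - 1 if i == 0 else ((n - 1) * i) % (2 * n - 1)

def order(perm):
    """Order of a permutation given as a list p, acting by i -> p[i]."""
    identity = list(range(len(perm)))
    current, k = perm[:], 1
    while current != identity:
        current = [perm[x] for x in current]
        k += 1
    return k

n = 26  # a standard 52-card deck
left = [L(i, n) for i in range(2 * n)]
right = [R(i, n) for i in range(2 * n)]
print(order(left), order(right))  # 52 8
```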
This new shuffle $V$ helps us define the connection between left and right shuffles and perfect shuffles $I$ and $O$. This connection is discussed in Proposition 2 of~\cite{Ensley}, as well. \begin{proposition}\label{connection} Suppose a deck has $2n$ cards. Left and right shuffles are related to in and out shuffles and $V$ as follows: $$L = V I^{-1} = I^{-1} V $$ $$R = V O^{-1} = O^{-1} V$$ \end{proposition} \begin{proof} We first prove that $L I = I L$. Using the formulas for the left shuffle and the in shuffle, we compute $L(I(i)) = n(2i+1) + n - 1 = 2ni + 2n - 1 \pmod{2n+1}$ and $I(L(i)) = 2(ni + n - 1) + 1 = 2ni + 2n - 1 \pmod{2n+1}$. The expressions for $LI$ and $IL$ are equal, as desired. Next, observe that: $$L(I(i))=2ni + 2n - 1 \equiv 2ni + i + 2n - 1 - i \equiv 2n - 1 - i =V(i)\pmod{2n+1}.$$ Thus, we have proved that $L I = I L = V$. Equivalently, $L = V I^{-1} = I^{-1} V.$ Using a similar process, we can also prove $R = V O^{-1} = O^{-1} V.$ We leave this to the reader. \end{proof} Observe from the above proposition that a left shuffle is ``close'' to being the inverse of an in shuffle (up to reversing the order of the deck), and the right shuffle is similarly close to being the inverse of an out shuffle. In Section~\ref{groups}, we will be interested in writing the shuffle $V$ either as a product of perfect shuffles or as a product of unshuffles. The following lemma describes a special case where this can be done easily, which will be useful to us in Theorem~\ref{special}. \begin{lemma}\label{relation} If $V = I^y$ where $y$ is even, then $V = L^y$. \end{lemma} \begin{proof} By Proposition~\ref{connection}, $L = I^{-1}V=VI^{-1}$. Using this and the facts that $V = I^y$, $y$ is even, and $V^2$ is the identity, we compute: $L^y = (I^{-1}V)^y = (I^y)^{-1}V^y =V.$ \end{proof} A valuable property of perfect shuffles is the following. Take two cards that are equidistant from the center of the deck.
After an in or out shuffle, the two cards remain equidistant from the center of the deck. We make this precise in the following definition. \begin{definition} A permutation $\sigma$ of $\{0,1,\ldots, 2n-1\}$ {\bf preserves central symmetry} if for every $i, j\in \{0,1,\ldots, 2n-1\}$ such that $i + j = 2n-1$, we have $\sigma(i) + \sigma(j) = 2n-1$. \end{definition} Among magicians, this principle of preserving central symmetry is termed {\em stay-stack}. The fact that perfect shuffles preserve central symmetry was first pointed out in 1957 by Russell Duck, a Pennsylvania policeman (see Section 3 of \cite{Diaconis}), and the observation can be exploited to create all sorts of card tricks. It is straightforward to show that unshuffles have this property, as well. \begin{proposition} Left and right shuffles preserve central symmetry. \end{proposition} \begin{proof} Perfect shuffles $I$ and $O$ preserve central symmetry. Hence so do their inverses $I^{-1}$ and $O^{-1}$. The shuffle $V$ has the effect of swapping the cards in each centrally symmetric pair, and so $V$ also preserves central symmetry. Left and right shuffles are compositions of $V, I^{-1}$, and $O^{-1}$ (Proposition~\ref{connection}), so $L$ and $R$ must also preserve central symmetry. \end{proof} \section{Elmsley's Problem with unshuffles}\label{elmsleysection} In the 1950's, Scottish computer programmer and magician Alex Elmsley posed a problem which grew popular and has since been given his name: \begin{quote}{\bf Elmsley's Problem.} Is it possible to move the top card in the deck to any given position via perfect shuffles? \end{quote} Elmsley himself found a clever solution, which he published in the card magazine {\em Ibidem} (No. 11, September 1957, see also~\cite{diaconis2011, Diaconis, Morris}). The solution is as follows. \begin{theorem}[Solution to Elmsley's Problem] Begin with a deck of $2n$ cards. 
To move the top card to position $i$, express $i$ in binary, and then, reading the binary expression from left to right, let 1 represent an in shuffle and let 0 represent an out shuffle. Performing this indicated sequence of in and out shuffles in order from left to right will move card $0$ to position $i$. \end{theorem} This slick solution to Elmsley's Problem encouraged quite a bit of related work. Paul Swinford discovered that in the special case where the deck has $2^k$ cards, the sequence of shuffles described in the above theorem swaps the places of the top card and the $i^{th}$ card. More generally, he observed that a sequence of shuffles that brings card $i$ to position $j$ will also move card $j$ to position $i$~\cite{swinford1,swinford2}. Later, Ramnath and Scully described a way to move card $i$ to position $j$ using perfect shuffles for any deck of $2n$ cards~\cite{Ramnath}. Diaconis and Graham found a solution to the inverse of Elmsley's Problem (that is, bringing card $i$ to the top of the deck via perfect shuffles)~\cite{DiaconisGraham}. Elmsley's Problem has also been solved for so-called generalized perfect shuffles~\cite{Medvedoff}. Performing this trick with perfect shuffles is an impressive feat.\footnote{See the trick performed and discussed here: \url{https://youtu.be/Y2lXsxmBx7E}} Having the option to use unshuffles will make it easier. We give a solution to Elmsley's Problem for unshuffles in the special case where the deck has $2^k$ cards. We first prove a lemma that determines the location of the $i^{th}$ card after a left or right shuffle in terms of the binary representation of $i$. \begin{lemma}\label{binary} Suppose we have a deck of $2^k$ cards for some $k \geq 1$. Write $i$ where $0\leq i\leq 2^k-1$ in binary notation: $i = x_{k-1}~x_{k-2}\cdots x_1~x_0$, and let $\overline{x}_j = 1-x_j$.
Using the binary expansion, left and right shuffles move the $i^{th}$ card to a new position as follows: \vspace{-1cm} \begin{align*} &L(x_{k-1}~x_{k-2}\cdots x_1~x_0) = x_{0}~\overline{x}_{k-1}~\overline{x}_{k-2} \cdots \overline{x}_{2}~\overline{x}_{1}, \\ &R(x_{k-1}~x_{k-2}\cdots x_1~x_0)= \overline{x}_{0}~\overline{x}_{k-1}~\overline{x}_{k-2} \cdots \overline{x}_{2}~\overline{x}_{1}. \end{align*} \end{lemma} \begin{proof} Using our left shuffle formula (Lemma~\ref{formula}) and the fact that $2n = 2^k$, we have $L(i) = 2^{k-1}i + (2^{k-1}-1) \pmod {2^k+1}.$ Plug in the value $i=x_{k-1}\cdot2^{k-1} + x_{k-2}\cdot2^{k-2} + \cdots + x_{0}\cdot2^{0}$: $$L(i) = x_{k-1}\cdot2^{2k-2} + x_{k-2}\cdot2^{2k-3} + \cdots + x_{0}\cdot2^{k-1} + 2^{k-1} - 1 \pmod {2^k+1}.$$ The above expression can be simplified in two ways. First note that we can substitute $2^{k-1}-1 = 1\cdot2^{k-2}+1\cdot2^{k-3}+\cdots+1\cdot2^{0}$. Secondly, since $2^k \equiv -1\pmod{2^k+1}$, we can replace $x_{j}\cdot2^{j+(k-1)}$ with $-x_{j}\cdot2^{j-1}$. \begin{align*} L(i)&\equiv -x_{k-1}\cdot2^{k-2}-x_{k-2}\cdot2^{k-3}-\cdots-x_{1}\cdot2^{0}+x_{0}\cdot2^{k-1}+2^{k-1}-1\\ &\equiv x_{0}\cdot2^{k-1}+ (1-x_{k-1})\cdot2^{k-2}+(1-x_{k-2})\cdot2^{k-3}+\cdots +(1-x_{1})2^{0}\\ &\equiv x_{0}\cdot2^{k-1}+ \overline{x}_{k-1}\cdot2^{k-2}+\overline{x}_{k-2}\cdot2^{k-3}+\cdots +\overline{x}_{1}2^{0} \pmod{2^k+1} \end{align*} where $\overline{x}_j = 1-x_j$. We have reached our desired conclusion: $$L(x_{k-1}~x_{k-2}\cdots x_1~x_0) = x_{0}~\overline{x}_{k-1}~\overline{x}_{k-2} \cdots \overline{x}_{2}~\overline{x}_{1}.$$ We use the same method for right shuffles. The right shuffle formula for $i>0$ is: $R(i) = (2^{k-1}-1)i \pmod {2^k-1}$. 
Since $2^{k-1}-1 \equiv -2^{k-1} \pmod{2^k-1},$ we have $$R(i) \equiv -2^{k-1}i \pmod {2^k-1}$$ for $i>0.$ Plugging in $i =x_{k-1}\cdot2^{k-1} + x_{k-2}\cdot2^{k-2} + \cdots + x_{0}\cdot2^{0}$ and using the fact that $2^k \equiv 1\pmod{2^k-1}$, we have: \begin{align*} R(i) &\equiv (-2^{k-1})(x_{k-1}\cdot2^{k-1} + x_{k-2}\cdot2^{k-2} + \cdots +x_1\cdot2^1 + x_{0}\cdot2^{0})\\ &\equiv -x_{k-1}\cdot2^{2k-2} -x_{k-2}\cdot2^{2k-3} - \cdots - x_1\cdot 2^{k}-x_{0}\cdot2^{k-1} \\ &\equiv -x_{k-1}\cdot2^{k-2} -x_{k-2}\cdot2^{k-3} -\cdots -x_{1}\cdot2^{0}-x_{0}\cdot2^{k-1} \pmod{2^k-1} \end{align*} Now note that $1\cdot2^{k-1}+1\cdot2^{k-2}+\cdots+1\cdot2^{0}=2^{k}-1 \equiv 0\pmod{2^k-1}$. So we can add this to the above expression without changing its value: \begin{align*} R(i) &\equiv (1-x_{0})\cdot2^{k-1} +(1-x_{k-1})\cdot2^{k-2}+\cdots +(1-x_{1})\cdot2^{0} \\ &\equiv \overline{x}_{0}\cdot2^{k-1} +\overline{x}_{k-1}\cdot2^{k-2}+\cdots +\overline{x}_{1}\cdot2^{0} \pmod{2^k-1}, \end{align*} where $\overline{x}_j = 1-x_j$. This proves our result for $i>0$. For the special case $i=0$, observe that we have the following, as desired: $$R(0) = 2^k-1 = 1 \cdot 2^{k-1} + 1 \cdot 2^{k-2} + \cdots + 1 \cdot 2^{0}.$$ \end{proof} \begin{figure} \begin{center} \includegraphics[width=11cm]{Elmsley8.pdf} \end{center} \caption{Unshuffles that move the top card to position 5.}\label{Elmsely8} \end{figure} We now give and prove our solution to Elmsley's Problem using unshuffles in the case that there are $2^k$ cards. In fact, we solve a more general problem. The process that we describe {\em swaps the places of card $i$ and card $j$ in $k$ shuffles for any $i$ and $j$}. Thus, we can specialize to the case $i=0$ to recover the solution to Elmsley's Problem. \begin{theorem}\label{elmsley} Suppose there are $2^k$ cards for some $k\geq 1$. 
To swap the cards in positions $i$ and $j$ using unshuffles, write $i$ and $j$ in binary, and compute $i\oplus j = x_{k-1}~x_{k-2}\cdots x_1~x_0$ (where $\oplus$ denotes the xor operation). Shuffle the deck with $k$ shuffles denoted in order by $S_0, S_1, \ldots, S_{k-1}$, where $S_r$ is defined as follows: \begin{enumerate} \item If $k$ is odd, $$ S_r = \begin{cases} L & \text{if } x_r=0\\ R & \text{if } x_r =1 \end{cases} $$ \item If $k$ is even, $$ S_r = \begin{cases} R & \text{if } x_r=0\\ L & \text{if } x_r =1 \end{cases} $$ \end{enumerate} This sequence of shuffles will swap the cards in positions $i$ and $j$. \end{theorem} \begin{proof} Lemma~\ref{binary} tells us that both a left shuffle and right shuffle have the effect of sliding the bits of the card index to the right by 1 position and moving the last bit to the front. Since there are $k$ bits total, it follows that performing $k$ shuffles (left or right or a mixture of both) will return all bits back to their original position (with some of the bits flipped, perhaps). Moreover, after these $k$ shuffles are done, each bit will have flipped a total of either $k$ or $k-1$ times. The exact number of times a given bit $x_r$ is flipped is pinned down by what type of shuffle the $r^{th}$ shuffle was. The bit $x_r$ will have flipped $k$ times if $S_r$ is a right shuffle, and $x_r$ will have flipped $k-1$ times if $S_r$ is a left shuffle. Using this we can design a sequence of $k$ shuffles $S_0, S_1, \ldots, S_{k-1}$ that turns the binary expansion for $j$ into that of $i$ and vice versa. First, identify the bits where $j$ and $i$ differ via computing: $i\oplus j= x_{k-1}~x_{k-2}\cdots x_1~x_0$. If $x_r = 0$, then we need no change to the $r^{th}$ bit of $j$, and we can plan our shuffles according to the parity of $k$ so that the $r^{th}$ bit is unchanged when the shuffles are finished. 
Namely, if $k$ is odd, then set $S_r = L$ because that will cause the $r^{th}$ bit of $j$ to flip $k-1$ times and hence remain unchanged. If $k$ is even, set $S_r = R$, and the $r^{th}$ bit of $j$ will flip $k$ times and hence remain unchanged. Similarly, if $x_r = 1$, then the bits of $i$ and $j$ differ. We plan our shuffles according to the parity of $k$ so that the $r^{th}$ bit is flipped. If $k$ is odd, then set $S_r = R$, and the $r^{th}$ bit of $j$ will flip. If $k$ is even, set $S_r = L$. This defines a sequence of $k$ shuffles $S_0, S_1, \ldots, S_{k-1}$. Performing these shuffles in order will change all the bits of $j$ to coincide with the bits of $i$ and vice versa. Therefore, the cards in positions $i$ and $j$ will trade positions after the sequence of shuffles. \end{proof} \begin{example} We illustrate this solution to Elmsley's Problem with two examples, one with $k$ odd and the other with $k$ even. Suppose there are 8 cards and we want the top card to trade positions with the $5^{th}$ card. We compute $000_2 \oplus 101_2 = 101_2$. The shuffles should be done in the order of $R, L, R$. See Figure~\ref{Elmsely8}. Now suppose there are 16 cards and we want to swap the cards in the $6^{th}$ position and the $11^{th}$ position. In binary, we write $6 = 0110_2$ and $11=1011_2$. We compute $0110_2 \oplus 1011_2 = 1101_2$. Theorem~\ref{elmsley} tells us the shuffles $L, R, L, L$ in that order will swap these two cards. See Figure~\ref{Elmsely16}. \end{example} \begin{figure} \begin{center} \includegraphics[width=11cm]{Elmsley16.pdf} \end{center} \caption{Unshuffles that swap the positions of card 6 and card 11.}\label{Elmsely16} \end{figure} \section{The permutation groups of unshuffles}\label{groups} We now determine the structure of the permutation group $G = \langle L, R\rangle$. Through this, we will know exactly what card arrangements can be reached using unshuffles.
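Before turning to the group structure, we note that the recipe of Theorem~\ref{elmsley} lends itself to direct simulation. The Python sketch below (helper names are ours) builds the shuffle sequence from $i\oplus j$ and applies it using the index formulas of Lemma~\ref{formula}; it checks both examples above.

```python
def L(i, n):
    return (n * i + n - 1) % (2 * n + 1)

def R(i, n):
    return 2 * n - 1 if i == 0 else ((n - 1) * i) % (2 * n - 1)

def swap_sequence(k, i, j):
    """Shuffles S_0, ..., S_{k-1} of the theorem for a deck of 2^k cards."""
    x = i ^ j  # the xor marks the bits where i and j differ
    if k % 2 == 1:
        return ['R' if (x >> r) & 1 else 'L' for r in range(k)]
    return ['L' if (x >> r) & 1 else 'R' for r in range(k)]

def apply_shuffle(letter, deck):
    """Apply L or R: the card at position p moves to position L(p) or R(p)."""
    n, fn = len(deck) // 2, L if letter == 'L' else R
    out = [None] * len(deck)
    for pos, card in enumerate(deck):
        out[fn(pos, n)] = card
    return out

for k, i, j in [(3, 0, 5), (4, 6, 11)]:
    deck = list(range(2 ** k))
    for s in swap_sequence(k, i, j):
        deck = apply_shuffle(s, deck)
    print(swap_sequence(k, i, j), deck[i], deck[j])
    # ['R', 'L', 'R'] 5 0   then   ['L', 'R', 'L', 'L'] 11 6
```

As the proof suggests, every card actually moves: card $c$ ends at position $c \oplus (i \oplus j)$, so the two marked cards (and every other pair differing by the same xor) swap.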
In some (but not all) cases, the permutation group $\langle L, R\rangle$ coincides with $\langle I, O\rangle$. The fact that the groups often coincide is not so surprising, given the close relationship between perfect shuffles and unshuffles (Proposition~\ref{connection}). However, the details require careful work. We prove Theorem~\ref{introtheorem} in a sequence of three theorems (Theorems~\ref{special},~\ref{power}, and~\ref{main}). Let's begin by reviewing the permutation groups of perfect shuffles. Let $B_n$ be the group of all centrally symmetric permutations of $2n$ elements. This group can also be identified as the group of all signed $n\times n$ permutation matrices. Because $L, R, I$, and $O$ are all themselves centrally symmetric permutations, it follows that both $\langle L, R\rangle$ and $\langle I, O\rangle$ are subgroups of $B_n$. We define several homomorphisms on $B_n$. First, let $$sgn: B_n\longrightarrow \{\pm 1\}$$ be the homomorphism such that $sgn(g)$ is the sign (or, parity) of the permutation $g$. Next, observe that while each permutation $g \in B_n$ is a permutation on $2n$ elements, $g$ induces a permutation on the $n$ centrally symmetric pairs. We define a homomorphism $$\phi:B_n \longrightarrow S_n$$ by assigning $\phi(g)$ to be the permutation that $g$ induces on the $n$ centrally symmetric pairs. Next we define $$\overline{sgn}: B_n\longrightarrow \{\pm 1\}$$ by assigning $\overline{sgn}(g)$ to be the sign of the permutation $\phi(g)\in S_n$. We obtain one final homomorphism as follows. Let $sgn\overline{sgn}: B_n\longrightarrow \{\pm 1\}$ be the homomorphism that sends $g$ to the product $sgn(g)\overline{sgn}(g)$. Now we are ready to state the group structure of $\langle I, O\rangle$. The first three items are special cases (when the deck has size 12, 24, or a power of 2, respectively). Beyond that, the group structure is determined by the congruence class of $n$ modulo 4. For a proof and full details, see~\cite{Diaconis}.
\begin{theorem}\cite{Diaconis}\label{perfectgroups} The structure of the permutation group $\langle I, O\rangle$ on $2n$ cards is as follows: \begin{enumerate} \item If $2n = 12$, then $\langle I, O\rangle$ is isomorphic to the semi-direct product $\mathbb{Z}^6_2 \rtimes S_5$, where $S_5$ is the symmetric group on $5$ elements. \item If $2n=24$, then $\langle I, O\rangle$ is isomorphic to the semi-direct product $\mathbb{Z}^{11}_2 \rtimes M_{12}$, where $M_{12}$ is the Mathieu group of degree 12. \item If $2n = 2^k$, then $\langle I, O\rangle$ is isomorphic to the semi-direct product $\mathbb{Z}^k_2 \rtimes \mathbb{Z}_k$. \item If $n \equiv 0 \pmod{4}$, $n> 12$, and $n$ is not a power of 2, then $\langle I, O\rangle$ is the intersection of the kernels of $sgn$ and $\overline{sgn}$ and has order $n!2^{n-2}$. \item If $n \equiv 1 \pmod{4}$ and $n>1$, then $\langle I, O\rangle$ is the kernel of $\overline{sgn}$ and has order $n!2^{n-1}$. \item If $n \equiv 2 \pmod{4}$ and $n>6$, then $\langle I, O\rangle = B_n$ and has order $n!2^{n}$. \item If $n \equiv 3 \pmod{4}$, then $\langle I, O\rangle$ is equal to the kernel of $sgn\cdot\overline{sgn}$ and has order $n!2^{n-1}$. \end{enumerate} \end{theorem} Now we turn to consider the group $G = \langle L,R \rangle$. We begin by proving $G = \langle L,R \rangle$ coincides with the perfect shuffle group in the cases where $2n$ is 12 or 24 (Theorem~\ref{special}) and in the case where $2n$ is a power of 2 (Theorem~\ref{power}). \newpage \begin{theorem}\label{special} If $2n = 12$ or $2n = 24$, then $\langle L, R\rangle = \langle I, O\rangle$. \end{theorem} \begin{proof} We first observe that the formula for $I^r$ is given by: $$I^r(i) \equiv 2^ri + 2^{r-1} + 2^{r-2} + \cdots + 1 \equiv 2^ri + 2^r - 1 \pmod{2n+1}.$$ Now suppose that we have $2n=12$ cards. Observe that $$I^6(i) = 2^6 i + 2^6-1 \equiv 11-i =V(i)\pmod{13}.$$ Therefore $V = I^6$. Lemma~\ref{relation} now implies that $V = L^6$.
Because we can express $V$ in terms of $I$, Proposition~\ref{connection} implies that $\langle L, R\rangle \subseteq \langle I, O\rangle$. Moreover, because we can express $V$ in terms of $L$, it implies $\langle I,O\rangle \subseteq \langle L, R\rangle$. Therefore $\langle L, R\rangle = \langle I, O\rangle$. Similarly, for $2n=24$ cards we have $$I^{10}(i) = 2^{10} i + 2^{10}-1 \equiv 23-i = V(i)\pmod{25}.$$ Because $V = I^{10}$, Lemma~\ref{relation} implies $V = L^{10}$. The same argument for $12$ cards applies to $24$ cards, so $\langle L, R\rangle = \langle I, O\rangle$ for the $2n = 24$ case, as well. \end{proof} \begin{theorem}\label{power} Suppose $2n=2^k$ for some positive integer $k$. Then $\langle L,R \rangle = \langle I, O\rangle$. \end{theorem} \begin{proof} Using the formula for $I^r(i)$ mentioned in the proof of Theorem~\ref{special} and setting $r=k$, we observe: $$I^k(i) \equiv 2^ki + 2^k - 1 \equiv - i + 2^k - 1 =V(i) \pmod{2^k+1}.$$ Therefore $V = I^k.$ This implies that $L$ and $R$ can be written as combinations of $I$ and $O$ by Proposition~\ref{connection}. So then $\langle L, R\rangle \subseteq \langle I, O\rangle.$ Now we prove $\langle I, O\rangle \subseteq \langle L, R\rangle$, splitting into two cases according to the parity of $k$. Suppose that $k$ is even. Begin with the equation $L = VI^{-1}$ from Proposition~\ref{connection}. Composing both sides with themselves $k$ times, we have $$L^k = (VI^{-1})(VI^{-1})\cdots(VI^{-1}) = V^k I^{-k} = I^{-k},$$ because $V$ and $I^{-1}$ commute and $k$ is even. Plugging in $V=I^k$ and remembering that $V=V^{-1}$, we find that $L^k = V.$ This implies that $I$ and $O$ can be written in terms of $L$ and $R$ for $k$ even. Therefore $\langle I, O\rangle \subseteq \langle L, R\rangle$, which proves $\langle I, O\rangle = \langle L, R\rangle$ for $k$ even, as desired. Now suppose $k$ is odd. Begin with $R = V O^{-1}$ (from Proposition~\ref{connection}) and compose both sides with themselves $k$ times.
Remembering that $O^{-1}$ and $V$ commute and using the fact that $O^{-k}$ is the identity (because $O^k(i) = 2^ki \equiv i \pmod{2^k-1}$), we find $$R^k = (VO^{-1})(VO^{-1})\cdots(VO^{-1}) = V^k O^{-k} = V^k = V.$$ Therefore when $k$ is odd, $V = R^k.$ Similar to before, we conclude $\langle I, O\rangle \subseteq \langle L, R\rangle$, which proves $\langle I, O\rangle = \langle L, R\rangle$ for $k$ odd, as desired. \end{proof} Now that we have settled the above special cases, we have the more difficult task of considering what happens in general. We must establish some preliminary definitions and two lemmas first. Let $G^*$ denote the subgroup of $G=\langle L,R\rangle$ consisting of all shuffles in $G$ that leave the set $\{0, 1, \ldots, n-1\}$ invariant. Since all shuffles in $G^*$ also must preserve central symmetry, it follows that elements in $G^*$ can be expressed as $\sigma\sigma'$ where $\sigma$ is a permutation of $\{0, 1, \ldots, n-1\}$ and $\sigma'$ represents the corresponding permutation of the elements $\{0', 1', \ldots, (n-1)'\}$ where $r' = 2n-1-r$. So an element $\sigma\sigma'$ in $G^*$ is completely determined by $\sigma$. Using this notation throughout, we now prove two lemmas. \begin{lemma}\label{G*} Let $n> 1$ be such that $n$ is not a power of 2 and $n\neq 6, 12$. The group $G^*$ contains all permutations of the form $\sigma\sigma'$ where $\sigma$ is any even permutation of $\{0, 1, \ldots, n-1\}$ and $\sigma'$ is as defined above. \end{lemma} \begin{proof} In~\cite{Diaconis}, the authors introduce several useful permutations which are created with perfect shuffles and will be useful in our context, as well. Let $k$ and $r$ be positive integers and define: \begin{align*} c &= O(I^{-1}OIO^{-1})^2O^{-1}\\ w &= O^{-1}Ic^{-1}O^{-1}Ic^2I^{-1}Oc^{-1}I^{-1}O\\ b &= (I^kO^{-k}I^{-1}O)^{-2}\\ c' &= ObO^{-1}\\ h(r) &= O^{-r}I^r \end{align*} All of these permutations can be performed using left and right shuffles, as well.
To see this, substitute $I = VL^{-1}$ and $O = VR^{-1}$ into the above expressions. Because $V$ commutes with both $L$ and $R$ and because $V^2$ is the identity, we are left with a product of left and right shuffles. In Lemma 9 of~\cite{Diaconis}, the authors prove that when $n$ is odd, $c$ and $w$ generate all permutations $\sigma\sigma'$ where $\sigma$ is an even permutation of $\{0, 1, \ldots, n-1\}$. Therefore our result is proved for $n$ odd. Now suppose $n$ is even, $n$ is not a power of 2, and $n\neq 6,12$. Write $2n = 2^kv$ where $v>1$ and $v$ is odd. In Lemmas 15, 16, 17, and 18 of~\cite{Diaconis}, the authors prove that under the stated conditions on $n$, the shuffles $b, c', h(1), h(2), \ldots, h(k-1)$ generate all permutations $\sigma\sigma'$ where $\sigma$ is an even permutation of $\{0, 1, \ldots, n-1\}$. Therefore our result for left and right shuffles follows, as well. \end{proof} We must prove one more preliminary lemma, but first we establish some notation and make observations. Recall the homomorphism $\phi:B_n\longrightarrow S_n$ is defined by assigning $\phi(g)$ to be the permutation that $g$ induces on the $n$ centrally symmetric pairs. In~\cite{Diaconis}, the authors found that the parities of the permutations $I$, $O$, $\phi(I),$ and $\phi(O)$ depended on the congruence class of $n$ modulo $4$ (see Table 3 in~\cite{Diaconis}). Also remember that $V$ switches the card in position $i$ with the card in position $i'$ for $0 \leq i \leq n-1$, so $V$ is the product of $n$ transpositions: $V = (0, 0')(1, 1')(2,2')\cdots(n-1,(n-1)').$ Therefore the parity of $V$ is $(-1)^n$. Also $\phi(V) = (1)$, the identity permutation. Hence $\overline{sgn}(V)=1$. These parities, along with the relation between perfect shuffles and unshuffles found in Proposition~\ref{connection}, help us construct Table~\ref{signs} for the parities of $L, R, \phi(L),$ and $\phi(R)$, which we will use frequently. 
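Table~\ref{signs} can also be checked by brute force. The Python sketch below (helper names are ours) computes the sign of each permutation from its cycle type, along with the induced permutation on centrally symmetric pairs, for one representative of each congruence class.

```python
def sign(perm):
    """Sign of a permutation from its cycle decomposition."""
    seen, s = [False] * len(perm), 1
    for start in range(len(perm)):
        if seen[start]:
            continue
        j, length = start, 0
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        s *= (-1) ** (length - 1)
    return s

def phi(perm):
    """Permutation induced on the n centrally symmetric pairs {i, 2n-1-i},
    with each pair labeled by its smaller element."""
    m = len(perm)
    return [min(perm[i], m - 1 - perm[i]) for i in range(m // 2)]

def parities(n):
    left = [(n * i + n - 1) % (2 * n + 1) for i in range(2 * n)]
    right = [2 * n - 1 if i == 0 else ((n - 1) * i) % (2 * n - 1) for i in range(2 * n)]
    return sign(left), sign(right), sign(phi(left)), sign(phi(right))

for n in (4, 5, 6, 3):  # n = 0, 1, 2, 3 (mod 4): the four rows of the table
    print(n % 4, parities(n))
```

The four printed rows reproduce the signs of $L$, $R$, $\phi(L)$, and $\phi(R)$ listed in Table~\ref{signs}.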
\begin{table} \begin{center} \setlength{\arrayrulewidth}{0.5mm} \setlength{\tabcolsep}{10pt} \renewcommand{\arraystretch}{1.5} \begin{tabular}{ |p{2.7cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}| } \hline & $L$& $R$& $\phi(L)$& $\phi(R)$ \\ \hline $n \equiv 0\pmod{4}$ & $~~1$ & $~~1$ & $~~1$ & $~~1$ \\ $n \equiv 1\pmod{4}$ & $~~1$ & $-1$ & $~~1$ & $~~1$ \\ $n \equiv 2\pmod{4}$ & $-1$ & $-1$ & $-1$ & $~~1$ \\ $n \equiv 3\pmod{4}$ & $-1$ & $~~1$ & $~~1$ & $-1$ \\ \hline \end{tabular} \end{center} \caption{Parities of $L$, $R$, $\phi(L)$, $\phi(R)$ for congruence classes of $n$ modulo $4$.}\label{signs} \end{table} The kernel of the homomorphism $\phi:B_n \longrightarrow S_n$ is generated by all transpositions $(x,x')$ where $x \in \{0, 1, \ldots, n-1\}$ and $x' = 2n-1 - x$. These transpositions commute with each other. Hence $\ker(\phi)$ is a group of order $2^n$. We denote the restriction of the homomorphism $\phi$ to $G$ by $\phi|_G$, and we denote the kernel of $\phi|_G$ by $K$. The following lemma finds the order of $K$. \begin{lemma}\label{K} Let $n> 1$ be such that $n$ is not a power of 2 and $n\neq 6, 12$, and let $K$ denote the kernel of the homomorphism $\phi|_G$. If $n\equiv 0 \pmod{4}$, then $|K| = 2^{n-1}$. Otherwise, $|K| = 2^{n}$. \end{lemma} \begin{proof} We first construct a permutation $f\in B_n$ as follows: $$ f(i) = \begin{cases} L(i) & \text{if $i$ is even and $0\leq i \leq n-1$}\\ & \text{ \quad or if $i$ is odd and $n\leq i \leq 2n-1$ }\\ L(i)' & \text{if $i$ is odd and $0\leq i \leq n-1$}\\ &\text{ \quad or if $i$ is even and $n\leq i \leq 2n-1$ } \end{cases} $$ Our motivation for creating this permutation is that $f$ induces the same permutation as $L$ on the centrally symmetric pairs, so $\phi(f) = \phi(L)$, but unlike $L$, the permutation $f$ leaves the set $\{0, 1, \ldots, n-1\}$ invariant (hence, of course, $f$ also leaves the set $\{n, n+1, \ldots, 2n-1\}$ invariant). To verify this, consult Equation~\ref{L} for $L(i)$.
We will make use of $f$ throughout. Now suppose that $n \equiv 0, 1,$ or $ 3 \pmod{4}$. In this case, Table~\ref{signs} tells us $\phi(L)$ is an {\em even} permutation. Denote this permutation via $\sigma\in S_n$. It follows that $f = \sigma\sigma'$. By Lemma~\ref{G*}, since $\sigma$ is an even permutation, we know that $f$ is an element of $G^*$. Therefore $f$ can be realized using left and right shuffles. Now consider the permutation: $f^{-1}L$. Written in cycle notation, we have: $f^{-1}L = (11')(33')\cdots(kk')$, where $k=n-1$ if $n$ is even and $k=n-2$ if $n$ is odd. In the special case that $n=3,4,$ or 5, we have all the ingredients we need to proceed, but for $n>5$, we need to define one extra ingredient: $g = (0,1)(3,5)(0',1')(3',5')$. Notice that by Lemma~\ref{G*}, we know $g \in G^*$, so $g$ can be realized with left and right shuffles, as well. With these ingredients ready, we construct the permutation $h$: $$ h = \begin{cases} f^{-1}L=(1 1') &\textrm{ if } n=3\\ f^{-1}L=(1 1')(3 3') &\textrm{ if } n=4, 5\\ (f^{-1}L)g(f^{-1}L)g =(0 0')(1 1') &\textrm{ if }n> 5 \end{cases} $$ Observe that because $f, L, g \in G$, this permutation $h$ is also an element of $G$. Moreover, $h\in K$ because $h$ is a product of transpositions of the form $(xx')$. Now suppose $n=3$. Conjugating $h=(1 1')$ by elements of $G^*$, we generate $(0 0'), (1 1'),$ and $(2 2')$, all of which are in $K$. Together these three elements will generate a set of 8 elements. Therefore $|K|\geq 2^3.$ But on the other hand, we know $|K|\leq 2^3$ since $K$ is a subgroup of the kernel of $\phi$, so then $|K|=2^3$. This concludes the argument for $n=3.$ Next suppose $n\geq 4$ (and we continue to assume $n \equiv 0, 1,$ or $ 3 \pmod{4}$). Conjugating $h$ by elements of $G^*$, it follows that all elements of the form $(yy')(zz')$ where $y,z \in \{0,1,\ldots, n-1\}$ are in $K$. 
Multiply these elements of the form $(yy')(zz')$ together in all possible ways, and we produce any product of an {\em even} number of these transpositions. The total number of such products (all of which belong to $K$) is: $$\binom{n}{0}+\binom{n}{2}+ \binom{n}{4}+\cdots ~~ = 2^{n-1}. $$ Hence $|K|\geq 2^{n-1}$ if $n \equiv 0, 1, 3 \pmod{4}$ and $n\geq 4$. If $n \equiv 2 \pmod{4}$, we can use a similar argument to prove that $|K|\geq 2^{n-1}$. In this case, $\phi(R)$ is an even permutation instead of $\phi(L)$, and so the argument proceeds by switching $L$ to $R$ and making related adjustments. We leave these details to the reader and move forward assuming that $|K|\geq 2^{n-1}$ for all $n$. If $n \equiv 0 \pmod{4}$, then $L$ and $R$ are both even permutations, so all permutations in $K$ are even. Since $K$ is a subgroup of the kernel of $\phi$, which contains odd permutations and has order $2^n$, we know $|K| < 2^n$. On the other hand, as we exhibited above, $|K|\geq 2^{n-1}$. Therefore $|K|=2^{n-1}$, as desired. If $n \equiv 3 \pmod{4}$, then $L$ is an odd permutation and $f^{-1}L$ is an odd permutation in $K$. Multiplying $f^{-1}L$ together with the elements in the set of even permutations in $K$ we constructed above, we generate another $2^{n-1}$ unique elements in $K$. Therefore $|K|\geq 2^{n-1}+2^{n-1} = 2^n$. On the other hand, we know $|K| \leq 2^n$ because $K$ is a subgroup of the kernel of $\phi$, so the result follows. Finally, suppose that $n \equiv 1 \text{ or } 2 \pmod{4}$. In this case, $R$ is an odd permutation and $\phi(R)$ is even. Similar to before, let $k\in G^*$ be such that $\phi(k) = \phi(R)$. Then $kR^{-1}$ is an odd permutation in $K$. As in the previous case, it follows that $|K|=2^n.$ \end{proof} We are now ready to determine the group structure of $G = \langle L, R\rangle$ for all $n>1$ such that $n$ is not a power of 2 and $n\neq 6, 12$. \begin{theorem}\label{main} Suppose a deck has $2n$ cards and let $G = \langle L, R\rangle$.
\begin{enumerate}[(a)] \item If $n \equiv 0\pmod{4}$, $n > 12$, and $n$ is not a power of 2, $G = \langle I, O\rangle$. \item If $n \equiv 1\pmod{4}$ and $n>1$, $G = \langle I, O\rangle$. \item If $n \equiv 2\pmod{4}$ and $n > 6$, $G = \langle I, O\rangle = B_n$. \item If $n \equiv 3\pmod{4}$, $G = B_n$. \end{enumerate} \end{theorem} \begin{proof} In all cases, we know that $A_n \subseteq \phi(G^{*})\subseteq \phi(G)$ by Lemma~\ref{G*}, and we will frequently use this fact. We begin with (a). Note that both $\phi(L)$ and $\phi(R)$ are even permutations for $n \equiv 0\pmod{4}$ (Table~\ref{signs}), so $\phi(G) \subseteq A_n$. Combined with $A_n \subseteq \phi(G^{*})\subseteq \phi(G)$, we conclude that $\phi(G) = A_n$. By the First Isomorphism Theorem and Lemma~\ref{K}, we have that $\lvert G \rvert = \lvert A_n \rvert \lvert K \rvert=n!\cdot2^{n-2}$. We next prove that $G=\langle L, R\rangle \subseteq \langle I, O\rangle$ when $n \equiv 0\pmod{4}$. Because $L = VI^{-1}$ and $R = VO^{-1}$, it suffices to show that $V \in \langle I, O\rangle$. By Theorem~\ref{perfectgroups}, $\langle I, O\rangle$ is the intersection of the kernels of $sgn$ and $\overline{sgn}$. We already discussed that the parity of $V$ is $(-1)^{n}$, and since $n$ is even, $sgn(V) = 1$. We also know $\overline{sgn}(V)=1$. Therefore it follows that $V \in \langle I, O\rangle$, and so $\langle L, R\rangle \subseteq \langle I, O\rangle$. Because the two groups have the same order, we conclude $G = \langle L, R\rangle= \langle I, O\rangle$. For (b), we can use a similar argument to that of (a) to show that $\lvert G \rvert = \lvert A_n \rvert \lvert K \rvert$. By Lemma~\ref{K}, $\lvert G \rvert = n!\cdot2^{n-1}$. Now Theorem~\ref{perfectgroups} tells us that for $n\equiv 1\pmod{4}$, the group $\langle I, O\rangle$ is equal to the kernel of $\overline{sgn}$. Since $\overline{sgn}(V)=1$, we know $V \in \langle I, O\rangle$. So, $G=\langle L, R\rangle \subseteq \langle I, O\rangle$. 
Because the two groups have the same order, it must be the case that $G = \langle L, R\rangle= \langle I, O\rangle$. We now prove (c). Note that $\phi(L)$ is odd when $n \equiv 2\pmod{4}$. So, we can conclude that $\phi(G) = S_n$, since $A_n$ together with the odd permutation $\phi(L)$ generates all of $S_n$. By the First Isomorphism Theorem and Lemma~\ref{K}, we have that $\lvert G \rvert = \lvert S_n \rvert \lvert K \rvert= n!\cdot2^{n}$. Now $\langle L, R\rangle$ is a subgroup of $B_n$, but since the two groups have the same order, we conclude $G = \langle L, R\rangle= \langle I, O\rangle = B_n$. Our proof for (d) is similar to that of (c). In this case, it is $\phi(R)$ that is odd instead of $\phi(L)$ (Table~\ref{signs}), but in the same way we can conclude that $\phi(G) = S_n$. The First Isomorphism Theorem and Lemma~\ref{K} tell us that $\lvert G \rvert = \lvert S_n \rvert \lvert K \rvert= n!\cdot2^{n}$. We know that $G=\langle L, R\rangle \subseteq B_n$, but since the orders of the two groups are equal, we get our desired conclusion of $G = B_n$. \end{proof} \begin{example} The smallest $n$ where the perfect shuffle group and unshuffle group differ is when $n=3$ (a deck of 6 cards). A deck of 6 cards has $6! = 720$ possible arrangements total. Perfect shuffles and unshuffles can realize only 24 and 48 card arrangements, respectively. In particular, Theorem~\ref{perfectgroups} tells us that the group $\langle I, O\rangle$ is equal to the kernel of $sgn\overline{sgn}$ which is a group of order 24 isomorphic to $S_4$, the symmetric group on 4 elements. On the other hand, Theorem~\ref{main} tells us the group $\langle L, R\rangle$ is $B_3$, the group of all centrally symmetric permutations of 6 elements. This group has order 48 and is isomorphic to the direct product $S_4 \times \mathbb{Z}_2$. 
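The counts 24 and 48 are small enough to confirm by brute force. The sketch below is our illustration, not part of the paper; the dealing and interleaving conventions encoded in the four permutations are one concrete choice (they satisfy $L=VI^{-1}$ and $R=VO^{-1}$ for the deck-reversal $V$), and the generated group orders do not depend on that choice.

```python
def apply_perm(deck, perm):
    # perm[i] = position, in the previous deck, of the card that
    # lands at position i after one shuffle
    return tuple(deck[j] for j in perm)

def closure(generators):
    # breadth-first closure under the generators, starting from the
    # identity deck; for a finite deck this is the generated group
    identity = tuple(range(len(generators[0])))
    seen, frontier = {identity}, [identity]
    while frontier:
        new = []
        for d in frontier:
            for g in generators:
                e = apply_perm(d, g)
                if e not in seen:
                    seen.add(e)
                    new.append(e)
        frontier = new
    return seen

# deck of 6 cards, positions 0 (top) through 5 (bottom)
O = (0, 3, 1, 4, 2, 5)  # out-shuffle: interleave halves, top stays on top
I = (3, 0, 4, 1, 5, 2)  # in-shuffle: top card goes second
L = (4, 2, 0, 5, 3, 1)  # left unshuffle: deal two piles, left pile on top
R = (5, 3, 1, 4, 2, 0)  # right unshuffle: right pile on top

print(len(closure([I, O])), len(closure([L, R])))  # 24 and 48
```

The two counts match $|\langle I, O\rangle| = |S_4| = 24$ and $|\langle L, R\rangle| = |B_3| = 48$ from the example.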
\end{example} We have restricted our focus to perfect shuffles and unshuffles here, but a wide variety of different shuffling techniques exist which provide mathematical diversion~\cite{Bayer, Butler, Johnson, Ledet, Medvedoff, MorrisHartwig}. Three great expository books that discuss these ideas in depth and describe fun mathematical card tricks to impress your friends and family are~\cite{diaconis2011,Morris2, mulcahy}. \bibliographystyle{plain}
https://arxiv.org/abs/math/0703505
A Neumann Type Maximum Principle for the Laplace Operator on Compact Riemannian Manifolds
In this paper we present a proof of a Neumann type maximum principle for the Laplace operator on compact Riemannian manifolds. A key point is the simple geometric nature of the constant in the a priori estimate of this maximum principle. In particular, this maximum principle can be applied to manifolds with Ricci curvature bounded from below and diameter bounded from above to yield a maximum estimate without dependence on a positive lower bound for the volume.
\newcommand{\sect}[1]{\section{#1} \setcounter{equation}{0}} \newtheorem{lem}{Lemma}[section] \begin{document} \newtheorem{defn}[lem]{Definition} \newtheorem{theo}[lem]{Theorem} \newtheorem{cor}[lem]{Corollary} \newtheorem{prop}[lem]{Proposition} \newtheorem{rk}[lem]{Remark} \newtheorem{ex}[lem]{Example} \newtheorem{note}[lem]{Note} \newtheorem{conj}[lem]{Conjecture} \title{A Neumann Type Maximum Principle for the Laplace Operator on Compact Riemannian Manifolds} \author{Guofang Wei and Rugang Ye \\ {\small Department of Mathematics} \\ {\small University of California, Santa Barbara}} \date{} \maketitle \begin{abstract} In this paper we present a proof of a Neumann type maximum principle for the Laplace operator on compact Riemannian manifolds. A key point is the simple geometric nature of the constant in the a priori estimate of this maximum principle. In particular, this maximum principle can be applied to manifolds with Ricci curvature bounded from below and diameter bounded from above to yield a maximum estimate without dependence on a positive lower bound for the volume. \end{abstract} \sect{Introduction} The main purpose of this paper is to present a proof of a Neumann type maximum principle for the Laplace operator on a closed Riemannian manifold. As a key feature of this maximum principle, the constant in the maximum estimate depends on the Riemannian manifold only in terms of the dimension and the volume-normalized Neumann isoperimetric constant. This allows us to apply it to manifolds with Ricci curvature bounded from below and diameter bounded from above to obtain a maximum principle without dependence on a positive lower bound for the volume. A special case of this maximum principle, namely Theorem C with $\Phi=0$, has been believed to be true and used in [P] for establishing an eigenvalue pinching theorem for manifolds with positive Ricci curvature. (The accounts in [P] also suggest a belief in a general version.) 
But we cannot find any other reference for this maximum principle (the special case or the general case) in the literature. A corresponding maximum principle (in various formulations) for the Dirichlet boundary value problem on a domain is well-known. But its usual proof, which is an application of Moser iteration based on the Sobolev inequality, is not suitable for the Neumann type problem of this paper for a number of reasons. In particular, the key independence from volume lower bound mentioned above requires new arguments for our Neumann type problem. Another obvious difference is that no average of the subsolution appears in the maximum principle for the Dirichlet boundary value problem, in contrast to the situation of this paper. Consider a closed Riemannian manifold $(M, g)$ of dimension $n$, where $g$ denotes the metric. Let $L^p(M)$ denote the $L^p$ space of functions on $M$, $L^p(TM)$ the $L^p$ space of vector fields on $M$, and $W^{k, p}(M)$ the $W^{k,p}$ Sobolev space of functions on $M$. The $L^p$ norm with respect to $g$ will be denoted by $\| \cdot \|_p$, i.e. \begin{eqnarray} \|f\|_p =\left(\int_M |f|^p\right)^{\frac{1}{p}}, \|\Phi\|_p=\left(\int_M |\Phi|^p\right)^{\frac{1}{p}} \end{eqnarray} for $f \in L^p(M)$ and $\Phi \in L^p(TM)$. (The notation of the volume form of $g$ is often omitted in this paper.) The following volume-normalized $L^p$ norm $ \| \cdot \|_p^*$ will play an important role in this paper: \begin{eqnarray} \|f\|_p^* = \left(\frac{1}{vol_g(M)} \int_M |f|^p\right)^{\frac{1}{p}}, \|\Phi\|^*_p &=&\left(\frac{1}{vol_g(M)}\int_M|\Phi|^p\right)^{\frac{1}{p}}, \end{eqnarray} where $vol_g(M)$ denotes the volume of $(M,g)$. The average of a function $u \in L^1(M)$ on $M$ will be denoted by $u_M$, i.e. \begin{eqnarray} u_M=\frac{1}{vol_g(M)}\int_M u. \end{eqnarray} For a function $u$ on $M$ we denote its positive part by $u^+$ and its negative part by $u^-$, i.e. $u^+=\max\{u, 0\}$ and $u^-=\min\{u, 0\}$. 
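On a discretized model (a finite "manifold" made of cells with prescribed volumes) these normalized quantities are easy to experiment with. The snippet below is our illustration, not part of the paper; it checks two elementary properties of $\| \cdot \|_p^*$: a constant function has normalized norm equal to its absolute value, and the norm is nondecreasing in $p$, both because the normalized volume is a probability measure.

```python
import numpy as np

def norm_star(f, vol, p):
    # volume-normalized L^p norm: ((1/vol(M)) * integral |f|^p)^(1/p)
    return (np.sum(vol * np.abs(f) ** p) / np.sum(vol)) ** (1.0 / p)

def average(u, vol):
    # u_M = (1/vol(M)) * integral of u
    return np.sum(vol * u) / np.sum(vol)

rng = np.random.default_rng(0)
vol = rng.uniform(0.5, 2.0, size=50)   # cell volumes
f = rng.normal(size=50)

# a constant function c has ||c||_p^* = |c| for every p
assert abs(norm_star(np.full(50, -3.0), vol, p=4) - 3.0) < 1e-9

# ||f||_q^* <= ||f||_p^* for q <= p (Jensen's inequality, since
# vol/vol(M) is a probability measure)
assert norm_star(f, vol, 1) <= norm_star(f, vol, 2) <= norm_star(f, vol, 4)

# the average of f - f_M vanishes
assert abs(average(f - average(f, vol), vol)) < 1e-12
```

The monotonicity in $p$ is exactly what makes the volume-normalized norms convenient later on, where $L^p$ and $L^{2p}$ norms are compared.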
The Laplace operator $\Delta$ is the negative Laplacian, i.e. $\Delta u=\mbox{div} \nabla u$. Let $C^*_{N,I}(M,g)$ denote the volume-normalized Neumann isoperimetric constant, which is defined in terms of the Neumann isoperimetric constant $C_{N,I}(M,g)$, see Section 2. \\ \noindent {\bf Theorem A} {\it Assume $n \ge 3$. Let $u$ be a function in $W^{1,\alpha}(M)$ with $\alpha>n$, which satisfies \begin{eqnarray} \label{sub} \Delta u \ge f+\mbox{ div }\Phi \end{eqnarray} in the weak sense for a measurable function $f$ on $M$ such that $f^- \in L^p(M)$ and a vector field $\Phi \in L^{2p}(TM)$ with $p>\frac{n}{2}$, i.e. \begin{eqnarray} \label{nabla3} \int_M \nabla u \cdot \nabla \phi \le -\int_M f \phi + \int_M \Phi \cdot \nabla \phi \end{eqnarray} for all nonnegative $\phi \in W^{1,2}(M)$ (equivalently, all nonnegative $\phi \in W^{1,\frac{\alpha}{\alpha-1}}(M)$). Then we have \begin{eqnarray} \label{maxA} \sup_M \, u \le u_M+ C(n, p, C^*_{N,I}(M,g)) (\|f^-\|^*_p+ \|\Phi\|^*_{2p}) \end{eqnarray} with a positive constant $C(n, p, C^*_{N,I}(M,g))$ depending only on $n, p$ and $C^*_{N,I}(M,g)$. This constant depends continuously and increasingly on $C^*_{N,I}(M,g)$. } \\ The classical strong maximum principle says that $u\equiv u_M$ if $\Delta u \ge 0$ (in the weak sense). Theorem A includes this as a special corollary. But the main point of Theorem A lies in the quantitative estimate (\ref{maxA}) and the simple geometric nature of the constant $C(n,p,C^*_{N,I}(M,g))$ in the estimate. As emphasized above, no other data from the metric $g$ such as the volume are involved in this constant. In contrast to traditional estimates of the maximum principle type, the estimate (\ref{maxA}) is not scaling invariant. In other words, the estimate obtained with respect to a rescaled metric and the corresponding rescaled $f$ and $\Phi$ differs from the original estimate. 
This non-invariance is brought into the estimate by a construction in the proof of Lemma 4.2, see Remark 3 in Section 4. (For a discussion of the behavior of the estimate under rescaling of the metric see Remark 4.) Without breaking the scaling invariance it would be impossible to obtain a maximum estimate in which the constant depends solely on the dimension $n$, the exponent $p$ and the volume-normalized Neumann isoperimetric constant. This is one of the key features of our arguments. (Scaling invariant maximum estimates can also be derived, see [Y].) As a consequence of Theorem A and S.~Gallot's estimate of the volume-normalized Neumann isoperimetric constant in [Ga1] (see Theorem \ref{gallot}) we obtain the following result which involves a lower bound for the Ricci curvature and an upper bound for the diameter. For convenience, we define the {\it diameter-rescaled Ricci curvature} of a unit tangent vector $v$ to be $\hat Ric(v,v)=diam_g(M)^2 Ric(v,v)$, where $diam_g(M)$ denotes the diameter of $(M,g)$. We set $\kappa_{\hat Ric}=\min_{v\in TM, |v|=1} \hat Ric(v,v)$ and $\hat \kappa_{\hat Ric}=|\kappa_{\hat Ric}^-|=|\min\{\kappa_{\hat Ric}, 0\}|$. \\ \noindent {\bf Theorem B} {\it Assume $n \ge 3$. Let $u$ be a function in $W^{1,\alpha}(M)$ with $\alpha>n$ satisfying \begin{eqnarray} \Delta u \ge f+\mbox{ div }\Phi \end{eqnarray} in the weak sense for a measurable function $f$ such that $f^- \in L^p(M)$ and a vector field $\Phi \in L^{2p}(TM)$ with $p>\frac{n}{2}$. Then we have \begin{eqnarray} \label{maxB} \sup_M \, u \le u_M+ C(n, p, \hat \kappa_{\hat Ric}, diam_g(M)) (\|f^-\|^*_p+ \|\Phi\|^*_{2p}) \end{eqnarray} with a positive constant $C(n, p, \hat \kappa_{\hat Ric}, diam_g(M))$ depending only on $n, p, \hat \kappa_{\hat Ric}$ and the diameter. 
This constant depends continuously on its arguments and increasingly on $\hat \kappa_{\hat Ric}$ and $diam_g(M)$.}\\ If we assume an upper bound $D$ for the diameter and a nonpositive lower bound $\kappa$ for the Ricci curvature, then we have $\min_M \hat Ric \ge D^2 \kappa$ and $C(n,p, \hat \kappa_{\hat Ric}, diam_g(M)) \le C(n,p, D^2 |\kappa|, D)$. Hence the estimate (\ref{maxB}) can be applied. We state a corollary for the case of positive Ricci curvature. We formulate it under the assumption $Ric \ge n-1$, which can always be achieved by rescaling. \\ \noindent {\bf Theorem C} {\it Assume $n \ge 3$ and that the Ricci curvature satisfies $Ric \ge n-1$. Let $u$ be a function in $W^{1,\alpha}(M)$ with $\alpha>n$ satisfying \begin{eqnarray} \Delta u \ge f+\mbox{ div }\Phi \end{eqnarray} in the weak sense for a measurable function $f$ on $M$ such that $f^- \in L^p(M)$ and a vector field $\Phi \in L^{2p}(TM)$ with $p>\frac{n}{2}$. Then we have \begin{eqnarray} \label{maxC} \sup_M \, u \le u_M+ C(n, p)(\|f\|_p^*+\|\Phi\|^*_{2p}) \end{eqnarray} with a positive constant $C(n, p)$ depending only on $n$ and $p$.}\\ Analogous results hold true if we assume an upper bound for the diameter, and a lower bound for the Ricci curvature in a suitable integral sense, thanks to Gallot's and Petersen-Sprouse's estimates for the Neumann isoperimetric constant in [Ga2] and [PS]. We omit the obvious statements of those results. \\ \noindent {\bf Remark 1} In the above results we restrict to dimensions $n \ge 3$. The 2-dimensional analogues also hold true, see [Y]. We would also like to mention that it is straightforward to extend the above results to compact manifolds with boundary under the Neumann boundary condition. One can also extend the above results to general elliptic operators of divergence form, see [Y]. 
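To make explicit why the constant in Theorem C involves only $n$ and $p$ (a short check of ours; it uses Myers' theorem, which is not invoked by name above): if $Ric \ge n-1$, then Myers' theorem bounds the diameter, and the diameter-rescaled Ricci curvature is nonnegative, so
\begin{eqnarray*}
diam_g(M) \le \pi, \qquad \hat Ric(v,v)=diam_g(M)^2\, Ric(v,v) \ge 0, \qquad \hat \kappa_{\hat Ric}=0 .
\end{eqnarray*}
Since the constant in Theorem B depends continuously and increasingly on $\hat \kappa_{\hat Ric}$ and $diam_g(M)$, one may take $C(n,p)=C(n, p, 0, \pi)$ in (\ref{maxC}).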
\\ \noindent {\bf Remark 2} Theorem A is also valid if we replace in the definition of $C^*_{I,N}(M,g)$ the Neumann isoperimetric constant $C_{N,I}(M,g)$ by the Poincar\'{e}-Sobolev constant $C_{P,S}(M,g)$ (see Section 2 for its definition). Indeed, it is the Poincar\'{e} inequality (\ref{poincare1}), the Poincar\'{e}-Sobolev inequality (\ref{poincare2}) and the Sobolev inequality (\ref{sobolev}) which are employed in our arguments. The Neumann isoperimetric constant appears in these inequalities. Obviously, the Poincar\'{e}-Sobolev inequality (\ref{poincare2}) can be reformulated in terms of the Poincar\'{e}-Sobolev constant. Then the Poincar\'{e} inequality (\ref{poincare1}) and the Sobolev inequality (\ref{sobolev}) follow as corollaries, with the constants suitably modified. We formulate Theorem A in terms of the Neumann isoperimetric constant because we consider it to be a more fundamental quantity. \\ The proof of Theorem A involves several ingredients. One is Moser iteration based on the Sobolev inequality. Various versions of this technique have been used in many situations, but the way it is done in this paper is new, see the proof of Lemma \ref{maxth1}. It is in this proof that the scaling invariance is broken, as mentioned above. On the other hand, from this proof one can see that the technique of Moser iteration alone cannot lead to a maximum estimate for $u-u_M$ in terms of $f$ and $\Phi$. Instead, the estimate one obtains also depends on the $L^2$ norm of $(u-u_M)^+$. Without using additional tools it seems impossible to go any further. Our strategy for overcoming this difficulty is to employ the Green function $G_0$ of the Laplace operator. First we combine Lemma \ref{maxth1} with the Poincar\'{e} inequality to establish Theorem \ref{maxth2} which is the corresponding maximum principle for solutions (rather than subsolutions). Using this result we reduce the right hand side of (\ref{sub}) to a constant. 
Then we utilize the Green function $G_0$ to obtain the desired estimate. Employing the Green function is crucial for the whole scheme. There is an additional subtlety here. Usually, maximum principles based on Moser iteration hold true for all subsolutions $u$ in the Sobolev space $W^{1,2}(M)$. (This is the case in Lemma \ref{maxth1} (for subsolutions) and Theorem \ref{maxth2} (for solutions).) In the situation of Theorem A (hence also Theorem B and Theorem C), we have to require $u \in W^{1,\alpha}(M)$ for $\alpha>n$. This restriction stems from the involvement of the Green function. Using additional tools, one can extend Theorem A to $u \in W^{1,2}(M)$, provided that $\Phi=0$, see [Y]. It remains open whether one can extend the full Theorem A to $u \in W^{1,2}(M)$. (See also [Y] for a weaker maximum principle which holds true for all $u \in W^{1,2}(M)$.) In the above scheme of utilizing the Green function $G_0$, a lower bound for $G_0$ is needed. In [Si], a lower bound for $G_0$ in terms of $\hat \kappa_{\hat Ric}$, the volume and the diameter is obtained. This lower bound is sufficient for establishing the estimate (\ref{maxB}) in Theorem B and the estimate (\ref{maxC}) in Theorem C, but is not suitable for establishing the general estimate (\ref{maxA}) in Theorem A. Following the arguments in [CL] and [Si], we derive in Section 3 a lower bound for $G_0$ which is proportional to $vol_g(M)^{-1}$ with a factor given in terms of the volume-normalized Neumann isoperimetric constant. This form of lower bound is exactly what we need for establishing Theorem A. It is also of independent interest. We would like to mention that Theorem C is sufficient for the purpose of [P] because all involved functions in [P] are at least Lipschitz continuous. 
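The role the Green function plays in this scheme has a transparent finite-dimensional analogue, which may help fix ideas (our illustration, not from the paper): for a connected graph, the Moore-Penrose pseudoinverse of the graph Laplacian stands in for $G_0$. Its rows sum to zero, mirroring the normalization $\int_M G_0(x,y)\,dy=0$, it is symmetric, and it reproduces the discrete analogue of the representation $u(x)=u_M-\int_M G_0(x,y)\Delta_y u(y)dy$.

```python
import numpy as np

n = 7
# Laplacian matrix of a cycle graph on n vertices; with the paper's
# sign convention Delta = -Lap (Lap is positive semidefinite)
Lap = 2.0 * np.eye(n)
for i in range(n):
    Lap[i, (i + 1) % n] -= 1.0
    Lap[i, (i - 1) % n] -= 1.0

G0 = np.linalg.pinv(Lap)   # discrete stand-in for the Green function

u = np.random.default_rng(1).normal(size=n)
u_M = u.mean()             # uniform "volume" on the vertices
delta_u = -Lap @ u         # Delta u

# normalization: every "row" of G0 integrates to zero
assert np.allclose(G0 @ np.ones(n), 0.0)
# symmetry G0(x,y) = G0(y,x)
assert np.allclose(G0, G0.T)
# representation formula u(x) = u_M - sum_y G0(x,y) (Delta u)(y)
assert np.allclose(u, u_M - G0 @ delta_u)
```

The identity holds because the pseudoinverse satisfies $G_0\,\mathrm{Lap} = \mathrm{Id} - \frac{1}{n}\mathbf{1}\mathbf{1}^T$, i.e. it inverts the Laplacian exactly on the mean-zero functions, just as $G_0$ does on a closed manifold.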
We would also like to mention that Theorem A (or Theorem \ref{maxth1}) leads to an estimate for the $L^p$ norm of the Green function $G_0$ for each $0<p<\frac{n}{n-2}$ (thanks to an observation by Xiaodong Wang) and an estimate for the $L^q$ norm of the gradient of $G_0$ for each $0<q<\frac{n}{n-1}$. This will be presented elsewhere. We would like to thank Xiaodong Wang for bringing the question regarding the validity of Theorem C (with $\Phi=0$) to our attention. The first named author would also like to acknowledge many helpful discussions with Xiaodong Wang, and also with Jian Song. \sect{The Neumann Isoperimetric Constant} Consider a closed Riemannian manifold $(M, g)$ of dimension $n$. The Neumann isoperimetric constant of $(M, g)$ is defined to be \begin{eqnarray} C_{N, I}(M,g)=\sup\{\frac{vol_g(\Omega)^{\frac{n-1}{n}}}{A(\partial \Omega)}: \Omega \subset M \mbox{ is a } C^1 \mbox{ domain }, vol_g(\Omega) \le \frac{1}{2} vol_g(M)\}. \end{eqnarray} The Poincar\'{e}-Sobolev constant (for the exponent $2$) of $(M,g)$ is defined to be \begin{eqnarray} C_{P,S}(M,g)=\sup\{\|u-u_M\|_{\frac{2n}{n-2}}: u \in C^1(M), \|\nabla u\|_2=1\}. \end{eqnarray} We have the following Poincar\'{e} inequality, Poincar\'{e}-Sobolev inequality and Sobolev inequality. See [Y] for their proofs. (For these inequalities with somewhat different constants see [Si].) The Poincar\'{e}-Sobolev inequality (\ref{poincare2}) gives an upper bound of the Poincar\'{e}-Sobolev constant in terms of the Neumann isoperimetric constant. 
\begin{lem} \label{poincare} There hold for all $u \in W^{1,2}(M)$ \begin{eqnarray} \label{poincare1} \|u-u_M\|_2 \le \frac{2(n-1)}{n-2}C_{N, I}(M, g) vol_g(M)^{\frac{1}{n}}\|\nabla u\|_2, \end{eqnarray} \begin{eqnarray} \label{poincare2} \|u-u_M\|_{\frac{2n}{n-2}} \le \frac{4(n-1)}{n-2} C_{N,I}(M, g) \|\nabla u\|_2, \end{eqnarray} and \begin{eqnarray} \label{sobolev} \|u\|_{\frac{2n}{n-2}} \le \frac{2(n-1)}{n-2} C_{N,I}(M, g)\|\nabla u\|_2+ \frac{\sqrt{2}}{vol_g(M)^{\frac{1}{n}}} \|u\|_2, \end{eqnarray} whenever $n\ge 3$. \end{lem} It is convenient to use the following volume-normalized Neumann isoperimetric constant: \begin{eqnarray} C_{I,N}^*(M,g)=C_{N,I}(M,g)vol_g(M)^{\frac{1}{n}}, \end{eqnarray} which was first introduced by J.~Cheeger in his study of the first eigenvalue of the Laplace operator [Che]. Note that $C_{N,I}(M,g)$ is scaling invariant, while $C^*_{I, N}(M,g)$ has the same scaling weight as the $n$-th root of the volume, or the diameter. In terms of $C^*_{I,N}(M,g)$ and the volume-normalized $L^p$ norms, Lemma \ref{poincare} can be reformulated as follows. \begin{lem} \label{poincare*} There hold for all $u \in W^{1,2}(M)$ \begin{eqnarray} \label{poincare1*} \|u-u_M\|^*_2 \le \frac{2(n-1)}{n-2}C^*_{N, I}(M, g)\|\nabla u\|^*_2, \end{eqnarray} \begin{eqnarray} \label{poincare2*} \|u-u_M\|^*_{\frac{2n}{n-2}} \le \frac{4(n-1)}{n-2} C_{N,I}^*(M, g) \|\nabla u\|^*_2, \end{eqnarray} and \begin{eqnarray} \label{sobolev*} \|u\|^*_{\frac{2n}{n-2}} \le \frac{2(n-1)}{n-2} C_{N,I}^*(M, g)\|\nabla u\|^*_2+ \sqrt{2} \|u\|^*_2, \end{eqnarray} whenever $n\ge 3$. \end{lem} The following estimate of the volume-normalized Neumann isoperimetric constant easily follows from S.~Gallot's corresponding estimate in [Ga1]. 
\begin{theo} \label{gallot} There holds \begin{eqnarray} \label{gallot-est} C_{N,I}^*(M, g) \le C(n, \hat \kappa_{\hat Ric})diam_g(M), \end{eqnarray} where $C(n, \hat \kappa_{\hat Ric})$ is a positive constant depending only on $n$ and $\hat \kappa_{\hat Ric}$. It depends continuously and increasingly on $\hat \kappa_{\hat Ric}$. \end{theo} \noindent {\em Proof.} We rescale to make the diameter equal one. Then we apply the estimate for the Neumann isoperimetric constant in [Ga1]. Expressing the estimate in terms of the original metric we arrive at (\ref{gallot-est}). \hspace*{\fill}\rule{3mm}{3mm}\quad \\ \sect{The Green Function} Consider a closed Riemannian manifold $(M, g)$ of dimension $n$ as before. Let $G_0(x, y)$ be the unique Green function of the Laplace operator $\Delta$ such that $\int_M G_0(x, y)dy=0$ for all $x \in M$, where $dy$ denotes the volume form of $g$. Thus we have \begin{eqnarray} \label{greenrep} u(x)=u_M-\int_M G_0(x, y)\Delta_y u(y)dy \end{eqnarray} for all $u \in C^{\infty}(M)$, where $\Delta_y$ means $\Delta$ with the subscript indicating the argument. (Similar notations will be used below.) $G_0(x,y)$ is smooth away from $x=y$. Moreover, $G_0(x,y)=G_0(y,x)$ for all $x,y \in M, x \not =y$. In this section we present some basic facts about $G_0$ and derive a lower bound of $G_0$ in terms of $C_{N, I}(M,g)$ and the volume. \begin{lem} \label{greenintegral} Assume $n \ge 3$. Then there holds \begin{eqnarray} G_0(x,\cdot) \in W^{1, \beta}(M) \end{eqnarray} for all $0<\beta<\frac{n}{n-1}$. \end{lem} \noindent {\em Proof.} By e.g. [Theorem 4.17, A] we have $|G_0(x,y)| \le Cd(x,y)^{2-n}$ and $|\nabla_y G_0(x,y)| \le Cd(x,y)^{1-n}$ for all $x, y \in M, x \not = y$ and a positive constant $C$ depending on $g$. Consequently, we have \begin{eqnarray} \label{greenintegral-1} G_0(x, \cdot) \in L^{q_1}(M), \, \nabla_y G_0(x,y) \in L^{q_2}(TM) \end{eqnarray} for all $0<q_1<\frac{n}{n-2}$ and all $0<q_2<\frac{n}{n-1}$. 
Then it follows easily that $G_0(x, \cdot) \in W^{1, q}(M)$ for all $x \in M$ and $0<q<\frac{n}{n-1}$. Indeed, we have for an arbitrary $x \in M$ and small $\epsilon>0$ \begin{eqnarray} \int_{M-B_{\epsilon}(x)} G_0(x,y) \mbox{ div}_y \Phi(y) dy &=&-\int_{M-B_{\epsilon}(x)} \nabla_y G_0(x,y) \cdot \Phi(y)dy \nonumber \\ &+&\int_{\partial B_{\epsilon}(x)} G_0(x,y) \Phi(y) \cdot \nu(y)\, d\sigma(y) \end{eqnarray} for all smooth vector fields $\Phi$ on $M$, where $\nu$ denotes the inward unit normal of the geodesic sphere $\partial B_{\epsilon}(x)$. Since $|G_0(x, y)| \le C\epsilon^{2-n}$ on $\partial B_{\epsilon}(x)$ we can let $\epsilon \rightarrow 0$ to arrive at \begin{eqnarray} \label{greenweak} \int_{M} G_0(x,y) \mbox{ div}_y \Phi(y) dy =-\int_{M} \nabla_y G_0(x,y) \cdot \Phi(y)dy. \end{eqnarray} By (\ref{greenintegral-1}) and (\ref{greenweak}) we infer that $G_0(x, \cdot) \in W^{1,q}(M)$ for all $0<q<\frac{n}{n-1}$ and that (\ref{greenweak}) holds true for all $\Phi \in W^{1, p}(TM)$ whenever $p>n$, where $W^{1,p}(TM)$ denotes the $W^{1,p}$ Sobolev space of vector fields on $M$. \\ \hspace*{\fill}\rule{3mm}{3mm}\quad \\ \begin{lem} \label{greennew} Let $u \in W^{1,q}(M)$ with $q >n$. Then \begin{eqnarray} \label{greenrep1} u(x)=u_M+\int_M \nabla_y G_0(x, y) \cdot \nabla_y u (y) dy \end{eqnarray} holds true for a.e. $x \in M$. \end{lem} \noindent {\em Proof.} By Lemma \ref{greenintegral}, we can integrate (\ref{greenrep}) by parts to deduce (\ref{greenrep1}) for all $u \in C^{\infty}(M)$. Applying Lemma \ref{greenintegral} and a limiting argument we then conclude that (\ref{greenrep1}) holds true for each $u \in W^{1,q}(M)$ a.e. as long as $q>n$. \\ \hspace*{\fill}\rule{3mm}{3mm}\quad \\ Next let $H(x,y,t)$ be the heat kernel for $\Delta$, i.e. 
\begin{eqnarray} \label{kernelequation} \frac{\partial}{\partial t}H(x,y, t)=\Delta_y H(x, y, t) \end{eqnarray} for $t>0$ and \begin{eqnarray} \label{delta} \lim_{t \rightarrow 0} H(x, y, t)=\delta_x \end{eqnarray} in the sense of distributions, where $\delta_x$ is the Dirac $\delta$-function with center $x$. $H$ is symmetric in $x, y$ and smooth away from $x=y, t=0$. We have the basic representation formula \begin{eqnarray} \label{heatrep} u(x,t)=\int_{0}^td\tau \int_M H(x,y,t-\tau) (\frac{\partial}{\partial \tau}-\Delta_y) u(y, \tau) dy+\int_M H(x,y,t) u(y, 0)dy \end{eqnarray} for all smooth functions $u$ and $t>0$. Note that $H(x,y,t)>0$ for $t>0$ and all $x, y \in M$. We set \begin{eqnarray} G(x,y,t)=H(x,y,t)-\frac{1}{vol_g(M)}. \end{eqnarray} Choosing $u(x,t)\equiv 1$ in (\ref{heatrep}) we deduce \begin{eqnarray} \label{H-1} \int_M H(x,y,t)dy =1 \end{eqnarray} and hence \begin{eqnarray} \label{G-0} \int_M G(x,y,t)dy=0 \end{eqnarray} for all $x \in M$ and $t>0$. \begin{lem} Assume $n \ge 3$. Then there holds \begin{eqnarray} \label{greenequalA} G_0(x,y)=\int_0^{\infty} G(x,y,t)dt. \end{eqnarray} \end{lem} \noindent {\em Proof.} We have \begin{eqnarray} \label{infty} |H(x,y,t)-\frac{1}{vol_g(M)}| \le Ct^{-\frac{n}{2}} \end{eqnarray} for a certain positive constant $C$ depending on $g$ (for a geometric estimate of $C$ see [CL]). On the other hand, we have the inequality (see e.g. [Proposition VII.3.5, Ch]) \begin{eqnarray} \label{zero} |\frac{H(x,y,t)}{{\cal H}(x,y,t)}-1| \le Cd(x,y) \end{eqnarray} for all $t>0$ and $x, y \in M$ with $d(x,y) \le \frac{1}{4}inj_g(M)$, where $C$ is a positive constant depending on $g$, $inj_g(M)$ denotes the injectivity radius of $(M,g)$, and \begin{eqnarray} {\cal H}(x,y,t)=(4\pi t)^{-\frac{n}{2}} e^{-\frac{d(x,y)^2}{4t}}. 
\end{eqnarray} By (\ref{heatrep}) we have for a smooth function $u(x)$ \begin{eqnarray} \label{heatrep1} u(x)&=& -\int_0^{t} d\tau\int_M H(x,y,t-\tau) \Delta_y u(y) dy+\int_M H(x,y,t)u(y)dy \nonumber \\ &=& -\int_0^{t} d\tau\int_M G(x,y,t-\tau) \Delta_y u(y) dy +\int_M G(x,y,t)u(y)dy +u_M \nonumber \\ &=& -\int_0^{t} ds\int_M G(x,y,s) \Delta_y u(y) dy +\int_M G(x,y,t)u(y)dy +u_M \end{eqnarray} By (\ref{infty}) and (\ref{zero}) we can let $t\rightarrow \infty$ in (\ref{heatrep1}) to obtain \begin{eqnarray} u(x)=-\int_M (\int_0^{\infty} G(x,y,s)ds) \Delta_y u(y) dy +u_M. \end{eqnarray} On the other hand, by (\ref{G-0}) we deduce \begin{eqnarray} \int_M (\int_0^{\infty} G(x,y,s)ds)dy=0 \end{eqnarray} for all $x \in M$. We conclude that (\ref{greenequalA}) holds true. \\ \hspace*{\fill}\rule{3mm}{3mm}\quad \\ \begin{lem} \label{greenformula} There holds \begin{eqnarray} \label{greenequal} G(x,y,t+s)=\int_M G(x,z,s)G(z,y,t)dz \end{eqnarray} for all $x, y \in M$ and $t>0, s>0$, where $dz$ denotes the volume form of $g$ with $z \in M$ as the argument. In particular, we have \begin{eqnarray} \label{squareformula} G(x,x,t)= \int_M G(x,y,\frac{t}{2})^2dy \end{eqnarray} and it follows that $G(x,x,t)>0$ for all $x \in M$ and $t>0$. \end{lem} \noindent {\em Proof.} Note that $G(x,y,t)$ satisfies the heat equation \begin{eqnarray} \label{G-heat} \frac{\partial}{\partial t}G(x,y,t)=\Delta_y G(x,y,t)=\Delta_x G(x,y,t). \end{eqnarray} Choosing $u(x,t)=G(x,y,t+s)$ in (\ref{heatrep}) for each fixed $y$ we deduce, on account of (\ref{G-heat}) and (\ref{G-0}) \begin{eqnarray} G(x,y,t+s)= \int_M H(x,z, t)G(z,y,s)dz= \int_M G(x,z,t)G(z,y,s)dz. \end{eqnarray} Switching $t$ with $s$ we arrive at the desired equation (\ref{greenequal}). The formula (\ref{squareformula}) follows immediately and hence $G(x,x,t) \ge 0$. If $G(x,x,t_0)=0$ for some $x$ and $t_0>0$, then (\ref{squareformula}) implies that $G(x,y,\frac{t_0}{2})=0$ for all $y \in M$. 
Then $G(x,y, t)=0$ for all $y \in M$ and $t \ge \frac{t_0}{2}$, because $G(x,y,t)$ satisfies the heat equation. It follows that $H(x,y,t)=vol_g(M)^{-1}$ for all $y \in M$ and $t \ge \frac{t_0}{2}$. This contradicts (\ref{heatrep}) as is easy to see. We conclude that $G(x,x,t)>0$ for all $x\in M$ and $t>0$. \\ \hspace*{\fill}\rule{3mm}{3mm}\quad \\ \begin{theo} \label{greenth} Assume $n \ge 3$. Then there holds \begin{eqnarray} \label{greenestimate} G_0(x,y) \ge -C_0(n) C_{I,N}^*(M,g)^2 vol_g(M)^{-1} \end{eqnarray} for all $x, y \in M, x \not =y$, where \begin{eqnarray} C_0(n)=\frac{8n^2(n-1)^2}{(n-2)^3} \left(\frac{n-2}{2}\right)^{\frac{4}{n}}. \end{eqnarray} \end{theo} \noindent {\em Proof.} This follows from the arguments in [CL] and [Si] with some modification. By the rescaling invariance of (\ref{greenestimate}) we can assume $vol_g(M)=1.$ Differentiating the equation (\ref{greenequal}), setting $y=x$ and replacing $t$ and $s$ by $\frac{t}{2}$ we deduce for $t>0$ \begin{eqnarray} \label{green-t} \frac{\partial G}{\partial t}(x,x,t)=\int_M \frac{\partial G}{\partial t}(x,z, \frac{t}{2}) G(x,z, \frac{t}{2}) dz=\int_M (\Delta_z G(x, z, \frac{t}{2})) G(x, z, \frac{t}{2}) dz. \end{eqnarray} We integrate (\ref{green-t}) by parts to derive \begin{eqnarray} \label{green-nabla} \frac{\partial G}{\partial t}(x,x,t)=-\int_M |\nabla_z G(x, z, \frac{t}{2})|^2 dz. \end{eqnarray} Applying the Poincar\'{e}-Sobolev inequality (\ref{poincare2}) we then obtain \begin{eqnarray} -\frac{\partial G}{\partial t}(x,x,t) \ge \left(\frac{4(n-1)}{n-2}C_{N,I}(M, g)\right)^{-2} \left(\int_M |G(x,z,\frac{t}{2})|^{\frac{2n}{n-2}}dz \right)^{\frac{n-2}{n}}. 
\end{eqnarray} By H\"{o}lder's inequality we have \begin{eqnarray} \left(\int_M |G(x,z,\frac{t}{2})|^2dz \right)^{\frac{n+2}{n}} &=& \left(\int_M |G(x,z,\frac{t}{2})|^{\frac{4}{n+2}}|G(x,z, \frac{t}{2})|^{\frac{2n}{n+2}} dz \right)^{\frac{n+2}{n}} \nonumber \\ &\le& \left(\int_M |G(x,z,\frac{t}{2})|^{\frac{2n}{n-2}}dz\right)^{\frac{n-2}{n}} \left(\int_M |G(x,z,\frac{t}{2})|dz\right)^{\frac{4}{n}}. \nonumber \\ \end{eqnarray} Next observe that $\int_M |G(x, z,t)|dz \le 2$ because $H(x, z, t)>0$. Hence we arrive at \begin{eqnarray} \label{greensquare} -\frac{\partial G}{\partial t}(x,x,t) \ge C \left(\int_M |G(x,z,\frac{t}{2})|^{2}dz \right)^{\frac{n+2}{n}}=C G(x,x,t)^{\frac{n+2}{n}}, \end{eqnarray} where \begin{eqnarray} C=\left( \frac{4(n-1)}{n-2} C_{N,I}(M,g)\right)^{-2}. \end{eqnarray} Integrating (\ref{greensquare}) we derive \begin{eqnarray} G(x,x,t)^{-\frac{2}{n}} \ge G(x,x,s)^{-\frac{2}{n}}+\frac{2}{n}C (t-s) \end{eqnarray} for $t>s>0$. (Note that $G(x,x,t)>0$ by Lemma \ref{greenformula}.) Letting $s \rightarrow 0$ we infer $G(x,x,t)^{-\frac{2}{n}} \ge \frac{2}{n}Ct$, and hence $G(x,x,t) \le C^{-\frac{n}{2}}(\frac{n}{2})^{\frac{n}{2}} t^{-\frac{n}{2}}$. Now we have by Lemma \ref{greenformula} \begin{eqnarray} |G(x,y,t)| &=& |\int_M G(x,z,\frac{t}{2})G(z,y,\frac{t}{2})dz| \nonumber \\ &\le& \left(\int_M G(x,z,\frac{t}{2})^2dz \right)^{\frac{1}{2}} \left(\int_M G(z,y, \frac{t}{2})^2dz\right)^{\frac{1}{2}} \nonumber \\ &=& G(x,x,\frac{t}{2})^{\frac{1}{2}} G(y,y,\frac{t}{2})^{\frac{1}{2}} \le C^{-\frac{n}{2}}(\frac{n}{2})^{\frac{n}{2}} t^{-\frac{n}{2}}. \end{eqnarray} Since $H(x,y,t)>0$ we have $G(x,y,t) \ge -\frac{1}{vol_g(M)}=-1$. 
We deduce for each $\tau>0$ \begin{eqnarray} \label{G} G(x,y)&=& \int_0^{\infty}G(x,y,t)dt \ge -\int_0^{\tau} dt -\int_{\tau}^{\infty} C^{-\frac{n}{2}}(\frac{n}{2})^{\frac{n}{2}} t^{-\frac{n}{2}} dt \nonumber \\ &=& -\tau-\frac{n-2}{2}C^{-\frac{n}{2}}(\frac{n}{2})^{\frac{n}{2}} \tau^{-\frac{n-2}{2}}=-\tau-C_1\tau^{-\frac{n-2}{2}}, \end{eqnarray} where $C_1=\frac{n-2}{2}C^{-\frac{n}{2}}(\frac{n}{2})^{\frac{n}{2}}$. The minimum of the function $\tau+C_1\tau^{-\frac{n-2}{2}}$ is achieved at $\tau=({C_1(n-2)/2})^{2/n}$ and hence equals \begin{eqnarray} \frac{n}{n-2} (C_1\frac{n-2}{2})^{\frac{2}{n}}=\frac{8n^2(n-1)^2}{(n-2)^3} \left(\frac{n-2}{2}\right)^{\frac{4}{n}}C_{I, N}(M,g)^2.\nonumber \end{eqnarray} We arrive at \begin{eqnarray} G(x,y) \ge -\frac{8n^2(n-1)^2}{(n-2)^3} \left(\frac{n-2}{2}\right)^{\frac{4}{n}}C_{I, N}(M,g)^2, \end{eqnarray} which leads to (\ref{greenestimate}) by rescaling. \\ \sect{Neumann Type Maximum Principles} In this section we consider a fixed closed Riemannian manifold $(M, g)$ of dimension $n \ge 3$ as before. \begin{lem} \label{holder} 1) There hold for $f_1 \in L^p(M)$ and $ f_2 \in L^q(M)$ with $p^{-1}+q^{-1}=1$ \begin{eqnarray} \label{holder1} \|f_1f_2 \|_1^* \le \|f_1\|^*_p \cdot \|f_2\|^*_q. \end{eqnarray} 2) There holds for $p \ge q \ge 1$ and $f \in L^p(M)$ \begin{eqnarray} \label{holder2} \|f\|_q^* \le \|f\|_p^*. \end{eqnarray} \end{lem} \noindent {\em Proof.} These follow straightforwardly from the classical H\"{o}lder inequality. \hspace*{\fill}\rule{3mm}{3mm}\quad \\ \begin{lem} \label{maxth1} Let $n \ge 3$. Assume that $u \in W^{1,2}(M)$ satisfies \begin{eqnarray} \label{delta} \Delta u \ge f+\mbox{ div } \Phi \end{eqnarray} in the weak sense, i.e.
\begin{eqnarray} \label{nabla} \int_M \nabla u \cdot \nabla \phi \le -\int_M f \phi + \int_M \Phi \cdot \nabla \phi \end{eqnarray} for all {\it nonnegative} $\phi \in W^{1,2}(M)$, where $f$ is a measurable function on $M$ such that $f^- \in L^p(M)$, and $\Phi \in L^{2p}(TM)$, with $p>\frac{n}{2}$. Then we have \begin{eqnarray} \label{max1} \sup_M \, (u-\lambda) &\le& A C_1 (\|f^-\|^*_p +\|\Phi\|_{2p}^*) \nonumber \\ &&+A(C_1+\sqrt{2}) \|(u-\lambda)^+\|^*_2 \end{eqnarray} for each $\lambda \in {\bf R}$, where $A$ and $C_1$ are positive numbers depending only on $n, p$ and $C_{N,I}^*(M,g)$. Their explicit values are given in the proof below. \end{lem} \noindent {\em Proof.} The arguments here are inspired by some arguments in [GT]. A special new feature in our argument is the use of the volume-normalized isoperimetric constant (or Sobolev constant) and the volume-normalized $L^p$ norms. We set \begin{eqnarray} a=\|f^-\|^*_p+\|\Phi\|^*_{2p}. \end{eqnarray} Then we set $b=a$ if $a>0$ and $b=1$ if $a=0$. For $L>|\lambda|$ we set $w=\min\{(u-\lambda)^+, L\}+b$. Then $w \in W^{1,2}(M)$ and is bounded. It follows that $w^{\gamma} \in W^{1,2}(M)$ for $\gamma \ge 1$. Moreover, we have $\nabla w=0$ if $u \ge \lambda+ L$, $\nabla w=\nabla u$ if $\lambda<u<L+\lambda$, and $\nabla w=0$ if $u \le \lambda$. Choosing $\phi=w^{\gamma}({\mathrm{Vol} M})^{-1}$ with ${\mathrm{Vol} M}=vol_g(M)$ in (\ref{nabla}) we obtain \begin{eqnarray} \frac{ \gamma}{\mathrm{Vol} M} \int_M |\nabla w|^2 w^{\gamma-1} &\le& -\frac{1}{\mathrm{Vol} M}\int_M f w^{\gamma} + \frac{ \gamma }{\mathrm{Vol} M} \int_M w^{\gamma-1}\Phi \cdot \nabla w \nonumber \\ &\le& \frac{1}{\mathrm{Vol} M} \int_M |f^-| w^{\gamma} + \frac{ \gamma }{\mathrm{Vol} M} \int_M w^{\gamma-1}\Phi \cdot \nabla w. 
\end{eqnarray} First we choose $\gamma=1$ to deduce \begin{eqnarray} \label{gamma=1} \|\nabla w\|_2^{*2} &\le& \| f^- w \|_1^* +\| \Phi \cdot \nabla w\|_1^* \nonumber \\ &\le& \frac{1}{2}\|f^-\|_2^{*2}+ \frac{1}{2}\|w\|_2^{*2} + \frac{1}{2}\|\Phi\|_2^{*2} + \frac{1}{2} \|\nabla w\|_2^{*2}, \end{eqnarray} where we have used Lemma~\ref{holder}. It follows that \begin{eqnarray} \label{gamma=11} \|\nabla w\|_2^{*2} \le \|f^-\|_2^{*2}+\|\Phi\|_{2}^{*2}+ \|w\|_2^{*2} \le \|f^-\|_p^{*2}+\|\Phi\|_{2p}^{*2}+ \|w\|_2^{*2}, \end{eqnarray} where the second inequality follows from Lemma \ref{holder}. Applying the Sobolev inequality (\ref{sobolev*}) we then deduce \begin{eqnarray} \label{w-2n/n-2} \|w\|^*_{\frac{2n}{n-2}} &\le& \frac{2(n-1)}{n-2}C^*_{N,I}(M,g) \|\nabla w\|^*_2+\sqrt{2}\|w\|^*_2 \nonumber \\ &\le& C_1 (\|f^-\|^*_p+\|\Phi\|^*_{2p}) +(C_1+\sqrt{2}) \|w\|^*_2, \end{eqnarray} where $C_1=\frac{2(n-1)}{n-2}C^*_{N,I}(M,g)$. Next we consider general $\gamma\ge 1$. We deduce \begin{eqnarray} &&\frac{\gamma}{\mathrm{Vol} M} \int_M |\nabla w|^2 w^{\gamma-1} \le \frac{1}{b \mathrm{Vol} (M)}\int_M |f^-| w^{\gamma+1} +\frac{\gamma}{b \mathrm{Vol} (M)}\int_M w^{\gamma} |\Phi| |\nabla w| \nonumber \\ &&\le \frac{1}{b} \|f^- w^{\gamma+1}\|_1^* +\frac{\gamma}{2} \| |\nabla w|^2 w^{\gamma-1} \|_1^*+ \frac{\gamma}{2b^2} \| w^{\gamma+1} |\Phi|^2 \|_1^*. \end{eqnarray} It follows that, on account of Lemma \ref{holder} \begin{eqnarray} && \frac{\gamma}{2}\| |\nabla w|^2 w^{\gamma-1} \|_1^* \nonumber \\ &\le& \frac{1}{b} \|f^- w^{\gamma+1}\|_1^* + \frac{\gamma}{2b^2} \| w^{\gamma+1} |\Phi|^2 \|_1^* \nonumber \\ &\le& \frac{1}{b}\|f^-\|_p^* \cdot \|w\|_{(\gamma+1)\frac{p}{p-1}}^{*(\gamma+1)} + \frac{\gamma}{2b^2} {\|w\|}_{(\gamma+1)\frac{p}{p-1}}^{*(\gamma+1)} \cdot \|\Phi\|^{*2}_{2p} \nonumber \\ &\le& \|w\|_{(\gamma+1)\frac{p}{p-1}}^{*(\gamma+1)} + \frac{\gamma}{2} \|w\|_{(\gamma+1)\frac{p}{p-1}}^{*(\gamma+1)}.
\end{eqnarray} It follows that \begin{eqnarray} \label{moser1} \|\nabla w^{\frac{\gamma+1}{2}}\|_2^{*2} \le \frac{(\gamma+2)(\gamma+1)^2}{4\gamma}\|w\|_{(\gamma+1)\frac{p}{p-1}}^{*(\gamma+1)}. \end{eqnarray} Now we apply the Sobolev inequality (\ref{sobolev*}) and (\ref{moser1}) to deduce \begin{eqnarray} \label{moser2} \|w\|_{(\gamma+1) \frac{n}{n-2}}^{*\frac{\gamma+1}{2}} &=& \|w^{\frac{\gamma+1}{2}}\|^*_{\frac{2n}{n-2}}\le A_{\gamma}\|w\|_{(\gamma+1)\frac{p}{p-1}}^{*\frac{\gamma+1}{2}} +\sqrt{2}\|w^{\frac{\gamma+1}{2}}\|^*_2 \nonumber \\ &=& A_{\gamma}\|w\|_{(\gamma+1)\frac{p}{p-1}}^{*\frac{\gamma+1}{2}}+\sqrt{2}\|w\|_{\gamma+1}^{*\frac{\gamma+1}{2}} \nonumber \\ &\le& (A_{\gamma}+\sqrt{2})\|w\|_{(\gamma+1)\frac{p}{p-1}}^{*\frac{\gamma+1}{2}}, \end{eqnarray} where \begin{eqnarray} A_{\gamma}=C^*_{N,I}(M,g) \frac{(n-1)(\gamma+1)}{n-2} \sqrt{\frac{\gamma+2}{\gamma}}. \end{eqnarray} Consequently, we obtain \begin{eqnarray} \|w\|^*_{(\gamma+1) \frac{n}{n-2}} \le (A_{\gamma}+\sqrt{2})^{\frac{2}{\gamma+1}}\|w\|^*_{(\gamma+1)\frac{p}{p-1}}. \end{eqnarray} Replacing $\gamma+1$ by $\gamma \ge 2$ we infer \begin{eqnarray} \label{first} \|w\|^*_{\gamma \frac{n}{n-2}} \le (A_{\gamma-1}+\sqrt{2})^{\frac{2}{\gamma}}\|w\|^*_{\gamma\frac{p}{p-1}}. \end{eqnarray} Now we choose $ \gamma_0=1+\frac{n(p-2)+2p}{(n-2)p}$ and $\gamma_k=\gamma_{k-1} \frac{n(p-1)}{(n-2)p}$ for $k \ge 1$, i.e. $\gamma_k=\gamma_0 (\frac{n(p-1)}{(n-2)p})^k$. Since $p>\frac{n}{2}$, we have $\gamma_0>2$ and $\frac{n(p-1)}{(n-2)p}>1$. We also have $ \gamma_0 \frac{p}{p-1}=\frac{2n}{n-2}$. We deduce \begin{eqnarray} \label{second} \|w\|^*_{\gamma_k} \le \left(\prod\limits_{1\le i\le k} (A_{\gamma_i-1}+\sqrt{2})^{\frac{2}{\gamma_i}}\right) \|w\|^*_{\frac{2n}{n-2}}. \end{eqnarray} Since $\frac{n(p-1)}{(n-2)p}>1$, the product $\prod\limits_{1\le i<\infty} (A_{\gamma_i-1}+\sqrt{2})^{\frac{2}{\gamma_i}}$ converges. We denote its value by $A$. 
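The exponent bookkeeping behind the iteration can be verified symbolically. A small sympy sketch (ours, not part of the paper) checking that $\gamma_0\frac{p}{p-1}=\frac{2n}{n-2}$, that $\gamma_0>2$ and the growth ratio exceeds $1$ precisely when $p>\frac n2$, and hence that the $\gamma_k$ grow geometrically, so $\sum 2/\gamma_k<\infty$ and the infinite product defining $A$ converges:

```python
import sympy as sp

n, p = sp.symbols('n p', positive=True)
gamma0 = 1 + (n*(p - 2) + 2*p)/((n - 2)*p)

# the starting exponent feeds exactly into the Sobolev exponent 2n/(n-2)
assert sp.simplify(gamma0*p/(p - 1) - 2*n/(n - 2)) == 0

# gamma_0 - 2 and (ratio - 1) have the sign of (p - n/2)
assert sp.simplify(gamma0 - 2 - (4*p - 2*n)/((n - 2)*p)) == 0
ratio = n*(p - 1)/((n - 2)*p)
assert sp.simplify(ratio - 1 - (2*p - n)/((n - 2)*p)) == 0

# spot check: n = 4, p = 3 > n/2 gives gamma_0 > 2 and ratio > 1,
# so gamma_k = gamma_0 * ratio**k grows geometrically
g0, r = [float(e.subs({n: 4, p: 3})) for e in (gamma0, ratio)]
assert g0 > 2 and r > 1
```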
Letting $k \rightarrow \infty$ we infer, on account of (\ref{w-2n/n-2}) \begin{eqnarray} \|w\|_{\infty} \le A\|w\|^*_{\frac{2n}{n-2}} \le A C_1 (\|f^-\|^*_p+\|\Phi\|^*_{2p}) +A(C_1+\sqrt{2}) \|w\|^*_2. \end{eqnarray} Letting $L \rightarrow \infty$ we then arrive at (\ref{max1}). \\ \hspace*{\fill}\rule{3mm}{3mm}\quad \\ \noindent {\bf Remark 3} An important point in the above proof is to break the scaling invariance. Basically, the construction of the function $w$ is not scaling invariant. More precisely, if $g$ is transformed to $\bar g=\alpha g$ for a positive constant $\alpha$, then (\ref{delta}) is transformed to \begin{eqnarray} \Delta_{\bar g} u \ge \bar f+\mbox{ div } \bar \Phi, \end{eqnarray} where $\bar f=\alpha^{-1}f$ and $\bar \Phi=\alpha^{-1} \Phi$. We have \begin{eqnarray} \|\bar f\|^*_{p, \bar g}+ \|\bar \Phi\|^*_{2p, \bar g}=\alpha^{-1} ( \|f\|^*_{p}+ \|\Phi\|^*_{2p}). \end{eqnarray} It follows that $a$ and hence $b$ are not scaling invariant. The fact that the estimate (\ref{max1}) is not scaling invariant is a result of this. This unconventional feature is needed for our purpose of controlling the constants in the estimates in terms of $C_{I, N}^*(M, g)$ alone. \\ \noindent {\bf Remark 4} Since the estimate (\ref{max1}) is not scaling invariant, one may wonder what happens to it if one lets the above scaling factor $\alpha$ go to $0$ or $\infty$. The answer to this question is simple: the estimate deteriorates in the process. Indeed, as $\alpha \rightarrow 0$, the factor $AC_1$ in the estimate converges to a positive constant depending only on the dimension, but (the transformed) $\|f\|_p^*+\|\Phi\|^*_{2p}$ approaches $\infty$. As $\alpha \rightarrow \infty$, $AC_1$ approaches $\infty$ and $\|f\|_p^*+\|\Phi\|^*_{2p}$ approaches $0$, but the former has more weight than the latter. On the other hand, the non-invariance of the estimate allows us to vary $\alpha$ to obtain the optimal estimate.
We do not pursue this in this paper because it is not needed for our main purpose. The same question can be asked about the estimate in Theorem A. The answer is obviously the same. \\ \begin{theo} \label{maxth2} Let $n \ge 3$. Assume that $u \in W^{1,2}(M)$, $f \in L^p(M)$ and $\Phi \in L^{2p}(TM)$ for some $p>\frac{n}{2}$, which satisfy \begin{eqnarray} \label{delta2} \Delta u=f+\mbox{ div } \Phi \end{eqnarray} in the weak sense, i.e. \begin{eqnarray} \label{nabla2} \int_M \nabla u \cdot \nabla \phi = -\int_M f \phi + \int_M \Phi \cdot \nabla \phi \end{eqnarray} for all $\phi \in W^{1,2}(M)$. Then we have \begin{eqnarray} \label{max2} \sup_M \, |u-u_M| \le C_2 (\|f\|^*_p+\|\Phi\|^*_{2p}), \end{eqnarray} where $C_2=AC_1\left[1+2\max\{C_1,1\}(C_1+\sqrt{2})\right]$ with $A$ and $C_1$ being from Lemma \ref{maxth1}. \end{theo} \noindent {\em Proof.} Choosing $\phi=(u-u_M)({\mathrm{Vol} M})^{-1}$ in (\ref{nabla2}) we deduce by applying Lemma \ref{holder}, as in (\ref{gamma=1}) and (\ref{gamma=11}) \begin{eqnarray} \frac{1}{\mathrm{Vol}(M)} \int_M |\nabla u|^2 &=& -\frac{1}{\mathrm{Vol}(M)}\int_M f(u-u_M)+ \frac{1}{\mathrm{Vol}(M)} \int_M \Phi \cdot \nabla u \nonumber \\ &\le& \|f\|^*_2 \cdot \|u-u_M\|^*_2 +\|\Phi\|^*_{2} \cdot \|\nabla u\|^*_2 \nonumber \\ &\le& \|f\|^*_p \cdot \|u-u_M\|^*_2 +\|\Phi\|^*_{2p} \cdot \|\nabla u\|^*_2 \nonumber \\ &\le& \|f\|^*_p \cdot \|u-u_M\|^*_2 +\frac{1}{2} \|\Phi\|^{*2}_{2p} +\frac{1}{2} \|\nabla u\|_2^{*2}. \end{eqnarray} Hence \begin{eqnarray} \|\nabla u\|_2^{*2} \le 2\|f\|^*_p \cdot \|u-u_M\|^*_2 +\|\Phi\|^{*2}_{2p}. \end{eqnarray} Combining this with the Poincar\'{e} inequality (\ref{poincare1*}) we then obtain \begin{eqnarray} \|u-u_M\|^*_2 &\le& \frac{2(n-1)}{n-2}C^*_{N,I}(M,g) \left(\sqrt{2}\|f\|^{*\frac{1}{2}}_p \cdot \|u-u_M\|^{*\frac{1}{2}}_2+\|\Phi\|^*_{2p}\right) \nonumber \\ &\le& \frac{1}{2} \|u-u_M\|^*_2 +C_1^2\|f\|^*_p +C_1\|\Phi\|^*_{2p}, \end{eqnarray} where $C_1=\frac{2(n-1)}{n-2}C^*_{N,I}(M,g)$ as before.
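The absorption step in the last display is the elementary inequality $\sqrt2\,c\,a^{1/2}X^{1/2}\le \frac12 X+c^2 a$ (with $X=\|u-u_M\|^*_2$, $a=\|f\|^*_p$, $c=C_1$), which holds because the squared gap is a perfect square. A quick symbolic check with generic positive stand-ins (names are ours):

```python
import sympy as sp

X, a, c = sp.symbols('X a c', positive=True)

# sqrt(2)*c*sqrt(a)*sqrt(X) <= X/2 + c**2*a: squaring both sides,
# the difference of squares is exactly (X/2 - c**2*a)**2 >= 0
gap = (X/2 + c**2*a)**2 - (sp.sqrt(2)*c*sp.sqrt(a)*sp.sqrt(X))**2
assert sp.simplify(gap - (X/2 - c**2*a)**2) == 0
```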
It follows that \begin{eqnarray} \label{square} \|u-u_M\|^*_2 \le 2C_1^2\|f\|^*_p +2C_1\|\Phi\|^*_{2p} \le 2\max\{C_1^2,C_1\}(\|f\|^*_p+\|\Phi\|^*_{2p}). \end{eqnarray} Combining (\ref{max1}) with $\lambda=u_M$ and (\ref{square}) we then arrive at \begin{eqnarray} \sup_M \, (u-u_M) \le C_2 (\|f\|^*_p+ \|\Phi\|^*_{2p}). \end{eqnarray} Replacing $u$ by $-u$ we obtain \begin{eqnarray} \inf_M \, (u-u_M) \ge -C_2 (\|f\|^*_p+ \|\Phi\|^*_{2p}). \end{eqnarray} The estimate (\ref{max2}) follows. \\ \hspace*{\fill}\rule{3mm}{3mm}\quad \\ \noindent {\bf Proof of Theorem A} \\ Replacing $f$ by $f^-$ we can assume $f\le 0$. There is a unique weak solution $v \in W^{1, 2p}(M)$ of the equation \begin{eqnarray} \label{help} \Delta v=f-f_M+\mbox{ div } \Phi \end{eqnarray} with $v_M=0$. Indeed, we can minimize the functional \begin{eqnarray} \label{F} F(v)=\int_M (|\nabla v|^2-(f-f_M)v-\Phi \cdot \nabla v) \end{eqnarray} for $v \in W^{1,2}(M)$ under the constraint $v_M=0$. By the H\"older inequality and the Poincar\'{e} inequality (\ref{poincare1}) we have \begin{eqnarray} F(v) \ge c\|v\|_{1,2}^2-C(\|f-f_M\|_p^2 +\|\Phi\|_{2p}^2) \end{eqnarray} for some positive constants $c$ and $C$, where $\|v\|_{1,2}$ denotes the $W^{1,2}$ norm of $v$. Hence a minimizer $v$ exists, which is a desired solution of (\ref{help}). Its uniqueness follows from Theorem \ref{maxth2}. The property $v \in W^{1,2p}(M)$ follows from the regularity theory for elliptic operators in divergence form. By Theorem \ref{maxth2} we have \begin{eqnarray} \label{v} \sup_M \, v \le C_2 (\|f-f_M\|^*_p + \|\Phi\|^*_{2p}) \le C_2(2\|f\|^*_p+\|\Phi\|^*_{2p}) . \end{eqnarray} We set $w=u-v$. Then we have $w \in W^{1, q}(M)$ with $q =\min\{2p, \alpha\}>n$. There holds \begin{eqnarray} \Delta w =\Delta u -\Delta v \ge f_M \end{eqnarray} in the weak sense, i.e. \begin{eqnarray} \label{weak} \int_M \nabla w \cdot \nabla \phi \le -f_M \int_M \phi \end{eqnarray} for all nonnegative $\phi \in W^{1, \frac{q}{q-1}}(M)$. 
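The existence argument above solves (\ref{help}) by minimizing a Dirichlet-type functional under the mean-zero constraint. As a purely illustrative finite-dimensional analogue (entirely ours: the closed manifold is a flat circle, $\Phi=0$, and the Laplacian is a periodic second-difference matrix), one can compute the mean-zero solution of $\Delta v=f-f_M$ and verify it:

```python
import numpy as np

# discretize S^1 by N equispaced points; theta_j = j*h
N = 128
h = 2*np.pi/N
theta = np.arange(N)*h
f = np.cos(3*theta) + 0.5*np.sin(theta)
f_mean = f.mean()

# periodic second-difference Laplacian (discrete Delta)
L = (-2*np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
     + np.eye(N, k=N-1) + np.eye(N, k=-(N-1))) / h**2

# up to sign/normalization conventions, minimizing the Dirichlet
# functional over mean-zero v amounts to solving L v = f - f_M there;
# the pseudoinverse solves it and lands in the mean-zero subspace
v = np.linalg.pinv(L) @ (f - f_mean)

assert abs(v.mean()) < 1e-8                          # v_M = 0
assert np.max(np.abs(L @ v - (f - f_mean))) < 1e-6   # solves the equation

# agrees with the analytic solution -cos(3t)/9 - sin(t)/2 up to O(h^2)
v_exact = -np.cos(3*theta)/9 - 0.5*np.sin(theta)
assert np.max(np.abs(v - v_exact)) < 1e-2
```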
Now we apply (\ref{greenrep1}) to $w$ to deduce \begin{eqnarray} w(x)=w_M+ \int_M \nabla_y G_0(x, y) \cdot \nabla_y w(y) dy \end{eqnarray} for a.e. $x \in M$. Set $\sigma=\inf_{x \not =y} G_0(x, y)$. By (\ref{greenestimate}) there holds \begin{eqnarray} \sigma \ge -C_0(n) C^*_{I,N}(M,g)^2 vol_g(M)^{-1}. \end{eqnarray} Next we set \begin{eqnarray} G (x,y)=G_0(x,y)-\sigma. \end{eqnarray} We have $G(x,y) \ge 0$ and \begin{eqnarray} w(x)=w_M+ \int_M \nabla_y G(x, y) \cdot \nabla_y w(y) dy \end{eqnarray} for a.e. $x \in M$. By Lemma \ref{greenintegral} and the fact $\frac{q}{q-1} <\frac{n}{n-1}$, (\ref{weak}) holds true with $\phi=G(x, \cdot)$ for each given $x\in M$, i.e. \begin{eqnarray} \int_M \nabla_y G(x, y) \cdot \nabla_y w dy \le -f_M \int_M G(x, y)dy. \end{eqnarray} We then deduce \begin{eqnarray} \label{w} w(x)-w_M &\le& -f_M \int_M G(x,y)dy \nonumber \\ &=& |f_M| \left(\int_M G_0(x,y)dy -\sigma vol_g(M)\right) \nonumber \\ &=& -\sigma vol_g(M) |f_M| \nonumber \\ &\le& C_0(n) C^*_{I,N}(M,g)^2 |f_M| \le C_0(n) C^*_{I,N}(M,g)^2 \|f\|^*_1 \nonumber \\ &\le& C_0(n) C^*_{I,N}(M,g)^2 \|f\|^*_p \end{eqnarray} for a.e. $x \in M$. Since $w_M=u_M$, combining (\ref{v}) and (\ref{w}) yields \begin{eqnarray} u(x)\le u_M+(C_0(n)C^*_{I,N}(M,g)^2+2C_2) \|f\|^*_p + C_2\|\Phi\|^*_{2p} \end{eqnarray} for a.e. $x \in M$. We arrive at (\ref{maxA}) (note that $\sup_M u$ means the essential supremum). \hspace*{\fill}\rule{3mm}{3mm}\quad \\ \noindent {\bf Proof of Theorem B} \\ The estimate (\ref{maxB}) follows straightforwardly from Theorem A and Theorem \ref{gallot}. \\ \hspace*{\fill}\rule{3mm}{3mm}\quad \\ \noindent {\bf Proof of Theorem C} \\ We have $\hat \kappa_{\hat Ric}=0$. By Bonnet-Myers Theorem we have $diam_g(M) \le \pi$. Hence Theorem B implies \begin{eqnarray} u(x) \le u_M+ C(n, p, 0, \pi) (\|f^-\|_p^*+ \|\Phi\|_{2p}^*). \nonumber \end{eqnarray} \hspace*{\fill}\rule{3mm}{3mm}\quad \\
https://arxiv.org/abs/0707.2156
On Hilbert's construction of positive polynomials
In 1888, Hilbert described how to find real polynomials in more than one variable which take only non-negative values but are not a sum of squares of polynomials. His construction was so restrictive that no explicit examples appeared until the late 1960s. We revisit and generalize Hilbert's construction and present many such polynomials.
\section{History and Overview} A real polynomial $f(x_1,\dots,x_n)$ is {\it psd} or {\it positive} if $f(a) \ge 0$ for all $a \in \mathbb R^n$; it is {\it sos} or a {\it sum of squares} if there exist real polynomials $h_j$ so that $f = \sum h_j^2$. For forms, we follow the notation of \cite{CL1} and use $P_{n,m}$ to denote the cone of real psd forms of even degree $m$ in $n$ variables, $\Sigma_{n,m}$ to denote its subcone of sos forms and let $\Delta_{n,m} = P_{n,m} \smallsetminus \Sigma_{n,m}$. The Fundamental Theorem of Algebra implies that $\Delta_{2,m} = \emptyset$; $\Delta_{n,2} = \emptyset$ follows from the diagonalization of psd quadratic forms. The first suggestion that a psd form might not be sos was made by Minkowski in the oral defense of his 1885 doctoral dissertation: Minkowski proposed the thesis that not every psd form is sos. Hilbert was one of his official ``opponents'' and remarked that Minkowski's arguments had convinced him that this thesis should be true for ternary forms. (See \cite{Hi3}, \cite{Min} and \cite{Sch}.) Three years later, in a single remarkable paper, Hilbert \cite{Hi1} resolved the question. He first showed that $F \in P_{3,4}$ is a sum of three squares of quadratic forms; see \cite{Ru} and \cite{Sw} for recent expositions and \cite{PR,PRSS} for another approach. Hilbert then described a construction of forms in $\Delta_{3,6}$ and $\Delta_{4,4}$; after multiplying these by powers of linear forms if necessary, it follows that $\Delta_{n,m} \neq \emptyset$ if $n \ge 3$ and $m \ge 6$ or $n \ge 4$ and $m \ge 4$. The goal of this paper is to isolate the underlying mechanism of Hilbert's construction, show that it applies to situations more general than those in \cite{Hi1}, and use it to produce many new examples. In \cite{Hi1}, Hilbert first worked with polynomials in two variables, which homogenize to ternary forms. 
Suppose $f_1(x,y)$ and $f_2(x,y)$ are two relatively prime real cubic polynomials with nine distinct real common zeros -- $\{\pi_i\}$, indexed arbitrarily -- so that no three of the $\pi_i$'s lie on a line and no six lie on a quadratic. By counting coefficients, one sees that there exists a non-zero quadratic $\phi(x,y)$ with zeros at $\{\pi_1,\dots,\pi_5\}$ and a non-zero quartic $\psi(x,y)$ with the same zeros, and which is singular at $\{\pi_6,\pi_7,\pi_8\}$: the sextic $\phi\psi$ is thus singular at $\{\pi_1,\dots,\pi_8\}$. Hilbert showed that $(\phi\psi)(\pi_9) \neq 0$ and that there exists $c \neq 0$ so that the perturbed polynomial $p = f_1^2 + f_2^2 + c\phi\psi$ is positive. If $p = \sum h_j^2$, then each $h_j$ would be a cubic which vanishes on $\{\pi_1,\dots,\pi_8\}$. But Cayley-Bacharach implies that $h_j(\pi_9) = 0$ for each $j$, hence $p(\pi_9) = 0$, a contradiction. Thus, $p$ homogenizes to a form $P \in \Delta_{3,6}$. Hilbert also considered in \cite{Hi1} three relatively prime real quadratic polynomials, $f_i(x,y,z)$, $1 \le i \le 3$, with eight distinct real common zeros -- $\{\pi_i\}$, indexed arbitrarily -- so that no four of the zeros lie on a plane. There exists a non-zero linear $\phi(x,y,z)$ with zeros at $\{\pi_1,\pi_2,\pi_3\}$ and a non-zero cubic $\psi(x,y,z)$ with the same zeros, and which is singular at $\{\pi_4,\pi_5,\pi_6,\pi_7\}$. Similarly, $(\phi\psi)(\pi_8) \neq 0$ and there exists $c \neq 0$ so that $f_1^2 + f_2^2 + f_3^2+ c\phi\psi$ is positive and not sos. This homogenizes to a form in $\Delta_{4,4}$. In 1893, Hilbert \cite{Hi2} showed that if $F \in P_{3,m}$ with $m \ge 4$, then there exists a form $G \in P_{3,m-4}$ and forms $H_{k}$, $1 \le k \le 3$, so that $GF= H_{1}^2 + H_{2}^2 + H_{3}^2$. (Hilbert's construction does not readily identify $G$ or the $H_k$'s.) 
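The coefficient counts behind the existence of $\phi$ and $\psi$ can be verified numerically. A small numpy sketch (ours; generic random points stand in for the $\pi_i$, and the helper names are hypothetical): a quadratic has $6$ coefficients and vanishing at five points imposes $5$ conditions, while a quartic has $15$ coefficients and vanishing at five points plus singularity at three more imposes $5+3\cdot 3=14$ conditions, so nonzero solutions exist in both cases.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.standard_normal((8, 2))      # pi_1..pi_8; pi_9 is not needed here

def monos(xv, yv, d):
    # monomial basis of polynomials of degree <= d, dimension (d+1)(d+2)/2
    return np.array([xv**i * yv**j for i in range(d + 1) for j in range(d + 1 - i)])

def dmonos(xv, yv, d, var):
    # partial derivatives of the same monomial basis
    out = []
    for i in range(d + 1):
        for j in range(d + 1 - i):
            if var == 'x':
                out.append(i * xv**(i - 1) * yv**j if i else 0.0)
            else:
                out.append(j * xv**i * yv**(j - 1) if j else 0.0)
    return np.array(out)

# quadratic phi: 6 coefficients, 5 vanishing conditions at pi_1..pi_5
A = np.array([monos(xv, yv, 2) for xv, yv in pts[:5]])
assert A.shape == (5, 6)
assert 6 - np.linalg.matrix_rank(A) >= 1     # a nonzero phi exists

# quartic psi: 15 coefficients; vanishing at pi_1..pi_5 and singular
# (value and both partials zero) at pi_6, pi_7, pi_8: 14 conditions
rows = [monos(xv, yv, 4) for xv, yv in pts[:5]]
for xv, yv in pts[5:8]:
    rows += [monos(xv, yv, 4), dmonos(xv, yv, 4, 'x'), dmonos(xv, yv, 4, 'y')]
B = np.array(rows)
assert B.shape == (14, 15)
assert 15 - np.linalg.matrix_rank(B) >= 1    # a nonzero psi exists
```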
In particular, if $F \in P_{3,6}$, then there exists $Q\in P_{3,2}$ so that $QF \in \Sigma_{3,8}$; since $Q\cdot QF \in \Sigma_{3,10}$, $F$ is a sum of squares of rational functions with common denominator $Q$. An iteration of this argument shows that if $F \in P_{3,m}$, then there exists $G$ so that $G^2F$ is sos. Hilbert's 17th Problem \cite{Hi23} asked whether this representation as a sum of squares of rational functions exists for forms in $P_{n,m}$ when $n \ge 4$. For much more on the history of this subject up to 1999, see \cite{Re2}. Recently, Blekherman \cite{Bl} has shown that for fixed degree $m$, the ``probability'' that a psd form is sos goes to 0 as $n$ increases. This result highlights the importance of understanding psd forms which are not sos. Hilbert's restriction on the common zeros meant that no very simple or symmetric example could be constructed, and the first explicit example of any $P \in \Delta_{n,m}$ did not appear for many decades. The only two detailed references to Hilbert's construction before the late 1960s (known to the author) are by Terpstra \cite{Ter} (on biquadratic forms, related to $\Delta_{4,6}$, thanks to Roland Hildebrand for the reference), and an exposition \cite[pp.232-235]{GV} by Gel'fand and Vilenkin of the sextic case only. At a 1965 conference on inequalities, Motzkin \cite{Mo} presented a specific sextic polynomial $m(x,y)$ which is positive by the arithmetic-geometric inequality and not sos by the arrangement of monomials in its Newton polytope. (Hilbert's last assistant, Olga Taussky-Todd, who had a lifelong interest in sums of squares, heard Motzkin speak, and informed him that $m(x,y)$ was the first specific polynomial known to be positive but not sos.) After homogenization, Motzkin's example is \begin{equation} M(x,y,z) = x^4y^2 + x^2y^4 + z^6 - 3x^2y^2z^2 \in \Delta_{3,6}. \end{equation} Around the same time and independently, R. M. 
Robinson \cite[p.264]{Ro} wrote that he saw ``an unpublished example of a ternary sextic worked out recently by W. J. Ellison using Hilbert's Method. It is, as would be expected, very complicated. After seeing this, I discovered that an astonishing simplification would be possible by dropping some unnecessary assumptions made by Hilbert." Robinson observed that the cubics $f_1(x,y) = x^3-x$ and $f_2(x,y) = y^3-y$ have nine common zeros: the $3 \times 3$ square $\{-1,0,1\}^2$. There are eight lines which each contain three of the zeros. Still, the sextic $(x^2-1)(y^2-1)(1-x^2-y^2)$ is positive at (0,0) and singular at the other eight points. By taking the maximum value for $c$ in Hilbert's construction and homogenizing, Robinson showed that \begin{equation} R(x,y,z) = x^6 + y^6 + z^6 - x^4y^2 - x^2y^4 - x^4z^2-y^4z^2 - x^2z^4-y^2z^4+3x^2y^2z^2 \end{equation} is in $\Delta_{3,6}$. Similarly, by taking the three quadratics $x^2-x$, $y^2-y$ and $z^2-z$, whose common zeros are $\{0,1\}^3$, choosing $(1,1,1)$ as the eighth point, and then homogenizing, Robinson showed that \begin{equation} \tilde R(x,y,z,w) = x^2(x-w)^2 + y^2(y-w)^2+z^2(z-w)^2+2xyz(x+y+z-2w) \end{equation} is in $\Delta_{4,4}$. (The only other published implementation of Hilbert's Method known to the author is a 1979 sextic studied by Schm\"udgen \cite{Schm} using $\{-2,0,2\}^2$, with ninth point $(2,0)$.) The papers of Motzkin and Robinson renewed interest in these polynomials, and two more examples in the style of $M$ were presented by Choi and Lam \cite{CL1,CL2}: \begin{equation} S(x,y,z) = x^4y^2 + y^4z^2 + z^4x^2 - 3x^2y^2z^2 \in \Delta_{3,6}, \\ \end{equation} \begin{equation} Q(x,y,z,w) = x^2y^2 + x^2z^2 + y^2z^2 + w^4 - 4wxyz \in \Delta_{4,4}. \end{equation} Here is an overview of the rest of the paper. In section two, we present some preliminary material, mainly from curve theory; it is important to consider reducible (as well as irreducible) polynomials. 
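The explicit forms above are easy to machine-check. A sympy verification (the checks and point lists are ours) that each form vanishes at its advertised zeros; for $R$, these are the eight grid points and two points at infinity, while $R$ remains positive at the ninth point $(0,0,1)$:

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
M = x**4*y**2 + x**2*y**4 + z**6 - 3*x**2*y**2*z**2
R = (x**6 + y**6 + z**6 - x**4*y**2 - x**2*y**4 - x**4*z**2 - y**4*z**2
     - x**2*z**4 - y**2*z**4 + 3*x**2*y**2*z**2)
S = x**4*y**2 + y**4*z**2 + z**4*x**2 - 3*x**2*y**2*z**2
Q = x**2*y**2 + x**2*z**2 + y**2*z**2 + w**4 - 4*w*x*y*z

# M, S, Q vanish where the arithmetic-geometric inequality is tight
assert M.subs({x: 1, y: 1, z: 1}) == 0
assert S.subs({x: 1, y: 1, z: 1}) == 0
assert Q.subs({x: 1, y: 1, z: 1, w: 1}) == 0

# R vanishes at the grid zeros (a, b, 1), (a, b) in {-1,0,1}^2 \ (0,0),
# and at the two points at infinity (1, 1, 0), (1, -1, 0)
grid = [(a, b, 1) for a in (-1, 0, 1) for b in (-1, 0, 1) if (a, b) != (0, 0)]
zeros = grid + [(1, 1, 0), (1, -1, 0)]
assert all(R.subs({x: a, y: b, z: c}) == 0 for a, b, c in zeros)
assert R.subs({x: 0, y: 0, z: 1}) == 1   # positive at the ninth point
```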
In section three, we present our version of Hilbert's Method (see Theorem 3.4), based on more general perturbations and contradictions. There is a class of perturbations of a given positive polynomial with fixed zeros by a polynomial which is singular at these zeros, in which positivity is preserved. By counting dimensions, under certain circumstances, there are polynomials of degree $2d$ which are singular on a set $A$, but are not in the vector space generated by products of pairs of polynomials of degree $d$ which vanish on $A$. If such a polynomial is positive, it cannot be sos. In Robinson's work, the set of cubics vanishing at the eight points is spanned by $\{f_1,f_2\}$, but the vector space of sextics which are singular at the eight points has dimension four and so cannot be spanned by $\{f_1^2,f_1f_2,f_2^2\}$. It is not necessary to construct $\phi$ and $\psi$ to find this new sextic, although its behavior at the ninth point must be analyzed to show that a successful perturbation is possible. We show in Theorem 4.1 that Hilbert's Method works when $f$ and $g$ are ternary cubics with exactly nine real intersections, whether or not three are on a line or six on a quadratic. (In other words, Robinson's ``astonishing simplification'' always works.) We also show that Hilbert's Method applies to the set of cubics which vanish on a set of seven zeros, no four on a line, not all on a quadratic; see Theorem 4.3. \begin{example} Let \begin{equation} \begin{gathered} {\mathcal A} = \{(1,0,0),(0,1,0),(0,0,1),(1,1,1),(1,1,-1),(1,-1,1),(1,-1,-1)\}, \\ F_1(x,y,z) = x(y^2-z^2), F_2(x,y,z) = y(z^2-x^2), F_3(x,y,z) = z(x^2-y^2), \\ G(x,y,z) = (x^2-y^2)(x^2-z^2)(y^2-z^2). \end{gathered} \end{equation} It is easy to show that the $F_k$'s span the set of ternary cubics which vanish on $\mathcal A$ and that $G$ is singular on $\mathcal A$ and not in the span of the $F_jF_k$'s. It follows from Theorem 4.3 that for some $c>0$, $P_c = F_1^2+F_2^2+F_3^2+cG$ is psd and not sos. 
In fact, $P_1 = 2S$, providing a new construction of (1.4). \end{example} In section five, we look at the sections of the cones $P_{3,6}$ and $\Sigma_{3,6}$ consisting of ternary sextics with the eight zeros of Theorem 4.1. In addition to some general results, we give a one-parameter family $\{R_t: t > 0\}$ of forms in $\Delta_{3,6}$ with ten zeros and such that $R_1 = R$: \begin{equation} \begin{gathered} R_t(x,y,z) := \\ \left(\frac{t^4+2t^2-3}{3} \right)(x^3-x z^2)^2 + \left(\frac{1+2t^2-3t^4}{3t^4}\right)(y^3-y z^2)^2 + R(x,y,z). \end{gathered} \end{equation} We give necessary and sufficient conditions for a sextic polynomial $p(x,y)$ with zeros at $\{-1,0,1\}^2 \setminus (0,0)$ to be psd and to be sos. In section six, we present more examples in $\Delta_{3,6}$. This paper would not be complete without an explicit illustration of Hilbert's Method under his original restrictions. Theorems 4.1 and 4.3 and other techniques are then applied to produce new forms in $\Delta_{3,6}$, including one-parameter families which include $R$, $S$ and $M$. For $t^2 < \frac 12$, let \begin{equation} \begin{gathered} M_t(x,y,z) = (1-2t^2)(x^4y^2+x^2y^4) + t^4(x^4z^2+y^4z^2)\\- (3 - 8t^2+2t^4)x^2y^2z^2 -2t^2(x^2+y^2)z^4 + z^6; \end{gathered} \end{equation} $M_t \in \Delta_{3,6}$ has ten zeros and $M_0 = M$. Let \begin{equation} \begin{gathered} S_t(x,y,z) = t^4(x^6+y^6+z^6) + (1-2t^6)(x^4y^2+y^4z^2+z^4x^2)\\ + (t^8 - 2t^2)(x^2y^4+y^2z^4+z^2x^4)-3(1-2t^2+t^4-2t^6+t^8)x^2y^2z^2; \end{gathered} \end{equation} $S_t \in \Delta_{3,6}$ has ten zeros if $t>0$. Note that $S_0 = S$ and $S_1 = R$, so $S_t$ provides a ``homotopy'' between $S$ and $R$ in $\Delta_{3,6}$ in the set of forms with ten zeros. We also show that \begin{equation} U_c(x,y,z) = x^2y^2(x-y)^2 + y^2z^2(y-z)^2 + z^2x^2(z-x)^2 + cxyz(x-y)(y-z)(z-x) \end{equation} is psd if and only if $|c| \le 4\sqrt{\sqrt 2 - 1}$ and sos only if $c=0$. 
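The boundary identities tying these families together can be confirmed symbolically. A sympy sketch (transcribing (1.1), (1.2), (1.4) and the families above) checking $M_0=M$, $S_0=S$, $S_1=R$ and $R_1=R$:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
M = x**4*y**2 + x**2*y**4 + z**6 - 3*x**2*y**2*z**2
S = x**4*y**2 + y**4*z**2 + z**4*x**2 - 3*x**2*y**2*z**2
R = (x**6 + y**6 + z**6 - x**4*y**2 - x**2*y**4 - x**4*z**2 - y**4*z**2
     - x**2*z**4 - y**2*z**4 + 3*x**2*y**2*z**2)

Mt = ((1 - 2*t**2)*(x**4*y**2 + x**2*y**4) + t**4*(x**4*z**2 + y**4*z**2)
      - (3 - 8*t**2 + 2*t**4)*x**2*y**2*z**2
      - 2*t**2*(x**2 + y**2)*z**4 + z**6)
St = (t**4*(x**6 + y**6 + z**6)
      + (1 - 2*t**6)*(x**4*y**2 + y**4*z**2 + z**4*x**2)
      + (t**8 - 2*t**2)*(x**2*y**4 + y**2*z**4 + z**2*x**4)
      - 3*(1 - 2*t**2 + t**4 - 2*t**6 + t**8)*x**2*y**2*z**2)
Rt = (((t**4 + 2*t**2 - 3)/3)*(x**3 - x*z**2)**2
      + ((1 + 2*t**2 - 3*t**4)/(3*t**4))*(y**3 - y*z**2)**2 + R)

assert sp.expand(Mt.subs(t, 0) - M) == 0   # M_0 = M
assert sp.expand(St.subs(t, 0) - S) == 0   # S_0 = S
assert sp.expand(St.subs(t, 1) - R) == 0   # S_1 = R
assert sp.expand(Rt.subs(t, 1) - R) == 0   # R_1 = R
```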
We conclude the section by returning to a subject brought up by Robinson: $(ax^2+by^2+cz^2)R(x,y,z)$ is sos if and only if $a,b,c \ge 0$ and $\sqrt a, \sqrt b, \sqrt c$ are the sides of a (possibly degenerate) triangle. In section seven, we discuss the zeros of extremal ternary forms, using the perturbation argument from Hilbert's Method and show that if $p \in \Delta_{3,6}$ has exactly ten zeros, then it is extremal in the cone $P_{3,6}$. We present supporting evidence for the conjecture that, at least in a limiting sense, all extremal forms in $\Delta_{3,6}$ have ten zeros. Finally, in section eight, we apply Hilbert's Method to provide a family of positive polynomials in two variables in even degree $\ge 6$ which are not sos. We also speculate on the general applicability of Hilbert's Method in higher degree. Bezout's Theorem becomes more complicated in more variables, and for that reason, we have confined our discussions to ternary forms. However, we wish to record a somewhat unexpected connection between $\tilde R$ and $Q$ (c.f. (1.3), (1.5)): \begin{equation} \tilde R(x-w,y-w,z-w,x+y+z-w) = 2 Q(x,y,z,w). \end{equation} Robinson's example, after homogenization and this change in variables, gives a new derivation of the Choi-Lam example. The set of quaternary quadratics which vanish on \begin{equation} \begin{gathered} {\mathcal A} = \{(1,0,0,0),(0,1,0,0),(0,0,1,0),\\ (1,1,1,1),(1,1,-1,-1), (1,-1,1,-1),(1,-1,-1,1)\} \end{gathered} \end{equation} is spanned by $\{xy - zw, xz - yw, xw - yz\}$, and any such quadratic also vanishes at $(0,0,0,1)$. The form $Q$ is evidently psd by the arithmetic-geometric inequality, singular on $\mathcal A$ and positive at $(0,0,0,1)$, and so is not sos. Parts of this paper have been presented at many conferences over the last several years. The author thanks the organizers for their many invitations to speak, and his friends and colleagues for their encouragement and suggestions. 
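The identity (1.11) connecting $\tilde R$ and $Q$, and the vanishing claims around (1.12), are also straightforward to verify. A sympy check (ours):

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')

def R_tilde(a, b, c, d):
    # the quaternary quartic (1.3)
    return (a**2*(a - d)**2 + b**2*(b - d)**2 + c**2*(c - d)**2
            + 2*a*b*c*(a + b + c - 2*d))

Q = x**2*y**2 + x**2*z**2 + y**2*z**2 + w**4 - 4*w*x*y*z

# the change of variables in (1.11)
lhs = R_tilde(x - w, y - w, z - w, x + y + z - w)
assert sp.expand(lhs - 2*Q) == 0

# the quadratics spanning the space vanishing on the set (1.12) also
# vanish at (0,0,0,1), while Q is positive there
A = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (1, 1, 1, 1),
     (1, 1, -1, -1), (1, -1, 1, -1), (1, -1, -1, 1)]
quads = [x*y - z*w, x*z - y*w, x*w - y*z]
sub = lambda f, pt: f.subs(dict(zip((x, y, z, w), pt)))
assert all(sub(q, pt) == 0 for q in quads for pt in A + [(0, 0, 0, 1)])
assert sub(Q, (0, 0, 0, 1)) == 1
```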
\section{Preliminaries} Throughout this paper, we toggle between forms $F$ in $k$ variables and polynomials $f$ in $k-1$ variables, with the ordinary convention that \begin{equation} \begin{gathered} f(x_1,\dots,x_{k-1}) := F(x_1,\dots,x_{k-1},1), \\ F(x_1,\dots,x_k) := x_k^d f(\tfrac{x_1}{x_k}, \dots, \tfrac{x_{k-1}}{x_k}), \end{gathered} \end{equation} where $d = \deg f$. For even $d$, it is easy to see that $F$ and $f$ are simultaneously psd or sos. It is usually more convenient to use forms, since $F \in P_{k,m}$ if and only if $F(u) \ge 0$ for $u$ in the compact set $S^{k-1}$, simplifying perturbation. On the other hand, the zeros of $f$ can be isolated, whereas those of $F$ are not. Following \cite{CLR1}, we define the {\it zero-set} of any $k$-ary $m$-ic form $F$ by \begin{equation} {\mathcal Z}(F):= \{(a_1,\dots,a_k) \in \mathbb R^k\ : \ F(a_1,\dots,a_k) =0 \}. \end{equation} We have $0 \notin {\mathcal Z}(F)$ by convention; $|{\mathcal Z}(F)|$ will be interpreted as the number of lines in ${\mathcal Z}(F)$, and only one representative of each line need be given. If $a \in {\mathcal Z}(F)$ and $a_k \neq 0$, then $a$ corresponds to a unique zero of $f$; if $a_k = 0$, then $a$ corresponds to a {\it zero of $f$ at infinity}. We also define \begin{equation} {\mathcal Z}(f):= \{(a_1,\dots,a_{k-1}) \in \mathbb R^{k-1}\ : \ f(a_1,\dots,a_{k-1}) =0 \}, \end{equation} for non-homogeneous $f$. It is possible for a strictly positive $f$ to have zeros at infinity. Consider $f(x,y) = x^2 + (xy-1)^2$ (and $F(x,y,z) = x^2z^2 + (xy-z^2)^2$): clearly, $f(a,b)>0$ for $(a,b) \in \mathbb R^2$ and ${\mathcal Z}(F) = \{(1,0,0),(0,1,0)\}$. If $f$ is positive and $a \in {\mathcal Z}(f)$, then of course $\frac{\partial f}{\partial x_i}(a) = 0$ for all $i$. We shall say that $f$ is {\it round at $a$} if $f_a$, the second-order component of the Taylor series of $f$ at $a$, is a positive definite quadratic form. This is a ``singular non troppo'' zero for a positive polynomial.
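Both phenomena above are easy to check concretely. A sympy sketch (ours) verifying the zeros at infinity of $F(x,y,z)=x^2z^2+(xy-z^2)^2$, and illustrating roundness with $g=(x^3-x)^2+(y^3-y)^2$, whose Hessian at the zero $(1,1)$ is positive definite:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# f > 0 on R^2, yet its homogenization F vanishes at infinity
F = x**2*z**2 + (x*y - z**2)**2
assert F.subs({x: 1, y: 0, z: 0}) == 0
assert F.subs({x: 0, y: 1, z: 0}) == 0

# roundness: the second-order Taylor component of g at the zero (1,1)
# is positive definite (Sylvester's criterion on the Hessian)
g = (x**3 - x)**2 + (y**3 - y)**2
H = sp.hessian(g, (x, y)).subs({x: 1, y: 1})
assert H[0, 0] > 0 and H.det() > 0
```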
The corresponding second-order component of Taylor series for $F$ is psd but not positive definite, since $F$ vanishes on lines through the origin. If $F \in P_{n,m}$ (resp. $\Sigma_{n,m}$), and $G$ is derived from $F$ by an invertible linear change of variables, then $G \in P_{n,m}$ (resp. $\Sigma_{n,m}$). Thus, it is harmless to assume when convenient that ${\mathcal Z}(F)$ avoids the hyperplane $a_{n} = 0$; that is, $f$ has no zeros at infinity. Let $\mathbb R_{n,d} \subset \mathbb R[x_1,\dots,x_n]$ denote the $\binom{n+d}n$-dimensional vector space of real polynomials $f(x_1,\dots,x_n)$ with $\deg f \le d$. Suppose $A = \{\pi_1,\dots,\pi_r\} \subset \mathbb R^n$ is given. Let $I_{s,d}(A)$ denote the vector space of those $p \in \mathbb R_{n,d}$ which have an $s$-th order zero at each $\pi_j$. In particular, \begin{equation} \begin{gathered} I_{1,d}(A) = \{ p \in \mathbb R_{n,d}\ : \ p(\pi_j) = 0, \quad 1 \le j \le r \}; \\ I_{2,2d}(A) = \left\{ p \in \mathbb R_{n,2d}\ : \ p(\pi_j) = \tfrac {\partial p}{\partial x_i}(\pi_j) = 0, \quad 1 \le i \le n,\quad 1 \le j \le r \right\}. \end{gathered} \end{equation} Since an $s$-th order zero in $n$ variables imposes $\binom{n+s-1}n$ linear conditions, \begin{equation} \dim I_{s,d}(A) \ge \binom{n+d}n - r\binom{n+s-1}n. \end{equation} In Hilbert's sextic construction, $A = \{\pi_1,\dots,\pi_9\}$ is the set of common zeros of $f_1(x,y)$ and $f_2(x,y)$, and $\dim(I_{1,3}(A)) = 2 > \binom 52 - 9\binom 22$. Let \begin{equation} I_{1,d}^2(A): = \left\{ \sum f_ig_i\ : \ f_i, g_i \in I_{1,d}(A) \right\}. \end{equation} Clearly, $I_{1,d}^2(A) \subseteq I_{2,2d}(A)$. It is essential to Hilbert's Method that this inclusion may be strict; for example, $\phi\psi(\pi_9) > 0$ so $\phi\psi \in I_{2,6}(A) \smallsetminus I_{1,3}^2(A)$. We also need to consider the ``forced'' zeros, familiar from the Cayley-Bacharach Theorem; see \cite{EGH}. Suppose $A\subset \mathbb R^n$ and $I_{1,d}(A)$ are given as above. 
Let $\{f_1,\dots,f_s\}$ be a basis for $I_{1,d}(A)$ and let \begin{equation} \tilde A := \bigcap_{j=1}^s {\mathcal Z}(f_j) \smallsetminus A = {\mathcal Z} \biggl(\sum_{j=1}^s f_j^2 \biggr) \smallsetminus A. \end{equation} Unfortunately, this notation fails to capture forced zeros at infinity. Accordingly, for $A \subset \mathbb R^n$, define the associated projective set ${\mathcal A} \subset \mathbb R^{n+1}$ by \begin{equation} (a_1,\dots,a_n) \in A \iff (a_1,\dots, a_n,1) \in {\mathcal A}. \end{equation} As before, we define $I_{s,d}({\mathcal A})$ to be the set of $d$-ic forms $F(x_1,\dots,x_{n+1})$ which have $s$-th order zeros on $\mathcal A$. Then $f \in I_{s,d}(A)$ if and only if $F \in I_{s,d}({\mathcal A})$. We define \begin{equation} \tilde {\mathcal A} := \bigcap_{j=1}^s {\mathcal Z}(F_j) \smallsetminus {\mathcal A} = {\mathcal Z} \biggl(\sum_{j=1}^s F_j^2 \biggr) \smallsetminus {\mathcal A}. \end{equation} Given $A \subset \mathbb R^n$, $\tilde{\mathcal A} = \emptyset$ when there are no forced zeros, even at infinity. We say that $I_{1,d}(A)$ is {\it full} (or, by abuse of language, that $A$ is full) if, for any $\pi \in A$ and $v \in \mathbb R^n$, there exists $f\in I_{1,d}(A)$ such that $\vec\nabla f(\pi) = v$. Equivalently, if $\{f_1,\dots,f_s\}$ is a basis for $I_{1,d}(A)$ and $f = \sum_j f_j^2$, then $I_{1,d}(A)$ is full if and only if $f$ is round at each $\pi \in A$. Bezout's Theorem in a relatively simple form is essential to our proofs. Suppose $f_1(x,y)$ and $f_2(x,y)$ are relatively prime polynomials of degrees $d_1$ and $d_2$. Let ${\mathcal Z} \subset \mathbb C^2$ denote the set of common (complex) zeros of $f_1$ and $f_2$. Then \begin{equation} d_1d_2 = \sum_{\pi \in {\mathcal Z}} {\mathcal I}_\pi(f_1,f_2), \end{equation} where ${\mathcal I}_\pi(f_1,f_2)$ measures the singularity of the intersection of the curves $f_1=0$ and $f_2=0$ at $\pi$. In particular, ${\mathcal I}_\pi(f_1,f_2) = 1$ if and only if the curves $f_1=0$ and $f_2=0$ are nonsingular at $\pi$ and have different tangents.
Thus, ${\mathcal I}_\pi(f_1,f_2) = 1$ if and only if $f_1^2+f_2^2$ is round at $\pi$, and ${\mathcal I}_\pi(f_1,f_2) \ge 2$ otherwise. If $f_1$ and $f_2$ are both singular at $\pi$, then ${\mathcal I}_\pi(f_1,f_2) \ge 4$. \begin{lemma} Suppose $f_1(x,y), f_2(x,y) \in \mathbb R_{2,d}$ and $|{\mathcal Z}(f_1) \cap {\mathcal Z} (f_2)| = d^2$. If $A \subseteq {\mathcal Z}(f_1) \cap {\mathcal Z} (f_2)$ is such that $I_{1,d}(A)$ has basis $\{f_1,f_2\}$, then $A$ is full. \end{lemma} \begin{proof} It follows from (2.10) that any common zero of $f_1$ and $f_2$ must be real, and that ${\mathcal I}_\pi(f_1,f_2) = 1$ for each common zero $\pi$. Hence $\vec\nabla f_1(\pi)$ and $\vec\nabla f_2(\pi)$ are linearly independent for each $\pi \in A$, and $A$ is full. \end{proof} The next proposition collects some useful information from curve theory. As is customary, if $f(\pi) = 0$, we say that $\pi$ {\it lies on $f$} or $f$ {\it contains} $\pi$. \begin{proposition} All polynomials herein are assumed to be in $\mathbb R[x,y]$, and all enumerated sets of points are assumed to be distinct. These results apply to ternary forms with the obvious modifications. \begin{enumerate} \item If a quadratic $q$ is singular at $\pi$ and $q(\pi') = 0$ for some $\pi' \neq \pi$, then $q = \ell_1\ell_2$ is a product of two linear forms $\ell_j$ containing $\pi$. \item If a set of eight points $A = \{\pi_1,\dots,\pi_8\}$ is given, no four on a line and no seven on a quadratic, then $\dim I_{1,3}(A) = 2$. \item In the situation of (2), if $A_j = A \smallsetminus \{\pi_j\}$, then there exists a cubic $f$ so that $f |_{A_j} = 0$, but $f(\pi_j) \neq 0$; in particular, $\dim I_{1,3}(A_j) = 3$. \item Suppose $f(x,y)$ and $g(x,y)$ are cubics, $A = {\mathcal Z}(f) \cap {\mathcal Z} (g) = \{\pi_1,\dots,\pi_9\}$ and $A_j = A \smallsetminus \{\pi_j\}$. For each $j$, $I_{1,3}(A_j) = I_{1,3}(A)$. In other words, if eight of the points lie on a cubic $h$, then so will the ninth. \item Under the same conditions as (4), no four of the $\pi_i$'s lie on a line and no seven lie on a quadratic.
Three of the $\pi_i$'s lie on a line if and only if the other six lie on a quadratic if and only if $I_{1,3}(A)$ contains a reducible cubic. \end{enumerate} \end{proposition} \begin{proof} For (1), write $q(x,y) = a + bx + cy + dx^2 + exy + fy^2$ and assume by translation that $\pi = (0,0)$. Then $a=b=c=0$ and $q(x,y) = dx^2 + exy + fy^2$. If $\pi' = (r,s) \neq (0,0)$, then $sx - ry$ is a factor of $q$. The next two assertions are classical and proofs can be found, for example, in \cite[Ch.15]{Bi}; (4) is well-known and is often attributed to Cayley-Bacharach, but it was discovered by Chasles; see \cite{EGH}. For (5), if four $\pi_i$'s lie on a line $\ell$, then $\ell$ divides both $f$ and $g$ by Bezout, so that $|{\mathcal Z}(f) \cap {\mathcal Z} (g)| = \infty$. If seven $\pi_i$'s lie on a reducible quadratic $q = \ell_1\ell_2$, then at least four lie on one $\ell_i$, and we are in the earlier case. If they lie on an irreducible $q$, then it must be indefinite, and again, $q$ divides both $f$ and $g$ by Bezout, so that $|{\mathcal Z}(f) \cap {\mathcal Z} (g)| = \infty$. Suppose now that three points of $A$, say $\{\pi_1,\pi_2,\pi_3\}$, lie on the line $\ell$ and let $q$ be the quadratic containing $\{\pi_4,\dots,\pi_8\}$. Then $\ell q = 0$ on $A_8$, so by (4), $(\ell q)(\pi_9) = 0$. Since $\ell(\pi_9)\neq 0$, we must have $q(\pi_9) = 0$; thus six zeros lie on $q$ and $\ell q \in I_{1,3}(A)$. (A similar proof follows if we start with six points lying on the quadratic $q$.) Finally, if $\ell q \in I_{1,3}(A)$, then at most three of the $\pi_i$'s can lie on $\ell$, and at most six can lie on $q$, hence these numbers are exact. \end{proof} \begin{lemma} Suppose $A$ is a set of eight distinct points, no four on a line and no seven on a quadratic, and let $\{f_1,f_2\}$ be a basis for $I_{1,3}(A)$. Then $f_1$ and $f_2$ are relatively prime.
\end{lemma} \begin{proof} If $f_1$ and $f_2$ have a common quadratic factor $q$, then $f_i = \ell_i q$ and at most six points of $A$ lie on $q$, so $\ell_1$ and $\ell_2$ share two points and so are proportional, a contradiction. If $f_1$ and $f_2$ have only a common linear factor $\ell$, then $f_i = \ell q_i$, and at most three points of $A$ lie on $\ell$, so $q_1$ and $q_2$ share five points and so are proportional, again a contradiction. \end{proof} In the situation of Lemma 2.3, Bezout's Theorem has one of three possible implications: (a) there is a ninth point $\pi \in \tilde A$ so that $f_1(\pi) = f_2(\pi) = 0$; (b) $\tilde A = \emptyset$, but $(a,b,0) \in \tilde {\mathcal A}$ is a common zero of $f_1$ and $f_2$ at infinity; (c) ${\mathcal I}_\pi(f_1,f_2) = 2$ for some $\pi \in A$. The first two cases are essentially the same: if (b) occurs, we homogenize and change variables so that the zero is no longer at infinity after dehomogenization. Any necessary construction can then be performed, and the variables changed back. The third case is singular, but seems to be difficult to identify in advance, and is equivalent to the existence of a cubic in $I_{1,3}(A)$ which is singular at some $\pi \in A$. We shall say that a set of eight points $A$ for which (a) or (b) occurs is {\it copacetic}. Since $f_1$ and $f_2$ are real, $f_1(\pi) = f_2(\pi) = 0 \implies f_1(\bar \pi) = f_2(\bar \pi) = 0$. Bezout implies that $\pi = \bar \pi$; that is, the ninth point $\pi$ must be real. We have the following corollary to Lemma 2.1. \begin{lemma} If $A$ is copacetic, then it is full. \end{lemma} The following lemma was probably known a hundred years ago. \begin{lemma} Suppose seven points $A = \{\pi_1,\dots,\pi_7\}$ in the plane are given, not all on a quadratic and no four on a line. Then, up to multiple, there is a unique cubic $f(x,y)$ which is singular at $\pi_1$ and contains $\{\pi_2,\dots,\pi_7\}$. 
\end{lemma} \begin{proof} Since $1 \cdot 3+6\cdot 1< 10$ linear conditions are given, at least one such nonzero $f$ exists. Suppose $f_1$ and $f_2$ satisfy these properties and are not proportional. Then $\sum_j {\mathcal I}_{\pi_j}(f_1,f_2) \ge 2^2 + 6\cdot 1 > 3 \cdot 3$, hence $f_1$ and $f_2$ have a common factor. The common factor could be an irreducible quadratic, a reducible quadratic, or linear. In the first case, $f_1 = \ell_1 q$ and $f_2 = \ell_2 q$, where $q(\pi_1) = \ell_i(\pi_1) = 0$ by Prop. 2.2(1). At least one point, say $\pi_7$, does not lie on $q$, hence $\ell_i(\pi_7) = 0$ as well. Thus the two $\ell_i$'s share two zeros and are proportional, a contradiction. In the second case, we have $f_1 = \ell_1\ell_2\ell_3$ and $f_2 = \ell_1\ell_2\ell_4$, and $\pi_1$ lies on at least two of $\{\ell_1,\ell_2,\ell_3\}$ and two of $\{\ell_1,\ell_2,\ell_4\}$. If $\ell_1(\pi_1) = \ell_2(\pi_1) = 0$, then $\ell_1$ and $\ell_2$ together can contain at most four of the six points $\{\pi_2,\dots,\pi_7\}$, hence $\ell_3$ and $\ell_4$ must have at least two points in common, and so are proportional, again a contradiction. Otherwise, without loss of generality, $\ell_1(\pi_1)=0$ and $\ell_2(\pi_1) \neq 0$, hence $\ell_3(\pi_1)=\ell_4(\pi_1)=0$. In this case, $\ell_1$ and $\ell_2$ can together contain at most five of the six points $\{\pi_2,\dots,\pi_7\}$, so that $\ell_3$ and $\ell_4$ must both contain some $\pi_j$ other than $\pi_1$; since they also both contain $\pi_1$, they are proportional, again a contradiction. Finally, suppose $f_1 = \ell q_1$ and $f_2 = \ell q_2$, where $q_1$ and $q_2$ are relatively prime quadratics, so they share at most four points. If $\ell(\pi_1) = 0$, then $q_j(\pi_1) = 0$ as well and since at least four of $\{\pi_2,\dots,\pi_7\}$ do not lie on $\ell$, they must lie on both $q_1$ and $q_2$. Thus $q_1$ and $q_2$ share five points, a contradiction.
If $\ell(\pi_1) \neq 0$, then $f_1 = \ell \ell_1 \ell_2$ and $f_2 = \ell \ell_3 \ell_4$, where the $\ell_i$'s are distinct lines containing $\pi_1$. But if $\ell(\pi_j) \neq 0$ (which is true for at least four $\pi_j$'s) then $\pi_j$ must also lie on one of $\{\ell_1,\ell_2\}$ and one of $\{\ell_3,\ell_4\}$. That is, the line through $\pi_1$ and $\pi_j$ divides both $\ell_1\ell_2$ and $\ell_3\ell_4$, a final contradiction. \end{proof} The last lemma in this section is used in the proof of Theorem 4.3. \begin{lemma} If $d = 3$ and $A$ is a set of seven points in $\mathbb R^2$, no four on a line and not all on a quadratic, then $A$ is full and $\tilde{\mathcal A} = \emptyset$. \end{lemma} \begin{proof} Choose $\pi_8$ to avoid any line between two points of $A$ and any quadratic determined by five points of $A$. Then $A \cup \{\pi_8\}$ has no four points in a line and no seven on a quadratic, and so $\dim I_{1,3}(A) = 3$ by Prop. 2.2(3). Suppose $\{f_1,f_2,f_3\}$ is a basis for $I_{1,3}(A)$ and for each $j$, consider the map \begin{equation} T_j: (c_1,c_2,c_3) \mapsto \sum_{k=1}^3 c_k \vec\nabla f_k(\pi_j). \end{equation} By Lemma 2.5, $\dim(\ker(T_j)) = 1$, hence each $T_j$ is surjective, and so $A$ is full. Suppose $\pi \in \tilde {\mathcal A}$; after an invertible linear change, we may assume without loss of generality that $\pi \in \tilde A$. By the contrapositive to Prop. 2.2(3), either $A \cup \{\pi\}$ has four points in a line or has seven points on a quadratic. Again, choose $\pi_8$ so that $A_1 = A \cup \{\pi_8\}$ has no four points in a line and no seven on a quadratic. By Prop. 2.2(2), we may assume without loss of generality that $I_{1,3}(A_1)$ has basis $\{f_1,f_2\}$, so $\pi \in \tilde A_1$. Let $A_2 = A_1 \cup \{\pi\}$. Thus $f_1$ and $f_2$ are two cubics which vanish on a set $A_2$ with four points on a line $\ell$ or seven points on a quadratic $q$, and so $f_1$ and $f_2$ have a common factor by Bezout, a contradiction.
\end{proof} \section{Hilbert's Method} We begin this section with a general perturbation result. \begin{lemma}[The Perturbation Lemma] Suppose $f,g \in \mathbb R_{n,2d}$ satisfy the following conditions: \begin{enumerate} \item The polynomial $f$ is positive with no zeros at infinity, and $2d = \deg f \ge \deg g$; \item There is a finite set $V_1$ so that if $v \in V_1$, then $f$ is round at $v$ and $g$ vanishes to second-order at $v$; \item The set $V_2:= {\mathcal Z}(f)\smallsetminus V_1$ is finite and if $w \in V_2$, then $g(w) > 0$. \end{enumerate} Then there exists $c = c(f,g)>0$ so that $f+cg$ is a positive polynomial. \end{lemma} \begin{proof} For $v \in V_1$, let $g_v$ denote the second-order (lowest degree) term of the Taylor series for $g$ at $v$. Since $f_v$ is positive definite, there exists $\alpha(v) > 0$ so that $f_v + \alpha g_v$ is positive definite for $0 \le \alpha \le \alpha(v)$. If $\alpha_0 = \min_v \alpha(v)$, then there exist neighborhoods $\mathcal N_v$ of each $v$ so that $f + \alpha_0 g$ is positive on each ${\mathcal N_v} \smallsetminus \{v\}$. Further, for $w\in V_2$, $(f+\alpha_0 g)(w) = \alpha_0 g(w) > 0$, hence there is a neighborhood $\mathcal N_w$ of $w$ on which $f + \alpha_0 g$ is positive. It follows that $f + \alpha_0g$ is non-negative on the open set $\mathcal N = \bigcup_{v \in V_1} \mathcal N_v \cup \bigcup_{w \in V_2} \mathcal N_w$. Homogenize $f,g$ to forms $F,G$ of degree $2d$ in $n+1$ variables. For $x \in \mathbb R^{n}$, let $||x|| = (1+\sum_i x_i^2)^{1/2}$ and let $\widetilde {\mathcal N}$ be the set of points $\pm u \in S^n$, where $u$ is the image of a point of $\mathcal N$ under the map \begin{equation} (x_1,\dots,x_n) \mapsto \left( \frac{x_1}{||x||},\dots, \frac{x_n}{||x||}, \frac 1{||x||}\right) \in S^n. \end{equation} Then $\widetilde {\mathcal N}$ is open and, since $F$ and $G$ are forms of even degree, $(F+\alpha_0 G)(x) \ge 0$ for $x \in \widetilde {\mathcal N}$.
By hypothesis, ${\mathcal Z}(F) \cap S^n \subset \widetilde {\mathcal N}$, hence $F$ is positive on the complement $\widetilde{\mathcal N}^c = S^n \smallsetminus \widetilde{\mathcal N}$, so it achieves a positive minimum on the compact set $\widetilde{\mathcal N}^c$. Since $G$ is bounded on $S^n$, there exists $\beta > 0$ so that $(F+\beta G)(x) \ge 0$ for $x \in \widetilde {\mathcal N}^c$. It follows that $F+cG$ is psd, where $c = \min\{\alpha_0,\beta\}$. The desired result follows upon dehomogenizing. \end{proof} The following two theorems generalize the contradiction at the heart of Hilbert's construction. \begin{theorem} If $p \in I_{2,2d}(A)$ is sos, then $p \in I_{1,d}^2(A)$. \end{theorem} \begin{proof} If $p = \sum_k h_k^2$, then $p(a) = 0$ for $a \in A$, hence $h_k(a) = 0$, and so $h_k \in I_{1,d}(A)$, implying $p \in I_{1,d}^2 (A)$. \end{proof} Let $I_{1,d}(A)$ have basis $\{f_1,\dots,f_r\}$, and suppose the $\binom {r+1}2$ polynomials $f_if_j, 1 \le i \le j \le r$ are linearly independent; in other words, for each $p \in I_{1,d}^2(A)$ there is a unique quadratic form $Q$ so that $p = Q(f_1,\dots,f_r)$. We call this the {\it independent case}. (We have been unable to find $I_{1,d}(A)$ for which this does not hold.) Let \begin{equation} R_f:= \{ (f_1(x),\dots,f_r(x))\ : \ x \in \mathbb R^n \} \subseteq \mathbb R^r \end{equation} denote the range of the basis as an $r$-tuple. \begin{theorem} Suppose $p = Q(f_1,\dots,f_r) \in I_{1,d}^2(A)$ in the independent case: \begin{enumerate} \item $p$ is sos if and only if $Q$ is an sos quadratic form; \item $p$ is psd if and only if $Q(u) \ge 0$ for $u \in R_f$; \item if $n=2$, $r=2$, and $f_1$ and $f_2$ are relatively prime polynomials with odd degree $d$, then $R_f = {\mathbb R}^2$, hence $p \in I_{1,d}^2(A)$ is psd if and only if it is sos. \end{enumerate} \end{theorem} \begin{proof} If $p = \sum_k h_k^2$ is sos, then as in the last proof, $h_k \in I_{1,d}(A)$.
To be specific, if $h_k = \sum_\ell c_{k\ell} f_\ell$, then by the uniqueness of $Q$, $Q(u_1,\dots,u_r) = \sum_k \bigl(\sum_\ell c_{k\ell}u_\ell\bigr)^2$. Conversely, if $Q = \sum_\ell T_\ell^2$ for linear forms $T_\ell$, then $p = \sum_\ell T_\ell(f_1,\dots,f_r)^2$. The assertion in (2) is immediate. For (3), we first note that, since $(f_1(\lambda x), f_2(\lambda x)) = \lambda^d (f_1(x),f_2(x))$, it suffices to show that every line through the origin intersects $R_f$. By hypothesis, ${\mathcal Z}(f_1)$ and ${\mathcal Z}(f_2)$ are infinite sets, but $|{\mathcal Z}(f_1) \cap {\mathcal Z}(f_2)| \le d^2$. It follows that there exist $\pi$ and $\pi'$ so that $(f_1(\pi),f_2(\pi)) = (1,0)$ and $(f_1(\pi'),f_2(\pi')) = (0,1)$. Now take a continuous curve $\gamma(t) \in {\mathbb R}^2$ so that $\gamma(0) = \pi$, $\gamma(1) = \pi'$, $\gamma(2) = -\pi$ and $\gamma(t) \notin {\mathcal Z}(f_1) \cap {\mathcal Z}(f_2)$, and let $h(t) =(f_1(\gamma(t)),f_2(\gamma(t)))$. We have $h(0) = (1,0)$, $h(1) = (0,1)$, $h(2) = (-1,0)$ and $h(t) \neq (0,0)$, so by continuity, each line through the origin contains some $h(t)$, $0 \le t \le 2$. \end{proof} The hypotheses of Theorem 3.3(3) apply in Hilbert's original construction, with $d=3$. We show in Example 3.1 below that $R_f \neq {\mathbb R}^r$ in general, and combine this with Theorem 3.3(2) to give one instance of a positive form in $I_{1,d}^2(A)$ which is not sos. \begin{theorem}[Hilbert's Method] Suppose a finite set $A \subset \mathbb R^n$ is such that $I_{1,d}(A)$ has basis $\{f_1,\dots,f_s\}$, where $\tilde A$ is finite, $A$ is full and $f = \sum_j f_j^2$ has no zeros at infinity. Further, suppose there exists $g \in I_{2,2d}(A) \smallsetminus I_{1,d}^2(A)$ so that $g(w) > 0$ for each $w \in \tilde A$. Then there exists $c > 0$ so that \begin{equation} p_c = \sum_{j=1}^s f_j^2 + c g \end{equation} is positive and not sos. \end{theorem} \begin{proof} In the notation of Lemma 3.1, let $V_1 = A$ and $V_2 = \tilde A$.
Since $f$ has no zeros at infinity, $\deg f = 2d$, and $A$ is full, the hypotheses of Lemma 3.1 are satisfied. Thus there exists $c>0$ so that $p_c$ is positive, and since $p_c \notin I_{1,d}^2(A)$, it is not sos by Theorem 3.2. \end{proof} \begin{remarks} \qquad \begin{enumerate} \item If $\tilde A = \emptyset$, then the Perturbation Lemma can be applied to $(f,\pm g)$ for both signs, so that $f \pm c g$ is positive for some $c > 0$ and both choices of sign. \item In any particular case, the condition that $f$ is round at $v \in V_1$ can be relaxed in the Perturbation Lemma, so long as a stronger condition is imposed on $g$ to ensure that $f + \alpha g$ is positive in some punctured neighborhood $\mathcal N_v$ of $v$. \item Since Hilbert's Method applies to any basis of $I_{1,d}(A)$, we may replace $\sum_j f_j^2$ by any positive definite quadratic form in the $f_j$'s. \item Hilbert's original sextic contradiction follows from $(\phi\psi)(\pi_9) \neq 0$, which implies that $\phi\psi \in I_{2,6}(A)\smallsetminus I_{1,3}^2(A)$. \item Theorem 4.3 covers a situation in which $\tilde A = \emptyset$, but $I_{2,2d}(A)\smallsetminus I_{1,d}^2(A)$ is non-empty, so Hilbert's Method still applies. \end{enumerate} \end{remarks} \begin{example} We revisit Example 1.1, keeping the notation of (1.6). It is easy to check that \begin{equation} \{F_1^2,F_2^2,F_3^2,F_1F_2,F_1F_3,F_2F_3\} \end{equation} is linearly independent, so that Theorem 3.3 applies. Let \begin{equation} Q(u_1,u_2,u_3) = 5u_1^2 + 5u_2^2 + 5u_3^2 -6u_1u_2-6u_1u_3-6u_2u_3; \end{equation} evidently, $Q$ is not a psd quadratic form. We show now (in two ways) that \begin{equation} T:= Q(F_1,F_2,F_3) \end{equation} is psd; note that $T$ is not sos by Theorem 3.3(1). Let \begin{equation} \begin{gathered} P(v_1,v_2,v_3) : = v_1^4 +v_2^4+v_3^4 - 2v_1^2v_2^2-2v_1^2v_3^2-2v_2^2v_3^2 \\ = (v_1+v_2+v_3)(v_1+v_2-v_3)(v_1-v_2+v_3)(v_1-v_2-v_3).
\end{gathered} \end{equation} A computation shows that \begin{equation} P(F_1,F_2,F_3) = (x^2-y^2)^2(x^2 - z^2)^2(y^2 - z^2)^2 \end{equation} is psd, hence $R_F \subseteq \{(v_1,v_2,v_3)\ :\ P(v_1,v_2,v_3) \ge 0\}$. We claim that $Q \ge 0$ on $R_F$ and so $T$ is psd by Theorem 3.3(2). Since \begin{equation} 5u_1^2 + 5u_2^2 + 5u_3^2 -6u_1u_2-6u_1u_3 \end{equation} is psd, if $\bar u_2\bar u_3 < 0$, say, then $Q(\bar u_1,\bar u_2,\bar u_3) \ge 0$. By symmetry, it follows that $Q(v_1,v_2,v_3)\ge 0$ unless the $v_i$'s have the same sign, and it suffices to suppose $v_1\ge v_2 \ge v_3 \ge 0$. The first three linear factors of $P$ in (3.7) are then nonnegative, so $P(v_1,v_2,v_3) \ge 0$ if and only if $v_1 = v_2 + v_3 + t$ with $t \ge 0$. Since \begin{equation} Q(v_2+v_3+t,v_2,v_3) = 4(v_2-v_3)^2 + t(4v_2+4v_3+5t), \end{equation} the claim is verified. The second proof is direct. We note that $T$ is symmetric: \begin{equation} T(x,y,z) = 5\sum^6 x^4y^2 + 6\sum^3 x^4yz + 6\sum^3 x^3y^3 - 6\sum^6 x^3y^2z - 30x^2y^2z^2. \end{equation} A calculation shows that \begin{equation} \begin{gathered} 2(x^2+y^2+z^2-xy-xz-yz)T(x,y,z) = (x-y)^4(xy + 3xz+3yz+z^2)^2 \\ + (x-z)^4(xz + 3xy+3yz+y^2)^2 + (y-z)^4(yz + 3xy+3xz+x^2)^2, \end{gathered} \end{equation} so $T$ is psd. Although $|{\mathcal Z}(T)| = 7$, the zeros at $(1,1,-1),(1,-1,1),(-1,1,1)$ are not round. In fact, $T(1+t,1-t,-1) = 48t^4 + 4t^6$, etc. These singularities are useful in constructing the representation (3.12). \section{Two applications of Hilbert's Method to ternary sextics} In this section we show that Robinson's simplification of Hilbert's Method works in general. By Prop. 2.2(5), the assumption that no three of the nine points are on a line and no six are on a quadratic is equivalent to saying that no $\alpha f_1 + \beta f_2$ is reducible. Theorem 4.1 removes this restriction.
In Theorem 4.3, we show that Hilbert's Method also applies to the set of ternary sextics which share seven zeros, no four in a line, no seven on a quadratic. \begin{theorem} Suppose $f_1(x,y)$ and $f_2(x,y)$ are two relatively prime real cubics with exactly nine distinct real common zeros. Then Hilbert's Method applies to any subset $A$ of eight of the common zeros. \end{theorem} \begin{proof} Lemma 2.4 shows that if $A = \{\pi_1,\dots,\pi_8\}$ is copacetic, as is assumed here, then $\tilde A =\{\pi_9\}$ and $A$ is full. It follows from (2.5) that $\dim I_{2,6}(A) \ge \binom 82 - 3 \cdot 8 = 4$. Since $I_{1,3}^2(A)$ is spanned by $\{f_1^2,f_1f_2,f_2^2\}$, there exists $0 \neq g \in I_{2,6}(A) \smallsetminus I_{1,3}^2(A)$. If we can show that $g(\pi_9) \neq 0$, then $\pm g(\pi_9) > 0$ for some choice of sign, and Theorem 3.4 applies. Suppose to the contrary that $g(\pi_9) = 0$. Either $g$ is singular at $\pi_9$, or there exists $(\alpha_1,\alpha_2) \neq (0,0)$ so that the tangents of $g$ and $\alpha_1f_1+\alpha_2f_2$ are parallel at $\pi_9$. Since the choice of basis for $I_{1,3}(A)$ was arbitrary, we may assume without loss of generality that $(\alpha_1,\alpha_2) = (1,0)$ from the beginning. In either case, $ {\mathcal I}_{\pi_9}(f_1,g) \ge 2$, so \begin{equation} \sum_{j=1}^9 {\mathcal I}_{\pi_j}(f_1,g) \ge 2 \cdot 9 = \deg(f_1) \cdot \deg(g). \end{equation} Since $f_1$ is a real cubic, there exists $\pi_0 \notin A \cup \tilde A$ so that $f_1(\pi_0) = 0$ and, necessarily, $f_2(\pi_0) \neq 0$. Now let \begin{equation} \tilde g = g - \frac{g(\pi_0)}{f_2^2(\pi_0)} f_2^2, \end{equation} so that $\tilde g(\pi_0) = 0$. Observe that $\tilde g \in I_{2,6}(A) \smallsetminus I_{1,3}^2(A)$, and $g$ and $\tilde g$ agree to second-order at $\pi_9$. In particular, they are either both singular or have the same tangents. Thus, we may replace $g$ by $\tilde g$ for purposes of the argument, and assume that $g(\pi_0) = 0$.
Combining ${\mathcal I}_{\pi_0}(f_1,g) \ge 1$ with (4.1), we see that $f_1$ and $g$ have a common factor by Bezout. Let $d = \deg(\gcd(f_1,g))$. If $d=3$, then $g = f_1 k$ for some cubic $k$. Since $g$ is singular on $A$ and $f_1$ is singular at no point of $A$, we must have $k \in I_{1,3}(A)$, so that $g \in I_{1,3}^2(A)$, a contradiction. (Under Hilbert's restrictions, $f_1$ is irreducible, so this is the only case.) Suppose $d = 2$ and write $f_1 = \ell q$ and $g = p q$, where $\ell$ is linear, $q$ is quadratic and $p$ is quartic and $\ell$ and $p$ are relatively prime. Then $\ell=0$ on exactly three of the $\pi_i$'s. After reindexing, there are two cases: either $\ell = 0$ on $\{\pi_1,\pi_2,\pi_3\}$ or $\ell = 0$ on $\{\pi_1,\pi_2,\pi_9\}$, with $q = 0$ on the complementary sets. In the first case, $q(\pi_i)\neq 0$ for $i=1,2,3$, so $p$ is singular at these three points and ${\mathcal I}_{\pi_1}(\ell,p) + {\mathcal I}_{\pi_2}(\ell,p) + {\mathcal I}_{\pi_3}(\ell,p) \ge 6 > 1 \cdot 4$. Since $\ell$ and $p$ are relatively prime, this is a contradiction by Bezout. In the second case, $p$ is still singular at $\pi_1,\pi_2$ and $q(\pi_9) \neq 0$, so $p(\pi_9) = 0$ and ${\mathcal I}_{\pi_1}(\ell,p) + {\mathcal I}_{\pi_2}(\ell,p) + {\mathcal I}_{\pi_9}(\ell,p) \ge 2+2+1 = 5 > 1 \cdot 4$, another contradiction. Finally, suppose $d = 1$ and write $f_1 = \ell q$ and $g = \ell p$, where $\ell$ is linear, $q$ is quadratic and $p$ is quintic and $q$ and $p$ are relatively prime. In either case for $\ell$ above, $\ell \neq 0$ at $\pi_4,\dots,\pi_8$, so $p$ is singular there, and ${\mathcal I}_{\pi_4}(q,p) + \dots + {\mathcal I}_{\pi_8}(q,p) \ge 10 = 2 \cdot 5$. In the first case, $\ell(\pi_9) \neq 0$, so ${\mathcal I}_{\pi_9}(q,p) \ge 1$; in the second case, $\ell(\pi_3) \neq 0$, so ${\mathcal I}_{\pi_3}(q,p) \ge 2$. In either case Bezout implies that $q$ and $p$ are not relatively prime, and this contradiction completes the proof.
\end{proof} It is possible for $g$ and the $f_i$'s to have a common factor, provided it does not contain $\pi_9$. This happens in Robinson's example: $f_1 = x(x^2-1)$, $f_2 = y(y^2-1)$ and $g = (x^2-1)(y^2-1)(1-x^2-y^2)$. \begin{corollary} If $A$ is copacetic, then there exists a positive sextic polynomial $p(x,y)$ so that $A \subseteq {\mathcal Z}(p)$ and $p$ is not sos. \end{corollary} \begin{theorem} Suppose $A = \{\pi_1,\dots,\pi_7\}\subset \mathbb R^2$, with no four $\pi_i$'s in a line and not all seven on one quadratic. Then Hilbert's Method applies to $A$. \end{theorem} \begin{proof} It follows from Lemma 2.6 that $A$ is full and $\tilde {\mathcal A} = \emptyset$. We have $\dim I_{1,3}(A) = 3$, so $\dim I_{1,3}^2(A) \le 6$, but by (2.5), $\dim I_{2,6}(A) \ge \binom 82 - 7\cdot \binom 32 = 7$. Thus there exists $g \in I_{2,6}(A) \smallsetminus I_{1,3}^2(A)$ and since $\tilde {\mathcal A} = \emptyset$, Hilbert's Method can be applied. \end{proof} Theorem 4.3 is implemented in Examples 1.1 and 6.3. \begin{corollary} If $A$ is a set of seven points in $\mathbb R^2$, no four on a line and not all on a quadratic, then there exists a positive sextic polynomial $p(x,y)$ so that $A \subseteq {\mathcal Z}(p)$ and $p$ is not sos. \end{corollary} \section{Psd and sos sections} We now consider $I_{2,6}({\mathcal A}) \cap P_{3,6}$ and $I_{2,6}({\mathcal A}) \cap \Sigma_{3,6}$ in detail. Our motivation is that $P_{3,6}$ and $\Sigma_{3,6}$ lie in $\mathbb R^{28}$ and are difficult to visualize. These two sections, in general, lie in $\mathbb R^4$, and thus are more comprehensible. We work in the homogeneous case. \begin{theorem} In the notation of Theorem 4.1, suppose \begin{equation} P = c_1 F_1^2 + 2 c_2 F_1F_2 + c_3 F_2^2 + c_4 G. \end{equation} If $P$ is sos, then $c_4 = 0$. If $c_4=0$, then $P$ is sos if and only if $P$ is psd if and only if $c_1 \ge 0$, $c_3 \ge 0$ and $c_1c_3 \ge c_2^2$.
\end{theorem} \begin{proof} These are Theorems 3.2 and 3.3(1),(2) in the homogeneous case. \end{proof} Because $G$ is only defined modulo $I_{1,d}^2({\mathcal A})$, it is difficult to make any general statements about the circumstances under which $P$ is psd. However, one can identify the possible zeros of $P$. \begin{theorem} Suppose $P = c_1 F_1^2 + 2 c_2 F_1F_2 + c_3 F_2^2 + c_4 G$ is psd, where $c_4 \neq 0$ and let $J$ be the Jacobian of $F_1,F_2$ and $G$. Then \begin{equation} {\mathcal Z}(P) \subseteq {\mathcal Z}(F_1) \cup {\mathcal Z}(F_2) \cup {\mathcal Z}(J). \end{equation} \end{theorem} \begin{proof} If $P(a) = 0$ and $(F_1(a),F_2(a)) \neq (0,0)$, then $P$ and $(F_1(a)F_2-F_2(a)F_1)^2$ are linearly independent sextics which are both singular at $a$. Thus the Jacobian of $(F_1^2, F_1F_2, F_2^2,G)$, when evaluated at $a$, has rank $\le 2$. In particular, the $3 \times 3$ minor omitting $F_1F_2$ vanishes; this minor reduces to $4F_1F_2J$. \end{proof} A maximal perturbation might not lead to a new zero, but rather to a greater singularity at a pre-existing zero; see Example 3.1. In the special case of Robinson's example, we are able to give a much more precise description of these sections. Let $A = \{-1,0,1\}^2 \smallsetminus \{(0,0)\}$. A routine calculation shows that $f_1(x,y) = x^3-x$ and $f_2(x,y) = y^3-y$ span $I_{1,3}(A)$ and $f_1^2,f_1f_2,f_2^2$ and $g(x,y) = (x^2-1)(y^2-1)(1-x^2-y^2)$ span $I_{2,6}(A)$. It is convenient to replace $g$ with $f_1^2+f_2^2+g$, which homogenizes to $R$. Consider now \begin{equation} \begin{gathered} \Phi[c_1,c_2,c_3,c_4](x,y,z):= c_1F_1^2 + 2c_2F_1F_2+c_3F_2^2+c_4R \\ = c_1(x^3-xz^2)^2 + 2c_2(x^3-xz^2)(y^3-yz^2)+c_3(y^3-yz^2)^2 \\ + c_4(x^6 + y^6 + z^6 - x^4y^2 - x^2y^4 - x^4z^2-y^4z^2 - x^2z^4-y^2z^4+3x^2y^2z^2). \end{gathered} \end{equation} This is the general form of $\Phi \in I_{2,6}({\mathcal A})$, where \begin{equation} {\mathcal A} = \{(\pm 1,0,1), (0,\pm 1,1),(\pm 1, \pm 1,1)\}. 
\end{equation} Theorem 5.1 implies that $\Phi[c_1,c_2,c_3,0]$ is psd if and only if it is sos if and only if $c_1,c_3,c_1c_3-c_2^2 \ge 0$, so we may henceforth assume that $c_4 \neq 0$. We begin our discussion of positivity with a collection of short observations. \begin{lemma} Suppose $\Phi[c_1,c_2,c_3,c_4]$ is psd. Then the following are true: \begin{enumerate} \item $c_4 \ge 0$; \item $\Phi[c_1,-c_2,c_3,c_4]$ and $\Phi[c_3,c_2,c_1,c_4]$ are psd; \item $\Gamma(x,y):= (c_1+c_4) x^6 - c_4 x^4y^2 + 2c_2 x^3y^3 - c_4 x^2y^4 + (c_3+c_4) y^6$ is psd; \item $\Phi[c_1,0,c_3,c_4]$ is psd. \end{enumerate} \end{lemma} \begin{proof} The first observation follows from evaluation at $(0,0,1)$, the second from taking $(x,y,z)\mapsto (x,-y,z),(y,x,z)$, the third from setting $z=0$, and the fourth from averaging the psd forms $\Phi[c_1,\pm c_2,c_3,c_4]$. \end{proof} In view of Lemma 5.3(1), it suffices now to assume $c_4 = 1$. For $t > 0$, let \begin{equation} \alpha(t) = \frac{2t^2+t^4}3, \qquad \beta(t) = \frac{1+2t^2}{3t^4}, \qquad \gamma(t) = \beta(\alpha^{-1}(t)). \end{equation} Then $\beta(t) = \alpha(t^{-1})$, and as $t$ increases from 0 to $\infty$, so does $\alpha(t)$, monotonically. \begin{lemma} For $t > 0$, the sextic $\Phi_t(x,y):= \alpha(t)x^6 - x^4y^2 - x^2y^4 + \beta(t)y^6$ is positive with zeros at $(1,\pm t)$. \end{lemma} \begin{proof} A computation shows that \begin{equation} \Phi_t(x,y) = \frac {(t^2x^2-y^2)^2((t^4+2t^2)x^2 + (2t^2+1)y^2)}{3t^4}. \end{equation} \end{proof} Let $K= \{(x,y)\ :\ x > 0,\ y \ge \gamma(x)\}$ denote the region lying above the curve $C = \{(\alpha(t),\beta(t))\ :\ t > 0\}$, which partially parametrizes the quartic curve $27x^2y^2 - 18x y - 4x - 4y - 1 =0$. For this reason, \begin{equation} \gamma(x) = \frac{2+9x + 2(1+3x)^{3/2}}{27x^2}. \end{equation} \begin{lemma} The binary sextic $\Psi(x,y) = r x^6 - x^4y^2 - x^2y^4 + s y^6$ is psd if and only if $(r,s) \in K$.
\end{lemma} \begin{proof} A necessary condition for the positivity of $\Psi$ is $r > 0$. Let $t_0 = \alpha^{-1}(r)>0$, so \begin{equation} \Psi(x,y) = \Phi_{t_0}(x,y) + (s - \gamma(t_0))y^6. \end{equation} If $(r,s) \in K$; that is, if $s\ge \gamma(t_0)$, then Lemma 5.4 and (5.8) show that $\Psi$ is positive. Conversely, $\Psi(1,t_0) = (s - \gamma(t_0))t_0^6$, so if $\Psi$ is positive, then $s\ge \gamma(t_0)$. \end{proof} \begin{theorem} The sextic $\Phi[c_1,0,c_3,1]$ is psd if and only if $(1+c_1,1+c_3) \in K$. \end{theorem} \begin{proof} One direction is clear by Lemmas 5.3(3) and 5.5. For the converse, note that $(1+c_1,1+c_3) \in K$ if and only if $1+c_1 = \alpha(t_0)$ implies $1+c_3 \ge \beta(t_0)$. In other words, we need to show that, with $\lambda = 1 + c_3 - \beta(t_0)$, \begin{equation} \Phi[\alpha(t_0) -1,0,\beta(t_0)+\lambda -1,1] = \Phi[\alpha(t_0) -1,0,\beta(t_0)-1,1] + \lambda F_2^2 \end{equation} is psd whenever $\lambda \ge 0$. To this end, for $t >0$, define \begin{equation} \begin{gathered} R_t(x,y,z) := \Phi[\alpha(t) -1,0,\beta(t)-1,1](x,y,z) = \\ \left(\frac{t^4+2t^2-3}{3} \right) F_1^2(x,y,z)+ \left(\frac{1+2t^2-3t^4}{3t^4}\right)F_2^2(x,y,z) + R(x,y,z). \end{gathered} \end{equation} Note that $R_1 = R$, $R_{1/t}(x,y,z) = R_t(y,x,z)$ and that for $t \neq 1$, the coefficients of $F_1^2$ and $F_2^2$ have opposite sign. The following algebraic identity gives $Q_tR_t$ as a sum of four squares for a psd quadratic form $Q_t(x,y)$, which implies that $R_t$ is psd, and completes the proof. \begin{equation} \begin{gathered} ((2t^4+t^2)x^2+(t^2+2)y^2)3t^4R_t(x,y,z) \\= 3t^6(1 + 2t^2)x^2z^2(x^2 - z^2)^2 + 3t^4(2 + t^2)y^2z^2(y^2 - z^2)^2 \\+ t^2(t^2 - 1)^2x^2y^2(t^2x^2 - y^2 + (1 - t^2)z^2)^2\\ + (2 + t^2)(1 + 2t^2)(t^4x^4 - y^4 - t^4x^2z^2 + y^2z^2)^2. \end{gathered} \end{equation} \end{proof} For $t=1$, (5.11) essentially appears in \cite[p.273]{Ro}.
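Identity (5.11) can be confirmed by computer algebra. The following sympy sketch (an independent check, not part of the original text) rebuilds $R_t = \Phi[\alpha(t)-1,0,\beta(t)-1,1]$ from the definitions and verifies the four-square representation symbolically in $t$:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', positive=True)

F1 = x**3 - x*z**2   # homogenization of f1 = x^3 - x
F2 = y**3 - y*z**2   # homogenization of f2 = y^3 - y
R = (x**6 + y**6 + z**6 - x**4*y**2 - x**2*y**4 - x**4*z**2
     - y**4*z**2 - x**2*z**4 - y**2*z**4 + 3*x**2*y**2*z**2)

alpha = (2*t**2 + t**4) / 3
beta = (1 + 2*t**2) / (3*t**4)

# R_t = Phi[alpha(t)-1, 0, beta(t)-1, 1], as in (5.10)
Rt = (alpha - 1)*F1**2 + (beta - 1)*F2**2 + R

Qt = (2*t**4 + t**2)*x**2 + (t**2 + 2)*y**2
lhs = Qt * 3*t**4 * Rt
rhs = (3*t**6*(1 + 2*t**2) * x**2*z**2*(x**2 - z**2)**2
       + 3*t**4*(2 + t**2) * y**2*z**2*(y**2 - z**2)**2
       + t**2*(t**2 - 1)**2 * x**2*y**2*(t**2*x**2 - y**2 + (1 - t**2)*z**2)**2
       + (2 + t**2)*(1 + 2*t**2) * (t**4*x**4 - y**4 - t**4*x**2*z**2 + y**2*z**2)**2)

# the two sides of (5.11) agree identically in x, y, z, t
assert sp.expand(lhs - rhs) == 0
```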
In view of the foregoing, ${\mathcal Z}(R_t)$ contains, at least, ${\mathcal A} \cup \{(1,\pm t,0)\}$. If $R_t(a,b,c)=0$, then each of the squares in (5.11) vanishes. In particular, $cF_1(a,b,c)=cF_2(a,b,c) = 0$, so either $c=0$ or $(a,b,c) \in {\mathcal A} \cup \{(0,0,1)\}$. These cases have already been discussed and we may conclude that ${\mathcal Z}(R_t)= {\mathcal A} \cup \{(1,\pm t,0)\}$ and $|{\mathcal Z}(R_t)| = 10$. We now complete our discussion of the psd case. \begin{theorem} The sextic $\Phi[c_1,c_2,c_3,1]$ is psd if and only if $(1+c_1,1+c_3) \in K$ and $|c_2| \le \sigma(c_1,c_3)$ for a function $\sigma(c_1,c_3) \ge 0$ defined whenever $(1+c_1,1+c_3) \in K$ (see (5.15)). If $c_2 = \pm \sigma(c_1,c_3)$, then $\Phi[c_1,c_2,c_3,1]=R_t+\alpha(t^3 F_1 \pm F_2)^2$ (for suitable $t, \alpha$ and choice of sign). \end{theorem} \begin{proof} First, suppose $\Phi[c_1,c_2,c_3,1]$ is psd. Then $(1+c_1,1+c_3) \in K$ by Lemma 5.3(4) and Theorem 5.6. Setting $z=0$, we obtain the psd binary sextic \begin{equation} \Gamma(x,y) = (1+c_1)x^6 - x^4y^2 + 2c_2 x^3y^3 - x^2y^4 + (1+c_3)y^6. \end{equation} Define $t_0$ so that $1+c_1 = \alpha(t_0)$. If $1+c_3 = \beta(t_0)$, then $\Gamma(1,\pm t_0) = \pm 2c_2t_0^3$ implies that $c_2 = 0$; otherwise, $(1+c_1,1+c_3)$ lies strictly above $C$. Suppose now that $c_2 < 0$ without loss of generality (taking $y \mapsto -y$ if necessary), so that for $u > 0$, \begin{equation} \Gamma(1,-u) > \Gamma(1,u) = (1+c_1) - u^2 - 2|c_2| u^3 - u^4 +(1+c_3)u^6 \ge 0. \end{equation} Let $\Psi(u) = (1+c_1)u^{-3} - u^{-1} - u + (1+c_3)u^3$, so that \begin{equation} 0 \le u^{-3}\Gamma(1,u) = \Psi(u) - 2 |c_2|. \end{equation} Now define \begin{equation} \sigma(c_1,c_3) := \min_{u > 0} \tfrac 12 \Psi(u) = \tfrac 12 \Psi(v); \end{equation} since $\Psi(u) \to \infty$ as $u \to 0^+$ or $u \to \infty$, the minimum exists. It follows that $|c_2| \le \sigma(c_1,c_3)$. (Although $\sigma(c_1,c_3)$ is computable explicitly, it is quite complicated.
For example, $2\sigma(1,0)$ is the unique real positive root of the sextic $729x^6 - 22518 x^4 + 182774 x^2 - 111392$, approximately $.81392$.) We must now show that every $\Phi[c_1,\pm\sigma(c_1,c_3),c_3,1]$ is psd. Since $\Psi'(v) = 0$, we have the system \begin{equation} \begin{gathered} \sigma(c_1,c_3) = \frac12 \left( (1+c_3)v^3 - v - v^{-1} + (1+c_1)v^{-3}\right); \\ 3(1+c_3)v^2 - 1 + v^{-2} - 3(1+c_1)v^{-4} = 0. \end{gathered} \end{equation} A calculation shows that (5.16) implies \begin{equation} \Phi[c_1,-\sigma(c_1,c_3),c_3,1] = R_v + \mu (v^3F_1 -F_2)^2, \end{equation} where $R_v$ is defined in (5.10) and \begin{equation} \mu = \frac{3(1+c_3)v^4 -(2v^2+1)}{3v^4}. \end{equation} We are done if we can show that $\mu \ge 0$. By hypothesis, both sides of (5.17) vanish at $(1,v,0)$. But if we evaluate (5.17) at $(1,-v,0)$, we have already seen that the left-hand side is positive, and the right-hand side is $0+4v^6\mu$, hence $\mu > 0$. \end{proof} If $\Phi[c_1,c_2,c_3,1](a,b,c) = 0$, then Theorem 5.2 implies that $(a,b,c) \in \mathcal A$ or \begin{equation} abc(a^2-c^2)(b^2-c^2)(a^2-ab+b^2-c^2)(a^2+ab+b^2-c^2)= 0. \end{equation} This includes the new zeros of $R_t$ on $c=0$ but also the extraneous points $(a,b,c)$ for which $a^2+b^2-c^2 = \pm ab$, which never appear non-trivially as zeros for any $R_t$. To sum up, we have described sections of the two cones \begin{equation} \begin{gathered} P = \{(c_1,c_2,c_3,c_4) : c_1F_1^2 +2c_2F_1F_2 + c_3F_2^2 + c_4R \in P_{3,6} \} \subseteq \mathbb R^4, \\ \Sigma= \{(c_1,c_2,c_3,c_4) : c_1F_1^2 +2c_2F_1F_2 + c_3F_2^2 + c_4R \in \Sigma_{3,6} \} \subseteq \mathbb R^4; \end{gathered} \end{equation} at $c_4=0$ and at $c_4=1$. In the first case, the sections coincide and are literally a right regular cone. 
In the second case $\Sigma$ disappears, and if we think of $(c_1,c_3)$ as lying in a plane and $c_2$ as the vertical dimension, then $P$ is a kind of clam-shell, with a convex boundary curve $C$ lying in the plane and rays emanating at varying angles from the points on the boundary. \section{More ternary sextic examples} \begin{example} Let $A= \{\pi_i\} = \{(a_i,b_i)\}$ be given by $\pi_1 = (-1,0), \pi_2 = (-1,-1), \pi_3 = (0,1), \pi_4 = (0,-1), \pi_5 = (1,0), \pi_6 = (2,2), \pi_7 = (2,-2), \pi_8 = (1,-3)$. By looking at the $3 \times 3$ minors of the matrix with rows $(1,a_i,b_i)$ and the $6 \times 6$ minors of the matrix with rows $(1,a_i,b_i,a_i^2,a_ib_i,b_i^2)$, one can check that no three of the $\pi_i$'s lie in a line, and no six on a quadratic. According to Mathematica, $I_{1,3}(A)$ is spanned by \begin{equation} \begin{gathered} f_1(x,y) = -42 + 49 x + 42x^2 - 49x^3 - 20 y - 38xy + 4x^2y + 42y^2 + 20y^3, \\ f_2(x,y) = -22 + 31 x + 22 x^2 - 31x^3 - 12y - 18xy + 22y^2 + 4xy^2 + 12y^3, \end{gathered} \end{equation} and $\tilde A = \{\left(\frac{2516}{1297},\frac{4991}{2594}\right)\}$, so $A$ is copacetic. In Hilbert's notation, $\phi(x,y) = x^2 -xy+y^2-1$ and \begin{equation} \begin{gathered} \psi(x,y) = -6136 + 2924x + 5784x^2 - 2924x^3 + 352x^4\\ - 2804y - 7000xy + 6299x^2y - 1049x^3y + 5818y^2\\ - 7803xy^2 + 1811x^2y^2 + 2804y^3 - 1402xy^3 + 318y^4. \end{gathered} \end{equation} It follows that there exists $c>0$ so that $f_1^2+f_2^2 + c\phi\psi$ is psd and not sos. We do not offer an estimate for $c$. \end{example} In the examples in the rest of this section, the symmetries are more clearly seen when the polynomials are homogenized. \begin{example} We present one of several ways to generalize Robinson's original set of eight points. For $t > 0$, let \begin{equation} A_t = \{(\pm 1,\pm 1), (\pm t, 0), (0, \pm t)\}. 
\end{equation} It is not hard to see that $A_t$ is copacetic (with ninth point $(0,0)$) unless $t = \sqrt 2$, in which case $A_t$ lies on $x^2 + y^2 =2$. Since $A_t \mapsto A_{2/t}$ under the invertible map $(x,y) \mapsto ((x+y)/t,(x-y)/t)$, we may assume $0 < t < \sqrt 2$. After homogenizing to $\mathcal A_t $, we note that a basis of $I_{1,3}({\mathcal A_t})$ is given by \begin{equation} \{F_{1,t},F_{2,t}\} = \{x(x^2 + (t^2-1)y^2 - t^2z^2), y((t^2-1)x^2 + y^2 - t^2z^2)\} \end{equation} and that $\tilde{\mathcal A_t} = \{(0,0,1)\}$. It is not hard to see that \begin{equation} G_t(x,y,z) = (x^2 + (t^2-1)y^2 - t^2z^2)((t^2-1)x^2 + y^2 - t^2z^2)(-x^2-y^2+t^2z^2) \end{equation} is singular on $\mathcal{A}_t$ and is positive at $(0,0,1)$. (Robinson's example is recovered by setting $t=1$.) Consider now \begin{equation} \begin{gathered} P_t := F_{1,t}^2 + F_{2,t}^2 + 1\cdot G_t = (2-t^2)(x^6 -x^4y^2-x^2y^4+ y^6) + \\ (2t^4 - 3t^2)(x^4+y^4)z^2 +(6t^2-4t^4+t^6)x^2y^2z^2 - t^6 (x^2z^4 + y^2z^4-z^6). \end{gathered} \end{equation} The proof that $P_t$ is psd follows from the identity \begin{equation} \begin{gathered} (x^2+y^2)P_t = (2-t^2)(x^2-y^2)^2(x^2+y^2-t^2z^2)^2 + \\ t^2x^2z^2(x^2+(t^2-1)y^2-t^2z^2)^2 +t^2y^2z^2((t^2-1)x^2+y^2-t^2z^2)^2. \end{gathered} \end{equation} For $t=1$, this formula is in \cite{Ro}. For $t = 0, \sqrt 2$, $P_t$ is sos. It is not hard to show that if $0<t<\sqrt 2$, then ${\mathcal Z}(P_t) = {\mathcal A}_t \cup \{(1,\pm 1,0) \}$ has 10 points and $P_t$ is not sos. \end{example} \begin{example} Let \begin{equation} {\mathcal A} = \{(1,0,0),(0,1,0),(0,0,1),(1,1,0),(1,0,1),(0,1,1),(1,1,1) \}. \end{equation} It is again simple to show that $I_{1,3}({\mathcal A})$ is spanned by \begin{equation} F_1(x,y,z) = xy(x-y),\quad F_2(x,y,z) = yz(y-z),\quad F_3(x,y,z) = zx(z-x), \end{equation} and that \begin{equation} G(x,y,z) = xyz(x-y)(y-z)(z-x) \end{equation} is in $I_{2,6}({\mathcal A}) \smallsetminus I_{1,3}^2({\mathcal A})$.
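The vanishing and singularity claims for this configuration can be checked exactly. The sketch below is ours, not part of the paper; it takes the seven points of ${\mathcal A}$ to include $(0,1,1)$, verifies that $F_1, F_2, F_3$ and $G$ vanish at each of them, and confirms that $G$ is singular there by computing its partial derivatives with a seven-point central-difference stencil, which is exact for polynomials of degree at most six when evaluated in rational arithmetic:

```python
from fractions import Fraction as Fr

def G(x, y, z):                          # the product sextic xyz(x-y)(y-z)(z-x)
    return x*y*z*(x - y)*(y - z)*(z - x)

cubics = [lambda x, y, z: x*y*(x - y),   # F_1
          lambda x, y, z: y*z*(y - z),   # F_2
          lambda x, y, z: z*x*(z - x)]   # F_3

# the seven points of the configuration (with (0,1,1) among them)
A = [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1), (1,1,1)]

def partial(f, p, i):
    """Exact i-th partial derivative of a polynomial of degree <= 6 at p,
    via the 7-point central stencil with step 1 (exact over the rationals)."""
    def shift(t):
        q = list(map(Fr, p)); q[i] += t
        return f(*q)
    return (45*(shift(1) - shift(-1)) - 9*(shift(2) - shift(-2))
            + (shift(3) - shift(-3))) / 60

for p in A:
    assert all(F(*p) == 0 for F in cubics) and G(*p) == 0
    assert all(partial(G, p, i) == 0 for i in range(3))  # gradient of G vanishes
print("F_1, F_2, F_3 vanish and G is singular at all seven points")
```

At each listed point at least two linear factors of $G$ vanish, which is why the gradient check succeeds; the stencil makes this a mechanical verification rather than a hand computation.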
Accordingly, by Theorem 4.3, there exists $c>0$ so that \begin{equation} U_c(x,y,z) = x^2y^2(x-y)^2 + y^2z^2(y-z)^2 + z^2x^2(z-x)^2 + cxyz(x-y)(y-z)(z-x) \end{equation} is psd and not sos. Since $U_c(x,y,z) \ge 0$ whenever $xyz=0$, we define \begin{equation} \begin{gathered} Q_c(x,y,z) := \frac{U_c(x,y,z)}{x^2y^2z^2} \\ = \frac {(x-y)^2}{z^2} + \frac{(y-z)^2}{x^2} + \frac{(z-x)^2}{y^2} + c \left(\frac {x-y}{z}\right) \left(\frac {y-z}{x}\right) \left(\frac {z-x}{y}\right). \end{gathered} \end{equation} It is now sensible to make a substitution: let \begin{equation} u := \frac {x-y}{z};\quad v := \frac {y-z}{x}; \quad w := \frac {z-x}{y}. \end{equation} Then $Q_c = u^2 + v^2 + w^2 + cuvw$; somewhat surprisingly, $\{u,v,w\}$ is not algebraically independent: in fact, \begin{equation} u+v+w+uvw = 0. \end{equation} An application of Lagrange multipliers to minimize $Q_c$, subject to (6.14), shows that two of $\{u,v,w\}$ are equal; by symmetry, we may take $u=v$, so that $w = -\frac{2u}{u^2+1}$, and \begin{equation} Q_c\left(u, u, -\tfrac{2u}{u^2+1}\right) = \frac{2u^2(u^4+2u^2+3 -cu(1+u^2))}{(1+u^2)^2}. \end{equation} Let $\sigma = \sqrt{\sqrt 2 + 1}$. A little calculus shows that the numerator is psd provided $|c| \le c_0:= 4/\sigma$, with $Q_{c_0} = 0$ when $u = \pm \sigma$. Solving back for $(x,y,z)$ yields, up to multiple, that $(1+\sigma, 1+\sigma^2,1-\sigma)$ and its cyclic images are in ${\mathcal Z}(U_{c_0})$, together with the seven points of (6.8). Here, $| {\mathcal Z}(U_{c_0}) | = 10$. \end{example} \begin{example} The Motzkin form $M$ cannot be derived directly from Theorems 4.1 or 4.3 because $|{\mathcal Z}(M)| = 6$; however, $M$ has zeros at $(1,0,0)$ and $(0,1,0)$ at which it vanishes to the sixth order in the $z$-direction. It is possible to construct psd ternary sextics $M_t$ with $|{\mathcal Z}(M_t)| = 10$ for $t>0$ and such that $M_t \to M$ as $t \to 0$.
We do this with an Ansatz by supposing that there is a non-zero even ternary sextic which is symmetric in $(x,y)$ and lies in $I_{2,6}({\mathcal A_t})$ for \begin{equation} {\mathcal A_t} = \{(1,0,0), (0,1,0), (1, 0, \pm t), (0, 1, \pm t), (1, \pm 1, \pm 1)\}. \end{equation} Although these impose 30 equations on the 28 coefficients of a ternary sextic, there is some redundancy, and it can be verified that \begin{equation} \begin{gathered} M_t(x,y,z) = (1-2t^2)(x^4y^2+x^2y^4) + t^4(x^4z^2+y^4z^2)\\- (3 - 8t^2+2t^4)x^2y^2z^2 -2t^2(x^2+y^2)z^4 + z^6 \end{gathered} \end{equation} satisfies this criterion. It is not clear that $M_t$ is psd; in fact, it is not psd when $t^2 > 1/2$. We note that $M_0 = M$ and $M_t$ is a square when $t^2 = 1/2$. The proof that $M_t$ is psd for $t^2 < 1/2$ is given by an sos representation of $(x^2+y^2)M_t$: \begin{equation} \begin{gathered} (x^2+y^2)M_t(x,y,z) = (1-2t^2)x^2y^2(x^2+y^2-2z^2)^2 + \\ y^2z^2(t^2(x^2-y^2)-(x^2-z^2))^2 + x^2z^2(t^2(y^2-x^2)-(y^2-z^2))^2. \end{gathered} \end{equation} This equation also shows that, at least when $t^2 < 1/2$, ${\mathcal Z}(M_t)= \mathcal A_t$. We may also derive $M_t$ using Theorem 4.1, by first choosing any eight points in $\mathcal A_t$. \end{example} \begin{example} Similarly, one can approach $S(x,y,z)$ by Ansatz and look for a cyclically symmetric even sextic $S_t$ which is singular at \begin{equation} {\mathcal A_t} = \{(\pm t,1,0), (0, \pm t,1), (1,0,\pm t), (1, \pm 1, \pm 1)\}. \end{equation} Again, although there is no reason to expect a non-zero solution, there is one: \begin{equation} \begin{gathered} S_t(x,y,z) = t^4(x^6+y^6+z^6) + (1-2t^6)(x^4y^2+ y^4z^2+z^4x^2)\\ + (t^8 - 2t^2)(x^2y^4+y^2z^4+z^2x^4)-3(1-2t^2+t^4-2t^6+t^8)x^2y^2z^2. \end{gathered} \end{equation} We find that $t^8S_{1/t}(x,y,z) = S_t(x,z,y)$, $S_0(x,y,z) = S(x,y,z)$ and $S_1(x,y,z) = R(x,y,z)$.
The proof that $S_t$ is psd follows from yet another algebraic identity: \begin{equation} \begin{gathered} (x^2+y^2)S_t(x,y,z) = (t^2x^4+x^2y^2-t^4x^2y^2-t^2y^4-x^2z^2+t^4y^2z^2)^2\\ + y^2z^2(y^2-x^2+t^2(x^2-z^2))^2 + t^4x^2z^2(y^2-z^2 + t^2(x^2-y^2))^2 \\ +(t^2-1)^2x^2y^2((z^2-x^2)+t^2(y^2-z^2))^2. \end{gathered} \end{equation} When $t=1$, (5.11) and (6.21) coincide. This example was announced, without proof, in \cite[p.261]{Re2}. \end{example} Robinson \cite[p.273]{Ro} observed that $(ax^2+by^2+cz^2)R(x,y,z)$ is sos, ``at least if $0 \le a \le b+c,\ 0 \le b \le a+c,\ 0 \le c \le a+b$." We revisit this situation and simultaneously illustrate the method used to discover (5.11), (6.7), (6.18) and (6.21). \begin{theorem} If $r,s,t \ge 0$, then $(r^2x^2 + s^2y^2 + t^2z^2)R(x,y,z)$ is sos if and only if $r \le s+t$, $s \le r+t$ and $t \le r+s$. \end{theorem} \begin{proof} It was shown in \cite[p.569]{CLR2} (by a polarization argument) that an even sos polynomial $F$ has an sos representation $F = \sum H_j^2$ in which each $H_j^2$ is even. Suppose \begin{equation} (r^2x^2 + s^2y^2 + t^2z^2)R(x,y,z) = \sum_{j=1}^N H_j^2(x,y,z) \end{equation} is such an ``even'' representation. Then ${\mathcal Z}(R) \subseteq {\mathcal Z}(H_j)$ for the quartic $H_j$'s (cf. (5.4)). It follows that \begin{equation} \begin{gathered} H_j(x,y,z) = c_{1j} xy(x^2-y^2) + c_{2j} xz(x^2-z^2) + c_{3j} yz(y^2-z^2) \\ + (c_{4j}(x^2-z^2)(x^2-y^2+z^2)+c_{5j}(y^2-z^2)(-x^2+y^2+z^2)).
\end{gathered} \end{equation} Each $H_j^2$ is even, so the only cross-terms which can appear in any $H_j^2$ are the $c_{4j}c_{5j}$ terms, and \begin{equation} \begin{gathered} (r^2x^2 + s^2y^2 + t^2z^2)R(x,y,z) = \lambda_1x^2y^2(x^2-y^2)^2 + \lambda_2x^2z^2(x^2-z^2)^2 \\ + \lambda_3 y^2z^2(y^2-z^2)^2 + \lambda_4 (x^2-z^2)^2(x^2-y^2+z^2)^2 +\\ 2\lambda_5 (x^2-z^2)(x^2-y^2+z^2)(y^2-z^2)(-x^2+y^2+z^2) \\+ \lambda_6 (y^2-z^2)^2(-x^2+y^2+z^2)^2, \end{gathered} \end{equation} for $\lambda_j$'s defined by \begin{equation} \begin{gathered} \lambda_1 = \sum_j c_{1j}^2,\quad \lambda_2 = \sum_j c_{2j}^2, \quad \lambda_3 = \sum_j c_{3j}^2,\\ \lambda_4 = \sum_j c_{4j}^2,\quad \lambda_5 = \sum_j c_{4j}c_{5j}, \quad \lambda_6 = \sum_j c_{5j}^2. \end{gathered} \end{equation} We solve for the $\lambda_j$ in (6.24): \begin{equation} \lambda_1 = t^2,\quad \lambda_2 = s^2,\quad \lambda_3 = r^2,\quad \lambda_4 = r^2,\quad \lambda_6 = s^2,\quad \lambda_5 = (t^2-r^2-s^2)/2. \end{equation} There exist $c_{ij}$ to satisfy (6.25) and (6.26) if and only if \begin{equation} 0 \le \lambda_4\lambda_6- \lambda_5^2 = \frac14 (r+s-t)(r+t-s)(s+t-r)(r+s+t). \end{equation} If, say, $r \ge s \ge t\ge 0$, then $r+s\ge t$ and $r+t \ge s$ automatically, and so (6.27) holds if and only if $s+t \ge r$. By symmetry, we see that (6.27) is true if and only if all three inequalities hold. \end{proof} \section{Extremal psd ternary forms} In 1980, Choi, Lam and the author \cite{CLR1} studied $|{\mathcal Z} (F)|$ for $F \in P_{3,m}$. Let \begin{equation} \alpha(m):= \max \left( \frac {m^2}4, \frac {(m-1)(m-2)}2 \right). \end{equation} By Theorem 3.5 in \cite{CLR1}, if $F \in P_{3,m}$, then $|{\mathcal Z}(F)| > \alpha(m)$ implies $|{\mathcal Z}(F)| = \infty$, and this occurs if and only if $F$ is divisible by the square of an indefinite form. Let \begin{equation} B_{3,m} = \sup \{ |{\mathcal Z}(F)| \ : \ F \in P_{3,m},\ |{\mathcal Z}(F)| < \infty \}.
\end{equation} Then by Theorem 4.3 in \cite{CLR1}, \begin{equation} \begin{gathered} \frac{m^2}4 \le B_{3,m} \le \frac {(m-1)(m-2)}2; \\B_{3,6k} \ge 10k^2,\quad B_{3,6k+2} \ge 10k^2+1,\quad B_{3,6k+4} \ge 10k^2+4. \end{gathered} \end{equation} In particular, $B_{3,6} = 10$. Further, if $F \in P_{3,6}$, and $|{\mathcal Z}(F)| > 10$, then $|{\mathcal Z}(F)| = \infty$ and $F\in \Sigma_{3,6}$ is a sum of three squares (Theorem 3.7). If $G$ is a ternary sextic and $|{\mathcal Z}(G)| = 10$, then one of $\pm G$ is psd and not sos (Corollary 4.8). We wrote (p.12): ``it would be of interest to determine, if possible, {\it all} forms $p \in P_{3,6}$ with exactly 10 zeros. From a combinatorial point of view, it would already be of interest to determine (or classify) all configurations of 10-point sets $S \subset \mathbb P^2$ for which there exist $p \in P_{3,6}$ such that $S = {\mathcal Z}(p)$ $\dots$ The only known psd ternary sextic with 10 zeros is $R$.'' Sections five and six of this paper are inspired by this remark. \begin{lemma} If $F \in P_{3,6}$ is reducible, then $F \in \Sigma_{3,6}$. \end{lemma} \begin{proof} If $F$ has an indefinite factor $H$, then $F = H^2G$, where $G \in P_{3,2d} = \Sigma_{3,2d}$ for $2d \le 4$. If $F = F_1F_2$ for definite $F_i$, then $\deg F_i \le 4$ again implies $F \in \Sigma_{3,6}$. \end{proof} A form $F$ in the closed convex cone $P_{n,m}$ is {\it extremal} if $F = G_1 + G_2$ for $G_j \in P_{n,m}$ implies that $G_j = \lambda_j F$ for $0 \le \lambda_j \in {\mathbb R}$. Equivalently, $F$ is extremal if $F \ge G \ge 0$ implies $G = \lambda F$. The set of extremal forms in $P_{n,m}$ is denoted by $E(P_{n,m})$. \begin{theorem} Suppose $F \in P_{3,6}$ and $|{\mathcal Z}(F)| = 10$. Then $F \in E(P_{3,6})$. \end{theorem} \begin{proof} Since $F \in \Delta_{3,6}$ by \cite{CLR1}, Lemma 7.1 implies that $F$ is irreducible. Suppose $F \ge G \ge 0$. 
Then $F$ and $G$ are both singular at the ten zeros of $F$, and since $10\cdot 2^2 > 6\cdot 6$, Bezout implies that $F$ and $G$ have a common factor. Thus $G = \lambda F$ and $F$ is extremal. \end{proof} Theorems 5.1 and 5.7 imply that if $F \in E(P_{3,6})$ has Robinson's 8 zeros, then either $F = R_t \in \Delta_{3,6}$ for some $t > 0$ has ten zeros, or $F = (\alpha F_1 + \beta F_2)^2 \in E(\Sigma_{3,6})$. We can use the Perturbation Lemma to put a strong restriction on those extremal forms which only have round zeros. \begin{theorem} If $P \in E(P_{3,2d})\cap \Delta_{3,2d}$ and all zeros of $P$ are round, then $|{\mathcal Z}(P)| \ge \frac{(d+1)(d+2)}2$. \end{theorem} \begin{proof} Suppose $P$ is psd, all its zeros are round, and $|{\mathcal Z}(P)|< \frac{(d+1)(d+2)}2$. Then there exists a non-zero $H \in I_{1,d}({\mathcal Z}(P))$ and the Perturbation Lemma applies to $(P,\pm H^2)$. It follows that $P \pm cH^2$ is psd for some $c>0$ and $P$ is not extremal because \begin{equation} P= \tfrac 12(P - cH^2) + \tfrac 12(P+ cH^2); \end{equation} $P \neq \lambda H^2$ since $P$ is not sos. \end{proof} \begin{corollary} If $P \in E(P_{3,6})\cap \Delta_{3,6}$ and all zeros of $P$ are round, then $|{\mathcal Z}(P)|=10$. \end{corollary} \begin{lemma} If $P \in P_{3,6}$, and ${\mathcal Z}(P)$ contains four points in a line or seven points on a quadratic, then $P \in \Sigma_{3,6}$. \end{lemma} \begin{proof} If ${\mathcal Z}(P)$ contains four points $\pi_i$ on the line $L$, then since $P$ is singular at its zeros, Bezout implies that $L$ divides $P$ and $P \in \Sigma_{3,6}$ by Lemma 7.1. Similarly, if ${\mathcal Z}(P)$ contains seven points $\pi_i$ on the quadratic $Q$, then Bezout again implies that $P$ is reducible. \end{proof} \begin{theorem} If $P \in E(P_{3,6})\cap \Delta_{3,6}$ and all zeros of $P$ are round, then $P$ can be derived by Hilbert's Method using Theorem 4.3. \end{theorem} \begin{proof} Let $A$ denote any subset of seven of the ten zeros of $P$.
By Lemma 7.5, $A$ satisfies the hypothesis of Theorem 4.3. \end{proof} Given positive $f \in \mathbb R_{n,2d}$ and $\pi \in \mathbb R^n$, let $E(f,\pi)$ denote the set of $g \in \mathbb R_{n,d}$ such that there exists a neighborhood ${\mathcal N}_g$ of $\pi$ and $c > 0$ so that $f - cg^2$ is non-negative on ${\mathcal N}_g$. \begin{lemma} $E(f,\pi)$ is a subspace of $\mathbb R_{n,d}$. \end{lemma} \begin{proof} Clearly, $g \in E(f,\pi)$ implies $\lambda g \in E(f,\pi)$ for $\lambda \in \mathbb R$. Suppose $g_1, g_2 \in E(f,\pi)$; specifically, $f - c_1g_1^2 \ge 0$ on ${\mathcal N}_1$ and $f - c_2g_2^2 \ge 0$ on ${\mathcal N}_2$, and let $\mathcal N = {\mathcal N}_{1} \cap {\mathcal N}_{2}$ and $c = \min(c_1,c_2)$. The identity \begin{equation} f - \tfrac c4(g_1+g_2)^2 = \tfrac 12( f - cg_1^2) + \tfrac 12(f - cg_2^2) + \tfrac c4(g_1-g_2)^2 \end{equation} shows that $g_1+g_2 \in E(f,\pi)$. \end{proof} If $f(\pi) > 0$, then $E(f,\pi) = \mathbb R_{n,d}$. Let \begin{equation} \delta(f,\pi) := \binom{n+d}d - \dim E(f,\pi) \end{equation} measure the singularity of the zero of $f$ at $\pi$; the argument of the Perturbation Lemma shows that $\delta(f,\pi) = 1$ if and only if $f$ has a round zero at $\pi$. These definitions also apply in the obvious way to the homogeneous case. \begin{theorem} If $P \in E(P_{3,2d})\cap\Delta_{3,2d}$, then \begin{equation} \delta(P) := \sum_{\pi \in {\mathcal Z}(P)} \delta(P,\pi) \ge \frac{(d+1)(d+2)}2. \end{equation} \end{theorem} \begin{proof} Let \begin{equation} {\mathcal E}:= \bigcap_{\pi \in {\mathcal Z}(P)} E(P,\pi). \end{equation} Since \begin{equation} \dim {\mathcal E} \ge \frac{(d+1)(d+2)}2 - \delta(P), \end{equation} if (7.7) fails, then there exists $0 \neq H \in {\mathcal E}$. The argument of the Perturbation Lemma applies to $(P,\pm H^2)$, so that (7.4) holds for some $c > 0$, and $P$ is not extremal. \end{proof} It can be checked that $M$ has round zeros at $(1,\pm 1, \pm 1)$.
Let $\pi = (1,0,0)$. If $M - c F^2$ is non-negative near $(1,0,0)$ for a ternary cubic $F$, then by the method of cages (see \cite[\S 3]{CLR3}), $x^3, x^2z, xz^2$ cannot appear in $F$, whereas every other monomial is in $E(M,\pi)$, and so $\delta(M,\pi) = 3$. By symmetry, $\delta(M,(0,1,0)) = 3$, so that $\delta(M) = 4\cdot 1 + 2 \cdot 3 = 10.$ A similar calculation for $S$ shows that it has round zeros at $(1,\pm 1, \pm 1)$ and that $\delta(S,e_i) = 2$ at the unit vectors $e_i$ so $\delta(S) = 4 \cdot 1 + 3 \cdot 2 = 10$ as well. Examples 6.4 and 6.5 were constructed under a heuristic in which ``coalescing'' zeros explain higher-order singularities. These lead to a perhaps overly-optimistic conjecture: \begin{conjecture} If $P \in E(P_{3,6}) \cap \Delta_{3,6}$, then $\delta(P) = 10$, and either $P$ has ten round zeros, or is the limit of psd extremal ternary sextics with ten round zeros. \end{conjecture} These results are likely more complicated in higher degree. The ternary octic \begin{equation} T(x,y,z) = x^4y^4 + x^2z^6+y^2z^6 - 3x^2y^2z^4 = x^4y^4z^6M(1/x,1/y,1/z) \end{equation} is in $E(P_{3,8}) \cap \Delta_{3,8}$; see \cite[p.372]{Re0}. It has five round zeros at $(0,0,1)$ and $(1,\pm1, \pm 1)$, and more singular zeros at $(1,0,0)$ and $(0,1,0)$ at which $\delta = 5$, so that $\delta(T) = 15$. On the other hand, for \begin{equation} U(x,y,z) = x^2(x-z)^2(x-2z)^2(x-3z)^2 + y^2(y-z)^2(y-2z)^2(y-3z)^2 \in \Sigma_{3,8}, \end{equation} ${\mathcal Z}(U) = \{(i,j,1): 0 \le i, j \le 3\}$, so $\delta(U) = 16$. Thus, there is no threshold value for $\delta$ separating $\Sigma_{3,8}$ and $\Delta_{3,8}$, as there is for sextics. \section{Ternary forms in higher degree} For $d \ge 3$, let \begin{equation} T_d = \{(i,j)\ : \ 0 \le i, j,\ i+j \le d\} \subset \mathbb Z^2 \end{equation} denote a right triangle of $\frac{(d+1)(d+2)}2$ lattice points. Define the falling product by \begin{equation} (t)_m=\prod_{j=0}^{m-1}(t-j). 
\end{equation} The following construction is due to Biermann \cite{Bie}, see \cite[pp.31-32]{Re1}. For $(r,s) \in T_d$, let \begin{equation} \phi_{r,s,d}(x,y) := \frac{ (x)_r(y)_s(d-x-y)_{d-r-s}}{r!s!(d-r-s)!}. \end{equation} \begin{lemma} If $(i,j) \in T_d$, then $\phi_{r,s,d}(i,j) = 0$ if $(i,j) \neq (r,s)$ and $\phi_{r,s,d}(r,s) = 1$. \end{lemma} \begin{proof} Observe that $(n)_m=0$ if $n \in \{0,\dots,m-1\}$ and $(m)_m = m!$. If $(i,j) \in T_d$, then $0 \le i$, $0 \le j$ and $0 \le d - i - j$. Thus $\phi_{r,s,d}(i,j) = 0$ unless $i \ge r$, $j \ge s$ and $d-i-j \ge d-r-s$; the last condition says $i+j \le r+s$, so the three together force $(i,j) = (r,s)$. The second assertion is immediate. \end{proof} \begin{theorem} Suppose $B \subseteq T_d$ and $A = T_d \smallsetminus B$. Then a basis for $I_{1,d}(A)$ is given by $\{\phi_{r,s,d} : (r,s) \in B\}$. \end{theorem} \begin{proof} The set $\{\phi_{r,s,d}: (r,s) \in T_d\}$ consists of the correct number of linearly independent polynomials and so is a basis for ${\mathbb R}_{2,d}$. If $p \in {\mathbb R}_{2,d}$, then upon evaluation at $(r,s) \in T_d$, we immediately obtain \begin{equation} p(x,y) = \sum_{(r,s) \in T_d} p(r,s)\phi_{r,s,d}(x,y). \end{equation} If $p \in I_{1,d}(A)$, then $\phi_{r,s,d}$ has non-zero coefficient in (8.4) only if $(r,s) \in B$. \end{proof} We use this construction in the following example, which was inspired by looking at the regular pattern of pine trees below the Sulphur Mountain tram, during a break in the October 2006 BIRS program on ``Positive Polynomials and Optimization''. \begin{example}[The Banff Gondola Polynomials] Suppose $d \ge 3$ and let \begin{equation} A_d = T_d \smallsetminus \{(d,0),(0,d)\} = \{(i,j): 0 \le i,j \le d-1, i+j \le d\}.
\end{equation} By Theorem 8.2, $I_{1,d}(A_d)$ is spanned by $f_1(x,y) = \phi_{d,0,d}(x,y) = (x)_d$ and $f_2(x,y) = \phi_{0,d,d}(x,y) = (y)_d$, and it is easy to see that ${\mathcal Z}(f_1) \cap {\mathcal Z}(f_2) = \{0,\dots,d-1\}^2$, so that \begin{equation} \tilde A_d = \{(i,j): 0 \le i,j \le d-1, i+j \ge d+1\}. \end{equation} Note that $(i,j) \in \tilde A_d$ implies that $i,j \ge 2$. Let \begin{equation} \begin{gathered} g_d(x,y) = (x)_2(y)_2(x+y-2)_{d-1}(x+y-4)_{d-3} \\= x(x-1)y(y-1)(x+y-2)(x+y-3) \prod_{k=0}^{d-4} (x+y-4-k)^2. \end{gathered} \end{equation} We claim that $g_d$ is singular at $\pi \in A_d$ and positive at $\pi \in \tilde A_d$. First, it is easy to check that each point in $A_3$ lies on at least two of the lines, and $g_3(2,2) =8$. Now suppose $d \ge 4$ and $(r,s) \in A_d$. If $4 \le r+s \le d$, then $(r,s)$ lies on a squared factor; if $2 \le r+s \le 3$, then $(r,s)$ lies on $x+y-2=0$ or $x+y-3=0$, but also, at least one of $\{r,s\}$ is 0 or 1. Finally, if $0 \le r+s \le 1$, then $\{r,s\}\subseteq \{0,1\}$. If $(r,s) \in \tilde A_d$ for any $d$, then $r,s \ge 2$ and $r+s \ge d+1$, so each factor in $g_d$ is positive at $(r,s)$. It follows from Theorem 3.4 that there exists $c_d > 0$ so that \begin{equation} (x)_d^2 + (y)_d^2 + c_d(x)_2(y)_2(x+y-2)_{d-1}(x+y-4)_{d-3} \end{equation} is psd and not a sum of squares. Note that this polynomial has at least $|A_d|$ zeros, so $B_{3,2d} \ge \frac{d^2+3d-2}2$. This improves the lower bound in (7.3) for $2d = 8, 10$. It can be shown that $c_3 = 4/3$ (exactly) and that $c_d \le 12d^{-2}$, so $c_d \to 0$. \end{example} We conclude with some speculations about Hilbert's Method in degree $d\ge 4$. Suppose $A$ is a set of $\binom {d+2}2-2$ points in general position, so that $I_{1,d}(A)$ has basis $\{f_1,f_2\}$. By Bezout, we can only say that $|\tilde A| \le d^2 - |A| = \binom{d-1}2$ as the common zeros do not have to be real or distinct.
We have $\dim I_{1,d}^2(A) = 3$ and, from (2.5), \begin{equation} \dim I_{2,2d}(A) \ge \binom{2d+2}2 - 3\left(\binom {d+2}2-2\right) = \binom{d-1}2 + 3. \end{equation} There exist $\binom{d-1}2$ linearly independent polynomials in $I_{2,2d}(A) \smallsetminus I_{1,d}^2(A)$, and it is plausible that one is positive on $\tilde A$. If so, then Hilbert's Method could be applied. If $r \ge 3$, and $A$ is a set of $\binom {d+2}2-r$ points in general position, so that $\dim I_{1,d}(A) = r$, then it is plausible to expect $\tilde {\mathcal A} = \emptyset$. We have \begin{equation} \begin{gathered} \dim I_{2,2d}(A) \ge \binom{2d+2}2 - 3\left(\binom {d+2}2-r\right) = \binom{d-1}2 + 3r-3\\ = \frac {r(r+1)}2 + \frac{(d+1-r)(d+r-4)}2 \ge \dim I_{1,d}^2 + \frac{(d+1-r)(d+r-4)}2, \end{gathered} \end{equation} so if $r \le d$, $I_{2,2d}(A) \smallsetminus I_{1,d}^2(A)$ would be non-empty, and again Hilbert's Method could be applied. We hope to return to these questions elsewhere.
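The interpolation property of Lemma 8.1, on which Theorem 8.2 and the constructions of this section rest, is easy to confirm in exact rational arithmetic. The sketch below is ours, not part of the paper; it checks that the Biermann polynomials $\phi_{r,s,d}$ of (8.3) are a Kronecker-delta system on the triangle $T_d$:

```python
from fractions import Fraction as Fr
from math import factorial

def falling(t, m):
    """(t)_m = t(t-1)...(t-m+1), the falling product of (8.2)."""
    p = Fr(1)
    for j in range(m):
        p *= (t - j)
    return p

def phi(r, s, d, x, y):
    """Biermann polynomial phi_{r,s,d} of (8.3), evaluated at (x, y)."""
    return (falling(x, r) * falling(y, s) * falling(d - x - y, d - r - s)
            / (factorial(r) * factorial(s) * factorial(d - r - s)))

for d in (3, 4, 5):
    Td = [(i, j) for i in range(d + 1) for j in range(d + 1) if i + j <= d]
    for (r, s) in Td:
        for (i, j) in Td:
            expected = 1 if (i, j) == (r, s) else 0   # Lemma 8.1
            assert phi(r, s, d, i, j) == expected
print("Lemma 8.1 verified for d = 3, 4, 5")
```

Since the $\phi_{r,s,d}$ take the value 1 at their own lattice point and 0 at every other point of $T_d$, the expansion (8.4) and the basis statement of Theorem 8.2 follow immediately.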
https://arxiv.org/abs/2005.01931
A strategy for Isolator in the Toucher-Isolator game on trees
In the Toucher-Isolator game, introduced recently by Dowden, Kang, Mikalački and Stojaković, Toucher and Isolator alternately claim an edge from a graph such that Toucher aims to touch as many vertices as possible, while Isolator aims to isolate as many vertices as possible, where Toucher plays first. Among trees with $n$ vertices, they showed that the star is the best choice for Isolator and they asked for the most suitable tree for Toucher. Later, Räty showed that the answer is the path with $n$ vertices. We give a simple alternative proof of this result. The method to determine where Isolator should play is by breaking down the gains and losses in each move of both players.
\section{Introduction}\label{section-introduction} A Maker-Breaker game, introduced by Erd\H{o}s and Selfridge~\cite{erdos1973combinatorial} in 1973, is a positional game played on the complete graph $K_n$ with $n$ vertices, by two players: Maker and Breaker, who alternately claim an edge from the (remaining) graph, where Maker plays first. Maker wins if she can build a particular structure (e.g., a clique~\cite{MR2788689,MR2854626}, a perfect matching~\cite{MR2467817,MR3870440} or a Hamiltonian cycle ~\cite{MR2467817,MR2726601}) from her claimed edges, while Breaker wins if he can prevent this. There are several variants of Maker-Breaker games, many of which are studied recently (see~\cite{espig2015walker,forcan2019walkermaker,gledel2019maker,MR3963857}). The \textit{Toucher-Isolator} game, introduced by Dowden, Kang, Mikala\v{c}ki and Stojakovi\'{c}~\cite{MR4025410} in 2019, is a quantitative version of a Maker-Breaker game played on a finite graph by two players: \textit{Toucher} and \textit{Isolator}, who alternately claim an edge from the (remaining) graph, where Toucher plays first. A vertex is \emph{touched} if it is incident to at least one edge claimed by Toucher, and a vertex is \emph{untouched} if all edges incident to it are claimed by Isolator. The \textit{score} of the game is the number of untouched vertices at the end of the game when all edges have been claimed. Toucher aims at minimizing the score, while Isolator aims at maximizing the score. For a graph $G$, let $u(G)$ be the score of the game on $G$ when both players play optimally. The above mentioned authors gave general upper and lower bounds for $u(G)$, leaving the asymptotic behavior of $u(C_n)$ and $u(P_n)$ as the most interesting unsolved cases, where $C_n$ is a cycle with $n$ vertices and $P_n$ is a path with $n$ vertices. 
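Game values of this kind can be computed for small instances by exhaustive minimax search over all play sequences. The sketch below is ours, not part of the paper (brute force only, feasible up to roughly eight edges); it reproduces, for $3\leq n\leq 8$, the exact values of $u(P_n)$ and $u(C_n)$ established by R\"{a}ty and recalled in the next paragraph:

```python
from functools import lru_cache

def game_value(n, edges):
    """Minimax value of the Toucher-Isolator game on a graph with vertex set
    {0,...,n-1}: Toucher (minimizer) moves first; the score is the number of
    vertices all of whose incident edges end up claimed by Isolator."""
    incident = [frozenset(e for e, (a, b) in enumerate(edges) if v in (a, b))
                for v in range(n)]

    @lru_cache(maxsize=None)
    def play(toucher, iso):
        free = [e for e in range(len(edges)) if e not in toucher and e not in iso]
        if not free:   # all edges claimed: count the untouched vertices
            return sum(1 for v in range(n) if incident[v] and incident[v] <= iso)
        if len(toucher) == len(iso):                        # Toucher minimizes
            return min(play(toucher | {e}, iso) for e in free)
        return max(play(toucher, iso | {e}) for e in free)  # Isolator maximizes

    return play(frozenset(), frozenset())

def path(n):  return [(i, i + 1) for i in range(n - 1)]
def cycle(n): return [(i, (i + 1) % n) for i in range(n)]

for n in range(3, 9):
    assert game_value(n, path(n)) == (n + 3) // 5    # u(P_n), per Raty
    assert game_value(n, cycle(n)) == (n + 1) // 5   # u(C_n), per Raty
print("u(P_n) and u(C_n) confirmed for 3 <= n <= 8")
```

The state space is memoized on the pair of claimed-edge sets, so the search is instantaneous at this scale; it grows factorially and is not a substitute for the asymptotic arguments discussed below.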
Later in 2019, R\"{a}ty~\cite{raty2019achievement} determined the exact values of $u(C_n)$ and $u(P_n)$, showing that \begin{center} $u(C_n)=\left\lfloor\dfrac{n+1}{5}\right\rfloor$ \quad and \quad $u(P_n)=\left\lfloor\dfrac{n+3}{5}\right\rfloor.$ \end{center} Moreover, the first set of authors showed that for any tree $T$ with $n\geq3$ vertices, \begin{center} $\dfrac{n+2}{8}\leq u(T)\leq\dfrac{n-1}{2}$, \end{center} where the upper bound is tight when $T$ is a star, but the only tight example they found for the lower bound is a path with six vertices. Therefore, they asked whether there is an infinite family of tight examples for the lower bound, or whether it can be improved for large $n$. Later in 2020, R\"{a}ty~\cite{raty2020toucher} improved the lower bound for $u(T)$ by showing that the path $P_n$ is the most suitable tree with $n$ vertices for Toucher. \begin{thm}\label{thm:uT} Let $T$ be a tree with $n\geq3$ vertices. Then \begin{center} $u(T)\geq\left\lfloor\dfrac{n+3}{5}\right\rfloor$. \end{center} \end{thm} In this paper, we give a simple new proof of this theorem. The argument proceeds as follows. The strategy for Isolator is that, in every move, he claims an edge which immediately creates an untouched vertex, for as long as he can (see Figure~\ref{figure:thm}: left). When no such edge exists, we modify the graph before the game continues. The edges claimed by Isolator can be deleted, as their disappearance does not change the touched/untouched status of any vertex (see Figure~\ref{figure:thm}: middle). Observe that the leaves of the remaining tree are touched, since otherwise Isolator would have claimed the edge incident to them. Then we delete the edges $e$ claimed by Toucher one by one and, in order to keep the game equivalent to the original game, we replace the edges $u_1v,\dots,u_tv$ sharing a vertex $v$ with $e$ by new edges $u_1v_1,\dots,u_tv_t$ keeping their respective Toucher/Isolator status, where the new vertices $v_1,\dots,v_t$ are considered touched.
The resulting graph is a forest all of whose leaves are considered touched (see Figure~\ref{figure:thm}: right). This motivates us to study the \emph{non-leaf Isolator-Toucher game} on a forest $F$, which is a variant of the Toucher-Isolator game on $F$ where Isolator plays first and the score of the game is the number of untouched vertices that are not leaves of $F$ at the end of the game. The aim of Toucher is to minimize the score, while the aim of Isolator is to maximize it. We remark that this game is inspired by the proof of the lower bound for $P_n$ in~\cite{raty2019achievement}. Our main lemma gives a lower bound for the minimum score $\alpha(m,k,l)$ of the non-leaf Isolator-Toucher game on $F$ when both players play optimally, among all forests $F$ with $m$ edges, $k$ components, and $l$ leaves. \begin{figure}[H] \centering \begin{tikzpicture}[baseline=2ex,scale=0.75] \draw[thick,blue] (0.5,7) -- (1,6); \draw[thick,blue] (1,1) -- (2,1); \draw[thick,blue] (1,5) -- (2,5) -- (1.5,4); \draw[thick,blue] (1,8) -- (2,8); \draw[thick,blue] (4,1) -- (5,1); \draw[thick,blue] (3,6) -- (3.5,7) -- (3.5,8); \draw[thick,blue] (1,5) -- (2,5); \draw[thick,blue] (4.5,3) -- (4.5,4); \draw[thick,blue] (4,5) -- (5,5); \draw[thick,red] (1,3) -- (2,3); \draw[thick,red] (1,6) -- (2,6); \draw[thick,red] (2,1) -- (3,0.5); \draw[thick,red] (2,5) -- (2.5,4); \draw[thick,red] (2,8) -- (2.5,9); \draw[thick,red] (3,3) -- (4,3); \draw[thick,red] (3,5) -- (3.5,4); \draw[thick,red] (5,1) -- (5.5,2); \draw[thick,red] (4.5,4) -- (5.5,4); \draw[thick,red] (4.5,7) -- (4,6) -- (5,6); \draw[thick] (2,1) -- (2.5,2) -- (3,3) -- (2,3); \draw[thick] (3,3) -- (2.5,4); \draw[thick] (2,5) -- (3,5) -- (3,6) -- (2,6) ; \draw[thick] (2,8) -- (2.5,7) -- (3,6) -- (4,6) ; \draw[thick] (3.5,4) -- (4,5) -- (4.5,4); \draw[thick] (4,3) -- (4,2) -- (5,1); \draw[thick,->] (6,4.5) -- (7,4.5); \draw[fill=black] (0.5,7) circle (2.5pt); \draw[fill=black] (1,1) circle (2.5pt); \draw[fill=black] (1,3) 
circle (2.5pt); \draw[fill=black] (1,5) circle (2.5pt); \draw[fill=black] (1,6) circle (2.5pt); \draw[fill=black] (1,8) circle (2.5pt); \draw[fill=black] (1.5,4) circle (2.5pt); \draw[fill=black] (2,1) circle (2.5pt); \draw[fill=black] (2,3) circle (2.5pt); \draw[fill=black] (2,5) circle (2.5pt); \draw[fill=black] (2,6) circle (2.5pt); \draw[fill=black] (2,8) circle (2.5pt); \draw[fill=black] (2.5,2) circle (2.5pt); \draw[fill=black] (2.5,4) circle (2.5pt); \draw[fill=black] (2.5,7) circle (2.5pt); \draw[fill=black] (2.5,9) circle (2.5pt); \draw[fill=black] (3,0.5) circle (2.5pt); \draw[fill=black] (3,3) circle (2.5pt); \draw[fill=black] (3,5) circle (2.5pt); \draw[fill=black] (3,6) circle (2.5pt); \draw[fill=black] (3.5,4) circle (2.5pt); \draw[fill=black] (3.5,7) circle (2.5pt); \draw[fill=black] (3.5,8) circle (2.5pt); \draw[fill=black] (4,1) circle (2.5pt); \draw[fill=black] (4,2) circle (2.5pt); \draw[fill=black] (4,3) circle (2.5pt); \draw[fill=black] (4,5) circle (2.5pt); \draw[fill=black] (4,6) circle (2.5pt); \draw[fill=black] (4.5,3) circle (2.5pt); \draw[fill=black] (4.5,4) circle (2.5pt); \draw[fill=black] (4.5,7) circle (2.5pt); \draw[fill=black] (5,1) circle (2.5pt); \draw[fill=black] (5,5) circle (2.5pt); \draw[fill=black] (5,6) circle (2.5pt); \draw[fill=black] (5.5,2) circle (2.5pt); \draw[fill=black] (5.5,4) circle (2.5pt); \draw[thick,red] (7.5,3) -- (8.5,3); \draw[thick,red] (7.5,6) -- (8.5,6); \draw[thick,red] (8.5,1) -- (9.5,0.5); \draw[thick,red] (8.5,5) -- (9,4); \draw[thick,red] (8.5,8) -- (9,9); \draw[thick,red] (9.5,3) -- (10.5,3); \draw[thick,red] (9.5,5) -- (10,4); \draw[thick,red] (11.5,1) -- (12,2); \draw[thick,red] (11,4) -- (12,4); \draw[thick,red] (11,7) -- (10.5,6) -- (11.5,6); \draw[thick] (8.5,1) -- (9,2) -- (9.5,3) -- (8.5,3); \draw[thick] (9.5,3) -- (9,4); \draw[thick] (8.5,5) -- (9.5,5) -- (9.5,6) -- (8.5,6) ; \draw[thick] (8.5,8) -- (9,7) -- (9.5,6) -- (10.5,6) ; \draw[thick] (10,4) -- (10.5,5) -- (11,4); \draw[thick] 
(10.5,3) -- (10.5,2) -- (11.5,1); \draw[thick,->] (12.5,4.5) -- (13.5,4.5); \draw[fill=black] (7.5,3) circle (2.5pt); \draw[fill=black] (7.5,6) circle (2.5pt); \draw[fill=black] (8.5,1) circle (2.5pt); \draw[fill=black] (8.5,3) circle (2.5pt); \draw[fill=black] (8.5,5) circle (2.5pt); \draw[fill=black] (8.5,6) circle (2.5pt); \draw[fill=black] (8.5,8) circle (2.5pt); \draw[fill=black] (9,2) circle (2.5pt); \draw[fill=black] (9,4) circle (2.5pt); \draw[fill=black] (9,7) circle (2.5pt); \draw[fill=black] (9,9) circle (2.5pt); \draw[fill=black] (9.5,0.5) circle (2.5pt); \draw[fill=black] (9.5,3) circle (2.5pt); \draw[fill=black] (9.5,5) circle (2.5pt); \draw[fill=black] (9.5,6) circle (2.5pt); \draw[fill=black] (10,4) circle (2.5pt); \draw[fill=black] (10.5,2) circle (2.5pt); \draw[fill=black] (10.5,3) circle (2.5pt); \draw[fill=black] (10.5,5) circle (2.5pt); \draw[fill=black] (10.5,6) circle (2.5pt); \draw[fill=black] (11,4) circle (2.5pt); \draw[fill=black] (11,7) circle (2.5pt); \draw[fill=black] (11.5,1) circle (2.5pt); \draw[fill=black] (11.5,6) circle (2.5pt); \draw[fill=black] (12,2) circle (2.5pt); \draw[fill=black] (12,4) circle (2.5pt); \draw[thick] (14,3) -- (15,3); \draw[thick] (14,0.5) -- (14.5,1.5) -- (15,2.5); \draw[thick] (14.5,4.5) -- (15,3.5); \draw[thick] (14,5) -- (15,5); \draw[thick] (14.5,8) -- (15,7) -- (15.5,6) -- (15.5,5); \draw[thick] (14.5,6) -- (16.5,6) ; \draw[thick] (16.5,3) -- (16.5,2) -- (17.5,1); \draw[thick] (16,4) -- (16.5,5) -- (17,4) ; \draw[fill=black] (14,3) circle (2.5pt); \draw[fill=black] (14,5) circle (2.5pt); \draw[fill=black] (14,0.5) circle (2.5pt); \draw[fill=black] (14.5,4.5) circle (2.5pt); \draw[fill=black] (14.5,6) circle (2.5pt); \draw[fill=black] (14.5,8) circle (2.5pt); \draw[fill=black] (14.5,1.5) circle (2.5pt); \draw[fill=black] (15,3) circle (2.5pt); \draw[fill=black] (15,5) circle (2.5pt); \draw[fill=black] (15,7) circle (2.5pt); \draw[fill=black] (15,2.5) circle (2.5pt); \draw[fill=black] (15,3.5) circle 
(2.5pt); \draw[fill=black] (15.5,5) circle (2.5pt); \draw[fill=black] (15.5,6) circle (2.5pt); \draw[fill=black] (16,4) circle (2.5pt); \draw[fill=black] (16.5,2) circle (2.5pt); \draw[fill=black] (16.5,3) circle (2.5pt); \draw[fill=black] (16.5,5) circle (2.5pt); \draw[fill=black] (16.5,6) circle (2.5pt); \draw[fill=black] (17.5,1) circle (2.5pt); \draw[fill=black] (17,4) circle (2.5pt); \end{tikzpicture} \caption{The strategy for Isolator in the Toucher-Isolator game on a tree and the modification of the graph, where the red and blue edges are Toucher and Isolator edges respectively.} \label{figure:thm} \end{figure} \begin{lem} \label{lem:alpha} For non-negative numbers $m$, $k$ and $l$, \begin{center} $\alpha(m,k,l)\geq\left\lfloor\dfrac{m+4k-3l+4}{5}\right\rfloor$. \end{center} \end{lem} The strategy for Isolator in the non-leaf Isolator-Toucher game is that he claims consecutive edges for as long as he can, so that every move except the first one immediately creates an untouched vertex, and then he repeats this in a different part of the forest. The key step is to determine which part of the forest is the most profitable for Isolator to play in. We do this by breaking down the gains and losses in each move of both players. The rest of this paper is organized as follows. Section~\ref{section-the proofs} is devoted to proving Lemma~\ref{lem:alpha} and then applying it to prove Theorem~\ref{thm:uT}. In Section~\ref{section-conclusion}, we give some concluding remarks and mention related interesting questions. \section{The Proofs}\label{section-the proofs} Before proving Lemma~\ref{lem:alpha} and Theorem~\ref{thm:uT}, we give some definitions necessary for the proofs and make observations regarding how to modify the graph after deleting some edges, to keep the game equivalent to the original game, and how much Isolator gains in each move of both players. For convenience, we first give some names to vertices and edges in a forest. 
A \emph{leaf} is a vertex of degree $1$. A \emph{small vertex} is a vertex of degree $2$. A \emph{big vertex} is a vertex of degree at least~$3$. A \emph{big edge} is an edge incident to a big vertex. A \emph{leaf edge} is an edge incident to a leaf. An \textit{internal vertex} of a subgraph is a vertex adjacent to no vertex outside the subgraph. We also give some names to paths in a forest. A \emph{path component} is a path such that the non-endpoint vertices are internal and both endpoints are leaves. A \emph{branch} is a path such that the non-endpoint vertices are internal and both endpoints are big. A \emph{twig} is a path such that the non-endpoint vertices are internal and one endpoint is a leaf while the other is big. Finally, we define some game-related terms. A \emph{Toucher edge} is an edge claimed by Toucher. An \emph{Isolator edge} is an edge claimed by Isolator. An \textit{Isolator subgraph} is a subgraph whose edges are Isolator edges. An \emph{Isolator path} is an Isolator subgraph which is either a path component, a branch or a twig. A \emph{partially played graph} is a graph where each edge is either a Toucher edge, an Isolator edge or an unclaimed edge. We now describe how a partially played graph should be modified after deleting a Toucher edge or an Isolator subgraph, in order to keep the game equivalent to the original game. 
For a partially played graph $G$ with a Toucher edge $uv$, we define $G\circleddash uv$ to be the partially played graph obtained from $G$ by \begin{itemize} \item deleting the vertices $u$ and $v$, \item adding new vertices $u_1,\dots,u_{d(u)-1}$ and joining $u_i$ to $u'_i$ where $N(u)\setminus \{v\}=\{u'_1,\dots,u'_{d(u)-1}\}$ such that if $uu'_i$ has been claimed by a player, then we let $u_iu'_i$ be claimed by the same player, \item adding new vertices $v_1,\dots,v_{d(v)-1}$ and joining $v_i$ to $v'_i$ where $N(v)\setminus \{u\}=\{v'_1,\dots,v'_{d(v)-1}\}$ such that if $vv'_i$ has been claimed by a player, then we let $v_iv'_i$ be claimed by the same player, \end{itemize} where $N(v)$ denotes the neighborhood of vertex $v$ and $d(v)$ denotes the degree of vertex $v$. \begin{figure}[H] \centering \begin{tikzpicture}[baseline=2ex] \draw[thick,red] (2,2) -- (3,2); \draw[thick,red] (9,2.5) -- (10,2.5); \draw[thick,red] (1,2.5) -- (2,2); \draw[thick,red] (5,1.5) -- (4,2); \draw[thick,red](13.5,1.5) -- (12.5,2); \draw[thick,blue] (9,1.5) -- (10,1.5); \draw[thick,blue] (4,2) -- (5,2); \draw[thick,blue] (1,1.5) -- (2,2); \draw[thick,blue] (12.5,2) -- (13.5,2); \draw[thick] (0,1) -- (1,1.5) -- (0,2); \draw[thick] (2,2) -- (1,2); \draw[thick] (0,3) -- (1,2.5); \draw[thick] (3,2) -- (4,2); \draw[thick] (4,2) -- (5,2.5); \draw[thick,->] (6,2) -- (7,2); \draw[thick] (8,1) -- (9,1.5) -- (8,2); \draw[thick] (9,2) -- (10,2); \draw[thick] (9,2.5) -- (8,3); \draw[thick] (11.5,2) -- (12.5,2); \draw[thick] (12.5,2) -- (13.5,2.5); \draw[fill=black] (0,1) circle (2.5pt); \draw[fill=black] (0,2) circle (2.5pt); \draw[fill=black] (0,3) circle (2.5pt); \draw[fill=black] (1,1.5) circle (2.5pt); \draw[fill=black] (1,2) circle (2.5pt); \draw[fill=black] (1,2.5) circle (2.5pt); \draw[fill=black] (2,2) circle (2.5pt) node[above=2pt] {$u$}; \draw[fill=black] (3,2) circle (2.5pt) node[above=2pt] {$v$}; \draw[fill=black] (4,2) circle (2.5pt); \draw[fill=black] (5,2) circle (2.5pt); 
\draw[fill=black] (5,1.5) circle (2.5pt); \draw[fill=black] (5,2.5) circle (2.5pt); \draw[fill] (2.5,0.5) node[above=2pt] {$G$}; \draw[fill=black] (8,1) circle (2.5pt); \draw[fill=black] (8,2) circle (2.5pt); \draw[fill=black] (8,3) circle (2.5pt); \draw[fill=black] (9,1.5) circle (2.5pt); \draw[fill=black] (9,2) circle (2.5pt); \draw[fill=black] (9,2.5) circle (2.5pt); \draw[fill=black] (10,1.5) circle (2.5pt) node[right=2pt] {$u_3$}; \draw[fill=black] (10,2) circle (2.5pt) node[right=2pt] {$u_2$}; \draw[fill=black] (10,2.5) circle (2.5pt) node[right=2pt] {$u_1$}; \draw[fill=black] (11.5,2) circle (2.5pt) node[left=2pt] {$v_1$}; \draw[fill=black] (12.5,2) circle (2.5pt); \draw[fill=black] (13.5,1.5) circle (2.5pt); \draw[fill=black] (13.5,2) circle (2.5pt); \draw[fill=black] (13.5,2.5) circle (2.5pt); \draw[fill] (11,0.5) node[above=2pt] {$G\circleddash uv$}; \end{tikzpicture} \caption{The partially played graph $G\circleddash uv$, where the red and blue edges are Toucher and Isolator edges respectively.} \end{figure} For a partially played graph $G$ with an Isolator subgraph $H$, we define $G\circleddash H$ to be the partially played graph obtained from $G$ by deleting the edges of $H$ and the internal vertices of~$H$. 
\begin{figure}[H] \centering \begin{tikzpicture}[baseline=2ex] \draw[thick,blue] (0,1) -- (1,1.5) -- (0,2); \draw[thick,blue] (0,3) -- (1,2.5); \draw[thick,blue] (3,2) -- (4,2) -- (5,2); \draw[thick,blue] (2,2) -- (1,1.5); \draw[thick,red] (1,2.5) -- (2,2); \draw[thick,red] (9,2.5) -- (10,2); \draw[thick,red] (12,2) -- (13,2); \draw[thick,red] (5,2) --(6,2); \draw[thick] (1,2) -- (3,2); \draw[thick] (1,2) -- (2,2); \draw[thick] (6,1.5) -- (5,2) -- (6,2.5); \draw[thick,->] (7,2) -- (8,2); \draw[thick] (10,2) -- (9,2); \draw[thick] (9,2) -- (10,2) -- (11,2) ; \draw[thick] (13,1.5) -- (12,2) -- (13,2.5); \draw[fill=black] (0,1) circle (2.5pt); \draw[fill=black] (0,2) circle (2.5pt); \draw[fill=black] (0,3) circle (2.5pt); \draw[fill=black] (1,1.5) circle (2.5pt); \draw[fill=black] (1,2) circle (2.5pt); \draw[fill=black] (1,2.5) circle (2.5pt); \draw[fill=black] (2,2) circle (2.5pt); \draw[fill=black] (3,2) circle (2.5pt); \draw[fill=black] (4,2) circle (2.5pt); \draw[fill=black] (5,2) circle (2.5pt); \draw[fill=black] (6,2) circle (2.5pt); \draw[fill=black] (6,1.5) circle (2.5pt); \draw[fill=black] (6,2.5) circle (2.5pt); \draw[fill] (3.5,0.5) node[above=2pt] {$G$}; \draw[fill=black] (9,2) circle (2.5pt); \draw[fill=black] (9,2.5) circle (2.5pt); \draw[fill=black] (10,2) circle (2.5pt); \draw[fill=black] (11,2) circle (2.5pt); \draw[fill=black] (12,2) circle (2.5pt); \draw[fill=black] (13,2) circle (2.5pt); \draw[fill=black] (13,1.5) circle (2.5pt); \draw[fill=black] (13,2.5) circle (2.5pt); \draw[fill] (11,0.5) node[above=2pt] {$G\circleddash H$}; \end{tikzpicture} \caption{The partially played graph $G\circleddash H$, where the red and blue edges are Toucher and Isolator edges respectively.} \end{figure} \begin{prop} \label{prop:equi} \begin{enumerate}[(i)] \item The non-leaf Isolator-Toucher game on a partially played graph $G$ with a Toucher edge $e$ is equivalent to that on $G\circleddash e$. 
\item The Toucher-Isolator and the non-leaf Isolator-Toucher games on a partially played graph $G$ with an Isolator subgraph $H$ with $r$ internal vertices are equivalent to their respective versions on $G\circleddash H$ with an extra score of $r$. \item The score of the non-leaf Isolator-Toucher game on a partially played graph $G$ when both players play optimally is equal to that on $G-U$, where $U$ is the set of vertices of path components of length $1$ in $G$. \end{enumerate} \end{prop} \begin{proof} $(i)$ Clearly, there is a bijection between the edges of $G-e$ and $G\circleddash e$. The endpoints of the Toucher edge $e$ in the game on $G$ and the new leaves in the game on $G\circleddash e$ are not counted in the score of each game. $(ii)$ Clearly, there is a bijection between the edges of $G-E(H)$ and $G\circleddash H$. Deleting an Isolator edge does not change the touched/untouched status of its endpoints. An extra score of $r$ comes from the internal vertices of~$H$. $(iii)$ A player gains nothing by claiming a path component of length $1$ because its vertices are leaves, which are not counted in the score. \end{proof} Next, in order to determine which part of the forest is the most profitable for Isolator to play in, it is useful to calculate the changes in the number of edges, components and leaves of the forest when deleting a Toucher edge or an Isolator path. Moreover, deleting path components of length $1$ also produces a profit. \begin{prop} \label{prop:profit} \begin{enumerate} [(i)] \item Let $G$ be a partially played graph which is a forest with $m$ edges, $k$ components and $l$ leaves, and let $uv$ be a Toucher edge in $G$. Suppose $G\circleddash uv$ is a forest with $m+\Delta m$ edges, $k+\Delta k$ components and $l+\Delta l$ leaves. Then the change in $m+4k-3l$ is as in Table~\ref{table:toucher} and the profit $p_T(G,uv)=\Delta(m+4k-3l)+3$ is non-negative. 
\begin{table}[H] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{|p{12mm}|p{12mm}|c|r|r|r|c|} \hline \multicolumn{2}{|c|}{Toucher edge $uv$} & \multirow{2}{*}{$\Delta m$} & \multicolumn{1}{c|}{\multirow{2}{*}{$\Delta k$}} & \multicolumn{1}{c|}{\multirow{2}{*}{$\Delta l$}} & \multicolumn{1}{c|}{\multirow{2}{*}{$\Delta(m+4k-3l)$}} & \multicolumn{1}{c|}{\multirow{2}{*}{$p_T(G,uv)$}} \\ \cline{1-2} \multicolumn{1}{|c|}{\phantom{text}$u$\phantom{text}} & \multicolumn{1}{c|}{\phantom{text}$v$\phantom{text}} & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \hline \multicolumn{1}{|c|}{small} & \multicolumn{1}{c|}{small} & $-1$ & $1$ & $2$ & $-3$& \phantom{$\geq$ }$0$ \\ \hline \multicolumn{1}{|c|}{small} &\multicolumn{1}{c|}{big} & $-1$ & $d(v)-1$ & $d(v)$ & $d(v)-5\geq-2$ & $\geq1$ \\ \hline \multicolumn{1}{|c|}{small} &\multicolumn{1}{c|}{leaf} & $-1$ & $0$ & $0$ & $-1$ & \phantom{$\geq$ }$2$\\ \hline \multicolumn{1}{|c|}{big} & \multicolumn{1}{c|}{big} & $-1$ & $d(u)+d(v)-3$ & $d(u)+d(v)-2$ &$d(u)+d(v)-7\geq-1$ & $\geq2$\\ \hline \multicolumn{1}{|c|}{big} & \multicolumn{1}{c|}{leaf} & $-1$ & $d(u)-2$ & $d(u)-2$ & $d(u)-3\geq\phantom{-}0$ & $\geq3$ \\ \hline \multicolumn{1}{|c|}{leaf} & \multicolumn{1}{c|}{leaf} & $-1$ & $-1$ & $-2$ & $1$ & \phantom{$\geq$ }$4$ \\ \hline \end{tabular}} \caption{The profit of deleting a Toucher edge.}\label{table:toucher} \end{table} \item Let $G$ be a partially played graph which is a forest with $m$ edges, $k$ components and $l$ leaves, and let $P$ be an Isolator path of length $r+1$ in $G$. Suppose $G\circleddash P$ is a forest with $m+\Delta m$ edges, $k+\Delta k$ components and $l+\Delta l$ leaves. Then the change in $m+4k-3l$ is as in Table~\ref{table:isolator} and the profit $p_I(G,P)=\Delta(m+4k-3l)+r-1$ is non-negative. 
\begin{table}[H] \centering \begin{tabular}{|c|c|c|r|r|c|c|} \hline \multicolumn{2}{|c|}{$u,v$-Isolator path} & \multirow{2}{*}{\phantom{t}$\Delta m$\phantom{t}} & \multicolumn{1}{c|}{\multirow{2}{*}{\phantom{t}$\Delta k$\phantom{t}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\phantom{t}$\Delta l$\phantom{t}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\phantom{t}$\Delta(m+4k-3l)$\phantom{t}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\phantom{t}$p_I(G,P)$\phantom{t}}} \\ \cline{1-2} \multicolumn{1}{|c|}{\phantom{text}$u$\phantom{text}} & \multicolumn{1}{c|}{\phantom{text}$v$\phantom{text}} & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \hline \multicolumn{1}{|c|}{leaf} & \multicolumn{1}{c|}{leaf} & $-(r+1)$ & $-1$ & $-2$ & $-r+1$ & $0$ \\ \hline \multicolumn{1}{|c|}{big} & \multicolumn{1}{c|}{leaf} & $-(r+1)$ & $0$ & $-1$ & $-r+2$ & $1$ \\ \hline \multicolumn{1}{|c|}{big} & \multicolumn{1}{c|}{big} & $-(r+1)$ &$1$ & $0$ & $-r+3$ & $2$ \\ \hline \end{tabular} \caption{The profit of deleting an Isolator path.} \label{table:isolator} \end{table} \item Let $G$ be a partially played graph which is a forest with $m$ edges, $k$ components, $l$ leaves and $q$ path components of length $1$. Suppose $G-U$ is a forest with $m+\Delta m$ edges, $k+\Delta k$ components and $l+\Delta l$ leaves. Then the change in $m+4k-3l$ is as in Table~\ref{table:p2} and the profit $p_L(G,U)=\Delta(m+4k-3l)$ is equal to $q$. 
\begin{table}[H] \centering \begin{tabular}{|c|r|r|c|c|} \hline \multicolumn{1}{|c|}{\phantom{t}$\Delta m$\phantom{t}} & \multicolumn{1}{c|}{\phantom{t}$\Delta k$\phantom{t}} & \multicolumn{1}{c|}{\phantom{t}$\Delta l$\phantom{t}} & \multicolumn{1}{c|}{\phantom{t}$\Delta(m+4k-3l)$\phantom{t}} & \multicolumn{1}{c|}{\phantom{t}$p_L(G,U)$\phantom{t}} \\ \hline $-q$ & $-q$ & $-2q$ & $q$ & $q$\\ \hline \end{tabular} \caption{The profit of deleting $q$ path components of length $1$.} \label{table:p2} \end{table} \end{enumerate} \end{prop} \begin{proof} The calculation steps are shown in the tables. The profit $p_T(G,uv)\geq0$ since the term $+3$ in the definition of $p_T(G,uv)$ comes from $(-1)$ times the minimum value of $\Delta(m+4k-3l)$ in Table~\ref{table:toucher}. The profit $p_I(G,P)\geq0$ since the term $+(r-1)$ in the definition of $p_I(G,P)$ comes from $(-1)$ times the minimum value of $\Delta(m+4k-3l)$ in Table~\ref{table:isolator}. \end{proof} We are now ready to prove our main lemma, which provides a lower bound for $\alpha(m,k,l)$ of the non-leaf Isolator-Toucher game on a forest. \begin{proof}[Proof of Lemma~\ref{lem:alpha}] We use induction on the number $m$ of edges in a forest. Let $F$ be a forest with $n$ vertices, $m$ edges, $k$ components, $l$ leaves, $a$ small vertices and $b$ big vertices. The base case is when all path components have length at most $2$, all branches have length at most $2$, and all twigs have length at most $1$. In this case, we shall show that $\left\lfloor\frac{m+4k-3l+4}{5}\right\rfloor\leq0$, and so there is nothing to prove. Since $\sum_{v\in F}d(v)=2m=2(n-k)$, we have $l+2a+\sum_{d(v)\geq3}d(v)=2l+2a+2b-2k$. Then $l=\sum_{d(v)\geq3}d(v)-2b+2k$ and so $l\geq b+2k$. Since every edge in a non-path component is adjacent to a big vertex and every path component contains at most $2$ edges, it follows that \begin{center} $m\leq\displaystyle\sum_{d(v)\geq3}d(v)+2k=l+2b\leq 3l-4k$. 
\end{center} Now, we suppose that there is either a path component of length at least $3$, a branch of length at least $3$, or a twig of length at least $2$. Isolator's strategy is to keep claiming consecutive edges, for as long as he can, to form an Isolator path. Therefore, he only plays within a path component, a branch, or a twig, say $P$. We label the edges of $P$ by $e_1, e_2,\dots, e_s$ respectively, starting from a big edge (if one exists). Note that we shall use this convention to label any path component, branch, or twig in this proof. Assuming he has claimed the edges $e_t, e_{t+1},\dots, e_{t+r}$, he then claims $e_{t-1}$ or $e_{t+r+1}$ if it is available; otherwise he stops. That is, he stops if ($t=1$ or $e_{t-1}$ is a Toucher edge) and ($t+r=s$ or $e_{t+r+1}$ is a Toucher edge). Suppose Isolator stops with edges $e_t, e_{t+1},\dots, e_{t+r}$. Then these edges form a path $Q$. So far, both players have claimed $r+1$ edges each since Isolator plays first, and the score is $r$ since Isolator creates an untouched vertex in every move except the first one. We note that the case where Toucher has claimed only $r$ edges, because all edges had already been claimed, can be proved similarly. Let $G$ be the partially played graph at this step. Let $f_1,\dots,f_{r+1}$ be the Toucher edges in $G$, and let $G_1=G\circleddash f_1\circleddash\dots\circleddash f_{r+1}$ be a forest with $m_1$ edges, $k_1$ components and $l_1$ leaves, let $G_2=G_1\circleddash Q$ be a forest with $m_2$ edges, $k_2$ components and $l_2$ leaves, and let $G_3=G_2-U$ be a forest with $m_3$ edges, $k_3$ components and $l_3$ leaves, where $U$ is the set of vertices of path components of length $1$ in $G_2$. By Proposition~\ref{prop:equi}, the game on $G$ is equivalent to the game on $G_1$, which is equivalent to the game on $G_2$ with an extra score of $r$, and the score of the game on $G_2$ when both players play optimally is equal to that on $G_3$. 
Therefore, it follows that \begingroup \allowdisplaybreaks \begin{align*} \alpha(m,k,l)&\geq r+\alpha(m_3,k_3,l_3) \\ &\geq r+\left\lfloor\dfrac{m_3+4k_3-3l_3+4}{5}\right\rfloor \hskip 5mm \text{(by the induction hypothesis)} \\ &= r+\left\lfloor\dfrac{m+4k-3l+4}{5}+\dfrac{\Delta_1(m+4k-3l)}{5}+\dfrac{\Delta_2(m+4k-3l)}{5} +\dfrac{\Delta_3(m+4k-3l)}{5} \right \rfloor \\ &= r+\left\lfloor\dfrac{m+4k-3l+4}{5}+\dfrac{\sum_{i=0}^{r} (-3+p_T(G\circleddash f_1\circleddash\dots\circleddash f_{i},f_{i+1}))}{5}\right. \\ &\hskip 38mm \left.+\dfrac{-r+1+p_I(G_1,Q)}{5}+\dfrac{p_{L}(G_2,U)}{5}\right\rfloor \\ &\hskip 38mm \hskip 5mm \text{(by Proposition~\ref{prop:profit} since }Q\text{ is an Isolator path in }G_1) \\ &= r+\left\lfloor\dfrac{m+4k-3l+4}{5}+\dfrac{-3(r+1)+p_T}{5}+\dfrac{-r+1+p_I}{5}+\dfrac{p_{L}}{5}\right\rfloor \\ &=\left\lfloor\dfrac{m+4k-3l+4}{5}+\dfrac{r+p_T+p_I+p_{L}-2}{5}\right\rfloor, \end{align*} where \begin{align*} \Delta_1(m+4k-3l)&=(m_1+4k_1-3l_1)-(m+4k-3l), \\ \Delta_2(m+4k-3l)&=(m_2+4k_2-3l_2)-(m_1+4k_1-3l_1), \\ \Delta_3(m+4k-3l)&=(m_3+4k_3-3l_3)-(m_2+4k_2-3l_2), \\ p_T&=\sum_{i=0}^{r}p_T(G\circleddash f_1\circleddash\dots\circleddash f_{i},f_{i+1}),\quad p_I=p_I(G_1,Q)\quad\text{and}\quad p_L=p_L(G_2,U). \end{align*} \endgroup Therefore, it suffices to show that $r+p_T+p_I+p_{L}\geq2$. Since every term in the sum $r+\sum_{i=0}^{r} p_T(G\circleddash f_1\circleddash\dots\circleddash f_{i},f_{i+1})+p_I+p_{L}$ is non-negative by Proposition~\ref{prop:profit}, we shall find a subset whose sum is at least $2$. Recall that there is either a path component of length at least $3$, a branch of length at least $3$, or a twig of length at least $2$. The proof is divided into five cases. \textbf{Case 1.} There is a path component of length $3$. Isolator claims the edge $e_2$ in his first move. If Toucher claims the leaf edge $e_1$ or $e_3$ in some move, then $p_T\geq2$ by Proposition~\ref{prop:profit}. Otherwise, Isolator claims the edges $e_1$ and $e_3$, hence $r=2$. 
\textbf{Case 2.} There is a path component of length at least $4$. Isolator claims the edge $e_3$ in his first move. If Toucher claims the leaf edge $e_1$ in some move, then $p_T\geq2$ by Proposition~\ref{prop:profit}. If Toucher claims the edge $e_2$ in some move (but not $e_1$), then $G_2$ has a path component $e_1$ of length $1$ and so $p_{L}\geq1$ by Proposition~\ref{prop:profit}. Clearly, $r\geq1$, hence it follows that $r+p_{L}\geq2$. Otherwise, Isolator claims the edges $e_1$ and $e_2$, hence $r\geq2$. \textbf{Case 3.} There is a branch of length at least $3$. Isolator claims the edge $e_2$ in his first move. If Toucher claims the big edge $e_1$ in some move, then $p_T\geq1$ by Proposition~\ref{prop:profit}. Clearly, $r\geq1$, hence it follows that $r+p_T\geq2$. If Toucher claims the edge $e_3$ in some move, then $p_I\geq1$ by Proposition~\ref{prop:profit} since Isolator claims the big edge $e_1$. Clearly, $r=1$, hence it follows that $r+p_I\geq2$. Otherwise, Isolator claims the edges $e_1$ and $e_3$, hence $r\geq2$. \textbf{Case 4.} There is a twig of length $2$. Isolator claims the edge $e_1$ in his first move. If Toucher claims the leaf edge $e_2$ in some move, then $p_T\geq2$ by Proposition~\ref{prop:profit}. Otherwise, Isolator claims the edge $e_2$, hence $p_I\geq1$ by Proposition~\ref{prop:profit} since Isolator claims the big edge $e_1$. Clearly, $r=1$, hence it follows that $r+p_I\geq2$. \textbf{Case 5.} There is a twig of length at least $3$. Isolator claims the edge $e_2$ in his first move. If Toucher claims the big edge $e_1$ in some move, then $p_T\geq 1$ by Proposition~\ref{prop:profit}. Clearly, $r\geq1$, hence it follows that $r+p_T\geq2$. If Toucher claims the edge $e_3$ in some move, then $p_I\geq1$ by Proposition~\ref{prop:profit} since Isolator claims the big edge $e_1$. Clearly, $r=1$, hence it follows that $r+p_I\geq2$. Otherwise, Isolator claims the edges $e_1$ and $e_3$, hence $r\geq2$. This completes the proof of Lemma~\ref{lem:alpha}. 
\end{proof} We now prove Theorem~\ref{thm:uT}, which improves the lower bound for $u(T)$ of the Toucher-Isolator game, by applying the result on the non-leaf Isolator-Toucher game in Lemma~\ref{lem:alpha}. \begin{proof}[Proof of Theorem~\ref{thm:uT}] Let $T$ be a tree with $m\geq2$ edges, $k$ components and $l$ leaves. We shall show that \begin{center} $u(T)\geq\left\lfloor\dfrac{m+4}{5}\right\rfloor$. \end{center} For a partially played graph $G$, a \emph{meta-leaf} in $G$ is a leaf in the graph obtained from $G$ by deleting all Isolator edges, and a \emph{meta-leaf edge} in $G$ is an edge incident to a meta-leaf in $G$. Isolator's strategy is to keep claiming an edge which produces a new untouched vertex in every move, i.e., he claims a meta-leaf edge in the current partially played graph if it is available, otherwise he stops (see Figure~\ref{figure:thm}: left). That is, he stops when all meta-leaf edges are Toucher edges. We note that he always obtains a score of one in every move because if he claims the edge $uv$ where $u$ is a meta-leaf, then all already played edges incident to $u$ are Isolator edges, and so $u$ becomes untouched. If the process stops after Isolator's move, i.e., all edges have been claimed by both players, then Isolator obtains a score of $\left\lfloor\frac{m}{2}\right\rfloor\geq\left\lfloor\frac{m+4}{5}\right\rfloor$, as required. Therefore, we may assume that the process stops after Toucher's move, and in particular, $m\geq3$. Suppose Isolator stops after $r$ moves. Let $G$ be the partially played graph at this step. Then $G$ has $r+1$ Toucher edges and $r$ Isolator edges since Toucher plays first. Let $H$ be the Isolator subgraph of $G$ formed by all Isolator edges, and let $G_1=G\circleddash H$ be a forest with $m_1$ edges, $k_1$ components and $l_1$ leaves (see Figure~\ref{figure:thm}: middle). Since Isolator claimed only meta-leaf edges and all meta-leaf edges in $G$ are Toucher edges, $G_1$ is a tree all of whose leaves are touched. 
Since $m\geq3$, each leaf of $G_1$ is incident to a distinct Toucher edge, and so $r+1\geq l_1$. Let $f_1,\dots,f_{r+1}$ be the Toucher edges in $G$, and let $G_2=G_1\circleddash f_1\circleddash\dots\circleddash f_{r+1}$ be a forest with $m_2$ edges, $k_2$ components and $l_2$ leaves (see Figure~\ref{figure:thm}: right). By Proposition~\ref{prop:equi} and the fact that the leaves in $G_1$ are touched, the Toucher-Isolator game on $G$ where Isolator plays first is equivalent to the non-leaf Isolator-Toucher game on $G_1$, which is equivalent to the non-leaf Isolator-Toucher game on $G_2$ with an extra score of $r$. Therefore, it follows that \allowdisplaybreaks \begin{align*} u(T)&\geq r+\alpha(m_2,k_2,l_2)\\ &\geq r+\left\lfloor\dfrac{m+4k-3l+4}{5}+\dfrac{\Delta_1(m+4k-3l)}{5}+\dfrac{\Delta_2(m+4k-3l)}{5}\right\rfloor \hskip 5mm \text{(by Lemma }\ref{lem:alpha})\\ &= r+\left\lfloor\dfrac{m+4k-3l+4}{5}+\dfrac{(m_1-m)+4(k_1-k)-3(l_1-l)}{5}\right. \\ &\phantom{= r. } \left.+\dfrac{\sum_{i=0}^{r} (-3+p_T(G_1\circleddash f_1\circleddash\dots\circleddash f_{i},f_{i+1}))}{5}\right\rfloor \\ &\geq r+\left\lfloor\dfrac{m+4k-3l+4}{5}+\dfrac{(-r)+4(0)-3(l_1-l)}{5}\right. \\ &\phantom{= r. } \left.+\dfrac{-3(r+1)+2l_1}{5}\right\rfloor \hskip 5mm \text{(by Proposition~\ref{prop:profit} since }G_1\text{ has }l_1\text{ leaf edges}) \\ &=\left\lfloor\dfrac{m+4k+r-l_1+1}{5}\right\rfloor \\ &\geq\left\lfloor\dfrac{m+4k}{5}\right\rfloor \hskip 5mm \text{($r-l_1+1\geq0$)} \\ &=\left\lfloor\dfrac{m+4}{5}\right\rfloor, \hskip 5mm \text{($k=1$)} \end{align*} where \begin{align*} \Delta_1(m+4k-3l)&=(m_1+4k_1-3l_1)-(m+4k-3l), \\ \Delta_2(m+4k-3l)&=(m_2+4k_2-3l_2)-(m_1+4k_1-3l_1). \qedhere \end{align*} \end{proof} \section{Concluding Remarks}\label{section-conclusion} As a result of Theorem~\ref{thm:uT}, for any tree $T$ with $n\geq3$ vertices, \begin{center} $u(P_n)\leq u(T)\leq u(S_n)$, \end{center} where $S_n$ is a star with $n$ vertices. 
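For small $n$, this chain of inequalities can be checked exhaustively. The sketch below (the helper names `game_value` and `prufer_tree` are ours, not from the paper) computes the optimal score of the Toucher-Isolator game by minimax over game positions, decodes every labelled tree on six vertices from its Prüfer sequence, and confirms $u(P_6)\leq u(T)\leq u(S_6)$ for all of them; it is a sanity check only, since the search is exponential in the number of edges.

```python
from functools import lru_cache
from itertools import product

def game_value(edges):
    """Optimal score of the Toucher-Isolator game on a small graph.

    Toucher (the minimizer) moves first; the final score is the number
    of vertices all of whose incident edges belong to Isolator."""
    edges = tuple(edges)
    m = len(edges)
    verts = {v for e in edges for v in e}

    @lru_cache(maxsize=None)
    def play(tou, iso):
        if tou | iso == (1 << m) - 1:          # every edge claimed
            return sum(
                all(iso >> i & 1 for i, e in enumerate(edges) if v in e)
                for v in verts
            )
        free = [i for i in range(m) if not (tou | iso) >> i & 1]
        if bin(tou).count("1") == bin(iso).count("1"):   # Toucher's turn
            return min(play(tou | 1 << i, iso) for i in free)
        return max(play(tou, iso | 1 << i) for i in free)  # Isolator's turn

    return play(0, 0)

def prufer_tree(seq, n):
    """Edge list of the labelled tree on {0, ..., n-1} with Prüfer sequence seq."""
    deg = [1] * n
    for v in seq:
        deg[v] += 1
    edges = []
    for v in seq:
        leaf = min(u for u in range(n) if deg[u] == 1)
        edges.append((leaf, v))
        deg[leaf] -= 1
        deg[v] -= 1
    a, b = (u for u in range(n) if deg[u] == 1)
    return edges + [(a, b)]

n = 6
u_path = game_value([(i, i + 1) for i in range(n - 1)])  # u(P_6) = 1
u_star = game_value([(0, i) for i in range(1, n)])       # u(S_6) = 2
assert u_path == (n + 3) // 5 and u_star == (n - 1) // 2
assert all(u_path <= game_value(prufer_tree(s, n)) <= u_star
           for s in product(range(n), repeat=n - 2))
```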
Moreover, Theorem~\ref{thm:uT} implies that, for a forest $F$ with $k$ trees, $u(F)\geq\sum_{i=1}^{k}\left\lfloor\frac{n_i+3}{5}\right\rfloor$, where $n_i$ is the number of vertices of the $i^{\text{th}}$ tree in $F$, because, in each move, Isolator can play optimally on the tree in which Toucher has just played. However, a lower bound of $\left\lfloor\frac{n+3k}{5}\right\rfloor$ is not possible because, for example, $u(kP_3)=k$, where $kP_3$ is the disjoint union of $k$ copies of $P_3$. Many interesting questions about the Toucher-Isolator game are still open (see~\cite{MR4025410}). For example, find a $3$-regular graph $G$ with $n$ vertices that maximizes $u(G)$. Dowden, Kang, Mikala\v{c}ki and Stojakovi\'{c} showed that the largest proportion of untouched vertices for a $3$-regular graph is between $\frac{1}{24}$ and $\frac{1}{8}$. \section*{Acknowledgment} The first author is grateful for financial support from the Science Achievement Scholarship of Thailand. \bibliographystyle{siam}
https://arxiv.org/abs/2207.02271
Maximum size of a triangle-free graph with bounded maximum degree and matching number
Determining the maximum number of edges under degree and matching number constraints has been solved for general graphs by Chvátal and Hanson (1976), and by Balachandran and Khare (2009). It follows from the structure of those extremal graphs that deciding whether this maximum number decreases or not when restricted to claw-free graphs, to $C_4$-free graphs, or to triangle-free graphs are separately interesting research questions. The first two cases have already been settled, respectively by Dibek, Ekim and Heggernes (2017) and by Blair, Heggernes, Lima and Lokshtanov (2020). In this paper we focus on triangle-free graphs. We show that, unlike most cases for claw-free graphs and $C_4$-free graphs, forbidding triangles in extremal graphs causes a strict decrease in the number of edges and adds to the hardness of the problem. We provide a formula giving the maximum number of edges in a triangle-free graph with degree at most $d$ and matching number at most $m$ for all cases where $d\geq m$, and for the cases where $d<m$ with either $d\leq 6$ or $Z(d)\leq m < 2d$, where $Z(d)$ is a function of $d$ which is roughly $5d/4$. We also provide an integer programming formulation for the remaining cases and, as a result of further discussion of this formulation, we conjecture that our formula giving the size of triangle-free extremal graphs is also valid for these open cases.
\section{Introduction} In extremal graph theory, an important series of problems, including the celebrated Turán graphs \cite{turan}, investigates the maximization or the minimization of the number of edges in a graph under a given set of constraints. A question of this kind is to determine the maximum number of edges of a graph when its maximum degree is at most $d$ and its matching number is at most $m$ for two given integers $d$ and $m$. This is a special case of a more general problem posed by Erdős and Rado in 1960 \cite{erdos-rado}. It is worth mentioning that this problem is equivalent to determining Ramsey numbers for line graphs \cite{ramseyline}. This question was first solved in 1976 by Chv\'{a}tal and Hanson \cite{hanson} using some optimization techniques. A proof constructing an ``extremal'' graph with the maximum number of edges under given degree and matching number constraints only came out much later, in 2009, by Balachandran and Khare \cite{j}. Balachandran and Khare \cite{j} exhibit an extremal graph whose connected components consist of stars, complete graphs and in some cases ``almost complete graphs'' that contain $C_4$'s (cycles of length 4), but do not inform us about the uniqueness of these extremal graphs. This gives rise to a natural question: what happens if we restrict the structure of extremal graphs? Can the same upper bound still be achieved? The structure of extremal graphs given in \cite{j} makes this question especially interesting for three classes of graphs obtained by restricting the above-mentioned types of components: claw-free graphs obtained by forbidding the smallest star (which is not an edge), triangle-free graphs obtained by forbidding the smallest complete graph (which is not an edge), and $C_4$-free graphs (since $C_4$'s occur in ``almost complete graphs''). Among these directions, the situation of claw-free graphs has been settled by Dibek et al. in \cite{Pinar}.
The authors exhibit cases where the maximum number of edges remains the same as for general graphs, and other cases where it is strictly less. More recently, Blair et al. \cite{chordal} investigated chordal graphs, which are much more restricted than $C_4$-free graphs, the class of graphs that would exclude the ``almost complete graph'' components occurring in the extremal graphs provided in \cite{j}. The authors showed that, by replacing the ``almost complete graph'' components with chordal graphs of the same size, the bound for general graphs is also achieved by chordal graphs. In the same spirit, M{\aa}land addressed the restriction to bipartite graphs, split graphs, disjoint unions of split graphs and unit interval graphs in \cite{thesie}. In this paper, we investigate the direction that remained open and consider triangle-free graphs from the same perspective. We start with some preliminaries in Section \ref{sec:prem}. In Section \ref{sec:dgeqm}, we first determine the maximum number of edges of a triangle-free graph when its maximum degree is at most $d$ and its matching number is at most $m$ for two given integers $d$ and $m$ such that $d\geq m$. Besides, for $m>d$, we derive some structural properties for the connected components of an edge-extremal graph, which allow us to identify the desired extremal value in further sections. Using these structural properties, in Section \ref{sec:d456}, we solve the problem for $m>d$ with either $d\leq 6$ or $Z(d)\leq m <2d$, where $Z(d)$ is roughly $5d/4$. For claw-free graphs and chordal graphs, the size of edge-extremal graphs is the same as the general upper bound in most of the cases. Clearly, this guarantees the optimality of the size once a graph with the desired properties is constructed. Unlike these cases, the size of edge-extremal triangle-free graphs that we find in this paper is, most of the time, strictly less than in the general case. This adds to the difficulty of proving optimality in our results.
In Section \ref{sec:conc}, we present all our findings as a single formula providing the size of the extremal graphs (in Theorem \ref{thm:ALL-IN-ONE}) and compare it with the size of general extremal graphs. Last but not least, in Section \ref{sec:discussion}, we investigate the remaining cases, namely natural numbers $m$ and $d$ such that $7\leq d<m$ with either $m<Z(d)$ or $m\geq 2d$. For these open cases, we suggest an integer programming formulation based on our earlier observations. With further discussion of this formulation, we conjecture that the formula we provide in Theorem \ref{thm:ALL-IN-ONE} is valid in general, with no condition on $d$ and $m$. Lastly, again based on our former structural results, we reformulate our problem as a variant of the extremal problem addressed in Turán's Theorem \cite{turan} with an additional constraint on the maximum degree, or in the Erdős--Stone Theorem, which has been described as a fundamental theorem of extremal graph theory (see \cite{bollobas}). Indeed, the problem of finding the maximum number of edges in a $K_r$-free graph with a given number of vertices and maximum degree at most $d$ is an interesting problem in itself. \section{Notation and Preliminaries}\label{sec:prem} Throughout this paper, $G=(V(G),E(G))$ is a simple undirected graph. We call $|V(G)|$ and $|E(G)|$ the {\it order} and the {\it size} of $G$, respectively. For any vertex $v\in V(G)$, the number of vertices adjacent to $v$ is said to be the \textit{degree} of $v$, denoted by $d(v)$. We say a graph $G$ is \textit{$d$-regular} if $d(v)=d$ for all $v\in V(G)$. Moreover, if $d(w)=d-1$ for some $w\in V(G)$ and $d(v)=d$ for all $v\in V(G)-w$, then $G$ is said to be \textit{almost $d$-regular}. We denote the maximum degree of $G$ by $\Delta(G)$, and the minimum degree of $G$ by $\delta(G)$.
The minimum number of colors to color all edges of a graph $G$ in such a way that two adjacent edges receive different colors is called the \textit{chromatic index} of $G$, and denoted by $\chi'(G)$. According to Vizing's Theorem, we have $\Delta(G)\leq \chi'(G)\leq \Delta(G)+1$ for any graph $G$ \cite{vizing}. Given a graph, a set of edges having pairwise no common end vertex is called a \textit{matching}. The size of a maximum matching of $G$ is called \textit{the matching number} and denoted by $\nu(G)$. We say that $G$ has a \textit{perfect matching} if $\nu(G)=n/2$, where $n=|V(G)|$. The complete graph of order $n$ and the complete bipartite graph with parts of sizes $m$ and $n$ are denoted by $K_n$ and $K_{m,n}$, respectively. The graph $K_{1,d}$ is called a \textit{$d$-star}. A graph is \textit{triangle-free} if it does not contain $K_3$ as an induced subgraph. For a given graph class $\mathbf{C}$ and two given positive integers $d$ and $m$, we define $\mathbb{M}_\mathbf{C}(d,m)$ to be the set of all graphs $G$ in $\mathbf{C}$ satisfying $\Delta(G)\leq d$ and $\nu(G)\leq m$. A graph in $\mathbb{M}_\mathbf{C} (d,m)$ with the maximum number of edges is called \textit{edge-extremal}, and the number of edges of an edge-extremal graph in $\mathbb{M}_\mathbf{C} (d,m)$ is denoted by $f_{\mathbf{C}}(d,m)$. Let $\vartriangle$ be the class of triangle-free graphs. In this paper, we assume that edge-extremal graphs have no isolated vertices since adding isolated vertices to a graph does not increase the number of edges. We note that in general, if one of the two parameters $\Delta(G)$ and $\nu(G)$ is not bounded, then the size of $G$ is not bounded either (for general graphs). Indeed, a star has matching number one no matter how large its degree, and hence its size, may be.
Likewise, the graph consisting of an unbounded number of independent $K_2$'s (that is, sharing no common vertex) is an example where the maximum degree is bounded (by one) but the matching number is not, and neither is the size. It follows from this discussion that, in general, one should bound both the matching number and the degree of a graph so that its size is also bounded. In this case, Vizing's Theorem provides us with a natural upper bound on the size of a graph. For any graph $G$, since the set of edges having the same color in an edge-coloring of $G$ forms a matching whose size is at most $\nu(G)$, and since $\chi'(G)\leq \Delta(G)+1$ by Vizing's Theorem, we obtain $|E(G)|\leq (\Delta(G)+1)\nu(G)$. For given bounds $\Delta(G)\leq d$ and $\nu(G) \leq m$, an edge-extremal graph can thus have at most $dm+m$ edges. The maximum size of a general graph with $\Delta(G)\leq d$ and $\nu(G) \leq m$ obtained in \cite{hanson} and \cite{j} shows that this upper bound is actually met when some divisibility conditions hold, and we are ``pretty close'' to it otherwise. The following theorem gives not only the formula for the maximum size of a (general) graph with $\Delta(G)\leq d$ and $\nu(G) \leq m$, but also describes an edge-extremal graph. Let $\mathcal{GEN}$ denote the class of general graphs. \begin{theorem}[\hspace{-0.01mm}\cite{j}] \label{thm:general} With the preceding notation, we have \[f_{\mathcal{GEN}}(d,m)=dm + \left\lfloor \frac{d}{2} \right\rfloor \left\lfloor \frac{m}{\lceil \frac{d}{2} \rceil} \right\rfloor.
\] Moreover, a graph with $f_{\mathcal{GEN}}(d,m)$ edges is obtained by taking the disjoint union of $r$ copies of $d$-star and $q$ copies of \[ \begin{cases} K_{d+1} & \text{if } d+1 \text{ is odd,} \\ K'_{d+1} & \text{if } d+1 \text{ is even,} \end{cases}\] where $q$ is the largest integer such that $m = q \left\lceil \frac{d}{2} \right\rceil + r$ and $r \ge 0$; and where $K'_{d+1}$ is the graph obtained by removing a perfect matching from the complete graph $K_{d+1}$ on $d+1$ vertices, adding a new vertex $v$, and making $v$ adjacent to $d$ of the other vertices. \end{theorem} In this paper, we find the size of triangle-free extremal graphs in most cases; apart from two simple cases, namely $d=1$ and $m<\lfloor d/2\rfloor$, none of them achieves the general upper bound given in Theorem \ref{thm:general}. Let us now introduce a key lemma that describes the structure of edge-extremal graphs. A graph $G$ is said to be \textit{factor-critical} if $G\setminus v$ has a perfect matching for all $v\in V(G)$. By definition, being factor-critical for a graph $G$ directly implies that $|V(G)|=2\nu(G)+1$. We will use the following well-known result which is a sufficient condition for a graph to be factor-critical. \begin{lemma}(Gallai's Lemma, \cite{g})\label{gallai} If $G$ is a connected graph such that for all $v\in V(G)$, $\nu(G\setminus v)=\nu(G)$, then $G$ is factor-critical and hence $|V(G)|=2\nu(G)+1$. \end{lemma} The following lemma was first given in \cite{j} for general graphs, and then restated slightly differently in \cite{chordal}. It establishes a connection between edge-extremal graphs and factor-critical graphs for a wide range of graph classes, including triangle-free graphs. For the sake of completeness, we also provide a short proof. Let us introduce a special class of extremal graphs that will be our main focus in the rest of the paper.
\begin{definition} $\mathcal{G}_{\mathbf{C}}(d,m)$ is the subclass of the set of edge-extremal graphs in $\mathbb{M}_{\mathbf{C}} (d,m)$ which consists of the graphs having the maximum number of connected components isomorphic to a $d$-star. \end{definition} \begin{lemma}\cite{j,chordal}\label{1} Let $d,m$ be natural numbers, and let $\mathbf{C}$ be a graph class that is closed under vertex deletion and closed under taking disjoint union with stars. Take a graph $G\in \mathcal{G}_{\mathbf{C}}(d,m)$. Then, every connected component of $G$ that is not a $d$-star is factor-critical. \end{lemma} \begin{proof} Suppose on the contrary that $W$ is a connected component of $G$ which is neither a $d$-star nor factor-critical. By Lemma \ref{gallai}, there is a vertex $v$ in $W$ such that $\nu(W\setminus v)<\nu(W)$. Now we construct a new graph $G'$ whose components are the components of $G$ except $W$, together with $W\setminus v$ and a $d$-star. One can observe that $G'\in \mathbb{M}_\mathbf{C}(d,m)$ and $|E(G')|=|E(G\setminus v)| +d\geq |E(G)|$. So $G'$ is an edge-extremal graph in $\mathbb{M}_\mathbf{C}(d,m)$ with more star components than in $G$, contradicting the assumptions on $G$. \end{proof} Lastly, we derive a result that will be useful in Section \ref{sec:d456}. Let $\chi(G)$ denote the minimum number of colors needed to color all vertices of $G$ in such a way that two adjacent vertices get different colors. \begin{lemma}\cite{a}\label{erdos} Let $r\geq 3$. For any graph $G$ on $n$ vertices, at most two of the following properties can hold: \begin{enumerate} \item $G$ does not contain $K_r$ as an induced subgraph, \item $\delta(G)> \dfrac{3r-7}{3r-4} n$, \item $\chi(G)\geq r$. \end{enumerate} \end{lemma} The following corollary states that for $r=3$, if properties 1 and 2 of Lemma \ref{erdos} hold, then property 3 is not satisfied. \begin{corollary}\label{key} Any triangle-free graph of order $n$ with minimum degree greater than $\dfrac{2n}{5}$ is bipartite.
\end{corollary} \section{Edge-extremal triangle-free graphs with $d\geq m$}\label{sec:dgeqm} In this section, we find the maximum number of edges in a triangle-free graph with matching number at most $m$ and degree at most $d$, where $d>m$; we also solve the case where $d=m$. Solving these cases allows us to further strengthen our assumption in Lemma \ref{1} on the structure of an edge-extremal triangle-free graph. Stated in Corollary \ref{cor:structure-not-star}, this structural property will play a key role in obtaining our main results for $d<m$ in Sections \ref{sec:d456} and \ref{sec:conc}. First, let us bound the number of edges in a factor-critical triangle-free graph in terms of the matching number. \begin{lemma}\label{lem:factor-critical-triangle-free} Let $H$ be a factor-critical triangle-free graph. Then, we have $|E(H)|\leq 1+\nu(H)^2$. \end{lemma} \begin{proof} Since $H$ is factor-critical, we have $|V(H)|=2h+1$ where $h:=\nu(H)$. Bipartite graphs are not factor-critical, therefore $H$ has an odd cycle. Let us take the smallest (induced) odd cycle $C_{2s+1}$ in $H$. Notice that $s\geq 2$ since $H$ is triangle-free. Moreover, there are at most $(h-s)^2$ edges within $H-C_{2s+1}$ by Turán's theorem since $H-C_{2s+1}$ has $2h-2s$ vertices. On the other hand, any vertex in $H-C_{2s+1}$ can have at most $s$ neighbors in $C_{2s+1}$ because otherwise there would be a triangle. As a result, we get \begin{eqnarray*} |E(H)|&\leq& (2s+1)+(h-s)^2+(2h-2s)s\\ &\leq & (s^2+1)+(h-s)^2+(2h-2s)s=1+h^2, \end{eqnarray*} which completes the proof. \end{proof} \noindent By using Lemma \ref{lem:factor-critical-triangle-free}, we can derive the following structural property for the graphs in $\mathcal{G}_{\vartriangle}(d,m)$. \begin{lemma}\label{lem:notstar} Let $G\in \mathcal{G}_{\vartriangle}(d,m)$. Then, for any connected component $H$ of $G$ that is not a $d$-star, we have \begin{itemize} \item[(i)] $|E(H)|\leq 1+\nu(H)^2$, and \item[(ii)] $\nu(H)\geq d$.
\end{itemize} \end{lemma} \begin{proof} Since $H$ is not a star, by Lemma \ref{1}, $H$ is factor-critical. Then, part (i) follows from Lemma \ref{lem:factor-critical-triangle-free}. Now, suppose $\nu(H)<d$. Since $H$ is triangle-free and non-bipartite, we have $\nu(H)\geq 2$. Thus, we get $|E(H)|\leq 1+\nu(H)^2<d\cdot \nu(H)$. Then, take $\nu(H)$ copies of $d$-stars instead of $H$; this increases the number of edges while keeping $\nu(G)$ and $\Delta(G)$ the same. This contradicts $G\in \mathcal{G}_{\vartriangle}(d,m)$. Therefore, we get $\nu(H)\geq d$, so the result follows. \end{proof} \noindent Lemma \ref{lem:notstar} allows us to answer the cases $d > m\geq 1$ (in Theorem \ref{thm:case-d-greater-m}) and $d=m$ (in Theorem \ref{thm:case-d-d}). \begin{theorem}\label{thm:case-d-greater-m} With the preceding notation, $f_{\vartriangle}(d,m)=dm$ for $d>m\geq 1$. \end{theorem} \begin{proof} Assume $d>m$, and take $G\in\mathcal{G}_{\vartriangle}(d,m)$. If $G$ has a component $G_1$ that is not a $d$-star, then by Lemma \ref{lem:notstar} (ii), we would get $m\geq\nu(G_1)\geq d$, which is a contradiction. Hence, all the components of $G$ are $d$-stars, so we get $|E(G)|=dm$. \end{proof} \begin{theorem}\label{thm:case-d-d} With the preceding notation, $f_{\vartriangle}(1,1)=1$ and $f_{\vartriangle}(d,d)=d^2+1$ for $d\geq2$. \end{theorem} \begin{proof} Firstly, any graph $G$ with $\Delta(G)=\nu(G)=1$ can contain only one edge, so $f_{\vartriangle}(1,1)=1$ follows. Now, consider the graph $A_d$ shown in Figure \ref{fig:A_d}. It can be easily seen that $A_d\in \mathbb{M}_{\vartriangle}(d,d)$ and $|E(A_d)|=d^2+1$. Then, let us take $G\in \mathcal{G}_{\vartriangle}(d,d)$. By definition of $G$, we have $|E(G)|\geq |E(A_d)|$ giving $|E(G)|\geq d^2+1$. If all the components of $G$ were $d$-stars, then we would get $|E(G)|\leq d^2$, which is a contradiction. Hence, $G$ has at least one component which is not a $d$-star; let us denote it by $G_1$.
By Lemma \ref{lem:notstar} (ii), we have $\nu(G_1)\geq d$. Since $d\geq\nu(G)\geq \nu(G_1)\geq d$, we obtain $\nu(G_1)=\nu(G)=d$ and hence $G_1=G$. Now, by Lemma \ref{lem:notstar} (i), we have $|E(G)|\leq d^2+1$, which completes the proof. \end{proof} \begin{figure}[h!] \centering \includegraphics[scale=0.2]{new-A-d-graph.png} \caption{\small{$A_d$ is a graph on $2d+1$ vertices which is a blow-up of a cycle of length five. A circle and the number inside it represent an independent set of that size, and straight lines between two circles or between a vertex and a circle indicate that all possible edges are present. } } \label{fig:A_d} \end{figure} We close this section with a corollary of Lemmas \ref{1} and \ref{lem:notstar}, which states that for any edge-extremal graph in $\mathcal{G}_{\vartriangle}(d,m)$ (whose number of $d$-star components is maximum), every component $H$ of it which is not a $d$-star is a factor-critical and edge-extremal graph in $\mathbb{M}_{\vartriangle}(d,\nu(H))$ with matching number $\nu(H)\geq d$. An extremal graph with these properties will be useful to prove our results in Section \ref{sec:d456}. \begin{corollary}\label{cor:structure-not-star} Let $d$ and $m$ be natural numbers, and let $G\in \mathcal{G}_{\vartriangle}(d,m)$. Then, for every connected component $H$ of $G$, one of the following is true: \begin{itemize} \item[(i)] $H$ is a $d$-star. \item[(ii)] $|E(H)|=f_{\vartriangle}(d,\nu(H))$ and $|V(H)|=2\cdot \nu(H)+1$ where $\nu(H)\geq d$. \end{itemize} \end{corollary} \begin{proof} Let $H$ be a connected component of $G$ that is not a $d$-star. First of all, we know $\nu(H)\geq d$ by Lemma \ref{lem:notstar}. Also, from Lemma \ref{1}, we know that $H$ is factor-critical, thus $|V(H)|=2\cdot \nu(H)+1$. Hence, we have $H\in \mathbb{M}_{\vartriangle}(d,\nu(H))$ since $\Delta(H)\leq d$, which implies $|E(H)|\leq f_{\vartriangle}(d,\nu(H))$.
On the other hand, if $|E(H)|<f_{\vartriangle}(d,\nu(H))$, we would get $|E(G)|<|E(G_1)|$ by taking $G_1$ as the disjoint union of $G-H$ and an edge-extremal $H_1\in\mathbb{M}_{\vartriangle}(d,\nu(H))$, which contradicts the edge-extremality of $G$. As a result, we get $|E(H)|=f_{\vartriangle}(d,\nu(H))$. \end{proof} \section{Edge-extremal triangle-free graphs with $m> d$} \label{sec:d456} \noindent We start this section with the trivial case $d=1$. Then, we will study the structure of extremal graphs in more depth to settle two cases with $m>d$, namely $Z(d)\leq m <2d$ for some function $Z(d)$ introduced in Definition \ref{def:minimum-order}, and $d\leq 6$. \begin{theorem}\label{thm:case-d-1} With the preceding notation, we have $f_{\vartriangle}(1,m)=m$ for all $m\geq 1$. \end{theorem} \begin{proof} If $\Delta(G)=1$ for a graph $G$, then $G$ is the disjoint union of $\nu(G)$ edges, so the result follows. \end{proof} \noindent In the rest of this section, we assume $d\geq 2$. Our results will be based on the following key property. We will show that if $H$ is a connected component of a graph $G\in \mathcal{G}_{\vartriangle}(d,m)$ and if it is not a $d$-star, then in addition to the bound $\nu(H)\geq d$ given in Corollary \ref{cor:structure-not-star} (ii), we can also bound $\nu(H)$ from above by $Z(d)$ (see Lemma \ref{lem:bound-5d-over-4}), where $Z(d)$ is defined below and described in Lemma \ref{lem:lower-bound-Z(d)}. \begin{definition}\label{def:minimum-order} For any $d\geq2$, let $Z(d)$ be the smallest natural number $n$ such that there exists a $d$-regular (if $d$ is even) or almost $d$-regular (if $d$ is odd) triangle-free and factor-critical graph $G$ with $\nu(G)=n$. \end{definition} \noindent Let us introduce the graph $B_d$ given in Figure \ref{fig:BLOW-UP}; it is an (almost) $d$-regular triangle-free and factor-critical graph, which shows the existence of $Z(d)$.
The \textit{blow-up} of a graph is obtained by replacing every vertex with a finite collection of copies so that the copies of two vertices are adjacent if and only if the originals are. In particular, the copies of the same vertex form an independent set in the blow-up graph. Let us emphasize some properties of the graph $B_d$ in Proposition \ref{prop:properties-B(d)}. \begin{figure}[h!] \centering \includegraphics[width=0.7cm,angle=0,height=5.65cm]{blow-up-4k+1-LARGE.png} \hfill \includegraphics[width=0.7cm,angle=0,height=5.65cm]{blow-up-4k+2-LARGE.png} \hfill \includegraphics[width=0.7cm,angle=0,height=5.65cm]{blow-up-4k+3-LARGE.png} \hfill \includegraphics[width=0.7cm,angle=0,height=5.65cm]{blow-up-4k-LARGE.png} \caption{\small{ The graph $B_d$ for $d\geq2$ depending on $d \pmod{4}$. Each graph is obtained from a blow-up of a cycle of length 5 by removing some perfect matchings. For simplicity, edges of the blow-up graph are not shown although they are all present. The copies of the same vertex in the blow-up graph are divided into bags shown by dotted or continuous circles, each one containing as many copies as the number indicated in it. The lines between different bags represent perfect matchings between the corresponding sets of vertices that are removed from the graph.} } \label{fig:BLOW-UP} \end{figure} \begin{proposition}\label{prop:properties-B(d)} The graph $B_d$ in Figure \ref{fig:BLOW-UP} is (almost) $d$-regular, triangle-free, and factor-critical. Moreover, we have $|V(B_d)|=2\nu(B_d)+1$ and $|E(B_d)|=d \nu(B_d)+\lfloor d/2\rfloor$ where $$\nu(B_d)=\begin{cases} \lfloor5d/4\rfloor,&\text{ if }d\text{ is even,}\\ \lfloor5(d+1)/4\rfloor,&\text{ if }d\text{ is odd.} \end{cases}$$ \end{proposition} \begin{proof} Firstly, it can be easily checked that $B_d$ is $d$-regular when $d$ is even. For odd values of $d$, all the vertices except the vertex in $A_{11}$ have degree $d$, and the vertex in $A_{11}$ has degree $d-1$. 
Therefore, $B_d$ is almost $d$-regular when $d$ is odd. Moreover, each $B_d$ is a (partial) subgraph of a graph that is a blow-up of a cycle of length five, which implies that each $B_d$ is triangle-free. Therefore, we only need to show that $B_d$ is factor-critical. We note that $B_{4k+1}$ and $B_{4k+3}$ can be obtained from $B_{4k+2}$ and $B_{4k+4}$, respectively, by deleting some edges. Thus, it suffices to show that $B_{4k+1}$ and $B_{4k+3}$ are factor-critical. For every vertex $v$ in $B_d$, we will show that $B_d-v$ has a perfect matching. Due to symmetry, it is enough to examine the cases $v\in A_1\cup A_2\cup A_4$. Since all the examinations are quite similar and straightforward, we will only show the case $v\in A_1$ and leave the rest to the reader. It is well-known that any regular bipartite graph has a perfect matching. We will show that the vertices in $B_d-v$ can be partitioned into some pairs of subsets so that each pair induces a regular bipartite graph and thus admits a perfect matching. If $v\in A_{11}$, then we can partition the vertices into pairs of subsets as $(A_{22},A_{13})$, $(A_{12},A_{32})$, $(A_{21}\cup A_{23},A_{42})$, $(A_{31}\cup A_{33}, A_{52})$ and $(A_{41},A_{51})$. If $v\in A_{12}\cup A_{13}$, without loss of generality we can assume $v\in A_{12}$. Similarly, we can partition the vertices into pairs of subsets $(A_{11}\cup (A_{12}-v), A_{32})$, $(A_{13},A_{22})$, $(A_{21}\cup A_{23},A_{42})$, $(A_{31}\cup A_{33},A_{52})$ and $(A_{41},A_{51})$. Since $B_d$ is factor-critical, we have $|V(B_d)|=2\nu(B_d)+1$, and a maximum matching saturates all vertices but one; expressing the number of vertices as a function of $k$ in each one of the four cases, it can be checked that we have $\nu(B_d)= \lfloor5d/4\rfloor$ if $d$ is even, and $\nu(B_d)=\lfloor5(d+1)/4\rfloor$ if $d$ is odd. Lastly, $|E(B_d)|=d \nu(B_d)+\lfloor d/2\rfloor$ follows from the fact that $B_d$ is factor-critical and (almost) $d$-regular.
\end{proof} \noindent For any $d\geq2$, let $C_d$ be an (almost) $d$-regular triangle-free factor-critical graph with matching number $Z(d)$. An important consequence of the properties of the graphs $B_d$ shown in Proposition \ref{prop:properties-B(d)} is the following: \begin{corollary}\label{cor:exist} For every $d\geq 2$, the value $Z(d)$ and a triangle-free factor-critical (almost) $d$-regular graph $C_d$ with matching number $Z(d)$ exist. \end{corollary} \noindent Now, we are ready to show that the matching number of each connected component of a graph $G\in \mathcal{G}_{\vartriangle}(d,m)$ is bounded above by $Z(d)$. Indeed, this additional information on the structure of connected components in an extremal graph will be very useful both in calculating $f_{\vartriangle}(d,m)$ in the rest of this section, and in guiding future research to complete the remaining open cases. \begin{lemma}\label{lem:bound-5d-over-4} Let $d$ and $m$ be natural numbers with $d\geq 2$, and let $G\in \mathcal{G}_{\vartriangle}(d,m)$. Then, for every connected component $H$ of $G$, we have $\nu(H)\leq Z(d)$. \end{lemma} \begin{proof} For a contradiction, let $G\in \mathcal{G}_{\vartriangle}(d,m)$ and $H$ be a connected component of $G$ with $\nu(H)=Z(d)+t$ for some $t\geq 1$. By Corollary \ref{cor:structure-not-star} (ii), we know that $|V(H)|=2\nu(H)+1$. Since $\Delta(H)\leq d$, we have $$|E(H)|\leq \lfloor(2\nu(H)+1)d/2\rfloor=\nu(H)d+\lfloor d/2 \rfloor.$$ On the other hand, let $G_1$ be the graph obtained by taking the disjoint union of $G-H$, the graph $C_d$, and $t$ many $d$-stars. Notice that $G_1$ has more $d$-stars than $G$, so we have $|E(G_1)|<|E(G)|$ by definition of $\mathcal{G}_{\vartriangle}(d,m)$. However, we can write \begin{eqnarray*} |E(G_1)| &=& |E(G-H)|+(d Z(d)+\lfloor d/2\rfloor)+dt\\ &=&|E(G-H)|+\nu(H)d+\lfloor d/2\rfloor\\ &\geq& |E(G-H)|+|E(H)|=|E(G)|, \end{eqnarray*} which is a contradiction.
\end{proof} We can use Lemma \ref{lem:factor-critical-triangle-free} to find the exact value of $Z(d)$ for small values of $d$. On one hand, these values will allow us to show that $Z(d)\geq d$, so that the case $Z(d)\leq m<2d$ can be addressed within the case $m>d$; on the other hand, they will be useful in solving the case $d\leq 6$. \begin{lemma}\label{lem:smaller-Z(d)-values} We have $Z(d)=d$ for $d\in\{2,3\}$, and $Z(d)= d+1$ for $d\in\{4,5\}$. Moreover, $Z(d)\geq d+1$ holds for all $d\geq 4$. \end{lemma} \begin{proof} By Lemma \ref{lem:factor-critical-triangle-free}, we have $|E(C_d)|\leq 1+Z(d)^2$ since $C_d$ is factor-critical and triangle-free. Since $C_d$ is (almost) $d$-regular, we get $$|E(C_d)|=\lfloor(2 Z(d)+1)d/2\rfloor=d Z(d)+\lfloor d/2\rfloor.$$ Hence, we obtain $\lfloor d/2\rfloor-1\leq Z(d) (Z(d)-d)$. Since $\lfloor d/2\rfloor-1\geq 0$ for all $d\geq2$, and $\lfloor d/2\rfloor-1\geq 1$ for all $d\geq 4$, we get $Z(d)\geq d$ for all $d\geq2$ and $Z(d)\geq d+1$ for all $d\geq 4$. By Proposition \ref{prop:properties-B(d)}, $B_2$ and $B_4$ are factor-critical and triangle-free graphs with $\nu(B_2)=2$ and $\nu(B_4)=5$, respectively. Also, $B_2$ is $2$-regular and $B_4$ is $4$-regular. Therefore, we get $Z(2)=2$ and $Z(4)=5$. Besides, $A_3$ (see Figure \ref{fig:A_d}) is an almost $3$-regular triangle-free and factor-critical graph with $\nu(A_3)=3$, which shows $Z(3)=3$. Finally, using a computer search, we identified the graph $M_5$ given in Figure \ref{fig:M} as the unique triangle-free graph which is both factor-critical with $\nu(M_5)=6$ and almost $5$-regular; this shows $Z(5)=6$. \end{proof} \begin{figure}[h!] \centering \includegraphics[scale=0.4]{norm.pdf} \caption{\small{The graph $M_5$.}}\label{fig:M} \end{figure} \noindent As for larger $d$, Corollary \ref{key} allows us to obtain the exact value of $Z(d)$ for even values of $d$, and to identify a very restricted interval for $Z(d)$ if $d$ is odd.
\begin{lemma}\label{lem:lower-bound-Z(d)} For $d\geq 2$, if $d$ is even then we have $Z(d)=\lfloor5d/4\rfloor$; if $d$ is odd then we have $\lfloor5(d-1)/4\rfloor \leq Z(d)\leq \lfloor5(d+1)/4\rfloor $. \end{lemma} \begin{proof} Since factor-critical graphs are non-bipartite, $|V(C_d)|=2 Z(d)+1$ and $\delta(C_d)=2\lfloor d/2\rfloor$, we get $2\lfloor d/2\rfloor\leq \dfrac{2(2 Z(d)+1)}{5}$ by Corollary \ref{key}, which gives $Z(d)\geq \lfloor5d/4\rfloor$ when $d$ is even and $Z(d)\geq \lfloor5(d-1)/4\rfloor$ when $d$ is odd. On the other hand, we have $\nu(B_d)=\lfloor 5d/4\rfloor$ when $d$ is even and $\nu(B_d)=\lfloor 5(d+1)/4\rfloor$ when $d$ is odd. Since $Z(d)\leq \nu(B_d)$, the result follows. \end{proof} \noindent Now, by Lemmas \ref{lem:smaller-Z(d)-values} and \ref{lem:lower-bound-Z(d)}, it is clear that $d\leq Z(d) <2d$ for any $d\geq 2$. We thus have the necessary ingredients to give the exact value of $f_{\vartriangle}(d,m)$ for $Z(d) \leq m < 2d$. \begin{theorem}\label{thm:5d-over-4-case} With the preceding notation, for $d\geq 2$ and $Z(d)\leq m <2d$, we have $f_{\vartriangle}(d,m)=d m+\lfloor d/2\rfloor$. \end{theorem} \begin{proof} Let $T$ be the disjoint union of $C_d$ and $m-Z(d)$ many $d$-stars. Clearly we have $\Delta(T)=d$, $\nu(T)=m$ and $$|E(T)|=d Z(d)+\lfloor d/2\rfloor+d(m-Z(d))=dm+\lfloor d/2\rfloor,$$ which shows $f_{\vartriangle}(d,m)\geq dm+\lfloor d/2\rfloor$. Then, let us take $G\in \mathcal{G}_{\vartriangle}(d,m)$. Hence, we have $|E(G)|=f_{\vartriangle}(d,m)\geq dm+\lfloor d/2\rfloor$, so it suffices to show that $|E(G)|\leq dm +\lfloor d/2\rfloor$. Assume $G$ has at least two connected components $H_1$ and $H_2$ that are not $d$-stars. Then, by part (ii) of Corollary \ref{cor:structure-not-star}, we get $2d \leq \nu(H_1)+\nu(H_2)\leq m$, however $m<2d$ by assumption, which is a contradiction.
Moreover, if all the connected components of $G$ are $d$-stars, then we would get $|E(G)|=dm$, which contradicts $|E(G)|\geq dm+\lfloor d/2\rfloor$. Therefore, $G$ has exactly one connected component that is not a $d$-star. Suppose $G$ has $t$ many connected components that are $d$-stars, and let $H$ be the connected component of $G$ that is not a $d$-star. Again, by part (ii) of Corollary \ref{cor:structure-not-star}, we know $|V(H)|=2\nu(H)+1$. On the other hand, since $\Delta(H)\leq d$, we have $$|E(H)|\leq \lfloor(2\nu(H)+1)d/2\rfloor=\nu(H)d+\lfloor d/2\rfloor.$$ Hence, by using $m=t+\nu(H)$, we get $$|E(G)|\leq dt+(\nu(H)d+\lfloor d/2\rfloor)=dm+\lfloor d/2\rfloor,$$ which completes the proof. \end{proof} In the sequel, we will reformulate our problem in a slightly different way to calculate $f_{\vartriangle}(d,m)$ for the case where $m>d$ and $d\leq 6$. This reformulation will be revisited in Section \ref{sec:discussion}, where we suggest an integer programming formulation and discuss future research directions for the remaining open cases. Let us take a graph $G\in\mathcal{G}_{\vartriangle}(d,m)$ for some natural numbers $1\leq d\leq m$. For any connected component of $G$ that is not a $d$-star, say $H$, we know $d\leq \nu(H)\leq Z(d)$ and $|E(H)|=f_{\vartriangle}(d,\nu(H))$ by Corollary \ref{cor:structure-not-star} and Lemma \ref{lem:bound-5d-over-4}. Then, let $x_i$ be the number of connected components of $G$ whose matching number is $i$ where $d\leq i\leq Z(d)$. Clearly, we have $\displaystyle{\sum_{i=d}^{Z(d)}ix_i\leq m}$, and $G$ has $\displaystyle{m-\sum_{i=d}^{Z(d)}ix_i}$ many connected components that are $d$-stars.
Therefore, we can write the number of edges in $G$ in terms of $x_i$'s as follows: \begin{eqnarray*} f_{\vartriangle}(d,m)=|E(G)|&=&d\Big(m-\sum_{i=d}^{Z(d)}ix_i\Big)+\sum_{i=d}^{Z(d)}f_{\vartriangle}(d,i)x_i\\ &=&dm-\sum_{i=d}^{Z(d)}dix_i+\sum_{i=d}^{Z(d)}f_{\vartriangle}(d,i)x_i\\ &=&dm+\sum_{i=d}^{Z(d)}(f_{\vartriangle}(d,i)-di)x_i. \end{eqnarray*} As a result, for a fixed $d$, we can determine the value of $f_{\vartriangle}(d,m)$ for all natural numbers $m$ by finding the values of $f_{\vartriangle}(d,i)$ and corresponding $x_i$ values only for $d\leq i\leq Z(d)$. For a simpler notation, let us define \begin{eqnarray*} \mathcal{F}(d,m)&:=&\Big\{\big(x_d,x_{d+1},\ldots,x_{Z(d)}\big):x_i\in\mathbb{Z}_{\geq 0}\text{ for }d\leq i\leq Z(d),\sum_{i=d}^{Z(d)}ix_i\leq m\Big\},\\ g_{\vartriangle}(d,i)&:=&f_{\vartriangle}(d,i)-di\text{ for any }i. \end{eqnarray*} Observe that we have $g_{\vartriangle}(d,d)=1$ for $d\geq 2$ by Theorem \ref{thm:case-d-d}. Also, we get $g_{\vartriangle}(d,Z(d))=\lfloor d/2\rfloor$ for $d\geq 2$ by Theorem \ref{thm:5d-over-4-case}. Now, we state the discussion above as a lemma since it will be used in the calculations for the cases $2\leq d\leq 6$. \begin{lemma}\label{lem:MAIN-RECURSION} For all natural numbers $1\leq d\leq m$, we have $$g_{\vartriangle}(d,m)=\max_{(x_d,\ldots,x_{Z(d)})\in\mathcal{F}(d,m)}\sum_{i=d}^{Z(d)}g_{\vartriangle}(d,i)x_i.$$ \end{lemma} Notice that we have $Z(d)=d$ for $d\in\{2,3\}$ and $Z(d)=d+1$ for $d\in\{4,5,6\}$ by Lemmas \ref{lem:smaller-Z(d)-values} and \ref{lem:lower-bound-Z(d)}. Therefore, we have a simple expression for $\mathcal{F}(d,m)$ for $2\leq d\leq 6$, which helps to find the exact value of $f_{\vartriangle}(d,m)$. 
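As an illustration, the maximization in Lemma \ref{lem:MAIN-RECURSION} can be carried out by brute force once the relevant values $g_{\vartriangle}(d,i)$ are known; a minimal Python sketch for $d=4$, where $Z(4)=5$, $g_{\vartriangle}(4,4)=1$ and $g_{\vartriangle}(4,5)=2$ (the function name is ours):

```python
# Brute-force evaluation of g(4, m) via Lemma MAIN-RECURSION:
# maximize g(4,4)*x4 + g(4,5)*x5 = x4 + 2*x5  subject to 4*x4 + 5*x5 <= m.
def g_triangle_free(m):
    best = 0
    for x4 in range(m // 4 + 1):                 # x4 is bounded by m // 4
        for x5 in range((m - 4 * x4) // 5 + 1):  # remaining capacity for x5
            best = max(best, 1 * x4 + 2 * x5)
    return best

# g(4, m) for m = 5, ..., 14; to be compared with Theorem thm:d4567.
print([g_triangle_free(m) for m in range(5, 15)])
```

For $m=9$ the maximum is $3$, attained at $x_4=x_5=1$, and for $m=14$ it is $5$, in agreement with the closed formula proved below.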
\begin{theorem}\label{thm:d123} With the preceding notation, for $m>d$ and $d\in\{2,3\}$, $$f_{\vartriangle}(d,m)=dm+\lfloor m/d\rfloor \lfloor d/2\rfloor.$$ \end{theorem} \begin{proof} Since $d=Z(d)$ for $d\in\{2,3\}$ by Lemma \ref{lem:smaller-Z(d)-values}, $\mathcal{F}(d,m)$ contains only $1$-dimensional elements, so we get $\mathcal{F}(d,m)=\{x_d\in\mathbb{Z}_{\geq 0}:dx_d\leq m\}=\{ x_d\in\mathbb{Z}_{\geq 0}:x_d\leq \lfloor m/d\rfloor\}.$ Then, we have $$g_{\vartriangle}(d,m)=\max_{x_d\in\mathcal{F}(d,m)}g_{\vartriangle}(d,d)x_d=g_{\vartriangle}(d,d) \lfloor m/d\rfloor$$ by Lemma \ref{lem:MAIN-RECURSION}. Since $g_{\vartriangle}(d,d)=g_{\vartriangle}(d,Z(d))=\lfloor d/2\rfloor$, as already inferred from Theorem \ref{thm:5d-over-4-case}, we get $$f_{\vartriangle}(d,m)-dm=g_{\vartriangle}(d,m)=\lfloor d/2\rfloor \lfloor m/d\rfloor,$$ and the result follows. \end{proof} \begin{theorem}\label{thm:d4567} With the preceding notation, for $m>d$ and $d\in\{4,5,6\}$, we have $$f_{\vartriangle}(d,m)=\begin{cases} 1+dm+\lfloor d/2\rfloor \lfloor m/(d+1)\rfloor,&\text{if }m+1\text{ is divisible by }d+1,\\ dm+\lfloor d/2\rfloor \lfloor m/(d+1)\rfloor,&\text{otherwise}.\\ \end{cases}$$ \end{theorem} \begin{proof} Let $d\in\{4,5,6\}$. Since $Z(d)=d+1$ by Lemmas \ref{lem:smaller-Z(d)-values} and \ref{lem:lower-bound-Z(d)}, $\mathcal{F}(d,m)$ contains $2$-dimensional elements, and we get $$\mathcal{F}(d,m)=\{(x_d,x_{d+1})\in \mathbb{Z}_{\geq0}^2:x_d\leq (m-(d+1)x_{d+1})/d\}.$$ On the other hand, we have $$g_{\vartriangle}(d,m)=\max_{(x_d,x_{d+1})\in\mathcal{F}(d,m)}g_{\vartriangle}(d,d)x_d+g_{\vartriangle}(d,d+1)x_{d+1}$$ by Lemma \ref{lem:MAIN-RECURSION}.
Since $g_{\vartriangle}(d,d)=1$ and $g_{\vartriangle}(d,d+1)=g_{\vartriangle}(d,Z(d))=\lfloor d/2\rfloor$, we can write \begin{eqnarray*} g_{\vartriangle}(d,m)&=&\max_{(x_d,x_{d+1})\in\mathcal{F}(d,m)}(x_d+\lfloor d/2\rfloor x_{d+1})\\ &=&\max_{0\leq x_{d+1}\leq m/(d+1)} \lfloor(m-(d+1)x_{d+1})/d\rfloor+\lfloor d/2\rfloor x_{d+1}\\ &=&\max_{0\leq x_{d+1}\leq m/(d+1)} \lfloor (m-x_{d+1})/d\rfloor + \lfloor (d-2)/2\rfloor x_{d+1}. \end{eqnarray*} Since $\lfloor (d-2)/2\rfloor\geq 1$, the quantity $\lfloor (m-x_{d+1})/d\rfloor + \lfloor (d-2)/2\rfloor x_{d+1}$ is non-decreasing in $x_{d+1}$. Therefore, $g_{\vartriangle}(d,m)$ is obtained by assigning $x_{d+1}=\lfloor m/(d+1)\rfloor$, which is the maximum possible value for $x_{d+1}$. Then, by writing $m=(d+1)k+r$ for some $k,r\in\mathbb{Z}$ where $k=\lfloor m/(d+1)\rfloor$ and $0\leq r\leq d$, we see that $(x_d,k)\in \mathcal{F}(d,m)$ implies $x_d\leq 1$ if $r=d$ and $x_d\leq 0$ otherwise, as $x_d\leq (m-(d+1)x_{d+1})/d$. Note that $r=d$ is equivalent to the case that $m+1$ is divisible by $d+1$. Therefore, the value $g_{\vartriangle}(d,m)$ is attained at $x_d=1$, $x_{d+1}=k$ if $m+1$ is divisible by $d+1$, and it is attained at $x_d=0$, $x_{d+1}=k$ otherwise. As a result, we find $$g_{\vartriangle}(d,m)=\begin{cases} 1+\lfloor d/2\rfloor \lfloor m/(d+1)\rfloor,&\text{if }m+1\text{ is divisible by }d+1,\\ \lfloor d/2\rfloor \lfloor m/(d+1)\rfloor,&\text{otherwise},\\ \end{cases}$$ so the result follows. \end{proof} \section{Main Result}\label{sec:conc} We have determined the value of $f_{\vartriangle}(d,m)$ for all the cases with $d\geq m$ (Theorems \ref{thm:case-d-greater-m} and \ref{thm:case-d-d}), and for the cases $d<m$ with either $Z(d) \leq m <2d$ (Theorem \ref{thm:5d-over-4-case}) or $d\leq 6$ (Theorems \ref{thm:d123} and \ref{thm:d4567}). It is possible to summarize these findings in a single formula that we state as our main result.
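Before doing so, we note that the case formulas of Theorems \ref{thm:d123} and \ref{thm:d4567} are straightforward to evaluate programmatically; a minimal Python sketch (the helper name is ours, and it covers only $2\leq d\leq 6$ with $m>d$):

```python
def f_triangle_free_small(d, m):
    """f(d, m) for 2 <= d <= 6 and m > d (Theorems thm:d123 and thm:d4567)."""
    assert 2 <= d <= 6 and m > d
    if d in (2, 3):                     # here Z(d) = d
        return d * m + (m // d) * (d // 2)
    # d in {4, 5, 6}: Z(d) = d + 1, one extra edge iff (d+1) divides m+1
    extra = 1 if (m + 1) % (d + 1) == 0 else 0
    return extra + d * m + (m // (d + 1)) * (d // 2)

print(f_triangle_free_small(2, 5),   # 12
      f_triangle_free_small(4, 9),   # 39
      f_triangle_free_small(6, 13))  # 82
```

For example, $f_{\vartriangle}(5,11)=58$, in agreement with Theorem \ref{thm:d4567} since $12$ is divisible by $6$.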
Recall that $C_d$ is an (almost) $d$-regular triangle-free factor-critical graph with matching number $Z(d)$ whose existence is guaranteed by Proposition \ref{prop:properties-B(d)} and Corollary \ref{cor:exist}. \begin{theorem}\label{thm:ALL-IN-ONE} Let $d$ and $m$ be natural numbers with $d\geq 2$, and let $k$ and $r$ be non-negative integers such that $m=k Z(d)+r$ with $0\leq r<Z(d)$. Then, for all the cases with $d\geq m$, and for the cases $d<m$ with either $d\leq 6$ or $Z(d)\leq m < 2d$, we have \begin{equation}\label{eqn:single-formula} f_{\vartriangle}(d,m)=\begin{cases} dm+k\lfloor d/2\rfloor&\text{if }r<d,\\ dm+k\lfloor d/2\rfloor+r-d+1&\text{if }r\geq d, \end{cases}\tag{*} \end{equation} where a graph in $\mathcal{G}_{\vartriangle}(d,m)$ can be constructed as the disjoint union of $k$ copies of $C_d$ and \begin{itemize} \item[(i)] $A_d$ if $r\geq d$, \item[(ii)] $r$ copies of the $d$-star if $r<d$. \end{itemize} \end{theorem} \begin{proof} Let $m=k Z(d)+r$ for some non-negative integers $k$ and $r$ with $0\leq r<Z(d)$. If $m<d$, then we find $k=0$ and $r=m$ since $Z(d)\geq d$ by Lemma \ref{lem:smaller-Z(d)-values}, so \eqref{eqn:single-formula} holds by Theorem \ref{thm:case-d-greater-m}. If $d=m\in\{2,3\}$, then we get $k=1$ and $r=0$ since $Z(d)=d$ by Lemma \ref{lem:smaller-Z(d)-values}. Since $\lfloor d/2\rfloor=1$, \eqref{eqn:single-formula} holds by Theorem \ref{thm:case-d-d}. If $d=m\geq 4$, then we get $k=0$ and $r=d$ since $Z(d)\geq d+1$ by Lemma \ref{lem:smaller-Z(d)-values}. Since $r-d+1=1$, \eqref{eqn:single-formula} holds by Theorem \ref{thm:case-d-d}. Suppose $d<m$. If $d\in\{2,3\}$, then we get $Z(d)=d$, which implies $k=\lfloor m/d\rfloor$ and $r<d$, so \eqref{eqn:single-formula} holds by Theorem \ref{thm:d123}. If $d\in\{4,5,6\}$, then we get $Z(d)=d+1$, which implies $k=\lfloor m/(d+1)\rfloor$. Moreover, we find $r=d$ if $m+1$ is divisible by $d+1$ and $r<d$ otherwise.
Therefore, if $m+1$ is divisible by $d+1$ then $r-d+1=1$, so \eqref{eqn:single-formula} holds by Theorem \ref{thm:d4567}. Finally, if $Z(d) \leq m <2d$, then since $d\leq Z(d)$ we have $m<2Z(d)$, thus $k=1$ and $r<d$, and \eqref{eqn:single-formula} holds by Theorem \ref{thm:5d-over-4-case}. \end{proof} Now, we can report the difference between $f_{\mathcal{GEN}}(d,m)$ and $f_{\vartriangle}(d,m)$ based on our findings. Theorems \ref{thm:general} and \ref{thm:ALL-IN-ONE} give $$h_{\vartriangle}(d,m):=f_{\mathcal{GEN}}(d,m)-f_{\vartriangle}(d,m)=\left\lfloor \frac{d}{2} \right\rfloor \Bigg(\left\lfloor \frac{m}{\lceil \frac{d}{2} \rceil} \right\rfloor-k\Bigg)-\max\{r-d+1,\,0\},$$ where $m=k Z(d)+r$ with $0\leq r<Z(d)$, provided that $d$ and $m$ satisfy one of the following conditions: \begin{itemize} \item[(i)] $d\geq m$, \item[(ii)] $d<m$ and $d\leq 6$, \item[(iii)] $d<m$ and $Z(d)\leq m < 2d$. \end{itemize} The difference $h_{\vartriangle}(d,m)$ corresponds to the number of edges that we lose in the triangle-free case as compared to the general one.
Thus, we get the following, where we observe that apart from a few simple cases we lose edges (with respect to the general case) by restricting the extremal graphs to be triangle-free: \[h_{\vartriangle}(d,m)=\begin{cases} 0,&\text{if }m<\lceil d/2\rceil,\\ \lfloor d/2\rfloor,&\text{if }\lceil d/2\rceil\leq m<d,\\ d-1,&\text{if }d=m\text{ and $d$ is even},\\ \lfloor d/2\rfloor-1,&\text{if }d=m\geq 3\text{ and $d$ is odd},\\ 0,&\text{if $1=d< m$,}\\ m-\lfloor m/2\rfloor,&\text{if $2=d<m$,}\\ \lfloor m/2\rfloor-\lfloor m/3\rfloor,&\text{if $3=d<m$,}\\ 2\lfloor m/2\rfloor-2\lfloor m/5\rfloor,&\text{if $4=d<m$ and $m+1$ is not divisible by 5,}\\ 2\lfloor m/2\rfloor-2\lfloor m/5\rfloor-1,&\text{if $4=d<m$ and $m+1$ is divisible by 5,}\\ 2\lfloor m/3\rfloor-2\lfloor m/6\rfloor,&\text{if $5=d<m$ and $m+1$ is not divisible by 6,}\\ 2\lfloor m/3\rfloor-2\lfloor m/6\rfloor-1,&\text{if $5=d<m$ and $m+1$ is divisible by 6,}\\ 3\lfloor m/3\rfloor-3\lfloor m/7\rfloor,&\text{if $6=d<m$ and $m+1$ is not divisible by 7,}\\ 3\lfloor m/3\rfloor-3\lfloor m/7\rfloor-1,&\text{if $6=d<m$ and $m+1$ is divisible by 7,}\\ \lfloor d/2\rfloor,&\text{if $d\geq 7$ and $Z(d)\leq m<3\lceil d/2\rceil$,}\\ 2\lfloor d/2 \rfloor,&\text{if $d\geq 7$ and $3\lceil d/2\rceil \leq m<2d$.} \end{cases}\] In light of Theorem \ref{thm:ALL-IN-ONE}, the remaining open cases are for $7\leq d< m$, and either $m<Z(d)$ or $m\geq 2d$. In what follows, we will discuss further formulations to solve these remaining cases and suggest some conjectures. \section{An integer programming formulation and further discussions}\label{sec:discussion} To solve the open cases, namely for natural numbers $m$ and $d$ such that $7\leq d< m$ with either $m<Z(d)$ or $m\geq 2d$, we develop an integer programming formulation based on our earlier observations. In Conjecture \ref{con:Turan-generalization}, we provide all the parameters involved in this formulation, which is already a challenging problem.
Then, under the assumption that Conjecture \ref{con:Turan-generalization} holds, we show that our integer program admits an optimal solution with a special structure. This, in turn, allows us to formulate in Conjecture \ref{con:general-formula-wrt-Z(d)} that Theorem \ref{thm:ALL-IN-ONE} is valid for all $m$ and $d$. Lastly, we also conjecture the unknown values of $Z(d)$ (see Lemma \ref{lem:lower-bound-Z(d)}), which play a crucial role in the solution of our problem. We conclude our paper with a reformulation of our problem as a variant of the well-known extremal problem addressed in Turán's Theorem. By Corollary \ref{cor:structure-not-star} and Lemma \ref{lem:bound-5d-over-4}, there is an edge-extremal graph $G\in \mathcal{G}_{\vartriangle} (d,m)$ whose components are either $d$-stars, or edge-extremal factor-critical triangle-free graphs $H$ where $d\leq \nu(H)\leq Z(d)$. In other words, by letting $x_i$ be the number of connected components of $G$ whose matching number is $i$, we have (as expressed in Lemma \ref{lem:MAIN-RECURSION} in terms of $g_{\vartriangle}(d,m)$): $$f_{\vartriangle}(d,m)= d\Big(m-\sum_{i=d}^{Z(d)}ix_i\Big)+\sum_{i=d}^{Z(d)}f_{\vartriangle}(d,i)x_i = dm+\sum_{i=d}^{Z(d)}(f_{\vartriangle}(d,i)-di)x_i.$$ It follows that, for a fixed $d$, the value of $f_{\vartriangle}(d,m)$ can be determined for all natural numbers $m$ by finding the values of $f_{\vartriangle}(d,i)$ and corresponding $x_i$ values only for $d\leq i\leq Z(d)$. Accordingly, $f_{\vartriangle}(d,m)$ can be computed as the optimal value of the following integer program:\\ \begin{center} \flushleft \textbf{Model 1: } $$\displaystyle{\max \,\, dm+\sum_{i=d}^{Z(d)}(f_{\vartriangle}(d,i)-di)x_i}$$ $$\displaystyle{ {\mbox{subject to }}\sum_{i=d}^{Z(d)}ix_i\leq m}$$ $$x_i\geq 0, x_i\in\mathbb{Z}$$ \end{center} This formulation can be seen as a bounded knapsack problem where there is a bounded number of items of each type.
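To illustrate this knapsack view, here is a minimal dynamic-programming sketch in Python (the helper is ours; we plug in the only exactly known utilities for $d=4$, namely $f_{\vartriangle}(4,4)-16=1$ with volume $4$ and $f_{\vartriangle}(4,5)-20=2$ with volume $5$; for $d\geq 7$, the utilities with $d<i<Z(d)$ are precisely the unknowns):

```python
def model1_value(d, m, utility):
    """Optimal value of Model 1: d*m + max sum u[i]*x_i s.t. sum i*x_i <= m.
    `utility` maps volumes i (d <= i <= Z(d)) to the utilities f(d, i) - d*i."""
    # dp[c] = best total utility achievable with total volume at most c
    dp = [0] * (m + 1)
    for c in range(1, m + 1):
        dp[c] = dp[c - 1]
        for i, u in utility.items():
            if i <= c:
                dp[c] = max(dp[c], dp[c - i] + u)
    return d * m + dp[m]

# Known utilities for d = 4 (volumes 4 and 5, i.e. d and Z(4)):
print(model1_value(4, 14, {4: 1, 5: 2}))  # 61
```

For $d=4$ and $m=14$ this returns $61$, matching Theorem \ref{thm:d4567} since $15$ is divisible by $5$.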
The utilities of the items are $(f_{\vartriangle}(d,i)-di)$ for $d\leq i\leq Z(d)$, and the volumes of the items range from $d$ to $Z(d)$, which is yet unknown if $d$ is odd (see Lemma \ref{lem:lower-bound-Z(d)}). Recall that we have $f_{\vartriangle}(d,d)=d^2+1$ for $d\geq 2$ by Theorem \ref{thm:case-d-d}, and $f_{\vartriangle}(d,Z(d))=dZ(d)+\lfloor d/2\rfloor$ for $d\geq 2$ by Theorem \ref{thm:5d-over-4-case}. It remains to compute $f_{\vartriangle}(d,i)$ for $d < i < Z(d)$. We suggest Conjecture \ref{con:Turan-generalization} for the value of $f_{\vartriangle}(d,i)$ for $d < i < Z(d)$, which, in turn, allows us to conjecture that the formula for $f_{\vartriangle}(d,m)$ in Theorem \ref{thm:ALL-IN-ONE} can be extended to all the remaining cases as well (in Conjecture \ref{con:general-formula-wrt-Z(d)}). Lastly, bearing in mind that the formula giving the value of $f_{\vartriangle}(d,m)$ can only be computed if $Z(d)$ is known, we suggest Conjecture \ref{con:open-Z(d)-values} to settle the values of $Z(d)$ for odd $d\geq 21$, which are left open (see Lemma \ref{lem:lower-bound-Z(d)}). In what follows, we conjecture that for $d< i<Z(d)$, $f_{\vartriangle}(d,i)$ follows the same trend as the one we identified in the other cases. \begin{conjecture}\label{con:Turan-generalization} Theorem \ref{thm:ALL-IN-ONE} holds also for $7\leq d<m<Z(d)$. In other words, for $7\leq d< i<Z(d)$, we have $$f_{\vartriangle}(d,i)=di+i-d+1.$$ \end{conjecture} \noindent If Conjecture \ref{con:Turan-generalization} holds, then we get $f_{\vartriangle}(d,i)-di=i-d+1$ for $2\leq d\leq i<Z(d)$.
Since $f_{\vartriangle}(d,Z(d))=dZ(d)+\lfloor d/2 \rfloor$ and the constant term $dm$ in the objective function does not affect the optimal solution, Model 1 is equivalent to solving the following optimization problem: \begin{center} \flushleft \textbf{Model 2:} $$\displaystyle{\max \,\, \lfloor d/2\rfloor x_{Z(d)}+\sum_{i=d}^{Z(d)-1}(i-d+1)x_i}$$ $$\displaystyle{ {\mbox{subject to }}\sum_{i=d}^{Z(d)}ix_i\leq m}$$ $$x_i\geq 0, x_i\in\mathbb{Z}$$ \end{center} \noindent We claim that if Conjecture \ref{con:Turan-generalization} holds, then Model 2 admits an optimal solution with nice properties. First, we need a direct consequence of Conjecture \ref{con:Turan-generalization}. \begin{proposition}\label{prop:Conjecture-1-Z(7)} If Conjecture \ref{con:Turan-generalization} is true, then we have $Z(7)\in\{8,9\}$. \end{proposition} \begin{proof} We claim that $Z(7)$ is the smallest natural number $k$ satisfying $f_{\vartriangle}(7,k)=7k+3$. Indeed, we know that $f_{\vartriangle}(7,Z(7))=7Z(7)+3$ by Theorem \ref{thm:5d-over-4-case}. Now, suppose $f_{\vartriangle}(7,k)=7k+3$ for some $k<Z(7)$. Then, we can take a graph $G\in \mathcal{G}_{\vartriangle}(7,k)$ with $7k+3$ edges. By Corollary \ref{cor:structure-not-star}, we know that each connected component $H$ of $G$ that is not a $7$-star is factor-critical with $\nu(H)\geq 7$. Since $\nu(H)\leq k<Z(7)$, it follows that $H$ is not almost $7$-regular, so $|E(H)|\leq ((2\nu(H)+1)7-3)/2=7\nu(H)+2$. By summing the number of edges over all connected components of $G$, we find $7k+3=|E(G)|\leq 7k+2$, which is a contradiction. As a result, we have $f_{\vartriangle}(7,k)<7k+3$ for $k<Z(7)$. Now, if Conjecture \ref{con:Turan-generalization} holds and $Z(7)\geq 10$, then it gives $f_{\vartriangle}(7,9)=66=7\times 9+3$, whereas the discussion above yields $f_{\vartriangle}(7,9)<7\times 9+3$, which is a contradiction. Since $Z(7)\geq 8$ by Lemma \ref{lem:smaller-Z(d)-values}, the result follows.
\end{proof} \noindent Now, we are ready to discuss the optimal solution to Model 2 that admits nice properties. \begin{proposition}\label{prop:Conjecture-1-Implies} If Conjecture \ref{con:Turan-generalization} is true, then for $7\leq d<m<Z(d)$, Model 2 admits an optimal solution with $\displaystyle{\sum_{i=d}^{Z(d)-1}x_i\leq 1}$, that is, one where $x_{Z(d)}$ is maximized and at most one other $x_i$ equals $1$ (all the rest being zero). \end{proposition} \begin{proof} Let $(x_{d},x_{d+1},\ldots,x_{Z(d)-1},x_{Z(d)})$ be an optimal solution for Model 2 with optimal value $opt$ and such that $x_d+x_{Z(d)}$ is maximal. We first show that $x_j\leq 1$ for all $d<j<Z(d)$. Assume the contrary. If $j\geq \dfrac{d+Z(d)}{2}$, then we can decrease $x_j$ by two, and increase each of $x_{2j-Z(d)}$ and $x_{Z(d)}$ by one, which gives another feasible solution for Model 2 with objective value $opt+(2j-Z(d)-d+1)+\lfloor d/2\rfloor-2(j-d+1)=opt+\lfloor(3d-2)/2\rfloor-Z(d)$. Since we increased $x_d+x_{Z(d)}$ by at least one, the new solution is not optimal by assumption, thus we have $\lfloor(3d-2)/2\rfloor\leq Z(d)-1$. By Lemma \ref{lem:lower-bound-Z(d)}, we obtain $\lfloor5(d+1)/4\rfloor\geq Z(d)\geq \lfloor 3d/2\rfloor$, which is a contradiction for $d\geq 8$. On the other hand, for $d=7$, Proposition \ref{prop:Conjecture-1-Z(7)} implies that $Z(7)\leq9$, which contradicts $\lfloor(3d-2)/2\rfloor\leq Z(d)-1$. As a result, if $j\geq \dfrac{d+Z(d)}{2}$, we have $x_j\leq 1$. \\ \noindent If $j<\dfrac{d+Z(d)}{2}$, then we can decrease $x_j$ by two, and increase each of $x_d$ and $x_{2j-d}$ by one, which would give a feasible solution with the objective value $opt+1+(2j-d+1)-2(j-d+1)=opt$. Since we increased $x_d+x_{Z(d)}$ by at least one, we get a contradiction. Therefore, we have $x_j\leq 1$ for all $d<j<Z(d)$.\\ \noindent Now, suppose $x_a=x_b=1$ for some $d<a<b<Z(d)$.
If $a+b\geq Z(d)+d$, then let us decrease $x_a$ and $x_b$ by one, and increase $x_{a+b-Z(d)}$ and $x_{Z(d)}$ by one. In this way, we get a feasible solution with the objective value $$opt-(a-d+1)-(b-d+1)+(a+b-Z(d)-d+1)+\lfloor d/2\rfloor.$$ Since we increased $x_d+x_{Z(d)}$ by at least one, we should have $\lfloor(3d-2)/2\rfloor\leq Z(d)-1$ by the assumption. As in the previous cases, this inequality does not hold for $d\geq 7$. If $a+b<d+Z(d)$, then let us decrease $x_a$ and $x_b$ by one, and increase $x_d$ and $x_{a+b-d}$ by one. In this way, we get a feasible solution with the objective value $$opt-(a-d+1)-(b-d+1)+1+((a+b-d)-d+1)=opt.$$ Since we increased $x_d+x_{Z(d)}$ by at least one, we get a contradiction. Therefore, we can say that $x_j\geq1$ holds for at most one value of $j$ with $d<j<Z(d)$.\\ \noindent Now, if $x_d\geq 2$, let us decrease $x_d$ by two, and increase $x_{Z(d)}$ by one. This yields a feasible solution with the objective value $opt-2+\lfloor d/2\rfloor>opt$, which is a contradiction. Hence, we have $x_d\leq 1$. The only remaining case is $x_d=1$ and there is exactly one $j$ value with $x_j=1$, $d<j<Z(d)$.\\ \noindent Let us decrease $x_d$ and $x_j$ by one, and increase $x_{Z(d)}$ by one. Then, we would get a feasible solution with the objective value $$opt-1-(j-d+1)+\lfloor d/2\rfloor=opt+\lfloor(3d-4)/2\rfloor-j.$$ If $\lfloor(3d-4)/2\rfloor=j$, then we can obtain the same optimal value with $x_d=x_j=0$, so the result follows. Thus, we are done if we show the inequality $\lfloor(3d-4)/2\rfloor\geq j$ for $d\geq 7$. For $d=7$, note that $Z(7)\in\{8,9\}$ by Proposition \ref{prop:Conjecture-1-Z(7)}. If $Z(7)=8$, then there are no $j$ values with $7<j<Z(7)$, so we are done. If $Z(7)=9$, then we get $j=8$ and so the equality is satisfied. For $d=8$, we know $Z(8)=10$ by Lemma \ref{lem:lower-bound-Z(d)}, which gives $j\leq 9$ and so $\lfloor(3d-4)/2\rfloor-j>0$.
For $d\geq 9$, by using Lemma \ref{lem:lower-bound-Z(d)}, we have $$\lfloor(3d-4)/2\rfloor\geq\lfloor(5d+1)/4\rfloor\geq Z(d)-1\geq j,$$ which completes the proof. \end{proof} \noindent Proposition \ref{prop:Conjecture-1-Implies} can be interpreted as follows: under the assumption that Conjecture \ref{con:Turan-generalization} holds, one can reach $f_{\vartriangle}(d,m)$ edges by taking as many copies of the graph $C_d$ as possible and adding either one more graph that is extremal for $f_{\vartriangle}(d,r)$ or $r$ many $d$-stars, depending on whether $r\geq d$, where $r$ is the remainder of $m$ upon division by $Z(d)$. Notice that this is exactly how we construct an extremal graph in Theorem \ref{thm:ALL-IN-ONE}. Therefore, the formula in Theorem \ref{thm:ALL-IN-ONE} would be valid for all integers $d$ and $m$ if Conjecture \ref{con:Turan-generalization} is true: \begin{conjecture}\label{con:general-formula-wrt-Z(d)} Let $m=k Z(d)+r$ for some $0\leq r<Z(d)$. Then, we have $$f_{\vartriangle}(d,m)=\begin{cases} dm+k\lfloor d/2\rfloor&\text{if }r<d,\\ dm+k\lfloor d/2\rfloor+r-d+1&\text{if }r\geq d. \end{cases}$$ \end{conjecture} \noindent Our next conjecture is about the value of $Z(d)$, which plays a crucial role in the computation of $f_{\vartriangle}(d,m)$ and the construction of extremal graphs. Recall that Lemma \ref{lem:smaller-Z(d)-values} together with Lemma \ref{lem:lower-bound-Z(d)} give $Z(d)=\lfloor 5d/4\rfloor$ if $d$ is even or $d\in\{3,5\}$; moreover, there is a narrow interval for the possible values of $Z(d)$ in the remaining cases (that is, $d\geq 7$ odd). In \cite{Haggkvist1}, it is stated that every triangle-free graph $G$ with $\delta(G)>3| V(G)|/8$ is a subgraph of a blow-up of the cycle of length five. For odd values of $d$, since the graphs $C_d$ realizing $Z(d)$ are triangle-free, almost regular, and factor-critical (by the definition of $Z(d)$), we have $\delta(C_d)\geq d-2$ and $|V(C_d)|=2Z(d)+1\leq 2\lfloor 5(d+1)/4\rfloor+1$ by Lemma \ref{lem:lower-bound-Z(d)}.
Since $d-2> 3(2\lfloor 5(d+1)/4\rfloor+1)/8$ for all but a few small values of $d$, this result implies that these $C_d$ graphs should be blow-ups of the cycle of length five provided that $d$ is sufficiently large. In some cases, we could show that if $Z(d) < \lfloor5(d+1)/4\rfloor$, then it is not possible to construct a graph $C_d$ which is a blow-up of a cycle of length five and (almost) regular of degree $d$; thus $Z(d)=\lfloor5(d+1)/4\rfloor$ in these cases. We believe that this also holds for the remaining cases if $d$ is large enough, which we formulate as a conjecture: \begin{conjecture}\label{con:open-Z(d)-values} For $d\geq 21$ and odd, we have $Z(d)=\lfloor5(d+1)/4\rfloor$. \end{conjecture} Last but not least, let us reformulate the computation of $f_{\vartriangle}(d,i)$ for $d \leq i \leq Z(d)$ as a generalized version of the Erdős--Stone Theorem, which has been described as a fundamental theorem of extremal graph theory (see \cite{bollobas}). The extremal number $\text{ex}(n,H)$ is defined as the maximum number of edges in a graph on $n$ vertices not containing a subgraph isomorphic to $H$. Note that the classical Turán Theorem \cite{turan} addresses the answer for $\text{ex}(n,K_r)$. In our case, we seek the maximum number of edges in a triangle-free graph whose maximum degree is also bounded by some parameter. Indeed, by Corollary \ref{cor:structure-not-star} and Lemma \ref{lem:bound-5d-over-4}, for a graph $G\in \mathcal{G}_{\vartriangle} (d,m)$, every connected component of $G$ which is not a $d$-star is a factor-critical edge-extremal triangle-free graph with $f_{\vartriangle}(d,i)$ edges, thus with $2i+1$ vertices, where $d\leq i\leq Z(d)$. Hence, by forbidding not a single graph $H$ but any graph in a family $\mathcal{F}$ in the Erdős--Stone Theorem, we can write $f_{\vartriangle}(d,i)=\text{ex}(2i+1,\{K_3,K_{1,d}\})$ for $d\leq i\leq Z(d)$.
It follows that we have reduced our original problem of determining the maximum number of edges in a triangle-free graph with degree and matching number bounds to determining $\text{ex}(2i+1,\{K_3,K_{1,d}\})$ for $d\leq i\leq Z(d)$. Let us conclude by noting that the Erdős--Stone Theorem investigates the asymptotic behavior of $\text{ex}(n,\mathcal{F})$, whereas we seek the exact value in the particular case $\text{ex}(2i+1,\{K_3,K_{1,d}\})$.
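As a closing computational remark, once $Z(d)$ is known, the formula of Theorem \ref{thm:ALL-IN-ONE} is immediate to evaluate; a minimal Python sketch using the values of $Z(d)$ for $2\leq d\leq 6$ established in Lemmas \ref{lem:smaller-Z(d)-values} and \ref{lem:lower-bound-Z(d)} (the function name is ours):

```python
# Z(2), ..., Z(6); note Z(6) = floor(5*6/4) = 7 by Lemma lower-bound-Z(d).
Z = {2: 2, 3: 3, 4: 5, 5: 6, 6: 7}

def f_triangle_free(d, m):
    """Formula (*) of Theorem ALL-IN-ONE, for 2 <= d <= 6."""
    k, r = divmod(m, Z[d])
    base = d * m + k * (d // 2)
    return base if r < d else base + r - d + 1

# f(d, d) = d^2 + 1, e.g. f(4, 4) = 17; also f(2, 5) = 12 and f(5, 11) = 58.
print(f_triangle_free(4, 4), f_triangle_free(2, 5), f_triangle_free(5, 11))
```

For instance, $f_{\vartriangle}(4,4)=17=d^2+1$, in agreement with Theorem \ref{thm:case-d-d}.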
https://arxiv.org/abs/1601.04091
Multigrid Methods for Constrained Minimization Problems and Application to Saddle Point Problems
The first order condition of the constrained minimization problem leads to a saddle point problem. A multigrid method using a multiplicative Schwarz smoother for saddle point problems can thus be interpreted as a successive subspace optimization method based on a multilevel decomposition of the constraint space. Convergence theory is developed for successive subspace optimization methods based on two assumptions on the space decomposition: stable decomposition and strengthened Cauchy-Schwarz inequality, and successfully applied to the saddle point systems arising from mixed finite element methods for Poisson and Stokes equations. Uniform convergence is obtained without the full regularity assumption of the underlying partial differential equations. As a byproduct, a V-cycle multigrid method for non-conforming finite elements is developed and proved to be uniformly convergent with even one smoothing step.
\section{Introduction} Given a quadratic energy $E(v)$ defined on a Hilbert space $\mathcal V$, we consider the constrained minimization problem: \begin{equation}\label{intro:main-opt} \min _{v\in \mathcal K} E(v), \end{equation} where $\mathcal K\subset \mathcal V$ is the null space of a linear and bounded operator $B$ defined on $\mathcal V$. By introducing the Lagrange multiplier for the constraint, we can find the minimizer of \eqref{intro:main-opt} by solving a saddle point system. In this paper, we shall design and analyze multigrid methods for the constrained minimization problem \eqref{intro:main-opt} and apply them to the saddle point systems arising from mixed finite element discretizations of Poisson, Darcy, and Stokes equations. We shall adapt the constraint decomposition methods developed by Tai for nonlinear variational inequalities~\cite{Tai.X2003} to the constrained minimization problem. Let $\mathcal K = \sum_{i=1}^N \mathcal K_i$ be a space decomposition. Our method consists of solving a local constrained minimization problem in each subspace $\mathcal K_i$, which is equivalent to solving a small saddle point problem. Thus our relaxation can be interpreted as a multiplicative overlapping Schwarz method, which is known as the Vanka smoother~\cite{Vanka1986} in the context of computational fluid dynamics. With a proper multilevel decomposition, our method becomes the classical V-cycle multigrid method. Assuming that the decomposition $\mathcal K = \sum_{i=1}^N \mathcal K_i$ satisfies two assumptions: energy stable decomposition (SD) and strengthened Cauchy-Schwarz inequality (SCS), we are able to prove the convergence of our method $$ E(u^{k+1}) - E(u)\leq \left (1-\frac{1}{1+C_AC_S}\right) \left [E(u^k) - E(u)\right ], $$ where $u^{k}$ is the $k$-th iteration, and $C_A$ and $C_S$ are positive constants in (SD) and (SCS).
We also extend the analysis to the case where the local constrained minimization problem is not solved exactly but one gradient iteration is applied. It is known numerically that the multiplicative Schwarz smoother leads to efficient multigrid methods for saddle point problems~\cite{Schoberl1999,Schoberl2003}; however, theoretical analysis of the convergence is only available for the less efficient additive versions~\cite{Schoberl1999,Schoberl2003}. Our new framework can fill this gap. Furthermore, the optimal choice of the relaxation parameter used in the inexact solvers of local problems can be derived from the minimization point of view. We then apply our method to the saddle point systems arising from mixed finite element methods for the Poisson, Darcy, and Stokes equations. By verifying assumptions (SD) and (SCS) for multilevel decompositions of H(div) element spaces, we will prove the uniform convergence of a V-cycle multigrid method for mixed finite element methods for the Poisson and Darcy equations. Our smoother is related to the overlapping Schwarz method developed for H(div) problems in~\cite{Ewing.R;Wang.J1992a,Vassilevski.P;Wang.J1992,Mathew1993,Mathew1993a,Arnold.D;Falk.R;Winther.R2000}. But our analysis from the energy minimization point of view is more transparent. We note that a similar stable multilevel decomposition for the Raviart-Thomas space has been proposed in~\cite{Vassilevski.P;Wang.J1992} in two dimensions and in~\cite{Hiptmair1999,Arnold.D;Falk.R;Winther.R2000} in three dimensions. Our decomposition for the three dimensional case is new and does not require the duality argument, and thus relaxes the full regularity assumption needed in~\cite{Hiptmair1999,Arnold.D;Falk.R;Winther.R2000}. We use the equivalence between Crouzeix-Raviart (CR) non-conforming methods and mixed methods to develop a V-cycle multigrid method for non-conforming methods of the Poisson equation and prove its uniform convergence.
Existing convergence proofs of multigrid methods for non-conforming methods~\cite{Brenner.S1989,Oswald.P1997,Brenner.S1999,Brenner2003a,Oswald.P2008} cannot cover V-cycles with few smoothing steps, while our new framework can. The two ingredients of our new multigrid method for non-conforming methods are: the overlapping Schwarz smoothers, and inter-grid transfer operators through the nested flux spaces. For discrete Stokes equations, we apply our theory to divergence free and nested finite element spaces, e.g., Scott-Vogelius elements~\cite{Scott.L;Vogelius.M1985}. Again, traditional multigrid convergence proofs for Stokes equations require the full regularity assumption~\cite{Verfurth1984,Brenner.S1990,Brenner1996a,Braess.D;Sarazin.R1997,Zulehner.W2000,Olshanskii2011}. Using the framework developed in this paper, we can obtain multigrid convergence without the full regularity assumption. Very recently, Brenner, Li, and Sung~\cite{Brenner2014} have developed new multigrid methods for Stokes equations and have proved the uniform convergence without the full regularity assumption. The convergence result of~\cite{Brenner2014} is, however, restricted to W-cycle multigrid methods with sufficiently many smoothing steps. Here we consider V-cycle multigrid with only one smoothing step. Furthermore, smoothers developed in~\cite{Brenner2014} are less efficient than the Vanka-type smoothers considered here; see numerical examples in~\cite{Schoberl1999,Brenner2014}. On the other hand, the framework developed in~\cite{Brenner2014} can be applied to any stable mixed finite element discretization of the Stokes equations, and in \cite{Brenner2015} such convergence theory is also extended to Darcy systems, while the current theory can only be applied to the case when the constrained subspaces are nested. For non-nested constrained subspaces, an additional projector is needed and an analysis for W-cycle multigrid without the full regularity assumption can be found in~\cite{Chen2015b}. 
For popular finite element pairs for Stokes equations, a fast multigrid method using a least squares distributive Gauss-Seidel smoother has been developed in~\cite{Wang2013} and generalized to the Oseen problem in~\cite{Chen2013e}. Although most of the abstract theory, either based on the Xu-Zikatanov identity~\cite{Xu.J;Zikatanov.L2002} or following the Tai-Xu approach~\cite{Tai2001}, has been developed in some form in the literature, the application to multigrid methods for solving saddle point systems is new and leads to several new contributions to the multigrid theory for saddle point systems: a convergence proof of the V-cycle with as few as one smoothing step, a convergence proof without the full regularity assumption, and a convergence proof for the multiplicative Schwarz smoother. The stable decompositions of several finite element spaces established in this paper are also of independent interest. The rest of this paper is structured as follows. In Section 2, we introduce the algorithm. In Section 3, we give a convergence proof using the X-Z identity, and in Section 4, we present an alternative proof based on the constraint subspace optimization method. We extend the convergence proof to inexact local solvers in Section 5. In Sections 6, 7, and 8, we apply our method to mixed finite element methods for the Poisson and Darcy equations, non-conforming finite element methods for the Poisson equation, and mixed finite element methods for the Stokes equations, respectively. In the last section, we give conclusions and an outlook for future work. \section{Algorithm}\label{sec:algorithm} Let $\mathcal H$ be a Hilbert space equipped with an inner product $(\cdot,\cdot)$ and let $\mathcal V\subset \mathcal H$ be a closed subspace; thus $\mathcal V$ is also a Hilbert space. Suppose $A: \mathcal V\to \mathcal V$ is a symmetric and positive definite (SPD) operator with respect to $(\cdot,\cdot)$, which introduces a new inner product $(u,v)_A := (Au,v)=(u,Av)$ on $\mathcal V$. 
The norm associated to $(\cdot,\cdot)$ or $(\cdot,\cdot)_A$ will be denoted by $\|\cdot\|$ or $\|\cdot\|_A$, respectively. Let $\mathcal P$ be another Hilbert space and let $B: \mathcal V\to \mathcal P$ be a linear operator. With a slight abuse of notation, we still denote the inner product of $\mathcal P$ by $(\cdot,\cdot)$. In most problems of consideration, the inner product $(\cdot,\cdot)$ for $\mathcal H$ is the vector $L^2$-inner product while for $\mathcal P$ it is the scalar $L^2$-inner product. The transpose $B^T: \mathcal P \to \mathcal V$ is the adjoint of $B$ in the $(\cdot, \cdot)$ inner product, i.e., $(Bv, q) = (v, B^Tq)$ for all $v\in \mathcal V, q\in \mathcal P$. For an $f\in \mathcal H$, we define the Dirichlet-type energy: \begin{equation} E(v) = \frac{1}{2}\|v\|_A^2 - ( f,v ), \quad \text{ for } v\in \mathcal V. \end{equation} In this paper we always identify a functional in the dual space $\mathcal H'$ as an element in $\mathcal H$ through the Riesz map induced by $(\cdot,\cdot)$. Denote by $\mathcal K = \ker(B)$ the subspace satisfying the constraint $Bv=0$, i.e., the null space of $B$. We are interested in the following constrained minimization problem: \begin{equation}\label{main-opt} \min _{v\in \mathcal K} E(v). \end{equation} Since the energy is quadratic and convex, there exists a unique solution to~\eqref{main-opt} and the minimizer $u$ of~\eqref{main-opt} is characterized as the solution of the equation: Find $u\in \mathcal K$ such that \begin{equation}\label{spd} (Au, v) = ( f, v ) \quad \text{ for all } v\in \mathcal K. \end{equation} We introduce the operator $A_{\mathcal K}: \mathcal K\to \mathcal K$ as $(A_{\mathcal K} u, v) = (Au, v)$ for all $u, v\in \mathcal K$ and the operator $Q_{\mathcal K}: \mathcal H\to \mathcal K$ as the $(\cdot,\cdot)$-projection, i.e., for a given function $f\in \mathcal H$, $Q_{\mathcal K}f\in \mathcal K$ satisfies $(Q_{\mathcal K} f, v) = (f, v)$ for all $v\in \mathcal K$. 
Then the operator form of~\eqref{spd} is: Find $u\in \mathcal K$ such that \begin{equation}\label{AK} A_{\mathcal K} u = Q_{\mathcal K}f \quad \text{in } \mathcal K. \end{equation} As it might be difficult to find bases for the subspace $\mathcal K$, instead of solving the symmetric positive definite formulation~\eqref{AK}, we shall consider an equivalent saddle point formulation. Introducing the Lagrange multiplier $p\in \mathcal P$, we can rewrite equation~\eqref{spd} as the following saddle point system: Find $u\in \mathcal V, p\in \mathcal P$ such that \begin{align*} (Au, v) + (p, Bv) &= ( f, v ) &\text{for all } v\in \mathcal V,\\ (Bu, q) \qquad \quad \quad \; & = 0&\text{for all } q\in \mathcal P, \end{align*} which will be written in the operator form \begin{equation}\label{ABBO} \begin{pmatrix} A & B^T\\ B & O \end{pmatrix} \begin{pmatrix} u\\ p \end{pmatrix} = \begin{pmatrix} f\\ 0 \end{pmatrix}. \end{equation} Let $\|\cdot\|_V$ and $\|\cdot\|_P$ be two appropriate norms for the spaces $\mathcal V$ and $\mathcal P$, respectively. It is well known that~\eqref{ABBO} is well posed if and only if the following so-called Brezzi conditions \cite{Brezzi.F;Fortin.M1991} hold: \begin{enumerate} \item Continuity of operators $A$ and $B$: there exist constants $c_a, c_b > 0$ such that $$ (Au, v) \leq c_a\|u\|_{V}\|v\|_{V}, \quad (Bv, q)\leq c_b\|v\|_{V}\|q\|_P, \quad \text{for all } u, v \in \mathcal V, q\in \mathcal P. $$ \item Coercivity of $A$ in the kernel space. There exists a constant $\alpha>0$ such that $$ (Au, u) \geq \alpha \|u\|_{V}^2 \quad \text{for all }u \in \ker(B). $$ \item Inf-sup condition of $B$. There exists a constant $\beta >0$ such that $$ \inf_{p\in \mathcal P, p\neq 0} \sup_{v \in \mathcal V, v \neq 0} \frac{(Bv, p)}{\|v\|_{V}\|p\|_{P}} \geq \beta. 
$$ \end{enumerate} Choices of norms $\| \cdot \|_V$ and $\| \cdot \|_P$ are not unique~\cite{Zulehner2011}, and $\| \cdot \|_V = \| \cdot \|_A$ may not always be a good choice since $B$ may not be continuous in the $\|\cdot \|_A$ norm; cf. the mixed formulation of the Poisson equation in Section \ref{sec:mixPoisson}. Throughout this paper, we will assume the well-posedness of~\eqref{ABBO} and focus on its efficient solvers. Problems~\eqref{AK} and~\eqref{ABBO} are equivalent theoretically but will lead to different algorithms. In practice, the saddle point formulation will be easier to solve when bases of $\mathcal K$ are not available or expensive to form. We shall develop and analyze multigrid methods for solving the saddle point system \eqref{ABBO} based on subspace correction methods~\cite{Xu1992} and their adaptation to optimization problems~\cite{Tai2001,Tai.X2003}. Let $$ \mathcal V = \mathcal V_1 + \mathcal V_2 + \cdots + \mathcal V_N, \; \mathcal V_i\subset \mathcal V, i=1,\ldots, N, $$ be a space decomposition of $\mathcal V$ satisfying the condition $$ \mathcal K = \mathcal K_1 + \mathcal K_2 + \cdots + \mathcal K_N, \; \mathcal K_i = \mathcal V_i \cap \ker(B), i=1,\ldots, N. $$ For $k\geq 0$ and a given approximate solution $u^k\in \mathcal K$, one step of the Successive Subspace Optimization (SSO) method~\cite{Tai2001} is as follows: \begin{figure}[htbp] \begin{center} \includegraphics[width = 4.2in]{SSO.pdf} \end{center} \end{figure} If we write the Euler equation of the local minimization problem, it reads as \begin{equation}\label{correction} (A e_i, \phi_i) = ( f - Av_{i-1}, \phi_i )\quad \text{for all } \phi_i\in \mathcal K_i. \end{equation} Namely, $e_i$ is the solution of the residual equation restricted to $\mathcal K_i$. We can thus treat SSO as the subspace correction method for solving \eqref{spd} using the space decomposition $\mathcal K = \sum _{i=1}^N\mathcal K_i$. We can analyze the convergence from this point of view. 
Using the fact $Au = f$ in $\mathcal K'$ and $v_i = v_{i-1} + e_i$, equation~\eqref{correction} is also equivalent to the $A$-orthogonality \begin{equation}\label{orth} (u - v_i, \phi_i)_A = 0 \quad \text{for all } \phi_i\in \mathcal K_i, \end{equation} which can also be written as \begin{equation}\label{orth2} (E'(v_i), \phi_i) = 0 \quad \text{for all } \phi_i\in \mathcal K_i. \end{equation} Let $\mathcal P_i = \mathcal P\cap B(\mathcal V_i)$. Define $A_i: \mathcal V_i \to \mathcal V_i$ by $(A_i u_i, v_i) = (Au_i, v_i)$ for all $u_i, v_i \in \mathcal V_i$, and $B_i: \mathcal V_i \to \mathcal P_i$ by $(B_i u_i, q_i) = (Bu_i, q_i)$ for all $u_i\in \mathcal V_i, q_i \in \mathcal P_i$. Let $Q_i: \mathcal H \to \mathcal V_i$ be the projection in the $(\cdot,\cdot)$ inner product. The constrained minimization problem in the constraint subspace $\mathcal K_i$ will be solved by solving a small saddle point system in $\mathcal V_i$: \begin{equation}\label{localproblem} \begin{pmatrix} A_i & B^T_i\\ B_i & O \end{pmatrix} \begin{pmatrix} e_i\\ p_i \end{pmatrix} = \begin{pmatrix} Q_i(f - Av_{i-1})\\ 0 \end{pmatrix}. \end{equation} A typical multilevel decomposition is given as follows. First we construct a macro-decomposition $\mathcal V = \sum_{k=1}^J \mathcal V_k$ with nested subspaces $\mathcal V_1\subset \mathcal V_2 \subset \ldots \subset \mathcal V_J = \mathcal V$. Usually they are based on a sequence of successively refined meshes. For each subspace $\mathcal V_k, k=1, \ldots, J$, we introduce a micro-decomposition $\mathcal V_k = \sum_{i=1}^{N_k}\mathcal V_{k,i}$ and set $\mathcal K_{k,i} = \mathcal V_{k,i}\cap \ker(B)$. Note that the assumption $\mathcal K = \sum_{k=1}^{J}\sum_{i=1}^{N_k}\mathcal K_{k,i}$ requires a careful choice of the micro-decomposition of $\mathcal V_k$. 
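To make the relaxation concrete, the following short numpy sketch performs SSO sweeps in which each overlapping patch $\{i, i+1\}$ solves a local saddle point problem of the form \eqref{localproblem}. The data are hypothetical placeholders, not a finite element discretization: $A$ is a random SPD matrix and the single constraint is $Bv = \mathbf 1^{T}v = 0$, so each local problem is a $3\times 3$ system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # random SPD operator (placeholder)
f = rng.standard_normal(n)
B = np.ones((1, n))                # toy constraint: sum(v) = 0, K = ker(B)

def energy(v):
    return 0.5 * v @ A @ v - f @ v

# reference minimizer over K via the global saddle point system
KKT = np.block([[A, B.T], [B, np.zeros((1, 1))]])
u = np.linalg.solve(KKT, np.concatenate([f, [0.0]]))[:n]

def sso_sweep(v):
    """One multiplicative Schwarz (Vanka-type) sweep: each patch {i, i+1}
    solves a small local saddle point system; the correction satisfies the
    local constraint, so the iterate stays in K."""
    for i in range(n - 1):
        idx = [i, i + 1]
        Ai = A[np.ix_(idx, idx)]
        Bi = B[:, idx]
        r = f - A @ v                              # current residual
        lhs = np.block([[Ai, Bi.T], [Bi, np.zeros((1, 1))]])
        e = np.linalg.solve(lhs, np.concatenate([r[idx], [0.0]]))[:2]
        v[idx] += e
    return v

v = np.zeros(n)                                    # initial guess in K
gaps = []
for _ in range(1000):
    v = sso_sweep(v)
    gaps.append(energy(v) - energy(u))
print(gaps[-1] / gaps[0])                          # energy gap reduction
```

Each local solve decreases the energy and keeps the iterate in $\mathcal K$; a mesh-independent contraction rate, however, only follows from assumptions (SD) and (SCS), which this toy example does not attempt to illustrate.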
Roughly speaking, each subspace $\mathcal V_{k,i}$ should be big enough to contain a basis function of $\mathcal K$ and each basis function of $\mathcal K$ should be contained in at least one $\mathcal V_{k,i}$. A similar decomposition is required to design robust multigrid methods for nearly singular systems~\cite{Lee.Y;Wu.J;Xu.J;Zikatanov.L2006}. \begin{remark}\rm Solving the local saddle point problems in $\mathcal V_{k,i}$ sequentially in the $k$-th level can be interpreted as a multiplicative Schwarz smoother, which is better known as the Vanka smoother~\cite{Vanka1986} for Navier-Stokes equations. \qed \end{remark} Due to the nestedness of the macro-decomposition, restriction and prolongation operators are needed only for two consecutive levels. In summary, SSO based on this multilevel decomposition leads to a V-cycle multigrid method for the saddle point problem~\eqref{ABBO} with a multiplicative Schwarz smoother. Thanks to the assumption $\mathcal K_{k,i}\subset \mathcal K$, if $u^k\in \mathcal K$, then $u^{k+1} =$ SSO$(u^k)$ is still in $\mathcal K$. Namely, the iteration remains in the constrained subspace. The Uzawa method~\cite{Uzawa1958}, another popular iterative method for solving the saddle point problem, will not preserve the constraint and thus is not considered here. We shall use either the unconstrained SPD formulation~\eqref{spd} and~\eqref{correction} or the constrained saddle point formulation~\eqref{ABBO} and~\eqref{localproblem}. They are equivalent forms for the convergence analysis but different algorithmically. We end this section with a discussion of the non-homogeneous constraint, i.e., the saddle point problem \begin{equation}\label{ABBOfg} \begin{pmatrix} A & B^T\\ B & O \end{pmatrix} \begin{pmatrix} u\\ p \end{pmatrix} = \begin{pmatrix} f\\ g \end{pmatrix}. \end{equation} To change to the form \eqref{ABBO}, we can first find a $u_*\in \mathcal V$ satisfying $Bu_* = g$ and let $u = u_* + \delta u$. Then the equation for $\delta u$ is in the form \eqref{ABBO}. 
There are several ways to find such $u_*$. One way is to solve \begin{equation}\label{ABBO0g} \begin{pmatrix} I & B^T\\ B & O \end{pmatrix} \begin{pmatrix} u_*\\ p_* \end{pmatrix} = \begin{pmatrix} 0\\ g \end{pmatrix}, \end{equation} which is supposed to be easier than solving \eqref{ABBOfg}. For Stokes equations, solving \eqref{ABBO0g} essentially requires a Poisson solver for the pressure, for which fast solvers are available. For Darcy equations, $A$ is a weighted mass matrix with possibly highly oscillatory coefficients, while \eqref{ABBO0g} is again just a Poisson operator. When the space $\mathcal P$ consists of discontinuous elements, which is the case for most applications considered in this paper, we can find such $u_*$ by one V-cycle with post-smoothing only; see Section \ref{sec:mixPoisson} for details. \section{Convergence Analysis based on the XZ identity} In this section, we provide a convergence analysis using the SPD formulation~\eqref{spd} and~\eqref{correction}. The analysis is based on the XZ identity~\cite{Xu.J;Zikatanov.L2002} for multiplicative iterative methods and can be found in~\cite{Xu.J;Chen.L;Nochetto.R2009}. Denote by $P_i$ the $A$-orthogonal projection onto $\mathcal K_i$ for $i=1, \ldots, N$. Then the error operator of SSO can be written as $(I - P_N)(I-P_{N-1})\cdots (I-P_1)$, i.e., $u - u^{k+1} = \prod _{i=1}^N (I-P_{i}) (u-u^k)$, where $u^{k+1} = {\rm SSO}(u^k)$. The following XZ identity was established in~\cite{Xu.J;Zikatanov.L2002}: \begin{equation}\label{XZ} \Big \|\prod _{i=1}^N (I-P_{i}) \Big \|_A^2 = 1 - \frac{1}{1+c_0}, \end{equation} where $$ c_0 = \sup _{\|v\|_A=1}\; \inf _{\substack{\sum _{i=1}^N v_{i}=v \\ v_i\in \mathcal K_i}} \sum _{i=1}^N \Big \|P_i\sum _{j=i+1}^N v_j \Big \|^2_{A}. $$ For an elementary proof of \eqref{XZ}, we refer to Chen~\cite{Chen.L2009c}. In order to estimate the constant $c_0$, we propose two important properties of the space decomposition. 
\smallskip \noindent{\bf Stable decomposition (SD)}: for every $v\in \mathcal K$, there exists $v_i\in \mathcal K_i, i=1, \ldots, N$ such that $$ v = \sum _{i=1}^N v_i, \quad \text{ and }\quad \sum_{i=1}^N \|v_i\|_A^2 \leq C_A\|v\|_A^2. $$ \noindent{\bf Strengthened Cauchy-Schwarz inequality (SCS)}: for any $u_i\in \mathcal K_i$ and $v_j\in \mathcal K_j$, $$ \sum_{i=1}^N\sum_{j=i+1}^N (u_i, v_j)_A \leq C_S^{1/2} \left (\sum_{i=1}^N \|u_i\|^2_A\right )^{1/2}\left (\sum_{j=1}^N \|v_j\|^2_A\right )^{1/2}. $$ With assumptions (SD) and (SCS), we shall provide an upper bound of $c_0$ and thus obtain a convergence proof of the SSO method for solving the saddle point problem~\eqref{ABBO}. \begin{theorem}\label{th:exactrate} Assume that the space decomposition $\mathcal K = \sum_{i=1}^N \mathcal K_i$ satisfies assumptions (SD) and (SCS). For the SSO method, we have $$ \Big \|\prod _{i=1}^N (I-P_{i})\Big \|^2_A \leq 1-\frac{1}{1+C_AC_S}. $$ \end{theorem} \begin{proof} We apply (SCS) with $u_i = P_i \sum_{j=i+1}^N v_j$ to obtain \begin{align*} \sum _{i=1}^N \|u_i\|_A^2 &= \sum _{i=1}^N (u_i,P_i \sum _{j=i+1}^N v_j)_A = \sum _{i=1}^N \sum _{j=i+1}^N (u_i, v_j)_A\\ &\leq C_S^{1/2}\left (\sum _{i=1}^N \|u_i\|_A^2\right)^{1/2} \left (\sum _{i=1}^N \|v_i\|_{A}^2 \right)^{1/2}, \end{align*} which leads to the inequality \begin{equation} \sum _{i=1}^N \|u_i\|_A^2 \leq C_S\sum _{i=1}^N \|v_i\|_{A}^2. \end{equation} Consequently, we choose $v=\sum _{i=1}^N v_i$ as a stable decomposition satisfying (SD) to get $$ \sum _{i=1}^N \Big \|P_{i} \sum _{j=i+1}^N v_{j}\Big \|_{A}^2=\sum _{i=1}^N \|u_i\|_A^2\leq C_S\sum _{i=1}^N \|v_i\|_{A}^2 \leq C_SC_A\|v\|_A^2, $$ which implies $c_0\leq C_SC_A$. The desired result then follows from the X-Z identity~\eqref{XZ}. \end{proof} The assumption (SCS) is relatively easy to verify. The key is to construct a stable decomposition of the constraint space $\mathcal K$. 
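The contraction of the error operator can be observed numerically on a small example. The following numpy sketch uses hypothetical random data; for simplicity the subspaces decompose all of $\mathbb R^n$ (i.e., $\mathcal K = \mathcal V$), and only the bound $\|\prod_i(I-P_i)\|_A < 1$ is checked, not the constants $C_A$, $C_S$, or $c_0$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # SPD, defines the A-inner product

def a_proj(K):
    """A-orthogonal projection onto the column span of K."""
    return K @ np.linalg.solve(K.T @ A @ K, K.T @ A)

# two overlapping coordinate subspaces whose sum is all of R^n
K1 = np.eye(n)[:, :4]
K2 = np.eye(n)[:, 3:]
I = np.eye(n)
E = (I - a_proj(K2)) @ (I - a_proj(K1))  # error operator of one sweep

# A-norm of E: spectral norm of A^{1/2} E A^{-1/2}
w, V = np.linalg.eigh(A)
Ah = V @ np.diag(np.sqrt(w)) @ V.T
Ahinv = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
normE = np.linalg.norm(Ah @ E @ Ahinv, 2)
print(normE)                              # strictly below 1
```

Since the two subspaces together span the whole space, the multiplicative method contracts in the $A$-norm, consistent with \eqref{XZ} for a finite $c_0$.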
\section{Convergence Analysis based on Constrained Optimization} In this section we provide an alternative proof using the constrained optimization approach established by Tai~\cite{Tai.X2003}. It also provides a more convenient way to extend the convergence proof to inexact and/or nonlinear local solvers. We will always denote by $u$ the global minimizer of \eqref{main-opt}. Given an initial guess $u^0\in \mathcal K$, let $u^k$ be the $k$th iteration of the SSO algorithm for $k=1,2,\ldots$. We aim to prove a linear reduction of the energy difference \begin{equation}\label{linearreduction} E(u^{k+1}) - E(u) \leq \rho \left [ E(u^{k}) - E(u) \right ], \end{equation} with a contraction factor $\rho \in (0,1)$. Ideally $\rho$ is independent of the size of the problem. The proof is developed in~\cite{Tai2001,Tai.X2003} for a nonlinear and convex energy but simplified here for the quadratic energy. We first explore the relation between the energy and the $A$-norm of the error. \begin{lemma}\label{lm:energyandnorm} For any $w, v\in \mathcal V$, we have \begin{equation}\label{Ewv} E(w)-E(v) = \frac{1}{2}\|w - v\|_A^2 + ( E'(v), w-v ). \end{equation} Consequently for the minimizer $u\in \mathcal K$ and any $w\in \mathcal K$, \begin{equation}\label{Ewu} E(w)-E(u) = \frac{1}{2}\|w - u\|_A^2. \end{equation} \end{lemma} \begin{proof} Verification of~\eqref{Ewv} and~\eqref{Ewu} is straightforward. \end{proof} Based on the identity~\eqref{Ewu}, the target inequality~\eqref{linearreduction} becomes a more familiar one \begin{equation} \|u^{k+1} - u\|_A \leq \rho^{1/2}\|u^k - u\|_A. \end{equation} Let $d_k = E(u^k) - E(u)$ and $\delta _k = E(u^k) - E(u^{k+1}).$ The quantity $d_k$ is the distance of the current energy to the lowest one, $\delta_k$ is the amount of energy decreased in one iteration, and they are connected by the identity $\delta_k = d_k - d_{k+1}$. 
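The identities of Lemma \ref{lm:energyandnorm} are elementary to verify numerically; a minimal numpy check with a random SPD $A$ follows, where for \eqref{Ewu} we take the unconstrained case $\mathcal K=\mathcal V$ so that the minimizer is simply $u=A^{-1}f$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)               # SPD
f = rng.standard_normal(n)

E  = lambda v: 0.5 * v @ A @ v - f @ v    # quadratic energy
dE = lambda v: A @ v - f                  # gradient E'(v)

v, w = rng.standard_normal(n), rng.standard_normal(n)
# identity (Ewv): E(w) - E(v) = 1/2 |w - v|_A^2 + (E'(v), w - v)
err1 = E(w) - E(v) - (0.5 * (w - v) @ A @ (w - v) + dE(v) @ (w - v))
# identity (Ewu) with K = V: minimizer u = A^{-1} f, so E'(u) = 0
u = np.linalg.solve(A, f)
err2 = E(w) - E(u) - 0.5 * (w - u) @ A @ (w - u)
print(abs(err1), abs(err2))               # both zero up to round-off
```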
By Lemma \ref{lm:energyandnorm}, we have $d_k = \frac{1}{2}\|u^k - u\|_A^2$ but in general $\delta_k \neq \frac{1}{2}\|u^k - u^{k+1}\|_A^2$ since $u^{k+1}$ may not be the minimizer. For each $v_i, i=1,\ldots,N,$ in SSO, we do have $$E(v_{i-1}) - E(v_i) = \frac{1}{2}\|v_{i-1} - v_i\|_A^2,$$ since $v_i$ is the local minimizer and $v_{i-1} - v_i = - e_i\in \mathcal K_i$; see also the orthogonality~\eqref{orth2}. Borrowing the terminology of the convergence theory of adaptive finite element methods~\cite{Nochetto.R;Siebert.K;Veeser.A2009}, we shall present our proof based on the following two inequalities. \medskip \noindent{\bf Discrete Lower Bound.} There exists a positive constant $C_L$ such that for $k=0,1,2, \ldots$ $$ \delta _k \geq C_L \sum _{i=1}^N \|e_i\|^2_A. $$ \noindent{\bf Upper Bound.} There exists a positive constant $C_U$ such that for $k=0,1,2, \ldots$ $$ d_{k+1} \leq C_U \sum _{i=1}^N \|e_i\|^2_A. $$ \begin{theorem} Assume that the discrete lower bound and upper bound hold with constants $C_L$ and $C_U$ respectively. We then have $$ d_{k+1}\leq \frac{c_0}{1+c_0}d_k, $$ where $c_0=C_U/C_L$. \end{theorem} \begin{proof} The proof is straightforward by assumptions and rearrangement of the following inequality $$ d_{k+1} \leq C_U \sum _{i=1}^N \|e_i\|^2_A \leq C_U/C_L \delta _k = c_0( d_k - d_{k+1}). $$ \end{proof} Verifying the lower bound is relatively easy since $E$ is convex. Indeed we have the following identity which characterizes exactly the amount of energy decreased in one step of SSO. Again in the sequel, $u^{k+1}={\rm SSO}(u^k)$ and $e_i$ is the $i$th correction in $\mathcal K_i$, for $i=1,\ldots,N$. \begin{theorem} $$ E(u^k) - E(u^{k+1}) = \frac{1}{2} \sum _{i=1}^N \|e_i\|^2_A. 
$$ \end{theorem} \begin{proof} By the identity \eqref{Ewv} and the orthogonality \eqref{orth2}, we have, for $i=1,\ldots, N$, $$ E(v_{i-1}) - E(v_i) = \frac{1}{2}\|v_{i-1}-v_i\|^2_A = \frac{1}{2}\|e_i\|^2_A, $$ and consequently $$ E(u^k) - E(u^{k+1}) = \sum _{i=1}^N\left [ E(v_{i-1}) - E(v_i)\right ] = \frac{1}{2} \sum _{i=1}^N \|e_i\|^2_A. $$ \end{proof} Proving the upper bound is more delicate. We first present a lemma which can be verified directly by definition and Lemma \ref{lm:energyandnorm}. \begin{lemma}\label{lm:E'u} \begin{equation}\label{eq:E'u} (E'(u^{k+1}) - E'(u), u^{k+1} - u) = \|u^{k+1}-u\|_A^2 = 2\left [E(u^{k+1}) - E(u)\right ]. \end{equation} \end{lemma} We then give a multilevel decomposition of the left-hand side of \eqref{eq:E'u}. \begin{lemma}\label{lm:doublesum} For any decomposition $u^{k+1}-u = \sum _{i=1}^N w_i, w_i\in \mathcal K_i, i=1, 2, \ldots, N$, $$ (E'(u^{k+1}) - E'(u), u^{k+1} - u) = \sum _{i=1}^N\sum _{j>i}^N (e_j, w_i )_A. $$ \end{lemma} \begin{proof} \begin{align*} &( E'(u^{k+1}) - E'(u), u^{k+1} - u ) = ( E'(u^{k+1}) , u^{k+1} - u ) \\ &= \sum _{i=1}^N( E'(u^{k+1}) - E'(v_i), w_i ) = \sum _{i=1}^N\sum _{j>i}^N ( E'(v_j) - E'(v_{j-1}), w_i ) = \sum _{i=1}^N\sum _{j>i}^N ( e_j, w_i )_A. \end{align*} In the first step, we use the fact $E'(u) = 0$ in $\mathcal K'$ since $u$ is the minimizer and $u^{k+1}-u \in \mathcal K$. In the second step, we use $E'(v_i) = 0$ in $\mathcal K_i'$ since $v_i$ is the minimizer in $\mathcal K_i$ and $w_i\in \mathcal K_i$; see also~\eqref{orth}. \end{proof} \begin{lemma}\label{lm:upper} Assume that the space decomposition satisfies assumptions (SD) and (SCS). Then we have the upper bound $$ E(u^{k+1}) - E(u) \leq \frac{1}{2}C_SC_A\sum _{i=1}^N\|e_i\|^2_A. $$ \end{lemma} \begin{proof} We shall choose a stable decomposition for $u^{k+1} - u = \sum _{i=1}^N w_i, w_i\in \mathcal K_i, i=1, 2, \ldots, N$. 
By Lemma \ref{lm:doublesum} and (SCS), we have \begin{align*} ( E'(u^{k+1}) - E'(u), u^{k+1} - u ) &= \sum _{i=1}^N\sum _{j>i}^N ( e_j, w_i )_A \\ &\leq C_S^{1/2}\left (\sum_{j=1}^N \|e_j\|^2_A\right )^{1/2}\left (\sum_{i=1}^N \|w_i\|^2_A\right )^{1/2}\\ &\leq (C_SC_A)^{1/2}\left (\sum_{i=1}^N \|e_i\|^2_A\right )^{1/2}\|u^{k+1} - u\|_A. \end{align*} Substituting the identity (see Lemma \ref{lm:E'u}) $$ \|u^{k+1} - u\|_A^2 = ( E'(u^{k+1}) - E'(u), u^{k+1} - u ) $$ into the above inequality and canceling one $\|u^{k+1} - u\|_A$, we arrive at $$ \|u^{k+1} - u\|_A^2 \leq C_SC_A\sum_{i=1}^N\|e_i\|_A^2. $$ Using the identity $E(u^{k+1}) - E(u) = \|u^{k+1} - u\|_A^2/2$, we obtain the desired result. \end{proof} We summarize our convergence result in the following theorem. \begin{theorem} Assume that the space decomposition $\mathcal K = \sum_{i=1}^N \mathcal K_i$ satisfies assumptions (SD) and (SCS). Then $$ E(u^{k+1}) - E(u)\leq \left (1-\frac{1}{1+C_AC_S}\right) \left [E(u^k) - E(u)\right ]. $$ \end{theorem} \begin{remark}\rm The estimate is consistent with the one obtained by the XZ identity, which indicates that our energy estimate is sharp. \end{remark} \section{Convergence Analysis with Inexact Local Solvers} In the algorithm SSO, we assume that the local problem is solved exactly, which may be costly when the dimension of the local space is large. In this section, we consider inexact solvers using one gradient iteration and establish the corresponding convergence proof. Note that the XZ identity cannot be applied to the nonlinear solvers considered here. Recall that the local constrained minimization problem is: let $r_i = Q_i(f - Av_{i-1})$, find $e_i^*\in \mathcal K_i$ such that \begin{equation}\label{localstokes} \begin{pmatrix} A_i & B_i^T\\ B_i & O \end{pmatrix} \begin{pmatrix} e_i^*\\ p_i^* \end{pmatrix} = \begin{pmatrix} r_i\\ 0 \end{pmatrix}. \end{equation} Here we use $e_i^*, p_i^*$ to denote the solution obtained by the exact solver. 
In the inexact solver proposed below, the constraint is still satisfied but the operator $A_i$ is replaced by a simpler one $D_i$, e.g., the diagonal of $A_i$. In general, let $D_i$ be an SPD operator on $\mathcal V_i$. We first solve the local problem \begin{equation}\label{localapproximatedstokes} \begin{pmatrix} D_i & B_i^T\\ B_i & O \end{pmatrix} \begin{pmatrix} s_i\\ p_i \end{pmatrix} = \begin{pmatrix} r_i\\ 0 \end{pmatrix}. \end{equation} Then we apply the line search along the direction $s_i$ to find an optimal scaling: \begin{equation}\label{linesearch} \min _{\alpha \in \mathbb R}E(v_{i-1} + \alpha s_i), \end{equation} whose solution is \begin{equation}\label{alpha} \alpha = \frac{(r_i,s_i)}{(As_i, s_i)}. \end{equation} We update $$v_i = v_{i-1} + \alpha s_i.$$ This is one step of a preconditioned gradient method and $D_i$ is a preconditioner of $A_i$. In this section, we will always denote by $e_i^*$ the solution of~\eqref{localstokes} and $e_i = \alpha s_i$ with $s_i$ being the solution of~\eqref{localapproximatedstokes} and $\alpha$ given by~\eqref{alpha}. With this choice of $\alpha$, we still have the first order condition \begin{equation}\label{localorth} (E'(v_i), e_i) = 0. \end{equation} \begin{remark}\rm In the original Vanka smoother for Navier-Stokes equations, $D_i = \omega \,{\rm diag} (A_i)$ with a suitable parameter $\omega \in (0.5,0.8)$~\cite{Vanka1986} and no line search is applied, i.e., $\alpha = 1$. $\Box$\end{remark} Using the first order condition~\eqref{localorth}, we still have the following identity. \begin{lemma} $$ E(u^k) - E(u^{k+1}) = \frac{1}{2} \sum _{i=1}^N \|e_i\|^2_A. $$ \end{lemma} Again, the upper bound is more delicate. We first adapt the analysis in~\cite{Braess1999} to establish the following inequalities. Recall that for an SPD operator $M$, $\kappa (M) = \lambda_{\max}(M)/\lambda_{\min}(M)$ is the condition number of $M$. 
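A minimal numpy sketch of this inexact relaxation, in the same hypothetical toy setting as before (random SPD $A$ and the single constraint $\mathbf 1^{T}v=0$, not a discretized PDE): each patch solves the simplified local saddle point problem with $D_i = {\rm diag}(A_i)$ and then scales the direction by the line-search parameter $\alpha = (r_i,s_i)/(As_i,s_i)$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                  # random SPD (placeholder)
f = rng.standard_normal(n)
B = np.ones((1, n))                          # toy constraint: sum(v) = 0

def energy(v):
    return 0.5 * v @ A @ v - f @ v

def inexact_sweep(v):
    """Vanka-type sweep with inexact local solves: A_i is replaced by its
    diagonal D_i, then the direction s_i is scaled by the optimal
    alpha = (r_i, s_i) / (A s_i, s_i)."""
    for i in range(n - 1):
        idx = [i, i + 1]
        Di = np.diag(np.diag(A)[idx])        # D_i = diag(A_i)
        Bi = B[:, idx]
        r = f - A @ v
        lhs = np.block([[Di, Bi.T], [Bi, np.zeros((1, 1))]])
        s = np.zeros(n)
        s[idx] = np.linalg.solve(lhs, np.concatenate([r[idx], [0.0]]))[:2]
        sAs = s @ A @ s
        if sAs > 1e-30:                      # skip a vanishing direction
            v += (r @ s) / sAs * s           # exact line search
    return v

# reference minimizer over K = ker(B) via the global saddle system
KKT = np.block([[A, B.T], [B, np.zeros((1, 1))]])
u = np.linalg.solve(KKT, np.concatenate([f, [0.0]]))[:n]

v = np.zeros(n)
gaps = []
for _ in range(1000):
    v = inexact_sweep(v)
    gaps.append(energy(v) - energy(u))
print(gaps[-1] / gaps[0])
```

Since each patch here is only $2\times 2$, $\kappa(D_i^{-1}A_i)$ is close to one and the observed contraction is close to the exact-solver case; the theory of this section quantifies the loss for general $D_i$.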
\begin{lemma}For the inexact local solver described above, we have \begin{equation}\label{localepsilon} \|e_i^* - e_i\|_A \leq \epsilon \|e_i^*\|_A, \quad\text{ with } \epsilon = \frac{\kappa (D^{-1}_iA_i) - 1}{\kappa (D^{-1}_iA_i) + 1} \in [0,1). \end{equation} Consequently, by the triangle inequality, \begin{equation}\label{localepsilon2} \|e_i^* - e_i\|_A \leq \frac{\epsilon}{1-\epsilon} \|e_i\|_A = \frac{1}{2}(\kappa(D_i^{-1}A_i)-1)\|e_i\|_A. \end{equation} \end{lemma} \begin{proof} To simplify the notation, we suppress the subscript $i$ in the proof. Let $\tilde e = \omega s$, where $s$ is determined by \eqref{localapproximatedstokes} and $\omega \in \mathbb R$ is a parameter. Then, following~\cite{Braess.D;Sarazin.R1997,Braess1999}, we have the error equation \begin{equation} e^* - \tilde e = P_D(I-\omega D^{-1}A)e^* = (I - \omega D^{-1}A_\mathcal K)e^*, \end{equation} where $P_D = I - D^{-1}B^T(BD^{-1}B^T)^{-1}B$ is the projection to $\mathcal K$ in the $(\cdot, \cdot)_D:=(D\cdot,\cdot)$ inner product, and $A_\mathcal K = DP_DD^{-1}AP_D$. Note that $P_D$ is symmetric in $(\cdot, \cdot)_D$. We can then verify that $A_\mathcal K$ is symmetric and positive semi-definite and $(\cdot, \cdot)_{A_\mathcal K} = (\cdot, \cdot)_A$ restricted to $\mathcal K$. Since the operator $D^{-1}A_\mathcal K$ is symmetric w.r.t. $(\cdot, \cdot)_{A_\mathcal K}$ and $e^*, \tilde e\in \mathcal K$, we have $$ \|e^* - \tilde e\|_A = \|e^* - \tilde e\|_{A_\mathcal K}\leq \| I - \omega D^{-1}A_{\mathcal K} \|_{A_\mathcal K} \|e^*\|_{A}. $$ Since $s_i\in \mathcal K_i$ and $e_i^*$ is the exact local minimizer, identity \eqref{Ewv} together with the orthogonality of $E'$ at the local minimizer shows that the objective in the line search~\eqref{linesearch} differs from $\frac{1}{2}\|e_i^* - \alpha s_i\|_A^2$ only by an additive constant; hence \eqref{linesearch} is equivalent to $\min _{\alpha\in \mathbb R}\|e_i^* - \alpha s_i\|_A$. Therefore $$ \|e^* - e\|_A\leq \|e^* - \tilde e\|_A\leq \| I - \omega D^{-1}A_{\mathcal K} \|_{A_\mathcal K} \|e^*\|_{A}. 
$$ Consequently $$ \|e^* - e\|_A\leq \inf_{\omega \in \mathbb R}\| I - \omega D^{-1}A_{\mathcal K} \|_{A_{\mathcal K}} \|e^*\|_{A} = \frac{\kappa (D^{-1}A_{\mathcal K}) - 1}{\kappa (D^{-1}A_{\mathcal K}) + 1}\|e^*\|_{A}. $$ The condition number $\kappa (D^{-1}A_\mathcal K)$, which is not easy to estimate since $A_{\mathcal K}$ is not formed explicitly, can be bounded by $$ \kappa (D^{-1}A_\mathcal K) = \kappa (P_DD^{-1}AP_D)\leq \kappa(D^{-1}A). $$ \end{proof} \begin{lemma} Assume that the space decomposition satisfies assumptions (SD) and (SCS). For the SSO method with the inexact local solver described in this section, we have $$ E(u^{k+1}) - E(u) \leq \frac{1}{2}C_A\left [C_S^{1/2} + \frac{1}{2}\left (\max_{1\leq i\leq N}\kappa(D_i^{-1}A_i)-1\right ) \right ]^2 \sum _{i=1}^N\|e_i\|^2_A. $$ \end{lemma} \begin{proof} As before, we choose a stable decomposition for $u^{k+1} - u = \sum _{i=1}^N w_i, w_i\in \mathcal K_i, i=1, 2, \ldots, N$ and split as \begin{align*} ( E'(u^{k+1}) - E'(u), u^{k+1} - u ) & = ( E'(u^{k+1}) , u^{k+1} - u ) \\ &= \sum _{i=1}^N\left [( E'(u^{k+1}) - E'(v_i), w_i ) + (E'(v_i) - E'(v_i^*), w_i)\right ] \\ & = \sum _{i=1}^N\left [\sum _{j>i}^N ( E'(v_j) - E'(v_{j-1}), w_i ) + (v_i - v_i^*, w_i)_A\right ]. \end{align*} Here $v_i^* = v_{i-1} + e_i^*$ denotes the result of the exact local solve, so that $(E'(v_i^*), w_i) = 0$ for $w_i\in \mathcal K_i$. The first term can be bounded as before, cf. Lemma~\ref{lm:upper}: $$ \sum _{i=1}^N\sum _{j>i}^N ( E'(v_j) - E'(v_{j-1}), w_i ) \leq (C_SC_A)^{1/2}\left (\sum_{i=1}^N \|e_i\|^2_A\right )^{1/2}\|u^{k+1} - u\|_A. $$ For the second term, using \eqref{localepsilon2}, we have \begin{align*} \sum_{i=1}^N (v_i - v_i^*, w_i)_A &\leq \frac{\epsilon}{1-\epsilon}\sum_{i=1}^N\|e_i\|_A\|w_i\|_A \\ &\leq \frac{\epsilon}{1-\epsilon} \left (\sum_{i=1}^N\|e_i\|_A^2\right )^{1/2}\left (\sum_{i=1}^N\|w_i\|_A^2\right )^{1/2}\\ &\leq \frac{\epsilon}{1-\epsilon}C_A^{1/2} \left (\sum_{i=1}^N\|e_i\|_A^2\right )^{1/2}\|u-u^{k+1}\|_A. \end{align*} Combining these two estimates, we then get the desired result. 
\end{proof} \begin{theorem} Assume that the space decomposition satisfies assumptions (SD) and (SCS). For the SSO method with the inexact local solver described in this section, we have $$ E(u^{k+1}) - E(u) \leq \rho \left [ E(u^{k}) - E(u) \right ], $$ with contraction rate $$ \rho = 1-\frac{1}{1+C_A\left [C_S^{1/2} + (\max_{1\leq i\leq N}\kappa(D_i^{-1}A_i)-1)/2\right ]^2}. $$ \end{theorem} We end this section with several remarks. \begin{remark}\rm To be an efficient local solver, $D_i$ is usually a diagonal matrix, which may not be a good preconditioner for elliptic operators. The rate will deteriorate as $\epsilon$ becomes close to one, i.e., $\kappa(D^{-1}_iA_i)\gg 1$. On the other hand, for elliptic operators in $\mathbb R^n$, and for $D_i = {\rm diag}(A_i)$, we have the estimate $\kappa(D^{-1}_iA_i) \lesssim \dim (\mathcal V_i)^{2/n}$~\cite{Bank.R;Scott.L1989}. We can thus apply the estimate to a decomposition such that each local problem is of size $\mathcal O(1)$. $\Box$\end{remark} \begin{remark}\rm The solver considered here is one step of the preconditioned gradient method. The same analysis is applicable to a more efficient Preconditioned Conjugate-Gradient (PCG) solver with more than one iteration. The first order condition~\eqref{localorth} still holds for the PCG iterations. $\Box$\end{remark} \begin{remark}\rm The local gradient method is a nonlinear iterative method since the parameter $\alpha$ depends on the iteration. To prove the energy contraction for the linear constraint smoother, i.e., with a fixed parameter $\alpha$, we need to estimate the spectrum of the operator $P_DD^{-1}AP_D$, which is not easy since the projection $P_D$ is in an $L^2$-type inner product, not the $A$-inner product. Technically, the first order condition~\eqref{localorth} may not hold for a fixed parameter $\alpha$. 
$\Box$\end{remark} \section{Application to Mixed methods for Poisson and Darcy Equation}\label{sec:mixPoisson} In this section, we consider mixed finite element methods for solving the Poisson equation and the Darcy equation in two and three dimensions. Let $\Omega$ be a polygonal or polyhedral domain triangulated into a quasi-uniform mesh $\mathcal T_h$ with mesh size $h$. Assume that $\mathcal T_h$ is obtained by uniform refinements from an initial mesh $\mathcal T_1$ of $\Omega$, i.e., there exists a sequence of meshes $\mathcal T_1, \mathcal T_2, \ldots, \mathcal T_J = \mathcal T_h$. The triangulation $\mathcal T_1$ is a shape-regular triangulation of $\Omega$ and $\mathcal T_{k+1}$ is obtained by dividing each element in $\mathcal T_{k}$ into four congruent small elements (two dimensions) or eight small elements (three dimensions). The mesh size of $\mathcal T_k$ will be denoted by $h_k$. By construction, $h_k/h_{k+1} = 2$. \subsection{Problem setting} We consider the Poisson equation with Neumann boundary condition $$ -\Delta p = f \text{ in } \Omega, \quad \partial_n p = 0 \text{ on } \partial \Omega $$ where $n$ is the outward normal vector of $\partial \Omega$. Let $u = \nabla p$. We obtain the mixed formulation of the Poisson equation: find $u\in H_0(\div;\Omega):=\{v\in (L^2(\Omega))^d, \div v \in L^2(\Omega), v\cdot n |_{\partial \Omega}= 0\}$, where $v\cdot n$ should be understood in the trace sense, and $p\in L^2_0(\Omega) := \{q\in L^2(\Omega), \int_{\Omega}q \,{\rm d}x = 0\}$ such that \begin{align*} (u, v) - (\div v, p) &= 0, \quad \forall v\in H_0(\div;\Omega), \\ -(\div u, q) &= (f, q), \quad \forall q\in L^2_0(\Omega).
\end{align*} Choose finite element spaces $\mathcal V \subset H_0(\div;\Omega)$ and $\mathcal P \subset L^2_0(\Omega)$ so that the following sequence is exact \begin{equation}\label{2Dexact} \mathcal S \stackrel{{\rm curl\,}}{\longrightarrow} \mathcal V \stackrel{\div}{\longrightarrow} \mathcal P \to 0, \end{equation} where $\mathcal S$ is another appropriate finite element space. Choices of $\mathcal S, \mathcal V$, and $\mathcal P$ will be made clear in the context. Subscript $k$ will be used when spaces are associated with the triangulation $\mathcal T_k$; when $k=J$, the subscript will be suppressed. The saddle point problem can be written as follows: Given $f\in \mathcal P$, find $w\in \mathcal V, p\in \mathcal P$ such that \begin{equation}\label{mixedpoisson0} \begin{pmatrix} M & B^T\\ B & O \end{pmatrix} \begin{pmatrix} w\\ p \end{pmatrix} = \begin{pmatrix} 0\\ f \end{pmatrix}, \end{equation} where $M$ is the mass matrix of $\mathcal V $ and $B$ is the discretization of the $-\div$ operator. For this problem, $A=M$ and the $A$-norm is just the standard $L^2$-norm. Our method and analysis can be readily adapted to the second order elliptic equation with variable coefficient $K$, i.e., the Darcy equation, for which the constitutive equation becomes $(K^{-1} u, v) - (\div v, p) = 0$. The $A$-norm is a weighted $L^2$-norm and the exact sequence \eqref{2Dexact} still holds. The constant $C_A$, however, could depend on the condition number of $K$; see Remark \ref{rm:weightednorm}. To apply our framework, we should first find a $u_*\in \mathcal V$ satisfying $Bu_* = f$. Setting $w= u_* + u$, the system \eqref{mixedpoisson0} can be changed to the form of \eqref{ABBO}: \begin{equation}\label{mixedpoisson} \begin{pmatrix} M & B^T\\ B & O \end{pmatrix} \begin{pmatrix} u\\ p \end{pmatrix} = \begin{pmatrix} -Mu_*\\ 0 \end{pmatrix}. \end{equation} As discussed in Section \ref{sec:algorithm}, we can find such a $u_*$ by solving $BB^T\phi = f$ and setting $u_* = B^T\phi$.
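As an illustration, a compatible flux with $Bu_* = f$ can be computed via the normal equations $BB^T\phi = f$, $u_* = B^T\phi$. The following is a toy dense sketch (assuming $B$ has full row rank); it is not how one would proceed on fine meshes, where forming $BB^T$ is avoided.

```python
import numpy as np

def compatible_flux(B, f):
    """Return u_* with B @ u_* = f, assuming B has full row rank.

    Solves the normal equations B B^T phi = f and sets u_* = B^T phi,
    i.e., the minimum-norm solution of the underdetermined system.
    Illustrative dense version only.
    """
    phi = np.linalg.solve(B @ B.T, f)
    return B.T @ phi

# Toy constraint matrix: 2 constraints, 4 flux unknowns.
B = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, 1.0]])
f = np.array([1.0, 2.0])
u_star = compatible_flux(B, f)
print(np.allclose(B @ u_star, f))
```

The hierarchical construction discussed next replaces this global solve by local element-wise solves and is far cheaper in practice.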
We now discuss a more efficient way utilizing the hierarchical structure of meshes. We start from a solution $u_*^1$ of \eqref{mixedpoisson0} on the coarsest mesh $\mathcal T_1$, which can be found by direct solvers. For $k=1,\ldots, J-1$, once $u_*^k$ on $\mathcal T_k$ satisfying $Bu_*^k = f$ element-wise on $\mathcal T_k$ is found, for each element $T\in \mathcal T_{k}$, we solve \eqref{mixedpoisson0} in $\mathcal V_{k+1}$ restricted to $T$ and with boundary condition $u\cdot n|_{\partial T} = u_*^k\cdot n|_{\partial T}$. That is, we use $\mathcal T_k$ to get a domain decomposition of $\mathcal T_{k+1}$ and $u_*^k$ as the boundary condition to decompose a global problem into local problems on elements. The local problem is well defined since the compatibility condition is enforced by $Bu_*^k = f$ on $T$, and the solution of the local problems gives $u_*^{k+1}$ with the property $Bu_*^{k+1} = f$ for each element in $\mathcal T_{k+1}$. The whole procedure is just one V-cycle with post-smoothing only and using a non-overlapping Schwarz method as a smoother. The computational cost is thus negligible. Thanks to the exact sequence \eqref{2Dexact}, we have a clear characterization of $\ker (B) = {\rm curl\,} (\mathcal S)$, which is helpful for constructing a stable multilevel decomposition of $\mathcal K$. Based on the hierarchy of the meshes, we have a macro-decomposition of $\mathcal S = \sum _{k=1}^J \mathcal S_k$. For each space $\mathcal S_k$, we decompose into one-dimensional subspaces $\Phi_{k,j}$ spanned by one basis function, i.e., $\mathcal S_k = \sum_{j=1}^{N_k}\Phi_{k,j}$ with $N_k = \dim \mathcal S_k$. Let $\Omega_{k,i}$ be the support of $\Phi_{k,i}$. We choose $\mathcal V_{k,i} = H_0(\div;\Omega_{k,i})\cap \, \mathcal V_k$ for $i = 1,\ldots, N_k$.
Then we have the decomposition \begin{equation}\label{RTdec} \mathcal V = \sum_{k=1}^J\sum_{i=1}^{N_k}\mathcal V_{k,i}, \end{equation} and $$ \mathcal K = \sum_{k=1}^J\sum_{i=1}^{N_k}\mathcal K_{k,i} \text{ with } \mathcal K_{k,i} = {\rm curl\,} \Phi_{k,i}. $$ We shall apply SSO based on the space decomposition $\mathcal K = \sum_{k=1}^J\sum_{i=1}^{N_k}\mathcal K_{k,i}$ and prove its uniform convergence. As an example, for the lowest order RT element \cite{Raviart.P;Thomas.J1977}, the smoother is an overlapping multiplicative Schwarz smoother requiring solving a small saddle point system in the patch of each vertex in two dimensions and in the patch of each edge in three dimensions. Since the construction of a stable decomposition in two and three dimensions is different, we split the discussion into two subsections. \begin{remark}\rm One can use a basis of $\mathcal S$ to reduce the saddle point system to an SPD one and develop multigrid methods or domain decomposition methods for the SPD formulation; see e.g.~\cite{Ewing.R;Wang.J1992a,Hiptmair1999,Cai2003}. $\Box$\end{remark} \subsection{Two dimensions} In two dimensions, the space $\mathcal S\subset H_0^1(\Omega)$ is a Lagrange element space based on the mesh $\mathcal T_h$. To be specific, we will consider the important case when $\mathcal S$ is the simplest linear finite element space, $\mathcal V$ is the lowest order Raviart-Thomas element space~\cite{Raviart.P;Thomas.J1977}, and $\mathcal P$ is the piecewise constant space. Extension to high order elements is straightforward. The subspace $\mathcal V_{k,i}$ is spanned by the basis functions of edges connected to the $i$th vertex in the triangulation $\mathcal T_k$. We first verify the stable decomposition for the macro-decomposition $\mathcal K = \sum_{k=1}^J\mathcal K_{k}$. We denote by $Q_k$ the $L^2$ projection $Q_k: \mathcal S_J \to \mathcal S_k$ for $k=1,\ldots, J$ and set $Q_0 = 0$. Note that, due to nestedness, $Q_l Q_k = Q_l$ for $l\leq k$.
\begin{lemma}\label{lm:poissonmacro} For every $v\in \mathcal K$, there exists $v_k\in \mathcal K_k, k=1,\ldots,J$ such that $v = \sum_{k=1}^J v_k$ and $\sum_{k=1}^J\|v_k\|^2\lesssim \|v\|^2$. \end{lemma} \begin{proof} In two dimensions, we have the relation $({\rm curl\,} \phi, {\rm curl\,} \psi) = (\nabla \phi, \nabla \psi)$. Therefore the stable decomposition (SD) comes from that for the Lagrange elements. More specifically, since $u\in \mathcal K$, there exists a unique $\phi \in \mathcal S$ such that $u = {\rm curl\,} \phi$. We then choose the $H^1$-stable decomposition of $\phi$ as $\phi = \sum_{k=1}^J(Q_k - Q_{k-1})\phi$ and let $u_k = {\rm curl\,} (Q_k - Q_{k-1})\phi$. The stability of the decomposition $u = \sum_{k}u_k$ in the $M$-norm is equivalent to that of $\phi = \sum_{k=1}^J(Q_k - Q_{k-1})\phi$ in the $H^1$-norm, which is well known; see e.g.~\cite{Xu1992}. \end{proof} \begin{remark}\label{rm:weightednorm}\rm For the Darcy equation with variable coefficient $K$, the $A$-norm of $v$ is changed to a weighted $H^1$ norm of $\phi$ for $v = {\rm curl\,} \phi$. Assuming $K$ is piecewise constant on the coarsest mesh, we can find a multilevel decomposition using the hierarchical basis such that the inequality $\sum_{k=1}^J\|v_k\|^2\leq C|\log h|\|v\|^2$ holds with a penalty factor $|\log h|$ but with a constant $C$ independent of the variation of $K$; see \cite{Bank1988}. \qed \end{remark} We next verify that the micro-decomposition is stable. \begin{lemma}\label{lm:poissonmicro} Let $\phi_k = (Q_k - Q_{k-1})\phi = \sum_{i=1}^{N_k}\phi_{k,i}$ be the nodal basis decomposition and let $u_{k,i} = {\rm curl\,} \phi_{k,i}$. Then the decomposition $u_k = \sum_{i=1}^{N_k} u_{k,i}$ is stable in the $L^2$-norm.
\end{lemma} \begin{proof} We apply the inverse inequality and the stability of the nodal basis decomposition in the $L^2$-norm to get $$ \sum_{i=1}^{N_k} \|u_{k,i}\|^2 = \sum_{i=1}^{N_k} \| {\rm curl\,} \phi_{k,i}\|^2\lesssim h_k^{-2}\sum_{i=1}^{N_k} \| \phi_{k,i}\|^2\lesssim h_k^{-2}\| \phi_{k}\|^2. $$ We write the term $\phi_k = (Q_k - Q_{k-1})\phi = (I- Q_{k-1}) (Q_k - Q_{k-1})\phi$ and bound it as $$ \|\phi_k\| \lesssim h_k\| {\rm curl\,} (Q_k - Q_{k-1})\phi\| = h_k \|u_k\|. $$ The desired inequality then follows. \end{proof} The combination of Lemmas \ref{lm:poissonmacro} and \ref{lm:poissonmicro} leads to a stable multilevel decomposition. \begin{theorem}\label{th:SDpoisson} For every $v\in \mathcal K$, there exists $v_{k,i}\in \mathcal K_{k,i}, k=1,\ldots,J, i=1,\ldots, N_k$ such that $v = \sum_{k=1}^J\sum_{i=1}^{N_k} v_{k,i}$ and $\sum_{k=1}^J\sum_{i=1}^{N_k}\|v_{k,i}\|^2\lesssim \|v\|^2$. \end{theorem} To verify assumption (SCS), we first present the following inequality and refer to \cite{Xu1992} for a proof. \begin{lemma}\label{lm:SCS} For any $\phi_k\in \mathcal S_k, \phi_l\in \mathcal S_l, l\geq k$, we have $$ ({\rm curl\,} \phi_k, {\rm curl\,} \phi_l) \lesssim \left (\frac{1}{2}\right )^{l-k}\|{\rm curl\,} \phi_k\| h_{l}^{-1}\|\phi_l\|. $$ \end{lemma} We use the lexicographical order of the double index, i.e., $(l,j)>(k,i)$ if $l>k$ or $l=k,j>i$. \begin{theorem}For any $u_{k,i}\in \mathcal K_{k,i}$ and $v_{l,j}\in \mathcal K_{l,j}$, we have $$ \sum_{k=1}^J\sum_{i=1}^{N_k} \sum_{(l,j)>(k,i)}(u_{k,i}, v_{l,j}) \lesssim \left (\sum_{k=1}^J\sum_{i=1}^{N_k}\|u_{k,i}\|^2 \right )^{1/2} \left (\sum_{l=1}^J\sum_{j=1}^{N_l}\|v_{l,j}\|^2\right )^{1/2}. $$ \end{theorem} \begin{proof} We can write $u_{k,i} = {\rm curl\,} \phi_{k,i}$ and $v_{l,j} = {\rm curl\,} \psi_{l,j}$ for some $\phi_{k,i}\in \mathcal S_k, \psi_{l,j}\in \mathcal S_l$. We split the summation $\sum_{(l,j)>(k,i)}$ into two parts, $\sum_{l>k}\sum_{j=1}^{N_l}$ and $\sum_{l=k}\sum_{j>i}^{N_k}$.
For the first part, we apply Lemma \ref{lm:SCS} and note that $h_{l}^{-1}\|\psi_{l,j}\|\eqsim \|{\rm curl\,} \psi_{l,j}\|$ to get \begin{align*} \sum_{k=1}^J\sum_{i=1}^{N_k} \sum_{l>k}\sum_{j=1}^{N_l}(u_{k,i}, v_{l,j}) &= \sum_{k=1}^J\sum_{i=1}^{N_k} \sum_{l>k}\sum_{j=1}^{N_l}({\rm curl\,} \phi_{k,i}, {\rm curl\,} \psi_{l,j})\\ & \leq \sum_{k=1}^J\sum_{i=1}^{N_k} \sum_{l>k}\sum_{j=1}^{N_l} \left (\frac{1}{2}\right )^{l-k}\|{\rm curl\,} \phi_{k,i}\|\|{\rm curl\,} \psi_{l,j}\|\\ &\lesssim \left (\sum_{k=1}^J\sum_{i=1}^{N_k}\|u_{k,i}\|^2 \right )^{1/2} \left (\sum_{l=1}^J\sum_{j=1}^{N_l}\|v_{l,j}\|^2\right )^{1/2}. \end{align*} For the second part, we use the finite overlapping property of finite element spaces. Namely, in the $k$th level, the index set $n_k(i) = \{j\in \{1, \ldots, N_k\}, \Omega_{k,i}\cap \Omega_{k,j} \neq \emptyset \}$ has uniformly bounded cardinality. Then \begin{align*} \sum_{k=1}^J\sum_{i=1}^{N_k} \sum_{j>i}^{N_k}(u_{k,i}, v_{k,j}) &=\sum_{k=1}^J\sum_{i=1}^{N_k} \sum_{j\in n_k(i)}(u_{k,i}, v_{k,j})\\ &\lesssim \left (\sum_{k=1}^J\sum_{i=1}^{N_k}\|u_{k,i}\|^2 \right )^{1/2} \left (\sum_{l=1}^J\sum_{j=1}^{N_l}\|v_{l,j}\|^2\right )^{1/2}. \end{align*} \end{proof} \subsection{Three dimensions} We consider the same problem in three dimensions, which is much more difficult than the two-dimensional case. The reason is that the space $\mathcal S$ is now an edge element space and a stable multilevel decomposition for $\mathcal S$ is non-trivial. We again consider the lowest order case. Now $\mathcal S$ is the lowest order N\'ed\'elec edge element space~\cite{Nedelec.J1980,Nedelec.J1986} of $H_0({\rm curl\,},\Omega) :=\{v\in (L^2(\Omega))^3, {\rm curl\,} v \in (L^2(\Omega))^3, v\times n |_{\partial \Omega}= 0\}$, $\mathcal V$ is the lowest order Raviart-Thomas element space of $H_0(\div,\Omega)$, and $\mathcal P\subset L^2_0(\Omega)$ is the piecewise constant space. Furthermore let $\mathcal U\subset H_0^1(\Omega)$ be the linear finite element space.
We have the following exact sequence~\cite{Hiptmair.R2002,Arnold2006} $$ 0 \hookrightarrow \mathcal U \stackrel{{\rm grad\,}}{\longrightarrow}\mathcal S \stackrel{{\rm curl\,}}{\longrightarrow} \mathcal V \stackrel{\div}{\longrightarrow} \mathcal P \to 0. $$ To verify (SD), we need the following discrete regular decomposition for edge elements~\cite{Hiptmair.R;Xu.J2007}. In the sequel, the operator $\Pi^{{\rm curl\,}}_h$ is the canonical interpolation to $\mathcal S$: for a smooth enough function $w$, $\Pi^{{\rm curl\,}}_h w \in \mathcal S$ satisfies $\int_E \Pi^{{\rm curl\,}}_h w\cdot t \,{\rm d} s = \int_E w\cdot t \,{\rm d} s$ for all edges $E$ of $\mathcal T_h$, where $t$ is a tangential vector of $E$. Similarly $\Pi^{{\rm curl\,}}_k$ is the canonical interpolation to $\mathcal S_k$ on mesh $\mathcal T_k$ for $k=1,\ldots, J$. \begin{lemma}[Discrete Regular Decomposition~\cite{Hiptmair.R;Xu.J2007}]\label{th:disregdec} For every $\phi \in \mathcal S$, there exist $\tilde \phi \in \mathcal S, w \in \mathcal U^3$, and $\psi\in \mathcal S\cap \ker({\rm curl\,})$ such that \begin{gather} \label{dec} \phi = \tilde \phi + \Pi ^{{\rm curl\,}}_hw + \psi,\; \text{ and }\\ \label{stable} \|h^{-1}\tilde \phi\| + \|w\|_1\lesssim \|{\rm curl\,} \phi\|. \end{gather} \end{lemma} In the decomposition \eqref{dec}, $\psi \in \ker({\rm curl\,})$ and there is no need to control the norm of $\psi$. The component $\tilde \phi$ is of high frequency, while for the component $w\in \mathcal U^3$ a stable multilevel decomposition of the linear finite element space can be applied. The following decomposition can be found in~\cite{Xu.J;Chen.L;Nochetto.R2009}.
\begin{lemma}\label{th:curldec} For every $\phi \in \mathcal S$, there exist $\tilde \phi\in \mathcal S$, $w_k\in \mathcal U_k^3$, and $\psi \in \mathcal S\cap \ker({\rm curl\,})$ such that \begin{gather} \label{muldeccurl} \phi = {\tilde \phi} + \sum _{k=1}^J \Pi_k^{{\rm curl\,}}w_k + \psi,\quad \text{and }\\ \label{mulstablecurl} \|h^{-1}\tilde \phi\|^2 + \sum _{k=1}^Jh_k^{-1} \|w_k\|^2\lesssim \|{\rm curl\,} \phi\|^2. \end{gather} \end{lemma} \begin{theorem} For every $v\in \mathcal V\ \cap \ \ker(\div)$, there exists a decomposition $v = \sum_{k=1}^J\sum_{i=1}^{N_k} v_{k,i}$ such that \begin{equation}\label{muldec} \sum_{k=1}^J\sum_{i=1}^{N_k} \|v_{k,i}\|^2\lesssim \|v\|^2. \end{equation} \end{theorem} \begin{proof} For $v\in \mathcal V\cap \ \ker(\div)$, there exists $\phi\in \mathcal S$ such that $v = {\rm curl\,} \phi$. We then apply Lemma \ref{th:curldec} to obtain a decomposition of $\phi$ in the form of \eqref{muldeccurl}. We can write the first two terms in \eqref{muldeccurl} into a multilevel basis decomposition, i.e., \begin{equation}\label{3Ddec} {\tilde \phi} + \sum _{k=1}^J \Pi_k^{{\rm curl\,}}w_k = \sum_{k=1}^J\sum_{i=1}^{N_k} \phi_{k,i}. \end{equation} The decomposition of $v$ is obtained by choosing $v_{k,i} = {\rm curl\,} \phi_{k,i}$. The stability \eqref{muldec} follows from the inverse inequality, the stability of the basis decomposition of edge element spaces in the $L^2$-norm, and the stability of the decomposition \eqref{mulstablecurl}: $$ \sum_{k=1}^J\sum_{i=1}^{N_k} \|v_{k,i}\|^2\lesssim \sum_{k=1}^Jh_k^{-2}\sum_{i=1}^{N_k} \|\phi_{k,i}\|^2\lesssim \|h^{-1}\tilde \phi\|^2 + \sum _{k=1}^Jh_k^{-1} \|w_k\|^2\lesssim \|{\rm curl\,} \phi\|^2. $$ \end{proof} Assumption (SCS) can be verified similarly to the two-dimensional case. \subsection{Numerical examples} In this subsection we present numerical examples to support our theory. We perform the numerical experiments using the $i$FEM package \cite{Chen.L2008c}.
We consider four examples for the Darcy equation $$ K^{-1} u + \nabla p = 0, \quad \div u = f \quad \text{ in } \Omega $$ with given flux boundary condition $u\cdot n = g$ on $\partial \Omega$. We choose $\Omega = (0,1)^2$. Since we focus on the performance of solvers, we only specify the tensor $K$ used in these examples. \begin{itemize} \item Example 1. $K$ is the identity $2\times 2$ matrix, i.e., $Id_{2\times 2}$, and the grid is uniform. \item Example 2. The grid is still uniform but the tensor is non-diagonal $$ K = \begin{pmatrix} 1 + 4(x^2+y^2) & 3 xy\\ 3xy & 1 + 11(x^2+y^2) \end{pmatrix}. $$ This is Example 5.2 considered in \cite{Rusten1992}. The spectrum of $K$ is in $[1,25]$ and thus $K$ contains certain anisotropy. \item Example 3. The tensor $K = a(x)Id_{2\times 2}$ with piecewise constant $a(x)$ on the initial $4\times 4$ uniform partition of $\Omega$. The scalar function $a = 10^{-p}$ where $p$ is a random integer such that $0\leq p \leq 5$. \item Example 4. The same tensor as in Example 3, except the initial grid is distorted; see Fig. \ref{fig:mesh} (b). The interior grid points are randomly perturbed by up to $40\%$ of the mesh size $h=1/4$. Examples 3 and 4 are two-dimensional versions of the example used in \cite{Wilson2009}. \end{itemize} \begin{figure}[htbp] \subfigure[The uniform mesh with $h=1/4$]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics*[width=3cm]{squaremesh.pdf} \end{minipage}} \subfigure[A distorted mesh] {\begin{minipage}[t]{0.5\linewidth} \centering \includegraphics*[width=3cm]{distortedmesh.pdf} \end{minipage}} \caption{The initial mesh of Examples 1--3 is the uniform mesh in (a) and the initial mesh of Example 4 is the distorted mesh in (b).}\label{fig:mesh} \end{figure} We discretize the Darcy equation using the lowest order RT element and apply the SSO method with the decomposition \eqref{RTdec}. We implement SSO in a V-cycle formulation and perform only one pre-smoothing and one post-smoothing.
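To fix ideas, the V(1,1)-cycle skeleton can be sketched generically as follows. This is only an illustration: weighted Jacobi stands in for the overlapping Schwarz smoother of the text, the matrices are dense, and all names (`As`, `Ps`, `v_cycle`) are made up for this sketch.

```python
import numpy as np

def v_cycle(As, Ps, b, u, level, nu=1):
    """Generic V-cycle with `nu` pre- and post-smoothing sweeps.

    As[l] is the matrix on level l (0 = coarsest); Ps[l] prolongates
    from level l-1 to level l.  Damped Jacobi is used as a stand-in
    smoother; the coarsest problem is solved directly.
    """
    A = As[level]
    if level == 0:
        return np.linalg.solve(A, b)
    d = np.diag(A)
    for _ in range(nu):                      # pre-smoothing
        u = u + 0.6 * (b - A @ u) / d
    P = Ps[level]
    rc = P.T @ (b - A @ u)                   # restrict the residual
    ec = v_cycle(As, Ps, rc, np.zeros(len(rc)), level - 1, nu)
    u = u + P @ ec                           # prolongate the correction
    for _ in range(nu):                      # post-smoothing
        u = u + 0.6 * (b - A @ u) / d
    return u

# Two-level test: 1-D Laplacian, linear interpolation, Galerkin coarse grid.
n = 7
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = np.zeros((n, 3))
for j in range(3):
    P[2 * j + 1, j] = 1.0
    P[2 * j, j] = 0.5
    P[2 * j + 2, j] = 0.5
As, Ps = [P.T @ A @ P, A], [None, P]
b, u = np.ones(n), np.zeros(n)
for _ in range(50):
    u = v_cycle(As, Ps, b, u, 1)
print(np.allclose(A @ u, b, atol=1e-8))
```

In the actual method, the smoothing sweep is the overlapping multiplicative Schwarz smoother on vertex patches described next, applied within the divergence-free subspace.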
The smoother is an overlapping multiplicative Schwarz smoother requiring solving a small saddle point system in the patch of each vertex. The local problem is solved exactly since its dimension is small and it admits a very efficient direct solver described below. Let $n_i$ denote the number of interior edges connected to the $i$th vertex. If we orient these interior edges with counterclockwise normal directions, then locally the divergence-free basis is represented by the constant vector $(1,\ldots, 1)_{n_i\times 1}^T$. Note that the local mass matrix $M$ is tridiagonal; given a residual vector, the local problem can be solved with $4n_i$ additions and one division (no multiplication is required since the divergence-free basis corresponds to a constant vector). Let $N$ be the number of interior vertices. The total cost of the local solver is thus $4\sum_{i=1}^Nn_i$. On average $n_i \approx 6$ and thus the cost is around $24 N$. On the other hand, the size of the saddle point system is the number of interior edges plus the number of triangles, which is around $5N$, and the number of nonzeros of this matrix is around $21N$. A matrix-vector product thus requires $21N$ multiplications, which is more costly than the $24N$ additions needed by the local solvers. A similar calculation holds for the 3D local problem with a different constant. We conclude that the dominant cost of the smoother will be the evaluation of the residual, and one step of the smoother requires just one matrix-vector product. We stop the iteration when an approximated relative error in the energy norm is less than or equal to $10^{-8}$. Let $r$ be the current residual of the iterate $u$ and $Br$ the correction obtained by one V-cycle. Then we use the error formula $\sqrt{(Br,r)/(u,f)}$, which is better than using the relative residual error since $B\approx A^{-1}$. We report the number of V-cycle iterations required for the four examples.
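The constant-kernel local solve counted above can be sketched in a few lines. This is an illustrative sketch only: it assumes the tridiagonal local mass matrix is stored as two arrays (`main`, `off`), and it omits the assembly of the local residual.

```python
import numpy as np

def local_kernel_solve(main, off, r):
    """Solve the local one-dimensional kernel problem on a vertex patch.

    The local divergence-free space is spanned by the constant vector
    e = (1,...,1), so the Galerkin correction is alpha * e with
    alpha = (e, r) / (e, M e).  For a tridiagonal local mass matrix M
    with diagonal `main` and off-diagonal `off`, (e, M e) is just the
    sum of all entries of M, so the solve costs O(n_i) additions and
    one division, matching the operation count in the text.
    """
    eMe = main.sum() + 2.0 * off.sum()   # e^T M e = sum of entries of M
    alpha = r.sum() / eMe                # (e, r) / (e, M e)
    return alpha * np.ones_like(r)

# Check against the dense computation of alpha.
main = np.array([2.0, 2.0, 2.0, 2.0])
off = np.array([0.5, 0.5, 0.5])
M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
r = np.array([1.0, 0.0, 2.0, 1.0])
c = local_kernel_solve(main, off, r)
e = np.ones(4)
print(np.allclose(c, ((e @ r) / (e @ M @ e)) * e))
```

The sketch makes explicit why no multiplications are needed: only sums of entries of $M$ and of the residual enter the computation.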
We did not include the CPU time since it depends on the implementation and testing environment: the programming language, code optimization, the hardware (memory and cache), etc. The operation count above indicates that our method can be implemented very efficiently. \begin{table}[htdp] \caption{Iteration steps of V-cycle multigrid for the saddle point system with $1$ pre-smoothing and $1$ post-smoothing step. The stopping criterion is that the approximate relative error is less than $10^{-8}$.} \begin{center} \begin{tabular}{cccccc} \hline \hline $h$ & size & Ex 1 & Ex 2 & Ex 3 & Ex 4\\ \hline 1/8 & 336 & 10 & 13 & 7 & 20\\ \hline 1/16 & 1,312 & 11 & 15 & 10 & 19\\ \hline 1/32 & 5,184 & 11 & 16 & 13 & 25\\ \hline 1/64 & 20,608 & 11 & 16 & 13 & 25\\ \hline \hline \end{tabular} \end{center} \label{tab:iteration} \end{table}% Based on the numerical results in Table \ref{tab:iteration}, we conclude that our multigrid method converges uniformly with respect to the mesh size and is fairly robust with respect to the variation of the tensor $K$ and the distortion of the meshes. A popular Uzawa type preconditioned conjugate gradient (PCG) method for solving the Schur complement $BM^{-1}B^T$ equation requires the evaluation of $M^{-1}$ (the so-called inner iteration) for each PCG iteration (the so-called outer iteration) and an effective preconditioner for the Schur complement. As noticed in \cite{Bramble.J;Pasciak.J1988a,Rusten1992}, the inner iteration of computing $M^{-1}$ should be very accurate, and thus the overall inner-outer iteration process is costly. In \cite{Wilson2009}, it is shown that preconditioners for $BM^{-1}B^T$ should be tuned to the variation of the tensor and the distortion of the mesh. Better preconditioned iterative methods have been developed in~\cite{Bramble.J;Pasciak.J1988a,Rusten1992}.
\section{Application to Non-conforming Methods} In this section, we use the equivalence between non-conforming methods and mixed methods to develop a V-cycle multigrid method for non-conforming methods and prove its uniform convergence. The two ingredients of our new multigrid method for non-conforming methods are: the overlapping Schwarz smoothers, and inter-grid transfer operators through the nested flux spaces. Again we consider the Poisson equation with Neumann boundary condition $-\Delta p = f$ in $\Omega$ with $\partial _{n}p|_{\partial \Omega} = 0$. Based on a triangulation $\mathcal T_h$ of $\Omega$, the Crouzeix-Raviart (CR) non-conforming finite element space \cite{CrouzeixRaviart1973} is defined as follows \begin{equation*} \Lambda_h = \{\lambda |_{T}\in \mathcal P_1(T), \forall\, T\in \mathcal T_h, \int _{E}\lambda \,{\rm d} s \text{ is continuous for all sides } E \text{ of } \mathcal T_h\}. \end{equation*} The space $\Lambda_h$ is not a subspace of $H^1(\Omega)$ due to the loss of continuity across the sides of elements. An elementwise gradient operator $\nabla _h$ is defined as $$ (\nabla _h \lambda) |_{T} := \nabla (\lambda |_{T})\quad \forall\, T\in \mathcal T_h, $$ and the bilinear form is defined as $$ (\nabla _h \lambda, \nabla _h \mu ) := \sum _{T\in \mathcal T_h}\int _{T} \nabla_h \lambda \cdot \nabla_h \mu\, \,{\rm d}x \quad \text{ for all } \lambda, \mu \in \Lambda_h. $$ The CR non-conforming finite element discretization is as follows: Given an $f\in L^2(\Omega)$, find $\lambda \in \Lambda_h\cap L_0^2(\Omega)$ such that \begin{equation}\label{cr} (\nabla _h \lambda, \nabla _h \mu ) = (f, \mu), \quad \text{for all } \mu \in \Lambda_h. \end{equation} Let $u\in \mathcal V$ be the mixed finite element approximation of the flux using the lowest order RT element. 
It is well known~\cite{Marini.L1985} that, for every $T\in \mathcal T_h$, \begin{equation}\label{localformula} u |_{T} = -\nabla _h\lambda |_{T} + \frac{1}{d} f_{T}(x-x_{T}), \quad \forall\, x\in T, \end{equation} where $x_{T}$ is the barycenter of the triangle $T$ and $f_T$ is the average of $f$ over $T$. Throughout this section we shall always consider a piecewise constant function $f$. We always denote by $u$ the solution to \eqref{mixedpoisson0} and by $\lambda$ the solution to \eqref{cr}. We note that such equivalence has been used to design multigrid methods for mixed methods with the help of non-conforming methods~\cite{Brenner.S1992,Chen.Z1996}. We exploit this equivalence the other way around. Based on the hierarchy of meshes, we have a sequence of spaces $\Lambda_1, \Lambda_2 ,\ldots ,$ $\Lambda_J = \Lambda_h$. We shall develop a V-cycle multigrid method for solving the equation \eqref{cr} on the finest level. The notorious difficulty is the non-nestedness of hierarchies of non-conforming finite element spaces. Robust inter-grid operators (restriction and prolongation operators) should be designed carefully~\cite{Brenner.S1989,Braess.D;Verfurth.R1990,Oswald.P1997,Chen.Z;Oswald.P1998,Brenner.S1999,Kang.K;Lee.S1999,Koster.M;Ouazzi.A;Schieweck.F;Turek.S2012}. The existing convergence proofs of multigrid methods for non-conforming methods~\cite{Brenner.S1989,Brenner.S1999,Oswald.P1997,Brenner2003a,Oswald.P2008} do not cover V-cycles with few smoothing steps, only multigrid cycles with sufficiently many smoothing steps. We shall design a V-cycle multigrid method for the CR element and prove its convergence even for only one smoothing step. Essentially our method is just a different interpretation of the SSO method applied to the mixed finite element discretization. Therefore during the iteration, we always keep two quantities $( u^k, \lambda^k )$, which are the $k$th iterates of $(u,\lambda)$.
The smoother in the finest level is an overlapping Schwarz smoother with Neumann boundary condition. It consists of solving a local problem $-\Delta \lambda^{k}_i = f$ in $\Omega_i$ with Neumann boundary condition $\partial _{n}\lambda^{k}_i|_{\partial \Omega_i} = u^k_{i-1}\cdot n$, where $\Omega_i$ is the patch of the $i$th vertex. Here we loop over the vertices $i=1,\ldots, N$ of $\mathcal T_h$ and use subscript $i$ to denote the iteration at the $i$th vertex. We set $u^k_0 = u^k$ and $u^{k+1} = u^k_{N}$. Once $\lambda^{k}_i$ is computed, it will be used to update the flux $u^k_i$ by the relation \eqref{localformula}, since the relation holds for the local problem as well. To begin with, we need to compute a flux $u_*$ on the finest level such that the local Neumann problem is well defined, i.e., the source $f$ is compatible with the prescribed boundary flux. Such a flux $u_*$ can be found by a V-cycle multigrid iteration similar to the procedure for the non-homogeneous constraint discussed before. At the implementation level, the matrix of local problems can be obtained by extracting sub-matrices of the global one. The right-hand side is the corresponding components of $f$ plus the contribution from the boundary condition. It is the degree of freedom $\int _E u\cdot n$ that enters the computation, which can be calculated by the formula \begin{equation}\label{unlambda} \int _E u^k_i\cdot n_E \,{\rm d} s = \frac{|T|}{d+1}f_T - \int _E\nabla _h \lambda^k_i\cdot n_E \, \,{\rm d} s. \end{equation} The relation \eqref{unlambda} can be used to eliminate the flux and obtain a direct update formula $\lambda^k_{i-1} \to \lambda^k_{i}$ without recording the flux approximation $u^k_i$. Algebraically it can be realized as a matrix multiplication applied to $\lambda^k_i$. Conceptually it is better to record the flux explicitly. We then discuss the prolongation from the coarse grid to the fine grid.
Since now only two levels are involved, we will follow the convention to use subscript $(\cdot )_H$ for quantities in the coarse grid and $(\cdot)_h$ for those in the fine grid. In the coarse grid, we will solve a residual equation to be considered in a moment. Suppose we have obtained a correction of the flux $e_H$; we prolongate $e_H$ from the RT space on the coarse grid to that on the fine grid, denoted by $I_H^he_H$. Note that although spaces of CR non-conforming elements are non-nested, the RT spaces for the flux are nested, and $I_H^h$ is just the natural inclusion. The correction is applied to the flux $u_h \leftarrow u_h + I_H^he_H$. With the updated flux, we have different boundary conditions for the local problems (the source is always $f$ in the finest level) and the smoother in the finest level can be applied again. We then discuss in detail the residual equation to be solved in the coarse grid. We first describe the restriction. Denote by $u_h$ the current approximation of the flux in the fine grid. The residual equation of the corresponding mixed method in the fine grid is \begin{equation}\label{mixedresidual} \begin{pmatrix} M & B^T\\ B & O \end{pmatrix} \begin{pmatrix} e_h\\ p_h \end{pmatrix} = \begin{pmatrix} -Mu_h\\ 0 \end{pmatrix}. \end{equation} So the restriction operator will apply to the residual $-Mu_h$, i.e., $r_H = -(I_H^h)^TMu_h$. On the coarse grid, we will still solve local problems on vertex patches. We start with the zero initial guess of the flux $e_H$, i.e., $e_{H,0} = 0$, and solve local problems to update $e_{H,i}$ patch-wise for $i=1,\ldots, N_H$. The updated flux $e_{H,i}$ in patch $\Omega_{H,i}$ will provide a boundary condition for the next patch $\Omega_{H,i+1}$. To use the non-conforming formulation, we need to determine the source data for each local problem. This can be done as follows. Let $r_H^i$ be the restriction of $r_H$ to the $i$th patch $\Omega_{H,i}$, and let $M_{H,i}$ be the corresponding mass matrix.
Then the source for the local problem on $\Omega_{H,i}$ will be given by $\delta f_{H,i} = -\div M_{H,i}^{-1}r_H^i$, which is piecewise constant on $\Omega_{H,i}$. The inverse $M_{H,i}^{-1}$ can be computed efficiently since $M_{H,i}$ is tridiagonal. Now we can solve the non-conforming discretization of the problem $-\Delta_H \lambda_{H,i} = \delta f_{H,i}$ with Neumann boundary condition $\partial_n \lambda_{H,i}|_{\partial \Omega_{H,i}} = e_{H,i-1}\cdot n$ and use $\lambda_{H,i}$ to update the flux correction $e_{H,i}$. Again, such a procedure can be implemented as one matrix multiplication, which leads to a non-trivial restriction matrix. As usual, a V-cycle multigrid method is obtained by applying the above two-level method recursively to the coarse grid problem. Convergence of this multigrid algorithm is straightforward since it is just a different way to compute the same solution of the mixed formulation for each local problem. The quantity $2(E(u^k) - E(u)) = \| u - u^k\|^2 = \| \nabla _h \lambda - \nabla _h \lambda^k\|^2$ since the relation \eqref{localformula} always holds during the iteration. The same algorithm and convergence proof can be applied to other non-conforming methods, e.g., hybridized discontinuous Galerkin (HDG) methods~\cite{Arnold.D;Brezzi.F1985,Cockburn.B;Gopalakrishnan.J2004,CockburnGopalakrishnanLazarov2009} and the weak Galerkin (WG) method~\cite{WangYe2012,MuWangYe2012polygon}, which are equivalent to the mixed methods. The only difference is the relation of $\lambda$ and the flux $u$. For example, for WG, we can simply use the following formula to update the flux: $u = \nabla_w \lambda,$ where $\nabla_w$ is the weak gradient operator. We have thus obtained a V-cycle multigrid method for non-conforming finite elements and have proved uniform convergence with even one smoothing step. Such results are very rare in the literature, and a recent work on a multigrid method for HDG methods with only one smoothing step can be found in~\cite{Hall2013}.
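In matrix form, the coarse-grid source computation $\delta f_{H,i} = -\div M_{H,i}^{-1}r_H^i$ described in this section amounts to one mass-matrix solve followed by an application of the divergence matrix. A toy dense sketch (recall that $B$ discretizes $-\div$; all names here are hypothetical):

```python
import numpy as np

def local_source(B_loc, M_loc, r_loc):
    """Source term delta_f = -div(M^{-1} r) for a coarse-patch problem.

    B_loc is the local matrix of the -div operator, M_loc the local
    mass matrix (tridiagonal in the two-dimensional setting), and
    r_loc the restricted residual; -div M^{-1} r is realized as
    B_loc @ M_loc^{-1} @ r_loc.  Dense illustrative version only.
    """
    return B_loc @ np.linalg.solve(M_loc, r_loc)

# Tiny example: with M = I the source reduces to B @ r.
B_loc = np.array([[1.0, -1.0, 0.0],
                  [0.0, 1.0, -1.0]])
M_loc = np.eye(3)
r_loc = np.array([3.0, 1.0, 2.0])
delta_f = local_source(B_loc, M_loc, r_loc)
print(np.allclose(delta_f, B_loc @ r_loc))
```

In practice the solve with $M_{H,i}$ would use its tridiagonal structure rather than a dense factorization.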
\section{Application to Stokes Equations} In this section, we apply our approach to the design of a multigrid method for a discrete Stokes system in two dimensions and prove its uniform convergence. Let $\Omega$ be a polygon triangulated into a quasi-uniform mesh $\mathcal T_h$ with mesh size $h$. Again we assume that there exists a sequence of meshes $\mathcal T_1, \mathcal T_2, \ldots, \mathcal T_J = \mathcal T_h$. The triangulation $\mathcal T_1$ is a shape regular triangulation of $\Omega$, and $\mathcal T_{k+1}$ is obtained by dividing each triangle in $\mathcal T_{k}$ into four congruent small triangles. We further assume the triangulations contain no singular vertex as defined in~\cite{Scott.L;Vogelius.M1985}. Consider the Stokes equations $$-\Delta u + \nabla p = f, \quad \div u = 0$$ with Dirichlet boundary condition $u|_{\partial \Omega}=0$. The homogeneous boundary condition is not essential. As discussed before, a non-homogeneous boundary condition leads to a non-homogeneous constraint and can be eliminated by one V-cycle or by a fast Poisson solver. We shall use exactly divergence-free elements and assume that the spaces $\mathcal K_i = \ker(\div)\cap \mathcal V_i$, $i=1, \ldots, J$, are nested, i.e., $$ \mathcal K_1 \subset \mathcal K_2 \subset \ldots \subset \mathcal K_J = \mathcal K. $$ Examples of such Stokes elements include the Scott-Vogelius elements~\cite{Scott.L;Vogelius.M1985}, for which the assumption that the triangulations contain no singular vertex is needed. We choose $\mathcal V \subset (H_0^1(\Omega))^2$ and $\mathcal P \subset L^2_0(\Omega)$ as Scott-Vogelius elements~\cite{Scott.L;Vogelius.M1985}. For this problem, the $A$-norm is the $H^1$ semi-norm $|\cdot|_1 = \|\nabla(\cdot)\|$, which is a norm on $H_0^1$. Let $\mathcal U$ be the $C^1$ finite element space on $\mathcal T$ such that ${\rm curl\,} \mathcal U = \mathcal K$.
Namely, we have the so-called Stokes complex: \begin{equation}\label{2DStokesexact} \mathcal U \stackrel{{\rm curl\,}}{\longrightarrow} \mathcal V \stackrel{\div}{\longrightarrow} \mathcal P \to 0. \end{equation} A similar exact sequence exists on each level $k = 1,2, \ldots, J$. We further decompose each $\mathcal K_k$ into subspaces associated with vertices. For a vertex $x_{k,j}\in \mathcal T_k$, we denote by $\Omega_{k,j}$ the patch of $x_{k,j}$, i.e., the union of all triangles containing $x_{k,j}$. Let $\mathcal U_{k,j} = C_0^1(\Omega_{k,j})\cap \ \mathcal U_{k}$ and $\mathcal V_{k,j}= (H_0^1(\Omega_{k,j}))^2\cap \ \mathcal V_k$ be the subspaces spanned by all basis functions with support in $\Omega_{k,j}$, and set $\mathcal K_{k,j} = \mathcal V_{k,j}\cap \ker (\div)$. By the construction of the Scott-Vogelius element, $\mathcal K_{k,j} = {\rm curl\,} \mathcal U_{k,j}$. The final decomposition is $$ \mathcal V = \sum_{k=1}^{J}\sum_{j=1}^{N_k}\mathcal V_{k,j}, \text{ and } \mathcal K = \sum_{k=1}^{J}\sum_{j=1}^{N_k}\mathcal K_{k,j}. $$ In the correction form, the smoother applied to this decomposition is equivalent to solving a local Stokes problem with force $f - Av_i$ in the subdomain surrounding a vertex, with zero Dirichlet boundary condition on $\partial \Omega_{k,j}$. In the update form, it solves the original Stokes problem with force $f$, but with the boundary condition on $\partial \Omega_{k,j}$ given by the current approximation of the velocity. Now the local problem is of considerable size (around a $200\times 200$ saddle point system) and the inexact solver using the diagonal matrix of $A$ can reduce it to an SPD problem of smaller size (around $60\times 60$). \begin{remark}\rm We are aware that more effective block preconditioners for the Stokes equations are available~\cite{Benzi.M;Golub.G;Liesen.J2005,Mardal2011a}.
Multigrid methods based on solving local problems are of more theoretical value, since multigrid convergence theory for Stokes equations under partial regularity assumptions and/or for V-cycle methods with few smoothing steps is rare. \qed \end{remark} We define $Q_k: \mathcal U \to \mathcal U_k$ as the $L^2$ projection, for $k=1,2,\ldots, J$. For $v\in \mathcal K$, since $\div v = 0$, we can find a unique $\phi\in \mathcal U$ such that $v = {\rm curl\,} \phi$. We then define $\Pi_k v = {\rm curl\,} Q_k\phi$. It is easy to show that $\Pi_l\Pi_k = \Pi_l$ for $l\leq k$ due to the nestedness of the spaces. We record the stability and error estimate of $\Pi_k$ in the following lemmas. \begin{lemma}\label{lm:Pi} The operator $\Pi_k$ is stable in $L^2$ norm and \begin{equation}\label{eq:Jackson} \|v-\Pi_kv\| \lesssim h_k|v|_{1}, \quad \text{for all } v\in \mathcal K. \end{equation} \end{lemma} \begin{proof} It is well known that $Q_k$ is stable in both the $L^2$-norm and the ${\rm curl\,}$-norm on quasi-uniform meshes. Consequently $\Pi_k = {\rm curl\,} Q_k$ is stable in the $L^2$-norm. It is obvious that $\Pi_k v = v$ for all $v\in \mathcal K_k$. Therefore $v-\Pi_kv = (I-\Pi_k)(v - v_k)$ for any $v_k\in \mathcal K_k$, and consequently $$ \|v-\Pi_kv\| \lesssim \inf_{v_k\in \mathcal K_k}\|v-v_k\| = \inf_{\phi_k\in \mathcal U_k}\| {\rm curl\,} \phi -{\rm curl\,} \phi_k \|\lesssim h_k|\phi|_{2} = h_k|v|_{1}. $$ \end{proof} \begin{lemma} The operator $\Pi_k$ is stable in the $H^{\sigma}$-norm for $\sigma \in [0,1/2)$, i.e., \begin{equation}\label{Pisigma} \|\Pi_k v\|_{\sigma}\lesssim \|v\|_{\sigma}, \quad\text{ for all }v\in \mathcal K. \end{equation} \end{lemma} \begin{proof} For $\sigma = 0$, i.e., the $L^2$-norm, the stability of $\Pi_k$ has been proved in Lemma \ref{lm:Pi}. For $v\in \mathcal K$, we define $\bar v$ as the piecewise constant approximation of $v$ given by $\int_{T}\bar v = \int _Tv$ for all $T\in \mathcal T_k$. Obviously $\|v - \bar v \|\lesssim h_k|v|_1$.
We prove the stability of $\Pi_k$ in the $H^1$-norm as follows: \begin{align*} |\Pi_k v|_1 = |\Pi_k v - \bar v |_1 \lesssim h_k^{-1}\|\Pi_k v - \bar v \| \lesssim h_k^{-1}\left (\|\Pi_k v - v \| + \|v - \bar v \|\right )\lesssim |v|_1. \end{align*} In the last step, we have used the approximation property of $\Pi_k$; cf. Lemma \ref{lm:Pi}. By interpolation of divergence-free spaces, cf. Proposition 3.7 in~\cite{Wendland.H2009}, we obtain the desired inequality \eqref{Pisigma}. \end{proof} Following Xu~\cite{Xu1992}, we can obtain the following stable decomposition. For completeness, we include a proof here. \begin{theorem}\label{th:stokesnorm} The decomposition $v = \sum_{k=1}^J (\Pi_k - \Pi_{k-1}) v$ is stable in $A$-norm, i.e., \begin{equation}\label{Qknorm} \sum _{k=1}^J |(\Pi_k - \Pi_{k-1}) v|_1^2 \lesssim |v|_{1}^2, \quad\text{ for all } v\in \mathcal K. \end{equation} \end{theorem} \begin{proof} Let $P_i: \mathcal K\to \mathcal K_i$ be the projection in the $A$-inner product. Then by a duality argument, cf. Theorem 6.9 in~\cite{Chen2015b}, we have the following error estimate, for some $\alpha \in (1/2,1]$: \begin{equation}\label{Pialpha} \|v - P_i v\|_{1-\alpha}\lesssim h_i^{\alpha}\|v\|_1, \text{ for all }v\in H_0^1(\Omega). \end{equation} Let $\tilde{\Pi}_k = \Pi_k - \Pi_{k-1}$, and $v_i=(P_i-P_{i-1})v$ for $i=1,2,\cdots, J$, with the convention $P_0 = 0$. Using the Cauchy-Schwarz inequality, it holds that \begin{align*} \sum_{k=1}^J\|\nabla(\tilde{\Pi}_kv)\|^2=&\sum_{k=1}^J\sum_{i,j=k}^J\int_{\Omega}\nabla(\tilde{\Pi}_kv_i) \cdot \nabla(\tilde{\Pi}_kv_j)\,dx \\ =&\sum_{i,j=1}^J\sum_{k=1}^{i\wedge j}\int_{\Omega}\nabla(\tilde{\Pi}_kv_i)\cdot\nabla(\tilde{\Pi}_kv_j)\,dx \\ \leq & \sum_{i,j=1}^J\sum_{k=1}^{i\wedge j}\|\nabla(\tilde{\Pi}_kv_i)\|\|\nabla(\tilde{\Pi}_kv_j)\|, \end{align*} where $i\wedge j=\min\{i,j\}$. According to the inverse inequality, the stability of $\Pi_k$, cf. \eqref{Pisigma}, and the error estimate of $P_i$, cf.
\eqref{Pialpha}, we have $$ \|\nabla(\tilde{\Pi}_kv_i)\|\lesssim h_k^{-\alpha}\|\tilde{\Pi}_kv_i\|_{1-\alpha}\lesssim h_k^{-\alpha}\|v_i\|_{1-\alpha}\lesssim h_k^{-\alpha}h_i^{\alpha}|v_i|_{1}. $$ Combining the last two inequalities, we get from the strengthened Cauchy-Schwarz inequality \begin{align*} \sum_{k=1}^J\|\nabla (\Pi_k - \Pi_{k-1})v\|^2 \lesssim & \sum_{i,j=1}^J\sum_{k=1}^{i\wedge j}h_k^{-2\alpha}h_j^{\alpha}h_i^{\alpha}|v_i|_{1}|v_j|_{1} \lesssim \sum_{i,j=1}^Jh_{i\wedge j}^{-2\alpha}h_j^{\alpha}h_i^{\alpha}|v_i|_{1}|v_j|_{1} \\ \lesssim & \sum_{i,j=1}^J\left (\frac{1}{2}\right )^{\alpha|i-j|}|v_i|_{1}|v_j|_{1} \lesssim \sum_{i=1}^J |v_i|_1^2=|v|_{1}^2. \end{align*} \end{proof} We continue by showing that the micro-decomposition of the slice $(\Pi_k - \Pi_{k-1})v$ is stable in the energy norm. \begin{lemma}\label{lm:stokesmicro} For $v_k = (\Pi_k - \Pi_{k-1})v \in \mathcal K_k$, there exists a decomposition $v_k = \sum_{j=1}^{N_k}v_{k,j}$ with $v_{k,j}\in \mathcal K_{k,j}$ such that $$ \sum _{j=1}^{N_k}|v_{k,j}|_1^2 \lesssim |v_k|_1^2. $$ \end{lemma} \begin{proof} Recall that $\phi_k = Q_k \phi$ and $v = {\rm curl\,} \phi$. Let $\phi_{k} - \phi_{k-1} = \sum _{j=1}^{N_k}\psi_{k,j}$ be a decomposition such that $\operatorname{supp} \psi_{k,j}\subset \Omega_{k, j}$. Such a decomposition can be obtained by partitioning the basis decomposition; for example, a basis function associated with an edge can be split evenly between the patches of the two vertices of that edge. We then set $v_{k,j} = {\rm curl\,} \psi_{k,j}$ and obtain the decomposition $v_k = \sum_{j=1}^{N_k}v_{k,j}$. Then \begin{align*} &\sum_{j=1}^{N_k}|v_{k,j}|_1^2 \leq \sum_{j=1}^{N_k}|\psi_{k,j}|_2^2\lesssim \sum_{j=1}^{N_k} h_k^{-4}\|\psi_{k,j}\|^2\lesssim h_k^{-4}\|\phi_k - \phi_{k-1}\|^2 \\ & = h_k^{-4}\|(I-Q_{k-1})(\phi_k - \phi_{k-1})\|^2 \lesssim h_k^{-2} \|{\rm curl\,}(\phi_k - \phi_{k-1})\|^2 = h_k^{-2} \|v_k\|^2.
\end{align*} We write $v_k = (\Pi_{k} - \Pi_{k-1})v = (I - \Pi_{k-1})(\Pi_{k} - \Pi_{k-1})v = (I - \Pi_{k-1})v_k$ and use the $L^2$-norm error estimate of $\Pi_{k-1}$ to conclude $$ h_k^{-2} \|v_k\|^2\lesssim |v_k|_1^2. $$ The desired inequality then follows. \end{proof} Combining Theorem \ref{th:stokesnorm} and Lemma \ref{lm:stokesmicro} leads to the stability of the decomposition $\mathcal K = \sum_{k=1}^J\sum_{j=1}^{N_k}\mathcal K_{k,j}$ in the $A$-norm. \begin{theorem} For every $v\in \mathcal K$, there exists a decomposition $v = \sum_{k=1}^J\sum_{j=1}^{N_k}v_{k,j}$ with $v_{k,j}\in \mathcal K_{k,j}$ such that $$ \sum_{k=1}^J\sum _{j=1}^{N_k}|v_{k,j}|_1^2 \lesssim |v|_1^2. $$ \end{theorem} The assumption (SCS) is just the corresponding one for the multilevel $H^1$ finite element spaces $\mathcal V_1\subset \mathcal V_2 \subset \ldots \subset \mathcal V_J$ and can be proved as before. Note that since $\mathcal V_{k,j}\subset (H_0^1(\Omega_{k,j}))^2$, for functions $v_{k,j}$ in $\mathcal V_{k,j}$ the norm equivalence $h_k^{-2}\|v_{k,j}\|^2\eqsim |v_{k,j}|_1^2$ holds with an $\mathcal O(1)$ constant. \section{Conclusion and Future Work} In this paper we have developed a multigrid method for saddle point systems based on a multilevel subspace decomposition of the constraint space $\mathcal K$. We have proved the convergence of such a method based on the stable decomposition and the strengthened Cauchy-Schwarz inequality. For some mixed finite element discretizations of the Poisson, Darcy, and Stokes equations, we have verified the SD and SCS assumptions and consequently obtained uniformly convergent multigrid methods for the resulting saddle point systems. In a forthcoming work \cite{Chen2015c}, we shall examine a plate bending problem, which is a fourth order elliptic equation. The key is to find an underlying exact sequence. \bibliographystyle{abbrv} \input{MGSaddle.bbl} \end{document}
https://arxiv.org/abs/2208.11413
Sharp inequalities for Neumann eigenvalues on the sphere
We prove that the second nontrivial Neumann eigenvalue of the Laplace-Beltrami operator on the unit sphere $\mathbb{S}^n \subseteq \mathbb{R}^{n+1}$ is maximized by the union of two disjoint, equal geodesic balls among all subsets of $\mathbb{S}^n$ of prescribed volume. In fact, the result holds in a stronger version, involving the harmonic mean of the eigenvalues of order $2$ to $n$, and extends to densities. A (surprising) consequence concerns the maximality of a geodesic ball for the first nontrivial eigenvalue under the volume constraint: the hemisphere inclusion condition of the Ashbaugh-Benguria result can be relaxed to a weaker one, namely empty intersection with a geodesic ball of the prescribed volume. Although we do not prove that this last inclusion result is sharp, for a mass less than half of the sphere we numerically identify a density with a higher first eigenvalue than the corresponding geodesic ball and with support equal to the full sphere $\mathbb{S}^2$.
\section{Introduction} Let $n \ge 2$ and denote by $\Sn$ the unit sphere of dimension $n$ in $\R^{n+1}$. Let $\Om \subseteq \Sn$ be an open, Lipschitz set of measure $m\in (0,|\Sn|)$. The eigenvalues of the Laplace-Beltrami operator with Neumann boundary conditions on $\Om$ are $$0= \mu_0(\Om) \le \mu_1(\Om) \le \dots\le \mu_k(\Om) \to +\infty,$$ counted with multiplicity. If $\Om$ is connected then $\mu_1(\Om)>0$. For each $k \in \N$ we have $$\mu_k(\Om) = \min_{S\in{\mathcal S}_{k+1}} \max_{u \in S\setminus \{0\}} \frac{\int_\Om |\nabla u|^2 }{\int_\Om u^2 },$$ where ${\mathcal S}_k$ is the family of all subspaces of dimension $k$ in $H^1(\Om)$, and the integrals are taken with respect to the canonical measure on $\Sn$. Then, there exists $u \in H^1(\Om)$, $u \not= 0$, such that $$ \begin{cases} -\Delta_\Sn u = \mu_k(\Om) u \mbox { in } \Om,\\ \frac{\partial u}{\partial \nu} = 0 \mbox { on } \partial \Om. \end{cases} $$ The aim of this paper is to find sharp upper bounds for the second eigenvalue when the volume of the set $\Om$ is prescribed. In short, we prove that \begin{equation}\label{bmn01} \mu_2(\Om) \le \mu_1(B^{m/2}), \end{equation} where $m=|\Om|$ and $B^a$ denotes a geodesic ball of volume $a$ in $\Sn$. This means that the set of volume $m$ with maximal second nontrivial Neumann eigenvalue is the union of two disjoint geodesic balls of volume $\frac{m}{2}$. Although this is our main motivation, the result we obtain is more general, in two directions. First, we shall prove the stronger inequality for the harmonic mean of the eigenvalues of order $2$ to $n$: \begin{equation}\label{bmn02} \sum_{k=2}^n \frac{1}{\mu_k(\Om)} \ge \frac{n-1}{ \mu_1(B^{m/2})}. \end{equation} Second, the inequality \eqref{bmn02} naturally extends to densities with prescribed mass. This is detailed in the last section, but we also refer to \cite{BH19} for an introduction to such problems.
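As a quick illustration of the min-max characterization above (a one-dimensional stand-in, not the spherical setting of the paper): for the Neumann Laplacian on the interval $(0,\pi)$ the eigenvalues are $\mu_k=k^2$, and a standard finite-difference discretization reproduces them.

```python
import numpy as np

# Cell-centered finite differences for -u'' on (0, pi) with Neumann conditions.
# The exact Neumann eigenvalues are mu_k = k^2, with eigenfunctions cos(k x).
n = 400
h = np.pi / n
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0   # ghost-point reflection encodes u' = 0 at both ends
mu = np.sort(np.linalg.eigvalsh(A)) / h**2
print(np.round(mu[:4], 3))  # close to [0, 1, 4, 9]
```

The zero eigenvalue corresponds to the constant eigenfunction, exactly as $\mu_0(\Om)=0$ above.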
A (somewhat surprising) consequence of inequality \eqref{bmn01} is an extension of the result of Ashbaugh-Benguria stating that if $\Om$ is contained in a hemisphere of $\Sn$ (and so has volume $m$ not larger than $ \frac{|\Sn|}{2}$), the sharp inequality \begin{equation}\label{eq_ashbaughbenguria} \mu_1(\Om)\le \mu_1(B^{m}) \end{equation} holds. Indeed, inequality \eqref{bmn01} can be applied as follows: if $\Om_1, \Om_2$ are two disjoint, smooth open subsets of $\Sn$, then $$\mu_2(\Om_1 \cup \Om_2) \le \mu_1\left(B^{\frac{1}{2}(|\Om_1|+|\Om_2|)}\right).$$ This means that $$\min \{\mu_1(\Om_1), \mu_1(\Om_2)\} \le \mu_1\left(B^{\frac{1}{2}(|\Om_1|+|\Om_2|)}\right).$$ In particular, this implies that the Ashbaugh-Benguria inequality \eqref{eq_ashbaughbenguria} holds when $\Om$ lies in the complement of a geodesic ball of volume $m$. The main explanation of the ``ease'' with which we obtain this result lies in the richness of the class of new test functions we build for testing the second eigenvalue. It is a challenge to understand whether or not this last inclusion condition can be completely dropped; we comment on this issue in Section \ref{sec_density}. Let us briefly recall the Euclidean context of these questions. In 1954 Szeg\"o \cite{Sz54} proves that in $\R^2$ the inequality $$\frac{1}{\mu_1(\Om)} + \frac{1}{\mu_2(\Om)} \ge \frac{2}{\mu_1(B^{m}_{Euc})}$$ holds in the class of simply connected smooth domains, where $B^m_{Euc}$ is the Euclidean ball of measure $m=|\Om|$. Two years later, Weinberger \cite{We56} proves that $|\Om|^\frac{2}{n} \mu_1(\Om)$ is maximized by the ball without any topological constraint. While the proof of Szeg\"o is based on conformal mappings and transplantation of the eigenfunctions of the disc, Weinberger builds a suitable set of test functions orthogonal to constants by extending the eigenfunctions of the ball and taking restrictions to the set $\Om$.
In 2009 Girouard, Nadirashvili and Polterovich prove that in $\R^2$ the second Neumann eigenvalue is maximized by the union of two disjoint balls of mass $\frac{m}{2}$ in the class of simply connected smooth domains. In 2019, the first author and Henrot \cite{BH19} remove the dimensional and topological constraints and, moreover, prove that the result continues to be valid in the larger class of densities. While the two dimensional result can somehow be related to the proof of Szeg\"o, being based on conformal transplantation, the $n$-dimensional one comes from the construction of Weinberger, which is refined and complemented by a fine topological argument. This last result was later extended to the hyperbolic space by Freitas and Laugesen \cite{FrLa20}. To complete the picture in the Euclidean space, it has been conjectured by Ashbaugh and Benguria in \cite{AsBe93} that the inequality \begin{equation}\label{bmn03} \sum_{k=1}^{j} \frac{1}{\mu_k(\Om)} \ge \frac{j}{ \mu_1(B^m)} \end{equation} holds true in the Euclidean space $\R^n$ for $j=n$. In 2018, Wang and Xia \cite{WaXi18} proved a weaker version of the inequality, for $j=n-1$. Although this is not the conjectured result, it still gives a stronger inequality than Weinberger's in dimension $n \ge 3$. On spheres, the first results are due to Chavel \cite{Ch80} and to Ashbaugh and Benguria \cite{AB95}. Refining the Weinberger argument, Ashbaugh and Benguria prove that, given $0<m< \frac{|\Sn|}{2}$, among all subsets of a hemisphere of volume $m$ the geodesic ball is the unique maximizer. A similar assertion for domains allowed to go beyond the hemisphere is in general an open question (even if their volumes are not larger than $\frac{|\Sn|}{2}$). We refer to \cite[Remark (2) after Theorem 5.1]{AB95} and \cite[Remark (2), p. 1085]{AsBe01} for some ideas to extend the inequality to domains on the whole sphere.
However, those results typically work only for subclasses of domains which have, in some sense, their mass balanced on sections of the sphere centered at the ``center of mass'' point. We comment in Section \ref{sec_density} on the question of completely removing any inclusion constraint. If $\frac{|\Sn|}{2}\le m <|\Sn|$, the inequality for $\mu_1$ is not well understood and is likely to fail, at least for some values of $m$. The behaviour of the first Neumann eigenvalue on geodesic balls of radius larger than $\frac\pi2$, studied in \cite{AB95}, supports this idea. For instance, the first Neumann eigenvalues of the hemisphere and of the full sphere coincide, so that the monotonicity of the first eigenvalue on geodesic balls with respect to the measure fails for values of $m$ above half of the sphere. This is a strong indicator that the inequality may not be true when the volume is larger than $\frac{|\Sn|}{2}$. We comment on this issue in Section \ref{sec_density} and refer to \cite{Ma22} for some numerical computations in support of this assertion. We point out the paper \cite{FaWe18}, where Fall and Weth analyze critical sets for $\mu_1$ on the sphere. As well, recently, in \cite{BBC20} the authors prove on spheres an extension of the result of Wang and Xia \cite{WaXi18} for harmonic means, still under the constraint that $\Om$ be included in a hemisphere. Here is our main result. \begin{theorem}\label{bmn05} Let $\Om\subset \Sn$ be an open, Lipschitz set. Then \[\sum_{i=2}^{n}\frac{1}{\mu_i\left(\Om\right)}\geq \sum_{i=2}^{n}\frac{1}{\mu_i\left(B^{|\Om|/2}\sqcup B^{|\Om|/2}\right)}\left(=\frac{n-1}{\mu_1\left(B^{|\Om|/2}\right)}\right).\] Equality is attained when $\Om$ is the union of two equal, disjoint geodesic balls. \end{theorem} Note that the hemisphere inclusion hypothesis is not imposed in this result. We work with domains on the full sphere and with arbitrary measure.
If $\Om$ has three or more connected components, then $\mu_2(\Om)$ equals $0$ and the inequality is trivially true. The relevant cases are when $\Om$ is either connected or has two connected components. \smallskip As a consequence, $\mu_2$ is maximal on two disjoint geodesic balls of half measure. Indeed, we get the following. \begin{theorem}\label{bmn06} Let $\Om\subset \Sn$ be an open Lipschitz set; then \[\mu_2\left(\Om\right)\leq \mu_1\left(B^{|\Om|/2}\right).\] \end{theorem} The proof of Theorems \ref{bmn05} and \ref{bmn06} goes in the same direction as \cite{BH19}. We build a pack of $n$ test functions which are orthogonal both to constants and to the first (unknown) eigenfunction of a set $\Om$. While the idea of the construction is similar to that of \cite{BH19}, being based on folding (Weinberger-type) Ashbaugh-Benguria test functions across a hyperplane containing the center of the sphere, the main difficulty is related to the validity of the topological argument. Roughly speaking, the key difficulty comes from the non-uniqueness of the ``center of mass'' point of an arbitrary domain on a sphere; a domain may have multiple (even infinitely many) such centers for packages of suitable test functions. The uniqueness of such a point was a crucial ingredient in the topological arguments of both \cite{BH19} and \cite{FrLa20}. \smallskip A consequence of this last result is an extension of the result of Ashbaugh-Benguria on the first eigenvalue. \begin{corollary}\label{bmn08.1} Let $m\in (0,|\Sn|/2)$ and let $B^m$ be a geodesic ball of measure $m$ in $\Sn$. Let $\Om \subset \Sn\setminus B^m$ be an open Lipschitz set such that $|\Om|=m$. Then \[\mu_1\left(\Om\right)\leq \mu_1\left(B^m\right).\] \end{corollary} In the result above, the hemisphere inclusion condition of Ashbaugh-Benguria is replaced by a weaker one, namely inclusion in the complement of a geodesic ball of measure $m$.
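The mechanism behind Theorem \ref{bmn06} at the level of disjoint unions can be seen in a one-dimensional analogue (an informal sketch, not the spherical problem): the Neumann spectrum of a disjoint union is the merged spectrum of the components, each component contributes one zero eigenvalue, so $\mu_2$ of the union equals the smaller of the two first nonzero component eigenvalues, and it is maximized by the equal split of the total length.

```python
import numpy as np

def neumann_eigs(length, n=300):
    """Neumann eigenvalues of -u'' on an interval of the given length
    (cell-centered finite differences; exact values are (k*pi/length)^2)."""
    h = length / n
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A[0, 0] = A[-1, -1] = 1.0
    return np.sort(np.linalg.eigvalsh(A)) / h**2

def mu2_of_union(a, total=2.0):
    """mu_2 of the disjoint union of intervals of lengths a and total - a:
    the spectrum of a disjoint union is the merged spectrum of the components."""
    spec = np.sort(np.concatenate([neumann_eigs(a), neumann_eigs(total - a)]))
    return spec[2]  # spec[0] = spec[1] = 0: one constant mode per component

# mu_2 of the union equals min over components of mu_1, i.e. (pi / max(a, total - a))^2,
# so among all splits of a fixed total length it is maximized by the equal split.
vals = {a: mu2_of_union(a) for a in [0.5, 0.8, 1.0, 1.2, 1.5]}
best = max(vals, key=vals.get)
print(best)  # 1.0, i.e. the equal split wins, mirroring Theorem bmn06 in one dimension
```

The function names here are ours; the code only illustrates the merging of spectra, not the spherical test-function construction.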
In several other situations the inequality above continues to hold, for instance for every domain which is disjoint from one of its isometric images. Let us also mention the recent paper \cite{LL22}, where the authors use conformal methods to obtain $\mu_1(\Om)\leq \mu_1(B^m)$ for any simply connected set $\Om\subset\mathbb{S}^2$ under the condition $|\Om|=m\leq 0.94 \left|\mathbb{S}^2\right|$. \newline The issue of completely dropping the inclusion constraint in Corollary \ref{bmn08.1} does not have a clear answer and will be discussed in the last section. In fact, we numerically identify a density, with support on the full sphere ${\mathbb S}^2$ and mass less than $2\pi$, which has a higher first eigenvalue than the corresponding geodesic ball. The main consequence of this observation is that no mass transplantation argument such as ours can work on the full sphere.\newline The paper is organized as follows. In the next section we recall some topological tools and prove the main technical result on which the construction of the test functions is based. The key argument relies on controlling, along a continuous deformation, the number of zeroes of a vector field built upon the test functions. Section \ref{bmnsec3} is devoted to the proof of Theorem \ref{bmn06} and its corollaries, while Section \ref{bmnsec4} is devoted to the proof of Theorem \ref{bmn05}. We choose to switch what would be the natural order of the proofs, since the proof of Theorem \ref{bmn06} is a direct consequence of the topological arguments, while the proof of Theorem \ref{bmn05} requires some extra analysis which might hide the core ideas. Finally, in the last section we comment on possible extensions of these results and bring some numerical support for our assertions. \section{Topological results}\label{bmnsec2} \medskip \noindent{\bf Notations.} We write $\Sn$ for the unit sphere of dimension $n$ in $\R^{n+1}$ and denote by $z_1,\hdots,z_{n+1}$ its coordinate functions.
For any $a\in \Sn$, we let \[a^-=\{z\in\Sn:z\cdot a<0\},\ a^\bot=\{z\in\Sn:z\cdot a=0\},\ a^+=\{z\in\Sn:z\cdot a>0\}\] and denote \[R_a(z)=z-2(z\cdot a)a,\] \smallskip \[F_a(z)=\begin{cases}R_a(z) & \text{ when }z\in a^+,\\ z & \text{ when }z\in a^-\sqcup a^\bot.\\ \end{cases}\] Above, $x \cdot y$ denotes the scalar product in $\R^{n+1}$ of the vectors $x$ and $y$, and $\langle x,y\rangle$ denotes the space spanned by $x$ and $y$. For any $w,z\in\Sn$, we denote by $\pi_z(w)=w-(w\cdot z)z$ the projection of $w$ onto $T_z\Sn=z^\bot$, the tangent hyperplane to the sphere at the point $z$.\bigbreak Let $|\cdot|$ denote the $n$-dimensional Hausdorff measure on $\Sn$. For any $m\in (0,|\Sn|)$, $r\in (0,\pi)$, and $p\in\Sn$, we let $B_{p,r}$ be the geodesic ball in $\Sn$ of center $p$ and radius $r$, and $B_p^m$ the ball of center $p$ and measure $m$. When $p$ is not mentioned we implicitly assume that $p=e_{n+1}$ (as its choice will not matter).\bigbreak Let $g\in\mathcal{C}^{0}([-1,1],\R_+)$ be a continuous even function such that $g>0$ on $(-1,1)$. We denote by $G(t)=\int_0^t g(s)\, ds$ the primitive of $g$ which vanishes at $t=0$.\bigbreak For any $\rho\in L^1(\Sn,\R)$ and $(a,z)\in\Sn\times\Sn$ we denote \[E_\rho(z):=\int_{\Sn}\rho(v)G(z\cdot v)dv\] and \[\E_\rho(a,z):=\int_{\Sn}\rho(v)G(z\cdot F_a v)dv.\] Notice in particular that $\nabla E_\rho(z)=\nabla_z\E_\rho(-z,z)$, where ``$\nabla_z$'' is the partial gradient with respect to the second variable. \medskip \noindent{\bf Counting zeroes modulo 4.} Let $M$ be a compact manifold with boundary $\partial M$.
Suppose that we have a smooth involution without fixed points $S:M\to M$, and a $(1,1)$ tensor $Q$ acting as an invertible linear transformation on each tangent space, meaning that for every $x\in M$ we have $Q(x)\in GL(T_xM)$.\bigbreak We denote by $\X(M)$ the set of continuous (tangent) vector fields $V$ on $M$ (with the natural $\mathcal{C}^0$ topology) such that \[S_* V=QV,\] meaning that for every $x\in M$, we have $dS(S^{-1}(x))V(S^{-1}(x))=Q(x)V(x)$. We also let \[\X^*(M)=\{V\in\X(M):V_{|\partial M}\text{ does not vanish}\}.\] When $V\in\X^*(M)$ is smooth ($\mathcal{C}^\infty$), we say it is nondegenerate if for every $x\in M$ such that $V(x)=0$ and any local chart $\varphi:\mathcal{B}\to M$ (where $\mathcal{B}$ is a ball in Euclidean space) with $\varphi(0)=x$, the differential $D(\varphi^*V)(0)$ is invertible (where $\varphi^*V=\left(\varphi^{-1}\right)_*V$). \begin{lemma}\label{TopLemma} There is a unique continuous function $I:\X^*(M)\to \mathbb{Z}/4\mathbb{Z}$ such that for every smooth nondegenerate vector field $V\in\X^*(M)$, \[I(V)=\mathrm{Card}\left(\{x\in M:V(x)=0\}\right)\text{ mod }(4).\] \end{lemma} In particular $I$ is invariant under homotopy in $\X^*(M)$. This result is similar to \cite[§4]{M97}, \cite[Ch. 7.4]{S11}; we give a proof in the appendix.\newline The main technical result on which the construction of test functions is based is the following. \begin{theorem}\label{bmn08} Let $\rho,\sigma\in L^1(\Sn,\R)$ with $\rho\geq 0$. Then there exists $(a,z)\in\Sn\times \Sn$ such that $z$ is a critical point of both $\E_\rho(a,\cdot)$ {and} $\E_\sigma(a,\cdot)$. \end{theorem} Criticality above reads $$\int_{\Sn}\rho(v)g(z\cdot F_a (v))\pi_zF_a (v)dv= \int_{\Sn}\sigma(v)g(z\cdot F_a (v))\pi_z F_a (v)dv=0.$$ The remaining part of the section is devoted to the proof of this theorem. \begin{proof} This is immediate for any $a$ when $\rho=0$, so we suppose without loss of generality that $\int_{\Sn}\rho>0$.
We begin with a computation which will be useful at several steps of the proof. \begin{lemma} Let $p \in \Sn$. For any $r\in (0,\pi/2)$, denote by $G_r: [-1,1] \to \R$ the function given by \[G_r(p\cdot z)= E_{1_{B_{p,r}}}(z).\] Then $G_r\in \mathcal{C}^0([-1,1],\R)\cap\mathcal{C}^1((-1,1),\R)$ and satisfies $G_r'>0$ in $(-1,1)$. Moreover, $$G_r'(p\cdot z)\pi_z(p)=\nabla E_{1_{B_{p,r}}}(z).$$ \end{lemma} \begin{proof} By explicit computation, \[E_{1_{B_{p,r}}}(z)=\int_{B_{p,r}}G(z\cdot v)dv.\] Since $G$ is $\mathcal{C}^1$, we obtain directly that $E_{1_{B_{p,r}}}$ is $\mathcal{C}^1$ and depends only on the scalar product $z\cdot p$. Consequently, it may be written as a function $G_r(p\cdot z)$ with $G_r\in\mathcal{C}^1((-1,1),\R)$. Let us now check that $G_r'>0$ on $(-1,1)$. For any $z\neq \pm p$, \[G_r'(z\cdot p)\pi_z (p)=\nabla E_{1_{B_{p,r}}}(z)=\int_{B_{p,r}}g(z\cdot v)\pi_z(v)dv.\] We let $u=-\frac{\pi_z(p)}{|\pi_z(p)|}$ be the unique unit vector in $T_z \Sn\cap \langle p,z\rangle$ such that $u\cdot p<0$. Then \begin{align*} G_r'(z\cdot p)u\cdot p&=\int_{B_{p,r}}g(z\cdot v)(u\cdot v)dv\\ &=\int_{B_{p,r}\setminus R_u(B_{p,r})}g(z\cdot v)(u\cdot v)dv <0. \end{align*} The last inequality holds because $v\in B_{p,r}\setminus R_u(B_{p,r})$ implies $u\cdot v<0$. \end{proof} \noindent {\bf Homotopy and zeroes of the gradient fields.} Let $r\in (0,\pi/4)$ (its precise value is not important), and let $p,q\in\Sn$ be points that will be fixed later. Consider the following homotopy for $t \in [0,1]$: \[H_t(a,z):=\left((1-t)\E_\rho(a,z)+t\E_{1_{B_{p,r}}}(a,z),(1-t)\E_\sigma(a,z)+t\E_{1_{B_{q,r}}}(a,z)\right).\] We claim that $\nabla_z H_t(a,z)$ has no zeroes when $a\cdot z=0$.
Indeed, looking at the first component, we always have \begin{align*} a\cdot\nabla_z \E_\rho(a,z)&=\int_{\Sn}\rho(v)g(z\cdot F_a v)(a\cdot\pi_zF_av) dv=\int_{\Sn}\rho(v)g(z\cdot v)(a\cdot F_av) dv<0,\\ a\cdot\nabla_z \E_{1_{B_{p,r}}}(a,z)&=\int_{B_{p,r}}g(z\cdot F_a v)(a\cdot\pi_zF_av) dv=\int_{B_{p,r}}g(z\cdot v)(a\cdot F_av) dv<0, \end{align*} so, by uniform continuity, the first component of $\nabla_z H_t(a,z)$ does not vanish for any $t\in [0,1]$ and $(a,z)\in\Sn\times \Sn$ for which $|a\cdot z|$ is small enough.\bigbreak We claim that for well-chosen points $p,q$, $$z\in a^-,\ \nabla_z H_1(a,z)=0 \mbox{ implies } \{z,R_a (z)\}=\{p,q\}.$$ The points $p,q$ will be chosen to be antipodal, but it may be checked that any $p,q$ such that $B_{p,r}\cap B_{q,r}=\emptyset$ would work as well. Indeed, \begin{itemize}[label=\textbullet] \item If $\nabla_z\E_{1_{B_{p,r}}}(a,z)=0$, then $p\in \langle a,z\rangle$. This is because for any $b\in \langle z,a\rangle^{\bot}$, we have \[b\cdot \nabla_z\E_{1_{B_{p,r}}}(a,z)=\int_{B_{p,r}}g(z\cdot F_a (v))(b\cdot v)dv=\int_{B_{p,r}\setminus B_{R_b p,r}}g(z\cdot F_a (v))(b\cdot v)dv,\] which is positive (resp. negative) as soon as $b\cdot p>0$ (resp. $b\cdot p<0$). \item If $R_a(z)=-z$ or $B_{p,r}\subset a^-\sqcup a^+$, then $p\in \{z,R_a(z)\}$. Indeed, in the first case it means $a=-z$, so $z$ is a critical point of $E_{1_{B_{p,r}}}$, whose critical points are $\{\pm p\}$; hence $p\in\{z,-z\}=\{z,R_a(z)\}$. In the second case, if $\overline{B_{p,r}}$ is fully contained in $a^-$ (resp. $a^+$), then $z$ (resp. $R_a z$) is also a critical point of $E_{1_{B_{p,r}}}$, meaning \[p=\begin{cases}z & \text{ if }p\in a^-,\\ R_a(z) & \text{ if }p\in a^+\end{cases}.\] \item If $\nabla_z\E_{1_{B_{p,r}}}(a,z)=0$ and $p\notin \{z,R_a (z)\}$, then \begin{equation}\label{eq_milieu} c:=\frac{z+R_a(z)}{|z+R_a(z)|}\in \overline{B_{p,r}}. \end{equation} Indeed, according to the previous point, $R_a(z)\neq -z$ (so that $c$ is well-defined) and $B_{p,r}$ meets $a^\bot$.
Since $\overline{B_{p,r}}$ and $a^\bot$ intersect, and since $r<\frac{\pi}{4}$ and $p\in \langle a,z\rangle$, necessarily $\overline{B_{p,r}}$ contains (exactly) one of $\{-c,c\}$. However, if it contains $-c$, then, since $r<\frac{\pi}{4}$, $B_{p,r}\subset c^-\subset\{v:a\cdot \pi_z F_a (v)<0\}$, so \[a\cdot\nabla_z \E_{1_{B_{p,r}}}(a,z)=\int_{B_{p,r}}g(z\cdot F_a (v))(a\cdot \pi_z F_a (v))dv<0,\] contradicting $\nabla_z\E_{1_{B_{p,r}}}(a,z)=0$; hence necessarily $c\in\overline{B_{p,r}}$. \end{itemize} Since we are free to choose $p$ and $q$, we may take $p=e_{n+1}$, $q=-e_{n+1}$. Assume $\nabla_z H_1(a,z)=0$ and suppose that $\{p,q\}\neq \{z,R_az\}$. Then, using the second point, necessarily $z\neq R_a z$ and either $B_{p,r}$ or $B_{q,r}$ contains $c$ (as defined in equation \eqref{eq_milieu}); without loss of generality assume $c\in \overline{B_{p,r}}$. Then, since $r<\frac{\pi}{4}$, we have $B_{q,r}\subset c^-\cap (a^+\sqcup a^-)$, which contradicts the fact that $q\in \{z,R_a z\}$. This proves the claim.\bigbreak \noindent {\bf Counting zeroes modulo $4$.} Let us now make a change of parametrization; for any $(z,w)\in \Sn\times \Sn$ that are not identical, we define \[a= a(z,w):=\frac{w-z}{|w-z|},\] such that $z\in a^-$, $w=R_a z$, with $R_a$ acting as an isometry between $T_z \Sn$ and $T_w \Sn$. We let $D$ be a (small) tubular neighbourhood of $\{(z,w)\in(\Sn)^2 :z=w\}$, $M=(\Sn)^2\setminus D$, and we define, for any densities $(\rho,\sigma)$, the vector field \[\V_{\rho,\sigma}(z,w)=\left(\nabla_z\E_\rho(a(z,w),z),R_a\nabla_z\E_\sigma(a(z,w),z)\right).\] $\V_{\rho,\sigma}$ and $\V_{1_{B_{p,r}},1_{B_{q,r}}}$ are two tangent vector fields of $M$ that are homotopic with no zeroes crossing $\partial M$ when $D$ is chosen small enough.\newline Define $S:(z,w)\in M\mapsto (w,z)\in M$; it is a smooth involution with no fixed point.
For any $(z,w)\in M$, let $Q(z,w)$ be the linear endomorphism of $T_{z,w}M=T_z\Sn\times T_w\Sn$ defined by \[Q(z,w).(h,k)=(R_a k,R_a h).\] We claim that for any $(z,w)\in M$: \begin{equation}\label{eq_symmetry} dS(w,z)\V_{\rho,\sigma}(w,z)=Q(z,w)\V_{\rho,\sigma}(z,w), \end{equation} or, more compactly, that $S_*\V_{\rho,\sigma}=Q\V_{\rho,\sigma}$, as in the hypothesis of Lemma \ref{TopLemma}. Indeed, write $a:=a(z,w)$ and notice that $a(w,z)=-a$, that $\pi_z R_a =R_a \pi_w$ and that $F_{-a}=R_aF_a$. Then \begin{align*} \V_{\rho,\sigma}(w,z)&=\left(\int_{\Sn}\rho(v)g(w\cdot F_{-a}v)\pi_wF_{-a}v dv,R_a\int_{\Sn}\sigma(v)g(w\cdot F_{-a}v)\pi_wF_{-a}v dv\right)\\ &=\left(R_a\int_{\Sn}\rho(v)g(z\cdot F_{a}v)\pi_zF_{a}v dv,\int_{\Sn}\sigma(v)g(z\cdot F_{a}v)\pi_z F_{a}v dv\right), \end{align*} and $dS(w,z).(h,k)=(k,h)$, so \begin{align*} dS(w,z).\V_{\rho,\sigma}(w,z)&=\left(\int_{\Sn}\sigma(v)g(z\cdot F_{a}v)\pi_z F_{a}v dv,R_a\int_{\Sn}\rho(v)g(z\cdot F_{a}v)\pi_zF_{a}v dv\right)\\ &=Q(z,w).\V_{\rho,\sigma}(z,w). \end{align*} We thus define the zero counting modulo $4$ in $M$ as in Lemma \ref{TopLemma} and we claim that \[I(\V_{1_{B_{p,r}},1_{B_{q,r}}})=2\text{ mod }(4).\] According to the previous discussion, the zeroes of $\V_{1_{B_{p,r}},1_{B_{q,r}}}$ are exactly $\{(p,q),(q,p)\}$. Let $(z,w)$ be sufficiently close to $(p,q)$ such that $\overline{B_{p,r}}\subset a(z,w)^-$, $\overline{B_{q,r}}\subset a(z,w)^+$. Notice that in this case \begin{align*} \nabla_z \E_{1_{B_{p,r}}}(a,z)&=\int_{B_{p,r}}g(z\cdot v)\pi_z vdv=G_r'(z\cdot p)\pi_z (p)\\ \nabla_z \E_{1_{B_{q,r}}}(a,z)&=\int_{B_{q,r}}g(z\cdot R_a(v))\pi_z (R_a(v))dv\\ &=R_a\Big (\int_{B_{q,r}}g(w\cdot v)\pi_w (v)dv\Big)\\ &=G_r'(w\cdot q)R_a(\pi_w (q)) \end{align*} because $\pi_z R_a=R_a \pi_w$.
Thus $\V_{1_{B_{p,r}},1_{B_{q,r}}}(z,w)$ admits the simpler expression (when $(z,w)$ is in a neighbourhood of $(p,q)$) \begin{equation}\label{eq_J} \V_{1_{B_{p,r}},1_{B_{q,r}}}(z,w)=\left(G_r'(z\cdot p)\pi_z (p),G_r'(w\cdot q)\pi_w(q)\right). \end{equation} Up to a choice of oriented chart, this is locally homotopic to $-\mathrm{Id}_{|\mathcal{B}}$ (where $\mathcal{B}$ is the unit Euclidean ball of $\R^{2n}$), which has a unique nondegenerate zero. When $(z,w)$ is near $(q,p)$ the study is the same by the symmetry property \eqref{eq_symmetry}, so $(q,p)$ also counts as one nondegenerate zero. We thus get that $I(\V_{1_{B_{p,r}},1_{B_{q,r}}})=2\text{ mod }(4)$ as claimed. Since $\V_{\rho,\sigma}$ and $\V_{1_{B_{p,r}},1_{B_{q,r}}}$ are homotopic in $\X^*(M)$, by invariance of $I$ under homotopy: \[I\left(\V_{\rho,\sigma}\right)=I\left(\V_{1_{B_{p,r}},1_{B_{q,r}}}\right)=2\text{ mod }(4).\] \noindent{\bf Conclusion.} Since $I\left(\V_{\rho,\sigma}\right)\neq 0\text{ mod }(4)$, we conclude that $\V_{\rho,\sigma}$ has a zero $(z,w)$ somewhere in $M$. Since $z\neq w$, the pair $(a,z):=\left(\frac{w-z}{|w-z|},z\right)$ satisfies the conclusion of Theorem \ref{bmn08}.\end{proof} \section{Proof of Theorem \ref{bmn06}}\label{bmnsec3} Although Theorem \ref{bmn06} is {\it de facto} a consequence of Theorem \ref{bmn05}, for expository reasons we start with a short, independent proof of Theorem \ref{bmn06}. The reason is that Theorem \ref{bmn05} requires some extra arguments related to the presence of higher-order eigenvalues, which can hide the core ideas of the proof. \begin{proof}(of Theorem \ref{bmn06}) Let $r$ be the radius of $B^{|\Om|/2}$. The first non-trivial eigenvalue of $B_r$ (assumed to be centered at $e_{n+1}$) has multiplicity $n$ and its eigenfunctions are of the form \[v\mapsto J(\theta)\frac{v_i}{\sin(\theta)},\ i=1,\dots, n\] where $\theta=\arccos(v_{n+1})$ is the angle between $v$ and $e_{n+1}$.
The function $J:[0,r]\to \R_+$ is a non-trivial solution of the differential equation \[\frac{1}{\sin(\theta)^{n-1}}\frac{d}{d\theta}\left[\sin(\theta)^{n-1}\frac{d}{d\theta}J(\theta)\right]+\left(\mu_1(B_r)-\frac{n-1}{\sin(\theta)^2}\right)J(\theta)=0.\] In \cite{AB95} (see also \cite{BBC20}), the authors study this function and prove that $J(0)=0$, $J'(r)=0$, $J'>0$ on $(0,r)$, meaning $J$ is positive and increasing on $(0,r]$. Following \cite{AB95}, we extend $J$ to $[0,\pi]$: we let it be constant (equal to $J(r)$ by continuity) on $[r,\pi/2]$, and symmetric under the reflection $\theta\mapsto \pi -\theta$. Moreover, from \cite{AB95} (see also \cite{BBC20}), the functions \[\theta\in \left(0,\frac{\pi}{2}\right)\mapsto J(\theta)^2,\ \theta\in\left(0,\frac{\pi}{2}\right)\mapsto J'(\theta)^2+\frac{n-1}{\sin(\theta)^2}J(\theta)^2 \] are respectively nondecreasing and decreasing.\bigbreak We then define, for all $t\in (-1,1)$, \[g(t):=\frac{J(\arccos(t))}{\sqrt{1-t^2}}\] and extend it by continuity at $1$ and $-1$ with value $0$. We set $G(t)=\int_{0}^t g$ as previously. \bigbreak Consider $\rho=1_\Om$, $\sigma=u_1 1_\Om$ where $u_1$ is a non-constant eigenfunction associated to $\mu_1(\Om)$. When $\Om$ is disconnected, such a function is a nonzero constant on some of the connected components and equal to $0$ elsewhere. In order to prove the inequality $\mu_2(\Om) \le \mu_2\left(B^{m/2} \sqcup B^{m/2}\right)$, where $m=|\Om|$, the mass transplantation method used in \cite{BH19} works in combination with the properties of $J$ recalled above. Indeed, by Theorem \ref{bmn08} there exists $(a,z) \in \Sn \times \Sn$ such that $$\int_{\Om}g(z\cdot F_a (v))\pi_zF_a (v)dv= \int_{\Om}u_1(v)g(z\cdot F_a (v))\pi_zF_a (v)dv=0.$$ Up to a rotation we may assume $z=e_{n+1}$ with $a\in e_{n+1}^-$. This means that for any $i=1,\hdots,n$, the function \[v\mapsto g(z\cdot F_a v)(F_a v)_i\] is orthogonal to $1$ and $u_1$ in $L^2(\Om)$.
In particular, since these functions are orthogonal to $1$ and $u_1$, for each $i=1,\hdots,n$ we have \begin{equation}\label{eq_testfunc} \mu_2(\Om)\int_{\Om}\left|g(z\cdot F_a v)(F_a v)_i\right|^2dv\leq \int_{\Om}\left|\nabla(g(z\cdot F_a v)(F_a v)_i)\right|^2dv. \end{equation} We now define $H_{a^-}=\left(B_{e_{n+1},r}\cup B_{-e_{n+1},r}\right)\cap a^-$, and we define the first angular coordinate $\theta^-:v\mapsto \arccos(v_{n+1})$. Similarly, we let $H_{a^+}=R_a(H_{a^-})(\subset a^+)$ and $ {\theta^+}(v)=\theta^-(R_av)$. Then, summing the previous inequality over $i$, we get \begin{align*} \mu_2(\Om)&\left[\int_{H_{a^-}\cap\Om}J(\theta^-)^2dv+\int_{H_{a^+}\cap\Om}J( {\theta^+})^2dv+|\Om\setminus (H_{a^-}\cup H_{a^+})|J(r)^2\right]\\ \leq &\left[\int_{H_{a^-}\cap\Om}b(\theta^-)dv+\int_{H_{a^+}\cap\Om}b( {\theta^+})dv+\int_{\Om\setminus (H_{a^-}\cup H_{a^+})}b\left(\theta^-(F_a v)\right)dv\right]. \end{align*} Above, we have denoted $b(\theta):=J'(\theta)^2+\frac{n-1}{\sin(\theta)^2}J(\theta)^2$. Using the properties of $J(\theta)$, $b(\theta)$ and the fact that $|\Om\setminus (H_{a^-}\cup H_{a^+})|=|H_{a^-}\setminus\Om|+|H_{a^+}\setminus \Om|$, this reduces to \begin{align*} \mu_2(\Om)&\leq\frac{\int_{H_{a^-}\cap\Om}b(\theta^-)dv+\int_{H_{a^+}\cap\Om}b({\theta^+})dv+|\Om\setminus (H_{a^-}\cup H_{a^+})|b(r)}{\int_{H_{a^-}\cap\Om}J(\theta^-)^2dv+\int_{H_{a^+}\cap\Om}J({\theta^+})^2dv+|\Om\setminus (H_{a^-}\cup H_{a^+})|J(r)^2}\\ &\leq\frac{\int_{H_{a^-}}b(\theta^-)dv+\int_{H_{a^+}}b({\theta^+})dv}{\int_{H_{a^-}}J(\theta^-)^2dv+\int_{H_{a^+}}J( {\theta^+})^2dv}\\ &=\mu_1\left(B_r\right), \end{align*} which is the result. \end{proof} \cthe\begin{corollary} Let $\Om\subset\Sn$ be a Lipschitz domain and $i(\Om) \subseteq \Sn$ an isometric image of $\Om$. Assume that $\Om\cap i(\Om)= \emptyset$. Then \[\mu_1(\Om)\leq \mu_1(B^{|\Om|}).\] \end{corollary} The proof is an immediate consequence of Theorem \ref{bmn06} applied to $\Om\sqcup i(\Om)$. This result was previously known only for $i(\Om)=-\Om$.
\cthe\begin{corollary}\label{bmn15} Let $\Om\subset\Sn$ be such that $|\Om|\le \frac{|\Sn|}{2} $ and $\Om\cap B^{|\Om|}=\emptyset$. Then \[\mu_1(\Om)\leq \mu_1(B^{|\Om|}).\] \end{corollary} This result extends the hemisphere inclusion condition of Ashbaugh and Benguria, expressed in our notation as $\Om\cap B^{|\Sn|/2} =\emptyset$. \begin{proof} We recall that the function \[m\in \left(0,|\Sn|\right)\mapsto \mu_1(B^m)\] is continuous and decreasing on $(0,|\Sn|/2)$ (and actually on a slightly larger interval), as was proved in \cite{AB95}. Let $\Om \subseteq \Sn$ satisfy the hypotheses of Corollary \ref{bmn15}. Consider $\eps\in (0,|\Om|)$; then $\Om\cap B^{|\Om|-\eps}=\emptyset$ and, according to Theorem \ref{bmn06}, \[\min\left\{\mu_1(\Om), \mu_1\left(B^{|\Om|-\eps}\right)\right\}=\mu_2\left(\Om\sqcup B^{|\Om|-\eps}\right)\leq \mu_1\left(B^{|\Om|-\frac{1}{2}\eps}\right).\] Since $\mu_1\left(B^{|\Om|-\eps}\right)> \mu_1\left(B^{|\Om|-\frac{1}{2}\eps}\right)$, we get $\mu_1(\Om)\leq \mu_1\left(B^{|\Om|-\frac{1}{2}\eps}\right)$, which implies the result as $\eps\to 0$. \end{proof} \section{Proof of Theorem \ref{bmn05}}\label{bmnsec4} \begin{proof} In \cite{BBC20}, it has been proved that \[\sum_{i=1}^{n-1}\frac{1}{\mu_i\left(\Om\right)}\geq \sum_{i=1}^{n-1}\frac{1}{\mu_i\left(B^{|\Om|}\right)}\left(=\frac{n-1}{\mu_1\left(B^{|\Om|}\right)}\right).\] We mostly rely on those computations, to which we refer the reader. With the notation of the proof of Theorem \ref{bmn06}, the main idea is to choose a suitable basis for $e_{n+1}^\perp$ so that extra orthogonality conditions on the eigenfunctions occur. This is also the idea behind \cite{WaXi18} and \cite{BBC20}; however, some extra care is needed because of the structure of the test functions found in Section \ref{bmnsec2}.
Again we fix $\rho=1_\Om$, $\sigma=u_1 1_{\Om}$ and suppose, up to a rotation, that $a\in e_{n+1}^-$ is such that $e_{n+1}$ is a critical point of both $\E_\rho(a,\cdot)$ and $\E_{\sigma}(a,\cdot)$. This means that for any $\xi\in\Rn(\hookrightarrow \Rn\times\{0\}=e_{n+1}^\perp)$, the function \[\varphi_\xi(w):=g((F_aw)_{n+1})(\xi\cdot F_aw)\] is orthogonal to $1,u_1$ in $L^2(\Om)$. \clem\begin{lemma} There is an orthonormal basis $\xi_1,\hdots,\xi_n$ of $\R^n$ such that for any $i=1,\hdots,n$, \[\varphi_{\xi_i}\in \langle 1,u_1,\hdots,u_{i}\rangle^\bot.\] \end{lemma} \begin{proof} We start by choosing $\xi_n$. Consider the function \[f:\xi\in\mathbb{S}^{n-1}\mapsto \left(\int_{\Om}u_2\varphi_\xi,\hdots,\int_{\Om}u_{n}\varphi_\xi\right)\in \R^{n-1}.\] Then $f$ satisfies $f(-\xi)=-f(\xi)$, so by the Borsuk--Ulam theorem \cite{Ma03}, $f$ must vanish at some $\xi_n$. We then continue with the restriction \[\xi\in\mathbb{S}^{n-1}\cap\xi_n^\bot\mapsto \left(\int_{\Om}u_2\varphi_\xi,\hdots,\int_{\Om}u_{n-1}\varphi_\xi\right)\in \R^{n-2}\] to choose $\xi_{n-1}$, and so on until $\xi_2$, chosen such that $\varphi_{\xi_2}$ is orthogonal to $1,u_1$ and $u_2$. Finally, $\xi_1$ is then chosen in $\mathbb{S}^{n-1}\cap \langle \xi_n,\hdots,\xi_2\rangle^\bot$ (there are two possibilities, and we may choose the one that yields a positively oriented basis, although this is not important for the rest of the proof).
\end{proof} From the orthogonality properties proved above, for any $i=1, \dots, n$ we get \[\int_{\Om}| \varphi_{\xi_i}|^2\leq \frac{1}{\mu_{i+1}(\Om)}\int_{\Om}|\nabla \varphi_{\xi_i}|^2.\] Assuming that $\xi_i =e_i$, $$\int_{{a^-}} (1_\Om+ 1_{R_a({a^+}\cap\Om)})J^2(\arccos (v_{n+1}))\frac{ v_i^2 }{1-v_{n+1}^2 }dv\le $$ $$\frac{1}{\mu_{i+1}(\Om)}\int_{{a^-} } (1_\Om+ 1_{R_a({a^+}\cap\Om)}) \left(\frac{J^2(\arccos (v_{n+1}))(1-v_i^2-v_{n+1}^2)}{(1-v_{n+1}^2)^2} + \frac{(J'(\arccos (v_{n+1})) v_i)^2}{1-v_{n+1}^2}\right) dv.$$ Since $\theta ^-=\arccos (v_{n+1})$ we simplify $$\int_{{a^-}} (1_\Om+ 1_{R_a({a^+}\cap\Om)})J^2(\theta^-)\frac{ v_i^2 }{\sin(\theta^-)^2 }dv\le $$ $$\frac{1}{\mu_{i+1}(\Om)}\left (\int_{{a^-} } (1_\Om+ 1_{R_a({a^+}\cap\Om)}) \frac{J^2(\theta^-)}{\sin(\theta^-)^2}\frac{1-v_i^2}{\sin(\theta^-)^2} dv+ \int_{{a^-} } (1_\Om+ 1_{R_a({a^+}\cap\Om)} ) \frac{(J'(\theta^-) v_i)^2}{\sin(\theta^-)^2} dv\right).$$ Summing for $i=1,\hdots,n$, the left hand side becomes $\int_{{a^-}} (1_\Om+ 1_{R_a({a^+}\cap\Om)})J(\theta^-)^2dv$ and the monotonicity property of $J$ leads to $$2\int_{H^-} J(\theta^-)^2dv\le \int_{{a^-}} (1_\Om+ 1_{R_a({a^+}\cap\Om)})J(\theta^-)^2dv.$$ Inside the first term of the right hand side, we use the inequality \[\sum_{i=1}^{n}\frac{1}{\mu_{i+1}(\Om)}\left(1-\frac{v_i^2}{\sin(\theta^-)^2}\right)\leq \sum_{i=2}^{n}\frac{1}{\mu_i(\Om)},\] so that $$\sum_{i=1}^{n} \frac{1}{\mu_{i+1}(\Om)} \int_{{a^-} } (1_\Om+ 1_{R_a({a^+}\cap\Om)}) \frac{J^2(\theta^-)}{\sin(\theta^-)^2}\frac{1-v_i^2}{\sin(\theta^-)^2} dv\le $$ $$\hskip 6cm \sum_{i=2}^{n}\frac{1}{\mu_i(\Om)}\int_{{a^-} } (1_\Om+ 1_{R_a({a^+}\cap\Om)}) \frac{J^2(\theta^-)}{\sin(\theta^-)^2}dv.$$ The second term in the right hand side vanishes outside $ ({\Om\cap H^-}) \cup {R_a({H^+}\cap\Om)}$, hence $$\sum_{i=1}^{n} \frac{1}{\mu_{i+1}(\Om)} \int_{{a^-} } (1_\Om+ 1_{R_a({a^+}\cap\Om)} ) \frac{(J'(\theta^-) v_i)^2}{\sin(\theta^-)^2} dv\le $$ $$\hskip 6cm \sum_{i=1}^{n} 
\frac{1}{\mu_{i+1}(\Om)} \int_{{H^-} } (1_\Om+ 1_{R_a({a^+}\cap\Om)} ) \frac{(J'(\theta^-) v_i)^2}{\sin(\theta^-)^2}dv.$$ Relying on the monotonicity property of $b$, we now implement the mass transplantation as follows: if a point belongs to $\Om\cap H^-$ or to $R_a({a^+}\cap\Om) \cap H^-$, we keep the integrands unchanged. If a point belongs to $(\Om \cap a^-)\setminus H^-$ or to $R_a({a^+}\cap\Om) \setminus H^-$, we virtually transport it to any free point of $H^-$ or $R_aH^-$, the overall mass being preserved. We get \[2\int_{H^-} J(\theta^-)^2 dv\le 2 \left( \sum_{i=2}^{n}\frac{1}{\mu_i(\Om)}\right) \int_{{H^-} } \frac{J(\theta^-)^2}{\sin(\theta^-)^2}dv+2 \sum_{i=1}^{n} \frac{1}{\mu_{i+1}(\Om)} \int_{{H^-} } \frac{(J'(\theta^-) v_i)^2}{\sin(\theta^-)^2}dv.\] Taking into account the symmetry of $H^-$ and of $J$, we get $$\int_{{H^-} } \frac{(J'(\theta^-) v_i)^2}{\sin(\theta^-)^2}dv= \frac 1n \int_{{H^-} } (J'(\theta^-) )^2 dv= \frac 1n \int_{{B^{|\Om|/2}} } (J'(\theta) )^2 dv.$$ Since $$\sum_{i=1}^{n}\frac{1}{\mu_i(\Om)}\leq \frac{n}{n-1}\sum_{i=2}^{n}\frac{1}{\mu_i(\Om)}$$ and $$\mu_1(B^{|\Om|/2})= \frac{\int_{H^-} \left(J'(\theta^-)^2 + \frac{J(\theta^-)^2}{\sin(\theta^-)^2}\right)dv}{\int_{H^-} J(\theta^-)^2 dv},$$ we conclude the proof. \end{proof} \section{Further remarks and open questions}\label{sec_density} \noindent{\bf Extension to densities.} The proofs of Theorems \ref{bmn05} and \ref{bmn06} and of their corollaries are exclusively based on mass transplantation. Once the topological result of Section \ref{bmnsec2} can be applied to identify the suitable family of test functions, the geometry of $\Om$ is no longer relevant. In the spirit of \cite{BH19}, all results of the paper extend naturally to densities. A further work \cite{Ma22} on extremal densities for Neumann eigenvalues on the sphere is in preparation.
In short, assume that $\Om \subseteq \Sn$ is an open Lipschitz set and that $\rho : \Om \rightarrow [0,1]$ is a measurable function such that $\mathrm{essinf}_\Om \rho >0$. We consider the well-posed eigenvalue problem in $H^1(\Om)$, \[\mu_k(\rho)= \inf_{S\in{\mathcal S}_{k+1}} \sup_{u \in S\setminus \{0\}} \frac{\int_{\Om} \rho|\nabla u|^2 }{\int_{\Om}\rho u^2 }.\] Then, Theorem \ref{bmn06} reads \[\mu_2(\rho)\leq \mu_1\left(B^{\frac{1}{2}\int_{\Sn}\rho}\right).\] All the other estimates for $\mu_1(\rho)$ follow from this inequality. In particular, if $m < \frac{|\mathbb{S}^2|}{2}$, $ \int_{\Sn}\rho=m$ and $\rho=0$ on $B^m$, then \[\mu_1(\rho)\leq\mu_1(B^m).\] \medskip \noindent{\bf Going beyond $B^m$ for $\mu_1$.} Two questions are in order. \medskip \noindent{\bf [Q1.]} {\it For $m < \frac{|\mathbb{S}^2|}{2}$, can one remove the constraint $\Om \subseteq \Sn \setminus B^m$?} The answer is not clear. One might think that removing the inclusion condition $\Om \subseteq \Sn\setminus B^m$ is only a matter of overcoming a technical difficulty in the construction of the test functions, but the situation is more involved. It is worth noticing that numerical computations searching for the optimal domain on $ {\mathbb{S}^2}$, when performed in the exterior of $B^m$, are very stable and clearly lead to the spherical cap. This is coherent with the result of Corollary \ref{bmn08.1}. Once the inclusion constraint is removed, a certain numerical instability leading to some homogenization of the optimal density can be observed, a phenomenon for which we do not have a precise explanation (for further details see \cite{Ma22}). Yet, this does not imply that the result is false on the whole sphere, but it indicates that the result is indeed false in the class of densities. This phenomenon is new.
As a consequence, even if the inequality $\mu_1(\Om)\leq \mu_1(B^m)$ were true for every $\Om\subseteq \Sn$ with $|\Om|<\frac{|\Sn|}{2}$, we do not expect that it could be proved by means of mass transplantations like ours. Otherwise, it would necessarily generalize to $\mu_1(\rho)$, which is contradicted by the numerical example given in Table \ref{fig:rho_pl_values} and Figure \ref{fig:rho_pl}. In Table \ref{fig:rho_pl_values} we describe a piecewise affine, radial density, denoted $\rho^{pl}$. Its graph is given in Figure~\ref{fig:rho_pl}. Then $\int_{\mathbb{S}^2} \rho^{pl} = 6< 2\pi$ and $\mu_1(\rho^{pl}) \approx 2.213185 > 2.071487 \approx \mu_1(B^6)$. The numerical computation of $\mu_1(\rho^{pl})$ leads to a one-dimensional eigenvalue problem and, although not certified theoretically, gives a strong indication of the validity of the result. \begin{table}[b] \begin{tabular}{|l||l|l|l|l|} \hline $\theta$ & 0 & 1.3 & 1.4 & 3.14159265 \\ \hline $\rho^{pl}(\theta)$ & 1 & 1 & 0.19480547 & 0.04829935 \\ \hline \end{tabular} \smallskip \caption{Values defining the piecewise affine radial density $\rho^{pl}$ (linear interpolation in the polar angle $\theta$).} \label{fig:rho_pl_values} \end{table} \begin{figure} \centering \includegraphics[width=0.40\textwidth]{profil_1D_4_points.png} \includegraphics[width=0.32\textwidth]{rho_radial_simplified.png} \caption{$\rho^{pl}$ plotted with respect to the polar angle $\theta \in [0,\pi]$ (left) and the same density on the sphere (right).} \label{fig:rho_pl} \end{figure} \medskip \noindent{\bf [Q2.]} {\it What happens if $m \ge \frac{|\mathbb{S}^2|}{2}$?} Is there any chance that the spherical cap of prescribed volume continues to be maximal? The analysis performed in \cite{AB95} suggests that this should not be the case. A strong argument is that the first Neumann eigenvalues of the hemisphere of ${\mathbb S}^2$ and of the full sphere coincide. Thus, the eigenvalue is no longer decreasing when the measure of the geodesic ball increases, at least in a neighborhood of $4\pi$.
Numerical computations performed in \cite{Ma22} suggest the existence of sets having higher first eigenvalue than the spherical cap of the same measure. \begin{remark}\rm {\bf (Euclidean version of Theorem \ref{bmn05}).} The inequality proved in Theorem \ref{bmn05} comes as a consequence of the construction of the test functions and relies on the arguments of \cite{WaXi18} and \cite{BBC20}. Clearly, based on the result of \cite{BH19} on the maximality of $\mu_2$ in the Euclidean space, the following inequality can be proved along the lines of Theorem \ref{bmn05}: if $\Om \subseteq \R^n$ is bounded, open and smooth, then $$ \sum_{i=2}^{n}\frac{1}{\mu_i(\Om)}\ge \frac{n-1}{\mu_1(B_{Euc} ^{|\Om|/2})}.$$ This improves the result of \cite{BH19}. \end{remark} \begin{remark}\rm The starting point of both results of \cite{WaXi18} and \cite{BBC20} is the conjecture by Ashbaugh and Benguria that $$\sum_{i=1}^{n}\frac{1}{\mu_i(\Om)}\ge \frac{n}{\mu_1(B_{Euc}^{|\Om|})}$$ in the Euclidean space $\R^n$ (and similarly on $\Sn$). In view of Theorem \ref{bmn05}, we may naturally ask whether $$\sum_{i=2}^{n+1}\frac{1}{\mu_i(\Om)}\ge \frac{n}{\mu_1(B ^{|\Om|/2})}$$ on $\Sn$ (and similarly in the Euclidean space $\R^n$). This conjecture is supported by numerical computations on ${\mathbb S}^2$ (see \cite{Ma22}). Based on the multiplicity of the second eigenvalue on the union of two equal spherical caps and on the conjecture above on $\mu_1$, one may also suspect that $$\sum_{i=2}^{2n+1}\frac{1}{\mu_i(\Om)}\ge \frac{2n}{\mu_1(B ^{|\Om|/2})}.$$ However, this last assertion is not supported by numerical computations. \end{remark} \section{Appendix - proof of Lemma \ref{TopLemma}} \begin{proof} This is a consequence of the following two facts: first, by smoothing and a classical application of Sard's theorem, we have the $\mathcal{C}^0$-density of vector fields $V\in\X^*(M)$ of class $\mathcal{C}^\infty$ with nondegenerate zeroes.
Second, for any two sufficiently close, smooth and nondegenerate vector fields $V,W$, we have \[\mathrm{Card}(V^{-1}(0))=\mathrm{Card}(W^{-1}(0))\text{ mod }(4).\] Indeed, letting $V_t=(1-t)V+tW$, we have a homotopy between $V$ and $W$ that never vanishes on $\partial M$ when $\Vert V-W\Vert_{\mathcal{C}^0(M)}<\inf_{\partial M}|V|$. Then, by a perturbation argument and the parametric transversality theorem (see for instance \cite[Th. 7.1.1]{S11}), there is a homotopy $(H_t)_{0\leq t\leq 1}$ in $\X^*(M)$ such that $H_0$ (respectively $H_1$) is an arbitrarily small perturbation of $V$ (respectively $W$), so that \[\mathrm{Card}(V^{-1}(0))=\mathrm{Card}(H_0^{-1}(0)),\ \mathrm{Card}(W^{-1}(0))=\mathrm{Card}(H_1^{-1}(0)),\] and such that $H:[0,1]\times M\to TM$ is transversal to the null vector field. As a consequence, $Z:=\{(t,x)\in [0,1]\times M:\ H_t(x)=0\}$ is a compact $1$-dimensional manifold that meets $\{0,1\}\times M$ transversally but does not meet $[0,1]\times \partial M$ (because the homotopy is in $\X^*(M)$). For every connected component $c$ of $Z$ there are four possibilities (due to the classification of compact $1$-dimensional manifolds, see for instance \cite[Th 5.4.1]{S11}): \begin{itemize} \item $c$ is a circle; in this case it has no influence on the counting of the zeroes of $H_0$ and $H_1$. \item $c$ is a curve (we write it $c(\tau)_{\tau \in [0,1]}$) such that $c(0)$ and $c(1)$ are both in $\{0\}\times M$. Consider then $\tilde{c}(\tau):=\left(\text{Id}_{[0,1]},S\right)\circ c(\tau)$; by the symmetry property of $V$, it is also a connected component of $Z$ with both ends in $\{0\}\times M$. We claim that it is disjoint from $c$: indeed, for every $\tau\in [0,1]$, $c(\tau)\neq \tilde{c}(\tau)$ (since $S$ has no fixed point); however, if $c$ and $\tilde{c}$ spanned the same curve, then $\tilde{c}=c\circ \varphi$ for some continuous map $\varphi:[0,1]\to [0,1]$ with no fixed point, which is a contradiction.
As a consequence, $c(0),c(1),\tilde{c}(0),\tilde{c}(1)$ are all distinct, so the number of such zeroes is a multiple of $4$. \item $c$ is a curve with both ends in $\{1\}\times M$; this case is treated as the previous one. \item $c$ is a curve with $c(0)\in\{0\}\times M$, $c(1)\in \{1\}\times M$; this counts the same number of zeroes on both sides. \end{itemize} Thus the difference of the numbers of zeroes of $H_0$ and $H_1$ is a multiple of 4, which completes the proof. \end{proof} {\bf Acknowledgments.} The first author is grateful to Mark Ashbaugh for deep insight into the question and for fruitful suggestions. The authors were supported by ANR SHAPO (ANR-18-CE40-0013). \bibliographystyle{plain}
https://arxiv.org/abs/2208.11413
Sharp inequalities for Neumann eigenvalues on the sphere
We prove that the second nontrivial Neumann eigenvalue of the Laplace-Beltrami operator on the unit sphere $\mathbb{S}^n \subseteq \mathbb{R}^{n+1}$ is maximized by the union of two disjoint, equal, geodesic balls among all subsets of $\mathbb{S}^n$ of prescribed volume. In fact, the result holds in a stronger version, involving the harmonic mean of the eigenvalues of order $2$ to $n$, and extends to densities. A (surprising) consequence occurs on the maximality of a geodesic ball for the first nontrivial eigenvalue under the volume constraint: the hemisphere inclusion condition of the Ashbaugh-Benguria result can be relaxed to a weaker one, namely empty intersection with a geodesic ball of the prescribed volume. Although we do not prove that this last inclusion result is sharp, for a mass less than the half of the sphere, we numerically identify a density with higher first eigenvalue than the corresponding geodesic ball and with support equal to the full sphere $\mathbb{S}^2$.
https://arxiv.org/abs/1910.07281
The diameter and radius of radially maximal graphs
A graph is called radially maximal if it is not complete and the addition of any new edge decreases its radius. In 1976 Harary and Thomassen proved that the radius $r$ and diameter $d$ of any radially maximal graph satisfy $r\le d\le 2r-2.$ Dutton, Medidi and Brigham rediscovered this result with a different proof in 1995 and they posed the conjecture that the converse is true, that is, if $r$ and $d$ are positive integers satisfying $r\le d\le 2r-2,$ then there exists a radially maximal graph with radius $r$ and diameter $d.$ We prove this conjecture and a little more.
\section{Introduction} We consider finite simple graphs. Denote by $V(G)$ and $E(G)$ the vertex set and edge set of a graph $G$ respectively. The complement of $G$ is denoted by $\bar{G}.$ The radius and diameter of $G$ are denoted by ${\rm rad}(G)$ and ${\rm diam}(G)$ respectively. {\bf Definition.} A graph $G$ is said to be {\it radially maximal} if it is not complete and $$ {\rm rad}(G+e)<{\rm rad}(G)\quad {\rm for}\,\,\,{\rm any}\,\,\,e\in E(\bar{G}). $$ Thus a radially maximal graph is a non-complete graph in which the addition of any new edge decreases its radius. Since adding edges in a graph cannot increase its radius, every graph is a spanning subgraph of some radially maximal graph with the same radius. It is well-known that the radius $r$ and diameter $d$ of a general graph satisfy $r\le d\le 2r$ [4, p.78]. In 1976 Harary and Thomassen [3, p.15] proved that the radius $r$ and diameter $d$ of any radially maximal graph satisfy $$ r\le d\le 2r-2. \eqno (1) $$ Dutton, Medidi and Brigham [1, p.75] rediscovered this result with a different proof in 1995 and they [1, p.76] posed the conjecture that the converse is true, that is, if $r$ and $d$ are positive integers satisfying (1) then there exists a radially maximal graph with radius $r$ and diameter $d.$ We prove this conjecture and a little more. We denote by $d_{G}(u,v)$ the distance between two vertices $u$ and $v$ in a graph $G.$ The {\it eccentricity}, denoted by $e_{G}(v),$ of a vertex $v$ in $G$ is the distance to a vertex farthest from $v.$ The subscript $G$ might be omitted if the graph is clear from the context. Thus $e(v)={\rm max}\{d(v,u)| u\in V(G)\}.$ If $e(v)=d(v,x),$ then the vertex $x$ is called an {\it eccentric vertex} of $v.$ By definition the radius of a graph $G$ is the minimum eccentricity of all the vertices in $V(G),$ whereas the diameter of $G$ is the maximum eccentricity. 
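The notions just defined (eccentricity, radius, diameter, radial maximality) are mechanical enough to check by computer. The following sketch, with plain breadth-first search and no external libraries (the adjacency-dictionary encoding and all function names are ours, not from the paper), computes these quantities and tests radial maximality by brute force; on the even cycle $C_8$ it confirms $r=d=4$, consistent with the bound (1).

```python
from collections import deque
from itertools import combinations

def distances(adj, s):
    """BFS distances from s in the graph given by the adjacency dict adj."""
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def eccentricity(adj, v):
    return max(distances(adj, v).values())

def radius(adj):
    return min(eccentricity(adj, v) for v in adj)

def diameter(adj):
    return max(eccentricity(adj, v) for v in adj)

def add_edge(adj, u, v):
    new = {x: set(nbrs) for x, nbrs in adj.items()}
    new[u].add(v)
    new[v].add(u)
    return new

def is_radially_maximal(adj):
    """Connected, non-complete, and every new edge strictly decreases the radius."""
    r = radius(adj)
    non_edges = [(u, v) for u, v in combinations(adj, 2) if v not in adj[u]]
    return bool(non_edges) and all(radius(add_edge(adj, u, v)) < r
                                   for u, v in non_edges)

def cycle(k):
    return {i: {(i - 1) % k, (i + 1) % k} for i in range(k)}

C8 = cycle(8)
print(radius(C8), diameter(C8), is_radially_maximal(C8))  # 4 4 True
```

Here $r=d=4$ satisfies $r\le d\le 2r-2$, and every chord of $C_8$ indeed drops the radius to at most $3$.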
A vertex $v$ is a {\it central vertex} of $G$ if $e(v)={\rm rad}(G).$ A graph $G$ is said to be {\it self-centered} if ${\rm rad}(G)={\rm diam}(G).$ Thus self-centered graphs are those graphs in which every vertex is a central vertex. $N_G(v)$ will denote the neighborhood of a vertex $v$ in $G.$ The {\it order} of a graph is the number of its vertices. The symbol $C_k$ denotes a cycle of order $k.$ \section{Main Results} We will need the following operation on a graph. The {\it extension} of a graph $G$ at a vertex $v,$ denoted by $G\{v\},$ is the graph with $V(G\{v\})=V(G)\cup\{v'\}$ and $E(G\{v\})=E(G)\cup \{vv'\}\cup\{v'x|vx\in E(G)\}$ where $v'\not\in V(G).$ Clearly, if $G$ is a connected graph of order at least $2,$ then $e_{G\{v\}}(u)=e_{G}(u)$ for every $u\in V(G)$ and $e_{G\{v\}}(v')=e_{G\{v\}}(v)=e_{G}(v).$ In particular, ${\rm rad}(G\{v\})={\rm rad}(G)$ and ${\rm diam}(G\{v\})={\rm diam}(G).$ Gliviak, Knor and \v{S}olt\'es [2, Lemma 5] proved the following result. {\bf Lemma 1.} {\it Let $G$ be a radially maximal graph. If $v\in V(G)$ is not an eccentric vertex of any central vertex of $G,$ then the extension of $G$ at $v$ is radially maximal.} Now we are ready to state and prove the main result. {\bf Theorem 2.} {\it Let $r,d$ and $n$ be positive integers. If $r\ge 2$ and $n\ge 2r,$ then there exists a self-centered radially maximal graph of radius $r$ and order $n.$ If $r<d\le 2r-2$ and $n\ge 3r-1,$ then there exists a radially maximal graph of radius $r,$ diameter $d$ and order $n.$ } {\bf Proof.} We first treat the easier case of self-centered graphs. Suppose $r\ge 2$ and $n\ge 2r.$ The even cycle $C_{2r}$ is a self-centered radially maximal graph of radius $r$ and order $2r.$ Fix an arbitrary vertex $v$ of $C_{2r}.$ For $n>2r,$ successively performing extensions at the vertex $v,$ starting from $C_{2r},$ we obtain a graph $G(r,n)$ of order $n.$ $G(4,11)$ is depicted in Figure 1.
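As a computational sanity check on the construction just described (a self-contained sketch under our own encoding: `extend` implements the extension $G\{v\}$, and vertex $0$ of $C_8$ plays the role of $v$), one can verify that an extension preserves all eccentricities and that the resulting $G(4,11)$ is self-centered of radius $4$ and radially maximal, as Theorem 2 asserts.

```python
from collections import deque
from itertools import combinations

def distances(adj, s):
    """BFS distances from s in the graph given by the adjacency dict adj."""
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def eccentricities(adj):
    return {v: max(distances(adj, v).values()) for v in adj}

def radius(adj):
    return min(eccentricities(adj).values())

def add_edge(adj, u, v):
    new = {x: set(nbrs) for x, nbrs in adj.items()}
    new[u].add(v)
    new[v].add(u)
    return new

def extend(adj, v, vnew):
    """The extension G{v}: vnew is joined to v and to every neighbour of v."""
    new = {x: set(nbrs) for x, nbrs in adj.items()}
    new[vnew] = {v} | new[v]
    for x in new[vnew]:
        new[x].add(vnew)
    return new

C8 = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}

# An extension preserves old eccentricities, and e(v') = e(v).
old, new = eccentricities(C8), eccentricities(extend(C8, 0, 8))
assert all(new[v] == old[v] for v in C8) and new[8] == old[0]

# G(4,11): three successive extensions of C8 at the fixed vertex 0.
G = C8
for vnew in (8, 9, 10):
    G = extend(G, 0, vnew)
ecc = eccentricities(G)
self_centered = min(ecc.values()) == max(ecc.values()) == 4
non_edges = [(u, v) for u, v in combinations(G, 2) if v not in G[u]]
radially_maximal = all(radius(add_edge(G, u, v)) < 4 for u, v in non_edges)
print(len(G), self_centered, radially_maximal)  # 11 True True
```

The set $\{0,8,9,10\}$ produced here is exactly the clique $S$ used in the proof below.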
\vskip 3mm \par \centerline{\includegraphics[width=2.6in]{Fig1.jpg}} \par Denote $G(r,2r)=C_{2r}.$ Since $G(r,n)$ has the same diameter and radius as $C_{2r},$ it is self-centered with radius $r.$ Let $xy$ be an edge of the complement of $G(r,n).$ Denote by $S$ the set consisting of $v$ and the vertices outside $C_{2r}.$ Then $S$ is a clique. If one end of $xy,$ say, $x$ lies in $S,$ then $y\not\in N[v],$ the closed neighborhood of $v$ in $G(r,n).$ We have $e(x)<r.$ Otherwise $x,y\in V(C_{2r})\setminus S.$ We then have $e(x)<r$ and $e(y)<r.$ In both cases, ${\rm rad}(G(r,n)+xy)<{\rm rad}(G(r,n)).$ Hence $G(r,n)$ is radially maximal. Next suppose $r<d\le 2r-2$ and $n\ge 3r-1.$ We define a graph $H=H(r,d,3r-1)$ of order $3r-1$ as follows. $V(H)=\{x_1,x_2,\ldots,x_{2r-1}\}\cup\{y_1,y_2,\ldots,y_r\}$ and \begin{align*} E(H)&=\{x_ix_{i+1}|i=1,2,\ldots,2r-1\}\cup\{x_{2r-1}y_1\}\cup\{x_{2r-2j+2}y_j|j=1,2,\ldots,2r-d\}\\ &\quad\cup \{x_{d-r+1}y_{2r-d+1}\}\cup\{y_ty_{t+1}|t=2r-d+1,\ldots,r-1\,\,\,{\rm if}\,\,\, d\ge r+2\} \end{align*} where $x_{2r}=x_1.$ $H$ is obtained from the odd cycle $C_{2r-1}$ by attaching edges and one path. A sketch of $H$ is depicted in Figure 2, and $H(6,d,17)$ with $d=7,8,9,10$ are depicted in Figure 3. \vskip 3mm \par \centerline{\includegraphics[width=4.4in]{Fig2.jpg}} \par \vskip 3mm \par \centerline{\includegraphics[width=6.8in]{Fig3.jpg}} \par Clearly, $H$ has radius $r,$ diameter $d$ and order $3r-1.$ To see this, verify that $x_{d-r+1}$ is a central vertex and $e_H(y_r)=d.$ Now we show that $H$ is radially maximal. 
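As an aside, the claimed radius, diameter and order of $H$ can also be spot-checked computationally. The sketch below (self-contained Python; our own encoding of $H(r,d,3r-1)$, transcribing the edge list above with cycle indices taken modulo $2r-1$ so that $x_{2r}=x_1$) rebuilds the four graphs $H(6,d,17)$ of Figure 3 and confirms radius $6$, diameter $d$ and order $17$ in each case.

```python
from collections import deque

def distances(adj, s):
    """BFS distances from s in the graph given by the adjacency dict adj."""
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def rad_diam(adj):
    ecc = [max(distances(adj, v).values()) for v in adj]
    return min(ecc), max(ecc)

def H(r, d):
    """H(r,d,3r-1): the cycle x_1..x_{2r-1}, the edge x_{2r-1}y_1, pendant edges
    x_{2r-2j+2}y_j, the edge x_{d-r+1}y_{2r-d+1}, and the path y_{2r-d+1}..y_r."""
    def x(i):                       # cycle index modulo 2r-1, so x_{2r} = x_1
        return ('x', (i - 1) % (2 * r - 1) + 1)
    def y(j):
        return ('y', j)
    edges = [(x(i), x(i + 1)) for i in range(1, 2 * r)]            # the cycle C_{2r-1}
    edges.append((x(2 * r - 1), y(1)))
    edges += [(x(2 * r - 2 * j + 2), y(j)) for j in range(1, 2 * r - d + 1)]
    edges.append((x(d - r + 1), y(2 * r - d + 1)))
    edges += [(y(t), y(t + 1)) for t in range(2 * r - d + 1, r)]   # empty when d = r+1
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

for d in (7, 8, 9, 10):             # the four instances of Figure 3 (r = 6)
    G = H(6, d)
    print(d, len(G), rad_diam(G))   # d, 17, (6, d)
```

In each instance the minimum eccentricity is attained at $x_{d-r+1}$ and the maximum at $y_r$, matching the central and eccentric vertices named in the text.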
Let $C$ be the cycle of length $2r-1;$ i.e., $C=x_1x_2\ldots x_{2r-1}x_1.$ We specify two orientations of $C.$ Call the orientation $x_1,x_2,\ldots, x_{2r-1},x_1$ {\it clockwise} and call the orientation $x_{2r-1},x_{2r-2},\ldots,x_1,x_{2r-1}$ {\it counterclockwise.} For two vertices $a,b\in V(C),$ we denote by $\overrightarrow{C}(a,b)$ the clockwise $(a,b)$-path on $C$ and by $\overleftarrow{C}(a,b)$ the counterclockwise $(a,b)$-path on $C.$ For $uv\in E(\bar{H}),$ denote $T=H+uv.$ To show ${\rm rad}(T)<r,$ it suffices to find a vertex $z$ such that $e_{T}(z)<r.$ Denote $$ A=V(C)=\{x_1,x_2,\ldots, x_{2r-1}\}\quad{\rm and}\quad B=V(H)\setminus V(C)=\{y_1,y_2,\ldots,y_r\}. $$ We distinguish three cases. {\bf Case 1.} $u,v\in A.$ Let $u=x_i$ and $v=x_j$ with $i>j.$ Since $d-r+1\le 2r-3,$ the vertex $y_2$ is a leaf whose only neighbor is $x_{2r-2}.$ Note that in $H,$ the three vertices $x_r,\,x_{r-1}$ and $x_{r-2}$ are central vertices, $y_1$ is the unique eccentric vertex of $x_r,$ and $y_2$ is the unique eccentric vertex of $x_{r-1}$ and $x_{r-2}.$ If $j\ge r$ or $i\le r,$ then $e_{T}(x_r)<r.$ Indeed, in the former case $\overrightarrow{C}(x_r,v)\cup vu\cup\overrightarrow{C}(u,x_{2r-1})\cup x_{2r-1}y_1$ is an $(x_r,y_1)$-path of length less than $r$ and in the latter case, $\overleftarrow{C}(x_r,u)\cup uv\cup\overleftarrow{C}(v,x_1)\cup x_1y_1$ is an $(x_r,y_1)$-path of length less than $r.$ Next suppose $i>r>j.$ If $|(i-r)-(r-j)|\ge 2,$ then in $T$ there is an $(x_r,y_1)$-path of length less than $r,$ which implies that $e_{T}(x_r)<r.$ It remains to consider the case $|(i-r)-(r-j)|\le 1.$ If $(i-r)-(r-j)=0$ or $1,$ then in $T,$ there is an $(x_{r-1},y_2)$-path of length less than $r$ and hence $e_T(x_{r-1})<r.$ If $(r-j)-(i-r)=1,$ then in $T,$ there is an $(x_{r-2},y_2)$-path of length $r-1$ and hence $e_T(x_{r-2})<r.$ {\bf Case 2.} $u,v\in B.$ Let $u=y_i$ and $v=y_j$ with $1\le i<j\le r.$ Subcase 2.1.
$i=1$ and $j\le 2r-d.$ In the sequel the subscript arithmetic for $x_k$ is taken modulo $2r-1.$ $x_{r-2j+2}$ is a central vertex of $H$ whose unique eccentric vertex is $y_j.$ To see this, note that if $r-2j+2\le d-r+1$ then $d_H(x_{r-2j+2},y_r)\le d-r+1-(r-2j+2)+r-(2r-d)=2d-3r+2j-1\le r-1$ since $j\le 2r-d,$ and if $r-2j+2>d-r+1$ then $d_H(x_{r-2j+2},y_r)\le r-2j+2-(d-r+1)+r-(2r-d)=r-2j+1\le r-3$ since $j\ge 2.$ If $r-2j+2\ge 1,$ in $T$ there is the $(x_{r-2j+2},y_j)$-path $\overleftarrow{C}(x_{r-2j+2},x_1)\cup x_1y_1\cup y_1y_j.$ Hence $d_T(x_{r-2j+2},y_j)\le r-2j+2-1+2=r-2j+3\le r-1$ since $j\ge 2,$ implying $e_{T}(x_{r-2j+2})<r.$ If $r-2j+2\le 0,$ in $T$ there is the path $\overrightarrow{C}(x_{r-2j+2},x_{2r-1})\cup x_{2r-1}y_1\cup y_1y_j.$ Hence $d_T(x_{r-2j+2},y_j)\le 0-(r-2j+2)+2=2j-r\le r-2$ since $j\le 2r-d$ and $d\ge r+1,$ implying $e_T(x_{r-2j+2})<r.$ Subcase 2.2. $i=1$ and $2r-d+1\le j\le r.$ First suppose $j=r.$ Observe that $x_{2d-3r+1}$ is a central vertex of $H$ whose unique eccentric vertex is $y_r.$ Also the condition $d\le 2r-2$ implies $2d-3r+1< d-r+1.$ If $2d-3r+1\ge 1,$ then $d_T(x_{2d-3r+1},y_r)\le 2d-3r+1-1+2\le r-2.$ If $2d-3r+1\le 0,$ then $d_T(x_{2d-3r+1},y_r)\le 0-(2d-3r+1)+2\le r-1,$ where we have used the fact that $d\ge r+1.$ Hence $e_T(x_{2d-3r+1})<r.$ Next suppose $2r-d+1\le j\le r-1.$ Observe that $x_r$ is a central vertex of $H$ whose unique eccentric vertex is $y_1.$ Note also that $r>d-r+1.$ Now in $T,$ there is the $(x_r,y_1)$-path $\overleftarrow{C}(x_r,x_{d-r+1})\cup x_{d-r+1}y_{2r-d+1}\ldots y_j\cup y_jy_1.$ Hence $d_T(x_r,y_1)\le r-(d-r+1)+j-(2r-d)+1=j\le r-1,$ implying $e_T(x_r)< r.$ Subcase 2.3. 
$i\ge 2$ and $j\le 2r-d.$ First suppose $2(j-i)\le r-1.$ Then $2r-2j+2\ge r-2i+3.$ Clearly $x_{2r-2j+2}$ is the unique neighbor of $y_j$ in $H.$ By considering the two possible cases $r-2i+3\le d-r+1$ and $r-2i+3> d-r+1,$ it is easy to verify that $x_{r-2i+3}$ is a central vertex of $H$ whose unique eccentric vertex is $y_i.$ In $T$ there is the $(x_{r-2i+3},y_i)$-path $\overrightarrow{C}(x_{r-2i+3},x_{2r-2j+2})\cup x_{2r-2j+2}y_j\cup y_jy_i.$ Hence $d_T(x_{r-2i+3},y_i)\le 2r-2j+2-(r-2i+3)+1+1=r-2(j-i)+1\le r-1,$ implying $e_T(x_{r-2i+3})< r.$ Next suppose $2(j-i)\ge r.$ Then $r-2i+2\ge 2r-2j+2.$ Observe that $x_{r-2i+2}$ is a central vertex of $H$ whose unique eccentric vertex is $y_i.$ Also $j-i\le 2r-d-2.$ Similarly we have \begin{align*} d_T(x_{r-2i+2},y_i)&\le r-2i+2-(2r-2j+2)+1+1\\ &=2-r+2(j-i)\\ &\le 2-r+2(2r-d-2)\\ &\le r-2, \end{align*} implying $e_T(x_{r-2i+2})< r.$ Subcase 2.4. $2\le i\le 2r-d$ and $2r-d+1\le j\le r.$ First suppose $2r+2\le 2i+d.$ Then $d-r+1\ge r-2i+3.$ Note that $x_{r-2i+3}$ is a central vertex of $H$ whose unique eccentric vertex is $y_i.$ In $T$ we have the $(x_{r-2i+3},y_i)$-path $\overrightarrow{C}(x_{r-2i+3},x_{d-r+1})\cup x_{d-r+1}y_{2r-d+1}\ldots y_j\cup y_jy_i.$ Thus \begin{align*} d_T(x_{r-2i+3},y_i)&\le d-r+1-(r-2i+3)+j-(2r-d)+1\\ &\le d-r+1-(r-2i+3)+r-(2r-d)+1\\ &=2d-3r+2i-1\\ &\le r-1, \end{align*} implying $e_T(x_{r-2i+3})< r.$ Next suppose $2r+2\ge 2i+d+1.$ Then $r-2i+2\ge d-r+1.$ Observe that $x_{r-2i+2}$ is a central vertex of $H$ whose unique eccentric vertex is $y_i.$ Similarly we have \begin{align*} d_T(x_{r-2i+2},y_i)&\le r-2i+2-(d-r+1)+j-(2r-d)+1\\ &\le r-2i+2-(d-r+1)+r-(2r-d)+1\\ &=r-2i+2\\ &\le r-2, \end{align*} implying $e_T(x_{r-2i+2})< r.$ Subcase 2.5. 
$2r-d+1\le i<j\le r.$ Observe that $x_{r+1}$ is a central vertex of $H$ whose unique eccentric vertex is $y_r.$ Clearly $e_T(x_{r+1})< r.$ {\bf Case 3.} $u\in A$ and $v\in B.$ Let $u=x_i$ and $v=y_j.$ Observe that $x_r$ is a central vertex of $H$ whose unique eccentric vertex is $y_1.$ If $j=1,$ then $e_T(x_r)< r.$ Now suppose $2\le j\le 2r-d.$ Then both $x_{r-2j+2}$ and $x_{r-2j+3}$ are central vertices of $H$ whose unique eccentric vertex is $y_j.$ If $u$ lies on the path $\overrightarrow{C}(x_{2r-2j+2},x_{r-2j+2}),$ then $e_T(x_{r-2j+2})< r;$ if $u$ lies on the path $\overleftarrow{C}(x_{2r-2j+2},x_{r-2j+3}),$ then $e_T(x_{r-2j+3})< r.$ Finally suppose $2r-d+1\le j\le r.$ We have $2d-3r+1<d-r+1<r+1.$ Observe that both $x_{r+1}$ and $x_{2d-3r+1}$ are central vertices of $H$ whose unique eccentric vertex is $y_r.$ If $2d-3r+1\le i\le d-r+1,$ then $d_T(x_{2d-3r+1},y_r)\le r-1$ and hence $e_T(x_{2d-3r+1})< r.$ Similarly, if $d-r+2\le i\le r+1$ then $e_T(x_{r+1})< r.$ It remains to consider the case when $u=x_i$ lies on the path $\overrightarrow{C}(x_{r+2},x_{2d-3r}).$ We assert that $e_T(u)< r.$ First note that if $w\in\{y_{2r-d+1},y_{2r-d+2},\ldots,y_r\}$ then $d_T(x_i,w)\le d-r\le r-2.$ Also if $w\in V(C)$ we have $d_T(x_i,w)\le r-1$ since ${\rm diam}(C)=r-1.$ Next suppose $w=y_s$ with $1\le s\le 2r-d.$ Let $x_k$ and $x_{k+1}$ be the two vertices on $C$ with $d_C(x_i,x_k)=d_C(x_i,x_{k+1})=r-1.$ Since $x_i$ lies on the path $\overrightarrow{C}(x_{r+2},x_{2d-3r}),$ we have $k\ge 2$ and $k+1\le 2d-2r<2(d-r+1).$ It follows that $d_H(x_i,w)\le r-1,$ since $N_H(y_1)=\{x_{2r-1},x_1\}$ and $N_H(y_{2r-d})=\{x_{2(d-r+1)}\}.$ This completes the proof that $H$ is radially maximal. Note that by the two inequalities in (1), any non-self-centered radially maximal graph has radius at least $3.$ Obviously, the vertex $x_{2r-2}$ is not an eccentric vertex of any vertex in $H.$ Hence by Lemma 1, the extension of $H$ at $x_{2r-2},$ denoted $H_{3r},$ is radially maximal. 
Also, $H_{3r}$ has the same diameter and radius as $H,$ and has order $3r.$ Again, the vertex $x_{2r-2}$ is not an eccentric vertex of any vertex in $H_{3r}.$ For any $n>3r-1,$ performing extensions at the vertex $x_{2r-2}$ successively, starting from $H,$ we can obtain a radially maximal graph of radius $r,$ diameter $d$ and order $n.$ This completes the proof.\hfill $\Box$ Combining the restriction (1) on the diameter and radius of a radially maximal graph and Theorem 2, we obtain the following corollary. {\bf Corollary 3.} {\it There exists a radially maximal graph of radius $r$ and diameter $d$ if and only if $r\le d\le 2r-2.$} \section{Final Remarks} Since any graph with radius $r$ has order at least $2r$, Theorem 2 covers all the possible orders of self-centered radially maximal graphs. Gliviak, Knor and ${\rm\check{S}}$olt${\rm\acute{e}}$s [2, p.283] conjectured that the minimum order of a non-self-centered radially maximal graph of radius $r$ is $3r-1.$ This conjecture is known to be true for the first three values of $r,$ i.e., $r=3,4,5$ [2, p.283], but it is still open in general. If this conjecture is true, then Theorem 2 covers all the possible orders of radially maximal graphs with a given radius. \vskip 5mm {\bf Acknowledgement.} This research was supported by the NSFC grants 11671148 and 11771148 and Science and Technology Commission of Shanghai Municipality (STCSM) grant 18dz2271000.
https://arxiv.org/abs/2301.09101
Bounds on the order of the Schur multiplier of $p$-groups
In 1956, Green provided a bound on the order of the Schur multiplier of $p$-groups. This bound, given as a function of the order of the group, is the best possible. Since then, the bound has been refined numerous times by adding other inputs to the function, such as, the minimal number of generators of the group and the order of the derived subgroup. We strengthen these bounds by adding another input, the group's nilpotency class. The specific cases of nilpotency class 2 and maximal class are discussed in greater detail.
\section{Introduction} \vspace{.2cm} The Schur multiplier $M(G)$ of a finite group $G$ is defined as the second cohomology group of $G$ with coefficients in $\mathbb{C}^{*}$. It plays an important role in the theory of extensions of groups. Finding bounds on the order, exponent, and rank of the Schur multiplier of prime power groups has been a major focus of previous investigations. This article investigates the bounds on the order of the Schur multiplier of prime power groups. Let $G$ be a finite $p$-group of order $p^n$. In 1956, Green \cite{Green} proved that $|M(G)| \leq p^{\frac{1}{2}n(n-1)}$. Since then, this bound has been strengthened by many mathematicians \cite{Blackburn,Ellis1,Gaschutz,Jones2,Jones,Jones1,Moravec,Niroomand2,Niroomand1,Rai1,Vermani1,Vermani2,Weigold1,Weigold2}. To state the most recent ones, let $G$ be a non-abelian $p$-group of order $p^n$ with derived subgroup of order $p^k$. Niroomand \cite{Niroomand2} proved that \begin{equation} \label{eq0} |M(G)| \leq p^{\frac{1}{2}(n-k-1)(n+k-2)+1}. \end{equation} The author noted in \cite{Rai1} that a bound by Ellis and Wiegold \cite{Ellis1} is better than this bound and derived from their bound that \[|M(G)| \leq p^{\frac{1}{2}(d-1)(n+k-2)+1}\left(= p^{\frac{1}{2}(d-1)(n+k)-(d-2)}\right),\] where $d$ is the minimal number of generators of $G.$ In this article, we further refine the bounds by adding another input, i.e., the nilpotency class of the group. Before proceeding to the results of this article, we fix some mostly standard notation. The center and the commutator subgroup of a group $G$ are denoted by $Z(G)$ and $\gamma_2(G)$, respectively. By $d(G)$ we denote the minimal number of generators of $G$. We write $\gamma_i(G)$ for the $i$-th term in the lower central series of $G$. The subgroup $\gen{x^{p} \mid x \in G}$ is denoted by $G^{p}$. Finally, the abelianization of the group $G$, i.e. $G/\gamma_2(G)$, is denoted by $G^{ab}$. \\ We now state our first theorem.
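Before doing so, the successive bounds quoted above can be compared numerically. The following sketch (illustrative only; the parameter values are arbitrary admissible choices and the function names are ours) transcribes the exponents of Green's bound, Niroomand's bound, the Ellis--Wiegold/Rai bound, and the refinement proved in Theorem \ref{thm1} below:

```python
from fractions import Fraction as F

def green(n):                     # Green (1956): |M(G)| <= p^{n(n-1)/2}
    return F(n * (n - 1), 2)

def niroomand(n, k):              # |M(G)| <= p^{(n-k-1)(n+k-2)/2 + 1}
    return F((n - k - 1) * (n + k - 2), 2) + 1

def rai(n, k, d):                 # Ellis--Wiegold/Rai: |M(G)| <= p^{(d-1)(n+k)/2 - (d-2)}
    return F((d - 1) * (n + k), 2) - (d - 2)

def thm1(n, k, d, c):             # Theorem 1: subtract sum_{i=2}^{min(d,c)} (d - i)
    return rai(n, k, d) - sum(d - i for i in range(2, min(d, c) + 1))

# Sample admissible parameters: |G| = p^n, |gamma_2(G)| = p^k, d generators, class c.
for n, k, d, c in [(6, 2, 3, 2), (7, 2, 4, 3), (10, 3, 5, 4)]:
    assert thm1(n, k, d, c) <= rai(n, k, d) <= niroomand(n, k) <= green(n)

print(green(6), niroomand(6, 2), rai(6, 2, 3), thm1(6, 2, 3, 2))   # 15 10 7 6
```

Exact rational arithmetic is used because the exponents are half-integers in general; the chain of inequalities illustrates how each refinement tightens the exponent for these sample values.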
\begin{theorem}\label{thm1} Let $G$ be a non-abelian $p$-group of order $p^n$ and nilpotency class $c$ with $|\gamma_2(G)| = p^k$ and $d = d(G)$. Then \[|M(G)| \leq p^{\frac{1}{2}(d-1)(n+k)-\sum\limits_{i=2}^{\min(d,c)} (d-i)}.\] Thus, if $\mu = \min(d, c)$, then \[|M(G)| \leq p^{\frac{1}{2}(d-1)(n+k)-\frac{1}{2}(\mu-1)[2d-(\mu+2)]}.\] Considering cases, the inequality can be restated as follows: \[ |M(G)| \leq \begin{cases} p^{\frac{1}{2}(d-1)(n+k) - \frac{1}{2}(d-1)(d-2)} & \text{if} \ \ \ \ \ d \leq c \\ \\ p^{\frac{1}{2}(d-1)(n+k)- \frac{1}{2}(c-1)(2d-(c+2))} & \text{if} \ \ \ \ \ d \geq c. \\ \end{cases} \] \end{theorem} \vspace{.6cm} Next, we consider $p$-groups of nilpotency class 2. A finite $p$-group of nilpotency class 2 is said to be special if its center coincides with the derived and the Frattini subgroups. Berkovich and Janko asked the following questions: \begin{question}\cite[Problem 1729]{Berkovich}\label{q1} Let $G$ be a special $p$-group with $d(G) = d$ and $|Z(G)| = p^{\frac{1}{2}d(d-1)}$. Find the Schur multiplier of $G$ and describe the representation groups of $G$. \end{question} \begin{question}\cite[Problem 2027]{Berkovich}\label{q2} Find the Schur multiplier of special $p$-groups with center of order $p^2$. \end{question} Questions \ref{q1} and \ref{q2} of Berkovich and Janko have been studied in \cite{Rai2} and \cite{Hatui}, respectively. In view of these questions, and the fact that for special $p$-groups $G$, $|Z(G)| = p^k$ if and only if $d(\gamma_2(G)) = k$, it seems reasonable to consider the term $d(\gamma_2(G))$ while investigating the bounds for the order of the Schur multiplier of $p$-groups of nilpotency class 2. \begin{theorem}\label{thm2} Let $G$ be a non-abelian finite $p$-group of order $p^n$ with $|\gamma_2(G)| = p^k$, $d(G) = d$ and $d\bigg(\frac{\gamma_2(G)}{\gamma_3(G)}\bigg) = \gamma$.
Then \[|M(G)| \leq p^{\frac{1}{2}(d-1)(n+k)-\sum\limits_{i=2}^{\min(d,\gamma+1)} (d-i)}.\] Thus, if $\nu = \min(d, \gamma+1)$, then \[|M(G)| \leq p^{\frac{1}{2}(d-1)(n+k)-\frac{1}{2}(\nu-1)[2d-(\nu+2)]}.\] Considering cases, the inequality can be restated as follows: \[ |M(G)| \leq \begin{cases} p^{\frac{1}{2}(d-1)(n+k) - \frac{1}{2}(d-1)(d-2)} & \text{if} \ \ \ \ \ d \leq \gamma+1 \\ \\ p^{\frac{1}{2}(d-1)(n+k)- \frac{1}{2}\gamma(2d-(\gamma+3))} & \text{if} \ \ \ \ \ d \geq \gamma+1. \\ \end{cases} \] \end{theorem} In view of Questions \ref{q1} and \ref{q2}, the following corollary is an application of Theorem \ref{thm2} to special $p$-groups. \begin{corollary}\label{cor2} Let $G$ be a finite $p$-group of nilpotency class 2 such that $G^p \leq \gamma_2(G)$, $|\gamma_2(G)| = p^{k}$ and $d(G) = d$. Then \[ p^{\frac{1}{2}d(d-1) - k} \leq |M(G)| \leq \begin{cases} p^{(d-1)(k+1)} & \text{if} \ \ \ \ \ d \leq k+1 \\ \\ p^{\frac{1}{2}d(d-1)+ \frac{1}{2}k(k+1)} & \text{if} \ \ \ \ \ d \geq k+1. \\ \end{cases} \] Moreover, \begin{itemize} \item If $G^p \cong \mathbb{Z}_p$, then \[ |M(G)| \leq \begin{cases} p^{(d-1)k-1} & \text{if} \ \ \ \ \ d \leq k \\ \\ p^{\frac{1}{2}d(d-1)+ \frac{1}{2}k(k-1)-1} & \text{if} \ \ \ \ \ d \geq k. \\ \end{cases} \] \item If $p$ is an odd prime and $G^p = \gamma_2(G)$, then \[|M(G)| \leq p^{\frac{1}{2}d(d-1)+ \frac{1}{2}k(k-3)}.\] \item If $G_1$ and $G_2$ are special $p$-groups with $|Z(G_1)| = p^2$ and $|Z(G_2)| = p^3$, then \[p^{\frac{1}{2}d(d-1) - 2} \leq |M(G_1)| \leq p^{\frac{1}{2}d(d-1) +3}\] and \[p^{\frac{1}{2}d(d-1) - 3} \leq |M(G_2)| \leq p^{\frac{1}{2}d(d-1) +6}.\] \end{itemize} \end{corollary} Let $p$ be an odd prime and $G_1$ be a special $p$-group with $|Z(G_1)| = p^2$. A more general result has already been given by Mazur \cite{Mazur} in this case. He proves that if the epicenter $Z^{*}(G_1)$ does not coincide with the center $Z(G_1),$ then $G_1$ is of exponent $p$ and belongs to one of five classes of groups.
These classes include a group of order $p^5$, three groups of order $p^6$, two groups of order $p^7$, a group of order $p^{2m+3}$ (for all $m \geq 3$) and two groups of order $p^{2m+2}$ (for all $m \geq 2$). It is easy to see from \cite[Theorem 2.5.10]{Karp} that if $Z^{*}(G_1)$ coincides with $Z(G_1)$, then $M(G_1)$ is elementary abelian of order $p^{\frac{1}{2}d(d-1) - 2}$. Therefore, it only remains to compute the Schur multiplier of groups that belong to the above-mentioned five families. This can be easily achieved by using the explicit presentations of the non-unicentral groups given by Mazur. This observation nullifies \cite[Theorem 1.3(d)]{Hatui}, which states that if $G_1^p \cong \mathbb{Z}_p$ then $|M(G_1)| = p^{\frac{1}{2}d(d-1)}$ if and only if $Z^{*}(G_1) = G^p$. It is clear from the above discussion that such groups do not exist. Also, in contrast to \cite[Theorem 1.1(c)]{Hatui}, one can see that the Schur multiplier of $G_1$ is always elementary abelian and never of exponent $p^2$.\\ The following corollary improves on Corollary \ref{cor2} for $G_2$ when $|G_2| \geq p^{13}.$ \begin{corollary}\label{cor3} Let $p$ be an odd prime and $G_2$ be a special $p$-group with $|Z(G_2)| = p^3$ and $|G_2| \geq p^{13}.$ Then \[|M(G_2)| \leq p^{\frac{1}{2}d(d-1)+ 2}.\] Moreover, if $G_2^p = \gamma_2(G_2)$, then \[|M(G_2)| \leq p^{\frac{1}{2}d(d-1) - 2}.\] \end{corollary} Next, we consider the groups of maximal class. A finite $p$-group of order $p^n$ is said to be of maximal class if its nilpotency class is $n-1$. Let $G$ be a finite $p$-group of maximal class and order $p^n$. Since $G$ is generated by 2 elements, it follows by a result of Gasch\"{u}tz \cite{Gaschutz} that $|M(G)| \leq p^{n-1}$. Moravec \cite{Moravec} proved that if $n > p+1$, then $|M(G)| \leq p^{\frac{p+1}{2}\left \lceil{\frac{n-1}{p-1}}\right \rceil}$. Improving Moravec's result, we prove the following theorem.
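For concreteness, the exponents of Gasch\"{u}tz's and Moravec's bounds can be compared with the exponent $n/2$ of the theorem that follows. The sketch below (sample values of $(p,n)$ only; function names are ours) checks that $n/2$ is the smallest of the three for these samples:

```python
from fractions import Fraction as F
import math

def gaschutz(n):          # 2-generated G: |M(G)| <= p^{n-1}
    return n - 1

def moravec(p, n):        # for n > p + 1: |M(G)| <= p^{((p+1)/2) * ceil((n-1)/(p-1))}
    return F(p + 1, 2) * math.ceil(F(n - 1, p - 1))

def thm3(n):              # the theorem below (p odd, n >= 4): |M(G)| <= p^{n/2}
    return F(n, 2)

for p, n in [(3, 10), (5, 20), (7, 30)]:
    assert n > p + 1                           # range where Moravec's bound applies
    assert thm3(n) <= gaschutz(n) and thm3(n) <= moravec(p, n)
```

Note that for these samples neither of the two earlier bounds dominates the other (for $p=3$, $n=10$, Moravec's exponent exceeds Gasch\"{u}tz's), while $n/2$ improves on both.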
\begin{theorem} \label{thm3} Let $p$ be an odd prime and $G$ be a finite $p$-group of maximal class with $|G| = p^n$, $n \geq 4$. Then $|M(G)| \leq p^{\frac{n}{2}}$. \end{theorem} \section{Proofs} Let $G$ be a finite $p$-group and $\overline{G}$ be the factor group $G/Z(G)$. The commutator $x^{-1}y^{-1}xy$ of the elements $x, y \in G$ is denoted by $[x,y].$ For ease of reading, we shall use the same `bar notation' $\overline{g}$ for $g \in G$ to denote the different elements in different factor groups, when there is no danger of ambiguity. Such notations should be interpreted according to the context. For example, whenever $\overline{[x_1, x_2]} \otimes \overline{x_{3}} \in \frac{\gamma_2(G)}{\gamma_{3}(G)} \otimes \overline{G}^{ab}$ for $x_1, x_2, x_3 \in G$, by $\overline{[x_1, x_2]}$ and $\overline{x_{3}}$ we mean $[x_1,x_2]\gamma_3(G)$ and $x_3Z(G)\gamma_2(G)$, respectively.\\ We now proceed to prove Theorem \ref{thm1}. The proof is based on the following result of Ellis and Wiegold \cite[Proposition 1, comments following Theorem 2]{Ellis1}. \begin{proposition}\label{prop1} Let $G$ be a finite $p$-group of nilpotency class $c$ and $\overline{G}$ be the factor group $G/Z(G)$.
Then \[\Big{|}M(G)\Big{|}\Big{|}\gamma_2(G)\Big{|}\prod_{i=2}^{c}\Big{|}\Ima \Psi_i\Big{|} \leq \Big{|}M(G^{ab})\Big{|}\prod_{i=2}^{c}\Big{|}\frac{\gamma_i(G)}{\gamma_{i+1}(G)} \otimes \overline{G}^{ab}\Big{|},\] where $\Psi_i$, for $i=2, \ldots, c,$ is a map from $\underbrace{\overline{G}^{ab} \otimes \overline{G}^{ab} \otimes \cdots \otimes \overline{G}^{ab}}_{i+1 \ \text{times}}$ to $\frac{\gamma_i(G)}{\gamma_{i+1}(G)} \otimes \overline{G}^{ab}$ defined as follows:\\ \[ \Psi_2(\overline{x_1} \otimes \overline{x_2} \otimes \overline{x_{3}}) = \overline{[x_1, x_2]} \otimes \overline{x_{3}} + \overline{[x_2, x_{3}]} \otimes \overline{x_1} + \overline{[x_3, x_{1}]} \otimes \overline{x_2}.\] For $3 \leq i \leq c$, \begin{eqnarray*} \Psi_i(\overline{x_1} \otimes \overline{x_2} \otimes \cdots \otimes \overline{x_{i+1}}) & = & \overline{[x_1, x_2, \cdots, x_i]_l} \otimes \overline{x_{i+1}} + \overline{[x_{i+1}, [x_1, x_2, \cdots x_{i-1}]_l]} \otimes \overline{x_i} \\ && +\overline{[[x_i, x_{i+1}]_r, [x_1, \cdots, x_{i-2}]_l]} \otimes \overline{x_{i-1}} \\ && + \overline{[[x_{i-1}, x_i, x_{i+1}]_r, [x_1, x_2, \cdots, x_{i-3}]_l]} \otimes \overline{x_{i-2}} \\ && + \cdots + \overline{[x_2, \cdots, x_{i+1}]_r} \otimes \overline{x_1}\\ \end{eqnarray*} where \[[x_1, x_2, \ldots, x_i]_r = [x_1,[x_2,[x_3,\ldots,[x_{i-1},x_i]\ldots]]]\] and \[[x_1, x_2, \ldots, x_i]_l = [\ldots[[x_1, x_2], x_3], \ldots, x_i].\] \end{proposition} We are now ready to prove Theorem \ref{thm1}.\\ \noindent \textbf{\textit{Proof of Theorem \ref{thm1}}}: Let $\Psi_i$ be the map defined above and $d(G/Z(G)) = \delta$. By Proposition \ref{prop1}, we have \[|M(G)||\gamma_2(G)|\prod_{i=2}^{c}|\Ima \Psi_i| \leq |M(G^{ab})|p^{k\delta}.\] Applying \cite[Lemma 2.1]{Rai1}, this gives \[|M(G)|\prod_{i=2}^{c}|\Ima \Psi_i| \leq p^{\frac{1}{2}(d-1)(n-k)+k(\delta-1)},\] so that \begin{equation}\label{eq1} |M(G)|\prod_{i=2}^{c}|\Ima \Psi_i| \leq p^{\frac{1}{2}(d-1)(n+k) - k(d-\delta)}.
\end{equation} Choose a subset $S= \{x_1, x_2, \ldots, x_{\delta}\}$ of $G$ such that $\{\overline{x_1},\overline{x_2}, \ldots, \overline{x_{\delta}}\}$ is a minimal generating set for $G/Z(G)$. Fix $i \leq \min(\delta, c)$. Since $i \leq c$, $\gamma_i(G)/\gamma_{i+1}(G)$ is a non-trivial group. Using \cite[Lemma 3.6(c)]{Khukhro}, we can choose a commutator $[y_1,y_2, \cdots, y_i]$ of weight $i$ such that $[y_1,y_2, \cdots, y_i] \notin \gamma_{i+1}(G)$ and $y_1, \ldots, y_i \in S$. Since $i \leq \delta$, $S \backslash \{y_1,y_2, \cdots, y_i\}$ contains at least $\delta - i$ elements. Choose any $\delta - i$ elements $z_1, z_2, \ldots, z_{\delta - i}$ from $S \backslash \{y_1,y_2, \cdots, y_i\}$. Since $[y_1,y_2, \cdots, y_i] \notin \gamma_{i+1}(G)$ and $z_j \notin \{y_1,y_2, \cdots, y_i\}$, $\Psi_i(\overline{y_1}, \ldots, \overline{y_i}, \overline{z_j}) \neq 1$. Notice that the set $\{\Psi_i(\overline{y_1}, \ldots, \overline{y_i}, \overline{z_j}) \ \ | \ \ 1 \leq j \leq \delta -i\}$ is a minimal generating set for $\gen{\{\Psi_i(\overline{y_1}, \ldots, \overline{y_i}, \overline{z_j}) \ \ | \ \ 1 \leq j \leq \delta -i\}}$, because $\{\overline{x_1},\overline{x_2}, \cdots, \overline{x_{\delta}}\}$ is a minimal generating set for $G/Z(G)$. It follows that $|\Ima \Psi_i| \geq p^{\delta - i}$. Substituting this into Equation \ref{eq1} gives the required result. \vspace{.3cm} We now proceed to prove Theorem \ref{thm2}. The following proposition is the main ingredient of the proof. \begin{proposition}\label{prop} Let $G$ be a $p$-group of nilpotency class 2 and $\Psi_2$ be the homomorphism given in Proposition \ref{prop1}. Suppose $d\Big(\frac{G}{Z(G)}\Big) = \delta$ and $d(\gamma_2(G)) = \gamma$.
Then \[|\Ima(\Psi_2)| \geq p^{\sum\limits_{i=2}^{\min(\delta, \gamma+1)} (\delta-i)}.\] \end{proposition} \begin{proof} Choose $x_1, x_2, \ldots, x_{\delta} \in G$ such that \[\frac{G}{\Phi(G)Z(G)} = \gen{\overline{x_1}} \times \gen{\overline{x_2}} \times \cdots \times \gen{\overline{x_{\delta}}}.\] Let $U$ be the set $\{x_1, x_2, \ldots, x_{\delta}\}.$ We now choose a minimal generating set for $\frac{\gamma_2(G)}{\gamma_2(G)^p}$ in the following manner: Since the set $T = \{[x_i, x_j] \mid 1 \leq i < j \leq \delta\}$ generates $\gamma_2(G)$, we choose an element from this set, say $[x_{i_1^1}, x_{i_2^1}]$, such that $\overline{[x_{i_1^1}, x_{i_2^1}]} \neq 0 \in \frac{\gamma_2(G)}{\gamma_2(G)^p}$. Define \[U_1 = \{x_{i_1^1}, x_{i_2^1}\},\] and \[V_1 = \Big\langle\overline{[x_{i_1^1}, x_{i_2^1}]}\Big\rangle \leq \frac{\gamma_2(G)}{\gamma_2(G)^p}.\] Suppose $U_j$ and $V_j \leq \frac{\gamma_2(G)}{\gamma_2(G)^p}$ have been defined. To define $U_{j+1}$ and $V_{j+1}$, we check whether there exists an element $[y, z] \in T$, $y, z \in U$, such that $U_j \cap \{y, z\} \neq \emptyset$ and $V_j < \Big<V_j, \overline{[y,z]}\Big>$.
If such an element exists in $T$, say $[x_{i_1^{j+1}}, x_{i_2^{j+1}}]$, then we define \[U_{j+1} = U_j \cup \{x_{i_1^{j+1}}, x_{i_2^{j+1}}\},\] and \[V_{j+1} = \bigg<V_j, \overline{[x_{i_1^{j+1}}, x_{i_2^{j+1}}]}\bigg> \leq \frac{\gamma_2(G)}{\gamma_2(G)^p}.\] Otherwise, we choose any element $[x_{i_1^{j+1}}, x_{i_2^{j+1}}] \in T$, $x_{i_1^{j+1}}, x_{i_2^{j+1}} \in U$, such that \[V_{j} < \Big<V_j, \overline{[x_{i_1^{j+1}}, x_{i_2^{j+1}}]}\Big>,\] and define \[U_{j+1} = \{x_{i_1^{j+1}}, x_{i_2^{j+1}}\},\] and \[V_{j+1} = \Big<V_j, \overline{[x_{i_1^{j+1}}, x_{i_2^{j+1}}]}\Big>.\] Clearly \[V_{\gamma} = \bigg<\overline{[x_{i_1^1}, x_{i_2^1}]}, \ldots, \overline{[x_{i_1^{\gamma}}, x_{i_2^{\gamma}}]}\bigg> = \frac{\gamma_2(G)}{\gamma_2(G)^p}.\] Now suppose that for $j = k_1, k_2, \ldots, k_t$, $U_j \cap U_{j+1} = \emptyset$, and that these are the only such indices. Denote the element \[\bar{x} \otimes \overline{[y,z]} + \bar{y} \otimes \overline{[z,x]} + \bar{z} \otimes \overline{[x,y]} \in \frac{G}{\Phi(G)Z(G)} \otimes \frac{\gamma_2(G)}{\gamma_2(G)^p}\] by $(x,y,z)$, and define \[W_{j} = \{(x, x_{i_1^{j}}, x_{i_2^{j}}) \mid x \in U \backslash U_{j} \}\] for $j = 1, \ldots, \gamma$. We claim that $W_{1} \cup W_{2} \cup \ldots \cup W_{\gamma}$ minimally generates $\Big<W_{1} \cup W_{2} \cup \ldots \cup W_{\gamma}\Big>$. To see this, note that $\frac{G}{\Phi(G)Z(G)}$ and $\frac{\gamma_2(G)}{\gamma_2(G)^p}$ are elementary abelian $p$-groups, and therefore can be considered as vector spaces over the field $\mathbb{Z}/p\mathbb{Z}$ with bases $\{\overline{x_1}, \ldots, \overline{x_{\delta}}\}$ and $\{\overline{[x_{i_1^1}, x_{i_2^1}]}, \ldots, \overline{[x_{i_1^{\gamma}}, x_{i_2^{\gamma}}]}\}$, respectively. It follows that the set $\{\overline{x_i} \allowbreak \otimes \overline{[x_{i_1^{j}}, x_{i_2^{j}}]} \mid 1 \leq i \leq \delta, 1 \leq j \leq \gamma\}$ forms a basis for the tensor product $\frac{G}{\Phi(G)Z(G)} \otimes \frac{\gamma_2(G)}{\gamma_2(G)^p}$.
Now take an element $(x, x_{i_1^1}, x_{i_2^1}) \in W_1$. The presence of the term $\overline{x} \otimes \overline{[x_{i_1^1}, x_{i_2^1}]}$ in the expression $(x, x_{i_1^1}, x_{i_2^1})$ ensures that $(x, x_{i_1^1}, x_{i_2^1}) \notin \Big<W_1 \backslash (x, x_{i_1^1}, x_{i_2^1})\Big>$. This shows that $W_1$ minimally generates $\gen{W_1}$. If $k_1 > 1$, suppose for $j \leq k_1-1$, $W_1 \cup W_2 \cup \ldots \cup W_j$ minimally generates $\Big<W_1 \cup W_2 \cup \ldots \cup W_j\Big>$. Take an element $(x, x_{i_1^{j+1}}, x_{i_2^{j+1}}) \in W_{j+1}$. By definition $x \in U \backslash U_{j+1}$. Therefore, if \[(x, x_{i_1^{j+1}}, x_{i_2^{j+1}}) \in \bigg<W_1, W_2, \ldots, W_j, W_{j+1} \backslash (x, x_{i_1^{j+1}}, x_{i_2^{j+1}})\bigg>,\] then \[\overline{x} \otimes \overline{[x_{i_1^{j+1}}, x_{i_2^{j+1}}]} = \overline{x} \otimes \overline{[x_{i_1^{1}}, x_{i_2^{1}}]^{\alpha_1} \cdots [x_{i_1^{j}}, x_{i_2^{j}}]^{\alpha_j}}.\] Since $\overline{x} \neq 0 \in \frac{G}{\Phi(G)Z(G)}$, we get that \[ \overline{[x_{i_1^{j+1}}, x_{i_2^{j+1}}]} = \overline{[x_{i_1^{1}}, x_{i_2^{1}}]^{\alpha_1} \cdots [x_{i_1^{j}}, x_{i_2^{j}}]^{\alpha_j}}.\] But this is not possible because $V_j < \Big<V_j, \overline{[x_{i_1^{j+1}}, x_{i_2^{j+1}}]}\Big>.$ This shows that $W_{1} \cup W_{2} \cup \ldots \cup W_{k_1}$ minimally generates $\gen{W_{1} \cup W_{2} \cup \ldots \cup W_{k_1}}$. Now if $\gamma > 1$, suppose that $k_{t+1} = \gamma$, and also that $W_{1} \cup W_{2} \cup \ldots \cup W_{k_j}$ for $1 \leq j \leq t$ minimally generates $\Big<W_{1} \cup W_{2} \cup \ldots \cup W_{k_j}\Big>$. Let $(x, x_{i_1^{k_j+1}}, x_{i_2^{k_j+1}}) \in W_{k_j+1}$. By the definition of $W_{k_j+1}$, we have $x \in U \backslash U_{k_j+1}$, i.e., $x \in U \backslash \{x_{i_1^{k_j+1}}, x_{i_2^{k_j+1}}\}$.
First assume that $x \in U_{k_1} \cup \cdots \cup U_{k_j}$ and let $l_1, l_2, \ldots, l_r$ be all those indices less than $k_j+1$ such that $x = x_{i_{s_1}^{l_1}} = x_{i_{s_2}^{l_2}} = \cdots = x_{i_{s_r}^{l_r}},$ where each $s_i$ is either 1 or 2. Without loss of generality, we can assume that $s_i = 1$ for $i = 1, 2, \ldots, r$. Now if \[(x, x_{i_1^{k_j+1}}, x_{i_2^{k_j+1}}) \in \bigg<W_1, W_2, \ldots W_{k_j+1}\backslash (x, x_{i_1^{k_j+1}}, x_{i_2^{k_j+1}})\bigg>,\] then \begin{eqnarray*} \overline{x} \otimes \overline{[x_{i_1^{k_j+1}}, x_{i_2^{k_j+1}}]} & = & \overline{x} \otimes \overline{[x_{i_1^{1}}, x_{i_2^{1}}]^{\alpha_1} \cdots [x_{i_1^{k_j}}, x_{i_2^{k_j}}]^{\alpha_{k_j}}} \overline{[x_{i_2^{l_1}}, y_1]^{\beta_1} \cdots [x_{i_2^{l_r}}, y_r]^{\beta_r}} \end{eqnarray*} for some $y_1, y_2, \ldots, y_r \in G$ and some $\alpha_i, \beta_i \in \mathbb{Z}$. Since $\overline{x} \neq 0 \in \frac{G}{\Phi(G)Z(G)}$, we have \begin{eqnarray*} \overline{[x_{i_1^{k_j+1}}, x_{i_2^{k_j+1}}]} & = & \overline{[x_{i_1^{1}}, x_{i_2^{1}}]^{\alpha_1} \cdots [x_{i_1^{k_j}}, x_{i_2^{k_j}}]^{\alpha_{k_j}} [x_{i_2^{l_1}}, y_1]^{\beta_1} [x_{i_2^{l_2}}, y_2]^{\beta_2} \cdots [x_{i_2^{l_r}}, y_r]^{\beta_r}}. \end{eqnarray*} But then, for some $y \in U$ and some $i,$ \[V_{k_j} < \Big<V_{k_j}, \overline{[x_{i_2^{l_i}}, y]}\Big>.\] This contradicts the way we have chosen the basis for $\frac{\gamma_2(G)}{\gamma_2(G)^p}$. Therefore, we can now assume that $x \in U \backslash (U_{k_1} \cup \cdots \cup U_{k_j} \cup U_{k_j+1})$.
Next, if \[(x, x_{i_1^{k_j+1}}, x_{i_2^{k_j+1}}) \in \Big<W_1, W_2, \ldots W_{k_j+1}\backslash (x, x_{i_1^{k_j+1}}, x_{i_2^{k_j+1}})\Big>,\] we get that \begin{eqnarray*} \overline{x} \otimes \overline{[x_{i_1^{k_j+1}}, x_{i_2^{k_j+1}}]} & = & \overline{x} \otimes \overline{[x_{i_1^{1}}, x_{i_2^{1}}]^{\alpha_1} \cdots [x_{i_1^{k_j}}, x_{i_2^{k_j}}]^{\alpha_{k_j}}} \end{eqnarray*} for some $\alpha_i \in \mathbb{Z}$ $(1 \leq i \leq k_j)$, which is again not possible. This shows that $W_{1} \cup W_{2} \cup \ldots \cup W_{k_j+1}$ minimally generates $\Big<W_{1} \cup W_{2} \cup \ldots \cup W_{k_j+1}\Big>$. If $k_{j+1} > k_j+1$, suppose that $W_{1} \cup W_{2} \cup \ldots \cup W_{k_j+i}$ for $i \leq k_{j+1}-k_j-1$ minimally generates $\Big<W_{1} \cup W_{2} \cup \ldots \cup W_{k_j+i}\Big>$. By the same argument as above, it can be shown that $W_{1} \cup W_{2} \cup \ldots \cup W_{k_j+i+1}$ minimally generates $\Big<W_{1} \cup W_{2} \cup \ldots \cup W_{k_j+i+1}\Big>$. This shows that $W_{1} \cup W_{2} \cup \ldots \cup W_{k_{j+1}}$ minimally generates $\Big<W_{1} \cup W_{2} \cup \ldots \cup W_{k_{j+1}}\Big>$. Therefore, we can now conclude that $W_{1} \cup W_{2} \cup \ldots \cup W_{\gamma}$ minimally generates $\Big<W_{1} \cup W_{2} \cup \ldots \cup W_{\gamma}\Big>$ and hence $W_{1} \cup W_{2} \cup \ldots \cup W_{\gamma}$ is a linearly independent set. Now, since $|U_1| = |U_{k_i+1}| = 2$ for $1 \leq i \leq t$, it follows, by the definition of $W_1$ and $W_{k_i+1}$, that $|W_1| = |W_{k_i+1}| = \delta-2$. For $1 \leq j \leq k_i -1$, note that, if $|W_j| \geq m$, then $|W_{j+1}| \geq m-1$, because $U_j \cap \{x_{i_1^{j+1}}, x_{i_2^{j+1}}\} \neq \emptyset$. Therefore it easily follows that \[|W_{1} \cup W_{2} \cup \ldots \cup W_{\gamma}| \geq \sum\limits_{i=2}^{\min(\delta, \gamma+1)} (\delta-i).\] Now, by the universal property of the tensor product, there exists a homomorphism $\eta$ such that the following diagram commutes.
\begin{center} $ \begin{CD} \frac{G}{\gamma_2(G)Z(G)} \times \gamma_2(G) @>\phi >> \frac{G}{\gamma_2(G)Z(G)} \otimes \gamma_2(G) \\ @V \mathcal{P} VV @V \eta VV\\ \frac{G}{\Phi(G)Z(G)}\times \frac{\gamma_2(G)}{\gamma_2(G)^p} @> \theta >> \frac{G}{\Phi(G)Z(G)} \otimes \frac{\gamma_2(G)}{\gamma_2(G)^p},\\ \end{CD}$ \end{center} \vspace{.2cm} where \[\mathcal{P}(\overline{x}, y) = (\overline{x}, \overline{y}) \ \ \text{for} \ x \in G, y \in \gamma_2(G),\] \[\phi(\overline{x}, y) = \overline{x} \otimes y,\ \ \ \text{and} \ \ \ \theta(\overline{x}, \overline{y}) = \overline{x} \otimes \overline{y}.\] \vspace{.1cm} Therefore, we have \[\eta(\overline{x} \otimes y) = \overline{x} \otimes \overline{y}.\] Since the preimage of $W_{1} \cup W_{2} \cup \ldots \cup W_{\gamma}$ under $\eta$ is isomorphic to a subgroup of $\Ima(\Psi_2)$, we get that \[|\Ima(\Psi_2)| \geq p^{\sum\limits_{i=2}^{\min(\delta, \gamma+1)} (\delta-i)}.\] This completes the proof. \end{proof} We are now ready to prove Theorem \ref{thm2}.\\ \noindent \textbf{\textit{Proof of Theorem \ref{thm2}}}: Let $\Psi_2$ be the homomorphism given in Proposition \ref{prop1} and $\overline{\Psi}_2$ be the similarly defined homomorphism associated with the group $G/\gamma_3(G).$ Also, let $d(G/Z(G)) = \delta$. Since $G/\gamma_3(G)$ is a group of nilpotency class 2, we get from Proposition \ref{prop} that \[|\Ima(\overline{\Psi}_2)| \geq p^{\sum\limits_{i=2}^{\min(\delta, \gamma+1)} (\delta-i)}.\] It is easy to see that $|\Ima(\Psi_2)| \geq |\Ima(\overline{\Psi}_2)|.$ Theorem \ref{thm2} now follows from Equation \ref{eq1}.\\ We now proceed towards the proof of Corollary \ref{cor2}. The proof makes use of the following two results. \begin{proposition}\label{prop3} Let $G$ be a $p$-group ($p$ odd) of nilpotency class 2 with $G/\gamma_2(G)$ elementary abelian, $d(G) = d$ and $|G^p| = p^t$.
Let $V$ be the subgroup of $\gamma_2(G) \otimes G/\gamma_2(G)$ generated by all elements of the form $x^p \otimes x\gamma_2(G)$ for $x \in G$. Then $|V| =p^{\frac{1}{2}t(2d-t+1)}.$ \end{proposition} The proof of the proposition follows exactly along the same lines as \cite[Proposition 3.3]{Rai2}. \begin{theorem}\cite[Theorem 2.5.6]{Karp}\label{thma} Let $Z$ be a central subgroup of a finite group $G$. Then there exists the following exact sequence \[Z \otimes \frac{G}{\gamma_2(G)} \to M(G) \to M(G/Z) \to \gamma_2(G) \cap Z \to 1,\] where the map $\alpha: Z \otimes \frac{G}{\gamma_2(G)} \to M(G)$ is defined as follows: Let $G$ be given by $F/R$ for some free group $F$ and its normal subgroup $R$, and let $Z$ be identified as $T/R$. After identifying $Z \otimes \frac{G}{\gamma_2(G)}$ and $M(G)$ as $T/R \otimes F/\gamma_2(F)R$ and $\gamma_2(F) \cap R/[F,R]$ respectively, $\alpha$ is defined by \[\alpha(xR \otimes yR\gamma_2(F)) = [x,y][F,R].\] \end{theorem} We are now ready to prove Corollary \ref{cor2}.\\ \noindent \textbf{\textit{Proof of Corollary \ref{cor2}}}. For all finite $p$-groups, the lower bound is a well-known fact \cite[Corollary 3.2]{Jones}. The upper bound is a direct consequence of Theorem \ref{thm2} and the fact that for the group $G$, $\gamma = k$ and $n=d+k$. Assume next that $G^p \cong \mathbb{Z}_p$. Since $\gamma_2(K) \leq K^2$ for any finite group $K$, for $p=2$ it follows that $G$ is either the quaternion group or the dihedral group of order $8$. Therefore $M(G)$ is either trivial or of order 2. Hence we can assume now that $p$ is an odd prime. Consider the exact sequence from Theorem \ref{thma} for $Z = G^p$. We will show that the map $\alpha:G^p \otimes \frac{G}{\gamma_2(G)} \to M(G)$ is the trivial map in this case. To see this, let $G^p = \gen{g^p}$ for some $g \in G$. By the definition of $\alpha$ note that $x^p \otimes x\gamma_2(G) \in \kernal \ \alpha$ for all $x \in G$. 
Therefore $(gy)^p \otimes gy\gamma_2(G) \in \kernal \ \alpha$ for all $y \in G$. The bilinearity of the tensor product $\otimes$ implies that \[g^p \otimes y\gamma_2(G) + y^p \otimes g\gamma_2(G) \in \kernal \ \alpha.\] But $y^p = (g^p)^m $ for some natural number $m$. Hence $g^p \otimes y\gamma_2(G) \in \kernal \ \alpha$ for all $y \in G$. As a result, $\alpha$ is the trivial map. It follows from the exact sequence in Theorem \ref{thma} that \[|M(G)| = \frac{|M(G/G^p)|}{|G^p|}.\] Now apply the general bound obtained earlier in the corollary for the group $G/G^p$ to get the required bound in this case. Next suppose that $p$ is an odd prime and $G^p = \gamma_2(G)$. Applying the exact sequence in Theorem \ref{thma} again for $Z = G^p = \gamma_2(G)$, we get \begin{equation}\label{eq2} |M(G)| \leq \frac{|M(G/\gamma_2(G))|}{|\gamma_2(G)|}\frac{|G^p \otimes G/\gamma_2(G)|}{|\kernal \ \alpha|}.\end{equation} Again by the definition of $\alpha$ it is evident that $x^p \otimes \overline{x} \in \kernal \ \alpha$ for all $x \in G.$ Therefore, from Proposition \ref{prop3}, \[|\kernal \ \alpha | \geq p^{\frac{1}{2}\gamma(2d-k+1)}.\] Now putting $|M(G/\gamma_2(G))| = p^{\frac{1}{2}d(d-1)}$, $|G^p \otimes G/\gamma_2(G)| = p^{dk}$ and the lower bound for $|\kernal \ \alpha|$ in Inequality \ref{eq2} yields the desired result.\\ We now prove Corollary \ref{cor3}, which improves on Corollary \ref{cor2} in the case where $G$ is a special $p$-group with $|G| \geq p^{13}$ and $|Z(G)| = p^3.$ The proof uses the well-known connection between the capability of groups and the Schur multiplier. \vspace{.2cm} \textbf{\textit{Proof of Corollary \ref{cor3}}}: Let $Z^*(G)$ be the smallest central subgroup of $G$ such that $G/Z^*(G)$ is capable. Since $|G| \geq p^{13}$, by \cite[Theorem 5.7]{Mazur} $G$ is not capable. Therefore $Z^*(G)$ is non-trivial. Let $Z$ be a subgroup of $Z^*(G)$ of order $p$. 
Since $Z \leq Z(G) = \gamma_2(G)$, by \cite[Theorem 2.5.10]{Karp} we have \[|M(G)| = |M(G/Z)|/|Z|.\] Note that $G/Z$ is a nilpotent group of class 2 such that $G^p \leq \gamma_2(G)$ and $(G/Z)^p = \gamma_2(G/Z)$. This enables us to apply Corollary \ref{cor2} to the group $G/Z$ to obtain the desired result. \vspace{.3cm} We now prove Theorem \ref{thm3}, which gives bounds on the Schur multiplier of $p$-groups of maximal class. These groups are an important class of finite $p$-groups and were first studied by Blackburn \cite{Blackburn0}. The following proof uses well-known facts discovered by him.\\ \noindent \textbf{\textit{Proof of Theorem \ref{thm3}}}: Let $P_1 = C_G(\gamma_2(G)/\gamma_4(G)).$ Choose arbitrary elements $s \in G \backslash (P_1 \cup C_G(\gamma_{n-2}(G)))$ and $s_1 \in P_1 \backslash \gamma_2(G)$. Then $s$ and $s_1$ generate $G$. If we define $s_i = [s_{i-1}, s]$ for $i \geq 2$, then $s_i \in \gamma_i(G) \backslash \gamma_{i+1}(G)$. Let $\Psi_i, i \geq 3$, be the map defined in Proposition \ref{prop1}. Then \begin{eqnarray*} \Psi_i(\overline{s_1} \otimes \overline{s} \otimes \overline{s} \otimes \cdots \otimes \overline{s} \otimes \overline{s_1}) & = & \overline{[s_1, s, s, \cdots,s]_l}\overline{[s, s, \cdots, s, s_1]_r} \otimes \overline{s_1} + \overline{t} \otimes \overline{s} \end{eqnarray*} for some $t \in G$. \\ If $i$ is an odd integer, notice that \[\overline{[s_1, s, s, \cdots,s]_l} = \overline{[s, s, \cdots, s, s_1]_r}.\] Since $p \neq 2$, it follows that $\Psi_i(\overline{s_1} \otimes \overline{s} \otimes \overline{s} \otimes \cdots \otimes \overline{s} \otimes \overline{s_1})$ is a non-identity element, so that $\Ima(\Psi_i)$ is non-trivial. Using this fact, Equation \ref{eq1} yields the desired result. \vspace{.3cm}
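To make the last step explicit (this unpacking is ours, using the notation above, where $[\cdot]_l$ and $[\cdot]_r$ denote the left- and right-normed commutators, and $\overline{[s_1, s, \ldots, s]_l} = \overline{s_i}$ by the definition of the $s_i$): when $i$ is odd the two commutator classes coincide, so the displayed image becomes

```latex
\Psi_i(\overline{s_1} \otimes \overline{s} \otimes \cdots \otimes \overline{s} \otimes \overline{s_1})
  \;=\; \overline{s_i}^{\,2} \otimes \overline{s_1} \;+\; \overline{t} \otimes \overline{s}.
```

Since $\overline{s_i}$ is non-trivial and $p \neq 2$, the summand $\overline{s_i}^{\,2} \otimes \overline{s_1}$ does not vanish, and it cannot cancel against $\overline{t} \otimes \overline{s}$, which is a tensor with the independent generator $\overline{s}$.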
https://arxiv.org/abs/2301.09101
Bounds on the order of the Schur multiplier of $p$-groups
In 1956, Green provided a bound on the order of the Schur multiplier of $p$-groups. This bound, given as a function of the order of the group, is the best possible. Since then, the bound has been refined numerous times by adding other inputs to the function, such as the minimal number of generators of the group and the order of the derived subgroup. We strengthen these bounds by adding another input, the group's nilpotency class. The specific cases of nilpotency class 2 and maximal class are discussed in greater detail.
https://arxiv.org/abs/1608.08834
On the edge of the stable range
We prove a general homological stability theorem for certain families of groups equipped with product maps, followed by two theorems of a new kind that give information about the last two homology groups outside the stable range. (These last two unstable groups are the "edge" in our title.) Applying our results to automorphism groups of free groups yields a new proof of homological stability with an improved stable range, a description of the last unstable group up to a single ambiguity, and a lower bound on the rank of the penultimate unstable group. We give similar applications to the general linear groups of the integers and of the field of order 2, this time recovering the known stability range. The results can also be applied to general linear groups of arbitrary principal ideal domains, symmetric groups, and braid groups. Our methods require us to use field coefficients throughout.
\section{Introduction} A sequence of groups and inclusions $ G_1\hookrightarrow G_2\hookrightarrow G_3\hookrightarrow\cdots $ is said to satisfy \emph{homological stability} if in each degree $d$ there is an integer $n_d$ such that the induced map $H_d(G_{n-1})\to H_d(G_n)$ is an isomorphism for $n> n_d$. Homological stability is known to hold for many families of groups, including symmetric groups~\cite{Nakaoka}, general linear groups~\cite{Quillen, Charney, vanderKallen}, mapping class groups of surfaces and 3-manifolds~\cite{Harer, RandalWilliamsMCG, WahlMCG, HatcherWahl}, diffeomorphism groups of highly connected manifolds~\cite{GalatiusRandalWilliams}, and automorphism groups of free groups~\cite{HatcherVogtmannStability,HatcherVogtmannRational}. Homological stability statements often also specify that the last map outside the range $n> n_d$ is a surjection, so that the situation can be pictured as follows. \[ \cdots \to H_d(G_{n_d-3}) \to \underbrace{ H_d(G_{n_d-2}) \to H_d(G_{n_d-1}) }_{\text{edge of the stable range}} \twoheadrightarrow \underbrace{ {H_d(G_{n_d})} \xrightarrow{\cong} {H_d(G_{n_d+1})} \xrightarrow{\cong} \cdots}_{\text{stable range}} \] The groups $H_d(G_{n_d}), H_d(G_{n_d+1}),\ldots$, which are all isomorphic, are said to form the \emph{stable range}. This paper studies what happens at \emph{the edge of the stable range}, by which we mean the last two unstable groups $H_d(G_{n_d-2})$ and $H_d(G_{n_d-1})$. We prove a new and rather general homological stability result that gives exactly the picture above with $n_d=2d+1$. Then we prove two theorems of an entirely new kind. The first describes the {kernel} of the surjection $H_d(G_{n_d-1})\twoheadrightarrow H_d(G_{n_d})$, and the second explains how to make the map $H_d(G_{n_d-2})\to H_d(G_{n_d-1})$ into a {surjection} by adding a new summand to its domain. These general results hold for homology with coefficients in an arbitrary field. 
We apply our general results to general linear groups of principal ideal domains (PIDs) and automorphism groups of free groups. In both cases we obtain new proofs of homological stability, recovering the known stable range for the general linear groups, and improving upon the known stable range for $\mathrm{Aut}(F_n)$. We also obtain new information on the last two unstable homology groups for $\mathrm{Aut}(F_n)$, $\mathit{GL}_n(\mathbb{Z})$ and $\mathit{GL}_n(\mathbb{F}_2)$, in each case identifying the last unstable group up to a single ambiguity. Our proofs follow an overall pattern that is familiar in homological stability. We define a sequence of complexes acted on by the groups in our family, and we assume that they satisfy a connectivity condition. Then we use an algebraic argument, based on spectral sequences obtained from the actions on the complexes, to deduce the result. The connectivity condition has to be verified separately for each example, but it turns out that in our examples the proof is already in the literature, or can be deduced from it. The real novelty in our paper is the algebraic argument. To the best of our knowledge it has not been used before, either in the present generality or in any specific instances. Even in the case of general linear groups of PIDs, where our complexes are exactly the ones used by Charney in the original proof of homological stability~\cite{Charney} for Dedekind domains, we are able to improve the stable range obtained, matching the best known. \subsection{General results} Let us state our main results, after first establishing some necessary terminology. From this point onwards homology is to be taken with coefficients in an arbitrary field $\mathbb{F}$, unless stated otherwise. A \emph{family of groups with multiplication} $(G_p)_{p\geqslant 0}$ consists of a sequence of groups $G_0,G_1,G_2,\ldots$ equipped with product maps $G_p\times G_q\to G_{p+q}$ for $p,q\geqslant 0$, subject to some simple axioms. 
See section~\ref{families-section} for the precise definition. The axioms imply in particular that $\bigoplus_{p\geqslant 0}H_\ast(G_p)$ is a graded commutative ring. Examples include the symmetric groups, braid groups, the general linear groups of a PID, and automorphism groups of free groups. To each family of groups with multiplication $(G_p)_{p\geqslant 0}$ we associate the \emph{splitting posets} $\mathit{SP}_n$ for $n\geqslant 2$. If we think of $G_n$ as the group of symmetries of an `object of size $n$', then an element of $\mathit{SP}_n$ is a splitting of that object into two ordered nontrivial pieces. See section~\ref{section-spn} for the precise definition. The \emph{stabilisation map} $ s_\ast\colon H_\ast(G_{n-1})\to H_\ast(G_n) $ is the map induced by the homomorphism $G_{n-1}\to G_n$ that takes the product on the left with the neutral element of $G_1$. Our first main result is the following homological stability theorem. \begin{Th}\label{theorem-stability} Let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication, and assume that $|\mathit{SP}_n|$ is $(n-3)$-connected for all $n\geqslant 2$. Then the stabilisation map \[ s_\ast\colon H_\ast(G_{n-1})\longrightarrow H_\ast(G_n) \] is an isomorphism for $\ast\leqslant\frac{n-2}{2}$ and a surjection for $\ast\leqslant\frac{n-1}{2}$. Here homology is taken with coefficients in an arbitrary field. \end{Th} Theorem~\ref{theorem-stability} overlaps with work in progress of S\o ren Galatius, Alexander Kupers and Oscar Randal-Williams. Indeed, if we were to add the assumption that $\bigsqcup_{p\geqslant 0} G_p$ is a braided monoidal groupoid, then it would follow from the work of Galatius, Kupers and Randal-Williams. (The definition of family of groups with multiplication ensures that $\bigsqcup_{p\geqslant 0}G_p$ is a monoidal groupoid; the braiding assumption holds in all of our examples.) We will mention other points of overlap as they occur. 
In a given degree $m$, Theorem~\ref{theorem-stability} gives us the surjection and isomorphisms in the following sequence. \[ \cdots \to H_m(G_{2m-2}) \to \underbrace{ H_m(G_{2m-1}) \to H_m(G_{2m}) }_{\text{edge of the stable range}} \twoheadrightarrow \underbrace{ {H_m(G_{2m+1})} \xrightarrow{\cong} {H_m(G_{2m+2})} \xrightarrow{\cong} \cdots}_{\text{stable range}} \] Our next two theorems extend into the {edge of the stable range}. \begin{Th}\label{theorem-kernel} Let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication, and assume that $|\mathit{SP}_n|$ is $(n-3)$-connected for all $n\geqslant 2$. Then the kernel of the map \[ s_\ast\colon H_m(G_{2m})\twoheadrightarrow H_m(G_{2m+1}) \] is the image of the product map \[ H_{1}(G_{2})^{\otimes {m-1}} \otimes\ker[H_1(G_2)\xrightarrow{s_\ast} H_1(G_3)] \longrightarrow H_{m}(G_{2m}). \] Here homology is taken with coefficients in an arbitrary field. \end{Th} \begin{Th}\label{theorem-realisation} Let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication, and assume that $|\mathit{SP}_n|$ is $(n-3)$-connected for all $n\geqslant 2$. Then the map \[ H_m(G_{2m-1})\oplus H_1(G_2)^{\otimes m} \twoheadrightarrow H_m(G_{2m}) \] is surjective. Here homology is taken with coefficients in an arbitrary field. \end{Th} Homological stability results like Theorem~\ref{theorem-stability} are often combined with theorems computing the stable homology $\lim_{n\to\infty}H_\ast(G_n)$ to deduce the value of $H_\ast(G_n)$ in the stable range. In a similar vein, Theorems~\ref{theorem-kernel} and~\ref{theorem-realisation} allow us to bound the last two unstable groups $H_m(G_{2m})$ and $H_m(G_{2m-1})$ in terms of $\lim_{n\to\infty}H_\ast(G_n)$. In the following subsections we will see how this works for automorphism groups of free groups and general linear groups of PIDs. Note that our results do not rule out the possibility of a larger stable range than the one provided by Theorem~\ref{theorem-stability}. 
Nevertheless, in what follows we will refer to $H_m(G_{2m})$ and $H_m(G_{2m-1})$ as the `last two unstable groups'. \subsection{Applications to automorphism groups of free groups} The automorphism groups of free groups form a family of groups with multiplication $(\mathrm{Aut}(F_n))_{n\geqslant 0}$. In this case the splitting poset $\mathit{SP}_n$ consists of pairs $(A,B)$ of proper subgroups of $F_n$ satisfying $A\ast B = F_n$. By relating the splitting poset to the poset of free factorisations studied by Hatcher and Vogtmann in~\cite{HatcherVogtmannCerf}, we are able to show that $|\mathit{SP}_n|$ is $(n-3)$-connected, so that Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation} can be applied. Our first new result is obtained using Theorem~\ref{theorem-stability} in arbitrary characteristic, and Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation} in characteristic other than $2$. \begin{Th}\label{theorem-autfn-stability} Let $\mathbb{F}$ be a field. Then the stabilisation map \[ s_\ast\colon H_\ast(\mathrm{Aut}(F_{n-1});\mathbb{F}) \longrightarrow H_\ast(\mathrm{Aut}(F_n);\mathbb{F}) \] is an isomorphism for $\ast\leqslant\frac{n-2}{2}$ and a surjection for $\ast\leqslant\frac{n-1}{2}$. Moreover, if $\mathrm{char}(\mathbb{F})\neq 2$, then $s_\ast$ is an isomorphism for $\ast\leqslant\frac{n-1}{2}$ and a surjection for $\ast\leqslant\frac{n}{2}$. \end{Th} Hatcher and Vogtmann showed in~\cite{HatcherVogtmannStability} that $s_\ast\colon H_\ast(\mathrm{Aut}(F_{n-1}))\to H_\ast(\mathrm{Aut}(F_n))$ is an isomorphism for $\ast\leqslant \frac{n-3}{2}$ and a surjection for $\ast\leqslant\frac{n-2}{2}$, where homology is taken with arbitrary coefficients. Theorem~\ref{theorem-autfn-stability} increases this stable range one step to the left in each degree when coefficients are taken in a field, and two steps to the left in each degree when coefficients are taken in a field of characteristic other than $2$. 
(In characteristic $0$ this falls far short of the best known result~\cite{HatcherVogtmannRational}.) In particular we learn for the first time that the groups $H_m(\mathrm{Aut}(F_{2m+1});\mathbb{F})$ are stable. By applying Theorems~\ref{theorem-kernel} and~\ref{theorem-realisation} when $\mathbb{F}=\mathbb{F}_2$, we are able to learn the following about the last two unstable groups $H_m(\mathrm{Aut}(F_{2m});\mathbb{F}_2)$ and $H_m(\mathrm{Aut}(F_{2m-1});\mathbb{F}_2)$. \begin{Th}\label{theorem-autfn-outside} Let $t\in H_1(\mathrm{Aut}(F_2);\mathbb{F}_2)$ denote the element determined by the transformation $x_1\mapsto x_1$, $x_2\mapsto x_1x_2$, and let $m\geqslant 1$. Then the kernel of the stabilisation map \[ s_\ast\colon H_m(\mathrm{Aut}(F_{2m});\mathbb{F}_2)\twoheadrightarrow H_m(\mathrm{Aut}(F_{2m+1});\mathbb{F}_2) \] is the span of $t^m$, and the map \[ H_m(\mathrm{Aut}(F_{2m-1});\mathbb{F}_2)\oplus\mathbb{F}_2\to H_m(\mathrm{Aut}(F_{2m});\mathbb{F}_2), \qquad (x,y)\mapsto s_\ast(x)+y\cdot t^m \] is surjective. \end{Th} This theorem shows that the last unstable group $H_m(\mathrm{Aut}(F_{2m});\mathbb{F}_2)$ is either isomorphic to the stable homology $\lim_{n\to\infty} H_m(\mathrm{Aut}(F_n);\mathbb{F}_2)$, or is an extension of it by a copy of $\mathbb{F}_2$ generated by $t^m$. It does not state which possibility holds. Galatius~\cite{Galatius} identified the stable homology $\lim_{n\to\infty}H_\ast (\mathrm{Aut}(F_n))$ with $H_\ast(\Omega^\infty_0 S^\infty)$, where $\Omega_0^\infty S^\infty$ denotes a path-component of $\Omega^\infty S^\infty = \colim_{n\to\infty} \Omega^n S^n$. Thus we are able to place the following bounds on the dimensions of the last two unstable groups for $m\geqslant 1$, where $\epsilon$ is either $0$ or $1$. 
\begin{gather*} \dim(H_m(\mathrm{Aut}(F_{2m});\mathbb{F}_2)) = \dim(H_m(\Omega_0^\infty S^\infty;\mathbb{F}_2))+\epsilon \\ \dim(H_m(\mathrm{Aut}(F_{2m-1});\mathbb{F}_2))\geqslant \dim(H_m(\Omega_0^\infty S^\infty;\mathbb{F}_2)) \end{gather*} \subsection{Applications to general linear groups of PIDs} The general linear groups of a commutative ring $R$ form a family of groups with multiplication $(\mathit{GL}_n(R))_{n\geqslant 0}$. When $R$ is a PID, the realisation $|\mathit{SP}_n|$ of the splitting poset is precisely the \emph{split building} $[R^n]$ studied by Charney, who showed that it is $(n-3)$-connected~\cite{Charney}. Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation} can therefore be applied in this setting. Theorem~\ref{theorem-stability} shows that $H_\ast(\mathit{GL}_{n-1}(R))\to H_\ast(\mathit{GL}_n(R))$ is onto for $\ast\leqslant\frac{n-1}{2}$ and an isomorphism for $\ast\leqslant\frac{n-2}{2}$, where homology is taken with field coefficients. This exactly recovers homological stability with the range due to van der Kallen~\cite{vanderKallen}, but only with field coefficients. Theorems~\ref{theorem-kernel} and~\ref{theorem-realisation} then allow us to learn about the last two unstable groups $H_{m}(\mathit{GL}_{2m-1}(R))$ and $H_m(\mathit{GL}_{2m}(R))$, where little seems to be known in general. In order to illustrate this we specialise to the cases $R=\mathbb{Z}$ and $R=\mathbb{F}_2$ and take coefficients in $\mathbb{F}_2$; this is the content of our next two subsections. \subsection{Applications to the general linear groups of $\mathbb{Z}$} We now specialise to the groups $\mathit{GL}_n(\mathbb{Z})$ and take coefficients in $\mathbb{F}_2$. Theorems~\ref{theorem-kernel} and~\ref{theorem-realisation} give us the following information about the final two unstable groups $H_m(\mathit{GL}_{2m}(\mathbb{Z});\mathbb{F}_2)$ and $H_m(\mathit{GL}_{2m-1}(\mathbb{Z});\mathbb{F}_2)$. 
\begin{Th} \label{glnz-outside} Let $t$ denote the element of $H_1(\mathit{GL}_2(\mathbb{Z});\mathbb{F}_2)$ determined by the matrix $\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1\end{smallmatrix}\right)$ and let $m\geqslant 1$. Then the kernel of the stabilisation map \[ s_\ast\colon H_m(\mathit{GL}_{2m}(\mathbb{Z});\mathbb{F}_2)\twoheadrightarrow H_m(\mathit{GL}_{2m+1}(\mathbb{Z});\mathbb{F}_2) \] is the span of $t^m$, and the map \[ H_m(\mathit{GL}_{2m-1}(\mathbb{Z});\mathbb{F}_2)\oplus\mathbb{F}_2\to H_m(\mathit{GL}_{2m}(\mathbb{Z});\mathbb{F}_2), \qquad (x,y)\mapsto s_\ast(x)+y\cdot t^m \] is surjective. \end{Th} This theorem shows that the last unstable group $H_m(\mathit{GL}_{2m}(\mathbb{Z});\mathbb{F}_2)$ is either isomorphic to the stable homology $\lim_{n\to\infty} H_m(\mathit{GL}_n(\mathbb{Z});\mathbb{F}_2)$, or is an extension of it by a copy of $\mathbb{F}_2$ generated by $t^m$. It does not guarantee that $t^m\neq 0$, and so does not specify which possibility occurs. The theorem also gives us the following lower bounds on the dimensions of the last two unstable groups in terms of $\dim(\lim_{n\to\infty} H_m(\mathit{GL}_n(\mathbb{Z});\mathbb{F}_2))$, and in particular shows that they are highly nontrivial. \begin{gather*} \dim(H_m(\mathit{GL}_{2m}(\mathbb{Z});\mathbb{F}_2)) = \dim\left(\lim_{n\to\infty}H_m(\mathit{GL}_n(\mathbb{Z});\mathbb{F}_2)\right)+\epsilon \\ \dim(H_m(\mathit{GL}_{2m-1}(\mathbb{Z});\mathbb{F}_2)) \geqslant \dim\left(\lim_{n\to\infty} H_m(\mathit{GL}_n(\mathbb{Z});\mathbb{F}_2)\right) \end{gather*} Here $\epsilon$ is either $0$ or $1$. \subsection{Applications to the general linear groups of $\mathbb{F}_2$} Now let us specialise to the groups $\mathit{GL}_n(\mathbb{F}_2)$. Quillen showed that in this case the stable homology $\lim_{n\to\infty}H_\ast(\mathit{GL}_n(\mathbb{F}_2);\mathbb{F}_2)$ vanishes~\cite[Section~11]{Quillen}. 
Combining this with Maazen's stability result shows that $H_m(\mathit{GL}_n(\mathbb{F}_2);\mathbb{F}_2)=0$ for $n\geqslant 2m+1$. It is natural to ask for a description of the final unstable homology groups $H_m(\mathit{GL}_{2m}(\mathbb{F}_2);\mathbb{F}_2)$. These are known to be nontrivial for $m=1$ and $m=2$, the latter case being due to Milgram and Priddy (Example~2.6 and Theorem~6.5 of~\cite{MilgramPriddy}), but to the best of our knowledge nothing further is known. By applying Theorem~\ref{theorem-kernel} we obtain the following result, which determines each of the groups $H_m(\mathit{GL}_{2m}(\mathbb{F}_2);\mathbb{F}_2)$ up to a single ambiguity. \begin{Th}\label{theorem-glnftwo} Let $t$ denote the element of $H_1(\mathit{GL}_2(\mathbb{F}_2);\mathbb{F}_2)$ determined by the matrix $\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1\end{smallmatrix}\right)$. Then $H_m(\mathit{GL}_{2m}(\mathbb{F}_2);\mathbb{F}_2)$ is either trivial, or is a copy of $\mathbb{F}_2$ generated by the class $t^m$. \end{Th} We hope that by extending the techniques of the present paper we will be able in future to prove the following conjecture. We anticipate that the known non-vanishing of $t$ and $t^2$ will be an essential ingredient in its proof. \begin{conjecture} For every $m\geqslant 1$ the group $H_m(\mathit{GL}_{2m}(\mathbb{F}_2);\mathbb{F}_2)$ is a single copy of $\mathbb{F}_2$ generated by the class $t^m$, where $t\in H_1(\mathit{GL}_2(\mathbb{F}_2);\mathbb{F}_2)$ is the element determined by the matrix $\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1\end{smallmatrix}\right)$. 
\end{conjecture} A proof of this conjecture would, \emph{via} the homomorphisms $\mathrm{Aut}(F_n)\to\mathit{GL}_n(\mathbb{Z})\to\mathit{GL}_n(\mathbb{F}_2)$, also resolve the ambiguities in Theorems~\ref{theorem-autfn-outside} and~\ref{glnz-outside}, showing that the final unstable homology groups $H_m(\mathrm{Aut}(F_{2m}),\mathbb{F}_2)$ and $H_m(\mathit{GL}_{2m}(\mathbb{Z});\mathbb{F}_2)$ are extensions by $\mathbb{F}_2$ of $\lim_{n\to\infty} H_m(\mathrm{Aut}(F_{n});\mathbb{F}_2)$ and $\lim_{n\to\infty}H_m(\mathit{GL}_{n}(\mathbb{Z});\mathbb{F}_2)$ respectively. In particular this would confirm that the known homological stability ranges are sharp. Theorem~\ref{theorem-glnftwo} is relevant to questions about the groups $H^m(\mathit{GL}_{2m}(\mathbb{F}_2);\mathbb{F}_2)$ raised by Milgram and Priddy in~\cite[p.301]{MilgramPriddy}, and posed explicitly by Priddy in~\cite[section 5]{PriddyProblem}. Let $M_{mm}$ denote the subgroup of $\mathit{GL}_{2m}(\mathbb{F}_2)$ consisting of matrices of the form \[ \begin{pmatrix} I_m & \ast \\ 0 & I_m \end{pmatrix}. \] Milgram and Priddy describe an element $\det_m\in H^m(M_{mm};\mathbb{F}_2)$ that is invariant under the action of $N_{\mathit{GL}_{2m}(\mathbb{F}_2)}(M_{mm})/M_{mm} =\mathit{GL}_m(\mathbb{F}_2)\times\mathit{GL}_m(\mathbb{F}_2)$, and so potentially lifts to an element of $H^m(\mathit{GL}_{2m}(\mathbb{F}_2);\mathbb{F}_2)$. Priddy asks whether $\det_m$ lifts to $H^m(\mathit{GL}_{2m}(\mathbb{F}_2);\mathbb{F}_2)$, and if so, whether it spans $H^m(\mathit{GL}_{2m}(\mathbb{F}_2);\mathbb{F}_2)$. As explained to us by David Sprehn, $t^m$ is the image of a class in $H_m(M_{mm};\mathbb{F}_2)$, and $\det_m$ spans the invariants $H^m(M_{mm};\mathbb{F}_2)^{\mathit{GL}_m(\mathbb{F}_2)\times\mathit{GL}_m(\mathbb{F}_2)}$. Theorem~\ref{theorem-glnftwo} therefore shows that $H^m(\mathit{GL}_{2m}(\mathbb{F}_2);\mathbb{F}_2)$ is either trivial, or is a single copy of $\mathbb{F}_2$ generated by a lift of $\det_m$. 
\subsection{Decomposability beyond the stable range} Let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication, and consider the bigraded commutative ring $A=\bigoplus_{p\geqslant 0}H_\ast(G_p)$. Homological stability tells us that any element of $H_\ast(G_p)$ that lies in the stable range decomposes as a product of elements in the augmentation ideal of $A$. (In fact it tells us that such an element decomposes as a product with the generator of $H_0(G_1)$.) We believe that connectivity bounds on the splitting complex can yield decomposability results far beyond the stable range. The following conjecture was formulated after studying explicit computations for symmetric groups and braid groups~\cite{CLM}, in which cases it holds. \begin{conjecture} \label{conjecture} Let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication. Suppose that $|\mathit{SP}_n|$ is $(n-3)$-connected for all $n\geqslant 2$. Then the map \[ \mu\colon \bigoplus_{\substack{p+q=n\\p,q\geqslant 1}} H_\ast(G_p)\otimes H_\ast(G_{q}) \longrightarrow H_\ast(G_n) \] is surjective in degrees $\ast\leqslant (n-2)$, and its kernel is the image of \[ \alpha\colon \bigoplus_{\substack{p+q+r=n\\p,q,r\geqslant 1}} H_\ast(G_p)\otimes H_\ast(G_q)\otimes H_\ast(G_r) \longrightarrow \bigoplus_{\substack{p+q=n\\p,q\geqslant 1}} H_\ast(G_p)\otimes H_\ast(G_{q}) \] in degrees $\ast\leqslant (n-3)$. Here $\mu$ and $\alpha$ are defined by $\mu(x\otimes y) = x\cdot y$ and $\alpha(x\otimes y\otimes z) = (x\cdot y)\otimes z - x\otimes(y\cdot z)$. \end{conjecture} We are able to prove the surjectivity statement in degrees $\ast\leqslant \frac{n}{2}$ and the injectivity statement in degrees $\ast\leqslant \frac{n-1}{2}$, both of which are half a degree better than the stable range (Lemmas~\ref{twomplusoneacyclic-lemma} and~\ref{twomacyclic-lemma}), and Theorems~\ref{theorem-kernel} and~\ref{theorem-realisation} are the `practical' versions of these facts. 
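For orientation (this remark is ours, not part of the conjecture), note that the image of $\alpha$ always lies in the kernel of $\mu$, since

```latex
\mu(\alpha(x\otimes y\otimes z))
  = \mu\big((x\cdot y)\otimes z - x\otimes(y\cdot z)\big)
  = (x\cdot y)\cdot z - x\cdot(y\cdot z)
  = 0
```

by associativity of the product on $A$. The force of the conjecture is therefore that, in the stated range of degrees, the kernel of $\mu$ is no larger than the image of $\alpha$.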
We hope that in future work we will be able to obtain information further beyond the stable range. \subsection{Organisation of the paper} In the first half of the paper we introduce the concepts required to understand the statements of Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation} and then, assuming these theorems for the time being, we give the proofs of the applications stated earlier in this introduction. Section~\ref{families-section} introduces families of groups with multiplication, and introduces four main examples: the symmetric groups, general linear groups of PIDs, automorphism groups of free groups, and braid groups. Section~\ref{section-spn} introduces the splitting posets $\mathit{SP}_n$ associated to a family of groups with multiplication, and identifies them in the four examples. In section~\ref{section-connectivity} we show that for these four examples, the realisation $|\mathit{SP}_n|$ of the splitting poset is $(n-3)$-connected. Finally, in section~\ref{section-applications} we give the proofs of Theorems~\ref{glnz-outside}, \ref{theorem-glnftwo}, \ref{theorem-autfn-stability} and \ref{theorem-autfn-outside}. In the second half of the paper we give the proofs of our three general results, Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation}. Section~\ref{section-scn} introduces the splitting complex, an alternative to the splitting poset that features in the rest of the argument. Section~\ref{section-bar} introduces a graded chain complex $\mathcal{B}_n$ obtained from a family of groups with multiplication. In section~\ref{section-spectral} we show that, under the hypotheses of Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation} there is a spectral sequence with $E^1$-term $\mathcal{B}_n$ and converging to $0$ in total degrees $\leqslant(n-2)$. Section~\ref{section-filtration} introduces and studies a filtration on $\mathcal{B}_n$. 
The filtration allows us to understand the homology of $\mathcal{B}_n$ inductively within a range of degrees. Then sections~\ref{section-stability}, \ref{section-realisation} and~\ref{section-kernel} give the proofs of the three theorems. \subsection{Acknowledgements} \label{acknowledgements} My thanks to Rachael Boyd, Anssi Lahtinen, Martin Palmer, Oscar Randal-Williams, David Sprehn and Nathalie Wahl for useful discussions. \section{Families of groups with multiplication} \label{families-section} In this section we define the families of groups with multiplication to which our methods will apply, and we provide a series of examples. \begin{definition}\label{definition-families} A \emph{family of groups with multiplication} $(G_p)_{p\geqslant 0}$ is a sequence of discrete groups $G_0,G_1,G_2,\ldots$ equipped with a \emph{multiplication map} \[ G_p\times G_q\longrightarrow G_{p+q}, \qquad (g,h)\longmapsto g\diamond h \] for each $p,q\geqslant 0$. We assume that the following axioms hold: \begin{enumerate} \item \emph{Unit:} The group $G_0$ is the trivial group, and its unique element $e_0$ acts as a unit for left and right multiplication. In other words $e_0\diamond g = g = g\diamond e_0$ for all $p\geqslant 0$ and all $g\in G_p$. \item \emph{Associativity:} The associative law \[ (g\diamond h)\diamond k = g\diamond(h\diamond k) \] holds for all $p,q,r\geqslant 0$ and all $g\in G_p$, $h\in G_q$ and $k\in G_r$. Consequently, for any sequence $p_1,\ldots,p_r\geqslant 0$ there is a well-defined \emph{iterated multiplication map} \[ G_{p_1}\times \cdots \times G_{p_r}\longrightarrow G_{p_1+\cdots+p_r}. \] \item \emph{Commutativity:} The product maps are commutative up to conjugation, in the sense that there exists an element $\tau_{pq}\in G_{p+q}$ such that the squares \[\xymatrix{ G_p\times G_q \ar[r]\ar[d]_\cong & G_{p+q} \ar[d]^{c_{\tau_{pq}}} \\ G_q\times G_p \ar[r] & G_{p+q} }\] commute, where $c_{\tau_{pq}}$ denotes conjugation by $\tau_{pq}$. 
(We do not impose any further conditions upon the $\tau_{pq}$.) \item \emph{Injectivity:} The multiplication maps are all injective. It follows that the iterated multiplication maps are also injective. Using this, we henceforth regard $G_{p_1}\times\cdots\times G_{p_r}$ as a subgroup of $G_{p_1+\cdots+p_r}$ for each $p_1,\ldots,p_r\geqslant 0$. \item \emph{Intersection:} We have \[ (G_{p+q}\times G_r)\cap (G_p\times G_{q+r}) = G_p\times G_q\times G_r, \] for all $p,q,r\geqslant 0$, where $G_{p+q}\times G_r$, $G_p\times G_{q+r}$ and $G_p\times G_q\times G_r$ are all regarded as subgroups of $G_{p+q+r}$. \end{enumerate} We denote the neutral element of $G_p$ by $e_p$. \end{definition} \begin{remark} We could delete the intersection axiom from Definition~\ref{definition-families}, at the expense of working with the splitting complex of section~\ref{section-scn} instead of the splitting poset. See Remark~\ref{spnorscn} for further discussion. \end{remark} \begin{example}[Symmetric groups] For $p\geqslant 0$ we let $\Sigma_p$ denote the symmetric group on $p$ letters. Then we may form the family of groups with multiplication $(\Sigma_p)_{p\geqslant 0}$, equipped with the product maps \[ \Sigma_p\times\Sigma_q\to\Sigma_{p+q}, \qquad (f,g)\mapsto f\sqcup g \] where $f\sqcup g$ is the automorphism of $\{1,\ldots,p+q\}\cong \{1,\ldots,p\}\sqcup\{1,\ldots,q\}$ given by $f$ on the first summand and by $g$ on the second. Then the axioms of a family of groups with multiplication are all immediately verified. In the case of commutativity, the element $\tau_{pq}$ is the permutation that interchanges the first $p$ and last $q$ letters while preserving their ordering. \end{example} \begin{example}[General linear groups of PIDs] Let $R$ be a PID. For $n\geqslant 0$, let $\mathit{GL}_n(R)$ denote the general linear group of $n\times n$ invertible matrices over $R$.
Then we may form the family of groups with multiplication $(\mathit{GL}_p(R))_{p\geqslant 0}$, equipped with the product maps \[ \mathit{GL}_p(R)\times \mathit{GL}_q(R)\to\mathit{GL}_{p+q}(R), \qquad (A,B)\mapsto \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \] given by the block sum of matrices. The unit, associativity, commutativity, injectivity and intersection axioms all hold by inspection. In the case of commutativity, the element $\tau_{pq}$ is the permutation matrix $\left(\begin{smallmatrix} 0 & I_q \\ I_p & 0 \end{smallmatrix}\right)$. (It would have been enough to assume that $R$ is a commutative ring here. However, as we will see later, we will only be able to apply our results when $R$ is a PID.) \end{example} \begin{example}[Automorphism groups of free groups] For $p\geqslant 0$ we let $F_p$ denote the free group on $p$ letters, and we let $\mathrm{Aut}(F_p)$ denote the group of automorphisms of $F_p$. Then we may form the family of groups with multiplication $(\mathrm{Aut}(F_p))_{p\geqslant 0}$, equipped with the product maps \[ \mathrm{Aut}(F_p)\times \mathrm{Aut}(F_q) \to \mathrm{Aut}(F_{p+q}), \qquad (f,g)\mapsto f\ast g. \] Here $f\ast g$ is the automorphism of $F_{p+q}\cong F_p\ast F_q$ given by $f$ on the first free factor and by $g$ on the second. Then the unit, associativity and commutativity axioms all hold by inspection. In the case of commutativity, the element $\tau_{pq}$ is the automorphism that interchanges the first $p$ generators with the last $q$ generators. The injectivity axiom is also clear. We prove the intersection axiom as follows. Suppose that $ f_p\ast f_{q+r} = f_{p+q}\ast f_r $ where each $f_\alpha$ lies in $\mathrm{Aut}(F_\alpha)$. We would like to show that $f_{q+r} = f_q\ast f_r$ for some $f_q\in\mathrm{Aut}(F_q)$. Let $x_i$ be one of the middle $q$ generators. Then $f_{q+r}(x_i)=(f_p\ast f_{q+r})(x_i)=(f_{p+q}\ast f_r)(x_i)=f_{p+q}(x_i)$, so $f_{q+r}$ sends $x_i$ to a reduced word in the first $p+q$ generators and at the same time to a reduced word in the last $q+r$ generators.
Since an element of a free group has a unique reduced expression, it follows that $x_i$ is sent to a word in the middle $q$ generators. Thus $f_{q+r} = f_q\ast f_r$ for some $f_q\colon F_q\to F_q$. By inverting the original equation we see that in fact $f_q\in\mathrm{Aut}(F_q)$. \end{example} \begin{example}[Braid groups] \label{example-braid} Given $p\geqslant 0$, let $B_p$ denote the braid group on $p$ strands. This is defined to be the group of diffeomorphisms of the disc $D^2$ that preserve the boundary pointwise and that preserve (not necessarily pointwise) a set $X_p\subset D^2$ of $p$ points in the interior of $D^2$, arranged from left to right, all taken modulo isotopies relative to $\partial D^2$ and $X_p$. \[\begin{tikzpicture}[scale=0.1] \path[draw, line width=1.5, fill=white!80!black] (0,0) circle (15); \path[draw, fill=black] (-10,0) circle (0.5); \path[draw, fill=black] (-5,0) circle (0.5); \path[draw, fill=black] (-0,0) circle (0.5); \path[draw, fill=black] (5,0) circle (0.5); \path[draw, fill=black] (10,0) circle (0.5); \node at (135:18) {$D^2$}; \node at (0,-5) {$X_5$}; \end{tikzpicture}\] The product maps are \[ B_p\times B_q\to B_{p+q}, \qquad (\beta,\gamma)\mapsto \beta\sqcup\gamma \] where $\beta\sqcup\gamma$ denotes the braid obtained by juxtaposing $\beta$ and $\gamma$. More precisely, we choose an embedding $D^2\sqcup D^2\hookrightarrow D^2$ that embeds two copies of $D^2$ `side by side' in $D^2$, in such a way that $X_p\sqcup X_q$ is sent into $X_{p+q}$ preserving the left-to-right order.
\[\begin{tikzpicture}[scale=0.1] \path[draw, line width=1.5, fill=white!90!black] (0,0) circle (15); \path[draw, line width=1, fill=white!80!black] (-5,0) circle (6.5); \path[draw, line width=1, fill=white!80!black] (7.5,0) circle (4.5); \path[draw, fill=black] (-10,0) circle (0.5); \path[draw, fill=black] (-5,0) circle (0.5); \path[draw, fill=black] (-0,0) circle (0.5); \path[draw, fill=black] (5,0) circle (0.5); \path[draw, fill=black] (10,0) circle (0.5); \end{tikzpicture}\] Then $\beta\sqcup\gamma$ is defined to be the map given by $\beta$ and $\gamma$ on the respective embedded punctured discs, and by the identity elsewhere. Then the unit, associativity and injectivity axioms are immediate. The commutativity axiom holds when we take $\tau_{pq}$ to be the class of a diffeomorphism that interchanges the two embedded discs, passing the left one above the right. The intersection axiom follows from the fact that we may identify the subgroup $B_p\times B_{q+r}\subseteq B_{p+q+r}$ with the set of isotopy classes of diffeomorphisms that fix an arc that cuts the disc in two, separating the first $p$ punctures from the last $q+r$ punctures, and similarly for $B_{p+q}\times B_r$ and $B_p\times B_q\times B_r$. \end{example} \section{The splitting poset} \label{section-spn} In this section we define the splitting posets associated to a family of groups with multiplication, and identify them in the case of symmetric groups, braid groups, general linear groups of PIDs, and automorphism groups of free groups. Conditions on the connectivity of these posets are the key assumptions in all of our main theorems. \begin{definition}[The splitting poset] Let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication.
Then for $n\geqslant 2$, the $n$-th \emph{splitting poset} $\mathit{SP}_n$ of $(G_p)_{p\geqslant 0}$ is defined to be the set \[ \mathit{SP}_n= \frac{G_n}{G_1\times G_{n-1}} \sqcup \frac{G_n}{G_2\times G_{n-2}} \sqcup \cdots \sqcup \frac{G_n}{G_{n-2}\times G_{2}} \sqcup \frac{G_n}{G_{n-1}\times G_{1}} \] equipped with the partial ordering $\leqslant$ with respect to which \[ g(G_{p}\times G_{n-p}) \leqslant h(G_q\times G_{n-q}) \] if and only if $p\leqslant q$ and there is $k\in G_n$ such that \[ g(G_p\times G_{n-p})=k(G_p\times G_{n-p}) \ \text{ and }\ h(G_q\times G_{n-q})=k(G_q\times G_{n-q}). \] Lemma~\ref{chain-lemma} below verifies that the relation $\leqslant$ is transitive. \end{definition} \begin{lemma}\label{chain-lemma} Given an arbitrary chain \begin{equation}\label{chain1} g_0(G_{p_0}\times G_{n-p_0}) \leqslant g_1(G_{p_1}\times G_{n-p_1}) \leqslant \cdots \leqslant g_r(G_{p_r}\times G_{n-p_r}) \end{equation} in $\mathit{SP}_n$ we may assume, after possibly choosing new coset representatives, that $g_0=\cdots=g_r$. It follows that $g_i(G_{p_i}\times G_{n-p_i})\leqslant g_j(G_{p_j}\times G_{n-p_j})$ for any $i\leqslant j$. \end{lemma} \begin{proof} We prove by induction on $s=1,2,\ldots,r$ that given an arbitrary chain \eqref{chain1} we may assume, after choosing new coset representatives, that $g_0=\cdots=g_s=g$ for some $g\in G_n$, the case $s=r$ being our desired result. When $s=1$, the claim is immediate from the definition of $\leqslant$. For the induction step, suppose that the claim holds for $s$. Take an arbitrary chain~\eqref{chain1} and use the induction hypothesis to choose new coset representatives so that $g_0=\cdots=g_s=g$. Since $g(G_{p_s}\times G_{n-p_s})\leqslant g_{s+1}(G_{p_{s+1}}\times G_{n-p_{s+1}})$ we may assume, after replacing $g_{s+1}$ if necessary, that $g (G_{p_s}\times G_{n-p_s})=g_{s+1}(G_{p_s}\times G_{n-p_s})$. Then there are $\gamma\in G_{p_s}$ and $\delta\in G_{n-p_s}$ such that $g^{-1}g_{s+1}=\gamma\diamond\delta$.
Since $e_{p_s}\diamond \delta$ lies in $G_{p_t}\times G_{n-p_t}$ for $t\leqslant s$, we may replace $g$ with $g(e_{p_s}\diamond\delta)$. And since $\gamma\diamond e_{n-p_s}$ lies in $G_{p_{s+1}}\times G_{n-p_{s+1}}$, we may replace $g_{s+1}$ with $g_{s+1}(\gamma^{-1}\diamond e_{n-p_s})$. But then $g_{s+1}=g$. So $g_0=\cdots=g_{s+1}$ as required. \end{proof} Now we will identify the splitting posets associated to the symmetric groups, general linear groups of PIDs, automorphism groups of free groups, and braid groups. \begin{proposition}[Splitting posets for symmetric groups] \label{proposition-spn-sigman} For the family of groups with multiplication $(\Sigma_p)_{p\geqslant 0}$, the $n$-th splitting poset $\mathit{SP}_n$ is isomorphic to the poset of nonempty proper subsets of $\{1,\ldots,n\}$ under inclusion. \end{proposition} \begin{proof} We define a bijection $\phi$ from $\mathit{SP}_n$ to the poset of nonempty proper subsets of $\{1,\ldots,n\}$ by the rule \[ \phi\left( g(\Sigma_p\times\Sigma_{n-p}) \right) = \{g(1),\ldots,g(p)\}. \] This $\phi$ is a well-defined bijection, and we must show that \[ g(\Sigma_p\times\Sigma_{n-p})\leqslant h(\Sigma_q\times\Sigma_{n-q}) \iff \{g(1),\ldots,g(p)\}\subseteq\{h(1),\ldots,h(q)\}. \] If the first condition holds then $p\leqslant q$ and we may assume that $g=h$, so that the second condition follows immediately. If the second condition holds then $p\leqslant q$ and, replacing $h$ by $h\circ(k\times \mathrm{Id})$ and $g$ by $g\circ (\mathrm{Id}\times l)$ for an appropriate $k\in\Sigma_q$ and $l\in\Sigma_{n-p}$, we may assume that $g=h$, so that the first condition holds. \end{proof} Let $R$ be a PID.
To identify the splitting posets associated to the family $(\mathit{GL}_p(R))_{p\geqslant 0}$, recall that Charney in~\cite{Charney} defined $S_R(R^n)$ to be the poset of ordered pairs $(P,Q)$ of proper submodules of $R^n$ satisfying $P\oplus Q=R^n$, equipped with the partial order $\leqslant$ defined by \[ (P,Q)\leqslant (P',Q') \iff P\subseteq P'\text{ and }Q\supseteq Q'. \] Charney then defined the \emph{split building} of $R^n$, denoted by $[R^n]$, to be the realisation $|S_R(R^n)|$. (Note that Charney worked with arbitrary Dedekind domains.) \begin{proposition}[Splitting posets for general linear groups of PIDs] \label{proposition-spn-glnr} Let $R$ be a PID. For the family of groups with multiplication $(\mathit{GL}_p(R))_{p\geqslant 0}$, the splitting poset $\mathit{SP}_n$ is isomorphic to $S_R(R^n)$, so that $|\mathit{SP}_n|$ is isomorphic to the split building $[R^n]$. \end{proposition} \begin{proof} Define $s_1,\ldots,s_{n-1}\in\mathit{SP}_n$ and $t_1,\ldots,t_{n-1}\in S_R(R^n)$ by \[s_p=e_n(\mathit{GL}_p(R)\times\mathit{GL}_{n-p}(R)), \qquad t_p=(\mathrm{span}(x_1,\ldots,x_p),\mathrm{span}(x_{p+1},\ldots,x_n)),\] where $e_n\in\mathit{GL}_n(R)$ denotes the identity element and $x_1,\ldots,x_n$ is the standard basis of $R^n$. Then the following three properties hold for the elements $s_i\in\mathit{SP}_n$, and their analogues hold for the $t_i\in S_R(R^n)$. \begin{enumerate} \item \label{ooone} $s_1,\ldots,s_{n-1}$ are a complete set of orbit representatives for the $\mathit{GL}_n(R)$ action on $\mathit{SP}_n$. \item \label{ootwo} The stabiliser of $s_p$ is $\mathit{GL}_p(R)\times\mathit{GL}_{n-p}(R)$. \item \label{oothree} $x\leqslant y$ if and only if there is $g\in \mathit{GL}_n(R)$ such that $x=g\cdot s_p$ and $y=g\cdot s_q$ where $p\leqslant q$. \end{enumerate} It follows immediately that there is a unique isomorphism of posets $\mathit{SP}_n\to S_R(R^n)$ satisfying $s_i\mapsto t_i$ for all $i$.
The three properties hold for $s_i\in\mathit{SP}_n$ by definition. We prove them for $t_i\in S_R(R^n)$ as follows. For \eqref{ooone}, the fact that $R$ is a PID guarantees that if $(P,Q)\in S_R(R^n)$ then $P$ and $Q$ are free, of ranks $p$ and $q$ say, such that $p+q=n$. If we choose bases of $P$ and $Q$ and concatenate them to form an element $A\in\mathit{GL}_n(R)$, then $A\cdot t_p = (P,Q)$ as required. Property~\eqref{ootwo} is immediate. For~\eqref{oothree}, suppose that $(P,Q)\leqslant (P',Q')$ and let $p=\mathrm{rank}(P)$ and $p'=\mathrm{rank}(P')$, so that $p\leqslant p'$. Then \[ R^n = P\oplus (P'\cap Q)\oplus Q', \qquad P\oplus (P'\cap Q) = P', \qquad (P'\cap Q)\oplus Q' = Q. \] Let $g$ denote the element of $\mathit{GL}_n(R)$ whose columns are given by a basis of $P$, followed by a basis of $(P'\cap Q)$, followed by a basis of $Q'$. Again this is possible since $R$ is a PID. Then $(P,Q)=g\cdot t_p$ and $(P',Q')=g\cdot t_{p'}$ where $p\leqslant p'$, as required. \end{proof} Let us now identify the splitting posets for automorphism groups of free groups. The situation is closely analogous to that for general linear groups. Define $S(F_n)$, for each $n\geqslant 2$, to be the poset of ordered pairs $(P,Q)$ of proper subgroups of $F_n$ satisfying $P\ast Q = F_n$. It is equipped with the partial order under which $(P,Q)\leqslant (P',Q')$ if and only if $(P,Q)=(J_0,J_1\ast J_2)$ and $(P',Q')=(J_0\ast J_1,J_2)$ for some proper subgroups $J_0,J_1,J_2$ of $F_n$ satisfying $J_0\ast J_1\ast J_2 = F_n$. (Note that the condition in the definition of $\leqslant$ is stronger than assuming that $P\subseteq P'$ and $Q'\subseteq Q$.) The proof of the following proposition is similar to that of Proposition~\ref{proposition-spn-glnr}, and we leave the details to the reader.
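To make the partial order on $S(F_n)$ concrete, here is a small example (ours, purely for illustration). In $F_3$ with free basis $x_1,x_2,x_3$, the free factors $J_0=\langle x_1\rangle$, $J_1=\langle x_2\rangle$ and $J_2=\langle x_3\rangle$ witness the comparison

```latex
\[
  \bigl(\langle x_1\rangle,\ \langle x_2,x_3\rangle\bigr)
  \;\leqslant\;
  \bigl(\langle x_1,x_2\rangle,\ \langle x_3\rangle\bigr),
\]
% since here
\[
  (P,Q)=(J_0,\,J_1\ast J_2),\qquad
  (P',Q')=(J_0\ast J_1,\,J_2),\qquad
  J_0\ast J_1\ast J_2=F_3 .
\]
```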
\begin{proposition}[Splitting posets for automorphism groups of free groups] \label{proposition-spn-autfn} For the family of groups with multiplication $(\mathrm{Aut}(F_p))_{p\geqslant 0}$, the splitting poset $\mathit{SP}_n$ is isomorphic to $S(F_n)$. \end{proposition} Let us now identify the splitting posets associated to the family $(B_p)_{p\geqslant 0}$ of braid groups. See Example~\ref{example-braid} for the relevant notation. Given $n\geqslant 2$, let us define a poset $\mathcal{A}_n$ as follows. The elements of $\mathcal{A}_n$ are the arcs embedded in $D^2\setminus X_n$, starting at the `north pole' of the disc and ending at the `south pole', such that $X_n$ meets both components of their complement, all taken modulo isotopies in $D^2\setminus X_n$ that preserve the endpoints. \[\begin{tikzpicture}[scale=0.1] \path[draw, line width=1.5, fill=white!80!black] (0,0) circle (15); \path[draw, fill=black] (-10,0) circle (0.5); \path[draw, fill=black] (-5,0) circle (0.5); \path[draw, fill=black] (-0,0) circle (0.5); \path[draw, fill=black] (5,0) circle (0.5); \path[draw, fill=black] (10,0) circle (0.5); \path[draw, fill=black] (0,15) circle (0.5); \path[draw, fill=black] (0,-15) circle (0.5); \path[draw] (0,15) .. controls (4,5) and (5,-5) .. (0,-5) .. controls (-10,-5) and (-5,5 ) .. (-10,5) .. controls (-10,5) and (-12.5,5) .. (-12.5,0) .. controls (-12.5,-5) and (-5,-10) .. (0,-15); \node at (-10,7) {$\alpha$}; \end{tikzpicture}\] Given $\alpha,\beta\in\mathcal{A}_n$, we say that $\alpha\leqslant\beta$ if $\alpha$ and $\beta$ have representatives $a$ and $b$ that meet only at their endpoints, and such that $a$ lies `to the left' of $b$. (More precisely, $a$ and $b$ must meet the north pole in anticlockwise order and the south pole in clockwise order.)
\[\begin{tikzpicture}[scale=0.1] \path[draw, line width=1.5, fill=white!80!black] (0,0) circle (15); \path[draw, fill=black] (-10,0) circle (0.5); \path[draw, fill=black] (-5,0) circle (0.5); \path[draw, fill=black] (-0,0) circle (0.5); \path[draw, fill=black] (5,0) circle (0.5); \path[draw, fill=black] (10,0) circle (0.5); \path[draw, fill=black] (0,15) circle (0.5); \path[draw, fill=black] (0,-15) circle (0.5); \path[draw] (0,15) .. controls (4,5) and (5,-5) .. (0,-5) .. controls (-10,-5) and (-5,5 ) .. (-10,5) .. controls (-10,5) and (-12.5,5) .. (-12.5,0) .. controls (-12.5,-5) and (-5,-10) .. (0,-15); \path[draw] (0,15) .. controls (5,10) and (7.5,5).. (7.5,0) .. controls (7.5,-5) and (5,-10) .. (0,-15); \node at (-10,7) {$\alpha$}; \node at (7,-10) {$\beta$}; \end{tikzpicture}\] Again, the proof of the following is similar to that of Proposition~\ref{proposition-spn-glnr}, and we leave it to the reader to provide details if they wish. \begin{proposition}[Splitting posets for braid groups] For the family of groups with multiplication $(B_p)_{p\geqslant 0}$, we have $\mathit{SP}_n\cong \mathcal{A}_n$. \end{proposition} \section{Examples of connectivity of $|\mathit{SP}_n|$} \label{section-connectivity} Our Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation} apply to a family of groups with multiplication only when the associated splitting posets satisfy the connectivity condition that each $|\mathit{SP}_n|$ is $(n-3)$-connected. In this section we verify this condition for our main examples: symmetric groups, where the result is elementary; general linear groups of PIDs, where the result was proved by Charney in~\cite{Charney}; automorphism groups of free groups, where we make use of Hatcher and Vogtmann's result on the connectivity of the \emph{poset of free factorisations of $F_n$} in~\cite{HatcherVogtmannCerf}; and for braid groups, where the claim is a variant of known results on arc complexes. 
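For the symmetric groups, the identification of $\mathit{SP}_n$ with the poset of nonempty proper subsets of $\{1,\ldots,n\}$ (Proposition~\ref{proposition-spn-sigman}) makes the connectivity claim easy to sanity-check by machine for small $n$: the reduced Euler characteristic of the order complex of $\mathit{SP}_n$ must agree with that of $S^{n-2}$, namely $(-1)^n$. The following Python sketch (ours, purely illustrative and not part of the formal development; the function names are our own) enumerates the chains directly.

```python
from itertools import combinations

def splitting_poset_simplices(n):
    """Chains of nonempty proper subsets of {1, ..., n}.

    Under the identification of SP_n for the symmetric groups, these
    chains are exactly the simplices of the order complex of SP_n.
    """
    elements = [frozenset(c) for k in range(1, n)
                for c in combinations(range(1, n + 1), k)]
    simplices = []

    def extend(chain):
        simplices.append(tuple(chain))
        for e in elements:
            if chain[-1] < e:  # strict inclusion of sets
                extend(chain + [e])

    for e in elements:
        extend([e])
    return simplices

def reduced_euler_char(simplices):
    # Alternating sum of face counts, minus 1 for the reduced version.
    return sum((-1) ** (len(s) - 1) for s in simplices) - 1

# |SP_n| should have the homotopy type of S^(n-2), whose reduced
# Euler characteristic is (-1)^n.
for n in (2, 3, 4):
    assert reduced_euler_char(splitting_poset_simplices(n)) == (-1) ** n
```

Of course this checks only the Euler characteristic rather than the homotopy type, but it does catch off-by-one errors in the indexing.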
Let us fix our definitions and notation for realisations of posets. If $P$ is a poset, then its \emph{order complex} (or \emph{flag complex} or \emph{derived complex}) $\Delta(P)$ is the abstract simplicial complex whose vertices are the elements of $P$, and in which vertices $p_0,\ldots,p_r$ span an $r$-simplex if they form a chain $p_0<\cdots<p_r$ after possibly reordering. The \emph{realisation} $|P|$ of $P$ is then defined to be the realisation $|\Delta(P)|$ of $\Delta(P)$. We will usually not distinguish between a simplicial complex and its realisation. So if $P$ is a poset, then the simplicial complex $\Delta(P)$ and the topological space $|\Delta(P)|$ will both be denoted by $|P|$. When we discuss topological properties of a poset or of a simplicial complex, we are referring to the topological properties of its realisation as a topological space. \subsection{Symmetric groups} The result for symmetric groups is elementary. \begin{proposition}[Connectivity of $|\mathit{SP}_n|$ for symmetric groups] For the family of groups with multiplication $(\Sigma_p)_{p\geqslant 0}$ we have $|\mathit{SP}_n|\cong S^{n-2}$, and in particular $|\mathit{SP}_n|$ is $(n-3)$-connected. \end{proposition} \begin{proof} Let $\partial\Delta^{n-1}$ denote the simplicial complex given by the boundary of the simplex with vertices $1,\ldots,n$. Then the face poset $\mathcal{F}(\partial\Delta^{n-1})$ of $\partial\Delta^{n-1}$ is exactly the poset of nonempty proper subsets of $\{1,\ldots,n\}$ ordered by inclusion. But we saw in Proposition~\ref{proposition-spn-sigman} that the latter is isomorphic to $\mathit{SP}_n$. Thus $|\mathit{SP}_n|\cong|\mathcal{F}(\partial\Delta^{n-1})|\cong|\partial\Delta^{n-1}| \cong S^{n-2}$ as required. \end{proof} \subsection{General linear groups of PIDs} Let $R$ be a PID.
In Proposition~\ref{proposition-spn-glnr} we saw that for the family of groups with multiplication $(\mathit{GL}_p(R))_{p\geqslant 0}$ there is an isomorphism $\mathit{SP}_n\cong S_R(R^n)$, where $S_R(R^n)$ is the poset whose realisation is the split building $[R^n]$. Since $R$ is in particular a Dedekind domain, Theorem~1.1 of~\cite{Charney} shows that $[R^n]$ has the homotopy type of a wedge of $(n-2)$-spheres. So we immediately obtain the following. \begin{proposition}[Connectivity of $|\mathit{SP}_n|$ for general linear groups of PIDs] Let $R$ be a PID. For the family of groups with multiplication $(\mathit{GL}_p(R))_{p\geqslant 0}$, and for any $n\geqslant 2$, $|\mathit{SP}_n|$ has the homotopy type of a wedge of $(n-2)$-spheres, and in particular is $(n-3)$-connected. \end{proposition} \subsection{Automorphism groups of free groups} Now we give the proof of the connectivity condition on the splitting posets for automorphism groups of free groups. This is the most involved of our connectivity proofs. \begin{definition} Let $F$ be a free group of finite rank. Define $P(F)$ to be the poset of \emph{ordered} tuples $H=(H_0,\ldots,H_r)$ of proper subgroups of $F$ such that $r\geqslant 1$ and $H_0\ast\cdots\ast H_r=F$. It is equipped with the partial order in which $H\geqslant K$ if $K$ can be obtained by repeatedly amalgamating \emph{adjacent} entries of $H$. \end{definition} \begin{theorem}\label{pf-theorem} If $F$ has rank $n$, then $|P(F)|$ has the homotopy type of a wedge of $(n-2)$-spheres. \end{theorem} \begin{corollary} [Connectivity of $|\mathit{SP}_n|$ for automorphism groups of free groups] \label{spn-autfn-corollary} For the family of groups with multiplication $(\mathrm{Aut}(F_p))_{p\geqslant 0}$, the realisation $|\mathit{SP}_n|$ of the splitting poset has the homotopy type of a wedge of $(n-2)$-spheres, and in particular is $(n-3)$-connected.
\end{corollary} This result has been obtained independently, and with the same proof, as part of work in progress by Kupers, Galatius and Randal-Williams. (See also the remarks after Theorem~\ref{theorem-stability}.) \begin{proof}[Proof of Corollary~\ref{spn-autfn-corollary}] If $P$ is a poset then we denote by $P'$ the \emph{derived poset} of chains $p_0<\cdots<p_r$ in $P$ ordered by inclusion. Its realisation satisfies $|P'|\cong |P|$. Recall from Proposition~\ref{proposition-spn-autfn} that $\mathit{SP}_n$ is isomorphic to the poset $S(F_n)$ defined there. So it will suffice to show that $P(F_n)$ is isomorphic to $S(F_n)'$, for then $|\mathit{SP}_n|\cong |S(F_n)|\cong |S(F_n)'|\cong |P(F_n)|$ and the result follows from Theorem~\ref{pf-theorem}. Consider the maps \[ \lambda\colon P(F_n)\to S(F_n)', \qquad \mu\colon S(F_n)'\to P(F_n) \] defined by \[ \lambda\Bigl(H_0,\ldots,H_{r+1}\Bigr)= \Bigl[ (H_0,H_1\ast \cdots \ast H_{r+1})<\cdots < (H_0\ast\cdots\ast H_{r},H_{r+1})\Bigr] \] and \[ \mu\Bigl[ (A_0,B_0)<\cdots < (A_r,B_r)\Bigr] = \Bigl(A_0,A_1\cap B_0,A_2\cap B_1, \ldots,A_r\cap B_{r-1},B_r\Bigr). \] Then one can verify that $\lambda$ and $\mu$ are mutually inverse maps of posets. The verification requires one to use the fact that if $(X_1,Y_1)<(X_2,Y_2)<(X_3,Y_3)$, then $X_{1}\ast (X_2\cap Y_{1})=X_2$, $Y_{2}\ast(Y_1\cap X_{2})=Y_1$ and $(X_2\cap Y_{1})\ast(X_{3}\cap Y_2)=X_{3}\cap Y_{1}$, which follow from the definition of the partial ordering on $S(F_n)$. \end{proof} We now move towards the proof of Theorem~\ref{pf-theorem}. In order to do so we require another definition. \begin{definition} Let $F$ be a free group of finite rank. Define $Q(F)$ to be the poset of \emph{unordered} tuples $H=(H_0,\ldots,H_r)$ of proper subgroups of $F$ such that $r\geqslant 1$ and $H_0\ast\cdots\ast H_r=F$. Give it the partial order in which $H\geqslant K$ if $K$ can be obtained by repeatedly amalgamating entries of $H$, adjacent or otherwise. 
Let $f\colon P(F)\to Q(F)$ denote the map that sends an ordered tuple to the same tuple, now unordered. \end{definition} The poset $Q(F_n)$ is exactly the opposite of the \emph{poset of free factorisations of $F_n$}. This poset was introduced and studied by Hatcher and Vogtmann in section~6 of~\cite{HatcherVogtmannCerf}, where it was shown that its realisation has the homotopy type of a wedge of $(n-2)$-spheres. It follows that if $F$ is a free group of rank $m$ then $|Q(F)|$ has the homotopy type of a wedge of $(m-2)$-spheres. We will now prove Theorem~\ref{pf-theorem} by deducing the connectivity of $|P(F)|$ from the known connectivity of $|Q(F)|$. In order to do this we will use a poset fibre theorem due to Bj\"orner, Wachs and Welker~\cite{BWW}. Let us recall some necessary notation. Given a poset $P$ and an element $p\in P$, we define $P_{<p}$ to be the poset $\{q\in P\mid q<p\}$. We define $P_{\leqslant p}$, $P_{>p}$ and $P_{\geqslant p}$ similarly. The \emph{length} $\ell(P)$ of a poset $P$ is defined to be the maximum $\ell$ such that there is a chain $p_0<p_1<\cdots<p_\ell$ in $P$; the length of the empty poset is defined to be $-1$. Theorem~1.1 of~\cite{BWW} states that if $f\colon P\to Q$ is a map of posets such that for all $q\in Q$ the fibre $|f^{-1}Q_{\leqslant q}|$ is $\ell(f^{-1}Q_{<q})$-connected, then so long as $|Q|$ is connected, we have \[ |P|\simeq |Q|\vee\bigvee_{q\in Q} |f^{-1}Q_{\leqslant q}|\ast|Q_{>q}| \] where $\ast$ denotes the \emph{join}. See the introduction to~\cite{BWW} for further details. \begin{proof}[Proof of Theorem~\ref{pf-theorem}] The proof is by induction on the rank of $F$. When $\mathrm{rank}(F)=2$ we need only observe that $P(F)$ is an infinite set with trivial partial order, so that $|P(F)|$ is an infinite discrete set, and in particular is a wedge of $0$-spheres. Suppose now that $\mathrm{rank}(F)\geqslant 3$ and that the claim holds for all free groups of smaller rank than $F$. Since $\mathrm{rank}(F)\geqslant 3$, $|Q(F)|$ is connected.
Suppose that $H=(H_0,\ldots,H_{r_H})\in Q(F)$. Then Lemmas~\ref{pf-one}, \ref{pf-two}, \ref{pf-four} and \ref{pf-five} below tell us the following. \begin{itemize} \item $\ell(f^{-1}(Q(F)_{<H}))=r_H-2$ \item $|f^{-1}(Q(F)_{\leqslant H})|\cong S^{r_H-1}$ \item $|Q(F)_{>H}|\simeq\bigvee S^{n-r_H-2}$ \end{itemize} Since $S^{r_H-1}$ is $(r_H-2)$-connected, we may apply the theorem of Bj\"orner, Wachs and Welker, which tells us that \begin{align*} |P(F)| &\simeq |Q(F)|\vee\bigvee_{H\in Q(F)} \bigl(|f^{-1}(Q(F)_{\leqslant H})|\ast|Q(F)_{>H}|\bigr) \\ &\simeq \bigvee S^{n-2} \vee\bigvee_{H\in Q(F)} \left(\left(\bigvee S^{n-r_H-2}\right)\ast S^{r_H-1}\right) \\ &\simeq \bigvee S^{n-2} \vee\bigvee_{H\in Q(F)}\bigvee \left(S^{n-r_H-2}\ast S^{r_H-1}\right) \\ &\simeq \bigvee S^{n-2}\vee\bigvee_{H\in Q(F)}\bigvee S^{n-2} \\ &\simeq\ \bigvee S^{n-2} \end{align*} as required. \end{proof} \begin{lemma} \label{pf-one} Let $F$ be a free group of finite rank and let $H=(H_0,\ldots,H_r)\in Q(F)$. Then $\ell(f^{-1}(Q(F)_{<H}))=r-2$. \end{lemma} \begin{proof} After fixing an ordering of the tuple $H$, one can amalgamate adjacent entries $r-1$ times before arriving at a $2$-tuple. This shows that $\ell(f^{-1}(Q(F)_{\leqslant H}))=r-1$. Since any maximal chain must include $H$ itself (with some ordering) it follows that $\ell(f^{-1}(Q(F)_{<H}))=r-2$. \end{proof} \begin{lemma} \label{pf-two} Let $F$ be a free group of finite rank and let $H=(H_0,\ldots,H_r)\in Q(F)$. Then $|f^{-1}(Q(F)_{\leqslant H})|\cong S^{r-1}$. \end{lemma} \begin{proof} The poset $f^{-1}(Q(F))_{\leqslant H}$ is the subposet of $P(F)$ consisting of tuples $K=(K_0,\ldots,K_s)$ where each $K_j$ is an amalgamation of some of the $H_i$. It is isomorphic to the poset $X_r$ of sequences $A=(A_0\subset A_1\subset\cdots\subset A_{s-1})$ of nonempty proper subsets of $\{0,\ldots, r\}$, where $A'\leqslant A$ if $A'$ can be obtained from $A$ by forgetting terms of the sequence. The isomorphism \[ X_r\xrightarrow{\ \cong\ }f^{-1}(Q(F))_{\leqslant H} \] sends $A=(A_0\subset\cdots\subset A_{s-1})$ to $K=(K_0,\ldots, K_s)$ where, for $j\leqslant s-1$, $K_j$ is the subgroup generated by the $H_i$ for $i\in A_j\setminus A_{j-1}$ (with the convention $A_{-1}=\varnothing$), and where $K_s$ is the subgroup generated by the $H_i$ for $i\not\in A_{s-1}$. Now $X_r$ is isomorphic to the poset of faces of the barycentric subdivision of $\partial\Delta^r$, as we see by identifying $A_0\subset\cdots\subset A_{s-1}$ with the face whose vertices are the barycentres of the simplices spanned by the $A_i$. So $|X_r|\cong\partial\Delta^r\cong S^{r-1}$ as claimed. \end{proof} Before stating the next lemma we introduce some notation. Given a poset $P$, let $CP$ denote the poset obtained by adding a new minimal element $-$. \begin{lemma} \label{pf-four} Let $F$ be a free group of finite rank and let $H=(H_0,\ldots,H_r)\in Q(F)$. Then \[ |Q(F)_{> H}| \cong |Q(H_0)|\ast \cdots \ast |Q(H_r)|. \] \end{lemma} \begin{proof} There is an isomorphism \[ Q(F)_{\geqslant H} \cong CQ(H_0)\times \cdots \times CQ(H_r). \] It simply takes a tuple $K=(K_0,\ldots,K_s)$ and sends it to the element of $CQ(H_0)\times \cdots \times CQ(H_r)$ whose $CQ(H_i)$-component is the tuple consisting of those $K_j$ which are contained in $H_i$ if there are more than one such, and which is $-$ otherwise, in which case $H_i$ itself appears as one of the $K_j$. This isomorphism identifies $H$ itself with the tuple $(-,\ldots,-)$, so that we obtain a restricted isomorphism \[ Q(F)_{> H} \cong CQ(H_0)\times \cdots \times CQ(H_r)\setminus(-,\ldots,-). \] Now the realisation of the right hand side is exactly $|Q(H_0)|\ast\cdots\ast |Q(H_r)|$, so the result follows. \end{proof} \begin{lemma} \label{pf-five} Let $F$ be a free group of finite rank $n$ and let $H=(H_0,\ldots,H_r)\in Q(F)$. Then $|Q(H_0)|\ast \cdots \ast |Q(H_r)|$ has the homotopy type of a wedge of $(n-r-2)$-spheres.
\end{lemma} \begin{proof} Write $s_i$ for the rank of $H_i$, so that $|Q(H_i)|$ has the homotopy type of a wedge of $(s_i-2)$-spheres. Since wedge sums commute with joins up to homotopy equivalence, it follows that $|Q(H_0)|\ast\cdots\ast |Q(H_r)|$ has the homotopy type of a wedge of copies of $S^{s_0-2}\ast\cdots\ast S^{s_r-2}$. But then \[ S^{s_0-2}\ast\cdots\ast S^{s_r-2} \cong S^{(s_0-2)+\cdots+(s_r-2)+r} =S^{(s_0+\cdots+s_r)-2(r+1) +r} =S^{n -r-2} \] as required. \end{proof} \subsection{Braid groups} Now we investigate the connectivity of the realisations of the splitting posets for braid groups. In this case we will appeal to well-known connectivity results for complexes of arcs. \begin{proposition}[Connectivity of $|\mathit{SP}_n|$ for braid groups] For the family of groups with multiplication $(B_p)_{p\geqslant 0}$, and for any $n\geqslant 2$, $|\mathit{SP}_n|$ has the homotopy type of a wedge of $(n-2)$-spheres, and in particular is $(n-3)$-connected. \end{proposition} \begin{proof} Recall that in section~\ref{section-spn} we identified $\mathit{SP}_n$ with the poset of arcs $\mathcal{A}_n$ defined there. Thus $|\mathcal{A}_n|$ is (the realisation of) the simplicial complex with vertices the elements of $\mathcal{A}_n$, in which vertices $\alpha_0,\ldots,\alpha_r$ span a simplex if and only if, after possibly reordering, $\alpha_0<\cdots< \alpha_r$. Now $\alpha_0<\cdots< \alpha_r$ holds if and only if the $\alpha_i$ have representatives $a_i$ that are disjoint except at their endpoints, and such that $a_0,\ldots,a_r$ meet the north pole in anticlockwise order. Thus $|\mathcal{A}_n|$ is the realisation of the simplicial complex whose vertices are isotopy classes of nontrivial arcs in $D^2\setminus X_n$ from the north pole to the south (nontrivial meaning that the arc does not separate a disc from the remainder of the surface), where a collection of vertices form a simplex if they have representing arcs that can be embedded disjointly except at their endpoints.
In the notation of section~4 of~\cite{WahlMCG}, this is exactly the complex $\mathcal{B}(S,\Delta_0,\Delta_1)$ where $S=D^2\setminus X_n$, $\Delta_0\subset\partial D^2$ is the set containing just the north pole, and $\Delta_1\subset\partial D^2$ is the set containing just the south pole. Now, replacing $S$ with the complement of $n$ open discs in $D^2$ does not change the isomorphism type of the complex. But in that case, Lemma~4.7 of~\cite{WahlMCG} applies to show that $|\mathcal{A}_n|$ has connectivity $(n-2)$ greater than that of $|\mathcal{A}_2|$, which is $(-1)$-connected since it is simply a nonempty set. \end{proof} \section{Proofs of the applications} \label{section-applications} In this section we will assume that Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation} hold, and we will prove the remaining theorems stated in the introduction. We begin with three closely analogous lemmas about the groups $\mathit{GL}_n(\mathbb{Z})$, $\mathit{GL}_n(\mathbb{F}_2)$ and $\mathrm{Aut}(F_n)$. \begin{lemma}\label{lemma-low-glnz} Define elements of $\mathit{GL}_n(\mathbb{Z})$, $n=1,2,3$ as follows. \[ s_1=\begin{pmatrix} -1 \end{pmatrix}, \qquad s_2=\begin{pmatrix} -1 & \\ & 1 \end{pmatrix}, \qquad s_3=\begin{pmatrix} -1 & & \\ & 1 & \\ & & 1\end{pmatrix}, \qquad t=\begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}. \] Use the same symbols to denote the corresponding elements of $H_1(\mathit{GL}_n(\mathbb{Z});\mathbb{Z}) = \mathit{GL}_n(\mathbb{Z})_\mathrm{ab}$. Then the $H_1(\mathit{GL}_n(\mathbb{Z});\mathbb{Z})$ for $n=1,2,3$ are elementary abelian $2$-groups with generators $s_1\in H_1(\mathit{GL}_1(\mathbb{Z});\mathbb{Z})$, $s_2,t\in H_1(\mathit{GL}_2(\mathbb{Z});\mathbb{Z})$ and $s_3\in H_1(\mathit{GL}_3(\mathbb{Z});\mathbb{Z})$, and the stabilisation maps have the following effect.
\[\xymatrix@R=5 pt@C=35 pt{ H_1(\mathit{GL}_1(\mathbb{Z});\mathbb{Z}) \ar[r]^{s_\ast} & H_1(\mathit{GL}_2(\mathbb{Z});\mathbb{Z}) \ar[r]^{s_\ast} & H_1(\mathit{GL}_3(\mathbb{Z});\mathbb{Z}) \\ s_1 \ar@{|->}[r] & s_2 \ar@{|->}[r] & s_3 \\ {} & t\ar@{|->}[r] & 0 }\] \end{lemma} \begin{proof} There are split extensions \[ \mathit{SL}_n(\mathbb{Z})\longrightarrow \mathit{GL}_n(\mathbb{Z})\xrightarrow{\det}\{\pm 1\} \] with section determined by $-1\mapsto s_n$, so that we have isomorphisms \[ H_1(\mathit{GL}_n(\mathbb{Z});\mathbb{Z})\cong H_1(\mathit{SL}_n(\mathbb{Z});\mathbb{Z})_{\{\pm 1\}}\oplus \mathbb{Z}/2\mathbb{Z}, \] where $\mathbb{Z}/2\mathbb{Z}$ is generated by the class of $s_n$. This isomorphism respects the stabilisation maps. Now $H_1(\mathit{SL}_1(\mathbb{Z});\mathbb{Z})$ obviously vanishes, and $H_1(\mathit{SL}_3(\mathbb{Z});\mathbb{Z})$ vanishes since $\mathit{SL}_n(\mathbb{Z})$ is perfect for $n\geqslant 3$. So it suffices to show that $H_1(\mathit{SL}_2(\mathbb{Z});\mathbb{Z})_{\{\pm 1\}}$ is a group of order $2$ generated by $t$. Let us write \[ u=\begin{pmatrix} 0 & -1 \\ 1 & 1 \end{pmatrix}, \qquad v=\begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix}. \] Then $H_1(\mathit{SL}_2(\mathbb{Z});\mathbb{Z})\cong\mathbb{Z}/12\mathbb{Z}$, where $v\leftrightarrow 3$ and $u\leftrightarrow 2$~\cite[p.91]{Knudson}. One can verify that $s_2vs_2^{-1}=v^{-1}$ and $s_2us_2^{-1}=v^{-1}u^{-1}v$, so that $\{\pm 1\}$ acts on $H_1(\mathit{SL}_2(\mathbb{Z}))$ by negation. Consequently $H_1(\mathit{SL}_2(\mathbb{Z});\mathbb{Z})_{\{\pm 1\}}=(\mathbb{Z}/12\mathbb{Z})_{\{\pm 1\}}$ has order $2$ with generator $t=vu$ as required. \end{proof} \begin{lemma}\label{lemma-low-glnftwo} $H_1(\mathit{GL}_n(\mathbb{F}_2)) = \mathit{GL}_n(\mathbb{F}_2)_\mathrm{ab}$ is trivial for $n=1,3$, and is generated by the element $t$ determined by the matrix $\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1\end{smallmatrix}\right)$ for $n=2$. 
\end{lemma} \begin{proof} For $n=1$ this is trivial, and for $n=3$ it follows from the fact that $\mathit{GL}_3(\mathbb{F}_2)=\mathit{SL}_3(\mathbb{F}_2)$ is perfect. For $n=2$, we simply observe that $\mathit{GL}_2(\mathbb{F}_2)$ is a dihedral group of order $6$ generated by the involutions \[ \begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix}, \] so that the abelianisation is a group of order $2$ generated by either of the involutions. \end{proof} \begin{lemma}\label{lemma-low-autfn} Define elements of $\mathrm{Aut}(F_n)$, $n=1,2,3$ as follows. For $n=1,2,3$ let $s_n$ denote the transformation that inverts the first letter and fixes the others. And let $t\in\mathrm{Aut}(F_2)$ denote the transformation $x_1\mapsto x_1$, $x_2\mapsto x_1x_2$. Use the same symbols to denote the corresponding elements of $H_1(\mathrm{Aut}(F_n);\mathbb{Z}) = \mathrm{Aut}(F_n)_\mathrm{ab}$. Then the $H_1(\mathrm{Aut}(F_n);\mathbb{Z})$ for $n=1,2,3$ are elementary abelian $2$-groups with generators $s_1\in H_1(\mathrm{Aut}(F_1);\mathbb{Z})$, $s_2,t\in H_1(\mathrm{Aut}(F_2);\mathbb{Z})$ and $s_3\in H_1(\mathrm{Aut}(F_3);\mathbb{Z})$, and the stabilisation maps have the following effect. \[\xymatrix@R=5 pt@C=35 pt{ H_1(\mathrm{Aut}(F_1);\mathbb{Z}) \ar[r]^{s_\ast} & H_1(\mathrm{Aut}(F_2);\mathbb{Z}) \ar[r]^{s_\ast} & H_1(\mathrm{Aut}(F_3);\mathbb{Z}) \\ s_1 \ar@{|->}[r] & s_2 \ar@{|->}[r] & s_3 \\ {} & t\ar@{|->}[r] & 0 }\] \end{lemma} \begin{proof} The linearisation map $\mathrm{Aut}(F_n)\to\mathit{GL}_n(\mathbb{Z})$ is an isomorphism on abelianisations for all $n$. In the case $n=1$ this is because the map itself is an isomorphism.
In the case $n=2$ this is because the map $\mathrm{Out}(F_2)\to\mathit{GL}_2(\mathbb{Z})$ is an isomorphism, so there is an extension $F_2\to\mathrm{Aut}(F_2)\to\mathit{GL}_2(\mathbb{Z})$ in which the action of $\mathit{GL}_2(\mathbb{Z})$ on $(F_2)_\mathrm{ab}=\mathbb{Z}^2$ is the tautological one, so that the coinvariants $((F_2)_\mathrm{ab})_{\mathit{GL}_2(\mathbb{Z})}$ vanish, and the claim follows. And for $n\geqslant 3$ this is because $\mathit{SL}_n(\mathbb{Z})$ is perfect, as is the subgroup $SA_n$ of $\mathrm{Aut}(F_n)$ consisting of automorphisms with determinant one. (For the last claim we refer to the presentation of $SA_n$ given in Theorem~2.8 of~\cite{Gersten}.) The linearisation map sends the generators $s_1,s_2,s_3,t$ listed here to the corresponding generators from Lemma~\ref{lemma-low-glnz}, so the claim follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{glnz-outside}] Let $\mathbb{F}$ be a field of characteristic $2$. We will use the K\"unneth isomorphism $H_1(-;\mathbb{F})\cong H_1(-;\mathbb{Z})\otimes\mathbb{F}$ without further mention. Theorem~\ref{theorem-kernel} states that the kernel of the map \begin{equation} \label{stabilisation-map-one} s_\ast\colon H_m(\mathit{GL}_{2m}(\mathbb{Z});\mathbb{F}) \twoheadrightarrow H_m(\mathit{GL}_{2m+1}(\mathbb{Z});\mathbb{F}) \end{equation} is the image of the product map \[ H_{1}(\mathit{GL}_{2}(\mathbb{Z});\mathbb{F})^{\otimes {m-1}} \otimes\ker[H_1(\mathit{GL}_2(\mathbb{Z});\mathbb{F})\xrightarrow{s_\ast} H_1(\mathit{GL}_3(\mathbb{Z});\mathbb{F})] \longrightarrow H_{m}(\mathit{GL}_{2m}(\mathbb{Z});\mathbb{F}). \] By Lemma~\ref{lemma-low-glnz}, $H_1(\mathit{GL}_2(\mathbb{Z});\mathbb{F})$ is spanned by the classes $s_2$ and $t$, and $\ker[H_1(\mathit{GL}_2(\mathbb{Z});\mathbb{F})\xrightarrow{s_\ast} H_1(\mathit{GL}_3(\mathbb{Z});\mathbb{F})]$ is spanned by $t$. Any product involving both $s_2$ and $t$ vanishes, since $s_2\cdot t = s_\ast(s_1)\cdot t = s_1\cdot s_\ast(t)=0$. 
So it follows that the image of the given product map is precisely the span of $t^m$, which gives us the claimed description of the kernel of~\eqref{stabilisation-map-one}. Next, Theorem~\ref{theorem-realisation} states that the map \[ H_m(\mathit{GL}_{2m-1}(\mathbb{Z});\mathbb{F})\oplus H_1(\mathit{GL}_2(\mathbb{Z});\mathbb{F})^{\otimes m} \twoheadrightarrow H_m(\mathit{GL}_{2m}(\mathbb{Z});\mathbb{F}) \] is surjective. The second summand of the domain is spanned by the words in $s_2$ and $t$, but the image of any word involving $s_2=s_\ast(s_1)$ lies in the image of $H_m(\mathit{GL}_{2m-1}(\mathbb{Z});\mathbb{F})$. Thus the image of the given map is in fact spanned by the image of $H_m(\mathit{GL}_{2m-1}(\mathbb{Z});\mathbb{F})$ and of $t^m$, as required. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem-glnftwo}] Since $H_m(\mathit{GL}_{2m+1}(\mathbb{F}_2);\mathbb{F}_2)$ vanishes, Theorem~\ref{theorem-kernel} shows that $H_m(\mathit{GL}_{2m}(\mathbb{F}_2);\mathbb{F}_2)$ is spanned by the image of \[H_1(\mathit{GL}_2(\mathbb{F}_2);\mathbb{F}_2)^{\otimes(m-1)}\otimes \ker[s_\ast\colon H_1(\mathit{GL}_2(\mathbb{F}_2);\mathbb{F}_2)\to H_1(\mathit{GL}_3(\mathbb{F}_2);\mathbb{F}_2)].\] But by Lemma~\ref{lemma-low-glnftwo}, this image is precisely the span of $t^m$. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem-autfn-stability}] The first claim is immediate from Theorem~\ref{theorem-stability}. For the second claim, when $\mathrm{char}(\mathbb{F})\neq 2$ we have $H_1(\mathrm{Aut}(F_2);\mathbb{F})=0$ by Lemma~\ref{lemma-low-autfn}, so that Theorem~\ref{theorem-kernel} shows that $s_\ast\colon H_\ast(G_{n-1})\to H_\ast(G_n)$ is injective for $\ast=\frac{n-1}{2}$, and Theorem~\ref{theorem-realisation} shows that $s_\ast \colon H_\ast(G_{n-1})\to H_\ast(G_n)$ is surjective for $\ast =\frac{n}{2}$.
\end{proof} \begin{proof}[Proof of Theorem~\ref{theorem-autfn-outside}] This is entirely analogous to the proof of Theorem~\ref{glnz-outside}, this time making use of Lemma~\ref{lemma-low-autfn}. \end{proof} \section{The splitting complex} \label{section-scn} In this section we identify the realisation of the splitting poset $\mathit{SP}_n$ with the realisation of a semisimplicial set that we call the `splitting complex'. It is the splitting complex, rather than the splitting poset, that will feature in our arguments from this section onwards. We will make use of semisimplicial sets throughout; see section~2 of~\cite{RW} for a general discussion of semisimplicial sets (and spaces) and their realisations. We have borrowed the name `splitting complex' from work in progress of Galatius, Kupers and Randal-Williams. See also the remarks after Theorem~\ref{theorem-stability}. \begin{definition}[The splitting complex]\label{sc-definition} Let $n\geqslant 2$. The \emph{$n$-th splitting complex} of a family of groups with multiplication $(G_p)_{p\geqslant 0}$ is the semisimplicial set $\mathit{SC}_n$ defined as follows. Its set of $r$-simplices is \[ (\mathit{SC}_n)_r = \bigsqcup_{\substack{q_0+\cdots+q_{r+1}=n \\ q_0,\ldots,q_{r+1}\geqslant 1}} \frac{G_n}{G_{q_0}\times\cdots\times G_{q_{r+1}}} \] if $r\leqslant n-2$, and is empty otherwise. And the $i$-th face map \[ d_i\colon (\mathit{SC}_n)_r\longrightarrow(\mathit{SC}_n)_{r-1} \] is defined by \[ d_i(g (G_{q_0}\times\cdots\times G_{q_{r+1}})) =g (G_{q_0}\times\cdots\times G_{q_i+q_{i+1}} \times\cdots\times G_{q_{r+1}}) \] for $g\in G_n$. \end{definition} \begin{example} Figure~\ref{scfigure} illustrates the splitting complex $\mathit{SC}_4$.
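For instance, applying the face-map formula from Definition~\ref{sc-definition} to a $1$-simplex of type $(q_0,q_1,q_2)=(2,1,1)$ gives
\[
d_0\big(g(G_2\times G_1\times G_1)\big)=g(G_3\times G_1),
\qquad
d_1\big(g(G_2\times G_1\times G_1)\big)=g(G_2\times G_2),
\]
matching the two arrows leaving the term $G_4/(G_2\times G_1\times G_1)$ in the figure.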
\begin{figure}\[\xymatrix@C=50pt@R=50pt{ \displaystyle\frac{G_4}{G_3\times G_1} & \displaystyle\frac{G_4}{G_2\times G_1\times G_1} \ar[dl]^(0.3){d_1}\ar[l]_(0.45){d_0} \\ \displaystyle\frac{G_4}{G_2\times G_2} & \displaystyle\frac{G_4}{G_1\times G_2\times G_1} \ar[ul]|\hole_(0.3){d_0}\ar[dl]|\hole^(0.3){d_1} & \displaystyle\frac{G_4}{G_1\times G_1\times G_1\times G_1} \ar[ul]_(0.3){d_0}\ar[l]_(0.45){d_1}\ar[dl]^(0.3){d_2} \\ \displaystyle\frac{G_4}{G_1\times G_3} & \displaystyle\frac{G_4}{G_1\times G_1\times G_2} \ar[ul]_(0.3){d_0}\ar[l]^(0.45){d_1} }\] \caption{The splitting complex $\mathit{SC}_4$} \label{scfigure} \end{figure} Taking the disjoint union of the terms in each column gives the $0$-, $1$- and $2$-simplices. And the arrows leaving each term represent the face maps on that term, ordered from top to bottom. \end{example} \begin{remark} In the expression $G_{q_0}\times\cdots\times G_{q_{r+1}}$ appearing in Definition~\ref{sc-definition}, we can imagine the symbols $\times$ as being labelled from $0,\ldots,r$, so that the $i$-th face map $d_i$ simply `erases the $i$-th $\times$'. \end{remark} Let $P$ be a poset. The \emph{semisimplicial nerve} $NP$ of $P$ is defined to be the semisimplicial set whose $r$-simplices are the chains $p_0<\cdots<p_r$ of length $(r+1)$ in $P$, and whose face maps are defined by $d_i(p_0<\cdots<p_r) = p_0<\cdots\widehat{p_i}\cdots<p_r$. The realisation $\|NP\|$ of the semisimplicial nerve is naturally homeomorphic to the realisation $|P|$ of the poset. \begin{proposition} \label{spntoscn} Let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication and let $n\geqslant 2$. Then $\mathit{SC}_n\cong N(\mathit{SP}_n)$. In particular $|\mathit{SP}_n|\cong \|\mathit{SC}_n\|$. 
\end{proposition} \begin{proof} Let $\phi\colon \mathit{SC}_n\to N(\mathit{SP}_n)$ denote the map that sends an $r$-simplex $g(G_{q_0}\times\cdots\times G_{q_{r+1}})$ of $\mathit{SC}_n$ to the $r$-simplex \[ g(G_{q_0}\times G_{q_1+\cdots+q_{r+1}}) < g(G_{q_0+q_1}\times G_{q_2+\cdots+q_{r+1}}) < \cdots < g(G_{q_0+\cdots +q_r}\times G_{q_{r+1}}) \] of $N(\mathit{SP}_n)$. One can verify that $\phi$ is indeed a semisimplicial map. Surjectivity follows from Lemma~\ref{chain-lemma}. Injectivity follows from the fact that \[ \bigcap_{i=0}^r G_{q_0+\cdots+q_i}\times G_{q_{i+1}+\cdots+q_{r+1}} = G_{q_0}\times\cdots\times G_{q_{r+1}}, \] which follows by induction from the intersection axiom. \end{proof} \begin{remark}[Splitting posets or splitting complexes?] \label{spnorscn} The results of this section show that if we wish we could replace $|\mathit{SP}_n|$ with $\|\mathit{SC}_n\|$ in the statements of Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation}. In doing so, we could jettison the intersection axiom from Definition~\ref{definition-families}, possibly admitting more examples in the process. However, it is arguably simpler to work with the splitting poset, and that was certainly the case in sections~\ref{section-spn} and~\ref{section-connectivity} where we studied specific examples. Moreover, the examples of interest to us here all satisfy the intersection axiom. We therefore decided to write our paper with splitting posets at the forefront. \end{remark} \section{A bar construction} \label{section-bar} In this section we introduce a variant of the bar construction which takes as its input an algebra like $\bigoplus_{p\geqslant 0} H_\ast(G_p)$ and produces a graded chain complex (that is, a chain complex of graded vector spaces) called $\mathcal{B}_n$. We will see in the next section that $\mathcal{B}_n$ is the $E^1$-term of the spectral sequence around which all of our proofs revolve.
For the purposes of this section we fix a field $\mathbb{F}$ and a commutative graded $\mathbb{F}$-algebra $A$ equipped with an additional grading that we call the \emph{charge}. Thus \[ A=\bigoplus_{p\geqslant 0} A_p \] where $A_p$ is the part of $A$ with charge $p$. We will call the natural grading of $A$ the \emph{topological} grading, and we will suppress it from the notation wherever possible. We require that the multiplication on $A$ respects the charge grading, and that each charge-graded piece $A_p$ is concentrated in non-negative degrees. We further require that $A_0$ is a copy of $\mathbb{F}$ concentrated in topological degree $0$ and (necessarily) generated by the unit element $1$. In particular, $A$ is augmented. Finally we assume that $(A_1)_0$, the part of $A$ of charge $1$ and topological degree $0$, is a copy of $\mathbb{F}$ generated by an element $\sigma$. \begin{example} Our only examples of such algebras will be \[ A=\bigoplus_{p\geqslant 0} H_\ast(G_p) \] where $(G_p)_{p\geqslant 0}$ is a family of groups with multiplication. Here the topological grading is the grading of homology, and the charge grading is obtained from the multiplicative family. The element $\sigma\in (A_1)_0 = H_0(G_1)$ is defined to be the standard generator. \end{example} \begin{definition}[The chain complex $\mathcal{B}_n$] Let $A$ be an $\mathbb{F}$-algebra as described at the start of the section. For $n\geqslant 0$ we define $\mathcal{B}_n$ to be the chain complex of graded $\mathbb{F}$-vector spaces whose $b$-th term is \[ (\mathcal{B}_n)_b = \bigoplus_{\substack{q_0+\cdots+q_b=n\\q_0,\ldots,q_b\geqslant 1}} A_{q_0}\otimes\cdots\otimes A_{q_b} \] and whose differential is defined by \[ d_b(x_0\otimes\cdots\otimes x_b) = \sum_{i=0}^{b-1} (-1)^i x_0\otimes\cdots\otimes x_i\cdot x_{i+1} \otimes\cdots\otimes x_b. \] For $n=0$ we define $\mathcal{B}_0$ by letting all terms vanish except for $(\mathcal{B}_0)_0$, which consists of a single copy of $\mathbb{F}$.
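Directly from the formula above, in the case $b=2$ the differential reads
\[
d_2(x_0\otimes x_1\otimes x_2) = x_0\cdot x_1\otimes x_2 - x_0\otimes x_1\cdot x_2,
\]
and applying $d_1(x\otimes y)=x\cdot y$ to both terms gives $x_0x_1x_2-x_0x_1x_2=0$, illustrating that $d_1d_2=0$ by the associativity of $A$.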
Note that $\mathcal{B}_n$ is bigraded. Its \emph{homological} grading is the grading that is explicit in the definition, and which is reduced by the differential $d_b$. Its \emph{topological} grading is the grading obtained from the topological grading of $A$, and is preserved by the differential $d_b$. We say that the part of $\mathcal{B}_n$ with homological grading $b$ and topological grading $d$ lies in \emph{bidegree $(b,d)$}. We reserve the notation $(\mathcal{B}_n)_b$ for the part of $\mathcal{B}_n$ that lies in homological degree $b$. \end{definition} \begin{remark}[$\mathcal{B}_n$ and the bar complex] \label{remark-bar-one} Regarding $\mathbb{F}$ as a left and right $A$-module \emph{via} the projection $A\to (A_0)_0=\mathbb{F}$, we may form the two-sided normalised bar complex $B(\mathbb{F},\bar A,\mathbb{F})$ \[ \mathbb{F}\otimes\mathbb{F} \longleftarrow \mathbb{F}\otimes\bar A\otimes\mathbb{F} \longleftarrow \mathbb{F}\otimes\bar A\otimes\bar A\otimes\mathbb{F} \longleftarrow \mathbb{F}\otimes\bar A\otimes\bar A\otimes\bar A\otimes\mathbb{F} \longleftarrow \cdots \] or, more simply, \[ \mathbb{F} \longleftarrow \bar A \longleftarrow \bar A\otimes\bar A \longleftarrow \bar A\otimes\bar A\otimes\bar A \longleftarrow \cdots \] where all tensor products are over $\mathbb{F}$. This is naturally \emph{trigraded}: there is the homological grading explicit in the expressions above, together with charge and topological gradings inherited from $A$. Writing $[B(\mathbb{F},\bar A,\mathbb{F})]_{\mathrm{charge}=n}$ for the homogeneous piece with charge grading $n$ inherited from $A$, we then have the following: \[ (\mathcal{B}_n)_b = [B(\mathbb{F},\bar A,\mathbb{F})_{b+1}]_{\mathrm{charge}=n}. \] See Remark~\ref{remark-bar-two} for further discussion. \end{remark} \begin{example} Here is a diagram of $\mathcal{B}_4$.
\[\xymatrix@C=40pt@R=40pt{ & A_3\otimes A_1\ar[dl] & A_2\otimes A_1\otimes A_1\ar[dl]\ar[l] \\ A_4 & A_2\otimes A_2\ar[l] & A_1\otimes A_2\otimes A_1 \ar[ul]|\hole\ar[dl]|\hole & A_1\otimes A_1\otimes A_1\otimes A_1 \ar[ul]\ar[l]\ar[dl] \\ & A_1\otimes A_3\ar[ul] & A_1\otimes A_1\otimes A_2\ar[ul]\ar[l] }\] The first column of the diagram represents $(\mathcal{B}_4)_0$, the direct sum of the terms in the next column represents $(\mathcal{B}_4)_1$, and so on. The effect of the differential $d_b$ on an element of one of the summands is the alternating sum (taken from top to bottom) of its images under the arrows exiting that summand. The arrows are all constructed using the product of $A$ in the evident way. \end{example} \section{The spectral sequence} \label{section-spectral} The complex $\mathcal{B}_n$ is our main tool in proving the theorems stated in the introduction. The aim of the present section is to prove the following result, which demonstrates the connection between $\mathcal{B}_n$ and the splitting poset. Throughout this section we fix a family of groups with multiplication $(G_p)_{p\geqslant 0}$ and the algebra $A=\bigoplus H_\ast(G_p)$, which is of the kind described at the start of section~\ref{section-bar}. Homology is taken throughout with coefficients in an arbitrary field $\mathbb{F}$. \begin{theorem} \label{spectral-sequence-theorem} Let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication that satisfies the connectivity axiom, and let $A=\bigoplus_{p\geqslant 0} H_\ast(G_p)$. Then there is a first quadrant spectral sequence with $E^1$-term \[ (E^1,d^1) = (\mathcal{B}_n,d_b) \] for which $E^\infty$ vanishes in bidegrees $(b,d)$ satisfying $b+d\leqslant (n-2)$. \end{theorem} \begin{remark}[The spectral sequence and $\tor$] \label{remark-bar-two} In Remark~\ref{remark-bar-one}, we identified $\mathcal{B}_n$ in terms of a two-sided bar complex.
We may therefore identify the $E^2$-term of the above spectral sequence in terms of a $\tor$ group: \[ E^2_{i,j}= \tor^A_{i+1}(\mathbb{F},\mathbb{F})_ {\substack{\mathrm{charge}=n\\ \mathrm{topological}=j}} \] This observation potentially allows us to use the machinery of derived functors to understand the $E^2$-term of our spectral sequence. We do \emph{not} do this in the present version of this paper. Instead, our arguments are all done explicitly on the level of $\mathcal{B}_n$ itself. We hope that in a future version of this paper we will rephrase our arguments in terms of $\tor$ wherever possible. \end{remark} The rest of the section is devoted to the proof of Theorem~\ref{spectral-sequence-theorem}. To begin, we introduce a topological analogue of $\mathcal{B}_n$. Observe that the multiplication map $G_a\times G_b\to G_{a+b}$ induces a map of classifying spaces $BG_a\times BG_b\to BG_{a+b}$. We call it the \emph{product map on classifying spaces} and denote it by $(x,y)\mapsto x\cdot y$. We will use the product maps on classifying spaces to create an augmented semisimplicial space from which we can recover $\mathcal{B}_n$. See section~2 of~\cite{RW} for conventions about semisimplicial spaces, augmented semisimplicial spaces, and their realisations. \begin{definition}[The augmented semisimplicial space $t\mathcal{B}_n$] Given a family of groups with multiplication $(G_p)_{p\geqslant 0}$, and given $n\geqslant 2$, we let $t\mathcal{B}_n$ denote the augmented semisimplicial space whose space of $r$-simplices is given by \[ (t\mathcal{B}_n)_r = \bigsqcup_{\substack{q_0+\cdots+q_{r+1}=n \\ q_0,\ldots,q_{r+1}\geqslant 1}} BG_{q_0}\times\cdots\times BG_{q_{r+1}} \] for $r=-1,\ldots,n-2$, and which is empty otherwise. The face map $d_i\colon (t\mathcal{B}_n)_r\to (t\mathcal{B}_n)_{r-1}$ is defined by \[ d_i(x_0,\ldots,x_{r+1}) = (x_0,\ldots,x_i\cdot x_{i+1},\ldots,x_{r+1}), \] where $\cdot$ denotes the product map on classifying spaces.
\end{definition} \begin{example} Here is a diagram of $t\mathcal{B}_4$. \[\xymatrix@C=40pt@R=40pt{ & \scriptstyle BG_3\times BG_1\ar[dl] & \scriptstyle BG_2\times BG_1\times BG_1\ar[dl]\ar[l] \\ \scriptstyle BG_4 & \scriptstyle BG_2\times BG_2\ar[l] & \scriptstyle BG_1\times BG_2\times BG_1 \ar[ul]|\hole\ar[dl]|\hole & \scriptstyle BG_1\times BG_1\times BG_1\times BG_1 \ar[ul]\ar[l]\ar[dl] \\ & \scriptstyle BG_1\times BG_3\ar[ul] & \scriptstyle BG_1\times BG_1\times BG_2\ar[ul]\ar[l] }\] The four columns correspond to the $r$-simplices of $t\mathcal{B}_4$ for $r=-1,0,1,2$ respectively, the disjoint union of the terms in a column being the space of simplices of the relevant dimension. \end{example} The next proposition shows the sense in which $t\mathcal{B}_n$ is a topological analogue of $\mathcal{B}_n$. \begin{proposition}[From $t\mathcal{B}_n$ to $\mathcal{B}_n$] \label{spectral-sequence-proposition} There is a spectral sequence with $E^1$-term \[ (E^1, d^1) = (\mathcal{B}_n, d_b) \] and converging to $H_\ast(\|t\mathcal{B}_n\|)$. \end{proposition} \begin{proof} As in section~2.3 of~\cite{RW}, but with a shift of grading, the augmented semisimplicial space $t\mathcal{B}_n$ gives rise to a spectral sequence, converging to $H_\ast(\|t\mathcal{B}_n\|)$, and whose $E^1$-term is given by \[ E^1_{s,t} = H_t((t\mathcal{B}_n)_{s-1}), \] with $d^1$ given by the alternating sum of the maps induced by the face maps of $t\mathcal{B}_n$. Writing each $(t\mathcal{B}_n)_{s-1}$ as a disjoint union of products of classifying spaces and applying the K\"unneth isomorphism (which applies because homology is taken with coefficients in the field $\mathbb{F}$) we see that this is isomorphic to $\mathcal{B}_n$ equipped with the differential $d_b$. \end{proof} \begin{proposition} \label{splitting-to-bar} Suppose that the realisation of the $n$-th splitting poset $\mathit{SP}_n$ is $(n-3)$-connected. Then the realisation $\|t\mathcal{B}_n\|$ is $(n-2)$-connected.
\end{proposition} \begin{proof} In order to give this proof, we must be precise about our construction of classifying spaces. Given a group $G$, we define $EG$ to be the realisation of the category obtained from the action of $G$ on itself by right multiplication. (So it is $B\overline G$ in the notation of~\cite{SegalClassifying}.) Then we define $BG=EG/G$. The map $EG\to BG$ is a locally trivial principal $G$-fibration, and $EG$ is itself contractible. The assignment $G\to EG$ is functorial, and respects products in the sense that if $G$ and $H$ are groups then the map $E(G\times H)\to EG\times EH$ obtained from the projections is an isomorphism. We can therefore construct a homotopy equivalence as follows. \begin{align*} BG_{q_0}\times\cdots\times BG_{q_{r+1}} &= \frac{EG_{q_0}}{G_{q_0}}\times\cdots \times\frac{EG_{q_{r+1}}}{G_{q_{r+1}}} \\ &= \frac{EG_{q_0}\times\cdots\times EG_{q_{r+1}}} {G_{q_0}\times\cdots\times G_{q_{r+1}}} \\ &\xrightarrow{\cong} \frac{E(G_{q_0}\times\cdots\times G_{q_{r+1}})} {G_{q_0}\times\cdots\times G_{q_{r+1}}} \\ &\xrightarrow{\simeq} \frac{EG_n} {G_{q_0}\times\cdots\times G_{q_{r+1}}} \end{align*} Here the first arrow comes from the compatibility with products. The second map comes from the iterated product map $G_{q_0}\times\cdots\times G_{q_{r+1}}\to G_n$, and it is a homotopy equivalence because it lifts to a map of principal $(G_{q_0}\times\cdots\times G_{q_{r+1}})$-bundles whose total spaces are both contractible. There is an isomorphism \[ \frac{EG_n} {G_{q_0}\times\cdots\times G_{q_{r+1}}} \xrightarrow{\ \cong\ } EG_n\times_{G_n}\left( \frac{G_n}{G_{q_0}\times\cdots\times G_{q_{r+1}}} \right) \] sending the orbit of an element $x$ to the orbit of $(x,e_n(G_{q_0}\times\cdots\times G_{q_{r+1}}))$. 
Combining the two maps just constructed gives us a homotopy equivalence: \begin{equation}\label{equation-borel} BG_{q_0}\times\cdots\times BG_{q_{r+1}} \xrightarrow{\ \simeq\ } EG_n\times_{G_n}\left( \frac{G_n}{G_{q_0}\times\cdots\times G_{q_{r+1}}} \right) \end{equation} Now let $\mathit{SC}_n^+$ denote the augmented semisimplicial set obtained from $\mathit{SC}_n$ by adding a single point as a $-1$-simplex. The maps~\eqref{equation-borel} then form the components of a homotopy equivalence \[ (t\mathcal{B}_n)_r\xrightarrow{\ \simeq\ } EG_n\times_{G_n}(\mathit{SC}_n^+)_r. \] These equivalences in turn assemble to a levelwise homotopy equivalence \[ t\mathcal{B}_n\xrightarrow{\ \simeq\ }EG_n\times_{G_n}\mathit{SC}_n^+ \] and consequently induce a homotopy equivalence \[ \|t\mathcal{B}_n\|\xrightarrow{\ \simeq\ }\|EG_n\times_{G_n}\mathit{SC}_n^+\|. \] By assumption, $|\mathit{SP}_n|$ is $(n-3)$-connected, so that $\|\mathit{SC}_n\|$ (to which it is isomorphic by Proposition~\ref{spntoscn}) is also $(n-3)$-connected. Consequently $\|\mathit{SC}_n^+\|$, which is just the suspension of $\|\mathit{SC}_n\|$, is $(n-2)$-connected. Equivalently, the inclusion of the basepoint $\ast\hookrightarrow\|\mathit{SC}_n^+\|$ is an $(n-2)$-equivalence. It follows that the map $EG_n\times_{G_n}\ast\to EG_n\times_{G_n}\|\mathit{SC}_n^+\|$ is also an $(n-2)$-equivalence, so that the quotient \[ \frac{EG_n\times_{G_n}\|\mathit{SC}_n^+\|} {EG_n\times_{G_n}\ast} \] is $(n-2)$-connected. But then \[ \|t\mathcal{B}_n\| \cong \|EG_n\times_{G_n}\mathit{SC}_n^+\| \cong \frac{EG_n\times_{G_n}\|\mathit{SC}_n^+\|} {EG_n\times_{G_n}\ast} \] is $(n-2)$-connected as required. \end{proof} \section{Relating $\mathcal{B}_n$ to the stabilisation maps} \label{section-filtration} Let $A$ be an $\mathbb{F}$-algebra of the kind described at the start of section~\ref{section-bar}. 
Thus $A$ has a natural topological grading with respect to which it is commutative, it has an additional charge grading $A=\bigoplus_{p\geqslant 0}A_p$, $A_0$ consists of a single copy of $\mathbb{F}$ in topological degree $0$, $(A_1)_0$ is a copy of $\mathbb{F}$ generated by an element $\sigma$, and each piece $A_p$ is concentrated in non-negative topological degrees. \begin{definition}[The stabilisation map] The \emph{stabilisation map} $s\colon A_{n-1}\to A_n$ is defined by $s(a)= \sigma\cdot a$. \end{definition} \begin{example} In the case $A=\bigoplus_{p\geqslant 0} H_\ast(G_p)$ where $(G_p)_{p\geqslant 0}$ is a family of groups with multiplication, we take $\sigma$ to be the standard generator of $(A_1)_0=H_0(G_1)$, and then $s\colon A_{n-1}\to A_n$ is nothing other than the stabilisation map $s_\ast\colon H_\ast(G_{n-1})\to H_\ast(G_n)$ defined in the introduction. \end{example} The aim of this section is to relate the complex $\mathcal{B}_n$ to the stabilisation maps. In order to do so, we introduce complexes $\S_n$ whose homology quantifies the injectivity and surjectivity of the stabilisation maps. \begin{definition}[The complex $\S_n$] For $n\geqslant 1$, let $\S_n$ denote the graded chain complex defined as follows. If $n\geqslant 2$, then $\S_n$ is the complex \[\xymatrix@R=5pt{ (\S_n)_0 && (\S_n)_1\ar[ll]_{d_1} \\ A_n && A_{n-1}\ar[ll]_{s} }\] concentrated in homological degrees $0$ and $1$. And for $n=1$, $\S_1$ is the complex concentrated in homological degree $0$, where it is given by the part of $A_1$ lying in positive degrees, which we denote by $A_{1,>0}$.
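Since $\S_n$ is a two-term complex for $n\geqslant 2$, its homology is simply
\[
H_1(\S_n)=\ker\big(s\colon A_{n-1}\to A_n\big),
\qquad
H_0(\S_n)=\operatorname{coker}\big(s\colon A_{n-1}\to A_n\big),
\]
so the vanishing of $H_\ast(\S_n)$ in a range of bidegrees records exactly the injectivity and surjectivity of the stabilisation map in that range.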
\end{definition} In the case where $A = \bigoplus_{p\geqslant 0}H_\ast(G_p)$ comes from a family of groups with multiplication $(G_n)_{n\geqslant 0}$, the complex $\S_n$ for $n\geqslant 2$ is simply \[ H_\ast(G_n)\xleftarrow{\ \quad s_\ast\quad\ } H_\ast(G_{n-1}), \] so that injectivity and surjectivity of the stabilisation map $s_\ast$ in certain ranges of degrees can be expressed as the vanishing of the homology of $\S_n$ in certain ranges of bidegrees. All of our results on the stabilisation map are proved from this point of view. Our aim now is to relate the stabilisation maps, \emph{via} the complexes $\S_n$, to the complex $\mathcal{B}_n$. We do this using the following filtration. \begin{definition}\label{filtration-definition} Given $n\geqslant 2$, define a filtration \[ F_0\subseteq F_1\subseteq\cdots\subseteq F_{n-1}=\mathcal{B}_n \] of $\mathcal{B}_n$ by defining $F_{n-1}=\mathcal{B}_n$, and by defining $F_r$ for $r\leqslant (n-2)$ to be the subcomplex of $\mathcal{B}_n$ spanned by summands of the form $A_{n-s}\otimes -$ and $A_{1,0}\otimes A_{n-s-1}\otimes-$ for $s\leqslant r$. As usual $A_{1,0}$ denotes the part of $A$ lying in bidegree $(1,0)$. Here it is considered as a graded submodule of $A_1$. \end{definition} \begin{example} Let us illustrate the above definition in the case $n=3$, i.e.~for the filtration $F_0\subseteq F_1\subseteq F_2=\mathcal{B}_3$. 
\[\xymatrix@!0@=50pt{ {} & \ar@{.}[dl] \phantom{\scriptstyle A_2\otimes A_1} & {} \\ \scriptstyle A_3 & F_0 & \ar@{.}[ul]\ar@{.}[dl] \phantom{\scriptstyle A_1\otimes A_1\otimes A_1} \\ {} & \scriptstyle A_{1,0}\otimes A_2\ar[ul] & {} } \quad \xymatrix@!0@=50pt{ {} & \scriptstyle A_2\otimes A_1\ar[dl] & {} \\ \scriptstyle A_3 & F_1 & \scriptstyle A_{1,0}\otimes A_1\otimes A_1 \ar[ul]\ar[dl] \\ {} & \scriptstyle A_{1,0}\otimes A_2\ar[ul] & {} } \quad \xymatrix@!0@=50pt{ {} & \scriptstyle A_2\otimes A_1\ar[dl] & {} \\ \scriptstyle A_3 & F_2 & \scriptstyle A_1\otimes A_1\otimes A_1 \ar[ul]\ar[dl] \\ {} & \scriptstyle A_1\otimes A_2\ar[ul] & {} }\] \end{example} \begin{example} \label{filtration-example} In the case $n=4$, we can depict $\mathcal{B}_4$ as follows. \[\xymatrix@!0@C=80pt@R=80pt{ & \scriptstyle A_3\otimes A_1\ar[dl] & \scriptstyle A_2\otimes A_1\otimes A_1 \ar[dl]\ar[l] \\ \scriptstyle A_4 & \scriptstyle A_2\otimes A_2\ar[l] & \scriptstyle A_{1}\otimes A_2\otimes A_1 \ar[ul]|\hole\ar[dl]|\hole & \scriptstyle A_{1}\otimes A_1\otimes A_1\otimes A_1 \ar[ul]\ar[l]\ar[dl] \\ & \scriptstyle A_{1}\otimes A_3\ar[ul] & \scriptstyle A_{1}\otimes A_1\otimes A_2\ar[ul]\ar[l] }\] Then we can depict the filtration \[ F_0\subseteq F_1\subseteq F_2\subseteq F_3=\mathcal{B}_4 \] symbolically in the form \[ \vcenter{ \xymatrix@=15pt{ & \cdot\ar@{..}[dl] & \cdot \ar@{..}[dl]\ar@{..}[l] \\ \bullet & \cdot\ar@{..}[l] & \cdot \ar@{..}[ul]|\hole\ar@{..}[dl]|\hole & \cdot \ar@{..}[ul]\ar@{..}[l]\ar@{..}[dl] \\ & \circ\ar[ul] & \cdot\ar@{..}[ul]\ar@{..}[l] }} \ \subseteq\ \vcenter{ \xymatrix@=15pt{ & \bullet\ar[dl] & \cdot \ar@{..}[dl]\ar@{..}[l] \\ \bullet & \cdot\ar@{..}[l] & \circ \ar[ul]|\hole\ar[dl]|\hole & \cdot \ar@{..}[ul]\ar@{..}[l]\ar@{..}[dl] \\ & \circ\ar[ul] & \cdot\ar@{..}[ul]\ar@{..}[l] }} \ \subseteq\ \vcenter{ \xymatrix@=15pt{ & \bullet\ar[dl] & \bullet \ar[dl]\ar[l] \\ \bullet & \bullet\ar[l] & \circ \ar[ul]|\hole\ar[dl]|\hole & \circ \ar[ul]\ar[l]\ar[dl] \\ & 
\circ\ar[ul] & \circ\ar[ul]\ar[l] }} \ \subseteq\ \vcenter{ \xymatrix@=15pt{ & \bullet\ar[dl] & \bullet \ar[dl]\ar[l] \\ \bullet & \bullet\ar[l] & \bullet \ar[ul]|\hole\ar[dl]|\hole & \bullet \ar[ul]\ar[l]\ar[dl] \\ & \bullet\ar[ul] & \bullet\ar[ul]\ar[l] }} \] where a bullet $\bullet$ indicates that the relevant summand of $\mathcal{B}_4$ is included in that term of the filtration, a circle $\circ$ indicates a summand $A_1\otimes -$ of $\mathcal{B}_4$ that has been replaced by $A_{1,0}\otimes -$, and a dot $\cdot$ indicates an omitted summand. \end{example} The next proposition describes the quotients of the filtration we have just defined. In order to state it we need the following definition. \begin{definition} Let $\mathcal{C}$ be a chain complex of graded $\mathbb{F}$-vector spaces (such as $\mathcal{B}_n$ or $\S_n$). The \emph{homological suspension} of $\mathcal{C}$, denoted $\Sigma_b\mathcal{C}$, is defined to be the chain complex of graded $\mathbb{F}$-vector spaces obtained by increasing the homological grading of each term by $1$. In other words \[ (\Sigma_b\mathcal{C})_{b,d} = \mathcal{C}_{b-1,d} \] for $b,d\geqslant 0$. \end{definition} \begin{proposition}\label{quotient-isomorphism} For $r\geqslant 1$ there is an isomorphism \[ F_r/F_{r-1} \cong \Sigma_b [\S_{n-r}\otimes \mathcal{B}_r], \] while \[ F_0\cong \S_n. \] \end{proposition} \begin{example} Let us illustrate the result of Proposition~\ref{quotient-isomorphism} in the case $n=4$ and $r=2$.
Following on from Example~\ref{filtration-example}, we see that $F_2/F_1$ can be depicted like this: \[\xymatrix@!0@C=60pt@R=60pt{ & \cdot\ar@{..}[dl] & \scriptstyle [A_2]\otimes [A_1\otimes A_1] \ar[dl]_{-}\ar@{..}[l] \\ \cdot & \scriptstyle [A_2]\otimes [A_2]\ar@{..}[l] & \ar@{..}[ul]|\hole\ar@{..}[dl]|\hole & \scriptstyle [A_{1,0}\otimes A_1]\otimes [A_1\otimes A_1] \ar[ul]_+\ar@{..}[l]\ar[dl]^+ \\ & \cdot\ar@{..}[ul] & \scriptstyle [A_{1,0}\otimes A_1]\otimes [A_2] \ar[ul]^+\ar@{..}[l] }\] The signs on the arrows indicate whether the arrow is the one obtained from the obvious multiplication map, or is the negative of that map. Observing now that \[ \S_2 = (A_2\xleftarrow{\ s\ } A_1) \cong (A_2\longleftarrow A_{1,0}\otimes A_1) \] and that \[ \mathcal{B}_2 = (A_2\longleftarrow A_1\otimes A_1), \] where the unmarked arrows are obtained from multiplication maps, we see that $F_2/F_1$ is isomorphic to the complex depicted as follows. \[\xymatrix@!0@C=60pt@R=60pt{ & \cdot\ar@{..}[dl] & \scriptstyle (\S_2)_0\otimes (\mathcal{B}_2)_1 \ar[dl]_{-}\ar@{..}[l] \\ \cdot & \scriptstyle (\S_2)_0\otimes (\mathcal{B}_2)_0\ar@{..}[l] & \ar@{..}[ul]|\hole\ar@{..}[dl]|\hole & \scriptstyle (\S_2)_1\otimes (\mathcal{B}_2)_1 \ar[ul]_+\ar@{..}[l]\ar[dl]^{+} \\ & \cdot\ar@{..}[ul] & \scriptstyle (\S_2)_1\otimes (\mathcal{B}_2)_0 \ar[ul]^{+}\ar@{..}[l] }\] The signs on the arrows now indicate whether the arrow is equal to the tensor product of a differential from $\S_2$ or $\mathcal{B}_2$ with an identity map, or to the negative of such. On the other hand, $\Sigma_b[\S_2\otimes\mathcal{B}_2]$ is exactly the same, but where now the signs are governed by the Koszul sign convention. 
\[\xymatrix@!0@C=60pt@R=60pt{ & \cdot\ar@{..}[dl] & \scriptstyle (\S_2)_0\otimes (\mathcal{B}_2)_1 \ar[dl]_{+}\ar@{..}[l] \\ \cdot & \scriptstyle (\S_2)_0\otimes (\mathcal{B}_2)_0\ar@{..}[l] & \ar@{..}[ul]|\hole\ar@{..}[dl]|\hole & \scriptstyle (\S_2)_1\otimes (\mathcal{B}_2)_1 \ar[ul]_+\ar@{..}[l]\ar[dl]^{-} \\ & \cdot\ar@{..}[ul] & \scriptstyle (\S_2)_1\otimes (\mathcal{B}_2)_0 \ar[ul]^{+}\ar@{..}[l] }\] The last two complexes are isomorphic \emph{via} the identity map on the summands $(\S_2)_0\otimes (\mathcal{B}_2)_0$ and $(\S_2)_1\otimes(\mathcal{B}_2)_0$, and \emph{via} the negative of the identity map on the summands $(\S_2)_0\otimes(\mathcal{B}_2)_1$ and $(\S_2)_1\otimes(\mathcal{B}_2)_1$, as claimed in Proposition~\ref{quotient-isomorphism}. \end{example} \begin{proof}[Proof of Proposition~\ref{quotient-isomorphism}] For the purposes of the proof, for $m\geqslant 1$ we define a chain complex of graded $\mathbb{F}$-modules $\bar\S_m$ as follows. For $m\geqslant 2$, $\bar\S_m$ is \[\xymatrix@R=5pt{ \bar\S_0 && \bar\S_1\ar[ll]_{d_1} \\ A_m && A_{1,0}\otimes A_{m-1}\ar[ll]_{s} }\] concentrated in homological degrees $0$ and $1$. For $m=1$, we define $\bar\S_1$ to be the graded submodule $A_{1,\geqslant 1}$ of $A_1$ consisting of the terms in positive degree. Observe that $\bar\S_m$ is isomorphic to $\S_m$ \emph{via} the identity map $A_m\to A_m$ in homological degree $0$, and \emph{via} the isomorphism \[ A_{1,0}\otimes A_{m-1}\xrightarrow{\cong} A_{m-1},\qquad \sigma\otimes x\mapsto x \] in homological degree $1$. We will prove the result with $\bar\S_m$ in place of $\S_m$. We begin with the case $r\leqslant n-2$. By definition, $(F_r/F_{r-1})_b$ is the direct sum of the terms \[ A_{q_0}\otimes\cdots\otimes A_{q_b} \] where $q_0+\cdots+q_b=n$, $q_1,\ldots,q_b\geqslant 1$, $q_0=n-r$, together with the terms \[ A_{1,0}\otimes A_{q_1}\otimes \cdots\otimes A_{q_b} \] where $1+q_1+\cdots+q_b=n$, $q_1,\ldots,q_b\geqslant 1$, and $q_1=n-r-1$. 
In other words, $(F_r/F_{r-1})_b$ is the direct sum of the terms \[ A_{n-r}\otimes[A_{q_0}\otimes\cdots\otimes A_{q_{b-1}}] \] where $q_0+\cdots+q_{b-1}=r$, $q_0,\ldots,q_{b-1}\geqslant 1$, which is exactly $(\bar\S_{n-r})_0\otimes(\mathcal{B}_r)_{b-1}$, together with the direct sum of the terms \[ [A_{1,0}\otimes A_{n-r-1}] \otimes [A_{q_0}\otimes\cdots\otimes A_{q_{b-2}}] \] where $q_0+\cdots+q_{b-2}=r$, $q_0,\ldots,q_{b-2}\geqslant 1$, which is exactly $(\bar\S_{n-r})_1\otimes(\mathcal{B}_r)_{b-2}$. But that is exactly $(\bar\S_{n-r}\otimes\mathcal{B}_r)_{b-1}=(\Sigma_b[\bar\S_{n-r}\otimes\mathcal{B}_r])_b$. Thus we may construct a degree-wise isomorphism between $F_r/F_{r-1}$ and $\Sigma_b[\bar\S_{n-r}\otimes\mathcal{B}_r]$ by simply identifying corresponding direct summands. However, the map constructed this way respects the differential only up to sign. To correct this, we map from $\Sigma_b[\bar\S_{n-r}\otimes\mathcal{B}_r]$ to $F_r/F_{r-1}$ by taking $(-1)^{b_2}$ times the identity map on the summands coming from $(\bar\S_{n-r})_{b_1-1}\otimes (\mathcal{B}_r)_{b_2}$. One can now check that this gives the required isomorphism of chain complexes. The proof in the case $r=n-1$ is similar, and the details are left to the reader. \end{proof} \section{Proof of Theorem~\ref{theorem-stability}} \label{section-stability} For the purposes of this section, we let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication satisfying the hypotheses of Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation}, and we define $A=\bigoplus_{n\geqslant 0}H_\ast(G_n)$. In this section we will prove the following. \begin{theorem}\label{stability-inductive} The complexes $\S_n$ for $n\geqslant 1$, and $\mathcal{B}_n$ for $n\geqslant 2$, are acyclic in the range $b\leqslant n-2d-1$. \end{theorem} Here and in what follows, the phrase ``in the range'' should be understood to mean ``in the range of bidegrees $(b,d)$ for which''.
So for example, the theorem states that for $n\geqslant 2$ the complexes $\S_n$ and $\mathcal{B}_n$ are acyclic in all bidegrees $(b,d)$ for which $b\leqslant n-2d-1$. The theorem implies that the homology of $\S_n$ vanishes in bidegrees $(0,d)$ for $d\leqslant \frac{n-1}{2}$, and in bidegrees $(1,d)$ for $d\leqslant \frac{n-2}{2}$. Unwinding the definition of $\S_n$ and $A$, we see that this states that $s_\ast\colon H_\ast(G_{n-1})\to H_\ast(G_n)$ is surjective in degrees $\ast\leqslant \frac{n-1}{2}$, and injective in degrees $\ast\leqslant\frac{n-2}{2}$. In other words, it exactly recovers the statement of Theorem~\ref{theorem-stability}. Our proof of Theorem~\ref{stability-inductive} will be by strong induction on $n$. The case $n=1$ simply states that the homology of $\S_1$ is concentrated in positive degrees, which holds by definition. The case $n=2$ is immediately verified since it states that the maps $s_\ast\colon H_\ast(G_1)\to H_\ast(G_2)$ and $H_\ast(G_1)\otimes H_\ast(G_1)\to H_\ast(G_2)$ are isomorphisms in degree $\ast=0$. For the rest of the section we will assume that Theorem~\ref{stability-inductive} holds for all integers smaller than $n$, and will prove that it holds for $n$. \begin{lemma} \label{FonetoBn} Assume that Theorem~\ref{stability-inductive} holds for all integers smaller than $n$. Then the composite \[ F_1\hookrightarrow F_2\hookrightarrow \cdots \hookrightarrow F_{n-2}\hookrightarrow F_{n-1}=\mathcal{B}_n \] is a surjection on homology in the range $b\leqslant n-2d$ and an isomorphism in the range $b\leqslant n-2d-1$. \end{lemma} \begin{proof} For $r$ in the range $n-1\geqslant r\geqslant 2$, the inductive hypothesis tells us that $\S_{n-r}$ and $\mathcal{B}_r$ are acyclic in the ranges $b\leqslant (n-r)-2d-1$ and $b\leqslant r-2d-1$ respectively.
Consequently $\S_{n-r}\otimes\mathcal{B}_r$ is acyclic in the range $b\leqslant n-2d-1$, so that $F_r/F_{r-1}\cong \Sigma_b(\S_{n-r}\otimes\mathcal{B}_r)$ is acyclic in the range $b\leqslant n-2d$. It follows that $F_{r-1}\to F_r$ is a surjection on homology in the range $b\leqslant n-2d$ and an isomorphism in the range $b\leqslant n-2d-1$. (The estimate for the acyclic range of $\S_{n-r}\otimes\mathcal{B}_r$ is seen as follows. The K\"unneth Theorem tells us that the homology of $\S_{n-r}\otimes\mathcal{B}_r$ is the tensor product of the homologies of $\S_{n-r}$ and $\mathcal{B}_r$. Nonzero elements $x$ and $y$ of these respective homologies must lie in bidegrees $(b_1,d_1)$ and $(b_2,d_2)$ satisfying $b_1\geqslant (n-r)-2d_1$ and $b_2\geqslant r-2d_2$, so that $x\otimes y$ lies in bidegree $(b_1+b_2,d_1+d_2)$ satisfying $(b_1+b_2)\geqslant n -2(d_1+d_2)$, so that $\S_{n-r}\otimes\mathcal{B}_r$ is acyclic in the range $b\leqslant n-2d-1$, as claimed.) \end{proof} \begin{lemma} The inclusion $F_0\hookrightarrow F_1$ is an isomorphism in homology in the range $b\leqslant n-2d-1$. \end{lemma} \begin{proof} Consider the chain complex corresponding to the square \[\xymatrix@!0@=60pt{ {} & A_{n-1}\otimes A_{1,0}\ar[dl] & {} \\ A_n & {} & A_{1,0}\otimes A_{n-2}\otimes A_{1,0} \ar[ul]\ar[dl] \\ {} & A_{1,0}\otimes A_{n-1}\ar[ul] & {} }\] in which the arrows are induced by the multiplication maps of $A$. This is a subcomplex $\S q_n$ of $\mathcal{B}_n$, and indeed of $F_1$. Restricting the filtration $F_0\subset F_1$ of $F_1$ to $\S q_n$ gives a filtration $\bar F_0\subset \bar F_1$ of $\S q_n$ for which $\bar F_0 = F_0$. There results a commutative diagram with short exact rows and left column an isomorphism. 
\[\xymatrix{ 0\ar[r] & \bar F_0\ar[r]\ar[d]_{\cong} & \S q_n\ar[r]\ar[d] & \bar F_1/\bar F_0\ar[r]\ar[d] & 0 \\ 0\ar[r] & F_0\ar[r] & F_1\ar[r] & F_1/F_0\ar[r] & 0 }\] The right-hand vertical map is an injection with cokernel \[ \Sigma_b[\S_{n-1}\otimes H_{\ast\geqslant 1}(G_1)]. \] Since $\S_{n-1}$ is acyclic in the range $b\leqslant (n-1)-2d-1$, this cokernel is acyclic in the range $b\leqslant [(n-1)-2(d-1)-1]+1=n-2d+1$, so that the right-hand map in the diagram is a surjection in homology in the same range. The connecting homomorphism for the top row is zero, since $\S q_n$ is isomorphic to the chain complex obtained from the square \[\xymatrix@!0@=60pt{ {} & A_{n-1}\ar[dl] & {} \\ A_n & {} & A_{n-2} \ar[ul]\ar[dl] \\ {} & A_{n-1}\ar[ul] & {} }\] in which each map is multiplication by $\sigma\in A_{1,0}$, where triviality of the connecting homomorphism is evident. The connecting homomorphism for the bottom sequence is therefore zero in the range (of bidegrees for its domain) $b\leqslant n-2d+1$. It follows that in the range $b\leqslant n-2d$ we have short exact sequences \[ 0\to H_\ast(F_0)\to H_\ast(F_1)\to H_\ast(F_1/F_0)\to 0. \] In the smaller range $b\leqslant n-2d-1$ the third term vanishes, so that $H_\ast(F_0)\to H_\ast(F_1)$ is an isomorphism as claimed. \end{proof} We can now complete the proof of Theorem~\ref{stability-inductive}. It follows from the last two lemmas that in the range $b\leqslant n-2d-1$ the inclusion $\S_{n}=F_0\hookrightarrow \mathcal{B}_n$ is an isomorphism in homology. The homology of $\S_n$ is concentrated in the range $b\leqslant 1$, so that the homology of $\mathcal{B}_n$ vanishes in the range $2\leqslant b\leqslant n-2d-1$. It remains to prove that $H_\ast(\S_n)=H_\ast(\mathcal{B}_n)=0$ in the range where $b\leqslant n-2d-1$ and $b\leqslant 1$ both hold. In order to proceed we use the spectral sequence of Theorem~\ref{spectral-sequence-theorem}, which has $H_\ast(\mathcal{B}_n)=E^2_{\ast,\ast}$.
No nonzero differentials $d^r$, $r\geqslant 2$, of the spectral sequence affect terms in the range $b\leqslant n-2d-1$, $b\leqslant 1$. This is because any differential with source in this range has target outside the first quadrant. And any differential $d^r$ with target in this range has source $E^r_{b+r,d-r+1}$, where \[ b+r\leqslant n-2d-1+r \leqslant n-2(d-r+1)-1, \] so that $E^r_{b+r,d-r+1}=0$. Thus $H_\ast(\S_n)=H_\ast(\mathcal{B}_n)=E^\infty_{\ast,\ast}$ in the range $b\leqslant n-2d-1$, $b\leqslant 1$. Recall that $E^\infty_{\ast,\ast}=0$ in the range $d\leqslant n-2-b$. Now for $n\geqslant 3$ and $b=0,1$ we have \[d\leqslant \frac{n-b-1}{2}\implies d\leqslant n-2-b.\] (The case $d\geqslant 1$ must be treated separately from the case $d=0$, which is vacuous.) Thus $H_\ast(\S_n)=H_\ast(\mathcal{B}_n)=E^\infty_{\ast,\ast}=0$ as required. \section{Proof of Theorem~\ref{theorem-realisation}} \label{section-realisation} For the purposes of this section, we let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication satisfying the hypotheses of Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation}, and we define $A=\bigoplus_{n\geqslant 0}H_\ast(G_n)$. In this section we will prove Theorem~\ref{theorem-realisation}, essentially by extracting a little extra data from the proof of Theorem~\ref{theorem-stability}, and then exploiting a cheap trick. Throughout the section we will write $A_{i,j}$ for the part of $A$ with charge $i$ and topological degree $j$. In other words, $A_{i,j}=(A_i)_j$. \begin{lemma}\label{twomplusone-acyclicity} For $m\geqslant 1$, the graded chain complex $\mathcal{B}_{2m+1}$ is acyclic in the range $3\leqslant b\leqslant (2m+1)-2d$. \end{lemma} \begin{proof} Lemma~\ref{FonetoBn} shows that the inclusion $F_1\hookrightarrow \mathcal{B}_n$ is a surjection on homology in the range $b\leqslant n-2d$. 
However, $F_1$ is concentrated in homological degrees $b=0,1,2$, and so is acyclic in the range $b\geqslant 3$. Combining the two facts gives the result. \end{proof} \begin{lemma} \label{onemlemma} In the spectral sequence of Theorem~\ref{spectral-sequence-theorem}, for $n=2m+1$, there are no differentials affecting the term in bidegree $(1,m)$ from the $E^2$ page onwards. \end{lemma} \begin{proof} Certainly there are no such differentials with source in this bidegree, since the spectral sequence is concentrated in the first quadrant. Since $E^1=\mathcal{B}_{2m+1}$, Lemma~\ref{twomplusone-acyclicity} shows that $E^2$ vanishes in the range \[ 3\leqslant b\leqslant (2m+1)-2d. \] If $r\geqslant 2$, then any differential $d^r$ with target in bidegree $(1,m)$ has source in bidegree $(b,d)=(1+r,m-r+1)$, so that \[ b=(2m+1)-2d - [r-2] \leqslant (2m+1)-2d, \] and consequently the source term vanishes. \end{proof} \begin{lemma}\label{twomplusoneacyclic-lemma} Let $m\geqslant 2$. Then the complex $\mathcal{B}_{2m+1}$ is acyclic in bidegree $(1,m)$. \end{lemma} \begin{proof} In the spectral sequence of Theorem~\ref{spectral-sequence-theorem} for $n=2m+1$, we know that $E^2_{1,m}=E^\infty_{1,m}$ by Lemma~\ref{onemlemma}, and that $E^\infty_{1,m}=0$ since $m\geqslant 2$ guarantees that $1+m\leqslant (2m+1)-2$. So $E^2_{1,m}=0$, but this is simply the homology of $\mathcal{B}_{2m+1}$ in bidegree $(1,m)$. \end{proof} \begin{lemma}\label{twomacyclic-lemma} Let $m\geqslant 2$. Then $\mathcal{B}_{2m}$ is acyclic in bidegree $(0,m)$. \end{lemma} \begin{proof} Consider the following composite. \[ \Sigma_b A_{2m} \xrightarrow{\ \theta\ } \mathcal{B}_{2m+1} \xrightarrow{\ \phi\ } \Sigma_b \mathcal{B}_{2m}\otimes \mathcal{B}_1 \xrightarrow{\ \psi\ } \Sigma_b\mathcal{B}_{2m} \] Here $\theta$ is the map that sends $x\in A_{2m}$ to the element $x\otimes \sigma- \sigma\otimes x\in(\mathcal{B}_{2m+1})_{1}$. 
To check that $\theta$ is a chain map, we need only check that the differential vanishes on its image, which holds because \[ d(x\otimes \sigma - \sigma\otimes x)= x\cdot \sigma - \sigma\cdot x =0. \] Next, $\Sigma_b(\mathcal{B}_{2m}\otimes\mathcal{B}_1)$ can be identified with the submodule of $\mathcal{B}_{2m+1}$ consisting of summands of the form $-\otimes A_1$, and $\phi$ is the projection onto these summands. It is a chain map. Finally, $\psi$ is the map that projects $\mathcal{B}_1=A_1$ onto its degree $0$ part $A_{1,0}\cong\mathbb{F}$. In homology in bidegree $(1,m)$ this map is zero since it factors through the homology of $\mathcal{B}_{2m+1}$, which vanishes in that bidegree. On the other hand, in this bidegree the composite is simply the suspension of the map $A_{2m}\to \mathcal{B}_{2m}$, which is a surjection in homological degree $b=0$. It follows that the target of this map, which is the homology of $\mathcal{B}_{2m}$ in bidegree $(0,m)$, is zero. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem-realisation}] We have seen that $\mathcal{B}_{2m}$ is acyclic in bidegree $(0,m)$. This means that the map \[ \bigoplus_{\substack{p+q=2m \\ p,q\geqslant 1}} \bigoplus_{\substack{p'+q'=m \\ p',q'\geqslant 0}} A_{p,p'}\otimes A_{q,q'} \longrightarrow A_{2m,m} \] is surjective. Now, suppose that $p,q,p',q'$ are as in the summation above, with $p'\leqslant\frac{p-1}{2}$. Then we have the commutative diagram \[\xymatrix{ A_{p,p'}\otimes A_{q,q'} \ar[r] & A_{2m,m} \\ A_{p-1,p'}\otimes A_{q,q'} \ar[u]^{s\otimes\mathrm{id}} \ar[r] & A_{2m-1,m} \ar[u]_{s} } \] in which the left-hand map is surjective by Theorem~\ref{theorem-stability}, so that the image of $A_{p,p'}\otimes A_{q,q'}$ is contained in the image of $s$. Similarly, if $q'\leqslant\frac{q-1}{2}$, then the image of $A_{p,p'}\otimes A_{q,q'}$ is contained in the image of $s$. 
The only summands to which these observations do not apply are those indexed by $p,q,p',q'$ as in the summation, satisfying also that \[ p'>\frac{p-1}{2},\qquad q'>\frac{q-1}{2}. \] Adding these inequalities shows that we have \[ m=p'+q'>m-1. \] Thus the only possibility is that $p'$ is greater than $\frac{p-1}{2}$ by exactly $1/2$, and similarly for $q'$. In other words, we must have $p=2p'$ and $q=2q'$. So we have shown that the map \[ A_{2m-1,m}\oplus \bigoplus_{\substack{p'+q'=m \\ p',q'\geqslant 1}} A_{2p',p'}\otimes A_{2q',q'} \longrightarrow A_{2m,m} \] is surjective. In the case $m=2$ this proves the claim, and for $m>2$ the claim now follows by induction. \end{proof} \section{Proof of Theorem~\ref{theorem-kernel}} \label{section-kernel} For the purposes of this section, we let $(G_p)_{p\geqslant 0}$ be a family of groups with multiplication satisfying the hypotheses of Theorems~\ref{theorem-stability}, \ref{theorem-kernel} and~\ref{theorem-realisation}, and we define $A=\bigoplus_{n\geqslant 0}H_\ast(G_n)$. The aim of this section is to prove Theorem~\ref{theorem-kernel}, which is an immediate consequence of Theorem~\ref{theorem-realisation} and the following. The section will deal with complexes like $\mathcal{B}_n$ which have a homological and topological grading. Given such a complex $\mathcal{C}$, we will write $H_{i,j}(\mathcal{C})$ for the part of $H_i(\mathcal{C})$ that lies in topological grading $j$, in other words $H_{i,j}(\mathcal{C})= H_i(\mathcal{C})_j$. \begin{theorem}\label{new-kernel-theorem} Let $m\geqslant 1$.
Then the images of the maps \begin{gather} \ker\Bigl[s_\ast\colon H_{m-1}(G_{2m-2})\to H_{m-1}(G_{2m-1})\Bigr] \otimes H_1(G_2) \longrightarrow \ker\Bigl[s_\ast\colon H_{m}(G_{2m})\to H_m(G_{2m+1})\Bigr] \label{surjective-map-one} \\ H_{m-1}(G_{2m-2}) \otimes \ker\Bigl[s_\ast\colon H_1(G_2)\to H_1(G_3)\Bigr] \longrightarrow \ker\Bigl[s_\ast\colon H_{m}(G_{2m})\to H_m(G_{2m+1})\Bigr] \label{surjective-map-two} \end{gather} together span $\ker\left[s_\ast\colon H_{m}(G_{2m})\to H_m(G_{2m+1})\right]$. \end{theorem} The main ingredient in the proof of Theorem~\ref{new-kernel-theorem} is Lemma~\ref{twomplusoneacyclic-lemma}, which states that $H_{1,m}(\mathcal{B}_{2m+1})=0$ for $m\geqslant 2$; the theorem is an entirely algebraic consequence of that lemma. However our argument is significantly more unpleasant than we would like. Here is the general outline: Theorem~\ref{new-kernel-theorem} is a statement about $H_{1,m}(\S_{2m+1})$, which is by definition the kernel $\ker\left[s_\ast\colon H_{m}(G_{2m})\to H_m(G_{2m+1})\right]$. We will use the filtration \[ \S_{2m+1}=F_0\subseteq F_1\subseteq\cdots\subseteq F_{2m}=\mathcal{B}_{2m+1} \] from Definition~\ref{filtration-definition} to get from what we know about $H_{1,m}(\mathcal{B}_{2m+1})$ to what we need to know about $H_{1,m}(\S_{2m+1})$. We will do this by using the spectral sequence arising from the filtration in topological degree $m$. \[ E^1_{i,j}=H_{i+j,m}(F_i/F_{i-1}) \implies H_{i+j,m}(\mathcal{B}_{2m+1}) \] The point is to identify the differentials affecting the term $E^1_{0,1}=H_{1,m}(\S_{2m+1})$ with the maps~\eqref{surjective-map-one} and~\eqref{surjective-map-two}. Let us begin the proof in detail. We are interested in the values of $H_{r,m}(F_i/F_{i-1})$ in the cases $r=0,1,2$.
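Before the general computation, it may help to check the indexing in the smallest case (an illustration only, not used in what follows): for $m=1$ the filtration reads $\S_3=F_0\subseteq F_1\subseteq F_2=\mathcal{B}_3$, and
\[
E^1_{0,1}=H_{1,1}(F_0)=H_{1,1}(\S_3)=\ker\bigl[s_\ast\colon H_1(G_2)\to H_1(G_3)\bigr],
\]
which is the $m=1$ instance of the kernel appearing in Theorem~\ref{new-kernel-theorem}.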
Recall from Proposition~\ref{quotient-isomorphism} that for $i\geqslant 1$ we have \[ F_i/F_{i-1} \cong \Sigma_b [\S_{2m+1-i}\otimes\mathcal{B}_i] \] so that \[ H_{r,m}(F_i/F_{i-1}) \cong H_{r-1,m}[\S_{2m+1-i}\otimes\mathcal{B}_i] \cong \bigoplus_{\substack{r_1+r_2=r-1\\ m_1+m_2=m}} H_{r_1,m_1}(\S_{2m+1-i})\otimes H_{r_2,m_2}(\mathcal{B}_i). \] We have the following. \begin{lemma} For $r=0,1,2$ and $i=0,\ldots,2m$, the only nonzero groups $H_{r,m}(F_i/F_{i-1})$ are as follows. \begin{align*} H_{1,m}(F_0) & \cong H_{1,m}(\S_{2m+1}) \\ H_{2,m}(F_0) & \cong H_{2,m}(\S_{2m+1}) \\ H_{1,m}(F_1/F_0) & \cong H_{0,m}(\S_{2m})\otimes H_{0,0}(\mathcal{B}_1)\\ H_{2,m}(F_1/F_0) & \cong H_{1,m}(\S_{2m})\otimes H_{0,0}(\mathcal{B}_1)\\ H_{2,m}(F_2/F_1) & \cong H_{1,m-1}(\S_{2m-1})\otimes H_{0,1}(\mathcal{B}_2)\\ H_{2,m}(F_3/F_2) & \cong H_{0,m-1}(\S_{2m-2})\otimes H_{1,1}(\mathcal{B}_3) \end{align*} \end{lemma} \begin{proof} {\bf Case $i=0$.} In this case we have $H_{r,m}(F_0)=H_{r,m}(\S_{2m+1})$, and by Theorem~\ref{stability-inductive} this is nonzero only for $r\geqslant 1$. {\bf Case $i=1$.} In this case we have \[ H_{r,m}(F_1/F_0) \cong H_{r,m}(\Sigma_b[\S_{2m}\otimes\mathcal{B}_1]) \cong H_{r-1,m}(\S_{2m}\otimes\mathcal{B}_1) \cong \bigoplus_{m_1+m_2=m} H_{r-1,m_1}(\S_{2m})\otimes H_{0,m_2}(\mathcal{B}_1) \] since $\mathcal{B}_1$ is concentrated in homological degree $b=0$. Now by Theorem~\ref{stability-inductive} the term $H_{r-1,m_1}(\S_{2m})$ vanishes for $m_1\leqslant m-r/2$. So for $r=0$ we require $m_1>m$, which is impossible, and for $r=1,2$ the only possibility is $m_1=m$, $m_2=0$. So the possible terms are \[ H_{1,m}(F_1/F_0)\cong H_{0,m}(\S_{2m})\otimes H_{0,0}(\mathcal{B}_1) \] and \[ H_{2,m}(F_1/F_0)\cong H_{1,m}(\S_{2m})\otimes H_{0,0}(\mathcal{B}_1). 
\] {\bf Case $2\leqslant i\leqslant 2m$.} In this case we have \begin{align*} H_{r,m}(F_i/F_{i-1}) &\cong H_{r,m}(\Sigma_b[\S_{2m+1-i}\otimes\mathcal{B}_i]) \\ &\cong H_{r-1,m}(\S_{2m+1-i}\otimes\mathcal{B}_i) \\ &\cong \bigoplus_{\substack{r_1+r_2=r-1\\m_1+m_2=m}} H_{r_1,m_1}(\S_{2m+1-i})\otimes H_{r_2,m_2}(\mathcal{B}_i). \end{align*} Now from Theorem~\ref{stability-inductive} we know that $H_{r_2,m_2}(\mathcal{B}_i)=0$ for $r_2\leqslant i-2m_2-1$, while $H_{r_1,m_1}(\S_{2m+1-i})=0$ for $r_1\leqslant 2m+1-i-2m_1-1$. Thus a nonzero group appearing in the direct sum above must have \[ r_1=2m-i-2m_1+\delta \text{ and } r_2=i-2m_2-1+\epsilon \] for $\delta,\epsilon>0$. Then the constraints $r_1+r_2=r-1$ and $m_1+m_2=m$ give us $r=\delta+\epsilon$. Thus, to find a nonzero group when $i\geqslant 2$ and $r=0,1,2$, the only possibility is that $r=2$ and $\delta=\epsilon=1$. But then $(r_1,r_2)=(1,0)$ or $(r_1,r_2)=(0,1)$, in which case we have two possible summands, only one of which can occur, according to the parity of $i$, namely \[ H_{2,m}(F_i/F_{i-1}) = \left\{\begin{array}{ll} H_{0,m-(i-1)/2}(\S_{2m+1-i})\otimes H_{1,(i-1)/2}(\mathcal{B}_i) & \text{for }i\text{ odd,} \\ H_{1,m-i/2}(\S_{2m+1-i})\otimes H_{0,i/2}(\mathcal{B}_i) & \text{for }i\text{ even.} \end{array}\right. \] However, Lemmas~\ref{twomplusoneacyclic-lemma} and~\ref{twomacyclic-lemma} guarantee that the second factors vanish for $i\geqslant 4$. Thus the only contributing terms are \[ H_{2,m}(F_3/F_2) = H_{0,m-1}(\S_{2m-2})\otimes H_{1,1}(\mathcal{B}_3) \] and \[ H_{2,m}(F_2/F_1)=H_{1,m-1}(\S_{2m-1})\otimes H_{0,1}(\mathcal{B}_2). \] This completes the proof. \end{proof} Thus the spectral sequence is as follows.
\[ \begin{tikzpicture} \matrix (m) [matrix of math nodes, nodes in empty cells,nodes={minimum width=5ex, minimum height=5ex,outer sep=-5pt}, column sep=1ex,row sep=1ex]{ \scriptstyle H_{2,m}(\S_{2m+1}) & \bullet & \bullet & \bullet & \bullet \\ \scriptstyle H_{1,m}(\S_{2m+1}) & \scriptstyle H_{1,m}(\S_{2m})\otimes H_{0,0}(\mathcal{B}_1) & \bullet & \bullet & \\ 0 & \scriptstyle H_{0,m}(\S_{2m})\otimes H_{0,0}(\mathcal{B}_1)& \scriptstyle H_{1,m-1}(\S_{2m-1})\otimes H_{0,1}(\mathcal{B}_2) & \bullet& \bullet \\ 0 & 0 & 0 & \scriptstyle H_{0,m-1}(\S_{2m-2})\otimes H_{1,1}(\mathcal{B}_3)& \bullet \\ 0 & 0 & 0 & 0 & 0 \\ }; \draw[dotted] (m-1-1) -- (m-2-1); \draw[dotted] (m-2-1) -- (m-3-1); \draw[dotted] (m-3-1) -- (m-4-1); \draw[dotted] (m-4-1) -- (m-5-1); \draw[dotted] (m-3-1) -- (m-3-2); \draw[dotted] (m-3-2) -- (m-3-3); \draw[dotted] (m-3-3) -- (m-3-4); \draw[dotted] (m-3-4) -- (m-3-5); \end{tikzpicture} \] We will now investigate the differentials affecting $H_{1,m}(\S_{2m+1})$. We will need the following preliminary result. \begin{lemma} \label{Bthreelemma} An arbitrary element of $H_{1,1}(\mathcal{B}_3)$ has a representative of the form \[ (x\otimes \sigma - \sigma\otimes x) + q\otimes\sigma \] where $x\in A_{2,1}$ and $q\in\ker(s\colon A_{2,1}\to A_{3,1})$, and $\sigma\in A_{1,0}$ is the stabilising element. \end{lemma} \begin{proof} $A_1$ and $A_2$ are concentrated in non-negative degrees, and in degree $0$ they are spanned by $\sigma$ and $\sigma^2$ respectively. Thus an arbitrary cycle of $\mathcal{B}_3$ in bidegree $(1,1)$ has form $j\otimes\sigma + k\otimes\sigma^2 + \sigma\otimes l +\sigma^2\otimes m$ for $j,l\in A_{2,1}$ and $k,m\in A_{1,1}$. By adding $d(k\otimes\sigma\otimes\sigma -\sigma\otimes\sigma\otimes m)$, we may assume that $k=m=0$, so that our cycle has the form $j\otimes\sigma+\sigma\otimes l$. This can be rewritten in the required form with $x=-l$ and $q=j+l$. 
\end{proof} \begin{lemma} The span of the images of the differentials with target $H_{1,m}(\S_{2m+1})\cong \ker(s\colon H_m(G_{2m})\to H_m(G_{2m+1}))$ is precisely the span of the maps \eqref{surjective-map-one} and \eqref{surjective-map-two}. \end{lemma} \begin{proof} There are just three positions in the spectral sequence supporting differentials with the given target. We will compute the differentials case by case. {\bf Case 1: Differentials with domain $H_{1,m}(\S_{2m})\otimes H_{0,0}(\mathcal{B}_1)$.} An element $l$ of the domain can be represented by a cycle $l_1=x\otimes\sigma$ in $\S_{2m}\otimes\mathcal{B}_1$, where $x\in\ker(s\colon A_{2m-1,m}\to A_{2m,m})$ and $\sigma\in A_{1,0}$ is the stabilising element. Then under the isomorphism of Proposition~\ref{quotient-isomorphism}, $l_1$ corresponds to the element $l_2=\sigma\otimes x\otimes \sigma$ of $F_1/F_0$. We lift this to the element $l_3=\sigma\otimes x \otimes \sigma$ of $F_1$. Then $d(l_3) = \sigma\cdot x \otimes \sigma - \sigma\otimes x\cdot \sigma=0$. Thus all differentials $d^r$ vanish on $l$. (In fact there is only one possibility, $d^1$.) {\bf Case 2: Differentials with domain $H_{1,m-1}(\S_{2m-1})\otimes H_{0,1}(\mathcal{B}_2)$}. An element $l$ of the domain can be represented by a linear combination of cycles of the form $x\otimes y$ in $\S_{2m-1}\otimes\mathcal{B}_2$, where $x\in\ker(s\colon A_{2m-2,m-1}\to A_{2m-1,m-1})$ and $y\in A_{2,1}$. Let us assume without loss that $l$ is in fact represented by $l_1=x\otimes y$. Then under the isomorphism of Proposition~\ref{quotient-isomorphism}, $l_1$ corresponds to the element $l_2=\sigma\otimes x\otimes y$ of $F_2/F_1$, which we lift to the element $l_3=\sigma\otimes x\otimes y$ of $F_2$. Now $d(l_3) = \sigma\cdot x\otimes y - \sigma \otimes x\cdot y =-\sigma\otimes x\cdot y$, which lies in $F_0$.
Thus $d^1(l)=0$, while $d^2(l)$ is the class represented by $-\sigma\otimes x\cdot y$, which under the isomorphism of Proposition~\ref{quotient-isomorphism} corresponds to the element $-x\cdot y$ of $A_{2m,m} = H_m(G_{2m})$. This is precisely the image of $-x\otimes y$ under the map~\eqref{surjective-map-one} above. Thus the image of $d^2$ is precisely the image of~\eqref{surjective-map-one}. {\bf Case 3: Differentials with domain $H_{0,m-1}(\S_{2m-2})\otimes H_{1,1}(\mathcal{B}_3)$.} By Lemma~\ref{Bthreelemma}, an element $l$ of the domain has a representative of the form \[ l_1=\sum_\alpha x_\alpha \otimes(y_\alpha\otimes\sigma-\sigma\otimes y_\alpha) +\sum_\beta p_\beta\otimes(q_\beta\otimes\sigma) \] where $x_\alpha,p_\beta\in A_{2m-2,m-1}$, $y_\alpha\in A_{2,1}$ and $q_\beta\in \ker(s\colon A_{2,1}\to A_{3,1})$. Under the isomorphism of Proposition~\ref{quotient-isomorphism}, $l_1$ corresponds to the element \[ l_2=\sum_\alpha ( x_\alpha \otimes y_\alpha\otimes\sigma- x_\alpha\otimes \sigma\otimes y_\alpha) +\sum_\beta p_\beta\otimes q_\beta\otimes\sigma \] of $F_3/F_2$. We lift this to the element \[ l_3=\sum_\alpha ( x_\alpha \otimes y_\alpha\otimes\sigma- x_\alpha\otimes \sigma\otimes y_\alpha +\sigma\otimes x_\alpha\otimes y_\alpha) +\sum_\beta p_\beta\otimes q_\beta\otimes\sigma \] of $F_3$. (The apparently new terms lie in $F_2$.) Then \[ d(l_3)=\sum_\alpha ( x_\alpha\cdot y_\alpha\otimes \sigma - \sigma\otimes x_\alpha\cdot y_\alpha) +\sum_\beta p_\beta\cdot q_\beta\otimes\sigma. \] This lies in $F_1$, so that $d^1(l)=0$, and its image in $F_1/F_0$ is \[ \sum_\alpha x_\alpha\cdot y_\alpha\otimes \sigma +\sum_\beta p_\beta\cdot q_\beta\otimes\sigma \] so that applying the isomorphism of Proposition~\ref{quotient-isomorphism} shows that \[ d^2(l) = \left[\sum_\alpha x_\alpha\cdot y_\alpha + \sum_\beta p_\beta\cdot q_\beta\right] \otimes [\sigma] \in H_{0,m}(\S_{2m})\otimes H_{0,0}(\mathcal{B}_1).
\] Thus $l$ lies in the kernel of $d^2$ if and only if \[ \left[\sum_\alpha x_\alpha\cdot y_\alpha + \sum_\beta p_\beta\cdot q_\beta\right] \] is zero in $H_{0,m}(\S_{2m})$, or in other words if and only if there is $w\in A_{2m-1,m}$ such that $\sum_\alpha x_\alpha\cdot y_\alpha + \sum_\beta p_\beta\cdot q_\beta= \sigma\cdot w$. In this case, we may again represent $l$ by $l_1$, which again corresponds to the element $l_2$ of $F_3/F_2$, but which we now lift to the element $l_3-\sigma\otimes w\otimes\sigma$ of $F_3$. (The additional term lies in $F_1$.) But then $d(l_3-\sigma\otimes w\otimes \sigma)$ is precisely the element \[ \sum_\beta \sigma\otimes p_\beta\cdot q_\beta \] of $F_0$. Applying the isomorphism of Proposition~\ref{quotient-isomorphism}, we find that \[ d^3(l) = \left[\sum_\beta p_\beta\cdot q_\beta\right] \in H_{1,m}(\S_{2m+1}). \] It follows that the image of $d^3$ is precisely the span of the map \eqref{surjective-map-two}. \end{proof} We may now complete the proof. Since $H_{1,m}(\mathcal{B}_{2m+1})=0$, it follows that the infinity-page of the spectral sequence must vanish in total degree $1$. So then in particular we must have $E^\infty_{0,1}=0$, or in other words, the differentials with target $H_{1,m}(\S_{2m+1})$ must span. But we have identified the (nonzero) differentials with the maps \eqref{surjective-map-one} and \eqref{surjective-map-two}. Thus it follows that together, the images of these two maps must span. This completes the proof of Theorem~\ref{new-kernel-theorem}. \bibliographystyle{plain}
https://arxiv.org/abs/1012.3541
On the polygonal diameter of the interior, resp. exterior, of a simple closed polygon in the plane
We give a tight upper bound on the polygonal diameter of the interior, resp. exterior, of a simple $n$-gon, $n \ge 3$, in the plane as a function of $n$, and describe an $n$-gon $(n \ge 3)$ for which both upper bounds (for the interior and the exterior) are attained \emph{simultaneously}.
\section{Introduction} The following is well known: \begin{theorem}{\rm (The Jordan theorem)}\label{theo1.1} Let $f:[0,1] \to \bR^2$ be a simple closed curve in the plane ($f$ is continuous, $f(0) = f(1)$ and $f(u) \not= f(v)$ for $0 < u < v \le 1$). Define $P=_{\rm def}$ {\rm image}$f= \{f(u) : 0 \le u \le 1\}$, the image of $f$. Then $\bR^2 \setminus P = U_0 \cup U_1$, where $U_0, U_1$ are connected, open, non-empty, mutually disjoint sets, $U_0$ is bounded (interior), $U_1$ is unbounded (exterior), and $P = {\rm bd}(U_0) = {\rm bd} (U_1)$. \end{theorem} The proof of this theorem is not easy; see \cite{Bertoglio}, \cite{Lawson}, \cite{Thomassen}, \cite[p. 37 ff.]{Moise}, \cite[vol. I, pp. 39-64]{Aleksandrov}, \cite[pp. 285 ff.]{Kuratowski}, and the survey \cite{Dostal}. When the curve $P$ is polygonal, however, i.e., when $f$ is piecewise affine, the theorem becomes elementary: \begin{theorem} {\rm(The piecewise affine Jordan theorem)} \label{theo1.2} Let $p_0,p_1,\dots,p_{n-1},p_n=p_0$, $n \ge 3$, be ($n$ distinct) points in $\bR^2$. Assume that the polygon $P=_{\rm def} \bigcup\limits^n_{i=1} [p_{i-1},p_i]$ is simple, i.e., that the segments $[p_{i-1},p_i]$ do not intersect except at common endpoints: $\{p_i\} = [p_{i-1},p_i] \cap [p_i,p_{i+1}]$ for $1 \le i \le n-1$, $\{p_0\} = [p_0,p_1] \cap [p_{n-1},p_0]$. Then $\bR^2 \setminus P = U_0 \cup U_1$, with the same properties of $U_0,U_1$ as listed above {\rm (Theorem \ref{theo1.1})}. \end{theorem} \begin{definition}\label{def1.1} A polygon $P$ satisfying the conditions of Theorem \ref{theo1.2} is a \emph{simple closed $n$-gon}. The bounded [resp. unbounded] domain $U_0$ [resp. $U_1$] is the \emph{interior} [resp. \emph{exterior}] of $P$, denoted by int$P$ [resp. ext$P$]. \end{definition} A particularly simple proof of Theorem \ref{theo1.2} is known as the ``raindrop proof''; see \cite[pp. 267-269]{Courant}, \cite[pp. 281-285]{Hille}, \cite[pp. 27-29]{Bensen}, or \cite[pp. 16-18]{Moise}. 
We reproduce this proof in a somewhat more complete and formal form than is usually given in the literature, for later reference to some of its parts. So we first prove Theorem \ref{theo1.2} (in Sections 2 and 3 below). Then, squeezing this proof, a \emph{tight} upper bound on the polygonal diameter of int$P$ [resp. ext$P$] (see Definition \ref{def3.2} below) is given as a function of $n$, and an $n$-gon $(n \ge 3)$ for which both upper bounds are attained \emph{simultaneously} is described (see Theorem \ref{theo4.1} below). The $d$-dimensional analogue $(d \ge 2)$ of this problem was discussed in \cite[Theorem 3.2]{Perles}. There we gave upper bounds on the polygonal diameter of int$\cC$, resp. ext$\cC$, for a polyhedral $(d-1)$-pseudomanifold $\cC$ in $\bR^d$ as a function of the number $n$ of its facets and of $d$. The bounds given there are shown to be \emph{almost} tight (see \cite[Section 4]{Perles}), whereas the bounds given here (for $d = 2$) are tight. Another novelty of the present paper is that there is an $n$-gon $P$ in $\bR^2$ for which \emph{both} upper bounds (on the polygonal diameter of int$P$ and ext$P$) are attained (simultaneously), as said above, whereas for $d \ge 3$ the examples given in \cite[Section 4]{Perles} (namely, one for int$\cC$ and another one for ext$\cC$) are \emph{different} from each other. For the sake of the proof of Theorem \ref{theo1.2}, we split it into two statements: Let $P$ be a simple closed polygon in $\bR^2$. (E) (separation): $\bR^2 \setminus P$ is the disjoint union of two open sets, int$P$ and ext$P$. The boundary of each one of these sets is $P$; int$P$ is bounded and ext$P$ is unbounded. (F) (connectivity): The sets int$P$ and ext$P$ are [polygonally] connected. We shall prove (E) (Section 2) by constructing a continuous function $f: \bR^2 \setminus P \to \{0,1\}$ which attains both values $0$ and $1$ in every neighborhood of every point $x \in P$, and defining ext$P = f^{-1}(0)$, int$P = f^{-1}(1)$. 
Statement (F) (polygonal connectivity of int$P$ and of ext$P$) follows from Theorem \ref{theo3.1} below. \section{A ``raindrop'' proof of (E)} The construction of $f$ will be performed in three steps: \textbf{Preliminary step:} Choosing a ``generic'' direction. Choose an orthogonal basis $(u,v)$ for $\bR^2$ so that no two vertices of $P$ have the same first coordinate. Intuitively: the polygon $P$ is drawn on a sheet of paper; rotate the paper so that no two vertices lie one above the other. Formally: let $L_1, \dots, L_t$ be all lines spanned by subsets of $\{p_1,\dots,p_n\}$. For $i=1,\dots,t$ let $L^0_i =_{\rm def} L_i-L_i$ be the linear ($1$-dimensional) subspace parallel to $L_i$. Choose a unit vector $v \in \bR^2 \setminus \bigcup \limits^t_{i=1} L^0_i$ (``$v$'' for ``vertical''). The vector $v$ is our direction ``up'', and $-v$ is pointing ``down''. By our choice of $v$, a line $L$ spanned by two vertices of $P$ will meet a line parallel to $v$ in at most one point. For a point $p \in \bR^2 \setminus P$ denote by $R(p)$ the closed vertical ``pointing down'' half-line $R(p) =_{\rm def} \{p-\lambda v: 0 \le \lambda < \infty\}$. $R(p)$ is the path of a ``raindrop'' emanating from $p$. We divide $\bR^2 \setminus P$ into two disjoint sets \[ \begin{array}{lll} S_0 & =_{\rm def} & \{p \in \bR^2 \setminus P: R(p) \, \mbox{ does not meet any vertex of }\, P\}\,,\\ S_1 & =_{\rm def} & \{ p \in \bR^2 \setminus P: R(p) \, \mbox{ meets exactly one vertex of } \, P\}\,. \end{array} \] (By our choice of $v$, we have $\bR^2 \setminus P = S_0 \cup S_1$.) We shall define $f$ on $S_0$ (= Step I), then extend it (continuously) to $S_1$ (= Step II). The following notation will be used: For a set $A \subset \bR^2$, $A^+ =_{\rm def} \{a + \lambda v: a \in A, \lambda \ge 0\}$. Thus $A^+$ is the set of points that lie ``above'' $A$. If $A$ is closed, then $A^+$ is closed. 
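Only finitely many directions must be avoided in the preliminary step, so in computational practice a randomly sampled angle is generic with probability $1$. The following sketch is our own illustration, not part of the paper; the function name and the tolerance \texttt{1e-9} are arbitrary choices.

```python
import itertools
import math
import random

def generic_down_direction(vertices, tol=1e-9, seed=0):
    """Return a unit vector v parallel to none of the (finitely many)
    lines spanned by pairs of vertices, so that a ray in direction -v
    meets each such line in at most one point."""
    rng = random.Random(seed)
    # Directions to avoid, as angles modulo pi (a line and its
    # reversal determine the same direction).
    bad = {math.atan2(by - ay, bx - ax) % math.pi
           for (ax, ay), (bx, by) in itertools.combinations(vertices, 2)}
    while True:  # succeeds on the first try with probability 1
        theta = rng.uniform(0.0, math.pi)
        # Angular distance modulo pi, accounting for wrap-around.
        if all(min(abs(theta - t), math.pi - abs(theta - t)) > tol
               for t in bad):
            return (math.cos(theta), math.sin(theta))
```

For the unit square, for instance, the horizontal, vertical, and both diagonal directions are excluded, and any other angle is acceptable.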
Note that (for all $p \in \bR^2$ and $A \subset \bR^2$): \begin{equation}\label{eq1} R(p) \, \mbox{ meets } \, A \, \mbox{ iff } \, p \in A^+\,. \end{equation} \textbf{Step I:} Define $f$ on $S_0$. For $p \in S_0$ denote by $r(p)$ the number of edges of $P$ met by $R(p)$, and define $f(p) =_{\rm def} {\rm par} (r(p)) =_{\rm def} \frac{1}{2} (1-(-1)^{r(p)})$, the parity of $r(p)$ ($f(p) = 0$ if $r(p)$ is even, $1$ if $r(p)$ is odd). \begin{center} \end{center} \begin{center} Fig. 1: the function $r(p)$ \hfill Fig. 2: the parity function $f(p) = {\rm par}(r(p))$ \end{center} Next we show that $S_0$ is a dense open subset of $\bR^2$, and that $f: S_0 \to \{0,1\}$ is a locally constant, hence continuous function. Using vert$P$ for the set of vertices of $P$, we have in view of (\ref{eq1}) \begin{equation}\label{eq2} S_0 = \bR^2 \setminus (P \cup (\mbox{vert}P)^+)\,. \end{equation} The set $({\rm vert}P)^+$ is closed, as is $P$. Thus $S_0$ is an open subset of $\bR^2$. Moreover, the set $P \cup ({\rm vert}P)^+$ can be covered by a finite number of lines in $\bR^2$. It follows that $S_0$ is dense in $\bR^2$. Continuity of $f$: Assume $x \in S_0$. Let $\varepsilon$ be the (positive) distance from $x$ to $P \cup ({\rm vert}P)^+ (= \bR^2 \setminus S_0)$. If $x' \in \bR^2, \|x-x'\| < \varepsilon$, then the segment $[x,x']$ does not meet $P \cup ({\rm vert}P)^+$. Let $e = [p_{i-1},p_i] \, (1 \le i \le n)$ be any edge of $P$. The set $e^+$ is a closed, convex, unbounded and full-dimensional polyhedral subset of $\bR^2$, whose boundary consists of the lower edge $e$ and the side edges $p^+_{i-1}, p^+_i$. Thus bd$e^+ \subset P \cup ({\rm vert} P)^+$, and therefore the segment $[x,x']$ does not meet the boundary of $e^+$. It follows that $x' \in e^+$ iff $x \in e^+$, i.e., $R(x)$ meets $e$ iff $R(x')$ meets $e$. This is true for all edges $e$ of $P$. Therefore $r(x) = r(x')$, hence $f(x) = f(x')$. 
This shows that the function $f: S_0 \to \{0,1\}$ is locally constant, hence continuous (in $S_0)$. \textbf{Step II:} Extend $f$ continuously from $S_0$ to $S_0 \cup S_1 = \bR^2 \setminus P$. Suppose $p \in S_1$. Let $p_i$ be the unique vertex of $P$ that meets $R(p)$, i.e., $p \in p^+_i$. Note that $p \not= p_i$, i.e., $p \in \mbox{ relint } p^+_i$. Let $e_1 = [p_{i-1},p_i], e_2 = [p_i,p_{i+1}]$ be the two edges of $P$ incident with $p_i$. Define $L = p+\bR v$. $L$ is the vertical line through $p$. Denote by $L^-, L^+$ the two closed half-planes of $\bR^2$ bounded by $L$. None of the edges $e_1,e_2$ is included in $L$, and they may lie either in the same half-plane $L^-$ or $L^+$, or in different half-planes. Choose the notation so that either $(\alpha)$ $e_1 \subset L^-, e_2 \subset L^+$ (Fig. 3) or $(\beta)$ $e_1 \cup e_2 \subset L^+$ (Fig. 4). \begin{center} \end{center} \begin{center} Fig. 3: case $\alpha$ \hspace{1cm} ~~~~~~~~~~~~~~~~~~~~~~Fig. 4: case $\beta$ \end{center} A glance at Figures 3 and 4 shows that for a point $x$ in the vicinity of $p$, but not lying on $L$, the parity of $r(x)$ is the same on either side of $L$. Hence we can extend the definition of $f$ to $p$ by defining $f(p)$ to be this parity. To make this into a formal argument, consider the closed set $\triangle =_{\rm def} P \cup ({\rm vert} P \setminus \{p_i\})^+$. This set includes the boundary of $e^+$, for every edge $e$ of $P$, except for $e^+_1$ and $e^+_2$. It also includes the boundaries of $e^+_1$ and $e^+_2$, except for $p^+_i \setminus \{p_i\}$, and it does not contain the point $p$. Put $\varepsilon =_{\rm def} \mbox{ dist}(p,\triangle) > 0$, and define $U =_{\rm def} \{x \in \bR^2: \|x-p\| < \varepsilon\} = {\rm int} B^2(p,\varepsilon)$. Note that if $x \in U$, then the closed interval $[p,x]$ misses $\triangle$. Now make the following observations. 
\begin{enumerate} \item[(I)] If $e$ is any edge of $P$, other than $e_1$ and $e_2$, then the interval $[p,x]$ does not meet the boundary of $e^+$, and therefore $p$ and $x$ are either both in $e^+$, or both not in $e^+$. \item[(II)] If, say, $e_1 \subset L^-$ and $x \in {\rm int} L^-$ then, moving along the interval $[p,x]$ from $p$ to $x$, we start at a point $p \in p^+_i \subset {\rm bd} e^+_1$, move into int$e^+_1$, and do not hit the boundary of $e^+_1$ again. Therefore $x \in {\rm int} e^+_1$. The same holds with $L^-$ replaced by $L^+$, and/or $e_1$ replaced by $e_2$. It follows that in case $(\alpha)$: if $x \in U \setminus L$, then $x$ belongs to exactly one of the sets $e^+_1,e^+_2$. And it follows that in case $(\beta)$: if $x \in U \cap {\rm int} L^-$, then $x$ belongs to none of the sets $e^+_1,e^+_2$; if $x \in U \cap L^+$, then $x$ belongs to both of them. \item[(III)] If $p_j \in {\rm vert} P \setminus \{p_i\}$, then $p^+_j \subset \triangle$, and therefore $x \notin p^+_j$ whenever $x \in U$. \item[(IV)] If $x \in U \setminus L$, then clearly $x \notin p^+_i$. If $x \in U \cap L$, then the interval $[p,x]$ lies on $L$, contains a point $p \in p^+_i \setminus \{p_i\}$ and does not meet $p_i$; therefore $x \in p^+_i \setminus \{p_i\}$ (= relint$p^+_i$). \end{enumerate} From these observations we infer: \begin{enumerate} \item[(A)] $U \setminus L \subset S_0$ and $f$ is constant on $U \setminus L$. \item[(B)] $U \cap L \subset S_1$. \end{enumerate} Now define $f(p)$ to be the constant value that $f$ takes on $U \setminus L$. Clearly, if we apply the same procedure to any point $p' \in U \cap L$, we will end up with a value $f(p')$ equal to the value $f(p)$ just defined. (Note that any $\varepsilon'$-neighborhood of $p' \,(\varepsilon' > 0)$ contains points of $U \setminus L$.) Thus we have extended $f$ to a locally constant, hence continuous function $f: \bR^2 \setminus P \to \{0,1\}$. 
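Steps I and II together amount to what is now the textbook even-odd (ray-casting) point-in-polygon test. The sketch below is our own illustration, not part of the paper: it replaces Step II's treatment of rays through vertices by the standard half-open edge convention (an edge is counted iff exactly one of its endpoints lies strictly to the left of the query point), so no generic direction is needed.

```python
def in_interior(p, vertices):
    """Even-odd test: count the edges of the polygon crossed by the
    downward vertical ray (the "raindrop") from p, and report whether
    that count is odd.  The half-open convention below counts an edge
    iff exactly one of its endpoints is strictly to the left of p,
    which handles rays through vertices without a generic direction."""
    px, py = p
    inside = False
    n = len(vertices)
    for i in range(n):
        ax, ay = vertices[i - 1]
        bx, by = vertices[i]
        if (ax < px) != (bx < px):  # edge spans the vertical line through p
            # height at which the edge crosses that vertical line
            y = ay + (by - ay) * (px - ax) / (bx - ax)
            if y < py:              # crossing lies below p, on the raindrop's path
                inside = not inside
    return inside
```

For points of $\bR^2 \setminus P$ this returns True exactly on $f^{-1}(1) = {\rm int}P$; on $P$ itself the answer depends on the tie-breaking convention.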
To complete the proof of statement (E), we define, as indicated after (F) above, the sets ext$P =_{\rm def} f^{-1}(0)$ and int$P =_{\rm def} f^{-1} (1)$. These are clearly two disjoint open sets in $\bR^2$, whose union is dom$f = \bR^2 \setminus P$. Note that $\bR^2 \setminus {\rm conv}P \subset {\rm ext}P$ and, therefore, int$P \subset {\rm conv} P$. Thus ext$P$ is unbounded and int$P$ is bounded. We still have to show that every point of $P$ is a boundary point of both int$P$ and ext$P$ (and therefore int$P \not= \emptyset, {\rm ext} P \not= \emptyset$). Since the boundaries of int$P$ and of ext$P$ are closed sets, it suffices to show that the common boundary points of int$P$ and ext$P$ are dense in $P$. For any vertex $p_i \, (1 \le i \le n)$ the intersection of the vertical line $p_i + \bR v$ with an edge $e$ of $P$ is at most a singleton. Thus $e \setminus \cup \{p_i + \bR v: 1 \le i \le n\}$ is dense in $e$, and $P \setminus \cup \{p_i + \bR v: 1 \le i \le n\}$ is dense in $P$. If $x \in P \setminus \cup \{p_i + \bR v: 1 \le i \le n\}$, then $x$ belongs to the relative interior of some edge $e$ of $P$. If $\varepsilon > 0$ is sufficiently small, then the points $x + \varepsilon v, x - \varepsilon v$ are both in $S_0$, the half-line $R(x + \varepsilon v)$ meets $e$, in addition to all edges met by $R(x-\varepsilon v)$. Thus $r(x+ \varepsilon v) = 1 + r (x-\varepsilon v)$, and $f(x+\varepsilon v) \not= f(x-\varepsilon v)$, i.e., $\{f(x-\varepsilon v), f(x+\varepsilon v)\} = \{0,1\}$. Thus $x$ is a common boundary point of int$P$ and ext$P$. This finishes the proof of (E). \section{Proof of (F)} Put $I_i =_{\rm def} [p_{i-1},p_i], 1 \le i \le n$, the edges of $P$, and for $i = 1,2, \dots, n$ let $u_i$ be a unit vector perpendicular to aff$I_i$. 
Choose the orientation of $u_i$ in such a way that for each point $b \in {\rm relint} I_i$ and for all sufficiently small positive values of $\varepsilon$, $b + \varepsilon u_i \in {\rm ext}P$ and $b- \varepsilon u_i \in {\rm int}P$. Define $u_{i,i+1} =_{\rm def} u_i + u_{i+1}, \, 1 \le i \le n$ (the indices are taken modulo $n$, i.e., $p_n = p_0, u_{n+1} = u_1, u_{n,n+1} = u_{n,1} = u_n+u_1$). \begin{lemma}\label{lem3.1} If $\varepsilon$ is a sufficiently small positive number, then $p_i + \varepsilon u_{i,i+1} \in {\rm ext}P$, and $p_i - \varepsilon u_{i,i+1} \in {\rm int}P$ for $1 \le i \le n$. \end{lemma} \textbf{Proof:} The edges $I_i,I_{i+1}$ lie in two rays (half-lines) $L_i,L_{i+1}$ bounded by $p_i$, say $L_i = p_i + \bR^+ v_i, L_{i+1} = p_i + \bR^+ v_{i+1}$, where $v_i, v_{i+1}$ are suitable unit vectors orthogonal to $u_i$, $u_{i+1}$, respectively. \begin{center} \end{center} \begin{center} ~\phantom{000000}(a) ~~\hspace{4.5cm} (b) \hspace{5cm} (c) Fig. 5 \end{center} If $\varepsilon$ is a sufficiently small positive number $(0 < \varepsilon < {\rm dist}(p_i,P \setminus {\rm relint} (I_i \cup I_{i+1})))$, then $B^2 (p_i,\varepsilon) \setminus P = B^2 (p_i,\varepsilon) \setminus (L_i \cup L_{i+1})$. The union $L_i \cup L_{i+1}$ divides $B^2 (p_i, \varepsilon)$ into two open sectors, $B^2 (p_i, \varepsilon) \cap {\rm int} P$ and $B^2 (p_i, \varepsilon) \cap {\rm ext} P$. If $L_i,L_{i+1}$ are collinear $(v_{i+1} = -v_i)$, then each one of these two sectors is an open half disc. In this case $u_i = u_{i+1}$ (Fig. 5(a)), $u_{i,i+1} = 2u_i = 2u_{i+1}$, and the lemma holds trivially. If $L_i,L_{i+1}$ are not collinear, then one of the sectors is larger than a half disc, and the other is smaller. In both cases we have \begin{equation}\label{eq3} \langle u_i, v_{i+1} \rangle = \langle u_{i+1}, v_i \rangle = \sin \alpha\,, \end{equation} where $\alpha$ is the central angle of the sector $B^2 (p_i,\varepsilon) \cap {\rm ext}P$ at $p_i \, (0 \le \alpha \le 360^\circ)$. 
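Equation (3) can be spot-checked numerically. In the sketch below (our own illustration, not part of the paper) the orientation convention is realized, under an assumed counterclockwise orientation, by rotating $v_i$ by $+90^\circ$ and $v_{i+1}$ by $-90^\circ$; with this choice the two inner products in (3) indeed coincide.

```python
import math

def eq3_products(a, b):
    """The two inner products of equation (3) for ray directions
    v_i = (cos a, sin a), v_{i+1} = (cos b, sin b), with the normals
    u_i, u_{i+1} obtained by rotating v_i by +90 degrees and v_{i+1}
    by -90 degrees (an assumed realization of the paper's
    orientation convention)."""
    vi = (math.cos(a), math.sin(a))
    vj = (math.cos(b), math.sin(b))
    ui = (-vi[1], vi[0])   # v_i rotated by +90 degrees
    uj = (vj[1], -vj[0])   # v_{i+1} rotated by -90 degrees
    return (ui[0] * vj[0] + ui[1] * vj[1],   # <u_i, v_{i+1}>
            uj[0] * vi[0] + uj[1] * vi[1])   # <u_{i+1}, v_i>
```

Under this parametrization both values equal $\sin(b-a)$, the common value denoted $\sin\alpha$ in (3).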
If $\langle u_i, v_{i+1}\rangle < 0$, then $B^2 (p_i, \varepsilon) \cap {\rm ext}P$ is the larger sector (Fig. 5(b)), and if $\langle u_i,v_{i+1}\rangle > 0$, then $B^2(p_i,\varepsilon) \cap {\rm int}P$ is the larger sector (Fig. 5(c)). Summing up the equalities \[ \begin{array}{lll} u_i & = & \langle u_i, u_{i+1}\rangle u_{i+1} + \langle u_i,v_{i+1}\rangle v_{i+1} \,,\\ u_{i+1} & = & \langle u_{i+1}, u_i\rangle u_i + \langle u_{i+1}, v_i\rangle v_i \end{array} \] and using (\ref{eq3}), we find $(1 - \langle u_i, u_{i+1}\rangle) \, (u_i + u_{i+1}) = \sin \alpha\, (v_i + v_{i+1})$. If $u_i \not= u_{i+1}$, then $1-\langle u_i,u_{i+1}\rangle > 0$, and \[ u_{i,i+1} = u_i + u_{i+1} = \frac{\sin \alpha}{1-\langle u_i,u_{i+1}\rangle} \cdot (v_i + v_{i+1})\,. \] Thus $u_{i,i+1}$ is a positive [resp., negative] multiple of $v_i + v_{i+1}$ when $\sin \alpha > 0$ [resp., $\sin \alpha < 0$]. In both cases, $u_{i,i+1}$ points towards ext$P$, and $-u_{i,i+1}$ towards int$P$. \hfill \rule{2mm}{2mm} \begin{lemma}\textbf{{\rm (``Push away from \boldmath$P$''\unboldmath)}}\label{lem3.2} \begin{enumerate} \item[(a)] Fix $i, \, 1 \le i \le n$, suppose $b \in {\rm relint} I_i$ and $u$ is a vector satisfying $\langle u,u_i \rangle > 0$. Define $I^0 =_{\rm def} [b,p_i],I^\varepsilon =_{\rm def} [b+ \varepsilon u, p_i + \varepsilon u_{i,i+1}]$ ($u_i, u_{i+1}$ and $u_{i,i+1} = u_i + u_{i+1}$ denote the same vectors as in the previous lemma). If $\varepsilon$ is a sufficiently small positive number, then $I^\varepsilon \subset {\rm ext}P$ and $I^{-\varepsilon} \subset {\rm int}P$. (The required smallness of $\varepsilon$ may depend on the choice of the point $b$ and of the vector $u$.) \item[(b)] Fix $i, 1 \le i \le n$, and define $J^0 =_{\rm def} [p_i,p_{i+1}] = I_{i+1}, J^\varepsilon =_{\rm def} [p_i + \varepsilon u_{i,i+1}, p_{i+1} + \varepsilon u_{i+1,i+2}]$. 
If $\varepsilon$ is a sufficiently small positive number, then $J^\varepsilon \subset {\rm ext}P$ and $J^{-\varepsilon} \subset {\rm int}P$. \end{enumerate} \end{lemma} \textbf{Proof:} \begin{enumerate} \item[(a)] First note that $I^0$ does not meet any edge of $P$ except $I_i$ and $I_{i+1}$. The same holds for $I^\varepsilon$, provided \[ |\varepsilon| < \min \left(\frac{1}{2}, \frac{1}{\|u\|}\right) \cdot {\rm dist} \left(I^0, P \setminus({\rm relint} (I_i \cup I_{i+1}))\right)\,. \] By Lemma \ref{lem3.1}, $p_i + \varepsilon u_{i,i+1} \in {\rm ext}P$ and $p_i - \varepsilon u_{i,i+1} \in {\rm int}P$, provided $\varepsilon$ is positive and sufficiently small. To complete the proof, it suffices to show that $I^\varepsilon \cap I_i = \emptyset$ and $I^\varepsilon \cap I_{i+1} = \emptyset$ (for sufficiently small $|\varepsilon|, \, \varepsilon \not= 0$). As for $I_i$: $\langle u_i, u \rangle > 0$ (given) and $\langle u_i, u_{i,i+1}\rangle = 1 + \langle u_i, u_{i+1}\rangle > 0$. Therefore, for any $\varepsilon \not= 0$ both endpoints of $I^\varepsilon$ lie (strictly) on the same side of the line aff$I_i$, hence $I_i \cap I^\varepsilon = \emptyset$. As for $I_{i+1}$: If $I_{i+1}$ and $I_i$ lie on the same line $(u_i = u_{i+1})$, then the previous argument shows that $I_{i+1} \cap I^\varepsilon = \emptyset$ for all $\varepsilon \not= 0$ as well. If $u_i \not= u_{i+1}$, consider first the case $\langle u_i, v_{i+1}\rangle < 0$ (Fig. 5(b)). For $\varepsilon > 0, I^\varepsilon$ lies in the open half-plane $\{x \in \bR^2 : \langle u_i, x \rangle > \langle u_i,p_i\rangle\}$, whereas $I_{i+1}$ lies in the closed half-plane $\{x \in \bR^2: \langle u_i, x \rangle \le \langle u_i, p_i \rangle \}$. Therefore $I^\varepsilon \cap I_{i+1} = \emptyset$. For $\varepsilon < 0$, \[ \langle u_{i+1}, p_i + \varepsilon u_{i,i+1}\rangle = \langle u_{i+1}, p_i \rangle + \varepsilon (1 + \langle u_i, u_{i+1}\rangle) < \langle u_{i+1},p_i\rangle\,. 
\] On the other hand, $\langle u_{i+1},b\rangle < \langle u_{i+1},p_i\rangle$ (for any point $b \in {\rm relint} I_i$, since $\langle u_{i+1},v_i \rangle < 0$), and therefore $\langle u_{i+1},b + \varepsilon u\rangle < \langle u_{i+1},p_i\rangle$ for sufficiently small $|\varepsilon|, \varepsilon \not= 0$. Thus both endpoints of $I^\varepsilon$ lie on the same open side of the line aff$I_{i+1}$, hence $I^\varepsilon \cap I_{i+1} = \emptyset$. In the case $\langle u_i, v_{i+1}\rangle > 0$ (Fig. 5(c) above), just repeat the previous argument with the roles of $\varepsilon > 0$ and $\varepsilon < 0$ interchanged. \item[(b)] The proof is similar to that of (a). First, note that $J^0$ does not meet any edge of $P$ except $I_i,I_{i+1}$ and $I_{i+2}$. The same holds for $J^\varepsilon$, provided \[ |\varepsilon| < \min \left(\frac{1}{2}, \frac{1}{\|u\|}\right) \cdot {\rm dist } \left(J^0, P \setminus {\rm relint} (I_i \cup I_{i+1} \cup I_{i+2})\right)\,. \] By Lemma \ref{lem3.1}, $p_i + \varepsilon u_{i,i+1}, p_{i+1} + \varepsilon u_{i+1,i+2} \in {\rm ext}P$ and $p_i - \varepsilon u_{i,i+1}, p_{i+1} - \varepsilon u_{i+1,i+2} \in {\rm int} P$, provided $\varepsilon$ is positive and sufficiently small. To complete the proof, it suffices to show that $J^\varepsilon \cap I_i = \emptyset, J^\varepsilon \cap I_{i+1} = \emptyset$ and $J^\varepsilon \cap I_{i+2} = \emptyset$ (for sufficiently small $|\varepsilon|, \varepsilon \not= 0$). As for $I_{i+1}\!: \langle u_{i+1},u_{i,i+1}\rangle = 1 + \langle u_{i+1},u_i\rangle > 0$ and $\langle u_{i+1},u_{i+1,i+2}\rangle = 1 + \langle u_{i+1},u_{i+2} \rangle > 0$. Therefore, for any $\varepsilon > 0$, both endpoints of $J^\varepsilon$ lie on the same open side of the line aff$I_{i+1}$, hence $I_{i+1} \cap J^\varepsilon = \emptyset$. As for $I_i$: If $I_{i+1}$ and $I_i$ lie in the same line $(u_i = u_{i+1})$, then the previous argument shows that $I_i \cap J^\varepsilon = \emptyset$ for all $\varepsilon \not= 0$ as well. 
If $u_i \not= u_{i+1}$, consider first the case $\langle u_i, v_{i+1}\rangle < 0$ (Fig. 5(b)). For $\varepsilon > 0, J^\varepsilon$ lies in the open half-plane $\{x \in \bR^2: \langle u_{i+1},x\rangle > \langle u_{i+1},p_i\rangle\}$, whereas $I_i$ lies in the closed half-plane $\{x \in \bR^2: \langle u_{i+1},x\rangle \le \langle u_{i+1},p_i\rangle\}$. Therefore, $J^\varepsilon \cap I_i = \emptyset$. For $\varepsilon < 0$, we have $\langle u_i,p_i + \varepsilon u_{i,i+1}\rangle = \langle u_i, p_i \rangle + \varepsilon (1 + \langle u_i,u_{i+1}\rangle) < \langle u_i,p_i\rangle$. On the other hand, $\langle u_i, p_{i+1}\rangle < \langle u_i,p_i\rangle$ (since $\langle u_i,v_{i+1}\rangle < 0$), and therefore $\langle u_i, p_{i+1} + \varepsilon u_{i+1,i+2} \rangle < \langle u_i,p_i\rangle$ for sufficiently small $|\varepsilon|$. Thus both endpoints of $J^\varepsilon$ lie on the same open side of the line aff$I_i$, hence $J^\varepsilon \cap I_i = \emptyset$. In the case $\langle u_i, v_{i+1}\rangle > 0$ (Fig. 5(c)), just repeat the previous argument with the roles of $\varepsilon > 0$ and $\varepsilon < 0$ interchanged. As for $I_{i+2}$: Since the roles of $I_i$ and $I_{i+2}$ are interchangeable, the statement proved above for $I_i$ applies to $I_{i+2}$ as well. \hfill \rule{2mm}{2mm} \end{enumerate} \begin{definition}\label{def3.1} Let $p$ be a point in $\bR^2 \setminus P$ (= ${\rm ext}P \cup {\rm int}P$), and $I$ be an edge of $P$. We say that $p$ \emph{sees} $I$ if, for some point $a \in {\rm relint}\ I, [p,a]\cap P = \{a\}$. \end{definition} \begin{lemma}\label{lem3.3} Assume $p \in \bR^2 \setminus P$. Then $p$ sees at least one edge of $P$. \end{lemma} \textbf{Proof:} Assume, w.l.o.g., that $p \in {\rm ext}P$. Let $q$ be a point in int$P$. Let $U$ be a neighborhood of $q$ that lies entirely in int$P$. Choose a point $q' \in U$ such that the line aff$(p,q')$ does not meet any vertex of $P$. (This condition can be met by avoiding a finite number of lines through $p$.) 
Then the line segment $[p,q']$ must meet $P$. Let $a$ be the first point of $P$ on $[p,q']$ (starting from $p$). Then $a$ is a relative interior point of some edge $I$ of $P$, and $[p,a]\cap P = \{a\}$. \rule{2mm}{2mm} \begin{definition}{(poldiam(\boldmath$\cdot$\unboldmath))}:\label{def3.2} For a set $S \subset \bR^2$ and points $a,b \in S$, denote by $\pi_S (a,b)$ the smallest number of edges of a polygonal path that connects $a$ to $b$ within $S$ ($\pi_S(a,b) =_{\rm def} \infty$ if no such polygonal path exists). If $S$ is polygonally connected, then $\pi_S (\cdot,\cdot)$ is an integer-valued metric on $S$. The \emph{polygonal diameter} of $S$ is defined as poldiam$(S)=_{\rm def}$ ${\rm sup}\{\pi_S(a,b) : a,b \in S\}$. \end{definition} To prove (F) in Section 1 above, it suffices to show that poldiam(int$P$)$< \infty$ and poldiam(ext$P$)$< \infty$. The following theorem does it. \begin{theorem}{\rm \textbf{(straightforward upper bound on poldiam(int\boldmath$P$\unboldmath) and poldiam(ext\boldmath$P$\unboldmath))}}\label{theo3.1} If $P$ is a simple closed $n$-gon $(n \ge 3)$ in $\bR^2$, then {\rm poldiam(int$P$)} and {\rm poldiam(ext$P$)} are both $\le \lfloor\frac{n}{2}\rfloor + 3$. \end{theorem} \textbf{Proof:} Assume that $a,b$ are two points in the same component (int$P$ or ext$P$) of $\bR^2 \setminus P$. By Lemma \ref{lem3.3}, $a\,[b]$ sees at least one edge $I'\,[I'']$ of $P$ via $\bR^2 \setminus P$ (possibly $I' = I''$). 
The set $P \setminus ({\rm relint} (I' \cup I''))$ consists of at most two simple polygonal paths $P',P''$, the shorter one of which, say $P'$, concatenated with $I',I''$ at both of its endpoints, is of the form $\langle J_0,J_1,\dots,J_m,J_{m+1}\rangle$, where $m \le \lfloor\frac{n-2}{2}\rfloor = \lfloor\frac{n}{2}\rfloor-1$, $J_0, J_1,\dots,J_{m+1}$ are edges of $P \, (\{J_0,J_{m+1}\} = \{I',I''\})$, $J_{i-1}$ and $J_i$ share a vertex $q_i$ for $i=1, \, 2,\dots,m+1$, $a$ sees via $\bR^2 \setminus P$ a point $a' \in {\rm relint} J_0$, and $b$ sees via $\bR^2 \setminus P$ a point $b' \in {\rm relint} J_{m+1}$. Thus $\langle a,a',q_1,q_2,\dots,q_m,q_{m+1},b',b\rangle$ is a polygonal path of $m + 4 \le \lfloor\frac{n}{2}\rfloor - 1 + 4 = \lfloor\frac{n}{2}\rfloor + 3$ edges that connects $a$ to $b$ and runs along $P$ except for $[a,a']$ and $[b',b]$. By Lemma \ref{lem3.2}, this path can be pushed away from $P$ into $\bR^2 \setminus P$, thus producing a polygonal path of $m + 4 \le \lfloor\frac{n}{2}\rfloor +3$ edges that connects $a$ to $b$ via $\bR^2 \setminus P$. \hfill \rule{2mm}{2mm} \section{Tight upper bounds on poldiam(int\boldmath$P$\unboldmath) and on poldiam(ext\boldmath$P$\unboldmath)} Theorem \ref{theo3.1} gives an upper bound on poldiam(int$P$)~ [poldiam(ext$P$)] which is somewhat ``naive'', but sufficient to prove (F) in Section 1 above. Here we ``squeeze'' the proof of Theorem \ref{theo3.1} to obtain a tight result. \begin{theorem}{\rm (Main Theorem)}\label{theo4.1} Let $P$ be a simple closed $n$-gon in $\bR^2, n \ge 3$. Then \begin{enumerate} \item[(a)] the polygonal diameter of {\rm int}$P$ is $\le \lfloor\frac{n}{2}\rfloor$, and the polygonal diameter of {\rm ext}$P$ is $\le \lceil\frac{n}{2}\rceil$; \item[(b)] for every $n \ge 3$, there is an $n$-gon $P_n$ for which \emph{both} bounds are attained. 
\end{enumerate} \end{theorem} \textbf{Proof of Theorem 4.1(a):} First note that if $P$ is a convex polygon, then poldiam(int$P)= 1 \le \lfloor\frac{n}{2}\rfloor$, and it can be easily checked that poldiam(ext$P)=2 \le \lceil\frac{n}{2}\rceil$. (If we consider the closures, however, we find that poldiam(cl int$P)=1$, whereas poldiam(cl ext$P)= 3$ if $P$ has parallel edges, and equals $2$ otherwise.) This settles the case $n=3$ ($P_3$ is just a triangle). If $n=4$ and $P$ is not convex, then ext$P$ is the union of three convex sets (two open half-planes and a wedge), each two having a point in common, and therefore poldiam (ext$P) = 2 = \lceil\frac{n}{2}\rceil$. This settles the case $n=4$ for ext$P$. In view of the proof of Theorem \ref{theo3.1} and the foregoing discussion, we can establish the bounds on poldiam(int$P$) and poldiam(ext$P$) as claimed in Theorem \ref{theo4.1}(a) by showing the following: \begin{theorem}\label{theo4.2} Let $P$ be a closed simple $n$-gon in $\bR^2$. \begin{enumerate} \item[(i)] If $n \ge 4$ and $a,b \in {\rm int}P$, then there are two vertices $a',b'$ of $P$ such that $a$ sees $a'$ via {\rm int}$P$, $b$ sees $b'$ via {\rm int}$P$, and $a',b'$ are at most $\lfloor\frac{n}{2}\rfloor-2$ edges apart on $P$. (Recall that ``$a$ sees $a'$ via {\rm int}$P$'' means just: $]a,a'[\subset {\rm int}P$.) \item[(ii)] If $n \ge 5$ and $a,b \in {\rm ext}P$, then there are two vertices $a',b'$ of $P$ such that $a$ sees $a'$ via {\rm ext}$P$, $b$ sees $b'$ via {\rm ext}$P$, and $a',b'$ are at most $\lceil\frac{n}{2}\rceil-2$ edges apart on $P$,\\ \emph{or:} $\pi_{{\rm ext}P}(a,b) \le 3 \left(\le \lceil\frac{n}{2}\rceil\right.$ for $\left.n \ge 5\right)$. 
\end{enumerate} \end{theorem} \begin{remark}\label{rem1} The condition $n \ge 5$ in the first part of Theorem \ref{theo4.2} (ii) cannot be relaxed to $n \ge 4$: Let $P_4 = \langle p_0,p_1,p_2,p_3\rangle$ be a convex quadrilateral, and let $a,b \in {\rm ext}P_4$, $a$ close to $[p_0,p_1]$ and $b$ close to $[p_2,p_3]$. Then $a$ and $b$ do not see a common vertex of $P_4$. \end{remark} \begin{lemma}\label{lem4.1} Let $P$ be a simple closed polygon in $\bR^2$. Let $[b',p]$ be an edge of $P$, and $a,b$ two points such that $a \in \bR^2 \setminus P$, $b \in ]b',p]$ $($=$[b',p] \setminus \{b'\})$ and $a$ sees $b$ $($via $\bR^2 \setminus P$$)$. Then $a$ sees $($via $\bR^2 \setminus P$$)$ a vertex of $P$ included in $[a,b',b] \setminus [a,b]$. \end{lemma} \textbf{Proof:} If $a$ sees $b'$ then we are done. Otherwise the polygon $P \setminus ]b',p[$ meets the set $[a,b,b'] \setminus [b',b]$. For $0 \le \lambda \le 1$, define $b(\lambda) =_{\rm def} (1-\lambda) b + \lambda b'$, and let $\lambda_0$ be the smallest value of $\lambda$, $0 \le \lambda \le 1$, such that $[a,b (\lambda)] \cap (P \setminus ]b',p[) \not= \emptyset$ $(0 < \lambda_0 \le 1; \lambda_0 = 1$ is possible). Let $c'$ be the point of $[a,b(\lambda_0) ] \cap P$ nearest to $a$. Then $c'$ is a vertex of $P$, $c' \in [a,b,b'] \setminus [a,b]$ and $a$ sees $c'$. \hfill \rule{2mm}{2mm} \begin{corollary}\label{cor4.1} Let $P$ be a simple closed $n$-gon, $n \ge 3$, in $\bR^2$. Every point $a \in \bR^2 \setminus P$ sees via $\bR^2 \setminus P$ at least two vertices of $P$. \end{corollary} \textbf{Proof:} Let $R$ be a ray emanating from $a$ that meets $P$. By a slight rotation of $R$ around $a$ we may assume that $R$ does not meet any vertex of $P$, but still $R \cap P \not= \emptyset$. Let $b$ be the first point of $R$ that belongs to $P$ (starting from $a$). By assumption $b \in ]b',b''[$ for some edge $[b',b'']$ of $P$. 
By Lemma \ref{lem4.1}, $a$ sees via $\bR^2 \setminus P$ a vertex $c'$ $[c'']$ of $P$ included in $[a,b,b'] \setminus [a,b]$ [included in $[a,b,b''] \setminus [a,b]$], and clearly $c' \not= c''$. \hfill \rule{2mm}{2mm} \begin{lemma}\label{lem4.2} Let $P$ be a simple closed $n$-gon, $n \ge 4$, in $\bR^2$, and let $a \in \bR^2 \setminus P$. If every ray emanating from $a$ meets $P$, then $a$ sees via $\bR^2 \setminus P$ two \emph{non-adjacent} vertices of $P$. \end{lemma} \begin{remark}\label{rem2} The condition that every ray emanating from $a$ meets $P$ is met by every point $a \in {\rm int}P$. \end{remark} \textbf{Proof:} By Corollary \ref{cor4.1}, $a$ sees a vertex $c$ of $P$ via $\bR^2 \setminus P$. Consider the ray $R =_{\rm def} \{a + \lambda (a-c) : \lambda \ge 0\}$ that emanates from $a$ in a direction \emph{opposite} to $c$. By our assumption, $R$ meets $P$. Let $b$ be the first point of $R$ that belongs to $P$. If $b$ is a vertex of $P$, then $a$ sees the two vertices $b,c$ via $\bR^2 \setminus P$. These vertices are \emph{not adjacent}, since $[c,b] \cap P = \{c,b\}$. Otherwise, if $b$ is not a vertex of $P$, then $b$ is a relative interior point of an edge $[b',b'']$ of $P$ $(R \cap ]b',b''[ = \{b\}$). By Lemma \ref{lem4.1}, $a$ sees via $\bR^2 \setminus P$ a vertex $c'$ $[c'']$ of $P$ included in $[a,b,b'] \setminus [a,b]$ [included in $[a,b,b''] \setminus [a,b]$]. Clearly, $c' \not= c''$ and $c',c''$ are non-adjacent in $P$ unless $c' = b'$ and $c'' = b''$. In this case $a$ sees via $\bR^2 \setminus P$ both couples of vertices $\{c,b'\}$ and $\{c,b''\}$. At least one of these couples is \emph{non-adjacent} in $P$, otherwise $P$ would be a triangle, contrary to the assumption that $n \ge 4$. \hfill \rule{2mm}{2mm} \textbf{Proof of Theorem \ref{theo4.2}:} \begin{enumerate} \item[(i)] Suppose $P$ is a simple closed $n$-gon, $n \ge 4$, in $\bR^2$. Define $S =_{\rm def} {\rm int}P$, and assume $a,b \in S$. 
If $n = 4,5$, then ${\rm cl}\,S$ (=$P \cup {\rm int}P$) is starshaped with respect to a vertex of $P$. (If $n = 5$, then $S$ can be triangulated by two interior diagonals with a common vertex.) In this case $a$ and $b$ see via $S$ a common vertex $a'$ of $P$. Define $b' =_{\rm def} a'$; then $a',b'$ are zero edges apart on $P$. But $0 \le 0 = \lfloor\frac{n}{2}\rfloor-2$ for $n=4,5$. Assume, therefore, that $n \ge 6$, and that $a$ and $b$ do not see a common vertex of $P$ via $S$. By Lemma \ref{lem4.2}, $a$ sees via $S$ two non-adjacent vertices $a',a''$ of $P$. These vertices divide $P$ into two paths $P_1,P_2$, each having $\le n-2$ edges. Applying Lemma \ref{lem4.2} again, we find that $b$ sees via $S$ two non-adjacent vertices $b',b''$ of $P$; since $a$ and $b$ do not see a common vertex of $P$ via $S$, we have $\{a',a''\} \cap \{b',b''\} = \emptyset$. If both $b'$ and $b''$ are interior vertices of the same path, say $P_1$, then they divide $P_1$ into three parts. The middle part has at least two edges, and the two extreme parts together have at most $n-4$ edges. The shorter extreme part, with endpoints (say) $a',b'$, has at most $\lfloor\frac{n-4}{2}\rfloor = \lfloor\frac{n}{2}\rfloor-2$ edges. If, however, $b'$ is an interior vertex of $P_1$ and $b''$ is an interior vertex of $P_2$, then they divide $P_1$ and $P_2$ into four polygonal paths, each of which has $b'$ or $b''$ as one endpoint. The shortest of these paths has at most $\lfloor\frac{n}{4}\rfloor$ edges. But $\lfloor\frac{n}{4}\rfloor \le \lfloor\frac{n}{2}\rfloor-2$ for $n \ge 6$. \item[(ii)] Assume $n \ge 5$, define $T = {\rm ext}P$, and let $a,b \in T$. Then either \begin{enumerate} \item[(A1)] every ray emanating from $a$ meets $P$, \emph{or} \item[(A2)] some ray emanating from $a$ misses $P$. \end{enumerate} Similarly, either \begin{enumerate} \item[(B1)] every ray emanating from $b$ meets $P$, \emph{or} \item[(B2)] some ray emanating from $b$ misses $P$. 
\end{enumerate} If (A1) and (B1) hold, then both $a$ and $b$ see via $T$ two non-adjacent vertices of $P$ (Lemma \ref{lem4.2}). If $n \ge 6$, this implies that $a$ $[b]$ sees a vertex $a'$ $[b']$ of $P$ such that $a',b'$ are at most $\lfloor\frac{n-4}{2}\rfloor = \lfloor\frac{n}{2}\rfloor-2 \le \lceil\frac{n}{2}\rceil-2$ or $\lfloor\frac{n}{4}\rfloor \le \lfloor\frac{n}{2}\rfloor-2 \le \lceil\frac{n}{2}\rceil-2$ edges apart on $P$, as in the proof of part (i) above. If $n=5$, then $a$ sees via $T$ a vertex $a'$ of $P$, and $b$ sees via $T$ a vertex $b'$ of $P$, where $a'$ and $b'$ are either equal or adjacent, i.e., $a',b'$ are at most one edge apart on $P$. But for $n=5$ one has $1 \le \lceil\frac{n}{2}\rceil-2$. If (A2) and (B2) hold, then, due to the compactness of $P$, we can find rays $R_a = \{a + \lambda u : \lambda \ge 0\}$ and $R_b = \{b + \lambda v : \lambda \ge 0\}$ that miss $P$, where the direction vectors $u$ and $v$ are \emph{linearly independent}. When $\lambda$ is sufficiently large, the segment $[a+\lambda u, b + \lambda v]$ misses $P$. Therefore $\pi_T (a,b) \le 3 \left(\le \lceil\frac{n}{2}\rceil\right.$ for $\left.n \ge 5\right)$ if $R_a \cap R_b = \emptyset$, and $\pi_T(a,b) = 2 < 3 \left(\le\lceil\frac{n}{2}\rceil \mbox{ for } n \ge 5 \right)$ if $R_a \cap R_b \not= \emptyset$. If (A1) and (B2) hold, then $a$ sees via $T$ two non-adjacent vertices $a',a''$ of $P$, which divide $P$ into two paths $P_1,P_2$ (with disjoint relative interiors), each of which has $\le n-2$ edges. The point $b$, however, sees two distinct vertices $b',b''$ of $P$, which may be adjacent (Corollary \ref{cor4.1}). If $\{a',a''\} \cap \{b',b''\} \not= \emptyset$, then again $\pi_T(a,b) \le 2 < 3 \left(\le\lceil\frac{n}{2}\rceil\right.$ for $\left.n \ge 5\right)$. If $\{a',a''\} \cap \{b',b''\} = \emptyset$, then $b'$ and $b''$ are interior vertices of $P_1$ or $P_2$, or both. 
If $b'$ and $b''$ belong to different paths, then (as in the proof of part (i) above) they divide $P_1$ and $P_2$ into four polygonal paths, each having $b'$ or $b''$ as one endpoint. The shortest of these paths has at most $\lfloor\frac{n}{4}\rfloor$ edges. But $\lfloor\frac{n}{4}\rfloor \le \lceil\frac{n}{2}\rceil-2$ for $n \ge 5$. If both $b'$ and $b''$ are interior vertices of the same path, say $P_1$, then (as in the proof of part (i) above) they divide $P_1$ into three parts. The two extreme parts together have at most $n-2-1=n-3$ edges. The shorter extreme part, with endpoints (say) $a',b'$, has at most $\lfloor\frac{n-3}{2}\rfloor$ edges. But $\lfloor\frac{n-3}{2}\rfloor = \lfloor\frac{n-1}{2}\rfloor-1 = \lceil\frac{n}{2}\rceil-2$ for all $n \in \bN$. The same applies when (A2) and (B1) hold. This finishes the proof of Theorem 4.2. \hfill \rule{2mm}{2mm} \end{enumerate} This also completes the proof of Theorem 4.1(a). \hfill \rule{2mm}{2mm} \textbf{Proof of Theorem 4.1(b):} We split our examples into two cases, namely even $n$ and odd $n$, $n \ge 3$. \begin{example}\label{ex4.1} \textbf{\boldmath$n=2m$\unboldmath~ (even), \boldmath$m \ge 2$\unboldmath}. Figure 6 shows the example for the case $m = 3\, (n=6)$. \begin{center} Fig. 6: $m = 3 \, (n=6)$ \end{center} Here we have $\pi_{{\rm int} P} (a,b) = m \,(=3) = \lfloor\frac{n}{2}\rfloor$ and $\pi_{{\rm ext} P} (c,d) = m \,(=3) = \lceil\frac{n}{2}\rceil $. One can extend the figure inward beyond vertex $\# 4$. \end{example} \begin{example}\textbf{\boldmath$n=2m+1$\unboldmath~ (odd), \boldmath$m \ge 1$\unboldmath.}\label{ex4.2} Figure 7 shows the example for the case $m = 3$ $(n=7)$. \begin{center} Fig. 7: $m = 3 \, (n=7)$ \end{center} We have $\pi_{{\rm int} P} (a,b) = m\, (=3) = \lfloor\frac{n}{2}\rfloor$ and $\pi_{{\rm ext} P} (c,d) = m+1\, (=4) = \lceil\frac{n}{2}\rceil$. Again, one can extend the figure inward beyond vertex $\# 4$. \end{example}
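The configuration in Remark \ref{rem1} can also be checked numerically. The following sketch uses hypothetical coordinates (a unit square, with $a$ just below $[p_0,p_1]$ and $b$ just above $[p_2,p_3]$) and a standard orientation-based segment-crossing test; generic position is assumed throughout:

```python
# Remark 1 counterexample: two exterior points of a convex quadrilateral
# that see disjoint sets of vertices (hypothetical coordinates).
P = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # p0, p1, p2, p3
a, b = (0.5, -0.01), (0.5, 1.01)

def ccw(p, q, r):
    # twice the signed area of the triangle (p, q, r)
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

def crosses(p1, p2, p3, p4):
    # proper intersection of segments [p1,p2] and [p3,p4] (generic position)
    return (ccw(p3, p4, p1) > 0) != (ccw(p3, p4, p2) > 0) and \
           (ccw(p1, p2, p3) > 0) != (ccw(p1, p2, p4) > 0)

def sees(x, v, poly):
    # x sees vertex v via the complement of the polygon boundary;
    # shorten [x, v] slightly so the edges incident to v do not register
    w = (x[0] + 0.999*(v[0]-x[0]), x[1] + 0.999*(v[1]-x[1]))
    edges = list(zip(poly, poly[1:] + poly[:1]))
    return not any(crosses(x, w, e1, e2) for e1, e2 in edges)

vis_a = {i for i, v in enumerate(P) if sees(a, v, P)}
vis_b = {i for i, v in enumerate(P) if sees(b, v, P)}
assert vis_a == {0, 1} and vis_b == {2, 3}
assert vis_a & vis_b == set()   # a and b see no common vertex
```

The final assertion confirms that $a$ and $b$ see disjoint pairs of vertices, so they see no common vertex of $P_4$.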
https://arxiv.org/abs/1012.3541
On the polygonal diameter of the interior, resp. exterior, of a simple closed polygon in the plane
We give a tight upper bound on the polygonal diameter of the interior, resp. exterior, of a simple $n$-gon, $n \ge 3$, in the plane as a function of $n$, and describe an $n$-gon $(n \ge 3)$ for which both upper bounds (for the interior and the exterior) are attained \emph{simultaneously}.
https://arxiv.org/abs/1002.2847
A variant of the Johnson-Lindenstrauss lemma for circulant matrices
We continue our study of the Johnson-Lindenstrauss lemma and its connection to circulant matrices started in \cite{HV}. We reduce the bound on $k$ from $k=O(\epsilon^{-2}\log^3n)$ proven there to $k=O(\epsilon^{-2}\log^2n)$. Our technique differs essentially from the one used in \cite{HV}. We employ the discrete Fourier transform and singular value decomposition to deal with the dependency caused by the circulant structure.
\section{Introduction} Let $x^1,\dots,x^n\in {\mathbb R}^d$ be $n$ points in the $d$-dimensional Euclidean space ${\mathbb R}^d$. The classical Johnson-Lindenstrauss lemma states that, for a given $\varepsilon\in(0,\frac 12)$ and a natural number $k=O(\varepsilon^{-2}\log n)$, there exists a linear map $f:{\mathbb R}^d\to{\mathbb R}^k$ such that $$ (1-\varepsilon)||x^j||_2^2\le ||f(x^j)||_2^2\le (1+\varepsilon)||x^j||_2^2 $$ for all $j\in\{1,\dots,n\}.$ Here $||\cdot||_2$ stands for the Euclidean norm in ${\mathbb R}^d$ or ${\mathbb R}^k$, respectively. Furthermore, here and at any time later, the condition $k=O(\varepsilon^{-2}\log n)$ means that there is an absolute constant $C>0$ such that the statement holds for all natural numbers $k$ with $k\ge C\varepsilon^{-2}\log n$. We shall also always assume that $k\le d$; otherwise, the statement becomes trivial. The original proof of this fact was given by Johnson and Lindenstrauss in \cite{JL}. We refer to \cite{DG} for a beautiful and self-contained proof. Since then, the lemma has found many applications, for example in algorithm design. These applications inspired numerous variants and improvements of the Johnson-Lindenstrauss lemma, which try to minimize the computational cost of $f(x)$, the memory used, and the number of random bits used, and to simplify the algorithm to allow an easy implementation. We refer to \cite{IM,A,AC2,AC,M} for details and to \cite{M} for a nice description of the history and the current ``state of the art''. All the known proofs of the Johnson-Lindenstrauss lemma work with random matrices and proceed more or less in the following way. One considers a probability measure ${\mathbb P}$ on some subset ${\mathcal P}$ of all $k\times d$ matrices (i.e. all linear mappings ${\mathbb R}^d\to{\mathbb R}^k$). 
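Empirically, a dense Gaussian random matrix already exhibits the lemma. A minimal Monte Carlo sketch (assuming numpy; the sizes and tolerance below are illustrative choices, not the sharp bounds of the lemma):

```python
import numpy as np

rng = np.random.default_rng(42)
d, n, k, eps = 500, 10, 400, 0.5   # illustrative sizes, not the sharp k = O(eps^-2 log n)

X = rng.normal(size=(n, d))                 # n points in R^d
A = rng.normal(size=(k, d)) / np.sqrt(k)    # i.i.d. N(0, 1/k) entries, so E||Ax||^2 = ||x||^2

# each squared norm is preserved up to the factor (1 +- eps)
ratios = [np.sum((A @ x)**2) / np.sum(x**2) for x in X]
assert all(1 - eps <= r <= 1 + eps for r in ratios)
```

With these generous parameters the assertion holds with overwhelming probability; the point of the constructions discussed below is to achieve the same guarantee with far less randomness and a faster matrix-vector multiplication.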
The proof of the Johnson-Lindenstrauss lemma then emerges by some variant of the following two estimates $$ {\mathbb P}\biggl(f\in{\mathcal P}:||f(x)||_2^2\ge 1+\varepsilon\biggr)<\frac {1}{2n} $$ and $$ {\mathbb P}\biggl(f\in{\mathcal P}:||f(x)||_2^2\le 1-\varepsilon\biggr)<\frac {1}{2n}, $$ which have to be proven for all unit vectors $x\in{\mathbb R}^d$, and a simple union bound over all points $x^j/||x^j||_2, j=1,\dots,n$. Here and later on we assume, without loss of generality, that $x^j\not =0$ for all $j=1,\dots,n.$ The best known construction of $f$ (according to the properties mentioned above) was given by Ailon and Chazelle in \cite{AC2}, with an improvement due to Matou\v{s}ek, cf. \cite{M}. It states that $f$ may be given as a composition of a sparse matrix, a certain random Fourier matrix and a random diagonal matrix. Although it provides a good computational time of $f(x)$ (with high probability $f(x)$ may be computed using $O(d\log d+\min\{d\varepsilon^{-2}\log n,\varepsilon^{-2}\log^3n\})$ operations), it still requires that each coordinate of the $k\times d$ matrix be generated independently. In \cite{HV}, we studied a different construction of $f$, namely the composition of a random circulant matrix with a random diagonal matrix. As multiplication by a circulant matrix may be implemented with the help of the discrete Fourier transform, this construction provides a running time of $O(d\log d)$, requires less randomness (only $2d$ random values, compared to the $kd$ or $(k+1)d$ used earlier) and allows a very simple implementation, as the Fast Fourier Transform is a part of every standard mathematical software package. The main difference between this approach and all the other constructions available in the literature so far is that the components of $f(x)$ are no longer independent random variables. 
Decoupling this dependence, we were able to prove in \cite{HV} the Johnson-Lindenstrauss lemma for the composition of a random circulant matrix and a random diagonal matrix, but only for $k=O(\varepsilon^{-2}\log^3n)$. It is the main aim of this note to improve this bound to $k=O(\varepsilon^{-2}\log^2n)$. This comes substantially closer to the standard bound $k=O(\varepsilon^{-2}\log n)$. Reaching this optimal bound (while keeping control of the constants involved) remains an open problem and a subject of challenging research. Here we use a completely different technique: the discrete Fourier transform and the singular value decomposition of circulant matrices. That is the reason why we found it more instructive to state and prove our variant of the Johnson-Lindenstrauss lemma for complex vectors and Gaussian random variables. As a corollary, we of course obtain a corresponding real version. To state our main result, we first fix some notation. Let \begin{itemize} \item $\varepsilon\in (0,\frac 12)$, \item $n\ge d$ be natural numbers, \item $x^1,\dots,x^n\in{\mathbb C}^d$ be $n$ arbitrary points in ${\mathbb C}^d$, \item $k=O(\varepsilon^{-2}\log^2 n)$ be a natural number smaller than $d$, \item $a=(a_0,\dots,a_{d-1})$ be independent complex Gaussian variables, cf. Definition \ref{dfn1}, \item $\varkappa=(\varkappa_0,\dots,\varkappa_{d-1})$ be independent Bernoulli variables. \end{itemize} We denote by $M_{a,k}$ and $D_\varkappa$ the partial random circulant matrix and the random diagonal matrix, respectively, cf. Definition \ref{dfn2} for details. \begin{thm}\label{thm1} The mapping $f:{\mathbb C}^d\to {\mathbb C}^k$ given by $f(x)=\frac{1}{\sqrt{2k}}M_{a,k}D_\varkappa x$ satisfies $$ (1-\varepsilon)||x^j||_2^2\le ||f(x^j)||_2^2\le (1+\varepsilon)||x^j||_2^2 $$ for all $j\in\{1,\dots,n\}$ with probability at least 2/3. Here $||\cdot||_2$ stands for the $\ell_2$-norm in ${\mathbb C}^d$ or ${\mathbb C}^k$, respectively. 
\end{thm} For reader's convenience, we formulate also a variant of Theorem \ref{thm1}, which deals with real Euclidean spaces. \begin{cor}\label{cor1} Let $\varepsilon\in (0,\frac 12)$, $n\ge d$ be natural numbers, and let $x^1,\dots,x^{n}\in{\mathbb R}^{2d}$ be $n$ arbitrary points in ${\mathbb R}^{2d}$. Let $\alpha_0,\dots,\alpha_{d-1},\beta_0,\dots,\beta_{d-1}$ be $2d$ independent real Gaussian variables and let $\varkappa=(\varkappa_0,\dots,\varkappa_{d-1})$ be independent Bernoulli variables. If $k=O(\varepsilon^{-2}\log^2 n)$ is a natural number, then the mapping $f:{\mathbb R}^{2d}\to {\mathbb R}^{2k}$ given by $$ f(x)=\frac{1}{\sqrt{2k}} \left(\begin{matrix}M_{\alpha,k}&-M_{\beta,k}\\ M_{\beta,k}&M_{\alpha,k}\end{matrix}\right) \left(\begin{matrix}D_{\varkappa}&0\\ 0&D_{\varkappa}\end{matrix}\right)x $$ satisfies $$ (1-\varepsilon)||x^j||_2^2\le ||f(x^j)||_2^2\le (1+\varepsilon)||x^j||_2^2 $$ for all $j\in\{1,\dots,n\}$ with probability at least 2/3. Here $||\cdot||_2$ stands for the $\ell_2$-norm in ${\mathbb R}^{2d}$ or ${\mathbb R}^{2k}$, respectively. \end{cor} The proof follows trivially from Theorem \ref{thm1} by considering complex Gaussian variables $a=(\alpha_0+i\beta_0,\dots,\alpha_{d-1}+i\beta_{d-1})$ and complex vectors $y^j=(x^j_0+ix^j_{d},\dots,x^j_{d-1}+ix^j_{2d-1})\in {\mathbb C}^d$, $j=1,\dots,n$. \section{Used techniques} Let us give a brief overview of techniques used in the proof of Theorem \ref{thm1}. We shall list only those few properties needed in the sequel. \subsection{Discrete Fourier transform} Our main tool in this note is the discrete Fourier transform. If $d$ is a natural number, then the discrete Fourier transform ${\mathcal F}_d:{\mathbb C}^d\to{\mathbb C}^d$ is defined by $$ ({\mathcal F}_d x)(\xi)=\frac{1}{\sqrt{d}}\sum_{u=0}^{d-1}x_u \exp\Bigl(-\frac{2\pi iu\xi}{d}\Bigr). $$ With this normalisation, ${\mathcal F}_d$ is an isomorphism of ${\mathbb C}^d$ onto itself. 
The inverse discrete Fourier transform is given by $$ ({\mathcal F}^{-1}_d x)(\xi)=\frac{1}{\sqrt{d}}\sum_{u=0}^{d-1}x_u \exp\Bigl(\frac{2\pi i u\xi}{d}\Bigr). $$ Observe, that the matrix representation of ${\mathcal F}^{-1}_d$ is the conjugate transpose of the matrix representation of ${\mathcal F}_d$, i.e. ${\mathcal F}^{-1}_d={\mathcal F}_d^*$. \subsection{Circulant matrices} \begin{dfn}\label{dfn1} Let $\alpha$ and $\beta$ be independent real Gaussian random variables with $$ {\mathbb E}\alpha={\mathbb E}\beta=0 \quad\text{and}\quad {\mathbb E}|\alpha|^2={\mathbb E}|\beta|^2=1. $$ Then we call $$ a=\alpha+i\beta $$ \emph{complex Gaussian variable.} \end{dfn} Let us note, that if $a$ is a complex Gaussian variable, then $$ {\mathbb E} a={\mathbb E} \alpha + i {\mathbb E} \beta = 0 \quad\text{and}\quad {\mathbb E} |a|^2 = {\mathbb E} \alpha^2 + {\mathbb E} \beta^2 = 2. $$ \begin{dfn}\label{dfn2} (i) Let $k\le d$ be natural numbers. Let $a=(a_0,\dots,a_{d-1})\in {\mathbb C}^d$ be a fixed complex vector. We denote by $M_{a,k}$ the partial circulant matrix $$M_{a,k}=\left( \begin{matrix} a_0 & a_1 & a_2 & \dots & a_{d-1}\\ a_{d-1} & a_0 & a_1 & \dots & a_{d-2}\\ a_{d-2} & a_{d-1} & a_{0} &\dots & a_{d-3}\\ \vdots & \vdots & \vdots &\ddots & \vdots\\ a_{d-k+1} & a_{d-k+2} & a_{d-k+3} & \dots & a_{d-k} \end{matrix} \right)\in {\mathbb C}^{k\times d}. $$ If $k=d$, we denote by $M_{a}=M_{a,d}$ the full circulant matrix. This notation extends naturally to the case, when $a=(a_0,\dots,a_{d-1})$ are independent complex Gaussian variables. (ii) If $\varkappa=(\varkappa_0,\dots,\varkappa_{d-1})$ are independent Bernoulli variables, we put $$ D_{\varkappa}={\rm diag}(\varkappa):=\left(\begin{matrix} \varkappa_0 & 0 & \dots & 0\\ 0 & \varkappa_1 & \dots & 0\\ \vdots & \vdots &\ddots & \vdots\\ 0 & 0 & \dots & \varkappa_{d-1} \end{matrix}\right)\in{\mathbb R}^{d\times d}. $$ \end{dfn} Of course, $D_\varkappa:{\mathbb C}^d\to{\mathbb C}^d$ is also an isomorphism. 
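The structure in Definition \ref{dfn2} lends itself to a numerical sanity check: multiplication by the full circulant matrix $M_a$ (the case $k=d$) is diagonalised by the discrete Fourier transform, and its singular values are $\sqrt d\,|({\mathcal F}_d a)(\xi)|$. A sketch (assuming numpy):

```python
import numpy as np

d = 8
rng = np.random.default_rng(0)
a = rng.normal(size=d) + 1j * rng.normal(size=d)

# partial circulant matrix of Definition 2: row j is a, cyclically shifted right j times
def circulant_rows(a, k):
    return np.stack([np.roll(a, j) for j in range(k)])

M_a = circulant_rows(a, d)          # full circulant matrix (k = d)

# discrete Fourier transform matrix with the paper's normalisation
xi, u = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
F = np.exp(-2j * np.pi * u * xi / d) / np.sqrt(d)

# M_a = F diag(sqrt(d) F a) F^{-1}, where F^{-1} = F* (conjugate transpose)
D = np.diag(np.sqrt(d) * (F @ a))
assert np.allclose(M_a, F @ D @ F.conj().T)

# the singular values of M_a are sqrt(d) |F a|
sv = np.linalg.svd(M_a, compute_uv=False)
assert np.allclose(np.sort(sv), np.sort(np.sqrt(d) * np.abs(F @ a)))
```

The same identities are what make $f(x)$ computable in $O(d\log d)$ operations via the Fast Fourier Transform.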
The fundamental connection between discrete Fourier transform and circulant matrices is given by \begin{equation}\label{eq:1} M_a={\mathcal F}_d\, {\rm diag}(\sqrt d{\mathcal F}_d a) {\mathcal F}^{-1}_d, \end{equation} which may be verified by direct calculation. Hence every circulant matrix may be diagonalised with the use of a discrete Fourier transform, its inverse and a multiple of the discrete Fourier transform of its first row. \subsection{Singular value decomposition} The last tool needed in the proof is the singular value decomposition. Let $M:{\mathbb C}^d\to {\mathbb C}^k$ be a $k\times d$ complex matrix with $k\le d$. Then there exists a decomposition $$ M=U\Sigma V^*, $$ where $U$ is a $k\times k$ unitary complex matrix, $\Sigma$ is a $k\times k$ diagonal matrix with nonnegative entries on the diagonal, $V$ is a $d\times k$ complex matrix with $k$ orthonormal columns and $V^*$ denotes the conjugate transpose of $V$. Hence $V^*$ has $k$ orthonormal rows. The entries of $\Sigma$ are the singular values of $M$, namely the square roots of the eigenvalues of $MM^*$. If $a=(a_0,\dots,a_{d-1})\in {\mathbb C}^d$ is a complex vector and $M_a$ is the corresponding circulant matrix, then its singular values may be calculated using \eqref{eq:1}. We obtain \begin{align*} M_aM_a^*&={\mathcal F}_d {\rm diag}(\sqrt d{\mathcal F}_d a) {\mathcal F}^{-1}_d [{\mathcal F}_d {\rm diag}(\sqrt d{\mathcal F}_d a) {\mathcal F}_d^{-1}]^* ={\mathcal F}_d {\rm diag}(\sqrt d{\mathcal F}_d a) {\rm diag}(\overline{\sqrt d{\mathcal F}_d a}) {\mathcal F}_d^{-1}\\ &={\mathcal F}_d {\rm diag}(d|{\mathcal F}_d a|^2) {\mathcal F}_d^{-1}. \end{align*} Hence, the singular values of $M_a$ are $\{\sqrt d|({\mathcal F}_d a)(\xi)|\}_{\xi=0}^{d-1}.$ The action of an arbitrary projection onto a vector of independent real Gaussian variables is very well known. It may be described as follows. \begin{lem}\label{lem3} Let $a=(a_0,\dots,a_{d-1})$ be independent real Gaussian variables. 
Let $k\le d$ be a natural number and let $x^1,\dots,x^k$ be mutually orthogonal unit vectors in ${\mathbb R}^d$. Then $$ \{\langle a,x^j\rangle\}_{j=1}^k $$ is equidistributed with a $k$-dimensional vector of independent real Gaussian variables. \end{lem} A direct calculation shows, that Lemma \ref{lem3} holds also for complex vectors $a$ and $x^1,\dots,x^k$. We present the following formulation of this fact. \begin{lem}\label{lem4} Let $a=(a_0,\dots,a_{d-1})$ be independent complex Gaussian variables. Let $W$ be a $k\times d$ matrix with $k$ orthonormal rows. Then $Wa$ is equidistributed with a $k$-dimensional vector of independent complex Gaussian variables. \end{lem} \section{Proof of Theorem \ref{thm1}} We shall need the following statement, which describes the preconditioning role of the diagonal matrix $D_\varkappa$. A similar fact has been used also in \cite{AC2}. Nevertheless, using discrete Fourier transform instead of a Hadamard matrix does not pose any restrictions on the underlying dimension $d$. Without repeating the details, we point out, that we discussed briefly in \cite[Remark 2.5]{HV}, why this preconditioning may not be omitted. \begin{lem}\label{lem1} Let $n\ge d$ be natural numbers and let $x^1,\dots,x^n\in{\mathbb C}^d$ be complex vectors. Let $\varkappa=(\varkappa_0,\dots,\varkappa_{d-1})$ be independent Bernoulli variables. Then there is an absolute constant $C>0$, such that with probability at least $5/6$ \begin{equation}\label{eq:3.1} ||{\mathcal F}_d\, D_{\varkappa}(x^j)||_\infty \le \frac{C\,\sqrt {\log n}}{\sqrt d} \cdot ||x^j||_2 \end{equation} holds for all $j\in\{1,\dots,n\}.$ \end{lem} \begin{proof} Let $x=\alpha+i\beta$ be a unit complex vector in ${\mathbb C}^d$. We put $y=(y_0,\dots,y_{d-1})={\mathcal F}_d \, D_{\varkappa}(x)$. 
Then we may estimate \begin{equation}\label{eq:3.11} {\mathbb P}_\varkappa(|y_l|>s)\le 2{\mathbb P}_\varkappa(\Re y_l>\frac{s}{\sqrt 2})+2{\mathbb P}_\varkappa(\Im y_l>\frac{s}{\sqrt 2}), \quad l=0,\dots, d-1, \end{equation} where $$ \Re y_l=\frac {1}{\sqrt d}\sum_{u=0}^{d-1}\varkappa_u[\alpha_u\cos(2\pi lu/d)+\beta_u\sin(2\pi lu/d)] $$ and $$ \Im y_l=\frac {1}{\sqrt d}\sum_{u=0}^{d-1}\varkappa_u[\beta_u\cos(2\pi lu/d)-\alpha_u\sin(2\pi lu/d)] $$ are the real and the imaginary part of $y_l$, respectively. Using the Markov's inequality and a real parameter $t>0$, which is at our disposal, we may proceed in a standard way: \begin{align*} {\mathbb P}_\varkappa\Bigl(\Re y_l>\frac{s}{\sqrt 2}\Bigr)&= {\mathbb P}_\varkappa\Bigl(\exp(t\Re y_l-\frac{st}{\sqrt 2})>1\Bigr)\\ &\le \exp\Bigl(-\frac{st}{\sqrt 2}\Bigr){\mathbb E}_\varkappa\exp(t\Re y_l)\\ &= \exp\Bigl(-\frac{st}{\sqrt 2}\Bigr) \prod_{u=0}^{d-1}\cosh\Bigl[\frac{t}{\sqrt d}[\alpha_u\cos(2\pi lu/d)+\beta_u\sin(2\pi lu/d)]\Bigr]\\ &\le \exp\Bigl(-\frac{st}{\sqrt 2}\Bigr) \prod_{u=0}^{d-1} \exp\Bigl(\frac{t^2}{2d}[\alpha_u\cos(2\pi lu/d)+\beta_u\sin(2\pi lu/d)]^2\Bigr)\\ &\le \exp\Bigl(-\frac{st}{\sqrt 2}\Bigr) \prod_{u=0}^{d-1} \exp\Bigl(\frac{t^2}{2d}[\alpha_u^2+\beta_u^2]\Bigr)= \exp\Bigl(-\frac{st}{\sqrt 2}+\frac{t^2}{2d}\Bigr). \end{align*} We have used the inequality $\cosh(v)\le \exp(v^2/2)$, which holds for all $v\in{\mathbb R}$, and the inequality between geometric and quadratic means. For the optimal $t=\frac{sd}{\sqrt 2}$, this is equal to $\exp(-\frac{s^2d}{4})$. As the second summand in \eqref{eq:3.11} may be estimated in the same way, we obtain \begin{equation}\label{eq:3.12} {\mathbb P}_\varkappa(|y_l|>s)\le 4 \exp\Bigl(-\frac{s^2d}{4}\Bigr),\quad l=0,\dots, d-1. \end{equation} Choosing $s=O(d^{-1/2}\sqrt{\log n})$ and applying the union bound over all $nd\le n^2$ components of $\{{\mathcal F}_d\, D_{\varkappa}(x^j/||x^j||_2)\}_{j=1}^n$, we obtain the result. 
\end{proof} {\it Proof of Theorem \ref{thm1}} Let us choose a vector $\varkappa=(\varkappa_0,\dots,\varkappa_{d-1})\in\{-1,+1\}^d$, such that \eqref{eq:3.1} holds. According to the Lemma \ref{lem1} this happens with probability at least $5/6$. Let us take $\tilde x=\frac{x^j}{||x^j||_2}$ for any fixed $j=1,\dots,n.$ We show, that there is an absolute constant $c>0$, such that \begin{equation}\label{eq:3.2} {\mathbb P}_a\bigl(||M_{a,k}D_\varkappa \tilde x||_2^2\ge 2(1+\varepsilon) k\bigr)\le \exp\Bigl(-\frac{ck\varepsilon^{2}}{\log n}\Bigr) \end{equation} and \begin{equation}\label{eq:3.3} {\mathbb P}_a\bigl(||M_{a,k}D_\varkappa \tilde x||_2^2\le 2(1-\varepsilon) k\bigr)\le \exp\Bigl(-\frac{ck\varepsilon^{2}}{\log n}\Bigr) \end{equation} holds. From \eqref{eq:3.2} and \eqref{eq:3.3}, Theorem \ref{thm1} follows again by a union bound over all $j= 1,\dots, n.$ Let $y^j=S^j(D_{\varkappa} \tilde x)\in{\mathbb C}^d$, $j=0,\dots,k-1$, where $S$ is the shift operator defined by $$ S:{\mathbb C}^d\to {\mathbb C}^d, \quad S(z_0,\dots,z_{d-1})=(z_1,\dots,z_{d-1},z_0). $$ We denote by $Y$ the $k\times d$ matrix with rows $y^0,\dots,y^{k-1}$. Then it holds \begin{equation*} ||M_{a,k}D_\varkappa \tilde x||_2^2= \sum_{j=0}^{k-1}\bigl|\sum_{u=0}^{d-1}a_{(u-j)\,{\rm mod}\, d\,}\varkappa_u\tilde x_u\bigr|^2 =\sum_{j=0}^{k-1}\bigl| \sum_{u=0}^{d-1} y^j_u a_u\bigr|^2 =||Ya||_2^2. \end{equation*} Let $Y=U\Sigma V^*$ be the singular value decomposition of $Y$. As mentioned above, $b:=V^*a$ is a $k$-dimensional vector of independent complex Gaussian variables. Hence, $$ ||Y a||_2^2=||U\Sigma V^* a||_2^2=||U\Sigma b||_2^2=||\Sigma b||_2^2=\sum_{j=0}^{k-1}\lambda^2_j |b_j|^2, $$ where $\lambda_j, j=0,\dots, k-1$, are the singular values of $Y$. Let us denote $\mu_j=\lambda_j^2$. Then $$ ||\mu||_1=\sum_{j=0}^{k-1}\lambda_j^2=||Y||^2_{F}=k, $$ where $||Y||_F$ is the Frobenius norm of $Y$. 
Moreover, \begin{align}\label{eq:3.4} ||\mu||_\infty&=||\lambda||^2_\infty =\sup_{z\in{\mathbb C}^d, ||z||_2\le 1}||Yz||_2^2\\ &\notag\le \sup_{z\in{\mathbb C}^d, ||z||_2\le 1}||M_{D_\varkappa \tilde x}z||_2^2 = d||{\mathcal F}_d D_\varkappa(\tilde x)||^2_\infty \le C^2\log n, \end{align} where $M_{D_\varkappa \tilde x}$ stands for the $d\times d$ complex circulant matrix with the first row equal to $D_\varkappa \tilde x$. This leads finally also to \begin{equation}\label{eq:3.5} ||\mu||_2\le \sqrt{||\mu||_1\cdot ||\mu||_\infty}\le C\sqrt{k\log n}. \end{equation} Then $$ {\mathbb P}_a\bigl(||Ya||_2^2>2(1+\varepsilon)k\bigr)= {\mathbb P}_b\biggl(\sum_{j=0}^{k-1}\mu_j(|b_j|^2-2)>2\varepsilon k\biggr). $$ We denote $$ Z:=\sum_{j=0}^{k-1}\mu_j(|b_j|^2-2). $$ The complex version of Lemma 1 from Section 4.1 of \cite{LM} (cf. also Lemma 2.2 of \cite{M}) states that \begin{equation}\label{eq:3.8} {\mathbb P}_b(Z\ge 2\sqrt 2||\mu||_2\sqrt t+2||\mu||_\infty t)\le\exp(-t). \end{equation} Using \eqref{eq:3.4} and \eqref{eq:3.5}, we arrive at $$ {\mathbb P}_b(Z\ge 2\sqrt 2 C \sqrt{tk\log n}+2C^2 t\log n)\le\exp(-t). $$ Choosing $t= \frac{c'k\varepsilon^2}{C^2 \log n}$ for $c'>0$ small enough, we get $$ {\mathbb P}_b(Z\ge 2\varepsilon k)\le \exp\Bigl(-\frac{ck\varepsilon^2}{\log n}\Bigr). $$ This finishes the proof of \eqref{eq:3.2}. Let us note, that \eqref{eq:3.3} follows in the same manner with \eqref{eq:3.8} replaced by \begin{equation*} {\mathbb P}_b(Z\le -2\sqrt 2||\mu||_2\sqrt t)\le\exp(-t), \end{equation*} which may be again found in Lemma 1, Section 4.1 of \cite{LM}. \begin{rem} The statement and the proof of Theorem \ref{thm1} do not change, if we replace the partial circulant matrix $M_{a,k}$ with any $k\times d$ submatrix of $M_a$. \end{rem} {\bf Acknowledgement:} The author would like to thank Aicke Hinrichs for valuable comments. 
The author also acknowledges the financial support provided by the FWF project Y 432-N15 START-Preis ``Sparse Approximation and Optimization in High Dimensions''. \begin{thebibliography}{99} \bibitem{A} D.~Achlioptas, Database-friendly random projections: Johnson-Lindenstrauss with binary coins. \emph{J. Comput. Syst. Sci.}, 66(4):671-687, 2003. \bibitem{AC2} N.~Ailon and B.~Chazelle, Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In \emph{Proc. 38th Annual ACM Symposium on Theory of Computing}, 2006. \bibitem{AC} N.~Ailon and B.~Chazelle, The fast Johnson-Lindenstrauss transform and approximate nearest neighbors. \emph{SIAM J. Comput.}, 39(1):302-322, 2009. \bibitem{DG} S.~Dasgupta and A.~Gupta, An elementary proof of a theorem of Johnson and Lindenstrauss. \emph{Random Struct. Algorithms}, 22:60-65, 2003. \bibitem{HV} A.~Hinrichs and J.~Vyb\'iral, Johnson-Lindenstrauss lemma for circulant matrices, submitted, available on {\tt http://arxiv.org/abs/1001.4919}. \bibitem{IM} P.~Indyk and R.~Motwani, Approximate nearest neighbors: Towards removing the curse of dimensionality. In \emph{Proc. 30th Annual ACM Symposium on Theory of Computing}, pp. 604-613, 1998. \bibitem{JL} W.~B.~Johnson and J.~Lindenstrauss, Extensions of Lipschitz mappings into a Hilbert space. \emph{Contemp. Math.}, 26:189-206, 1984. \bibitem{LM} B.~Laurent and P.~Massart, Adaptive estimation of a quadratic functional by model selection. \emph{Ann. Statist.}, 28(5):1302--1338, 2000. \bibitem{M} J.~Matou\v{s}ek, On variants of the Johnson-Lindenstrauss lemma, \emph{Random Struct. Algorithms}, 33(2):142--156, 2008. \end{thebibliography} \end{document}
https://arxiv.org/abs/2002.11015
Parabolic frequency on manifolds
We prove monotonicity of a parabolic frequency on manifolds. This is a parabolic analog of Almgren's frequency function. Remarkably, we get monotonicity on all manifolds and no curvature assumption is needed. When the manifold is Euclidean space and the drift operator is the Ornstein-Uhlenbeck operator, this can be seen to imply Poon's frequency monotonicity for the ordinary heat equation. Monotonicity of frequency is a parabolic analog of the 19th century Hadamard three circles theorem about log convexity of holomorphic functions on $\CC$. From the monotonicity, we get parabolic unique continuation and backward uniqueness.
\section{Introduction} Bounds on growth for functions satisfying a PDE give crucial information and have many consequences. One of the oldest bounds of this type is Hadamard's three circles theorem for holomorphic functions. For elliptic equations, such as the Laplace equation, Almgren proved the monotonicity of a frequency function that measures the rate of growth, \cite{A}. Almgren's frequency played a fundamental role in his regularity results, \cite{A}, and in other areas; see, e.g., \cite{GL}, \cite{Lo}. Almgren's frequency was generalized to the heat equation by Poon, \cite{P}, who proved the monotonicity of a parabolic frequency function. The results of Almgren and Poon rely heavily on the scaling structure of ${\bold R}^n$ (cf. \cite{CM1}) and do not extend globally to general manifolds. Here we prove a very general monotonicity for drift heat equations on any manifold and show that this general monotonicity implies the earlier one. Part of the strength is the simplicity of the argument, combined with the power of the consequences. Suppose that $(M,g)$ is a Riemannian manifold. Let $\phi:M\to {\bold R}$ be a smooth function and define an operator ${\mathcal{L}}_{\phi}$ (drift Laplacian) on vector-valued functions $u: M \to {\bold R}^N$ by \begin{align} \label{e:defLphi} {\mathcal{L}}_{\phi}\,u=\Delta\,u-\langle\nabla u,\nabla \phi\rangle={\text {e}}^{\phi}\,{\text {div}}\,\left({\text {e}}^{-\phi}\,\nabla u\right)\, . \end{align} These operators play an important role in many parabolic problems; see, e.g., \cite{CM2}, \cite{CM3}. The prime example of ${\mathcal{L}}_{\phi}$ is when $M={\bold R}^n$ carries the flat metric, $\phi=\frac{|x|^2}{4}$ and ${\mathcal{L}}_{\frac{|x|^2}{4}}\,u=\Delta\,u-\frac{1}{2}\,\langle \nabla u,x\rangle$ is the Ornstein-Uhlenbeck operator. We let $L^2_{\phi}$ and $W^{1,2}_{\phi}$ be the spaces of square integrable ${\bold R}^N$-valued functions and Sobolev functions with respect to the weight ${\text {e}}^{-\phi}$. 
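In one space dimension, the divergence form in \eqr{e:defLphi} is a one-line symbolic identity; a quick check (assuming sympy):

```python
import sympy as sp

x = sp.symbols("x")
u = sp.Function("u")(x)
phi = sp.Function("phi")(x)

# drift Laplacian in one dimension: L_phi u = u'' - phi' u'
lhs = sp.diff(u, x, 2) - sp.diff(phi, x) * sp.diff(u, x)
# divergence form: e^{phi} (e^{-phi} u')'
rhs = sp.exp(phi) * sp.diff(sp.exp(-phi) * sp.diff(u, x), x)

assert sp.simplify(lhs - rhs) == 0
```

The divergence form is what makes ${\mathcal{L}}_{\phi}$ self-adjoint with respect to the weight ${\text {e}}^{-\phi}$, as used below.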
It follows from \eqr{e:defLphi} that ${\mathcal{L}}_{\phi}$ is self-adjoint on $W^{1,2}_{\phi}$ with respect to the weighted volume \begin{align} \int \langle u\, , {\mathcal{L}}_{\phi}\,v \rangle \,{\text {e}}^{-\phi}=-\int \langle \nabla u,\nabla v\rangle\,{\text {e}}^{-\phi}\, . \end{align} \vskip2mm Suppose that $u:M\times [a,b]\to {\bold R}^N$ is smooth and{\footnote{Some growth assumption is necessary to rule out the classical Tychonoff example.}} $u , u_t \in W^{1,2}_{\phi}$ for each $t\in [a,b]$. Set \begin{align} I(t)&=\int |u|^2\,{\text {e}}^{-\phi}\, ,\\ D(t)&=-\int |\nabla u|^2\,{\text {e}}^{-\phi}=\int \langle u\, , {\mathcal{L}}_{\phi}\,u \rangle \,{\text {e}}^{-\phi}\, ,\label{e:D}\\ U(t)&=\frac{D}{I}\, . \end{align} Observe that with our convention $U$ is always non-positive. The next theorem is a parabolic version of Hadamard's classical three circles theorem{\footnote{The three circles theorem was stated and proven by J.E. Littlewood in 1912, but he stated it as a known theorem. Harald Bohr and Edmund Landau attribute the theorem to Jacques Hadamard in 1896; Hadamard did not publish a proof.}} for holomorphic functions: \begin{Thm} \label{t:hadamard} When $(\partial_t-{\mathcal{L}}_{\phi})\,u=0$, then $(\log I)'(t)=2\,U(t)$ and $\log I(t)$ is convex, so $U'\geq 0$. Moreover, when $U$ is constant, then $u(x,t)={\text {e}}^{U\,t}\,u(x,0)$ and $u(\cdot,0)$ is an eigenfunction of ${\mathcal{L}}_{\phi}$ with eigenvalue $-U$. \end{Thm} Poon, \cite{P}, proved a monotonicity that can be shown (see Section \ref{s:s1}) to follow from the special case of Theorem \ref{t:hadamard} when $M={\bold R}^n$, $N=1$ and $\phi=\frac{|x|^2}{4}$. 
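Theorem \ref{t:hadamard} can be illustrated on the circle with $\phi=0$, where solutions of the heat equation have an explicit Fourier expansion and $I$, $D$, $U$ are computable in closed form. A numeric sketch (assuming numpy; the mode coefficients are arbitrary choices):

```python
import numpy as np

# heat equation on the unit circle (phi = 0): u(x,t) = sum_k c_k e^{-k^2 t} e^{ikx},
# so I(t) = 2 pi sum_k |c_k|^2 e^{-2 k^2 t} and D(t) = -2 pi sum_k k^2 |c_k|^2 e^{-2 k^2 t}
k = np.array([0, 1, 2, 5])
c = np.array([1.0, 0.5, 0.25, 0.1])     # arbitrary mode coefficients

def I(t):
    return 2 * np.pi * np.sum(c**2 * np.exp(-2 * k**2 * t))

def U(t):
    return -2 * np.pi * np.sum(k**2 * c**2 * np.exp(-2 * k**2 * t)) / I(t)

a, b = 0.1, 0.7
mid = 0.5 * (a + b)

# log I is convex: the midpoint value lies below the chord
assert np.log(I(mid)) <= 0.5 * (np.log(I(a)) + np.log(I(b)))
# growth bound following from the monotonicity of U: I(b) >= I(a) e^{2 U(a)(b-a)}
assert I(b) >= I(a) * np.exp(2 * U(a) * (b - a))
```

Here $U(t)$ is a weighted average of the $-k^2$ and increases toward the lowest surviving mode as $t\to\infty$, in line with the equality case of the theorem.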
His monotonicity holds on manifolds with non-negative sectional curvature and parallel Ricci curvature, which are exactly the assumptions needed to generalize Hamilton's work, \cite{H1}, \cite{H2}, from Euclidean space to manifolds.\footnote{See the discussion in \cite{P} after theorem 1.1' on page 522 and the remark on page 530.} In contrast, our monotonicity holds on any manifold, and no curvature assumption is needed. \vskip2mm Theorem \ref{t:hadamard} has the following immediate consequences (recall that $U$ is non-positive): \begin{Cor} \label{c:UniqueCont} If $u:M\times [a,b]\to {\bold R}^N$ and $(\partial_t-{\mathcal{L}}_{\phi})\,u=0$, then \begin{align} \label{e:firstpart} I(b)\geq I(a)\,{\text {e}}^{2\, U(a)\,(b-a)}\, . \end{align} In particular, if $u(\cdot,b)=0$, then $u \equiv 0$. \end{Cor} \begin{proof} By Theorem \ref{t:hadamard} \begin{align} \log I(b)-\log I(a)=\int_a^b(\log I)'(s)\,ds = 2\, \int_a^b U(s) \, ds \geq 2\, U(a)\,(b-a)\, . \end{align} \end{proof} Equation \eqr{e:firstpart} can be thought of as a bound for the vanishing order at $\infty$, whereas the second part is a version of backward uniqueness. The first part implies strong unique continuation at $\infty$. That is, if $u$ vanishes to infinite order at $\infty$, then it vanishes. We say that $u:M\times (a,\infty)\to {\bold R}$ vanishes to infinite order at $\infty$ if $\lim_{t\to\infty}{\text {e}}^{c\,t}\,I(t)=0$ for all constants $c$. \vskip2mm Suppose more generally that $u$ satisfies the equation \begin{align} (\partial_t-{\mathcal{L}}_{\phi}-\lambda)\,u=0\, , \end{align} where $\phi$ is as above and $\lambda=\lambda (t)$ is a function depending on $t$ only. Considering $v(x,t)={\text {e}}^{-\int_a^t \lambda (s)\,ds}\,u(x,t)$, one observes that $v$ satisfies $(\partial_t-{\mathcal{L}}_{\phi})\,v=0$. It follows that our results apply to $v$, and hence we get a monotonicity for $u$. \vskip2mm Our results also hold for more general operators (cf. 
\cite{ESS}, \cite{W}) where \begin{align} \label{e:assumption} |(\partial_t-{\mathcal{L}}_{\phi})\,u|\leq C(t)\,(|u|+|\nabla u|)\, , \end{align} and $C(t)$ is allowed to depend on $t$; see Corollary \ref{t:UniqueCont1} and Corollary \ref{c:secondUniqueCont}. \section*{Acknowledgement} We are grateful to S. Brendle, R. Hamilton and S. Klainerman for discussions. \section{Parabolic frequency on manifolds} \label{s:s1} \begin{proof} (of Theorem \ref{t:hadamard}). Calculating and integrating by parts gives \begin{align} I'(t)&=2\,\int \langle u , \,u_t \rangle \,{\text {e}}^{-\phi}=2\int \langle u\, ,{\mathcal{L}}_{\phi}\,u \rangle \,{\text {e}}^{-\phi}=-2\int |\nabla u|^2\,{\text {e}}^{-\phi}=2\,D(t)\, .\label{e:I'}\\ D'(t)&=-2\,\int \langle \nabla u,\nabla u_t\rangle\,{\text {e}}^{-\phi}=-2\,\int \langle \nabla u,\nabla {\mathcal{L}}_{\phi}\,u\rangle\,{\text {e}}^{-\phi}=2 \int |{\mathcal{L}}_{\phi}\,u|^2\,{\text {e}}^{-\phi}\, .\label{e:D'} \end{align} By \eqr{e:I'} and the definition of $U$ we get \begin{align} \label{e:diffI} (\log I)'(t)=2\,\frac{D(t)}{I(t)}=2\,U(t)\, . \end{align} Therefore, using \eqr{e:I'}, \eqr{e:D'} and \eqr{e:D} \begin{align} \label{e:cs} D'\,I-I'\,D&=\left(2\int |{\mathcal{L}}_{\phi}\,u|^2\,{\text {e}}^{-\phi}\right)\,\left(\int |u|^2\,{\text {e}}^{-\phi}\right)-2\,D^2(t)\notag\\ &=\left(2\int |{\mathcal{L}}_{\phi}\,u|^2\,{\text {e}}^{-\phi}\right)\,\left(\int |u|^2\,{\text {e}}^{-\phi}\right)-2\,\left(\int \langle u, \,{\mathcal{L}}_{\phi}\,u \rangle \,{\text {e}}^{-\phi}\right)^2\geq 0\, . \end{align} Here the inequality follows from the Cauchy-Schwarz inequality. Finally, from this we get \begin{align} \label{e:monoU} U'=\frac{D'\,I-I'\,D}{I^2}\geq 0\, . \end{align} When $U$ is constant, $U'=0$, and we therefore have equality in the Cauchy-Schwarz inequality \eqr{e:cs}. It follows that \begin{align} {\mathcal{L}}_{\phi}\,u=c(t)\,u\, . 
\end{align} Next, to evaluate $c$, we observe that by the second equality in \eqr{e:D} \begin{align} D(t)=c(t)\int |u|^2\,{\text {e}}^{-\phi}=c(t)\,I(t)\, . \end{align} It follows that $c(t)=U$ and ${\mathcal{L}}_{\phi}\,u=U\,u$. If we set \begin{align} v(x,t)={\text {e}}^{-U\,t}\,u (x,t)\, , \end{align} then we have that \begin{align} \partial_tv={\text {e}}^{-U\,t}\,(-U\,u+\partial_tu)={\text {e}}^{-U\,t}\,(-U\,u+{\mathcal{L}}_{\phi}\,u)=0\, . \end{align} From this the second claim follows. \end{proof} There is a natural correspondence on ${\bold R}^n$ between solutions of the ordinary heat equation and solutions of the drift heat equation: Given $u:{\bold R}^n\times (-\infty,0)\to {\bold R}$, define $v(x,t)=u(\sqrt{-t}\,x,t)$, $w(x,s)=v(x,-{\text {e}}^{-s})$ and $t=-{\text {e}}^{-s}$. We have the following: \begin{Lem} \label{l:COV} The function $w:{\bold R}^n \times {\bold R}\to {\bold R}$ defined as above satisfies \begin{align} \label{e:drifteq} (\partial_s-{\mathcal{L}}_{\frac{|x|^2}{4}})\,w (x,s)= {\text {e}}^{-s} \, \left( u_t - \Delta u \right)({\text {e}}^{ - \frac{s}{2}}x, - {\text {e}}^{-s}) \, . \end{align} \end{Lem} \begin{proof} To prove \eqr{e:drifteq}, we use the chain rule to get \begin{align} \partial_tv&=-\frac{1}{2\,\sqrt{-t}}\,\langle \nabla u,x\rangle+u_t \, ,\\ \partial_sw&=-\frac{\sqrt{-t}}{2}\,\langle \nabla u,x\rangle-t\, u_t\, , \label{e:e121} \\ \nabla w&= \sqrt{-t} \,\nabla u\, ,\\ \Delta\,w&=-t\,\Delta\,u\, . \label{e:e123} \end{align} Combining \eqr{e:e121}--\eqr{e:e123} gives \eqr{e:drifteq}. \end{proof} Poon, \cite{P}, considered solutions $u:{\bold R}^n\times (-\infty,0)\to {\bold R}$ to the ordinary heat equation on Euclidean space. He showed a monotonicity that is easily seen to be equivalent to the convexity of $s\mapsto \log H({\text {e}}^{\frac{s}{2}})$, where \begin{align} H(R)=(4\,\pi\,R^2)^{-\frac{n}{2}}\int u^2 (y, -R^2)\,{\text {e}}^{ - \frac{|y|^2}{4\,R^2}}\,dy 
\end{align} The convexity of $ \log H({\text {e}}^{\frac{s}{2}})$ follows from Theorem \ref{t:hadamard} when $M={\bold R}^n$ and $\phi=\frac{|x|^2}{4}$. To see this, suppose $u_t = \Delta u$, so that $(\partial_s-{\mathcal{L}}_{\frac{|x|^2}{4}})\,w=0$ by Lemma \ref{l:COV}. Using the definition of $I_w(s)$ and making the change of variables $y= {\text {e}}^{ - \frac{s}{2}} \, x$ and $R= {\text {e}}^{-\frac{s}{2}}$ gives \begin{align} I_w(s)=\int u^2({\text {e}}^{-\frac{s}{2}}\,x,-{\text {e}}^{-s})\,{\text {e}}^{-\frac{|x|^2}{4}} \, dx = R^{-n} \, \int u^2(y, - R^2) \, {\text {e}}^{ - \frac{|y|^2}{4R^2}}\, dy = (4\,\pi)^{\frac{n}{2}}\, H ({\text {e}}^{ -\frac{s}{2}}) \, . \end{align} Since neither the multiplicative constant nor the reflection $s\mapsto -s$ affects convexity, the convexity of $\log H({\text {e}}^{ \frac{s}{2}})$ in $s$ follows from Theorem \ref{t:hadamard}. \section{More general operators} \begin{Thm} \label{e:genLem} If $u : M \times [a,b] \to {\bold R}^N$ satisfies \eqr{e:assumption}, then \begin{align} U'&\geq C^2 \,(U-1)\, ,\\ C^2&\geq \left[\log (1-U)\right]'\, .\label{e:consequence} \end{align} \end{Thm} \begin{proof} First, we rewrite $D$ as follows: \begin{align} D = \int \langle u , \, {\mathcal{L}}_{\phi}\, u \rangle \, {\text {e}}^{-\phi} = \int \langle u \, , [ u_t - \frac{1}{2}\,(u_t - {\mathcal{L}}_{\phi}\, u)] \rangle \, {\text {e}}^{-\phi} -\frac{1}{2}\int \langle u , \,(u_t-{\mathcal{L}}_{\phi}\,u) \rangle \,{\text {e}}^{-\phi} \, . \end{align} Differentiating $I(t)$ and rewriting gives \begin{align} \label{e:diffIgen} I'(t)&=2\int \langle u, \,u_t \rangle \,{\text {e}}^{-\phi}=2\int \langle u\, , {\mathcal{L}}_{\phi}\,u \rangle \,{\text {e}}^{-\phi}+2\int \langle u , \,(u_t-{\mathcal{L}}_{\phi}\,u) \rangle \,{\text {e}}^{-\phi}\notag\\ &=2 \int \langle u , \, [ u_t - \frac{1}{2}\,(u_t - {\mathcal{L}}_{\phi} u)] \rangle \, {\text {e}}^{-\phi}+\int \langle u , \,(u_t-{\mathcal{L}}_{\phi}\,u) \rangle \,{\text {e}}^{-\phi}\, . 
\end{align} Hence, \begin{align} \label{e:secondterm} I'(t)\,D(t)=2\,\left(\int \langle u , \, [ u_t - \frac{1}{2}\,(u_t - {\mathcal{L}}_{\phi}\, u) ] \rangle \, {\text {e}}^{-\phi}\right)^2-\frac{1}{2}\,\left(\int \langle u , \,(u_t-{\mathcal{L}}_{\phi}\,u) \rangle \,{\text {e}}^{-\phi}\right)^2\, . \end{align} Differentiating $D(t)$ and integrating by parts gives \begin{align} D'(t)=-2\,\int \langle \nabla u,\nabla u_t\rangle\,{\text {e}}^{-\phi} &= 2\int \langle u_t , \, {\mathcal{L}}_{\phi}\, u \rangle \, {\text {e}}^{-\phi} = 2\int \langle u_t , \, ( u_t - [u_t - {\mathcal{L}}_{\phi}\, u]) \rangle\, {\text {e}}^{-\phi} \notag \\ &= 2 \int \left\{ |u_t - \frac{1}{2} \, [ u_t - {\mathcal{L}}_{\phi}\, u]|^2 - \frac{1}{4} \, | u_t - {\mathcal{L}}_{\phi}\, u|^2 \right\} \, {\text {e}}^{-\phi} \, . \end{align} So \begin{align} D'(t)\,I(t)= 2\, I(t) \, \int |u_t - \frac{1}{2} \, [ u_t - {\mathcal{L}}_{\phi}\, u]|^2\, {\text {e}}^{-\phi} - \frac{I(t)}{2} \,\int | u_t - {\mathcal{L}}_{\phi}\, u|^2 \, {\text {e}}^{-\phi}\, . 
\label{e:firstterm} \end{align} Combining \eqr{e:secondterm} and \eqr{e:firstterm} and using the Cauchy-Schwarz inequality, \eqr{e:assumption} and the elementary inequality $(a+b)^2\leq 2\,(a^2+b^2)$ gives \begin{align} \label{e:keyinequality} D'\,I-I'\,D&= 2 \,\left[ \int |u|^2 \, {\text {e}}^{-\phi} \, \int |u_t - \frac{1}{2} \, [ u_t - {\mathcal{L}}_{\phi}\, u] |^2 \, {\text {e}}^{-\phi}-\left(\int \langle u , \, [ u_t - \frac{1}{2}\,(u_t - {\mathcal{L}}_{\phi}\, u)] \rangle \, {\text {e}}^{-\phi}\right)^2\right]\notag\\ &- \frac{I(t)}{2} \int | u_t - {\mathcal{L}}_{\phi}\, u|^2 \, {\text {e}}^{-\phi}+\frac{1}{2}\,\left(\int \langle u , \,(u_t-{\mathcal{L}}_{\phi}\,u) \rangle \,{\text {e}}^{-\phi}\right)^2\\ &\geq - \frac{I(t)}{2}\int |u_t - {\mathcal{L}}_{\phi}\, u|^2 \, {\text {e}}^{-\phi}\geq -\frac{C^2\,I(t)}{2}\int (|u|+|\nabla u|)^2\,{\text {e}}^{-\phi}\notag \geq -C^2 \,I(t)\,(I(t)-D(t))\, .\notag \end{align} Dividing both sides by $I^2(t)$ gives the first claim. The second follows from the first. \end{proof} This leads to the following generalization of Corollary \ref{c:UniqueCont}: \begin{Cor} \label{t:UniqueCont1} If $u:M\times [a,b]\to {\bold R}^N$ satisfies \eqr{e:assumption}, then \begin{align} I(b)\geq I(a)\,\exp\,\left((b-a)\, (2 + \sup_{[a,b]} C) \, \left[\exp\, \left(\int_a^b C^2(s)\,ds\right)\,[U(a)-1]+1-\frac{3}{2}\,\sup_{[a,b]}\,C\right]\right)\, . \notag \end{align} In particular, if $u(\cdot,b)=0$, then $u\equiv 0$. \end{Cor} \begin{proof} It follows from \eqr{e:diffIgen}, the Cauchy-Schwarz inequality and the elementary inequality $a\leq \frac{1}{2}\,\left(a^2+1\right)$ applied to $a=\sqrt{-U}$ that \begin{align} \label{e:genlogI'} (\log I)'&\geq 2\,U-\frac{2\,C}{I}\,\int |u|\,(|u|+|\nabla u|)\,{\text {e}}^{-\phi}\geq 2\,U-2\,C\,(1+\sqrt{-U})\notag\\ &\geq \left(2+C\right)\,U-3\,C\, . 
\end{align} From this we get that \begin{align} \label{e:difference} \log I(b)-\log I(a)&=\int_a^b (\log I)'(s)\,ds\notag\\ &\geq \left(2+\sup_{[a,b]}\, C\right)\int_a^b U(s)\,ds-3\,\sup_{[a,b]}\,C\,(b-a)\, . \end{align} From \eqr{e:consequence} we get that for $s\in [a,b]$ \begin{align} \log (1-U(s))\leq \log (1-U(a))+\int_a^s C^2(r)\,dr \leq \log (1-U(a))+\int_a^b C^2(r)\,dr\, . \end{align} Therefore \begin{align} U(s)\geq \exp\, \left(\int_a^b C^2(r)\,dr\right)\,(U(a)-1) +1\, . \end{align} Inserting this lower bound in \eqr{e:difference}, and using that $3\,\sup_{[a,b]} C\leq \frac{3}{2}\,\left(2+\sup_{[a,b]} C\right)\,\sup_{[a,b]} C$, gives \begin{align} \log I(b)-\log I(a) \geq (b-a)\,\left(2 + \sup_{[a,b]} C\right) \left[\exp\, \left( \int_a^b C^2(s)\,ds\right)\,[U(a)-1]+1-\frac{3}{2}\,\sup_{[a,b]}\,C\right]\, . \notag \end{align} \end{proof} Recall that $u:M\times (a,\infty)\to {\bold R}^N$ vanishes to infinite order at $\infty$ if $\lim_{t\to\infty}{\text {e}}^{c\,t}\,I(t)=0$ for all constants $c$. Corollary \ref{t:UniqueCont1} implies the following strong unique continuation at $\infty$: \begin{Cor} \label{c:secondUniqueCont} If $\sup\, C+\int_a^{\infty}C^2(s)\,ds<\infty$ and $u:M\times [a,\infty)\to {\bold R}^N$ satisfies \eqr{e:assumption} and vanishes to infinite order at $\infty$, then $u$ vanishes. \end{Cor} This corollary implies the unique continuation of Poon, \cite{P}, who considered functions $u$ on ${\bold R}^n$ into ${\bold R}$ with \begin{align} u_t - \Delta u = \langle b (x,t) , \nabla u \rangle + c(x,t) \, u \, , \end{align} where $|b| + |c| \leq C$ is uniformly bounded (cf. \cite{L}). We will see that the results here apply more generally to functions $u$ satisfying the differential inequality \begin{align} \left| u_t - \Delta u \right| \leq C \, (|u| + |\nabla u|) \, . 
\end{align} Applying the transformation in Lemma \ref{l:COV} to $u$, we get a function $w(y,s)$ with \begin{align} \left| \left(\partial_s - {\mathcal{L}}_{\frac{|y|^2}{4}}\right) \, w \right| &= {\text {e}}^{-s} \, \left| ( \partial_t - \Delta) \, u \right| \leq C \, {\text {e}}^{-s} \left( |u| + |\nabla u| \right) \notag \\ &\leq C \, {\text {e}}^{-s} |w| + C \, {\text {e}}^{ - \frac{s}{2} } \, |\nabla w| \, . \end{align} Since $\int_0^{\infty}{\text {e}}^{- {s}}\,ds<\infty$, Corollary \ref{c:secondUniqueCont} applies. Exponential decay of order $c$, i.e., decay like ${\text {e}}^{-c\,s}$, corresponds to polynomial decay $t^c$ in the transformed variable $t=-{\text {e}}^{-s}$. \subsection{Without $u$ term} In this subsection we assume that $u:M\times [a,b]\to {\bold R}^N$ satisfies \begin{align} \label{e:stronger} |(\partial_t-{\mathcal{L}}_{\phi})\,u|\leq C(t)|\nabla u|\, . \end{align} In this case we get better estimates when $U(a)$ is small. It follows from \eqr{e:keyinequality}, with obvious simplifications in the second to last inequality from using \eqr{e:stronger} in place of \eqr{e:assumption}, that $U'\geq \frac{C^2}{2}\,U$ or, equivalently, \begin{align} [\log (-U)]'\leq \frac{C^2}{2}\, . \end{align} We therefore get that \begin{align} U(s)\geq U(a)\,\exp\,\left(\frac{1}{2}\int_a^s C^2(\tau)\,d\tau\right)\, . \end{align} With similar simplifications in \eqr{e:genlogI'} we get that for $s\in [a,b]$ \begin{align} (\log I)'&\geq 2\,U-C\,\sqrt{-U}\notag\\ &\geq 2\,U(a)\,\exp\,\left(\frac{1}{2}\int_a^b C^2(\tau)\,d\tau\right) -C\,\sqrt{-U(a)}\,\exp\left(\,\frac{1}{4}\int_a^b C^2(\tau)\,d\tau\right)\, . \end{align} Integrating gives \begin{align} I(b)\geq I(a)\,\exp\,\left[(b-a)\,\left\{2\,U(a)\,\exp\,\left(\frac{1}{2}\int_a^b C^2(\tau)\,d\tau\right) -C\,\sqrt{-U(a)}\,\exp\left(\,\frac{1}{4}\int_a^b C^2(\tau)\,d\tau\right)\right\}\right]\, .\notag \end{align}
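As a concrete sanity check of the change of variables in Lemma \ref{l:COV} (illustrative only, in dimension $n=1$), one can take the caloric polynomial $u(x,t)=x^2+2t$, which solves the heat equation, and verify numerically that the transformed function $w$ solves the drift heat equation:

```python
import math

# Numerical check (n = 1) of the change of variables in Lemma l:COV,
# applied to the caloric polynomial u(x,t) = x^2 + 2t (so u_t = u_xx).
def u(x, t):
    return x * x + 2.0 * t

def w(x, s):
    # w(x,s) = u(sqrt(-t) x, t) with t = -e^{-s}
    t = -math.exp(-s)
    return u(math.sqrt(-t) * x, t)

def drift_heat_residual(x, s, h=1e-4):
    # (d/ds - L_{x^2/4}) w = w_s - w_xx + (x/2) w_x, via central differences
    ws = (w(x, s + h) - w(x, s - h)) / (2.0 * h)
    wx = (w(x + h, s) - w(x - h, s)) / (2.0 * h)
    wxx = (w(x + h, s) - 2.0 * w(x, s) + w(x - h, s)) / (h * h)
    return ws - wxx + (x / 2.0) * wx

# the right-hand side of (e:drifteq) vanishes since u_t - Delta u = 0
for x, s in [(0.0, 0.0), (1.3, -0.7), (-2.0, 1.5)]:
    assert abs(drift_heat_residual(x, s)) < 1e-5
```

Any other caloric function would do: the residual computed here is the left-hand side of \eqr{e:drifteq}, which vanishes exactly when $u$ is caloric.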
{ "timestamp": "2020-02-26T02:18:04", "yymm": "2002", "arxiv_id": "2002.11015", "language": "en", "url": "https://arxiv.org/abs/2002.11015", "abstract": "We prove monotonicity of a parabolic frequency on manifolds. This is a parabolic analog of Almgren's frequency function. Remarkably we get monotonicity on all manifolds and no curvature assumption is needed. When the manifold is Euclidean space and the drift operator is the Ornstein-Uhlenbeck operator this can been seen to imply Poon's frequency monotonicity for the ordinary heat equation. Monotonicity of frequency is a parabolic analog of the 19th century Hadamard three circles theorem about log convexity of holomorphic functions on $\\CC$. From the monotonicity, we get parabolic unique continuation and backward uniqueness.", "subjects": "Differential Geometry (math.DG); Analysis of PDEs (math.AP)", "title": "Parabolic frequency on manifolds", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771755256741, "lm_q2_score": 0.822189134878876, "lm_q1q2_score": 0.8113174722636748 }
https://arxiv.org/abs/1806.09953
On the maximum number of odd cycles in graphs without smaller odd cycles
We prove that for each odd integer $k \geq 7$, every graph on $n$ vertices without odd cycles of length less than $k$ contains at most $(n/k)^k$ cycles of length $k$. This generalizes the previous results on the maximum number of pentagons in triangle-free graphs, conjectured by Erdős in 1984, and asymptotically determines the generalized Turán number $\mathrm{ex}(n,C_k,C_{k-2})$ for odd $k$. In contrast to the previous results on the pentagon case, our proof is not computer-assisted.
\section{Introduction} In 1984, Erd\H{o}s \cite{Erdos} conjectured that every triangle-free graph on $n$ vertices contains at most $(n/5)^5$ cycles of length 5, and that the maximum is attained at the balanced blow-up of a~$C_5$. Gy\"{o}ri \cite{Gyori} proved an upper bound within a~factor of 1.03 of the optimal. Using the flag algebra method, Grzesik \cite{Grzesik} and independently Hatami, Hladk\'{y}, Kr\'{a}l', Norine, and Razborov \cite{HHKNR} proved that any triangle-free graph on $n$ vertices has at most $(n/5)^5$ copies of $C_5$, which is a~tight bound for $n$ divisible by $5$. Michael \cite{Michael} found a sporadic counterexample to the characterization of the extremal cases: a graph on $8$ vertices showing that a balanced blow-up of a $C_5$ is not the only graph that can achieve the maximum. Recently, Lidick\'{y} and Pfender \cite{LP}, also using flag algebras, completely determined the extremal graphs for every $n$ by showing that the graph pointed out by Michael is the only extremal graph which is not a balanced blow-up of a pentagon. Here, we prove the generalization of the above results by showing the following theorem. \begin{thm}\label{thm:main} For each odd integer $k\ge7$, any graph on $n$ vertices without odd cycles of length less than $k$ contains at most $(n/k)^k$ cycles of length $k$. Moreover, the balanced blow-up of a $k$-cycle is the only graph attaining this maximum. \end{thm} It is worth mentioning that, in contrast to the previous results on the pentagon case, our proof does not use flag algebras and is not computer-assisted. Estimating the maximum number of edges in an $H$-free graph on $n$ vertices, called the Tur\'an number of $H$ and denoted by $\ex(n,H)$, is one of the most well-studied problems in graph theory. The original Tur\'an Theorem \cite{Turan} solves it for cliques, and the classical Erd\H os–Stone–Simonovits Theorem \cite{ESS} determines the asymptotic behavior of $\ex(n,H)$ for any other non-bipartite graph $H$. 
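As an illustration (not part of the argument), the pentagon case can be checked by brute force on a small instance: the balanced blow-up of $C_5$ on $n=10$ vertices is triangle-free and contains exactly $(n/5)^5 = 32$ pentagons. The following Python sketch assumes the blow-up in which vertex $i$ lies in class $i \bmod 5$:

```python
from itertools import permutations

# Brute-force check of the pentagon case: the balanced blow-up of C_5 on
# n = 10 vertices (vertex i in class i mod 5; adjacency between consecutive
# classes) is triangle-free and contains exactly (n/5)^5 = 32 pentagons.
n = 10
adj = [[(u % 5 - v % 5) % 5 in (1, 4) for v in range(n)] for u in range(n)]

def count_c5():
    count = 0
    for tup in permutations(range(n), 5):
        if all(adj[tup[i]][tup[(i + 1) % 5]] for i in range(5)):
            count += 1
    return count // 10  # each pentagon is traversed 2 * 5 = 10 times

assert count_c5() == (n // 5) ** 5  # 32
```

Every pentagon here necessarily uses one vertex from each class, since a closed walk of five $\pm 1$ steps around the cycle of classes must go all the way around; this is the same mechanism that makes blow-ups extremal.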
The remaining bipartite case contains many interesting and longstanding open problems, as well as important results; see, for example, the surveys by F\"uredi and Simonovits \cite{FS} and Sidorenko \cite{Sidorenko} or, in the case of cycles, the survey by Verstra\"ete \cite{Verstraete}. A generalization of the Tur\'an number, the maximum possible number of copies of a graph $T$ in any $H$-free graph on $n$ vertices, denoted by $\ex(n,T,H)$, has recently attracted a lot of attention. Some specific cases, including the above-mentioned case of $\ex(n,C_5,C_3)$, were considered earlier, but systematic studies of this problem were initiated by Alon and Shikhelman \cite{AS}. Many results have appeared lately, especially in the case of cycles. In particular, Bollob\'as and Gy\"ori \cite{BG} proved that $\ex(n,C_3,C_5) = \Theta(n^{3/2})$, and Gy\"ori and Li \cite{GL} extended this result to obtain bounds for $\ex(n,C_3,C_{2k+1})$, which were later improved by Alon and Shikhelman \cite{AS} and by F\"uredi and \"Ozkahya \cite{FO}. Recently, Gishboliner and Shapira \cite{GS} determined the correct order of magnitude of $\ex(n,C_k, C_\ell)$ for each $k$ and $\ell$, and independently Gerbner, Gy\"ori, Methuku, and Vizer \cite{GGMV} did so for all even cycles, together with the tight asymptotic value of $\ex(n,C_4, C_{2k})$. By a standard application of the graph removal lemma, our result (Theorem \ref{thm:main}) determines the tight asymptotic value of $\ex(n,C_{k}, C_{k-2})$ for all odd $k$, which was unknown before. \begin{cor} For any odd integer $k\ge7$, $\ex(n,C_{k}, C_{k-2}) = (n/k)^k + o(n^k)$. \end{cor} The considered problem is closely related to the problem of finding the maximum number of induced cycles of a given length. Pippenger and Golumbic \cite{PG} conjectured in 1975 that for each $k\ge5$, any graph on $n$ vertices contains at most $n^k/(k^k-k)$ induced $k$-cycles, and that the extremal graphs are iterated blow-ups of $C_k$. 
This conjecture was confirmed by Balogh, Hu, Lidick\'{y}, and Pfender \cite{BHLP} for $k=5$. In their original paper, Pippenger and Golumbic proved a general bound for each $k\ge5$ within a multiplicative factor of $2e$. This was recently improved to $128e/81$ by Hefetz and Tyomkin \cite{HT} and to $2$ by Kr\'{a}l', Norin, and Volec \cite{KNV}. Our main result is based on the method they developed. \section{Main result} Fix an odd integer $k \geq 7$ and let $G$ be any graph without $C_\ell$ for all odd $\ell$ between $3$ and $k-2$. Since $G$ contains no odd cycles shorter than $k$, each $k$-cycle in $G$ is induced (a chord would split it into two shorter cycles, one of which is odd). For any vertices $v$ and $w$, by $d(v,w)$ we denote the minimum distance between the vertices $v$ and $w$ in $G$. For any $k$-cycle $v_0v_1\ldots v_{k-1}$ contained in $G$, by a \emph{good sequence} we denote a sequence $D = (z_i)_{i=0}^{k-1}$, where $z_i = v_i$ for $i \le 1$ and $i \geq 4$, $z_2 = v_3$, and $z_3 = v_2$, i.e., $v_2$ and $v_3$ appear in reversed order. Note that there are $2k$ different good sequences corresponding to a single induced $k$-cycle. For a~fixed good sequence $D$ we define the following sets: \begin{align*} A_0(D) & = V(G),\\ A_1(D) & = N(z_0),\\ A_2(D) & = \{ w \not\in N(z_0) : d(z_1,w) = 2\},\\ A_3(D) & = N(z_1) \cap N(z_2),\\ A_4(D) & = \{w : z_0z_1z_3z_2w \textrm{ is an induced path}\},\\ A_i(D) & = \{w : z_0z_1z_3z_2z_4 \ldots z_{i-1}w \textrm{ is an induced path}\} \textrm{ for } 5 \leq i \leq k-2,\\ A_{k-1}(D) & = \{w : z_0z_1z_3z_2z_4 \ldots z_{k-2}w \textrm{ is an induced cycle}\}. \end{align*} We then define a~\emph{weight} $w(D)$ of a~good sequence $D$ as \begin{align*} w(D) = \prod_{i=0}^{k-1} \abs{A_i(D)}^{-1}. \end{align*} \begin{clm}\label{claim:sum_of_all_weights} The sum of the weights of all good sequences in $G$ is at most one. 
\end{clm} \begin{proof} We will prove by backward induction on $\ell$ ($-1 \leq \ell \leq k-1$) that for any good sequence $D = (z_0, \ldots, z_{k-1})$ the sum of the weights of all good sequences that start with $z_0, \ldots, z_\ell$ is not bigger than $\prod_{i=0}^\ell \abs{A_i(D)}^{-1}$. For $\ell=k-1$ this follows from the definition of the weight. Assume now that we have proved it for some $\ell \leq k-1$ and we want to estimate the sum of the weights of all good sequences that start with $z_0, \ldots, z_{\ell-1}$. By the induction hypothesis, for any $w$, the sum of the weights of all good sequences starting with $z_0, \ldots, z_{\ell-1}, w$ is at most $\prod_{i=0}^\ell \abs{A_i(D)}^{-1}$ (note that the values $\abs{A_i(D)}$ for $i \leq \ell$ do not depend on the choice of $w$) and since there are at most $\abs{A_\ell(D)}$ reasonable choices of $w$, the induction step follows. \end{proof} Fix a $k$-cycle $v_0v_1\ldots v_{k-1}$ in $G$, let $C=\{v_0,v_1,\ldots, v_{k-1}\}$ be the set of its vertices, and let $D_j = (v_j, v_{j+1}, v_{j+3}, v_{j+2}, v_{j+4}, \ldots, v_{j+k-1})$, for $0 \leq j \leq k-1$, where the indices are considered modulo $k$, be all the good sequences with the same orientation corresponding to this cycle (half of the total number of good sequences corresponding to this cycle). Denote also $n_{i,j} = \abs{A_i(D_j)}$. If we prove that $$\left(2\sum_{j=0}^{k-1} w(D_j)\right)^{-1} \leq M$$ for some number $M$, then $2\sum_{j=0}^{k-1} w(D_j) \geq M^{-1}$. Thus, by summing over all $k$-cycles (with both orientations) and using Claim~\ref{claim:sum_of_all_weights}, we get that the total number of $k$-cycles is upper bounded by $M$. 
Since $$\left(2\sum_{j=0}^{k-1} w(D_j)\right)^{-1} = \left(2\sum_{j=0}^{k-1} \prod_{i=0}^{k-1} n_{i,j}^{-1} \right)^{-1} = n\left(\sum_{j=0}^{k-1} \left(\frac{n_{1,j}}{2}\right)^{-1}\prod_{i=2}^{k-1} n_{i,j}^{-1} \right)^{-1},$$ the maximum possible value of \begin{equation}\label{eq:maximizing_expression} n\left(\sum_{j=0}^{k-1} \left(\frac{n_{1,j}}{2}\right)^{-1}\prod_{i=2}^{k-1} n_{i,j}^{-1} \right)^{-1} \end{equation} is an upper bound on the number of $k$-cycles in $G$. \pagebreak Applying the AM-GM inequality twice, we obtain \begin{eqnarray*} n\left(\sum_{j=0}^{k-1} \left(\frac{n_{1,j}}{2}\right)^{-1}\prod_{i=2}^{k-1} n_{i,j}^{-1} \right)^{-1} &\le& \frac{n}{k}\left(\prod_{j=0}^{k-1} \left(\frac{n_{1,j}}{2}\right)^{-1}\prod_{i=2}^{k-1} n_{i,j}^{-1} \right)^{-\frac{1}{k}} \\ &=& \frac{n}{k}\left(\prod_{j=0}^{k-1} \frac{n_{1,j}}{2}\prod_{i=2}^{k-1} n_{i,j} \right)^{\frac{1}{k}}\\ &\le& \frac{n}{k}\left(\frac{1}{k(k-1)}\sum_{j=0}^{k-1} \left( \frac{n_{1,j}}{2}+\sum_{i=2}^{k-1} n_{i,j} \right)\right)^{k-1}. \end{eqnarray*} \begin{clm}\label{claim:sum_of_contributions} It holds that $$\sum_{j=0}^{k-1} \left( \frac{n_{1,j}}{2} + \sum_{i=2}^{k-1} n_{i,j} \right) \leq n(k-1)$$ with equality if and only if each vertex of $G$ is connected to exactly two vertices of $C$ which are at distance two from each other. \end{clm} \begin{proof} It is enough to prove that the contribution of any vertex $w \in V(G)$ to the above sum is at most $k-1$, and that this maximal contribution can occur only if $w$ is connected to two vertices of $C$ at distance two from each other. Notice that any vertex $w\in V(G)$ has at most 2 neighbors in $C$, since otherwise it creates a shorter odd cycle. For the same reason, each vertex $w$ satisfies the following property. \begin{enumerate}[label=($\star$)] \item There are at most three vertices in $C$ at distance exactly 2 from $w$, and any two such vertices are not adjacent.\label{no_neighbors_in_N2} \end{enumerate} If $w$ has no neighbors in $C$, then for each $j$ it can contribute only to $n_{2,j}$. 
Moreover, if for some $j$ we have $d(w, v_{j}) = 2$, then $d(w, v_{j-1}) > 2$ and $d(w, v_{j+1}) > 2$ by \ref{no_neighbors_in_N2}, and so $w$ does not contribute to $n_{2,j}$ and $n_{2,j-2}$. Therefore such $w$ contributes in total by at most $k-2$. Assume then that $w$ has exactly one neighbor in $C$ --- by symmetry let it be $v_0$. Since $w$ has only one neighbor in $C$, for each $j$ it does not contribute to $n_{3,j}$ and $n_{k-1,j}$. In order to contribute to $n_{i,j}$ for $i \notin \{2, 3, k-1\}$, $w$ needs to be connected to $v_{i+j-1}$, and so it can contribute only to $n_{1,0}$ and $n_{i,k-i+1}$ for $4\le i\le k-2$. Finally, $w$ can contribute to $n_{2,j}$ only if $d(w,v_{j+1}) = 2$ and $w \notin N(v_j)$. By \ref{no_neighbors_in_N2} there are at most three vertices in $C$ at distance 2 from $w$, but one of them is $v_1$ and $w \in N(v_0)$, so $w$ contributes to $\sum_{j=0}^{k-1}n_{2,j}$ by at most 2. It follows that in this case $w$ contributes to the considered sum in total by at most $k-3 + \frac 12$. Finally, assume that $w$ has exactly two neighbors in $C$. These neighbors have to be at distance 2 in $C$, as otherwise we obtain an odd cycle of length shorter than $k$. By symmetry let $v_{k-1}$ and $v_1$ be the neighbors of $w$. Then $d(w, v_i) = 2$ for $i = k-2, 0, 2$, and there are no more $i$ with this property by \ref{no_neighbors_in_N2}. Therefore, $w$ contributes only to $n_{1,k-1}$, $n_{1,1}$, $n_{2,k-3}$, $n_{3,k-2}$ and $n_{i,k-i}$ for $4\le i\le k-1$, hence $w$ contributes to the considered sum in total by $k-1$. \end{proof} Using the above Claim, we immediately get that the expression \eqref{eq:maximizing_expression} is at most $(n/k)^k$, and hence the total number of $k$-cycles in $G$ is at most $(n/k)^k$, as desired. If a graph $G$ is achieving this bound, then $n$ needs to be divisible by $k$ and we need to have equalities in all the inequalities we considered. 
In particular, for each $k$-cycle, all the other vertices of $G$ need to be connected with exactly two vertices of the cycle, which are at distance~2 (as in the blow-up of a $k$-cycle). Since we used the AM-GM inequality, all the blobs need to have the same size. Thus, one can easily deduce that the only graph attaining the maximum is the balanced blow-up of a $k$-cycle. In the case of $n$ not divisible by $k$, in order to prove an exact bound on the number of $k$-cycles this way, one cannot use the AM-GM inequalities, but has to bound \eqref{eq:maximizing_expression} using Claim~\ref{claim:sum_of_contributions} in a bit more sophisticated way. If one tries simply to maximize \eqref{eq:maximizing_expression} over all choices of $n_{i,j} \in \mathbb{N}$ for which the inequality from Claim~\ref{claim:sum_of_contributions} holds, one can get a higher value than for the numbers $n_{i,j}$ corresponding to blow-ups of a \hbox{$k$-cycle}, but such values may not be realizable by any graph. Therefore, using this approach, one would also have to take into consideration other relations between the numbers $n_{i,j}$. Still, if $n$ is big enough in relation to $k$, then the maximum of \eqref{eq:maximizing_expression} needs to be achieved when there is an equality in Claim~\ref{claim:sum_of_contributions}. Thus, for $n$ big enough the only graph achieving the maximum number of \hbox{$k$-cycles} is a blow-up of a $k$-cycle that is as balanced as possible. \section{Concluding remarks and open problems} In our proof, basically the only place where we use that $k$ is an odd number is to say that if a $k$-cycle is not induced (or, more generally, there is a short path in the graph between distant vertices of this cycle), then the graph contains a smaller odd cycle. This is not the case if $k$ is an even number. Moreover, we do not have an analogue of Theorem \ref{thm:main} for even $k$, as forbidding any even cycle prevents having big blow-ups of a~single edge. 
Nevertheless, one can carefully analyze the proof to obtain the following result on \emph{induced} even cycles. \begin{obs} For each even integer $k \geq 8$, any graph on $n$ vertices without induced cycles $C_\ell$ for $\ell = 3$ and $5 \leq \ell \leq k-1$ and without induced $C_6$ with one or two main diagonals contains at most $(n/k)^k$ induced cycles of length $k$. \end{obs} It seems that the same construction (a balanced blow-up of a $k$-cycle) gives the best possible number of induced $k$-cycles also when we only forbid triangles. \begin{con}\label{con:induced_cycles} For each integer $k\ge5$, any triangle-free graph on $n$ vertices contains at most $(n/k)^k$ induced cycles of length $k$. \end{con} In the other direction, if we forbid $C_\ell$ for some odd $\ell$ and try to maximize the number of $C_k$ for some larger odd $k$, it seems that asymptotically it is always best to take a balanced blow-up of an $(\ell+2)$-cycle. \begin{con}\label{con:odd_cycles} For any odd integers $k>\ell\ge3$, it holds that $\ex(n,C_{k}, C_{\ell}) = \left(\binom{k}{\frac{k-(\ell+2)}{2}} + \binom{k}{\frac{k-3(\ell+2)}{2}} + \binom{k}{\frac{k-5(\ell+2)}{2}} + \ldots\right)\frac{n^k}{(\ell+2)^k} + o(n^k)$. \end{con} Using the publicly available software Flagmatic \cite{flagmatic}, one can numerically verify that Conjecture \ref{con:induced_cycles} holds for $k\le8$ and Conjecture \ref{con:odd_cycles} holds for $k\le7$.
{ "timestamp": "2018-10-17T02:03:10", "yymm": "1806", "arxiv_id": "1806.09953", "language": "en", "url": "https://arxiv.org/abs/1806.09953", "abstract": "We prove that for each odd integer $k \\geq 7$, every graph on $n$ vertices without odd cycles of length less than $k$ contains at most $(n/k)^k$ cycles of length $k$. This generalizes the previous results on the maximum number of pentagons in triangle-free graphs, conjectured by Erdős in 1984, and asymptotically determines the generalized Turán number $\\mathrm{ex}(n,C_k,C_{k-2})$ for odd $k$. In contrary to the previous results on the pentagon case, our proof is not computer-assisted.", "subjects": "Combinatorics (math.CO)", "title": "On the maximum number of odd cycles in graphs without smaller odd cycles", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.989347490102537, "lm_q2_score": 0.8198933359135361, "lm_q1q2_score": 0.8111594140378532 }
https://arxiv.org/abs/1407.5311
SB-labelings and posets with each interval homotopy equivalent to a sphere or a ball
We introduce a new class of poset edge labelings for locally finite lattices which we call $SB$-labelings. We prove for finite lattices which admit an $SB$-labeling that each open interval has the homotopy type of a ball or of a sphere of some dimension. Natural examples include the weak order, the Tamari lattice, and the finite distributive lattices.
\section{Introduction} Anders Bj\"orner and Curtis Greene have raised the following question (personal communication of Bj\"orner; see also \cite{Gr} by Greene). \begin{qn} Why are there so many posets with the property that every interval has M\"obius function equalling $0, 1$ or $-1$? Is there a unifying explanation? \end{qn} This paper introduces a new type of edge labeling that a finite lattice may have which we dub an $SB$-labeling. We prove for finite lattices admitting such a labeling that each open interval has order complex that is contractible or is homotopy equivalent to a sphere of some dimension. This immediately yields that the M\"obius functions only takes the values $0,\pm 1$ on all intervals of the lattice. The construction and verification of validity of such labelings seems quite readily achievable on a variety of examples of interest. The name $SB$-labeling was chosen with $S$ and $B$ reflecting the possibility of spheres and balls, respectively. This method will easily yield that each interval in the weak Bruhat order of a finite Coxeter group, in the Tamari lattice, and in any finite distributive lattice is homotopy equivalent to a ball or a sphere of some dimension. In particular, this method may be applied to non-shellable examples, as the weak Bruhat order for finite Coxeter groups will demonstrate. Section ~\ref{disc-morse-bg-section} quickly reviews background that will be needed later in the paper. Section ~\ref{labeling-section} introduces a new class of edge labelings that a finite lattice may have which we call $SB$-labelings. This section gives two different formulations for the definition of $SB$-labeling and shows that the first of these two versions of the definition for $SB$-labeling will imply that each open interval $(u,w)$ in a finite lattice $L$ is homotopy equivalent to a ball or a sphere, with the homotopy type being that of a sphere if and only if $w$ is a join of atoms of the interval. 
Section~\ref{equivalence-section} provides a proof that these two formulations of the definition for $SB$-labeling are equivalent to each other. The value in this may come from the fact that the second formulation is a local condition that appears to be more easily verifiable for families of posets of interest. Section~\ref{application-section} gives applications: it provides $SB$-labelings for the finite distributive lattices, the weak order of any finite Coxeter group, and the Tamari lattices. \section{Background} \label{disc-morse-bg-section} A partially ordered set (poset) $P$ is a {\bf lattice} if each pair of elements $x,y\in P$ has a unique least upper bound, which we denote $x\vee y$, and a unique greatest lower bound, which we denote $x\wedge y$. In particular, we denote by $\hat{0}$ (resp. $\hat{1}$) the unique minimal (resp. maximal) element of a finite lattice. A {\bf cover relation} $u\prec v$ in a poset $P$ is a pair of elements $u < v$ with the further requirement that $u\le z \le v$ implies either $u=z$ or $z=v$. An {\bf open interval} in $P$, denoted $(u,v)$, is the subposet of elements $z$ satisfying $u < z < v$. Likewise, a {\bf closed interval} $[u,v]$ is the subposet consisting of those $z\in P$ such that $u\le z \le v$. We will sometimes refer to the open interval $(\hat{0},\hat{1})$ in a finite lattice $L$ as the {\bf proper part} of $L$. The {\bf M\"obius function}, denoted $\mu$, of a finite partially ordered set $P$ is defined recursively as follows. For each $u\in P$ we have $\mu_P (u,u) = 1$. For each $u < v$, $\mu_P(u,v) = -\sum_{u\le x < v} \mu_P(u,x)$. M\"obius functions provide the coefficients in inclusion-exclusion counting formulas. The {\bf order complex} of a finite poset $P$ is the simplicial complex, denoted $\Delta (P)$, whose $i$-faces are chains $v_0 < \cdots < v_i$ of $i+1$ comparable poset elements.
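The recursive definition of the M\"obius function above lends itself to direct computation. As a hypothetical illustration (not part of the paper), the following Python sketch evaluates $\mu$ on the divisor lattice of $12$, where $\mu(1,n)$ recovers the number-theoretic M\"obius function:

```python
# Hypothetical illustration: the Möbius function of a small lattice,
# computed straight from the recursion mu(u,u) = 1 and
# mu(u,v) = -sum_{u <= x < v} mu(u,x).

# The divisor lattice of 12, ordered by divisibility.
elements = [1, 2, 3, 4, 6, 12]

def leq(x, y):
    """x <= y in the divisor lattice means x divides y."""
    return y % x == 0

def mobius(u, v):
    if u == v:
        return 1
    # Sum over all x with u <= x < v, then negate.
    return -sum(mobius(u, x) for x in elements
                if leq(u, x) and leq(x, v) and x != v)

# mu(1, n) agrees with the number-theoretic Möbius function:
print(mobius(1, 2), mobius(1, 6), mobius(1, 12))   # -1 1 0
```

Here $\mu(1,12)=0$ since $12$ is divisible by a square, matching the claim that the lattices of interest in this paper have all M\"obius values in $\{0,\pm 1\}$.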
It is well-known for each $u<v$ in $P$ that $\mu_P (u,v) = \tilde{\chi }(\Delta (u,v)) $ where $\Delta (u,v)$ denotes the order complex of the open interval $(u,v)$. Sometimes we will speak of the homotopy type of a poset or poset interval, by which we mean the homotopy type of the order complex of that poset or poset interval. Our focus throughout this paper will be on posets in which the order complex of each open interval $(u,v)$ will turn out to be homotopy equivalent to a ball or a sphere, implying that $\tilde{\chi } (\Delta (u,v))$ and hence $\mu_P(u,v)$ equals $0$, $1$, or $-1$ for each pair $u<v$. A key tool underlying our work will be the crosscut theorem, which we review next. Recall from \cite{bjorner-top-methods} (see also \cite{Bj-81}, \cite{Folkman}, \cite{Rota}) that a subset $C$ of a poset $P$ is called a {\bf crosscut} if \begin{enumerate} \item $C$ is an antichain. \item For every finite chain $\sigma $ in $P$ there exists an element of $C$ that is comparable to every element of $\sigma $. \item If $A\subseteq C$ is bounded, i.e.\ has an upper (resp.\ lower) bound, then the join (resp.\ meet) of the elements of $A$ exists as an element of $P$. \end{enumerate} Define the {\bf crosscut complex} given by a crosscut $C$ to be the simplicial complex whose faces are those subsets of $C$ which are bounded. \begin{rk} In a finite lattice $L$ (and hence also in the proper part of $L$), the set of atoms is a crosscut. Our focus in this paper will be on making use of the next theorem with the atoms as the chosen crosscut. \end{rk} \begin{thm}[Crosscut Theorem, Theorem 10.8 in \cite{bjorner-top-methods}]\label{crosscut-theorem} The crosscut complex given by any crosscut of a finite poset $P$ is homotopy equivalent to the order complex of $P$. \end{thm} \begin{rk} If one can prove that distinct sets of atoms have distinct joins, then the crosscut theorem will imply that the subposet of joins of atoms is homotopy equivalent to the entire poset.
We will use this in the special case where our poset is the proper part of a finite lattice. An $SB$-labeling, a new type of edge labeling which we introduce shortly, will guarantee that distinct sets of atoms have distinct joins. \end{rk} \section{A new class of edge labelings: $SB$-labelings}\label{labeling-section} Next we introduce a new class of edge labelings which we call $SB$-labelings. We will call a lattice admitting such a labeling an $SB$-lattice. We will give two different formulations of the definition of $SB$-labeling, and then we will prove that these are equivalent to each other. One formulation will be convenient for proving topological consequences of having an $SB$-labeling. In particular, we use this to prove that each open interval in a finite lattice with an $SB$-labeling is homotopy equivalent to a ball or a sphere. The other formulation seems likely to be more convenient for constructing $SB$-labelings on examples. Later in the paper we will indeed demonstrate that several well-known lattices admit $SB$-labelings, in spite of the fact that some of these lattices cannot possibly be shellable. In particular, we will apply this method to the weak Bruhat order of a finite Coxeter group, the Tamari lattice, and the finite distributive lattices, giving short, uniform proofs of this sort of topological structure in some cases which had already been handled in the past by other methods. \begin{rk} It is natural to ask if this notion for edge labelings may be extended to a more general notion for chain labelings (in the sense of \cite{BW-on-lex}). However, key properties of these $SB$-labelings will in fact rely in an essential way on our usage of edge labelings rather than chain labelings. Therefore, we confine ourselves to considering edge labelings. \end{rk} \begin{defn}\label{lattice-labeling} An edge labeling $\lambda $ of a finite lattice $L$ is a {\bf lower $SB$-labeling} if it may be constructed as follows.
Begin with a label set $S$ such that there is a subset $\{ \lambda_a \mid a\in A(L) \} $ of $S$ whose members are in bijection with the set $A(L)$ of atoms of $L$. \begin{enumerate} \item No two labels upward from $\hat{0}$ to distinct atoms may be equal. This allows us to define the label $\lambda_a $ on each cover relation $\hat{0}\prec a$ to be the label corresponding to that atom $a$. \item The set of labels $\lambda (M)$ occurring with positive multiplicity on any saturated chain $M$ on any interval that can be expressed as $[\hat{0}, a_{i_1}\vee \cdots \vee a_{i_r} ] $ for $ \{ a_{i_1},\dots ,a_{i_r} \} \subseteq A(L)$ is exactly $\{ \lambda_{a_{i_j}} \mid 1\le j\le r \} $. \end{enumerate} When an edge labeling $\lambda $ for a finite lattice $L$ meets these conditions upon restriction to each closed interval of $L$, then we call such a labeling an {\bf $SB$-labeling}. We call a lattice with an $SB$-labeling an {\bf $SB$-lattice}. \end{defn} \begin{rk} Notice that condition (2) above implies that any two distinct sets of atoms must have distinct joins. In particular, this implies that the subposet of joins of atoms is a Boolean algebra. \end{rk} Now we give what we call the ``index 2 formulation of $SB$-labeling'', a type of labeling that we will prove is equivalent to the notion of $SB$-labeling in Theorem~\ref{equivalent-definitions}. In light of Theorem~\ref{equivalent-definitions}, one may henceforth take either definition as a definition of $SB$-labeling. \begin{defn} The {\bf index 2 formulation of $SB$-labeling} is an edge labeling on a finite lattice $L$ satisfying the following conditions for each $u,v,w \in L$ such that $v$ and $w$ both cover $u$: \begin{enumerate} \item $\lambda (u,v)\ne \lambda (u,w)$ \item Each saturated chain on the interval $[u,v\vee w]$ uses both of these labels $\lambda (u,v)$ and $\lambda (u,w)$ a positive number of times.
\item No saturated chain on the interval $[u,v\vee w]$ uses any label other than $\lambda (u,v)$ and $\lambda (u,w)$. \end{enumerate} \end{defn} \begin{thm}\label{equivalent-definitions} An edge labeling on a finite lattice is an $SB$-labeling if and only if it satisfies the index 2 formulation for $SB$-labeling. \end{thm} \begin{proof} Theorem~\ref{cover-enough} proves that the index 2 formulation of $SB$-labeling will always give an $SB$-labeling. On the other hand, if $\lambda $ is an $SB$-labeling, then Condition (1) for $SB$-labelings directly gives Condition (1) in the index 2 formulation for $SB$-labelings. Condition (2) for $SB$-labelings specialized to the case of a join of two atoms yields exactly conditions (2) and (3) of the index 2 formulation of $SB$-labeling. \end{proof} \begin{ex} In the case of the weak Bruhat order of a finite Coxeter group, we will label each cover relation $u\prec s_i u$ with the label $s_i$ and will prove that this labeling is an $SB$-labeling. For instance, when $s_1$ and $s_2$ do not commute, the weak order interval $[\hat{0},s_1s_2]$ has a single saturated chain, namely $\hat{0}\prec s_2\prec s_1s_2$, and it uses the labels $s_1$ and $s_2$. The label $s_2$ corresponds to an atom of the interval while the label $s_1$ does not. \end{ex} \begin{thm}\label{homotopy-type-for-atom-labeled-lattices} Suppose $L$ is a finite lattice admitting a lower $SB$-labeling, and suppose $|L| > 2$. If $v\in L $ is a join of $d$ atoms for some $d>1$, then $(\hat{0} , v )$ has order complex homotopy equivalent to a sphere $S^{d-2}$. Otherwise, $(\hat{0},v)$ has order complex that is contractible. \end{thm} \begin{proof} Each subset of the atoms has a distinct join, by virtue of the fact that the set of labels appearing on the edges of all of the saturated chains upward to a join of atoms is exactly that set of atoms. But this implies that the crosscut complex for $(\hat{0},v)$ given by the atoms is the boundary of a simplex if $v$ is a join of atoms and is the entire simplex otherwise.
In particular, this means that the crosscut complex is homotopy equivalent to a sphere $S^{d-2}$ if $v$ is a join of atoms and is contractible otherwise. Now the Crosscut Theorem (cf.\ Theorem~\ref{crosscut-theorem}) yields the result. \end{proof} \begin{thm} \label{thm:sb} If $L$ is an $SB$-lattice, then each open interval is homotopy equivalent to a ball or a sphere of some dimension. Moreover, $\Delta (u,v)$ is homotopy equivalent to a sphere if and only if $v$ is a join of atoms of the interval, in which case it is a sphere $S^{d-2}$ where $d$ is the number of atoms in the interval. \end{thm} \begin{proof} Since an $SB$-labeling restricts to a lower $SB$-labeling on each closed interval $[u,v]$, this follows by applying Theorem~\ref{homotopy-type-for-atom-labeled-lattices} to each such interval. \end{proof} We conclude this section with some relaxations that may be made in the hypotheses of our main results without changing the conclusions. \begin{rk}\label{locally-finite-variant} In the notion of $SB$-labeling, we may replace the finiteness requirement for our lattices by instead requiring them to be locally finite with a unique minimal element. Our proofs all go through unchanged in such cases, allowing us to call such lattices $SB$-lattices and draw all of the same conclusions. Young's lattice will provide one such example. \end{rk} \begin{defn} Let us say that a finite poset $P$ with unique minimal and maximal elements is an {\bf atom-near-lattice} if each pair of elements $u,v \in P$ with $u<v$ has the property that each collection $S$ of atoms of the closed interval $[u,v]$ has a unique least upper bound $\vee_{a\in S} a$. \end{defn} \begin{rk} It is proven in Lemma 2.1 of \cite{BEZ} that this atom-near-lattice property in fact implies that $P$ is a lattice. This property may be easier to check in examples of interest than the property of being a lattice. Our proofs actually only rely upon this formulation of the lattice property.
\end{rk} \section{Index 2 formulation is equivalent to $SB$-labeling}\label{equivalence-section} This section proves the equivalence of our two different definitions for $SB$-labeling. To this end, we will use the next two notions to prove that every labeling meeting the conditions in the index 2 formulation for $SB$-labeling is an $SB$-labeling. \begin{defn} We say that a pair of saturated chains $M_1, N_1$ in a finite lattice are connected by a {\bf basic move} if $M_1$ and $N_1$ coincide except on an open interval $(u,v)$ where $u\prec x$ in $M_1$ and $u\prec y$ in $N_1$ with $x\ne y$ and with $v = x\vee y$. \end{defn} \begin{ex}\label{braid-special} For example, in any interval $[u,wu]$ in the weak order the basic moves are given by the long and short braid moves on reduced expressions for the Coxeter group element $w$. \end{ex} \begin{defn} Define the {\bf total length} of a closed interval $[u,v]$ to be the sum of the lengths of all the saturated chains in that interval. \end{defn} \begin{lem} \label{sat} Any two saturated chains on an interval $[a,b]$ in a finite lattice $L$ are connected by a series of basic moves. \end{lem} \proof Suppose otherwise. Then choose an interval $[u,v]$ where this fails, making the total length of the interval as small as possible among all such examples. Let $M_1$ and $N_1$ be two saturated chains on $[u,v]$ that are not connected by a series of basic moves. Our minimality assumption on total length ensures for $u\prec u_1$ in $M_1$ and $u\prec v_1$ in $N_1$ that we must have $u_1\ne v_1$. We also may assume $u_1\vee v_1 \ne v$, since otherwise there would be a single basic move connecting $M_1$ to $N_1$ by definition of basic move. Our plan in this case is to give a series of steps $M_1\rightarrow M_2\rightarrow M_3\rightarrow N_1$ which convert $M_1$ to $N_1$ and to show that each of these three steps may be achieved through a series of basic moves, hence that their composition can as well. 
Let $M_2$ be a saturated chain on $[u, v]$ which agrees with $M_1$ except possibly on $(u_1,v)$; $M_2$ is chosen to include $u_1\vee v_1$ (since $v\ne u_1\vee v_1$, we have $u_1\vee v_1 < v$). Since the interval $[u_1, v]$ has strictly smaller total length than $[u,v]$ and $M_1$ agrees with $M_2$ except on this interval, we can conclude there is a series of basic moves converting the restriction of $M_1$ to $[u_1,v ]$ to the restriction of $M_2$ to this same interval, which in turn gives basic moves converting $M_1$ to $M_2$ in $[u,v]$. Now we similarly may convert $M_2$ to a chain $M_3$ which coincides with $M_2$ except on the interval $(u,u_1\vee v_1)$ and which has $u_1$ replaced by $v_1$; this interval also has strictly smaller total length than $[u,v]$, again implying the desired basic moves. Finally, we note that $M_3$ only differs from $N_1$ on the proper part of the interval $[v_1,v]$, which yet again has strictly smaller total length than $[u,v]$, enabling us to find a series of basic moves converting $M_3$ to $N_1$, completing the result. \qed \begin{rk} Lemma~\ref{sat} may be regarded as an abstraction of the idea that the lattice property for the weak Bruhat order of a finite Coxeter group ensures that any two reduced expressions for the same Coxeter group element are connected by a series of long and short braid moves. This implication in the case of the weak order appears as Theorem 3.3.1 in \cite{BB}. \end{rk} \begin{thm}\label{cover-enough} If a finite lattice $L$ has an edge labeling that satisfies the index 2 formulation for an $SB$-labeling, then that labeling is an $SB$-labeling. \end{thm} \begin{proof} Let $\lambda $ be an edge labeling for a finite lattice $L$ which meets the requirements for the index 2 formulation of an $SB$-labeling. We will prove by induction on the number $r$ of atoms that $\lambda $ also meets the requirements to be a lower $SB$-labeling.
In fact, this will imply it is an $SB$-labeling, by applying this argument to any closed interval to show we have a lower $SB$-labeling for each closed interval. The base case with 1 atom is tautologically true. Let us suppose that $\{ a_{i_1},\dots ,a_{i_r} \} $ is the set of atoms of $L$. Now consider the interval $L_{r-1} = [\hat{0},a_{i_1}\vee \cdots \vee a_{i_{r-1}}]$ within $L$. By induction, we may assume that this uses only the labels $\{ a_{i_1},\dots ,a_{i_{r-1}} \} $. We will progressively build from $L_{r-1}$ a larger subposet $L_{r-1,1}$ of $L$ all of whose cover relations are cover relations of $L$ with the further property that it includes an upper bound $m$ for $\{ a_{i_1},\dots ,a_{i_r} \} $. We will deduce from $a_{i_1}\vee \cdots \vee a_{i_r} \le m$ that $[\hat{0},a_{i_1}\vee\cdots\vee a_{i_r} ]$ also uses at most the labels $\{ a_{i_1},\dots ,a_{i_r}\} $. Finally, we will also show that each saturated chain from $\hat{0}$ to $a_{i_1}\vee \cdots \vee a_{i_r}$ in fact uses all of these labels. First we add to $L_{r-1}$ the additional atom $a_{i_r} $ as well as all elements belonging to the closed interval $[\hat{0}, a_{i_1}\vee a_{i_r}]$ to obtain a new poset $L_{r-1}^{(1)}$. By condition (3) in the index 2 formulation of $SB$-labeling, this slightly larger poset still only uses the allowed edge labels, since all of the new cover relations are in the interval $[\hat{0},a_{i_1}\vee a_{i_r}]$ which only uses the labels $a_{i_1}$ and $a_{i_r}$. Now either $a_{i_1}\vee a_{i_r}\in L_{r-1}$, in which case we are done constructing $L_{r-1,1}$, or there are at least two different maximal elements in $L_{r-1}^{(1)}$. For each maximal element $m_i^{(1)} \in L_{r-1}^{(1)}$, let $P_i$ be the subposet of elements $x\in L_{r-1}^{(1)}$ satisfying $x\le m_i^{(1)}$.
Choose $u^{(1)}$ to be an element that is contained in both $P_j$ and $P_k$ for some $j\ne k$ such that there are no elements strictly greater than $u^{(1)}$ also having the property of being contained in some $P_{j'}$ as well as some $P_{k'}$ for $j'\ne k'$; finiteness of $L_{r-1}^{(1)}$ guarantees the existence of such an element $u^{(1)}$. Now consider cover relations $u^{(1)}\prec x_1^{(1)}$ and $u^{(1)}\prec x_2^{(1)}$ in $L_{r-1}^{(1)}$ such that $x_1^{(1)} \le m_j^{(1)}$ and $x_2^{(1)} \le m_k^{(1)}$ in $L_{r-1}^{(1)}$. Obtain from $L_{r-1}^{(1)}$ a strictly larger poset $L_{r-1}^{(2)}$ by adding all elements and cover relations from the interval $[u^{(1)},x_1^{(1)} \vee x_2^{(1)} ]$. Again by condition (3), this cannot introduce any new labels. Again, we either have a unique maximal element or we have two different maximal elements, allowing us to apply this same procedure and do so repeatedly until we have a unique maximal element $m$. Specifically, at the $k$-th iteration of the procedure, the input is a poset $L_{r-1}^{(k-1)}$ having distinct maximal elements. This allows us to find an element $u^{(k)}$ satisfying the same criterion at this step that $u^{(1)}$ satisfied at the first step, now using distinct maximal elements $m_i^{(k-1)}$ and $m_j^{(k-1)}$. This in turn ensures there are elements $x_1^{(k)}$ and $x_2^{(k)}$ with $u^{(k)}\prec x_1^{(k)} \le m_i^{(k-1)}$ and $u^{(k)}\prec x_2^{(k)} \le m_j^{(k-1)}$ such that $x_1^{(k)}$ is not less than or equal to any maximal element other than $m_i^{(k-1)}$ and $x_2^{(k)}$ is not less than or equal to any maximal element other than $m_j^{(k-1)}$. We obtain $L_{r-1}^{(k)}$ as the poset $L_{r-1}^{(k-1)}$ with the additional elements and cover relations from the interval $[u^{(k)},x_1^{(k)} \vee x_2^{(k)}]$ added to it. We iterate this process until it yields a poset $L_{r-1,1}$ with a unique maximal element $m$. This process must terminate within finitely many iterations due to finiteness of our original lattice.
By construction, the unique maximal element $m$ of $L_{r-1,1}$ will be an upper bound for $\{ a_{i_1},\dots ,a_{i_r} \} $, and we will have only used the labels $a_{i_1},\dots ,a_{i_r}$ on the poset $L_{r-1,1}$ obtained by this process. The fact that we only ever insert cover relations from the original lattice implies that each saturated chain in $L_{r-1,1}$ from $\hat{0}$ to $m$ is also a saturated chain in the original lattice $L$. Since $a_{i_1}\vee\cdots \vee a_{i_r} \le m$ in $L$, there is a saturated chain from $\hat{0} $ to $m$ in $L$ which includes the element $a_{i_1}\vee \cdots \vee a_{i_r}$. By Lemma~\ref{same}, this implies that the set of labels on each saturated chain from $\hat{0}$ to $a_{i_1}\vee \cdots \vee a_{i_r}$ must be a subset of the set of labels on a saturated chain from $\hat{0}$ to $m$. Thus, no labels other than $a_{i_1},\dots ,a_{i_r}$ appear on any saturated chain in $[\hat{0},a_{i_1}\vee\cdots \vee a_{i_r}]$. Now let us show that each saturated chain from $\hat{0} $ to $a_{i_1}\vee \cdots \vee a_{i_r}$ uses each of the labels $\{ a_{i_1}, \dots ,a_{i_r}\} $ a positive number of times. The point is that each atom $a_{i_j}$ for $1\le j\le r$ is in some saturated chain in $[\hat{0}, a_{i_1}\vee\cdots \vee a_{i_r}]$, implying that there exists a saturated chain using the label $a_{i_j}$; but this implies that all saturated chains use $a_{i_j}$ for each $1\le j\le r$, by Lemma~\ref{same}. In conclusion, we have shown that each saturated chain uses exactly the set of labels $\{ a_{i_1},\dots ,a_{i_r} \} $, each with positive multiplicity. \end{proof} \begin{cor} \label{joins} If a finite lattice $L$ satisfies the conditions for the index 2 formulation of $SB$-labeling, then for any collection $\{ a_{i_1},\dots ,a_{i_d} \} $ of atoms of $L$, the interval $[\hat{0}, a_{i_1}\vee \cdots \vee a_{i_d} ]$ has no atoms other than $\{ a_{i_1},\dots ,a_{i_d} \} $.
\end{cor} \begin{lem} \label{2a} Let $L$ be a finite lattice with an edge labeling $\lambda $ which satisfies the index 2 formulation for an $SB$-labeling. Then for any $u,v,w\in L $ with $v$ and $w$ both covering $u$, the interval $[u,v\vee w]$ cannot have any atoms other than $v$ and $w$. \end{lem} \proof If there were another atom $x$ in $[u,v\vee w]$, then we must have $\lambda (u,x) \ne \lambda (u,v)$ and $\lambda (u,x) \ne \lambda (u,w)$. We also must have $x\vee v\le v\vee w$ since $x\le v\vee w$ and $v \le v\vee w$. By virtue of $\lambda $ meeting the index 2 formulation for $SB$-labeling, we have that every saturated chain on the interval $[u,x\vee v]$ must use the label $\lambda (u,x)$. But then there will be saturated chains on the interval $[u,v\vee w]$ which also must use the label $\lambda (u,x)$, a contradiction. \qed \medskip \begin{lem} \label{same} If an edge labeling $\lambda $ on a finite lattice $L$ meets the conditions for the index 2 formulation of $SB$-labeling, then this guarantees for each interval $[a,b]$ in $L$ that any two saturated chains $M_1$ and $N_1$ on $[a,b]$ must use the same set of labels each a positive number of times, though not necessarily with the same multiplicities. \end{lem} \proof Lemma \ref{2a} implies that any two saturated chains $M_1$ and $N_1$ on an interval $[a,b]$ in a finite lattice that are connected by a series of basic moves use the same set of labels (though not necessarily with the same multiplicities). Lemma \ref{sat} checks that any two saturated chains $M_1$ and $N_1$ between $a$ and $b$ in a finite lattice are connected by a series of basic moves, so the result follows. \qed \section{Applications}\label{application-section} Now we turn to applications, beginning with finite distributive lattices. In this first example, the $SB$-labeling we give is also a well-known $EL$-labeling, implying the posets are shellable. 
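The distributive lattice labeling used in this section regards each cover relation of $J(P)$ as adding a single element of $P$ to an order ideal, and labels the cover relation by that element. As a hypothetical Python sketch (not part of the paper; the poset $P$ and all names are invented for the example), one can enumerate $J(P)$ for a small poset and compute these labels:

```python
from itertools import combinations

# Hypothetical illustration: J(P) for a small poset P, with each cover
# relation labeled by the element of P added to the order ideal.
# Here P has the single relation a < c, with b incomparable to a and c.
below = {'a': set(), 'b': set(), 'c': {'a'}}   # element -> elements strictly below it

def order_ideals(poset):
    """All down-closed subsets of the poset, as frozensets."""
    elems = list(poset)
    return [frozenset(s)
            for r in range(len(elems) + 1)
            for s in combinations(elems, r)
            if all(poset[x] <= set(s) for x in s)]

J = order_ideals(below)

# Cover relations I1 -> I2 in J(P) add exactly one element;
# that element is the label of the cover relation.
labels = {(I1, I2): next(iter(I2 - I1))
          for I1 in J for I2 in J
          if I1 < I2 and len(I2 - I1) == 1}

print(len(J), len(labels))   # 6 ideals, 7 labeled cover relations
```

In this toy example one can check by hand that the labels on the cover relations upward from any fixed ideal are pairwise distinct, as condition (1) of the index 2 formulation requires.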
The homotopy type of the intervals in finite distributive lattices was determined in \cite{Bjorner} indirectly by virtue of finite distributive lattices also being finite supersolvable lattices, relying on an earlier $R$-labeling given by Stanley in \cite{Stanley} for finite supersolvable lattices. \begin{thm}\label{finite-distributive-lattice} Any finite distributive lattice is an $SB$-lattice. \end{thm} \begin{proof} We will use the fact that any finite distributive lattice $L$ is the poset $J(P)$ of order ideals of a finite poset $P$ ordered by inclusion (cf.\ \cite{ec1}, Theorem 3.4.1). This allows us to regard each cover relation $u\prec v$ as adding to the order ideal associated to $u$ a single element $p\in P$. We use this element $p$ as the label for $u\prec v$. Whenever we have $u\prec v$ and $u\prec w$, then this implies that there are two different elements of $P$, either of which may individually be added to the order ideal given by $u$ to obtain a new order ideal. Therefore, $v\vee w$ covers both $v$ and $w$ with the further property that there cannot be any other elements $z$ with $u < z < v\vee w$. From this, Conditions (1), (2) and (3) for the index 2 formulation for $SB$-labeling follow directly. \end{proof} Recall that Young's lattice is the poset of integer partitions regarded as Young diagrams, with $u\prec v$ whenever $v$ is obtained from $u$ by adding a single box. Since Young's lattice is a locally finite, distributive lattice with a unique minimal element, Theorem~\ref{finite-distributive-lattice} together with Remark~\ref{locally-finite-variant} allows us to conclude: \begin{cor} Young's lattice is an $SB$-lattice. \end{cor} Next we turn to a non-shellable example, the weak Bruhat order on the elements of a finite Coxeter group $W$. Let $S$ be the set of simple reflections generating $W$.
The weak Bruhat order has as its cover relations each $w\prec s_iw$ for $w\in W $ and $s_i\in S$ with $l(w) < l(s_i w)$, letting $l(w)$ denote the Coxeter-theoretic length of $w$. See e.g.\ \cite{BB} or \cite{Hu} for further background on Coxeter groups and on the weak Bruhat order. The homotopy type of each interval was originally determined in \cite{bj} (see also \cite{Ed} and \cite{EW} for related results regarding posets of regions). \begin{thm}\label{weak-theorem} The weak Bruhat order for any finite Coxeter group $W$ is an $SB$-lattice. Moreover, an open interval $(u,w)$ in $W$ is homotopy equivalent to a sphere $S^{d-2}$ if $u^{-1}w$ is the longest element of a parabolic subgroup $W_J$ for some $J\subseteq S$ with $|J|=d$, and $(u,w)$ is contractible otherwise. \end{thm} \begin{proof} See e.g.\ Theorem 3.2.1 in \cite{BB} for a proof that the weak Bruhat order of a finite Coxeter group is a lattice. We label each cover relation $u\prec v$ with the unique simple reflection $s_i$ such that $v = s_iu$. Consider a pair of elements $u < w = s_{i_1}\cdots s_{i_r} u $ where $l(w) - l(u) = r$ for $l(u)$ denoting the Coxeter-theoretic length of $u$. Known results about finite Coxeter groups to be used later in our proof are: (1) the isomorphism of weak order intervals $[u,w] \simeq [e,u^{-1}w]$ given in Proposition 3.1.6 in \cite{BB}, (2) the characterization of the joins of finite sets of atoms in Lemma 3.2.3 in \cite{BB} as exactly those Coxeter group elements which may be regarded as the longest element of the parabolic subgroup generated by exactly the simple reflections corresponding to the given finite set of atoms, and (3) the fact that the longest element $w_0(J)$ of a parabolic subgroup $W_J$ has a reduced expression beginning with any letter of $J$. This third assertion can be seen by noting that multiplying $w_0(J)$ on the left by any simple reflection in $J$ must decrease the length of $w_0(J)$.
The requirements for the index 2 formulation for $SB$-labeling will now follow from the following observation that we justify next: for any two cover relations $u\prec v$ and $u\prec w$, there are unique saturated chains $u\prec v\prec \cdots \prec v\vee w$ and $u\prec w\prec\cdots \prec v\vee w$ on the interval $[u,v\vee w]$ and no other saturated chains on this interval; moreover, these two saturated chains have label sequences $s_i s_j s_i \cdots $ and $s_j s_i s_j \cdots $ each consisting of an alternation of only the letters $s_i$ and $s_j$, with each label sequence having the same length $m(i,j)$ where $m(i,j)$ is the order of the Coxeter group element $s_is_j$. In the case of $u=e$, this observation holds by definition. Otherwise, it follows from the isomorphism in Proposition 3.1.6 of \cite{BB} already mentioned above, completing the proof that the weak Bruhat order of a finite Coxeter group is an $SB$-lattice. Finally, facts (2) and (3) above combine to imply that $w$ will be a join of atoms of the interval $[u,w]$ if and only if $u^{-1}w$ is the longest element of a parabolic subgroup of $W$. Thus, Theorem~\ref{thm:sb} implies that the open interval $(u,w)$ has the homotopy type of a sphere if and only if $u^{-1}w$ is the longest element of a parabolic subgroup of $W$. \end{proof} \begin{rk} We note that facts (2) and (3) in the proof of Theorem~\ref{weak-theorem} also imply that the only labels that may occur on an interval $[u,w]$ in which $w$ is a join of atoms of the interval are labels that occur on cover relations $u\prec x$ upward from $u$ to atoms of the interval. Thus, we could have directly proved that the weak Bruhat order of a finite Coxeter group was an $SB$-lattice without invoking the index 2 formulation of $SB$-labeling. \end{rk} The Tamari lattice, our next example, is a partial order on the binary bracketings of a word with $n$ letters.
Its significance comes in part from the fact that its Hasse diagram is the 1-skeleton of the associahedron, a polytope which goes back to work on homotopy associative $H$-spaces by Stasheff (cf.\ \cite{Stasheff}). The number of elements in the Tamari lattice is a Catalan number. It was proven to be non-pure shellable with each interval having the homotopy type of a ball or a sphere by Bj\"orner and Wachs in \cite{BW}. Earlier results regarding its M\"obius function and implicitly regarding its topological structure also appear in \cite{Pallo}. A cover relation $u\prec v$ in the Tamari lattice results from replacing $((x,y),z)$ by $(x,(y,z))$ somewhere in the parenthesized expression for $u$ to obtain $v$, letting the entities $x,y,z$ either be individual letters or themselves larger bracketed expressions. See e.g.\ \cite{BW}, \cite{HT}, \cite{Lo} for further background. \begin{thm} The Tamari lattice is an $SB$-lattice. \end{thm} \begin{proof} We regard each element as a binary bracketing of the word $1\cdots n$, so this consists of a sequence of $n-1$ left parentheses, $n-1$ right parentheses and $n$ letters being bracketed. The minimal element of the Tamari lattice $L$ is the ``leftmost'' bracketing $((\cdots ((1,2),3)\cdots ),n)$ while the maximal element is the ``rightmost'' bracketing $(1,(2,(\cdots (n-1,n)\cdots )))$. We proceed up a cover relation by changing some triple of consecutive objects (not necessarily single letters) of the form $(a,b)c$ into a new triple $a(b,c)$. This has the impact of moving a single right parenthesis (as well as a single left parenthesis) farther to the right. For our proposed $SB$-labeling, we record at each step the letter to the immediate left of the unique right parenthesis which is moved to the right. Thus, we label the cover relation changing $(a,b)c$ to $a(b,c)$ with the rightmost letter appearing in the expression $b$.
Let us now confirm that this edge labeling satisfies conditions (1), (2) and (3) required for the index 2 formulation for an $SB$-labeling. By construction, there will be at most one allowable way to move a particular right parenthesis to the right via a cover relation, yielding condition (1). To help us confirm (2) and (3), we first introduce a family of operators $u_i$ for $i \in \{ 2,3,\dots ,n-1 \} $. Applying the operator $u_i$ to a binary bracketing with $n$ letters has the impact of moving the rightmost right parenthesis which is in between the $i$-th and $(i+1)$-st letters to the right, acting as needed on its left partner parenthesis; if there is no such right parenthesis in this $i$-th position in $v$ that may be moved to the right by a cover relation, then $u_i(v)$ is formally set to $0$. Thus, $v\prec w$ in the Tamari lattice if and only if there exists an operator $u_i$ with $u_i(v) = w$. In the proof below, we will confirm the following things: \begin{enumerate} \item for every element $v$ in the Tamari lattice and every $i<j$ we have either $u_iu_j(v)=u_ju_i(v)$ or $u_iu_j(v)=u_ju_ju_i(v)$, \item there are no elements in the interval $[v,u_iu_j(v)]$ other than the 4 or 5 elements involved in the aforementioned relation, and \item $u_iu_i(v) = 0$ for each $v\in L$ and each $2\le i < n$. \end{enumerate} More specifically for (1), we will show that we have the relation $u_iu_j(v) = u_ju_ju_i(v)$ for intervals which convert an expression $((a,b)c)d$ to $a(b(c,d))$ and instead we have the relation $u_iu_j(v) = u_ju_i(v)$ for all other pairs of operators $u_i,u_j$. Checking these things will yield conditions (2) and (3) in the index 2 formulation for $SB$-labeling, hence will imply that our labeling is an $SB$-labeling. Now we turn to justifying the first claim enumerated above. First suppose that we are converting a bracketing containing the expression $((a,b)c)d$ to one instead containing $a(b(c,d))$.
When $a,b,c,d$ are single letters, one may check directly that we obtain the relation $u_iu_j(v) = u_ju_ju_i(v)$. Next observe that the labeling was not sensitive to whether $a,b,c,d$ were single letters or more complex expressions themselves. Otherwise, the operators $u_i,u_j$ must commute since neither right parenthesis to be moved to the right has any impact on how the other is moved. This includes the possibility of $u_i(v)=0$ or $u_j(v)=0$, since again the point is that the two operations do not impact each other. Thus, we get $u_iu_j(v) = u_ju_i(v)$ in this case. From the standpoint of trees, each of our operations $u_i$ will involve a pair of non-leaf nodes, one of which is the left child of the other; the operation will shift the right child of the lower non-leaf node to instead being a left child of a non-leaf node which is now a right child of the higher non-leaf node. The two operations given by $u_i$ and $u_j$ will commute unless the associated pairs of non-leaf nodes involved in the operations have at least one of these non-leaf nodes in common. They cannot have both of these non-leaf nodes associated to them in common with each other or else the two operations would by definition be the same operation. The case where the two operations have one of these two non-leaf nodes in common is exactly the case which already yielded the relation $u_iu_j(v) = u_ju_ju_i(v)$. Now to the proof of the second claim above. It follows from the definition of the Tamari lattice that there are no elements in the closed interval $[v, u_i(v) \vee u_j(v)]$ other than the 4 or 5 elements directly involved in this relation. The third claim above also has a simple explanation: the relation $u_iu_i(v)=0$ is immediate from the fact that there is at most one allowable way to move a right parenthesis which is to the immediate right of the $i$-th letter farther to the right in a binary expression to obtain a new binary expression.
\end{proof} \begin{ex} Neither the dominance order on the partitions of an integer $n$ nor its dual poset admits an $SB$-labeling in general. This can be seen by considering the interval downward from the partition $(5,4,3,2,1)$ to the meet of the 4 elements it covers. This example also rules out the more general class of $I$-lattices studied by Greene in \cite{Gr}. Recall that the dominance order was proven to be non-pure shellable with each open interval homotopy equivalent to a ball or a sphere in \cite{BW}. The M\"obius function was determined prior to that in \cite{Bogart}, \cite{Brylawski} and \cite{Gr}. \end{ex} It would be interesting to know of additional examples of finite (or locally finite) lattices with $SB$-labelings. We have not made a comprehensive search for such examples, but rather have chosen to focus in this paper on some well-known families of lattices with the appropriate M\"obius function that seemed to us to be especially interesting classes of posets. \section*{Acknowledgments} The authors are grateful to Georgia Benkart, Stephanie van Willigenburg, Monica Vazirani, and the Banff International Research Station (BIRS) for conducting an inspiring workshop entitled Algebraic Combinatorixx in May 2011 for female researchers in algebraic combinatorics with the goal of helping establish new and fruitful collaborations. This project grew out of discussions that began at that workshop. The authors also thank Louis Billera, Anders Bj\"orner, Curtis Greene, Thomas McConville, Peter McNamara, and Vic Reiner for helpful discussions and references.
https://arxiv.org/abs/1407.5311
SB-labelings and posets with each interval homotopy equivalent to a sphere or a ball
We introduce a new class of poset edge labelings for locally finite lattices which we call $SB$-labelings. We prove for finite lattices which admit an $SB$-labeling that each open interval has the homotopy type of a ball or of a sphere of some dimension. Natural examples include the weak order, the Tamari lattice, and the finite distributive lattices.
https://arxiv.org/abs/0705.4536
Refined bound for sum-free sets in groups of prime order
Improving upon earlier results of Freiman and the present authors, we show that if $p$ is a sufficiently large prime and $A$ is a sum-free subset of the group of order $p$, such that $n:=|A|>0.318p$, then $A$ is contained in a dilation of the interval $[n,p-n]\pmod p$.
\section{Introduction} The subset $A$ of an additively written semigroup is called \emph{sum-free} if there do not exist $a_1,a_2,a_3\in A$ with $a_1+a_2=a_3$; equivalently, if $A$ is disjoint from its \emph{sumset} $A+A:=\{a_1+a_2\colon a_1,a_2\in A\}$. Introduced by Schur in 1916 (``the set of positive integers cannot be partitioned into finitely many sum-free subsets''), sum-free sets have by now become a classical object of study in additive combinatorics; we refer the reader to \cite{b:df,b:l2} and the papers cited there for the history and overview of the subject area. Let $G$ be a finite abelian group. It is easy to see that a randomly chosen ``small'' subset of $G$ is sum-free with high probability, while a randomly chosen ``large'' subset of $G$ with high probability is \emph{not} sum-free. Thus, small sum-free subsets of $G$ can be unstructured, whereas large sum-free subsets possess a rigid structure. Unraveling this structure for various underlying groups $G$ is a fascinating problem which received much attention during the last decade. In the present paper we consider groups of prime order $p$, which we identify with the quotient group $\Z/p\Z$. Let $\phi_p$ denote the canonical homomorphism from ${\mathbb Z}$ onto $\Z/p\Z$, and for a set ${\mathcal S}\subseteq{\mathbb Z}$ let ${\mathcal S}_p$ denote the image of ${\mathcal S}$ under $\phi_p$; here the letter ${\mathcal S}$ will often be substituted by the interval notation so that, for instance, $[3,6)_{11}=\{-8,4,16\}_{11}$ etc. The well-known Cauchy-Davenport inequality implies readily that if $A\subseteq\Z/p\Z$ is sum-free, then $|A|\le\left\lfloor (p+1)/3\right\rfloor$. This estimate is sharp, as for $u=\left\lfloor(p+1)/3\right\rfloor$ the set $[u,2u-1]_p$, and consequently its dilates, are sum-free. The main results of both \refb{l2} and \refb{df} show that in fact for prime $p$, any large sum-free subset of $\Z/p\Z$ is close to a dilate of $(p/3,2p/3)_p$.
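As a quick computational sanity check (ours, not part of the paper's argument), the sharpness example can be verified directly for a few small primes:

```python
def is_sum_free(A, p):
    """Check that no sum a1 + a2 (mod p) with a1, a2 in A lands back in A."""
    S = set(A)
    return all((a1 + a2) % p not in S for a1 in S for a2 in S)

for p in [7, 11, 13, 101, 997]:
    u = (p + 1) // 3
    A = {x % p for x in range(u, 2 * u)}   # the interval [u, 2u-1]_p
    assert len(A) == u                     # attains the bound floor((p+1)/3)
    assert is_sum_free(A, p)
```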
Specifically, it is proved in \refb{l2} (with $\alpha_0=0.33$), and in \refb{df} (with $\alpha_0=0.324$ and $p$ large enough), that if $A$ is a sum-free subset of $\Z/p\Z$ with $n:=|A|>\alpha_0 p$, then $A$ is contained in a dilate of $[n,p-n]_p$. (As shown in \refb{l2}, the interval $[n,p-n]_p$ is best possible in this context.) For an integer $d$ and a subset $A$ of an abelian group let $d\ast A:=\{d a\colon a\in A\}$. The goal of the present paper is to prove \begin{theorem}\label{t:main} Let $p$ be a sufficiently large prime and suppose that $A\subseteq\Z/p\Z$ is sum-free. If $n:=|A|>0.318 p$, then there exists $d\in{\mathbb Z}$ such that $A\subseteq d\ast[n,p-n]_p$. \end{theorem} The seemingly modest improvement of the constant from $0.324$ to $0.318$ requires a substantial effort and a number of new ideas, some at the level of Fourier analysis and others of a combinatorial nature; we believe that these ideas may actually be of more general interest than the improvement of the constant itself. An example, presented in \refb{l3}, shows that the constant in question cannot be reduced to below $0.2$. Though the value $0.318$ is not the precise limit of our method, significantly narrowing the gap between $0.2$ and $0.318$ seems to be a rather non-trivial and exciting problem. \section{Some lemmas}\label{s:lemmas} We gather here several auxiliary results, used in the next section to prove Theorem \reft{main}. It is well-known that if a set is sum-free, then its characteristic function has a large Fourier coefficient. Specifically, let ${\rm e}_p$ denote the character of the group $\Z/p\Z$, defined by ${\rm e}_p(1):=\exp(2\pi i/p)$, and given a set $A\subseteq\Z/p\Z$ and an integer $z$ write ${\widehat A}(z):=\sum_{a\in A} {\rm e}_p(az)$. A standard argument shows that if $A\subseteq\Z/p\Z$ is sum-free with $|A|>\alpha_0p$, then there exists $z\in{\mathbb Z}$ with $\phi_p(z)\neq 0$ such that $|{\widehat A}(z)|>\frac{\alpha_0^2}{1-\alpha_0}\,p$.
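Numerically (our check, purely for orientation), at $\alpha_0=0.318$ this standard bound evaluates to just over $0.148p$, and it can be watched in action on the extremal interval example from the Introduction:

```python
from cmath import exp, pi

# The bound alpha0^2/(1-alpha0) at alpha0 = 0.318 is about 0.1483:
assert 0.148 < 0.318 ** 2 / (1 - 0.318) < 0.149

# Illustration on the sum-free interval A = [u, 2u-1] mod p:
p = 31
u = (p + 1) // 3
A = range(u, 2 * u)
alpha = len(A) / p
A_hat = lambda z: abs(sum(exp(2j * pi * a * z / p) for a in A))
# some nontrivial Fourier coefficient indeed exceeds the bound
assert max(A_hat(z) for z in range(1, p)) > alpha ** 2 / (1 - alpha) * p
```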
For $\alpha_0=0.318$ this leads to $|{\widehat A}(z)|>0.148p$, while the following lemma allows us to get $|{\widehat A}(z)|>0.152p$. \begin{lemma}\label{l:intrick} Let $\varkappa>0$ and $\gamma\in(0,1)$ be real numbers, and write $K:=\left\lfloor \gamma^{-1/\varkappa}\right\rfloor$. Suppose that $P>K$ is an integer and $v_1,\ldots, v_P$ are non-negative real numbers, satisfying $$ \textstyle \sum_{i=1}^P v_i = 1 \quad\text{and}\quad \sum_{i=1}^P v_i^{1+\varkappa} \ge \gamma. $$ Then the equation $Kx^{1+\varkappa}+(1-Kx)^{1+\varkappa}=\gamma$ in the variable $x$ has exactly one solution in the interval $(1/(K+1),1/K]$, and denoting this solution by $X$ we have $$ \max \{ v_1,\ldots, v_P \} \ge X. $$ \end{lemma} \begin{proof} The existence and uniqueness of a solution is an immediate consequence of the intermediate value property: just notice that the function $f(x):=Kx^{1+\varkappa}+(1-Kx)^{1+\varkappa}$ is continuous and increasing on $[1/(K+1),1/K]$, and that $$ f(1/(K+1)) = \frac1{(K+1)^\varkappa} < \gamma \le \frac1{K^\varkappa} = f(1/K). $$ We now prove the second assertion. Let $W$ denote the set of all those real vectors $(w_1,\ldots, w_P)\in{\mathbb R}^P$ with non-negative coordinates, satisfying $$ \textstyle \sum_{i=1}^P w_i = 1 \quad\text{and}\quad \sum_{i=1}^P w_i^{1+\varkappa} \ge \gamma $$ (so that $(v_1,\ldots, v_P)\in W$). Observing that $W$ is compact and $\max\{w_1,\ldots, w_P\}$ is a continuous function on $W$, set $$ \mu := \min \{ \max\{w_1,\ldots, w_P\} \colon (w_1,\ldots, w_P) \in W \} $$ and $$ {\mathcal C} := \{ (w_1,\ldots, w_P) \in W \colon \max\{w_1,\ldots, w_P\} = \mu \}. $$ We notice that if $(w_1,\ldots, w_P)\in{\mathcal C}$, then $$ \gamma \le \sum_{i=1}^P w_i^{1+\varkappa} \le \mu^{\varkappa} \sum_{i=1}^P w_i = \mu^{\varkappa}, $$ whence \begin{equation}\label{e:floormu} \frac1\mu \le \gamma^{-1/\varkappa} < K+1. 
\end{equation} On the other hand, it is readily verified that if $w_i=1/K$ for $i\in[1,K]$ and $w_i=0$ for $i\in[K+1,P]$, then $(w_1,\ldots, w_P)\in W$, and hence $\mu \le 1/K$, implying $1/\mu\ge K$. Comparing this with \refe{floormu} we derive that \begin{equation}\label{e:floor} \left\lfloor 1/\mu \right\rfloor = K. \end{equation} For $\delta\in{\mathbb R}$ and $i,j\in[1,P]$ with $i\neq j$, define the operator $T_{ij}^{(\delta)}\colon{\mathbb R}^P\to{\mathbb R}^P$ by $T_{ij}^{(\delta)}(w_1,\ldots, w_P)=(w_1',\ldots, w_P')$, where $w_i'=w_i+\delta,\ w_j':=w_j-\delta$, and $w_k'=w_k$ for $k\in[1,P]\setminus\{i,j\}$. Observe that if $w_i<w_j$ and $0<\delta<(w_j-w_i)/2$, then $\sum_{i=1}^P (w_i')^{1+\varkappa}<\sum_{i=1}^P w_i^{1+\varkappa}$. Note that if $(w_1,\ldots, w_P)\in{\mathcal C}$, then not all coordinates $w_i$ are equal to each other: else they all would be equal to $1/P$, implying $\gamma\le P\cdot(1/P)^{1+\varkappa}=P^{-\varkappa}$ and hence contradicting $P\ge K+1>\gamma^{-1/\varkappa}$. We claim now that if $(w_1,\ldots, w_P)\in{\mathcal C}$, then equality holds in $\sum_{i=1}^P w_i^{1+\varkappa}\ge\gamma$. Indeed, assuming that $\sum_{i=1}^P w_i^{1+\varkappa}>\gamma$, find $i,j\in[1,P]$ with $w_i<w_j=\mu$ and apply to $(w_1,\ldots, w_P)$ the transformation $T_{ij}^{(\delta)}$ with $\delta\in(0,(w_j-w_i)/2)$ small enough to ensure that the resulting vector $(w_1',\ldots, w_P')$ satisfies $\sum_{i=1}^P (w_i')^{1+\varkappa}>\gamma$. Repeating this procedure sufficiently many times, we eventually find a vector $(u_1,\ldots, u_P)\in W$ with $\max\{u_1,\ldots, u_P\}<\mu$, contradicting the definition of $\mu$. Next, we observe that for any $(w_1,\ldots, w_P)\in{\mathcal C}$ there is at most one index $i\in[1,P]$ such that $0<w_i<\mu$.
For if $0<w_i\le w_j<\mu$, where $i,j\in[1,P]$ are distinct, then, applying to $(w_1,\ldots, w_P)$ the transformation $T_{ij}^{(\delta)}$ with $\delta$ negative and sufficiently small in absolute value, we obtain a vector $(w_1',\ldots, w_P')\in{\mathcal C}$ with $\sum_{i=1}^P (w_i')^{1+\varkappa}>\gamma$; however, we showed above that this is impossible. Fix $(w_1,\ldots, w_P)\in{\mathcal C}$. As it follows from our last observation, there is an integer $k\in[1,P-1]$ such that, re-ordering the coordinates of $(w_1,\ldots, w_P)$, if necessary, we can write $$ \mu = w_1=\dotsb= w_k > w_{k+1} \ge 0 = w_{k+2}=\dotsb= w_P. $$ From $$ k\mu \le \sum_{i=1}^P w_i < (k+1)\mu $$ it follows then that $k\le 1/\mu<k+1$, whence $k=\left\lfloor 1/\mu\right\rfloor=K$ by \refe{floor}, and consequently $w_{k+1}=1-K\mu$. This yields $$ K\mu^{1+\varkappa}+(1-K\mu)^{1+\varkappa}=\gamma, $$ so that in fact $\mu=X$, implying the second assertion of the lemma and indeed, showing that the estimate of the lemma is sharp. \end{proof} As indicated at the beginning of this section, Lemma \refl{intrick} will be used to show that if $A\subseteq\Z/p\Z$ is sum-free with $|A|>0.318p$, then there exists $z\in{\mathbb Z}$ with $\phi_p(z)\neq 0$ and such that $|{\widehat A}(z)|>0.152p$. A well-known result of Freiman leads then to the conclusion that there is an interval of the form $[u,u+p/2)_p$, with an integer $u$, containing at least $(|A|+|{\widehat A}(z)|)/2>0.235p$ elements of the dilation $z\ast A$. Our next lemma, which is a reformulation of \cite[Corollary 2]{b:l1}, allows us to improve this to $0.238p$. \begin{lemma}[\protect{\cite[Corollary 2]{b:l1}}]\label{l:frep} Let $p$ be a positive integer and suppose that $A\subseteq\Z/p\Z$. If $n=|A|$ and $S=\sum_{a\in A} {\rm e}_p(a)$, then there exists an integer $u$ such that $$ |A\cap[u,u+p/2)_p| \ge \frac{n}2\, + \frac{p}{2\pi}\,\arcsin\Big( |S|\sin \frac\pi{p} \Big). 
$$ \end{lemma} For a subset $A$ of an additively written abelian group write $$ A-A := \{ a_1-a_2 \colon a_1,a_2\in A \}. $$ The following lemma follows readily from the results of \refb{gaf}; see also \cite[Theorem~2]{b:ls}. \begin{lemma}\label{l:3n-3} Let $\ell$ and $m$ be positive integers and suppose that $A\subseteq[0,\ell]$ is a set of integers such that $|A|=m,\ 0\in A,\ \ell\in A$, and $\gcd(A)=1$. Then $$ |A-A| \ge \min \{ \ell + m, 3m - 3 \}. $$ \end{lemma} The next two lemmas deal with the structure of the difference set $A-A$ in the case where $A$ is a dense set of integers. \begin{lemma}[\protect{\cite[Lemma 2]{b:l2}}]\label{l:dif2} Let $m$ and $\ell$ be positive integers satisfying $\ell\le 2m-2$, and suppose that $A\subseteq[0,\ell]$ is a set of integers such that $|A|=m$. Then for any integer $k\ge 1$ we have $$ \left( \frac{\ell-m+1}k,\,\frac{m}k \right) \subseteq A-A. $$ \end{lemma} \begin{lemma}[\protect{\cite[Lemma 3]{b:l2}}]\label{l:dif3} Let $m$ and $\ell$ be positive integers and suppose that $A\subseteq[0,\ell]$ is a set of integers such that $|A|=m$. If $\ell<\frac{2k-1}k\,m-1$ with an integer $k\ge 2$, then $$ \left(-\frac m{k-1},\frac m{k-1}\right) \subseteq A-A. $$ \end{lemma} We notice that Lemmas \refl{dif2} and \refl{dif3} remain valid if $A$ is a subset of $\Z/p\Z$ (instead of ${\mathbb Z}$), the condition $A\subseteq[0,\ell]$ is replaced by $A\subseteq[u,u+\ell]_p$ with integers $u$ and $\ell<p$, and the intervals in the conclusions of the lemmas are replaced by their images under $\phi_p$. Similarly, the estimate of Lemma \refl{3n-3} remains valid if $A\subseteq[u,u+\ell]_p$ with integer $u$ and $\ell<p/2$, provided that the set $\phi_p^{-1}(A)\cap[u,u+\ell]$ is not contained in an arithmetic progression of length smaller than $\ell$. The next lemma is a restatement of a particular case of a ${\mathbb Z}/p{\mathbb Z}$-version of \cite[Lemma 3]{b:df}.
\begin{lemma}\label{l:EZ} Let $p$ be a prime and let $1\le\ell<p$ and $u$ be integers. Suppose that $A\subseteq\Z/p\Z$ is sum-free and that $A_0\subseteq[u,u+\ell]_p\cap A$, and write $m:=|A_0|$. If $\ell\le 2m-2$, then for any integer $a\in[\ell/4,\ell/2]$ with $\phi_p(a)\in A$ we have $$ [2a-(2m-\ell-2), 2a+(2m-\ell-2)]_p \cap (A \cup (-A)) = \varnothing. $$ \end{lemma} For the convenience of the reader we provide a proof. \begin{proof}[Proof of Lemma \refl{EZ}] Since $|\{z,z+a\}_p\cap A|\le 1$ for any integer $z$, the set $A_0$ has at most $a$ elements in each of the intervals $[u,u+2a-1]_p$ and $[u+(\ell-2a+1),u+\ell]_p$, and consequently we have \begin{equation}\label{e:parti} |A_0\cap [u+2a,u+\ell]_p|\ge m-a\quad \text{and} \quad |A_0\cap [u,u+(\ell-2a)]_p|\ge m-a. \end{equation} Assuming now that there exists an integer $x\in [2a-(2m-\ell-2),2a+(2m-\ell-2)]$ with $\phi_p(x)\in A\cup(-A)$, we will obtain a contradiction. Suppose first that $x>2a$ and consider in this case the two-element sets $$ \{u,u+x\}_p,\ \{u+1,u+1+x\}_p,\ \ldots,\ \{u+\ell-x,u+\ell\}_p. $$ (We notice that \refe{parti}, along with $a\le \ell/2<m$, implies that $m-a\le \ell-2a+1$, whence $\ell\ge a+m-1$ and consequently, $\ell-x\ge \ell-(2a+(2m-\ell-2))=2(\ell-a-m+1)\ge 0$.) These sets are pairwise disjoint (as $u+\ell-x<u+x$ in view of $2x>4a\ge \ell$) and they all are contained in $[u,u+\ell-2a]_p\cup[u+2a,u+\ell]_p$. Since at most one element out of each of these $\ell-x+1$ sets belongs to $A_0$, we conclude that $$ |[u,u+\ell-2a]_p\setminus A_0| + |[u+2a,u+\ell]_p\setminus A_0| \ge \ell-x+1. $$ Therefore \begin{align*} |[u,u+\ell-2a]_p\cap A_0| + |[u+2a,u+\ell]_p\cap A_0| & \le 2(\ell-2a+1)-(\ell-x+1) \\ & \le \ell-4a + (2a+(2m-\ell-2)) +1 \\ & = 2(m-a) - 1, \end{align*} contradicting \refe{parti}. 
Similarly, if $x<2a$, then we obtain a contradiction with \refe{parti} considering the $\ell-4a+x+1$ sets \begin{multline*} \{u+2a-x,u+2a\}_p,\ \{u+2a-x+1,u+2a+1\}_p, \\ \ldots,\ \{u+\ell-2a,u+\ell-2a+x\}_p \end{multline*} which, again, are pairwise disjoint and contained in $[u,u+\ell-2a]_p\cup[u+2a,u+\ell]_p$. \end{proof} \begin{lemma}\label{l:p-n3} Let $p$ be a prime and suppose that $A\subseteq\Z/p\Z$ is sum-free. Write $n:=|A|$. If $$ [-(p-n+1)/3,(p-n+1)/3]_p \cap A = \varnothing, $$ then $A\subseteq[n,p-n]_p$. \end{lemma} \begin{proof} Set \begin{equation}\label{e:mu} \mu := \min \{ |z| \colon z\in{\mathbb Z},\,\phi_p(z)\in A \}, \end{equation} so that $\mu>(p-n+1)/3$. Clearly, for any $a\in[\mu,2\mu)_p\cap A$ we have $a+\mu\in[2\mu,3\mu)_p\setminus A$, which gives $$ |[\mu,3\mu)_p \cap A| \le \mu. $$ Assuming that $\mu<p/4$ we then get $$ n \le \mu + |[3\mu,p-\mu]_p| \le \mu + (p-4\mu+1) = p - 3\mu + 1 $$ whence $3\mu\le p-n+1$, contradicting the assumptions. We have therefore $\mu>p/4$ and then $3\mu>p-\mu$, implying $$ n = |A\cap[\mu,3\mu)_p| \le \mu; $$ that is, $[0,n-1]_p\cap A=\varnothing$. In a similar way (or applying the argument above to the set $-A:=\{-a\colon a\in A\}$) we obtain $[p-(n-1),p]_p\cap A=\varnothing$. The result follows. \end{proof} \section{Proof of Theorem \reft{main}}\label{s:mainproof} Suppose that $p$ is a prime and $A\subseteq\Z/p\Z$ is a sum-free set with $n:=|A|>0.318p$. The computations below tacitly assume that $p$ is sufficiently large. Recalling the definition of ${\widehat A}(z)$ from the beginning of Section \refs{lemmas}, we start with \begin{claim} There exists an integer $z_0$ with $\phi_p(z_0)\neq 0$ such that $|{\widehat A}(z_0)|>0.152p$. \end{claim} \begin{proof} Let $\alpha:=n/p$, so that $\alpha>0.318$. By the Parseval identity we have \begin{equation}\label{e:parseval} \sum_{z=1}^{p-1} |{\widehat A}(z)|^2 = \alpha(1-\alpha)p^2.
\end{equation} Using the fact that $|{\widehat A}(p-z)|=|{\widehat A}(z)|$ for any $z\in{\mathbb Z}$ and letting $P:=(p-1)/2$ we re-write \refe{parseval} as $$ \sum_{z=1}^P \frac{|{\widehat A}(z)|^2}{\alpha(1-\alpha)p^2/2} = 1. $$ On the other hand, since $A$ is sum-free we have $$ \sum_{z=0}^{p-1} |{\widehat A}(z)|^2 {\widehat A}(z) = 0, $$ whence $$ \sum_{z=1}^{p-1} |{\widehat A}(z)|^3 \ge - \sum_{z=1}^{p-1} |{\widehat A}(z)|^2{\widehat A}(z) = |{\widehat A}(0)|^3 = \alpha^3 p^3 $$ and consequently $$ \sum_{z=1}^P \left( \frac{|{\widehat A}(z)|^2}{\alpha(1-\alpha)p^2/2} \right)^{3/2} \ge \sqrt{2} \left( \frac{\alpha}{1-\alpha} \right)^{3/2} > 0.4502. $$ Applying Lemma \refl{intrick} with $\varkappa=0.5$ and $\gamma=0.4502$ (which leads to $K=4$), we conclude that there exists $z_0\in[1,P]$ with $$ \frac{|{\widehat A}(z_0)|^2}{\alpha(1-\alpha)p^2/2} > 0.2131 $$ and accordingly $$ |{\widehat A}(z_0)| > \sqrt{0.2131\,\alpha(1-\alpha)/2} \; p > 0.152p. $$ \end{proof} Dilating $A$, if necessary, we assume that, in fact, \begin{equation}\label{e:Ahat1} |{\widehat A}(1)| > 0.152p. \end{equation} Choose an integer $u_0$ such that the number of elements of $A$ in $[u_0,u_0+p/2)_p$ is maximized, set $A_0:=A\cap[u_0,u_0+p/2)_p$ and $m:=|A_0|$, and let $B_0:=\phi_p^{-1}(A_0)\cap[u_0,u_0+p/2)$. Furthermore, put $\ell:=\max B_0 - \min B_0$; thus $A_0$ is contained in a block of $\ell+1$ consecutive elements of $\Z/p\Z$ and $$ m = \max \{ |A\cap[u,u+p/2)_p| \colon u\in{\mathbb Z} \}. $$ We notice that the last equality implies that \begin{equation}\label{e:inhalf} n-m \le |A\cap[u,u+p/2)_p| \le m \end{equation} for all real $u$. By Lemma \refl{frep} we have \begin{equation}\label{e:n0large} m \ge \frac n2\, + \frac{p}{2\pi} \,\arcsin\Big( |{\widehat A}(1)| \sin \frac\pi{p} \Big) > \Big( 0.159 + \frac1{2\pi} \arcsin(0.1519\pi) \Big) p > 0.238 p. 
\end{equation} Since $B_0$ is a subset of an interval of length $\ell<p/2$, this shows that $B_0$ is not contained in an arithmetic progression with difference greater than $2$, and we now dispose of the case where $B_0$ is contained in an arithmetic progression with difference $2$. \begin{claim}\label{c:gcd2} If $B_0$ is contained in an arithmetic progression with difference $2$, then the conclusion of the theorem holds true. \end{claim} \begin{proof} If $B_0$ is contained in an arithmetic progression with difference $2$, then there is an integer $u$ and a set $C\subseteq\Z/p\Z$ such that $C\subseteq[u,u+p/4)_p$ and either $A_0=2\ast C$, or $A_0=2\ast C+1$. Evidently, we have $\left\lfloor p/4\right\rfloor<\frac32\,m-1$, whence $(-m,m)_p\subseteq C-C$ by Lemma \refl{dif3} (see also the remark after the lemma), implying $2\ast(-m,m)_p\subseteq A_0-A_0$. Since $A$ is sum-free, we derive that the set $2\ast(-m,m)_p$ is disjoint with $A$, and replacing $A$ with its dilation by the factor $(p-1)/2$ we obtain $A\subseteq[m,p-m]_p$. The assertion now follows from Lemma \refl{p-n3}, as $$ (p-n+1)/3 < 0.228 p < m $$ by \refe{n0large}. \end{proof} In what follows we assume that $B_0$ is \emph{not} contained in an arithmetic progression with difference greater than $1$. Since the sets $A$ and $A-A$ are disjoint, we have $$ n \le p - |A-A| \le p - |A_0-A_0|. $$ To estimate $|A_0-A_0|$ we apply Lemma \refl{3n-3}; this gives \begin{equation}\label{e:diffr} p-n \ge |A_0-A_0| \ge \min \{ \ell+m, 3m-3 \}. \end{equation} Assuming that $\ell\ge 2m-3$ and using \refe{n0large} we then obtain $$ p \ge n + 3m - 3 > (0.318+3\cdot0.238)p - 3 = 1.032p - 3, $$ a contradiction. Thus \begin{gather} \ell \le 2m-4, \label{e:l0small} \\ |A_0-A_0| \ge \ell+m, \label{e:A0-A0} \end{gather} and $p-n \ge \ell+m$ by \refe{diffr}, whence \begin{equation}\label{e:l0smallprime} \ell \le p-n-m. 
\end{equation} We assume, furthermore, that \begin{equation}\label{e:l032n0} \ell \ge \frac32\,m-1; \end{equation} for otherwise $(-m,m)_p\cap A=\varnothing$ by Lemma \refl{dif3}, and consequently $A\subseteq[n,p-n]_p$ (as at the end of the proof of Claim \refc{gcd2}). Assumption \refe{l032n0} will eventually lead us to a contradiction. \begin{claim}\label{c:n02excluded} We have $$ A\cap(-m/2,m/2)_p=\varnothing. $$ \end{claim} \begin{proof} Let $\mu$ be defined by \refe{mu}; we want to show that $\mu\ge m/2$. Notice that by \refe{l0small} and Lemma \refl{dif2} we have $((\ell-m+1)/2,m/2)_p\subseteq A_0-A_0\subseteq A-A$, and consequently $((\ell-m+1)/2,m/2)_p\cap A=\varnothing$; thus, it actually suffices to prove that $\mu>(\ell-m)/2$. Assume that this is wrong, and hence \begin{equation}\label{e:loc20} \cos \pi \, \frac\mu p \ge \cos \pi \, \frac{\ell-m}{2p} \ge \cos \pi \, \frac{p-n-2m}{2p} = \sin \pi \, \frac{n+2m}{2p} \end{equation} holds by \refe{l0smallprime}. Since $A\cap(A+\mu)=\varnothing$, we have $|A\cup(A+\mu)|=2n$ and \begin{equation}\label{e:Strick} \bigg| \sum_{a\in A\cup(\mu+A)} {\rm e}_p(a) \bigg| = \big|(1+{\rm e}_p(\mu))\,{\widehat A}(1)\big| = 2 |{\widehat A}(1)|\, \cos\pi\frac\mu p. \end{equation} We distinguish two cases. Assume first that $m>0.25p$. From \refe{Strick} we get $$ 2 |{\widehat A}(1)|\, \cos\pi\frac{\mu}{p} \le \bigg| \sum_{z=0}^{2n-1} {\rm e}_p(z) \bigg| = \frac{\sin 2\pi n/p}{\sin \pi/p} $$ which, along with \refe{loc20}, \refe{Ahat1}, and the assumption $m>0.25p$, implies \begin{align*} \sin2\pi\frac np &\ge 2|{\widehat A}(1)|\, \cos\pi\frac{\mu}{p} \,\sin\frac\pi{p} \\ &> 0.954 \, \frac{|{\widehat A}(1)|}{0.152p} \, \cdot \frac{\sin\pi/p}{3.141/p} \cdot \cos\pi\frac{\mu}{p} \\ &> 0.954 \sin \pi \, \frac{n+2m}{2p} \\ &> 0.954 \sin \frac{\pi}2\left( \frac np+0.5\right).
\end{align*} It is easy to verify, however, that the function $$ \sin 2\pi x - 0.954 \sin \frac{\pi}2\,(x+0.5) $$ is negative for any $x\in(0.318,0.334)$, a contradiction. Assume now that $m<0.25p$. In this case we apply Lemma \refl{frep} to the set $A\cup(A+\mu)$, observing that by \refe{inhalf} any interval of the form $[u,u+p/2)_p$ with integer $u$ contains at most $2m$ elements of this set; in view of \refe{Strick} this yields $$ 2m \ge n + \frac{p}{2\pi}\, \arcsin\left( 2|{\widehat A}(1)|\cos\pi\frac\mu p \, \sin\frac\pi p \right) $$ and hence $$ 2\pi\frac{2m-n}p \ge \arcsin \left( 2|{\widehat A}(1)|\cos\pi\frac\mu p \, \sin\frac\pi p \right). $$ Since $2\pi(2m-n)/p\le 2\pi m/p<\pi/2$ we obtain $$ \sin 2\pi\frac{2m-n}p \ge 2|{\widehat A}(1)|\cos\pi\frac\mu p \, \sin\frac\pi p $$ which, as above, yields \begin{equation}\label{e:muest} \sin 2\pi\frac{2m-n}p > 0.954 \cos\pi\frac\mu p > 0.954 \sin \pi \, \frac{n+2m}{2p}. \end{equation} On the other hand, it is not difficult to see that the function $$ \sin 2\pi (2y-x) - 0.954 \sin\frac\pi2\,(x+2y) $$ is negative in the region $x\in(0.318,0.334),\,y\in(0.23,0.25)$, a contradiction again. \end{proof} Set \begin{alignat*}{5} A_1^+ &:= A\cap[m/2,\ell-m+1]_p\,, &\quad A_2^+ &:= A\cap[m,p/4)_p\,, \\ A_1^- &:= A\cap[-(\ell-m+1),-m/2]_p\,, &\quad A_2^- &:= A\cap(-p/4,-m]_p\,, \intertext{and} A_1 &:= A_1^+\cup A_1^-\,, &\quad A_2 &:= A_2^+\cup A_2^-, \end{alignat*} so that \begin{equation}\label{e:A1A2} A\cap[0,p/4)_p=A_1^+\cup A_2^+\quad \text{and} \quad A\cap(-p/4,0]_p=A_1^-\cup A_2^- \end{equation} by Lemma \refl{dif2} (applied with $k=1$) and Claim \refc{n02excluded}; observe also that $m/2\le \ell-m+1$ by \refe{l032n0}, and that $A_2=\varnothing$ if $m>0.25p$. 
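The proof of Claim \refc{n02excluded} above invokes two elementary negativity assertions about trigonometric expressions; a dense grid evaluation (ours, purely for reassurance; the actual proof is a routine calculus exercise) confirms both:

```python
from math import sin, pi

# f1 < 0 on (0.318, 0.334), and f2 < 0 on (0.318, 0.334) x (0.23, 0.25):
# the two facts invoked in the proof of the claim above.
f1 = lambda x: sin(2 * pi * x) - 0.954 * sin(pi / 2 * (x + 0.5))
f2 = lambda x, y: sin(2 * pi * (2 * y - x)) - 0.954 * sin(pi / 2 * (x + 2 * y))

xs = [0.318 + k * (0.334 - 0.318) / 200 for k in range(201)]
ys = [0.23 + k * (0.25 - 0.23) / 200 for k in range(201)]
assert max(f1(x) for x in xs) < 0
assert max(f2(x, y) for x in xs for y in ys) < 0
```

The margins are small near $x=0.318$, which reflects how tightly the constants in the theorem are chosen.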
For definiteness, we assume for the rest of the proof that \begin{equation}\label{e:+>-} |A_1^+| \ge |A_1^-| \end{equation} and hence $A_1^+\neq\varnothing$: otherwise by \refe{inhalf} we would have $$ n-m \le |A_2| \le \max \{ 0, p/2-2m \}, $$ leading to either $m=n$ (in which case we are done by Lemma \refl{p-n3}), or $n+m<p/2$ (which contradicts \refe{n0large}). Given two subsets $S_1$ and $S_2$ of an additively written semigroup, we write $$ S_1+S_2 := \{ s_1+s_2\colon s_1\in S_1,\,s_2\in S_2 \}. $$ It is well-known and easy to prove that if $S_1$ and $S_2$ are finite non-empty sets of integers, then $|S_1+S_2|\ge|S_1|+|S_2|-1$ holds. Clearly, this inequality remains valid also if $S_1$ and $S_2$ are non-empty subsets of $\Z/p\Z$, contained in two intervals of total length smaller than $p$. Our next claim refines the estimate \refe{l0smallprime}. \begin{claim}\label{c:l0verysmall} We have $$ \ell \le p - n - m - 2|A_2| + 2. $$ \end{claim} \begin{proof} The assertion follows from the fact that the sets $A$ and $$ A_2^++A_2^+ \subseteq [2m,p/2)_p\,, \quad A_2^-+A_2^- \subseteq (p/2,p-2m]_p\,, \quad A_0-A_0\subseteq[-\ell,\ell] $$ are pairwise disjoint, the estimate \refe{A0-A0}, and the observation that $|A_2^++A_2^+|\ge 2|A_2^+|-1$ and $|A_2^-+A_2^-|\ge 2|A_2^-|-1$. \end{proof} Write $I:=[-(2m-\ell-2),2m-\ell-2]_p$ and $J:=2\ast A_1^++I$. We observe that \begin{equation}\label{e:caseI2} J \cap A = (-J)\cap A = \varnothing \end{equation} by Lemma \refl{EZ} (which is applicable since $A_1^+\subseteq[m/2,\ell-m+1]_p\subseteq[\ell/4,\ell/2]_p$, as it follows from \refe{l0small}). Let $k:=|A_1^+|$, write $A_1^+=\{a_1,\ldots, a_k\}$, where the elements are so numbered that their inverse images in $[m/2,\ell-m+1]$ under $\phi_p$ form an increasing sequence, and for $i\in[1,k]$ set $S_i:=2\ast\{a_1,\ldots, a_i\}+I$.
We then have $|S_1|=|I|=4m-2\ell-3$ and $|S_{i+1}\setminus S_i|\ge 2$ for $i\in[1,k-1]$, and it follows that \begin{equation}\label{e:Slarge} |J| = |S_k| \ge (4m-2\ell-3) + 2(k-1) = 2|A_1^+|+4m-2\ell-5. \end{equation} We are now in a position to complete the proof showing that the above-made assumptions (see the remark following \refe{l032n0}) lead to a contradiction. We consider separately two cases: $m<0.244p$ and $m>0.244p$. \subsection*{Case I: $m<0.244p$} We revisit the proof of Claim \refc{n02excluded}, defining $\mu$ by \refe{mu} and observing that \refe{muest} gives \begin{equation}\label{e:arccos} \mu > \frac1\pi\, \arccos \Big( 1.049 \sin 2\pi\Big( \frac{2m}p-0.318 \Big) \Big) p. \end{equation} (This estimate is stronger than $\mu\ge m/2$ for small values of $m$, and in particular for $m<0.244p$.) Since $A_1^+\subseteq[\mu,\ell-m+1]_p$, we have $$ J \subseteq [\ell-2m+2\mu+2,\ell]_p, $$ hence \refe{+>-}, \refe{caseI2} and \refe{Slarge} yield \begin{align*} |[\ell-2m+2\mu+2,\ell]_p\setminus A| &\ge 2|A_1^+|+4m-2\ell-5 \\ \intertext{and} |[-\ell,-(\ell-2m+2\mu+2)]_p\setminus A| &\ge 2|A_1^+|+4m-2\ell-5 \\ &\ge 2|A_1^-|+4m-2\ell-5. \end{align*} Adding up these estimates we obtain $$ |[\ell-2m+2\mu+2, p-(\ell-2m+2\mu+2)]_p \setminus A| \ge 2|A_1|+8m-4\ell-10, $$ and it follows that \begin{equation}\label{e:caseI5} |[\ell-2m+2\mu+2, p-(\ell-2m+2\mu+2)]_p \cap A| \le p-2|A_1|+2\ell-4m-4\mu+7. \end{equation} We notice now that \begin{multline*} \ell - 2m + 2\mu + 1 \le \ell - m + 1 \le p - n - 2m + 1 \\ < (1-0.318-2\cdot 0.238)p + 1 < 0.25p \end{multline*} by \refe{l0smallprime} and \refe{n0large}, and consequently $$ |[-(\ell-2m+2\mu+1),\ell-2m+2\mu+1]_p\cap A| \le |A_1|+|A_2| $$ holds.
Taking the sum of the last inequality and \refe{caseI5}, observing that $|A_1|+|A_2|\ge n-m$ by \refe{inhalf}, and using the estimate of Claim \refc{l0verysmall}, we get \begin{align*} n &\le p - |A_1| + |A_2| + 2\ell - 4m - 4\mu + 7 \\ &\le 3p - |A_1| - |A_2| - 2n -6m - 4\mu + 11 \\ & \le 3p - 3n - 5m -4\mu + 11. \end{align*} This yields $$ \frac{4\mu}{p} \le 3 - \frac{4n}p - \frac{5m}p + \frac{11}p < 1.729-\frac{5m}p, $$ and comparing this with \refe{arccos} we obtain $$ \frac4\pi\, \arccos \Big( 1.049 \sin 2\pi\Big( \frac{2m}p-0.318 \Big) \Big) < 1.729-\frac{5m}p. $$ However, a routine investigation shows that $$ \frac4\pi\, \arccos\big( 1.049\sin 2\pi (2x-0.318) \big) - 1.729 + 5x $$ is positive for $x\in(0.238,0.244)$. \subsection*{Case II: $m>0.244p$} Recalling \refe{+>-} we get \begin{equation}\label{e:A1large} |A_1^+| \ge \frac12\,{|A_1|} = \frac12\, \big( |(-p/4,p/4)_p\cap A| - |A_2| \big) \ge \frac12\, ( n-m - |A_2|) \end{equation} by \refe{inhalf} and the definitions of $A_1$ and $A_2$. We also observe that \begin{align} |A_1^+| &\le (\ell-m+1) - m/2 + 1 \nonumber \\ &\le p-n-\frac52\,m + 2 \nonumber \\ &= (n-m) - \Big( 2n+\frac32\,m-p \Big) + 2 \nonumber \\ &< (n-m) - (2\cdot0.318+1.5\cdot0.244-1)p + 2 \nonumber \\ &< n-m \label{e:A1small} \end{align} by \refe{l0smallprime}. Using Claim \refc{l0verysmall} we derive from \refe{A1large} that \begin{align*} |[m/2,\ell-m+1]_p\setminus A_1^+| &\le \Big( \ell-\frac32\,m+2 \Big) - \frac12\,(n - m - |A_2|) \\ &= \ell - m - \frac12\,n + \frac12\,|A_2| + 2 \\ &= 2(2m-\ell-2) + 3\ell - 5m - \frac12\,n + \frac12\,|A_2| + 6 \\ &\le 2(2m-\ell-2) + 3p - 8m - \frac72\,n + 12 \\ &< 2(2m-\ell-2) - (8\cdot 0.238 + 3.5\cdot 0.318 - 3)p + 12 \\ &< 2(2m-\ell-2), \end{align*} and it follows that the set $J$ is an interval in $\Z/p\Z$. 
Consequently, by \refe{inhalf}, Claim \refc{n02excluded}, the definition of $A_1^+$, \refe{A1A2}, and \refe{caseI2} we have \begin{align} n-m &\le |(-m/2,(p-m)/2)_p\cap A| \nonumber \\ &= |A_1^+| + |[m,(p-m)/2)_p\cap A| \nonumber \\ &\le |A_1^+| + |[m,(p-m)/2)_p\setminus J|. \label{e:stmJ} \end{align} Since $J\subseteq[\ell-m+2,\ell]_p$ (as it is immediate from the definitions of $J$ and $A_1^+$), we have $$ |[m,(p-m)/2)_p\setminus J| \le \max \{ |[m,(p-m)/2)_p\setminus J_1|, \: |[m,(p-m)/2)_p\setminus J_2|\}, $$ where $$ J_1 := [\ell-m+2,\ell-m+|J|+1]_p \quad \text{and}\quad J_2 := [\ell+1-|J|,\ell]_p $$ (so that $J_1$ and $J_2$ are subintervals of the interval $[\ell-m+2,\ell]_p$ with $|J_1|=|J_2|=|J|$, adjacent to the endpoints of this interval). Accordingly, from \refe{stmJ} we deduce that \begin{equation}\label{e:J12} |[m,(p-m)/2)_p\setminus J_i| \ge n-m-|A_1^+| \end{equation} holds true with either $i=1$, or $i=2$. By \refe{A1small}, assuming that \refe{J12} holds with $i=1$ we obtain $$ n-m-|A_1^+| < (p-m)/2 - (\ell-m+|J|+1) + 1. $$ Using \refe{Slarge}, \refe{A1large}, and Claim \refc{l0verysmall} we now get \begin{gather*} |A_1^+| + 4m - 2\ell - 5 \le |J|-|A_1^+| < 0.5p + \frac32\,m - n - \ell, \\ |A_1^+| < 0.5p - \frac52\,m - n + \ell + 5 \le 1.5p - \frac72\,m - 2n -2|A_2| + 7, \\ \frac52\,n + 3m < 1.5p + 7, \end{gather*} which is wrong since $$ \frac52\,n + 3m > (2.5\cdot0.318+3\cdot0.238)p = 1.509p. $$ Assume now that \refe{J12} holds with $i=2$. Since $$ \frac12\,(n-m-|A_2|) \le |A_1^+| \le \ell-\frac32\,m + 2 $$ by \refe{A1large} and the definition of $A_1^+$, we have $$ \ell \ge \frac12\,n + m - \frac12\,|A_2| - 2 = \frac12\,(p-m) + \frac12\,(n + 3m - p - |A_2|) - 2 > \frac12\,(p-m), $$ as $|A_2|\le\max\{0,0.5p-2m+2\}$ and \begin{gather*} n + 3m > (0.318+3\cdot 0.238)p = 1.032 p, \\ n + 5m > (0.318+5\cdot0.238)p = 1.508 p. 
\end{gather*} Similarly to the above we now obtain from \refe{J12} and \refe{A1small} $$ n - m - |A_1^+| \le (\ell + 1 - |J|) - m + 1 $$ and using \refe{Slarge}, \refe{A1large}, and Claim \refc{l0verysmall} derive that \begin{gather*} |A_1^+| + 4m - 2\ell - 5 \le |J|-|A_1^+| \le \ell - n + 2, \\ |A_1^+| \le 3\ell - 4m - n + 7 \le 3p - 7m - 4n -6|A_2| + 13, \\ 3p + 13 \ge \frac92\,n + \frac{13}2\,m > (4.5\cdot 0.318 + 6.5\cdot0.244)p = 3.017p, \end{gather*} a contradiction, as wanted. \section*{Acknowledgement} The authors are grateful to Dr. K. Srinivas for helpful discussions.
\chapter{The Random Graph} \label{ch32:chap32} \textbf{Summary.} Erd\H{o}s and R\'{e}nyi showed the paradoxical result that there is a unique (and highly symmetric) countably infinite random graph. This graph, and its automorphism group, form the subject of the present survey. \section{Introduction}\label{ch32:sec2.1} In 1963, Erd\H{o}s and R\'{e}nyi \cite{ch32:bib18} showed: \begin{theorem}\label{ch32:them1.1} There exists a graph $R$ with the following property. If a countable graph is chosen at random, by selecting edges independently with probability $\frac{1}{2}$ from the set of $2$-element subsets of the vertex set, then almost surely (i.e., with probability $1$), the resulting graph is isomorphic to $R$. \end{theorem} This theorem, on first acquaintance, seems to defy common sense --- a random process whose outcome is predictable. Nevertheless, the argument which establishes it is quite short. (It is given below.) Indeed, it formed a tailpiece to the paper of Erd\H{o}s and R\'{e}nyi, which mainly concerned the much less predictable world of finite random graphs. (In their book \emph{Probabilistic Methods in Combinatorics}, Erd\H{o}s and Spencer \cite{ch32:bib19} remark that this result ``demolishes the theory of infinite random graphs.'') I will give the proof in detail, since it underlies much that follows. The key is to consider the following property, which a graph may or may not have: \begin{itemize} \item[($\ast$)] \emph{Given finitely many distinct vertices $u_1, \ldots, u_m, v_1, \ldots, v_n$, there exists a vertex $z$ which is adjacent to $u_1,\ldots, u_m$ and nonadjacent to $v_1,\ldots,v_n$.} \end{itemize} Often I will say, for brevity, ``$z$ is correctly joined''. Obviously, a graph satisfying ($\ast$) is infinite, since $z$ is distinct from all of $u_1, \ldots, u_m, v_1, \ldots, v_n$. It is not obvious that any graph has this property.
The theorem follows from two facts: \begin{fact}\label{ch32:subsec2.1.1} With probability $1$, a countable random graph satisfies ($\ast$). \end{fact} \begin{fact}\label{ch32:subsec2.1.2} Any two countable graphs satisfying ($\ast$) are isomorphic. \end{fact} \begin{proof}[Proof (of Fact~\ref{ch32:subsec2.1.1})] We have to show that the event that ($\ast$) fails has probability $0$, i.e., the set of graphs not satisfying ($\ast$) is a null set. For this, it is enough to show that the set of graphs for which ($\ast$) fails for some given vertices $u_1, \ldots, u_m, v_1, \ldots, v_n$ is null. (For this deduction, we use an elementary lemma from measure theory: the union of countably many null sets is null. There are only countably many values of $m$ and $n$, and for each pair of values, only countably many choices of the vertices $u_1, \ldots, u_m, v_1, \ldots, v_n$.) Now we can calculate the probability of this set. Let $z_1,\ldots,z_N$ be vertices distinct from $u_1, \ldots, u_m, v_1, \ldots, v_n$. The probability that any $z_i$ is not correctly joined is $1- \frac{1}{2^{m+n}}$; since these events are independent (for different $z_i$), the probability that none of $z_1,\ldots,z_N$ is correctly joined is $(1 - \frac{1}{2^{m+n}})^N$. This tends to $0$ as $N \rightarrow \infty$; so the event that no vertex is correctly joined does have probability $0$. \end{proof} Note that, at this stage, we know that graphs satisfying ($\ast$) exist, though we have not constructed one --- a typical ``probabilistic existence proof''. Note also that ``probability $\frac{1}{2}$'' is not essential to the proof; the same result holds if edges are chosen with fixed probability $p$, where $0 < p < 1$. Some variation in the edge probability can also be permitted. \begin{proof}[Proof (of Fact~\ref{ch32:subsec2.1.2})] Let $\Gamma_1$ and $\Gamma_2$ be two countable graphs satisfying ($\ast$). 
Suppose that $f$ is a map from a finite set $\{x_1,\ldots, x_n\}$ of vertices of $\Gamma_1$ to $\Gamma_2$, which is an isomorphism of induced subgraphs, and $x_{n+1}$ is another vertex of $\Gamma_1$. We show that $f$ can be extended to $x_{n+1}$. Let $U$ be the set of neighbours of $x_{n+1}$ within $\{x_1,\ldots, x_n\}$, and $V = \{x_1,\ldots, x_n\} \setminus U$. A potential image of $x_{n+1}$ must be a vertex of $\Gamma_2$ adjacent to every vertex in $f(U)$ and nonadjacent to every vertex in $f(V)$. Now property ($\ast$) (for the graph $\Gamma_2$) guarantees that such a vertex exists. Now we use a model-theoretic device called ``back-and-forth''. (This is often attributed to Cantor \cite{ch32:bib11}, in his characterization of the rationals as the countable dense ordered set without endpoints. However, as Plotkin \cite{ch32:bib41} has shown, it was not used by Cantor; it was discovered by Huntington \cite{ch32:bib32} and popularized by Hausdorff \cite{ch32:bib25}.) Enumerate the vertices of $\Gamma_1$ and $\Gamma_2$, as $\{x_1, x_2, \ldots\}$ and $\{y_1, y_2, \ldots\}$ respectively. We build finite isomorphisms $f_n$ as follows. Start with $f_0 = \emptyset$. Suppose that $f_n$ has been constructed. If $n$ is even, let $m$ be the smallest index of a vertex of $\Gamma_1$ not in the domain of $f_n$; then extend $f_n$ (as above) to a map $f_{n+1}$ with $x_m$ in its domain. (To avoid the use of the Axiom of Choice, select the correctly-joined vertex of $\Gamma_2$ with smallest index to be the image of $x_m$.) If $n$ is odd, we work backwards. Let $m$ be the smallest index of a vertex of $\Gamma_2$ which is not in the range of $f_n$; extend $f_n$ to a map $f_{n+1}$ with $y_m$ in its range (using property ($\ast$) for $\Gamma_1$). Take $f$ to be the union of all these partial maps. By going alternately back and forth, we guarantee that every vertex of $\Gamma_1$ is in the domain, and every vertex of $\Gamma_2$ is in the range, of $f$. So $f$ is the required isomorphism.
\end{proof} The graph $R$ holds as central a position in graph theory as $\mathbb{Q}$ does in the theory of ordered sets. It is surprising that it was not discovered long before the 1960s! Since then, its importance has grown rapidly, both in its own right, and as a prototype for other theories. \begin{remark}\rm Results of Shelah and Spencer \cite{ch32:bib47} and Hrushovski \cite{ch32:bib31} suggest that there are interesting countable graphs which ``control'' the first-order theory of finite random graphs whose edge-probabilities tend to zero in specified ways. See Wagner \cite{ch32:bib54}, Winkler \cite{ch32:bib55} for surveys of this. \end{remark} \section{Some constructions}\label{ch32:sec2.2} Erd\H{o}s and R\'{e}nyi did not feel it necessary to give an explicit construction of $R$; the fact that almost all countable graphs are isomorphic to $R$ guarantees its existence. Nevertheless, such constructions may tell us more about $R$. Of course, to show that we have constructed $R$, it is necessary and sufficient to verify condition ($\ast$). I begin with an example from set theory. The downward L\"{o}wenheim-Skolem theorem says that a consistent first-order theory over a countable language has a countable model. In particular, there is a countable model of set theory (the \emph{Skolem paradox}). \begin{theorem}\label{ch32:them2.1} Let $M$ be a countable model of set theory. Define a graph $M^{\ast}$ by the rule that $x\sim y$ if and only if either $x\in y$ or $y\in x$. Then $M^{\ast}$ is isomorphic to $R$. \end{theorem} \begin{proof} Let $u_1, \ldots, u_m, v_1, \ldots, v_n$ be distinct elements of $M$. Let $x = \{v_1,\ldots, v_n\}$ and $z = \{u_1, \ldots, u_m, x\}$. We claim that $z$ is a witness to condition ($\ast$). Clearly $u_i\sim z$ for all $i$. Suppose that $v_j \sim z$. If $v_j \in z$, then either $v_j = u_i$ (contrary to assumption), or $v_j = x$ (whence $x \in x$, contradicting the Axiom of Foundation).
If $z\in v_j$, then $x \in z \in v_j \in x$, again contradicting Foundation. \end{proof} Note how little set theory was actually used: only our ability to gather finitely many elements into a set (a consequence of the Empty Set, Pairing and Union Axioms) and the Axiom of Foundation. In particular, the Axiom of Infinity is not required. Now there is a familiar way to encode finite subsets of $\mathbb{N}$ as natural numbers: the set $\{a_1, \ldots,a_n\}$ of distinct elements is encoded as $2^{a_1}+ \cdots+ 2^{a_n}$. This leads to an explicit description of $R$: the vertex set is $\mathbb{N}$; $x$ and $y$ are adjacent if the $x^{\rm th}$ digit in the base $2$ expansion of $y$ is a $1$ or \emph{vice versa}. This description was given by Rado \cite{ch32:bib42}. The next construction is more number-theoretic. Take as vertices the set $\mathbb{P}$ of primes congruent to $1 \pmod{4}$. By quadratic reciprocity, if $p,q \in\mathbb{P}$, then $\big(\frac{p}{q}\big) = 1$ if and only if $\big(\frac{q}{p}\big) = 1$. (Here ``$\big(\frac{p}{q}\big) = 1$'' means that $p$ is a quadratic residue $\pmod{q}$.) We declare $p$ and $q$ adjacent if $\big(\frac{p}{q}\big)= 1$. Let $u_1, \ldots, u_m, v_1, \ldots, v_n\in\mathbb{P}$. Choose a fixed quadratic residue $a_i\pmod{u_i}$ (for example, $a_i = 1$), and a fixed non-residue $b_j\pmod{v_j}$. By the Chinese Remainder Theorem, the congruences \[ x \equiv 1\pmod{4},\quad x \equiv a_i \pmod{u_i},\quad x \equiv b_j \pmod{v_j}, \] have a unique solution $x \equiv x_0\pmod {4u_1 \ldots u_m v_1 \ldots v_n}$. By Dirichlet's Theorem, there is a prime $z$ satisfying this congruence. So property ($\ast$) holds. A set $S$ of positive integers is called \emph{universal} if, given $k\in\mathbb{N}$ and $T \subseteq \{1,\ldots, k\}$, there is an integer $N$ such that, for $i = 1, \ldots,k$, \[ N+ i \in S\quad\mbox{if and only if}\quad i \in T. \] (It is often convenient to consider binary sequences instead of sets.
There is an obvious bijection, under which the sequence $\sigma$ and the set $S$ correspond when $(\sigma_i = 1) ~\Leftrightarrow~(i \in S)$ --- thus $\sigma$ is the characteristic function of $S$. Now a binary sequence $\sigma$ is universal if and only if it contains every finite binary sequence as a consecutive subsequence.) Let $S$ be a universal set. Define a graph with vertex set $\mathbb{Z}$, in which $x$ and $y$ are adjacent if and only if $|x-y|\in S$. This graph is isomorphic to $R$. For let $u_1, \ldots, u_m, v_1, \ldots, v_n$ be distinct integers; let $l$ and $L$ be the least and greatest of these integers. Let $k = L - l + 1$ and $T = \{u_i - l + 1 : i = 1, \ldots, m\}$. Choose $N$ as in the definition of universality. Then $z = l - 1 - N$ has the required adjacencies. The simplest construction of a universal sequence is to enumerate all finite binary sequences and concatenate them. But there are many others. It is straight\-forward to show that a random subset of $\mathbb{N}$ (obtained by choosing positive integers independently with probability $\frac{1}{2}$) is almost surely universal. (Said otherwise, the base $2$ expansion of almost every real number in $[0,1]$ is a universal sequence.) Of course, it is possible to construct a graph satisfying ($\ast$) directly. For example, let $\Gamma_0$ be the empty graph; if $\Gamma_k$ has been constructed, let $\Gamma_{k+1}$ be obtained by adding, for each subset $U$ of the vertex set of $\Gamma_k$, a vertex $z(U)$ whose neighbour set is precisely $U$. Clearly, the union of this sequence of graphs satisfies ($\ast$). \section{Indestructibility}\label{ch32:sec2.3} The graph $R$ is remarkably stable: if small changes are made to it, the resulting graph is still isomorphic to $R$. Some of these results depend on the following analogue of property ($\ast$), which appears stronger but is an immediate consequence of ($\ast$) itself.
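Returning briefly to the direct construction at the end of Section~\ref{ch32:sec2.2}: it is easy to carry out mechanically for a few steps. A small Python sketch (labelling the vertices by consecutive integers is my own convention):

```python
def iterate_star(steps):
    """Gamma_0 is empty; Gamma_{k+1} adds, for every subset U of the
    current vertex set, a new vertex z(U) joined precisely to U.
    Subsets are enumerated by bitmask over the current vertex list."""
    vertices, edges = [], set()
    for _ in range(steps):
        current = list(vertices)
        for mask in range(1 << len(current)):
            z = len(vertices)
            vertices.append(z)
            for i, u in enumerate(current):
                if (mask >> i) & 1:
                    edges.add(frozenset((z, u)))
    return vertices, edges
```

The vertex counts grow as $0, 1, 3, 11, 2059, \ldots$, and by construction every subset of $\Gamma_k$ acquires a witness in $\Gamma_{k+1}$.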
\begin{proposition}\label{ch32:prop3.1} Let $u_1, \ldots, u_m, v_1, \ldots, v_n$ be distinct vertices of $R$. Then the set \[ Z = \{z : z\sim u_i\mbox{ for }i = 1,\ldots,m; z\nsim v_j\mbox{ for }j = 1,\ldots, n\} \] is infinite; and the induced subgraph on this set is isomorphic to $R$. \end{proposition} \begin{proof} It is enough to verify property ($\ast$) for $Z$. So let $u_1^{\prime}, \ldots, u_k^{\prime}, v_1^{\prime}, \ldots, v_l^{\prime}$ be distinct vertices of $Z$. Now the vertex $z$ adjacent to $u_1, \ldots, u_m, u_1^{\prime}, \ldots, u_k^{\prime}$ and not to $v_1, \ldots, v_n, v_1^{\prime}, \ldots, v_l^{\prime}$, belongs to $Z$ and witnesses the truth of this instance of ($\ast$) there. \end{proof} The operation of \emph{switching} a graph with respect to a set $X$ of vertices is defined as follows. Replace each edge between a vertex of $X$ and a vertex of its complement by a non-edge, and each such non-edge by an edge; leave the adjacencies within $X$ or outside $X$ unaltered. See Seidel \cite{ch32:bib46} for more properties of this operation. \begin{proposition}\label{ch32:prop3.2} The result of any of the following operations on $R$ is isomorphic to $R$: \begin{itemize} \item[(a)] deleting a finite number of vertices; \item[(b)] changing a finite number of edges into non-edges or vice versa; \item[(c)] switching with respect to a finite set of vertices. \end{itemize} \end{proposition} \begin{proof} In cases (a) and (b), to verify an instance of property ($\ast$), we use Proposition~\ref{ch32:prop3.1} to avoid the vertices which have been tampered with. For (c), if $U =\{u_1,\ldots,u_m\}$ and $V = \{v_1,\ldots, v_n\}$, we choose a vertex outside $X$ which is adjacent (in $R$) to the vertices of $U \setminus X$ and $V \cap X$, and non-adjacent to those of $U\cap X$ and $V \setminus X$. \end{proof} Not every graph obtained from $R$ by switching is isomorphic to $R$.
For example, if we switch with respect to the neighbours of a vertex $x$, then $x$ is an isolated vertex in the resulting graph. However, if $x$ is deleted, we obtain $R$ once again! Moreover, if we switch with respect to a random set of vertices, the result is almost certainly isomorphic to $R$. $R$ satisfies the \emph{pigeonhole principle}: \begin{proposition}\label{ch32:prop3.3} If the vertex set of $R$ is partitioned into a finite number of parts, then the induced subgraph on one of these parts is isomorphic to $R$. \end{proposition} \begin{proof} Suppose that the conclusion is false for the partition $X_1 \cup \ldots \cup X_k$ of the vertex set. Then, for each $i$, property ($\ast$) fails in $X_i$, so there are finite disjoint subsets $U_i, V_i$ of $X_i$ such that no vertex of $X_i$ is ``correctly joined'' to all vertices of $U_i$, and to none of $V_i$. Setting $U = U_1 \cup \ldots \cup U_k$ and $V = V_1 \cup \ldots \cup V_k$, we find that condition ($\ast$) fails in $R$ for the sets $U$ and $V$, a contradiction. \end{proof} Indeed, this property is characteristic: \begin{proposition}\label{ch32:prop3.4} The only countable graphs $\Gamma$ which have the property that, if the vertex set is partitioned into two parts, then one of those parts induces a subgraph isomorphic to $\Gamma$, are the complete and null graphs and $R$. \end{proposition} \begin{proof} Suppose that $\Gamma$ has this property but is not complete or null. Since any graph can be partitioned into a null graph and a graph with no isolated vertices, we see that $\Gamma$ has no isolated vertices. Similarly, it has no vertices joined to all others. Now suppose that $\Gamma$ is not isomorphic to $R$. Then we can find $u_1,\ldots,u_m$ and $v_1,\ldots,v_n$ such that ($\ast$) fails, with $m +n$ minimal subject to this. By the preceding paragraph, $m + n > 1$. So the set $\{u_1,\ldots, v_n\}$ can be partitioned into two non-empty subsets $A$ and $B$. 
Now let $X$ consist of $A$ together with all vertices (not in $B$) which are not ``correctly joined'' to the vertices in $A$; and let $Y$ consist of $B$ together with all vertices (not in $X$) which are not ``correctly joined'' to the vertices in $B$. By assumption, $X$ and $Y$ form a partition of the vertex set. Moreover, the induced subgraphs on $X$ and $Y$ fail instances of condition ($\ast$) with fewer than $m +n$ vertices; by minimality, neither is isomorphic to $\Gamma$, a contradiction. \end{proof} Finally: \begin{proposition}\label{ch32:prop3.5} $R$ is isomorphic to its complement. \end{proposition} For property ($\ast$) is clearly self-complementary. \section{Graph-theoretic properties}\label{ch32:sec2.4} The most important property of $R$ (and the reason for Rado's interest) is that it is \emph{universal}: \begin{proposition}\label{ch32:prop4.1} Every finite or countable graph can be embedded as an induced subgraph of $R$. \end{proposition} \begin{proof} We apply the proof technique of Fact~\ref{ch32:subsec2.1.2}; but, instead of back-and-forth, we just ``go forth''. Let $\Gamma$ have vertex set $\{x_1, x_2,\ldots\}$, and suppose that we have a map $f_n : \{x_1, \ldots,x_n\} \rightarrow R$ which is an isomorphism of induced subgraphs. Let $U$ and $V$ be the sets of neighbours and non-neighbours respectively of $x_{n+1}$ in $\{x_1, \ldots,x_n\}$. Choose $z \in R$ adjacent to the vertices of $f(U)$ and nonadjacent to those of $f(V)$, and extend $f_n$ to map $x_{n+1}$ to $z$. The resulting map $f_{n+1}$ is still an isomorphism of induced subgraphs. Then $f = \bigcup f_n$ is the required embedding. (The point is that, going forth, we only require that property ($\ast$) holds in the target graph.) \end{proof} In particular, $R$ contains infinite cliques and cocliques. Clearly no finite clique or coclique can be maximal. There do exist infinite maximal cliques and cocliques.
For example, if we enumerate the vertices of $R$ as $\{x_1, x_2, \ldots\}$, and build a set $S$ by $S_0 = \emptyset, S_{n +1} = S_n \cup \{x_m\}$ where $m$ is the least index of a vertex joined to every vertex in $S_n$, and $S = \bigcup S_n$, then $S$ is a maximal clique. Dual to the concept of induced subgraph is that of \emph{spanning subgraph}, using all the vertices and some of the edges. Not every countable graph is a spanning subgraph of $R$ (for example, the complete graph is not). We have the following characterization: \begin{proposition}\label{ch32:prop4.2} A countable graph $\Gamma$ is isomorphic to a spanning subgraph of $R$ if and only if, given any finite set $\{v_1, \ldots, v_n\}$ of vertices of $\Gamma$, there is a vertex $z$ joined to none of $v_1, \ldots, v_n$. \end{proposition} \begin{proof} We use back-and-forth to construct a bijection between the vertex sets of $\Gamma$ and $R$, but when going back from $R$ to $\Gamma$, we only require that \emph{nonadjacencies} should be preserved. \end{proof} This shows, in particular, that every infinite locally finite graph is a spanning subgraph (so $R$ contains $1$-factors, one- and two-way infinite Hamiltonian paths, etc.). But more can be said. The argument can be modified to show that, given any non-null locally finite graph $\Gamma$, any edge of $R$ lies in a spanning subgraph isomorphic to $\Gamma$. Moreover, as in the last section, if the edges of a locally finite graph are deleted from $R$, the result is still isomorphic to $R$. Now let $\Gamma_1, \Gamma_2,\ldots$ be given non-null locally finite countable graphs. Enumerate the edges of $R$, as $\{e_1,e_2,\ldots\}$. Suppose that we have found edge-disjoint spanning subgraphs of $R$ isomorphic to $\Gamma_1,\ldots,\Gamma_n$. Let $m$ be the smallest index of an edge of $R$ lying in none of these subgraphs. Then we can find a spanning subgraph of $R - (\Gamma_1 \cup\cdots\cup\Gamma_n)$ containing $e_m$ and isomorphic to $\Gamma_{n +1}$. 
We conclude: \begin{proposition}\label{ch32:prop4.3} The edge set of $R$ can be partitioned into spanning subgraphs isomorphic to any given countable sequence of non-null countable locally finite graphs. \end{proposition} In particular, $R$ has a $1$-factorization, and a partition into Hamiltonian paths. \section{Homogeneity and categoricity}\label{ch32:sec2.5} We come now to two model-theoretic properties of $R$. These illustrate two important general theorems, the Engeler--Ryll-Nardzewski--Svenonius theorem and Fra\"{i}ss\'{e}'s theorem. The context is first-order logic; so a \emph{structure} is a set equipped with a collection of relations, functions and constants whose names are specified in the language. If there are no functions or constants, we have a \emph{relational structure}. The significance is that any subset of a relational structure carries an induced substructure. (In general, a substructure must contain the constants and be closed with respect to the functions.) Let $M$ be a relational structure. We say that $M$ is \emph{homogeneous} if every isomorphism between finite induced substructures of $M$ can be extended to an automorphism of $M$. \begin{proposition}\label{ch32:prop5.1} $R$ is homogeneous. \end{proposition} \begin{proof} In the proof of Fact~\ref{ch32:subsec2.1.2}, the back-and-forth machine can be started with any given isomorphism between finite substructures of the graphs $\Gamma_1$ and $\Gamma_2$, and extends it to an isomorphism between the two structures. Now, taking $\Gamma_1$ and $\Gamma_2$ to be $R$ gives the conclusion. \end{proof} Fra\"{i}ss\'{e} \cite{ch32:bib21} observed that $\mathbb{Q}$ (as an ordered set) is homogeneous, and used this as a prototype: he gave a necessary and sufficient condition for the existence of a homogeneous structure with prescribed finite substructures. Following his terminology, the \emph{age} of a structure $M$ is the class of all finite structures embeddable in $M$.
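The one-point extension step behind Proposition~\ref{ch32:prop5.1} and Fact~\ref{ch32:subsec2.1.2} can be made concrete in Rado's explicit model of $R$ (Section~\ref{ch32:sec2.2}); a minimal Python sketch, with names of my own choosing:

```python
def bit_adjacent(x, y):
    # Rado's graph on the natural numbers: x ~ y iff the x-th binary
    # digit of y is 1, or vice versa
    return x != y and bool(((y >> x) & 1) | ((x >> y) & 1))

def extend(partial, x):
    """Extend a finite partial isomorphism (a dict from vertices to
    vertices of the same graph) by one vertex x, sending x to the least
    correctly joined vertex not already used; property (*) guarantees
    that the search terminates."""
    us = [partial[w] for w in partial if bit_adjacent(w, x)]
    vs = [partial[w] for w in partial if not bit_adjacent(w, x)]
    used = set(partial.values())
    z = 0
    while (z in used
           or not all(bit_adjacent(z, u) for u in us)
           or any(bit_adjacent(z, v) for v in vs)):
        z += 1
    out = dict(partial)
    out[x] = z
    return out
```

Iterating `extend` over an enumeration of the vertices is exactly the ``go forth'' half of the back-and-forth argument.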
A class $\mathcal{C}$ of finite structures has the \emph{amalgamation property} if, given $A$, $B_1$, $B_2 \in \mathcal{C}$ and embeddings $f_1 : A \rightarrow B_1$ and $f_2 : A \rightarrow B_2$, there exists $C \in\mathcal{C}$ and embeddings $g_1 : B_1 \rightarrow C$ and $g_2 : B_2 \rightarrow C$ such that $f_1g_1 = f_2g_2$. (Less formally, if the two structures $B_1, B_2$ have isomorphic substructures $A$, they can be ``glued together'' so that the copies of $A$ coincide, the resulting structure $C$ also belonging to the class $\mathcal{C}$.) We allow $A=\emptyset$ here. \begin{theorem}\label{ch32:them5.1} \begin{itemize} \item[(a)] A class $\mathcal{C}$ of finite structures (over a fixed relational language) is the age of a countable homogeneous structure $M$ if and only if $\mathcal{C}$ is closed under isomorphism, closed under taking induced substructures, contains only countably many non-isomorphic structures, and has the amalgamation property. \item[(b)] If the conditions of (a) are satisfied, then the structure $M$ is unique up to isomorphism. \end{itemize} \end{theorem} A class $\mathcal{C}$ having the properties of this theorem is called a \emph{Fra\"{\i}ss\'e class}, and the countable homogeneous structure $M$ whose age is $\mathcal{C}$ is its \emph{Fra\"{i}ss\'{e} limit}. The class of all finite graphs is a Fra\"{i}ss\'{e} class; its Fra\"{i}ss\'{e} limit is $R$. The Fra\"{i}ss\'{e} limit of a class $\mathcal{C}$ is characterized by a condition generalizing property ($\ast$): \emph{If $A$ and $B$ are members of the age of $M$ with $A\subseteq B$ and $|B| = |A| + 1$, then every embedding of $A$ into $M$ can be extended to an embedding of $B$ into $M$.} In the statement of the amalgamation property, when the two structures $B_1, B_2$ are ``glued together'', the overlap may be larger than $A$.
We say that the class $\mathcal{C}$ has the \emph{strong amalgamation property} if this doesn't occur; formally, if the embeddings $g_1, g_2$ can be chosen so that, if $b_1g_1 = b_2g_2$, then there exists $a \in A$ such that $b_1 = af_1$ and $b_2 = af_2$. This property is equivalent to others we have met. \begin{proposition}\label{ch32:prop5.2} Let $M$ be the Fra\"{i}ss\'{e} limit of the class $\mathcal{C}$, and $G = \Aut(M)$. Then the following are equivalent: \begin{itemize} \item[(a)] $\mathcal{C}$ has the strong amalgamation property; \item[(b)] $M \setminus A \cong M$ for any finite subset $A$ of $M$; \item[(c)] the orbits of $G_A$ on $M \setminus A$ are infinite for any finite subset $A$ of $M$, where $G_A$ is the setwise stabiliser of $A$. \end{itemize} \end{proposition} See Cameron \cite{ch32:bib6}, El-Zahar and Sauer \cite{ch32:bib15}. A structure $M$ is called \emph{$\aleph_0$-categorical} if any countable structure satisfying the same first-order sentences as $M$ is isomorphic to $M$. (We must specify countability here: the upward L\"{o}wenheim--Skolem theorem shows that, if $M$ is infinite, then there are structures of arbitrarily large cardinality which satisfy the same first-order sentences as $M$.) \begin{proposition}\label{ch32:prop5.3} $R$ is $\aleph_0$-categorical. \end{proposition} \begin{proof} Property ($\ast$) is not first-order as it stands, but it can be translated into a countable set of first-order sentences $\sigma_{m,n}$ (for $m, n \in \mathbb{N}$), where $\sigma_{m,n}$ is the sentence \[ (\forall u_1..u_mv_1..v_n)\bigg(\!\bigg({(u_1\neq v_1)\& \ldots \& \atop (u_m \neq v_n)}\bigg)\rightarrow (\exists z)\bigg({(z\sim u_1)\& \ldots \& (z\sim u_m)\& \atop \neg(z\sim v_1)\& \ldots \& \neg(z\sim v_n)}\bigg)\!\bigg).\qedhere \] \end{proof} Once again this is an instance of a more general result. 
An \emph{$n$-type} in a structure $M$ is an equivalence class of $n$-tuples, where two tuples are equivalent if they satisfy the same ($n$-variable) first-order formulae. Now the following theorem was proved by Engeler \cite{ch32:bib16}, Ryll-Nardzewski \cite{ch32:bib44} and Svenonius \cite{ch32:bib49}: \begin{theorem}\label{ch32:them5.2} For a countable first-order structure $M$, the following conditions are equivalent: \begin{itemize} \item[(a)] $M$ is $\aleph_0$-categorical; \item[(b)] $M$ has only finitely many $n$-types, for every $n$; \item[(c)] the automorphism group of $M$ has only finitely many orbits on $M^n$, for every $n$. \end{itemize} \end{theorem} Note that the equivalence of conditions (a) (axiomatizability) and (c) (symmetry) is in the spirit of Klein's Erlanger Programm. The fact that $R$ satisfies (c) is a consequence of its homogeneity, since $(x_1,\ldots, x_n)$ and $(y_1,\ldots,y_n)$ lie in the same orbit of Aut($R$) if and only if the map $(x_i \rightarrow y_i)$ $(i = 1,\ldots, n)$ is an isomorphism of induced subgraphs, and there are only finitely many $n$-vertex graphs. \begin{remark}\label{ch32:rem5.1}\rm The general definition of an $n$-type in first-order logic is more complicated than the one given here: roughly, it is a maximal set of $n$-variable formulae consistent with a given theory. I have used the fact that, in an $\aleph_0$-categorical structure, any $n$-type is \emph{realized} (i.e., satisfied by some tuple) --- this is a consequence of the G\"{o}del--Henkin completeness theorem and the downward L\"{o}wenheim--Skolem theorem. See Hodges \cite{ch32:bib28} for more details. \end{remark} Some properties of $R$ can be deduced from either its homogeneity or its $\aleph_0$-categoricity. For example, Proposition~\ref{ch32:prop4.1} generalizes.
We say that a countable relational structure $M$ is \emph{universal} (or \emph{rich for its age}, in Fra\"{i}ss\'{e}'s terminology \cite{ch32:bib22}) if every countable structure $N$ whose age is contained in that of $M$ (i.e., which is \emph{younger} than $M$) is embeddable in $M$. \begin{theorem}\label{ch32:them5.3} If $M$ is either $\aleph_0$-categorical or homogeneous, then it is universal. \end{theorem} The proof for homogeneous structures follows that of Proposition~\ref{ch32:prop4.1}, using the analogue of property ($\ast$) described above. The argument for $\aleph_0$-categorical structures is a bit more subtle, using Theorem 5.4 and K\"{o}nig's Infinity Lemma: see Cameron \cite{ch32:bib7}. \section{First-order theory of random graphs}\label{ch32:sec2.6} The graph $R$ controls the first-order theory of finite random graphs, in a manner I now describe. This theory is due to Glebskii {\it et al.} \cite{ch32:bib24}, Fagin \cite{ch32:bib20}, and Blass and Harary \cite{ch32:bib2}. A property P holds in \emph{almost all finite random graphs} if the proportion of $N$-vertex graphs which satisfy P tends to $1$ as $N \rightarrow \infty$. Recall the sentences $\sigma_{m,n}$ which axiomatize $R$. \begin{theorem}\label{ch32:them6.1} Let $\theta$ be a first-order sentence in the language of graph theory. Then the following are equivalent: \begin{itemize} \item[(a)] $\theta$ holds in almost all finite random graphs; \item[(b)] $\theta$ holds in the graph $R$; \item[(c)] $\theta$ is a logical consequence of $\{\sigma_{m,n} : m, n \in \mathbb{N}\}$. \end{itemize} \end{theorem} \begin{proof} The equivalence of (b) and (c) is immediate from the G\"{o}del--Henkin completeness theorem for first-order logic and the fact that the sentences $\sigma_{m,n}$ axiomatize $R$. We show that (c) implies (a). First we show that $\sigma_{m,n}$ holds in almost all finite random graphs.
The probability that it fails in an $N$-vertex graph is not greater than $N^{m+n}(1 - \frac{1}{2^{m+n}})^{N - m- n}$, since there are at most $N^{ m +n}$ ways of choosing $m + n$ distinct points, and $(1 - \frac{1}{2^{m+n}})^{N - m- n}$ is the probability that no further point is correctly joined. This probability tends to $0$ as $N \rightarrow \infty$. Now let $\theta$ be an arbitrary sentence satisfying (c). Since proofs in first-order logic are finite, the deduction of $\theta$ involves only a finite set $\Sigma$ of sentences $\sigma_{m,n}$. It follows from the last paragraph that almost all finite graphs satisfy the sentences in $\Sigma$; so almost all satisfy $\theta$ too. Finally, we show that not (c) implies not (a). If (c) fails, then $\theta$ doesn't hold in $R$, so $(\neg \theta)$ holds in $R$, so $(\neg \theta)$ is a logical consequence of the sentences $\sigma_{m,n}$. By the preceding paragraph, $(\neg \theta)$ holds in almost all random graphs. \end{proof} The last part of the argument shows that there is a zero-one law: \begin{corollary}\label{ch32:coro6.1} Let $\theta$ be a sentence in the language of graph theory. Then either $\theta$ holds in almost all finite random graphs, or it holds in almost none. \end{corollary} It should be stressed that, striking though this result is, most interesting graph properties (connectedness, hamiltonicity, etc.) are not first-order, and most interesting results on finite random graphs are obtained by letting the probability of an edge tend to zero in a specified manner as $N \rightarrow \infty$, rather than keeping it constant (see Bollob\'{a}s \cite{ch32:bib3}). Nevertheless, we will see a recent application of Theorem~\ref{ch32:them6.1} later. 
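The bound used in the proof, $N^{m+n}(1 - 2^{-(m+n)})^{N-m-n}$, is easy to evaluate numerically: the exponentially decaying factor overwhelms the polynomial one. A quick sanity check (ours, not part of the survey):

```python
def failure_bound(m, n, N):
    """Upper bound on the probability that sigma_{m,n} fails
    in a random N-vertex graph: N^(m+n) * (1 - 2^-(m+n))^(N-m-n)."""
    k = m + n
    return N ** k * (1 - 2.0 ** (-k)) ** (N - k)

# the bound tends to 0 as N grows, for any fixed m and n
for N in (100, 1000, 10000):
    print(N, failure_bound(1, 1, N))
```

For $m = n = 1$ the bound is already below $10^{-6}$ at $N = 100$, and it decreases rapidly thereafter.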
\section{Measure and category}\label{ch32:sec2.7} When the existence of an infinite object can be proved by a probabilistic argument (as we did with $R$ in Section~\ref{ch32:sec2.1}), it is often the case that an alternative argument using the concept of Baire category can be found. In this section, I will sketch the tools briefly. See Oxtoby \cite{ch32:bib39} for a discussion of measure and Baire category. In a topological space, a set is \emph{dense} if it meets every nonempty open set; a set is \emph{residual} if it contains a countable intersection of open dense sets. The \emph{Baire category theorem} states: \begin{theorem}\label{ch32:them7.1} In a complete metric space, any residual set is non-empty. \end{theorem} (The analogous statement for probability is that a set which contains a countable intersection of sets of measure $1$ is non-empty. We used this to prove Fact~\ref{ch32:subsec2.1.1}.) The simplest situation concerns the space $2^{\mathbb{N}}$ of all infinite sequences of zeros and ones. This is a probability space, with the ``coin-tossing measure'' --- this was the basis of our earlier discussion --- and also a complete metric space, where we define $d(x, y) = \frac{1}{2^n}$ if the sequences $x$ and $y$ agree in positions $0, 1, \ldots, n - 1$ and disagree in position $n$. Now the topological concepts translate into combinatorial ones as follows. A set $S$ of sequences is open if and only if it is \emph{finitely determined}, i.e., any $x \in S$ has a finite initial segment such that all sequences with this initial segment are in $S$. A set $S$ is dense if and only if it is \emph{always reachable}, i.e., any finite sequence has a continuation lying in $S$. Now it is a simple exercise to prove the Baire category theorem for this space, and indeed to show that a residual set is dense and has cardinality $2^{\aleph_0}$.
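The metric on $2^{\mathbb{N}}$ can be computed from finite prefixes of the two sequences. The sketch below (ours, for illustration) also checks the ultrametric inequality $d(x,z) \leq \max\{d(x,y), d(y,z)\}$, which is stronger than the triangle inequality and holds for this metric:

```python
from fractions import Fraction

def d(x, y):
    """Distance between 0/1 sequences (given as equal-length prefixes):
    1/2^n where n is the first position of disagreement, 0 if they agree."""
    for n, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return Fraction(1, 2 ** n)
    return Fraction(0)

x = [0, 1, 1, 0, 1]
y = [0, 1, 1, 1, 1]
z = [0, 0, 0, 0, 0]
print(d(x, y))  # first disagreement at position 3 -> 1/8
# ultrametric inequality, stronger than the triangle inequality
assert d(x, z) <= max(d(x, y), d(y, z))
```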
We will say that ``almost all sequences have property P (in the sense of Baire category)'' if the set of sequences which have property P is residual. We can describe countable graphs by binary sequences: take a fixed enumeration of the $2$-element sets of vertices, and regard the sequence as the characteristic function of the edge set of the graph. This gives meaning to the phrase ``almost all graphs (in the sense of Baire category)''. Now, by analogy with Fact~\ref{ch32:subsec2.1.1}, we have: \begin{fact}\label{ch32:subsec2.7.1} Almost all countable graphs (in the sense of either measure or Baire category) have property ($\ast$). \end{fact} The proof is an easy exercise. In fact, it is simpler for Baire category than for measure --- no limit is required! In the same way, almost all binary sequences (in either sense) are universal (as defined in Section~\ref{ch32:sec2.2}). A binary sequence defines a path in the binary tree of countable height, if we start at the root and interpret $0$ and $1$ as instructions to take the left or right branch at any node. More generally, given any countable tree, the set of paths is a complete metric space, where we define the distance between two paths to be $\frac{1}{2^n}$ if they first split apart at level $n$ in the tree. So the concept of Baire category is applicable. The combinatorial interpretation of open and dense sets is similar to that given for the binary case. For example, the age of a countable relational structure $M$ can be described by a tree: nodes at level $n$ are structures in the age which have point set $\{0, 1, \ldots, n-1\}$, and nodes $X_n, X_{n +1}$ at levels $n$ and $n +1$ are declared to be adjacent if the induced structure of $X_{n+1}$ on the set $\{0, 1, \ldots, n-1\}$ is $X_n$. A path in this tree uniquely describes a structure $N$ on the natural numbers which is younger than $M$, and conversely. 
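The encoding of countable graphs by binary sequences requires a fixed enumeration of the $2$-element vertex sets. Any fixed enumeration works; the sketch below (ours) uses colex order, $\{0,1\}, \{0,2\}, \{1,2\}, \{0,3\}, \ldots$, and reads a finite bit string as the characteristic function of an edge set:

```python
def pair_enumeration(num_vertices):
    """Fixed enumeration of 2-element vertex subsets, in colex order:
    (0,1), (0,2), (1,2), (0,3), (1,3), (2,3), ..."""
    return [(i, j) for j in range(num_vertices) for i in range(j)]

def decode(bits, num_vertices):
    """Read a binary sequence as the characteristic function of the
    edge set, relative to the enumeration above."""
    pairs = pair_enumeration(num_vertices)
    return {pairs[k] for k, b in enumerate(bits[:len(pairs)]) if b}

# a sequence of 10 bits determines a graph on 5 vertices
bits = [1, 0, 1, 1, 0, 0, 0, 1, 1, 0]
edges = decode(bits, 5)
print(sorted(edges))
```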
Now Fact~\ref{ch32:subsec2.7.1} generalizes as follows: \begin{proposition}\label{ch32:prop7.1} If $M$ is a countable homogeneous relational structure, then almost all countable structures younger than $M$ are isomorphic to $M$. \end{proposition} It is possible to formulate analogous concepts in the measure-theoretic framework, though with more difficulty. But the results are not so straightforward. For example, almost all finite triangle-free graphs are bipartite (a result of Erd\H{o}s, Kleitman and Rothschild \cite{ch32:bib17}); so the ``random countable triangle-free graph'' is almost surely bipartite. (In fact, it is almost surely isomorphic to the ``random countable bipartite graph'', obtained by taking two disjoint countable sets and selecting edges between them at random.) A structure which satisfies the conclusion of Proposition~\ref{ch32:prop7.1} is called \emph{ubiquitous} (or sometimes \emph{ubiquitous in category}, if we want to distinguish measure-theoretic or other forms of ubiquity). Thus the random graph is ubiquitous in both measure and category. See Bankston and Ruitenberg \cite{ch32:bib1} for further discussion. \section{The automorphism group}\label{ch32:sec2.8} \subsection{General properties} From the homogeneity of $R$ (Proposition~\ref{ch32:prop5.1}), we see that it has a large and rich group of automorphisms: the automorphism group $G = \Aut(R)$ acts transitively on the vertices, edges, non-edges, etc. --- indeed, on finite configurations of any given isomorphism type. In the language of permutation groups, it is a rank $3$ permutation group on the vertex set, since it has three orbits on ordered pairs of vertices, viz., equal, adjacent and non-adjacent pairs. Much more is known about $G$; this section will be the longest so far. First, the cardinality: \begin{proposition}\label{ch32:prop8.1} $|\Aut(R)| = 2^{\aleph_0}$. \end{proposition} This is a special case of a more general fact.
The automorphism group of any countable first-order structure is either at most countable or of cardinality $2^{\aleph_0}$, the first alternative holding if and only if the stabilizer of some finite tuple of points is the identity. The normal subgroup structure was settled by Truss \cite{ch32:bib51}: \begin{theorem}\label{ch32:them8.1} $\Aut(R)$ is simple. \end{theorem} Truss proved a stronger result: if $g$ and $h$ are two non-identity elements of $\Aut(R)$, then $h$ can be expressed as a product of five conjugates of $g$ or $g^{-1}$. (This clearly implies simplicity.) Recently Macpherson and Tent \cite{ch32:new11} gave a different proof of simplicity which applies in more general situations. Truss also described the cycle structures of all elements of $\Aut(R)$. A countable structure $M$ is said to have the \emph{small index property} if any subgroup of $\Aut(M)$ with index less than $2^{\aleph_0}$ contains the pointwise stabilizer of a finite set of points of $M$; it has the \emph{strong small index property} if any such subgroup lies between the pointwise and setwise stabilizer of a finite set. Hodges {\it et al.} \cite{ch32:bib29} and Cameron \cite{ch32:new4} showed: \begin{theorem}\label{ch32:them8.2} $R$ has the strong small index property. \end{theorem} The significance of this appears in the next subsection. It is also related to the question of the reconstruction of a structure from its automorphism group. For example, Theorem~\ref{ch32:them8.2} has the following consequence: \begin{corollary}\label{ch32:coro8.1} Let $\Gamma$ be a graph with fewer than $2^{\aleph_0}$ vertices, on which $\Aut(R)$ acts transitively on vertices, edges and non-edges. Then $\Gamma$ is isomorphic to $R$ (and the isomorphism respects the action of $\Aut(R)$). \end{corollary} \subsection{Topology} The symmetric group $\Sym(X)$ on an infinite set $X$ has a natural topology, in which a neighbourhood basis of the identity is given by the pointwise stabilizers of finite tuples.
In the case where $X$ is countable, this topology is derived from a complete metric, as follows. Take $X=\mathbb{N}$. Let $m(g)$ be the smallest point moved by the permutation $g$. Take the distance between the identity and $g$ to be $\max\{2^{-m(g)},2^{-m(g^{-1})}\}$. Finally, the metric is translation-invariant, so that $d(f,g)=d(fg^{-1},1)$. \begin{proposition} Let $G$ be a subgroup of the symmetric group on a countable set $X$. Then the following are equivalent: \begin{itemize} \item[(a)] $G$ is closed in $\Sym(X)$; \item[(b)] $G$ is the automorphism group of a first-order structure on $X$; \item[(c)] $G$ is the automorphism group of a homogeneous relational structure on $X$. \end{itemize} \end{proposition} So automorphism groups of homogeneous relational structures such as $R$ are themselves topological groups whose topology is derived from a complete metric. In particular, the Baire category theorem applies to groups like $\Aut(R)$. So we can ask: is there a ``typical'' automorphism? Truss \cite{ch32:bib53} showed the following result. \begin{theorem}\label{ch32:them9.2} There is a conjugacy class which is residual in $\Aut(R)$. Its members have infinitely many cycles of each finite length, and no infinite cycles. \end{theorem} Members of the residual conjugacy class (which is, of course, unique) are called \emph{generic automorphisms} of $R$. I outline the argument. Each of the following sets of automorphisms is residual: \begin{itemize} \item[(a)] those with no infinite cycles; \item[(b)] those automorphisms $g$ with the property that, if $\Gamma$ is any finite graph and $f$ any isomorphism between subgraphs of $\Gamma$, then there is an embedding of $\Gamma$ into $R$ in such a way that $g$ extends $f$. \end{itemize} (Here (a) holds because the set of automorphisms for which the first $n$ points lie in finite cycles is open and dense.) 
In fact, (b) can be strengthened; we can require that, if the pair $(\Gamma, f)$ extends the pair $(\Gamma_0, f_0)$ (in the obvious sense), then any embedding of $\Gamma_0$ into $R$ such that $g$ extends $f_0$ can be extended to an embedding of $\Gamma$ such that $g$ extends $f$. Then a residual set of automorphisms satisfies both (a) and the strengthened (b); this is the required conjugacy class. Another way of expressing this result is to consider the class $\mathcal{C}$ of finite structures each of which is a graph $\Gamma$ with an isomorphism $f$ between two induced subgraphs (regarded as a binary relation). This class satisfies Fra\"{i}ss\'{e}'s hypotheses, and so has a Fra\"{i}ss\'{e} limit $M$. It is not hard to show that, as a graph, $M$ is the random graph $R$; arguing as above, the map $f$ can be shown to be a (generic) automorphism of $R$. More generally, Hodges \textit{et al.} \cite{ch32:bib29} showed that there exist ``generic $n$-tuples'' of automorphisms of $R$, and used this to prove the small index property for $R$; see also Hrushovski \cite{ch32:bib30}. The group generated by a generic $n$-tuple of automorphisms is, not surprisingly, a free group; all its orbits are finite. In the next subsection, we turn to some very different subgroups. To conclude this section, we revisit the strong small index property. Recall that a neighbourhood basis for the identity consists of the pointwise stabilisers of finite sets. If the strong small index property holds, then every subgroup of small index (less than $2^{\aleph_0}$) contains one of these, and so is open. So we can take the subgroups of small index as a neighbourhood basis of the identity. Thus we have the following reconstruction result: \begin{proposition} \label{ch32:reconst} If $M$ is a countable structure with the strong small index property (for example, $R$), then the structure of $\Aut(M)$ as topological group is determined by its abstract group structure.
\end{proposition} \subsection{Subgroups} Another field of study concerns small subgroups. To introduce this, we reinterpret the last construction of $R$ in Section~\ref{ch32:sec2.2}. Recall that we took a universal set $S \subseteq \mathbb{N}$, and showed that the graph $\Gamma(S)$ with vertex set $\mathbb{Z}$, in which $x$ and $y$ are adjacent whenever $|x- y|\in S$, is isomorphic to $R$. Now this graph admits the ``shift'' automorphism $x \mapsto x + 1$, which permutes the vertices in a single cycle. Conversely, let $g$ be a cyclic automorphism of $R$. We can index the vertices of $R$ by integers so that $g$ is the map $x \mapsto x + 1$. Then, if $S = \{n \in \mathbb{N}: n \sim 0\}$, we see that $x \sim y$ if and only if $|x- y|\in S$, and that $S$ is universal. A short calculation shows that two cyclic automorphisms are conjugate in $\Aut(R)$ if and only if they give rise to the same set $S$. Since there are $2^{\aleph_0}$ universal sets, we conclude: \begin{proposition}\label{ch32:prop8.2} $R$ has $2^{\aleph_0}$ non-conjugate cyclic automorphisms. \end{proposition} (Note that this gives another proof of Proposition~\ref{ch32:prop8.1}.) Almost all subsets of $\mathbb{N}$ are universal --- this is true in either sense discussed in Section~\ref{ch32:sec2.7}. The construction preceding Proposition~\ref{ch32:prop8.2} shows that graphs admitting a given cyclic automorphism correspond to subsets of $\mathbb{N}$; so almost all ``cyclic graphs'' are isomorphic to $R$. What if the cyclic permutation is replaced by an arbitrary permutation or permutation group? The general answer is unknown: \begin{conjecture}\label{ch32:conj8.1} Given a permutation group $G$ on a countable set, the following are equivalent: \begin{itemize} \item[(a)] some $G$-invariant graph is isomorphic to $R$; \item[(b)] a random $G$-invariant graph is isomorphic to $R$ with positive probability. 
\end{itemize} \end{conjecture} A random $G$-invariant graph is obtained by listing the orbits of $G$ on the $2$-subsets of the vertex set, and deciding randomly whether the pairs in each orbit are edges or not. We cannot replace ``positive probability'' by ``probability $1$'' here. For example, consider a permutation with one fixed point $x$ and two infinite cycles. With probability $\frac{1}{2}$, $x$ is joined to all or none of the other vertices; if this occurs, the graph is not isomorphic to $R$. However, almost all graphs for which this event does not occur are isomorphic to $R$. It can be shown that the conjecture is true for the group generated by a single permutation; and Truss' list of cycle structures of automorphisms can be re-derived in this way. Another interesting class consists of the \emph{regular} permutation groups. A group is \emph{regular} if it is transitive and the stabilizer of a point is the identity. Such a group $G$ can be considered to act on itself by right multiplication. Then any $G$-invariant graph is a \emph{Cayley graph} for $G$; in other words, there is a subset $S$ of $G$, closed under inverses and not containing the identity, so that $x$ and $y$ are adjacent if and only if $xy^{-1}\in S$. Now we can choose a \emph{random Cayley graph} for $G$ by putting inverse pairs into $S$ with probability $\frac{1}{2}$. It is not true that, for every countable group $G$, a random Cayley graph for $G$ is almost surely isomorphic to $R$. Necessary and sufficient conditions can be given; they are somewhat untidy. I will state here a fairly general sufficient condition. A \emph{square-root set} in $G$ is a set \[ \sqrt{a} = \{x \in G : x^2=a\}; \] it is \emph{principal} if $a = 1$, and \emph{non-principal} otherwise. \begin{proposition}\label{ch32:prop8.3} Suppose that the countable group $G$ cannot be expressed as the union of finitely many translates of non-principal square-root sets and a finite set. 
Then almost all Cayley graphs for $G$ are isomorphic to $R$. \end{proposition} This proposition is true in the sense of Baire category as well. In the infinite cyclic group, a square-root set has cardinality at most $1$; so the earlier result about cyclic automorphisms follows. See Cameron and Johnson \cite{ch32:bib9} for further details. \subsection{Overgroups} There are a number of interesting overgroups of $\Aut(R)$ in the symmetric group on the vertex set $X$ of $R$. Pride of place goes to the \emph{reducts}, the overgroups which are closed in the topology on $\Sym(X)$ (that is, which are automorphism groups of relational structures which can be defined from $R$ without parameters). These were classified by Simon Thomas \cite{ch32:bib50}. An \emph{anti-automorphism} of $R$ is an isomorphism from $R$ to its complement; a \emph{switching automorphism} maps $R$ to a graph equivalent to $R$ by switching. The concept of a \emph{switching anti-automorphism} should be clear. \begin{theorem}\label{ch32:them8.3} There are exactly five reducts of $R$, viz.: $A = \Aut(R)$; the group $D$ of automorphisms and anti-automorphisms of $R$; the group $S$ of switching automorphisms of $R$; the group $B$ of switching automorphisms and anti-automorphisms of $R$; and the symmetric group. \end{theorem} \begin{remark}\label{ch32:rema8.1}\rm The set of all graphs on a given vertex set is a $\mathbb{Z}_2$-vector space, where the sum of two graphs is obtained by taking the symmetric difference of their edge sets. Now complementation corresponds to adding the complete graph, and switching to adding a complete bipartite graph. Thus, it follows from Theorem~\ref{ch32:them8.3} that, if $G$ is a closed supergroup of $\Aut(R)$, then the set of all images of $R$ under $G$ is contained in a coset of a subspace $W(G)$ of this vector space. (For example, $W(B)$ consists of all complete bipartite graphs and all unions of at most two complete graphs.) 
Moreover, these subspaces are invariant under the symmetric group. It is remarkable that the combinatorial proof leads to this algebraic conclusion. \end{remark} Here is an application due to Cameron and Martins \cite{ch32:bib10}, which draws together several threads from earlier sections. Though it is a result about finite random graphs, the graph $R$ is inextricably involved in the proof. Let $\mathcal{F}$ be a finite collection of finite graphs. For any graph $\Gamma$, let $\mathcal{F}(\Gamma)$ be the hypergraph whose vertices are those of $\Gamma$, and whose edges are the subsets which induce graphs in $\mathcal{F}$. To what extent does $\mathcal{F}(\Gamma)$ determine $\Gamma$? \begin{theorem}\label{ch32:them8.4} Given $\mathcal{F}$, one of the following possibilities holds for almost all finite random graphs $\Gamma$: \begin{itemize} \item[(a)] $\mathcal{F}(\Gamma)$ determines $\Gamma$ uniquely; \item[(b)] $\mathcal{F}(\Gamma)$ determines $\Gamma$ up to complementation; \item[(c)] $\mathcal{F}(\Gamma)$ determines $\Gamma$ up to switching; \item[(d)] $\mathcal{F}(\Gamma)$ determines $\Gamma$ up to switching and/or complementation; \item[(e)] $\mathcal{F}(\Gamma)$ determines only the number of vertices of $\Gamma$. \end{itemize} \end{theorem} I sketch the proof in the first case, that in which $\mathcal{F}$ is not closed under either complementation or switching. We distinguish two first-order languages, that of graphs and that of hypergraphs (with relations of the arities appropriate for the graphs in $\mathcal{F}$). Any sentence in the hypergraph language can be ``translated'' into the graph language, by replacing ``$E$ is an edge'' by ``the induced subgraph on $E$ is one of the graphs in $\mathcal{F}$''. By the case assumption and Theorem~\ref{ch32:them8.3}, we have $\Aut(\mathcal{F}(R)) = \Aut(R)$. 
Now by Theorem~\ref{ch32:them5.2}, the edges and non-edges in $R$ are $2$-types in $\mathcal{F}(R)$, so there is a formula $\phi(x, y)$ (in the hypergraph language) such that $x \sim y$ in $R$ if and only if $\phi(x, y)$ holds in $\mathcal{F}(R)$. If $\phi^{\ast}$ is the ``translation'' of $\phi$, then $R$ satisfies the sentence \[ (\forall x, y)((x \sim y) \leftrightarrow \phi^{\ast}(x, y)). \] By Theorem~\ref{ch32:them6.1}, this sentence holds in almost all finite graphs. Thus, in almost all finite graphs $\Gamma$, vertices $x$ and $y$ are joined if and only if $\phi(x, y)$ holds in $\mathcal{F}(\Gamma)$. So $\mathcal{F}(\Gamma)$ determines $\Gamma$ uniquely. By Theorem~\ref{ch32:them8.3}, $\Aut(\mathcal{F}(R))$ must be one of the five possibilities listed; in each case, an argument like the one just given shows that the appropriate conclusion holds. \medskip There are many interesting overgroups of $\Aut(R)$ which are not closed, some of which are surveyed (and their inclusions determined) in a forthcoming paper of Cameron \emph{et al.} \cite{ch32:new5}. These arise in one of two ways. First, we can take automorphism groups of non-relational structures, such as hypergraphs with infinite hyperedges (for example, take the hyperedges to be the subsets of the vertex set which induce subgraphs isomorphic to $R$), or topologies or filters (discussed in the next section). Second, we may weaken the notion of automorphism. For example, we have a chain of subgroups \[ \Aut(R)<\Aut_1(R)<\Aut_2(R)<\Aut_3(R)<\Sym(V(R)) \] with all inclusions proper, where \begin{itemize} \item $\Aut_1(R)$ is the set of permutations which change only finitely many adjacencies (such permutations are called \emph{almost automorphisms} of $R$); \item $\Aut_2(R)$ is the set of permutations which change only finitely many adjacencies at any vertex of $R$; \item $\Aut_3(R)$ is the set of permutations which change only finitely many adjacencies at all but finitely many vertices of $R$.
\end{itemize} All these groups are \emph{highly transitive}, that is, given any two $n$-tuples $(v_1,\ldots,v_n)$ and $(w_1,\ldots,w_n)$ of distinct vertices, there is an element of the relevant group carrying the first tuple to the second. It suffices to prove this for $\Aut_1(R)$, where it follows from the indestructibility of $R$. If $R_1$ and $R_2$ are the graphs obtained by deleting all edges within $\{v_1,\ldots,v_n\}$ and within $\{w_1,\ldots,w_n\}$ respectively, then $R_1$ and $R_2$ are both isomorphic to $R$. By homogeneity of $R$, there is an isomorphism from $R_1$ to $R_2$ mapping $(v_1,\ldots,v_n)$ to $(w_1,\ldots,w_n)$; clearly this map is an almost-automorphism of $R$. Indeed, any overgroup of $R$ which is not a reduct preserves no non-trivial relational structure, and so must be highly transitive. \section{Topological aspects}\label{ch32:sec2.9} There is a natural way to define a topology on the vertex set of $R$: we take as a basis for the open sets the set of all finite intersections of vertex neighbourhoods. It can be shown that this topology is homeomorphic to $\mathbb{Q}$ (using the characterization of $\mathbb{Q}$ as the unique countable, totally disconnected, topological space without isolated points, due to Sierpi\'{n}ski \cite{ch32:bib48}, see also Neumann \cite{ch32:bib38}). Thus: \begin{proposition}\label{ch32:prop9.1} $\Aut(R)$ is a subgroup of the homeomorphism group of $\mathbb{Q}$. \end{proposition} This is related to a theorem of Mekler \cite{ch32:bib36}: \begin{theorem}\label{ch32:them9.1} A countable permutation group $G$ is embeddable in the homeomorphism group of $\mathbb{Q}$ if and only if the intersection of the supports of any finite number of elements of $G$ is empty or infinite. \end{theorem} Here, the support of a permutation is the set of points it doesn't fix. Now of course $\Aut(R)$ is not countable; yet it does satisfy Mekler's condition.
(If $x$ is moved by each of the automorphisms $g_1,\ldots,g_n$, then the infinitely many vertices joined to $x$ but to none of $xg_1,\ldots, xg_n$ are also moved by these permutations.) The embedding in Proposition \ref{ch32:prop9.1} can be realised constructively: the topology can be defined directly from the graph. Take a basis for the open sets to be the sets of witnesses for our defining property $(*)$; that is, sets of the form \[Z(U,V)=\{z\in V(R): (\forall u\in U)(z\sim u) \wedge (\forall v\in V)(z \not \sim v)\}\] for finite disjoint sets $U$ and $V$. Now given $u\ne v$, there is a point $z\in Z(\{u\},\{v\})$; so the open neighbourhood of $z$ is open and closed in the topology and contains $u$ but not $v$. So the topology is totally disconnected. It has no isolated points, so it is homeomorphic to $\mathbb{Q}$, by Sierpi\'nski's Theorem. There is another interesting topology on the vertex set of $R$, which can be defined in three different ways. Let $B$ be the ``random bipartite graph'', the graph with vertex set $X\cup Y$ where $X$ and $Y$ are countable and disjoint, where edges between $X$ and $Y$ are chosen randomly. (A simple modification of the Erd\H{o}s--R\'enyi argument shows that there is a unique graph which occurs with probability $1$.) Now consider the following topologies on a countable set $X$: \begin{description} \item{$\mathcal{T}$:} point set $V(R)$, sub-basic open sets are open vertex neighbourhoods. \item{$\mathcal{T}^*$:} point set $V(R)$, sub-basic open sets are closed vertex neighbourhoods. \item{$\mathcal{T}^\dag$:} points are one bipartite block in $B$, sub-basic open sets are neighbourhoods of vertices in the other bipartite block. \end{description} \begin{proposition} \begin{itemize} \item[(a)] The three topologies defined above are all homeomorphic. \item[(b)] The homeomorphism groups of these topologies are highly transitive. \end{itemize} \end{proposition} Note that the topologies are homeomorphic but not identical.
For example, the identity map is a continuous bijection from $\mathcal{T}^*$ to $\mathcal{T}$, but is not a homeomorphism. \section{Some other structures}\label{ch32:sec2.10} \subsection{General results} As we have seen, $R$ has several properties of a general kind: for example, homogeneity, $\aleph_0$-categoricity, universality, ubiquity. Much effort has gone into studying, and if possible characterizing, structures of other kinds with these properties. (For example, they are all shared by the ordered set $\mathbb{Q}$.) Note that, of the four properties listed, the first two each imply the third, and the first implies the fourth. Moreover, a homogeneous structure over a finite relational language is $\aleph_0$-categorical, since there are only finitely many isomorphism types of $n$-element structure for each $n$. Thus, homogeneity is in practice the strongest condition, most likely to lead to characterizations. A major result of Lachlan and Woodrow \cite{ch32:bib35} determines the countable homogeneous graphs. The graphs $H_n$ in this theorem are so-called because they were first constructed by Henson \cite{ch32:bib26}. \begin{theorem}\label{ch32:them10.1} A countable homogeneous graph is isomorphic to one of the following: \begin{itemize} \item[(a)] the disjoint union of $m$ complete graphs of size $n$, where $m, n \leq \aleph_0$ and at least one of $m$ and $n$ is $\aleph_0$; \item[(b)] complements of (a); \item[(c)] the Fra\"{i}ss\'{e} limit $H_n$ of the class of $K_n$-free graphs, for fixed $n \geq 3$; \item[(d)] complements of (c); \item[(e)] the random graph $R$. \end{itemize} \end{theorem} The result of Macpherson and Tent \cite{ch32:new11} shows that the automorphism groups of the Henson graphs are simple. It follows from Proposition \ref{ch32:reconst} that $\Aut(R)$ is not isomorphic to $\Aut(H_n)$. It is not known whether these groups are pairwise non-isomorphic.
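Homogeneity can be tested exhaustively for small finite graphs. The brute-force sketch below (ours; feasible only for tiny graphs) confirms that two disjoint triangles, a finite analogue of case (a) of the Lachlan--Woodrow theorem, are homogeneous, while the path on $3$ vertices is not (no automorphism maps an endpoint to the middle vertex).

```python
from itertools import permutations, combinations

def is_homogeneous(n, edges):
    """Brute-force test: every isomorphism between induced subgraphs
    extends to an automorphism of the whole graph."""
    E = {frozenset(e) for e in edges}
    adj = lambda u, v: frozenset((u, v)) in E
    auts = [p for p in permutations(range(n))
            if all(adj(p[u], p[v]) == adj(u, v)
                   for u, v in combinations(range(n), 2))]
    for k in range(1, n + 1):
        for A in combinations(range(n), k):
            for B in combinations(range(n), k):
                for img in permutations(B):
                    f = dict(zip(A, img))
                    if all(adj(f[u], f[v]) == adj(u, v)
                           for u, v in combinations(A, 2)):
                        # f is an induced-subgraph isomorphism;
                        # some automorphism must extend it
                        if not any(all(p[a] == f[a] for a in A)
                                   for p in auts):
                            return False
    return True

two_K3 = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]
path3 = [(0, 1), (1, 2)]
print(is_homogeneous(6, two_K3), is_homogeneous(3, path3))  # True False
```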
Other classes in which the homogeneous structures have been determined include finite graphs (Gardiner \cite{ch32:bib23}), tournaments (Lachlan \cite{ch32:bib34} --- surprisingly, there are just three), digraphs (Cherlin \cite{ch32:bib12}; there are uncountably many, see Henson \cite{ch32:bib27}), and posets (Schmerl \cite{ch32:bib45}). In the case of posets, Droste \cite{ch32:bib14} has characterizations under weaker assumptions. For a number of structures, properties of the automorphism group, such as normal subgroups, small index property, or existence of generic automorphisms, have been established. A theorem of Cameron \cite{ch32:bib4} determines the reducts of $\Aut(\mathbb{Q})$: \begin{theorem}\label{ch32:them10.2} There are just five closed permutation groups containing the group $\Aut(\mathbb{Q})$ of order-preserving permutations of $\mathbb{Q}$, viz.: $\Aut(\mathbb{Q})$; the group of order-preserving or order-reversing permutations; the group of permutations preserving a cyclic order; the group of permutations preserving or reversing a cyclic order; and $\Sym(\mathbb{Q})$. \end{theorem} However, there is no analogue of Theorem~\ref{ch32:them8.4} in this case, since there is no Glebskii--Blass--Fagin--Harary theory for ordered sets. ($\mathbb{Q}$ is dense; this is a first-order property, but no finite ordered set is dense.) Simon Thomas \cite{ch32:new15} has determined the reducts of the random $k$-uniform hypergraph for all $k$. Since my paper with Paul Erd\H{o}s concerns sum-free sets (Cameron and Erd\H{o}s \cite{ch32:bib8}), it is appropriate to discuss their relevance here. Let $H_n$ be the Fra\"{i}ss\'{e} limit of the class of $K_n$-free graphs, for $n \geq 3$ (see Theorem~\ref{ch32:them10.1}). These graphs were first constructed by Henson \cite{ch32:bib26}, who also showed that $H_3$ admits cyclic automorphisms but $H_n$ does not for $n > 3$.
We have seen how a subset $S$ of $\mathbb{N}$ gives rise to a graph $\Gamma(S)$ admitting a cyclic automorphism: the vertex set is $\mathbb{Z}$, and $x \sim y$ if and only if $|x - y|\in S$. Now $\Gamma(S)$ is triangle-free if and only if $S$ is \emph{sum-free} (i.e., $x,y \in S \Rightarrow x + y \notin S$). It can be shown that, for almost all sum-free sets $S$ (in the sense of Baire category), the graph $\Gamma(S)$ is isomorphic to $H_3$; so $H_3$ has $2^{\aleph_0}$ non-conjugate cyclic automorphisms. However, the analogue of this statement for measure is false; and, indeed, random sum-free sets have a rich and surprising structure which is not well understood (Cameron \cite{ch32:bib5}). For example, the probability that $\Gamma(S)$ is bipartite is approximately $0.218$. It is conjectured that a random sum-free set $S$ almost never satisfies $\Gamma(S)\cong H_3$. In this direction, Schoen \cite{ch32:new14} has shown that, if $\Gamma(S)\cong H_3$, then $S$ has density zero. The Henson $K_n$-free graphs $H_n$, being homogeneous, are ubiquitous in the sense of Baire category: for example, the set of graphs isomorphic to $H_3$ is residual in the set of triangle-free graphs on a given countable vertex set (so $H_3$ is ubiquitous, in the sense defined earlier). However, until recently, no measure-theoretic analogue was known. We saw after Proposition \ref{ch32:prop7.1} that a random triangle-free graph is almost surely bipartite! However, Petrov and Vershik \cite{ch32:new13} recently managed to construct an exchangeable measure on graphs on a given countable vertex set which is concentrated on Henson's graph. More recently, Ackerman, Freer and Patel showed that the construction works much more generally: the necessary and sufficient condition turns out to be the strong amalgamation property, which we discussed in Section \ref{ch32:sec2.5}. 
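The correspondence between sum-free sets and triangle-free graphs $\Gamma(S)$ can be illustrated on a finite window of the vertex set $\mathbb{Z}$. The sketch below (ours) uses the set of odd numbers, which is sum-free; the resulting $\Gamma(S)$ is complete bipartite between evens and odds, matching the remark about bipartite graphs above.

```python
from itertools import combinations

def gamma(S, lo, hi):
    """Finite window of Gamma(S): vertices lo..hi-1, x ~ y iff |x-y| in S."""
    return [(x, y) for x, y in combinations(range(lo, hi), 2)
            if abs(x - y) in S]

def is_sum_free(S):
    """x, y in S implies x + y not in S."""
    return all(x + y not in S for x in S for y in S)

def has_triangle(edges):
    E = {frozenset(e) for e in edges}
    V = sorted({v for e in edges for v in e})
    return any(frozenset((a, b)) in E and frozenset((b, c)) in E
               and frozenset((a, c)) in E
               for a, b, c in combinations(V, 3))

odds = {s for s in range(1, 50) if s % 2 == 1}  # a sum-free set
assert is_sum_free(odds)
edges = gamma(odds, 0, 20)
print(has_triangle(edges))  # False: sum-free S gives triangle-free Gamma(S)
```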
Universality of a structure $M$ was defined in a somewhat introverted way in Section~\ref{ch32:sec2.5}: $M$ is universal if every structure younger than $M$ is embeddable in $M$. A more general definition would start with a class $\mathcal{C}$ of structures, and say that $M \in \mathcal{C}$ is \emph{universal} for $\mathcal{C}$ if every member of $\mathcal{C}$ embeds into $M$. For a survey on this sort of universality, for various classes of graphs, see Komj\'ath and Pach \cite{ch32:bib33}. Two negative results, for the classes of locally finite graphs and of planar graphs, are due to De Bruijn (see Rado \cite{ch32:bib42}) and Pach \cite{ch32:bib40} respectively. \subsection{The Urysohn space} A remarkable example of a homogeneous structure is the celebrated Urysohn space, whose construction predates Fra\"{\i}ss\'e's work by more than two decades. Urysohn's paper \cite{ch32:new16} was published posthumously, following his drowning in the Bay of Biscay at the age of 26 on his first visit to western Europe (one of the most romantic stories in mathematics). An exposition of the Urysohn space is given by Vershik \cite{ch32:new17}. The Urysohn space is a complete separable metric space $\mathbb{U}$ which is universal (every finite metric space is isometrically embeddable in $\mathbb{U}$) and homogeneous (any isometry between finite subsets can be extended to an isometry of the whole space). Since $\mathbb{U}$ is uncountable, it is not strictly covered by the Fra\"{\i}ss\'e theory, but one can proceed as follows. The set of finite \emph{rational} metric spaces (those with all distances rational) is a Fra\"{\i}ss\'e class; the restriction to rational distances ensures that there are only countably many non-isomorphic members. Its Fra\"{\i}ss\'e limit is the so-called \emph{rational Urysohn space} $\mathbb{U}_Q$. Now the Urysohn space is the completion of $\mathbb{U}_Q$.
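The amalgamation property behind the Fra\"{\i}ss\'e class of finite rational metric spaces can be realized concretely by shortest-path completion: glue the two spaces along their common part and let the new cross-distances be shortest paths. A small Python sketch (the particular spaces are illustrative assumptions):

```python
from fractions import Fraction as F
from itertools import product

def amalgamate(d1, d2):
    """Amalgamate two finite rational metric spaces, given as dicts
    {(p, q): distance}, agreeing on common points, via Floyd-Warshall."""
    pts = sorted({p for key in list(d1) + list(d2) for p in key})
    BIG = F(10**9)                      # stands in for "no distance known yet"
    d = {(p, q): (F(0) if p == q else BIG) for p in pts for q in pts}
    for piece in (d1, d2):
        for (p, q), v in piece.items():
            d[p, q] = d[q, p] = min(d[p, q], v)
    for k, i, j in product(pts, repeat=3):   # k varies slowest: standard Floyd-Warshall
        d[i, j] = min(d[i, j], d[i, k] + d[k, j])
    return d

# two spaces sharing only the point 'a'
d = amalgamate({('a', 'x'): F(1)}, {('a', 'y'): F(1, 2)})
assert d['x', 'y'] == F(3, 2)          # the new distance goes through the shared point
assert d['a', 'x'] == F(1)             # original distances are preserved
pts = ['a', 'x', 'y']
assert all(d[i, j] <= d[i, k] + d[k, j]     # triangle inequality throughout
           for i in pts for j in pts for k in pts)
```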
Other interesting homogeneous metric spaces can be constructed similarly, by restricting the values of the metric in the finite spaces. For example, we can take integral distances, and obtain the \emph{integral Urysohn space} $\mathbb{U}_Z$. We can also take distances from the set $\{0,1,2,\ldots,k\}$ and obtain a countable homogeneous metric space with these distances. For $k=2$, we obtain precisely the path metric of the random graph $R$. (Property $(\ast)$ guarantees that, given two points at distance $2$, there is a point at distance $1$ from both; so, defining two points to be adjacent if they are at distance $1$, we obtain a graph whose path metric is the given metric. It is easily seen that this graph is isomorphic to $R$.) Note that $R$ occurs in many different ways as a reduct of $\mathbb{U}_Q$. Split the positive rationals into two dense subsets $A$ and $B$, and let two points $v,w$ be adjacent if $d(v,w)\in A$; the graph we obtain is $R$. A study of the isometry group of the Urysohn space, similar to that done for $R$, was given by Cameron and Vershik \cite{ch32:new7}. The automorphism group is not simple, since the isometries which move every point by a bounded distance form a non-trivial normal subgroup. \subsection{KPT theory} I conclude with a brief discussion of a dramatic development at the interface of homogeneous structures, Ramsey theory, and topological dynamics. The first intimation of such a connection was pointed out by Ne\v{s}et\v{r}il \cite{ch32:new10}, and in detail in Hubi\v{c}ka and Ne\v{s}et\v{r}il \cite{ch32:new9}. We use the notation $A\choose B$ for the set of all substructures of $A$ isomorphic to $B$. 
A class $\mathcal{C}$ of finite structures is a \emph{Ramsey class} if, given a natural number $r$ and a pair $A,B$ of structures in $\mathcal{C}$, there exists a structure $C\in\mathcal{C}$ such that, if $C\choose A$ is partitioned into $r$ classes, then there is an element $B'\in{C\choose B}$ for which $B'\choose A$ is contained in a single class. In other words, if we colour the $A$-substructures of $C$ with $r$ colours, then there is a $B$-substructure of $C$, all of whose $A$-substructures receive the same colour. The classical theorem of Ramsey asserts that the class of finite sets is a Ramsey class. \begin{theorem} A hereditary, isomorphism-closed Ramsey class is a Fra\"{\i}ss\'e class. \end{theorem} There are simple examples which show that a good theory of Ramsey classes can only be obtained by making the objects rigid. The simplest way to do this is to require that a total order is part of the structure. Note that, if a Fra\"{\i}ss\'e class has the strong amalgamation property, then we may adjoin to it a total order (independent of the rest of the structure) to obtain a new Fra\"{\i}ss\'e class. We refer to \emph{ordered structures} in this situation. Now the theorem above suggests a procedure for finding Ramsey classes: take a Fra\"{\i}ss\'e class of ordered structures and test the Ramsey property. A number of Ramsey classes, old and new, arise in this way: ordered graphs, ordered $K_n$-free graphs, ordered metric spaces, etc. Indeed, if we take an ordered set and ``order'' it as above to obtain a set with two orderings, we obtain the class of \emph{permutation patterns}, which is also a Ramsey class: see Cameron \cite{ch32:new2}, B\"ottcher and Foniok \cite{ch32:new4}. The third vertex of the triangle was quite unexpected. A \emph{flow} is a continuous action of a topological group $G$ on a topological space $X$, usually assumed to be a compact Hausdorff space.
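The Ramsey-class definition above can be sanity-checked in its classical instance: $A$ a $2$-set, $B$ a $3$-set, $r=2$ colours. A $6$-element $C$ works while a $5$-element one does not (this is $R(3,3)=6$); the brute-force Python check below runs over all $2$-colourings:

```python
from itertools import combinations, product

def ramsey_ok(n):
    """True if every 2-colouring of the 2-subsets of an n-set yields
    a 3-subset whose three 2-subsets all receive the same colour."""
    pairs = list(combinations(range(n), 2))
    for colouring in product((0, 1), repeat=len(pairs)):
        col = dict(zip(pairs, colouring))
        if not any(col[a, b] == col[a, c] == col[b, c]
                   for a, b, c in combinations(range(n), 3)):
            return False   # found a colouring with no monochromatic triple
    return True

assert not ramsey_ok(5)   # e.g. the pentagon colouring of K_5 has no mono triangle
assert ramsey_ok(6)       # every 2-colouring of the pairs of a 6-set has one
```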
A topological group $G$ admits a unique \emph{universal minimal flow}, a minimal continuous action on a compact space $X$. (Here \emph{minimal} means that $X$ has no non-empty proper closed $G$-invariant subspace, and \emph{universal} means that it can be mapped onto any minimal $G$-flow.) The group $G$ is said to be \emph{extremely amenable} if its universal minimal flow consists of a single point. The theorem of Kechris, Pestov and Todorcevic \cite{ch32:new10} asserts: \begin{theorem} Let $X$ be a countable set, and $G$ a closed subgroup of $\Sym(X)$. Then $G$ is extremely amenable if and only if it is the automorphism group of a homogeneous structure whose age is a Ramsey class of ordered structures. \end{theorem} As a simple example, the theorem shows that $\Aut(\mathbb{Q})$ (the group of order-preserving permutations of $\mathbb{Q}$) is extremely amenable (a result of Pestov). The fact that the two conditions are equivalent allows information to be transferred in both directions between combinatorics and topological dynamics. In particular, known Ramsey classes such as ordered graphs, ordered $K_n$-free graphs, ordered metric spaces, and permutation patterns give examples of extremely amenable groups. The theorem can also be used in determining the universal minimal flows for various closed subgroups of $\Sym(X)$. For example, the universal minimal flow for $\Sym(X)$ is the set of all total orderings of $X$ (a result of Glasner and Weiss \cite{ch32:new8}).
https://arxiv.org/abs/1008.1286
Subalgebras of Matrix Algebras Generated by Companion Matrices
Let $f,g\in \mathbb{Z}[X]$ be monic polynomials of degree $n$ and let $C,D\in M_n(\mathbb{Z})$ be the corresponding companion matrices. We find necessary and sufficient conditions for the subalgebra $\mathbb{Z}\langle C,D\rangle$ to be a sublattice of finite index in the full integral lattice $M_n(\mathbb{Z})$, in which case we compute the exact value of this index in terms of the resultant of $f$ and $g$. If $R$ is a commutative ring with identity we determine when $R\langle C,D\rangle=M_n(R)$, in which case a presentation for $M_n(R)$ in terms of $C$ and $D$ is given.
\section{Introduction} About twenty years ago a question of Chatters [C1] generated a series of articles concerned with the problem of identifying full matrix rings. We refer the reader to the papers [A], [AMR], [C2], [LRS], [R] cited in the bibliography for more details. In particular, very simple presentations of full matrix rings, involving just two generators, were obtained. In this paper we concentrate on the algebra generated by two matrices $A,B\in M_n(R)$, where $R$ is a commutative ring with identity and $n\geq 2$. Is it possible to find a presentation for $R\langle A,B\rangle$? If $A$ and $B$ happen not to generate $M_n(R)$, can we somehow measure the degree of this failure? Adopting a more precise and geometric viewpoint, we look at $M_n({\mathbb Z})$ as an integral lattice in $M_n({\mathbb R})$ and ask when the sublattice ${\mathbb Z}\langle A,B\rangle$ has maximal rank and, in that case, what its index in the full lattice $M_n({\mathbb Z})$ is. The answers to these questions depend on more specific information about $A$ and $B$. Focusing attention on two companion matrices $C,D\in M_n(R)$ of monic polynomials $f,g\in R[X]$ of degree $n$, section \ref{gen} gives necessary and sufficient conditions for $C$ and $D$ to generate $M_n(R)$, while section \ref{presmn} determines how they do it. If $R$ is a unique factorization domain, section \ref{prescd} exhibits a presentation of $R\langle C,D\rangle$, proves it to be a free $R$-module, and computes its rank. In section \ref{indic} we find the exact index of ${\mathbb Z}\langle C,D\rangle$ in $M_n({\mathbb Z})$ and extend this result to other number rings. The index is obtained by means of a determinantal identity, found in section \ref{D}, which is of independent interest and valid under no restrictions on $R$. We will keep the above notation as well as the following. Let $R[X,Y]$ be the $R$-span of the monomials $X^i Y^j$ in $R\langle X,Y\rangle$, where $0\leq i,j$.
We have a natural map $R\langle X,Y\rangle\to M_n(R)$ sending $X$ to $A$ and $Y$ to $B$. Let $R[A,B]$ stand for the image of $R[X,Y]$ under this map. Since $A$ and $B$ are annihilated by their characteristic polynomials, we see that $R[A,B]$ is spanned by $A^iB^j$, where $0\leq i,j\leq n-1$. Clearly $R[A,B]\subseteq R\langle A,B\rangle$, with equality if and only if $R[A,B]$ is a subalgebra, which is definitely not always true. Perhaps surprisingly, section~\ref{cesd} proves that $R[C,D]=R\langle C, D\rangle$. A more detailed discussion of this is given in section \ref{pj}. The resultant of $f$ and $g$ will be denoted by $R(f,g)$. A fact used repeatedly below is that $R(f,g)$ is a unit if and only if $f$ and $g$ are relatively prime when reduced modulo every maximal ideal of~$R$. \section{A theorem of Burnside} For the record, we state here general conditions for a subset $S$ of $M_n(R)$ to generate $M_n(R)$ as an algebra. The field case follows from Burnside's Theorem (see $\S 27$ of [CR]); the general case then follows by localization. \begin{thm} \label{dense} Let $F$ be a field and let $S$ be a subset of $M_n(F)$. Then the subalgebra generated by $S$ is the full matrix algebra $M_n(F)$ if and only if the following conditions~hold: \begin{enumerate} \item[(C1)] The only matrices in $M_n(F)$ commuting with all matrices in $S$ are the scalar matrices. \item[(C2)] The only subspaces of the column space $V=F^n$ that are invariant under the action of all matrices in $S$ are 0 and $V$. \end{enumerate} \end{thm} \begin{note} In the above notation, if $F$ is a subfield of $K$ then $\mathrm{dim}_F F\langle S\rangle=\mathrm{dim}_K K\langle S\rangle$, so $F\langle S\rangle=M_n(F)$ if and only if $K\langle S\rangle=M_n(K)$. \end{note} \begin{thm} \label{local} For each maximal ideal ${\mathfrak m}$ of $R$, let $\Lambda_{\mathfrak m}: M_n(R)\to M_n(R/{\mathfrak m})$ be the ring epimorphism associated to the projection $R\to R/{\mathfrak m}$.
Let $S$ be a subset of $M_n(R)$. Then $$ R\langle S\rangle=M_n(R)\Leftrightarrow (R/{\mathfrak m})\langle \Lambda_{\mathfrak m}(S)\rangle=M_n(R/{\mathfrak m}) $$ for every maximal ideal ${\mathfrak m}$ of $R$. \end{thm} \noindent{\sl Proof.} One implication is obvious. For the other, suppose that $(R/{\mathfrak m}) \langle\Lambda_{\mathfrak m}(S)\rangle=M_n(R/{\mathfrak m})$ for all maximal ideals ${\mathfrak m}$ of $R$. This is equivalent to $\Lambda_{\mathfrak m}(R\langle S\rangle)=\Lambda_{\mathfrak m}(M_n(R))$, that is, $R\langle S\rangle+\mathrm{ ker\,} \Lambda_{\mathfrak m}=M_n(R)$, which means $R\langle S\rangle+{\mathfrak m}\, M_n(R)=M_n(R)$, that is, ${\mathfrak m} \big(M_n(R)/R\langle S\rangle\big) = M_n(R)/R\langle S\rangle$, for all maximal ideals ${\mathfrak m}$ of $R$. The quotient, say $U=M_n(R)/R\langle S\rangle$, in this last statement is an $R$-module. Now $M_n(R)$ is a finitely generated $R$-module, and hence so is $U$. We are thus faced with a finitely generated $R$-module, namely $U$, such that ${\mathfrak m} U=U$ for all maximal ideals ${\mathfrak m}$ of $R$. Localizing $R$ and $U$ at ${\mathfrak m}$ (see chapter 3 of [AM]), we obtain that ${\mathfrak M} U_{\mathfrak m}=U_{\mathfrak m}$, where ${\mathfrak M}$ is the maximal ideal of $R_{\mathfrak m}$. As $U_{\mathfrak m}$ is a finitely generated $R_{\mathfrak m}$-module, it now follows from Nakayama's Lemma that $U_{\mathfrak m}=0$ for all maximal ideals ${\mathfrak m}$ of $R$. As all localizations of $U$ at maximal ideals are zero, it follows from Proposition 3.8 of [AM] that $U$ itself is zero, that is, $R\langle S\rangle=M_n(R)$. \quad $\blacksquare$ \section{Matrices commuting with $C$ and $D$} We fix the following notation for the remainder of the paper: $e_1, \dots , e_n$ will stand for the canonical basis of the column space $R^n$ and $R_n[X]$ for the $R$-submodule of $R[X]$ with basis $1,X,\dots,X^{n-1}$.
If $p\in R_n[X]$ then $[p]$ stands for the coordinates of $p$ relative to this basis. Recall that $C$ is the companion matrix to $f=f_0+f_1X+\cdots+f_{n-1}X^{n-1}+X^n$, that is $$ C=\left(% \begin{array}{ccccc} 0 & 0 & \cdots & 0 & -f_0 \\ 1 & 0 & \cdots & 0 & -f_1 \\ 0 & 1 & \cdots & 0 & -f_2 \\ \vdots & \vdots & \cdots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -f_{n-1} \\ \end{array}% \right). $$ It is an easy exercise to verify that, as in the field case, the minimal polynomial of $C$ is $f$. Thus $I,C, \dots , C^{n-1}$ is an $R$-basis of $R[C]$. If $A\in R[C]$ we write $[A]$ for the coordinates of $A$ relative to this basis. The next result is borrowed from [GS]. \begin{lem} \label{coord} If $A\in R[C]$ then $A = ([A] \;C[A] \;\dots \;C^{n-1}[A])$. \end{lem} \noindent{\sl Proof.} We have $A = y_0I + y_1C + \cdots + y_{n-1}C^{n-1}$ with $y_j\in R$. Multiplying both sides by $e_1$ gives $Ae_1 = y_0e_1 + y_1e_2 + \cdots + y_{n-1}e_n = [A]$. If $2\le j\le n$ then $Ae_j = AC^{j-1}e_1= C^{j-1}Ae_1 = C^{j-1}[A]$. Thus the matrices in question have the same columns. $\quad \blacksquare$ \begin{lem} \label{columnspq} Let $p,q\in R_n[X]$. Then $p(C)[q]=q(C)[p]$. Also, $p(C)[q]=0\Leftrightarrow f|pq$. \end{lem} \noindent{\sl Proof.} The first column of $p(C)q(C)=q(C)p(C)$ equals both $p(C)q(C)e_1=p(C)[q]$ and $q(C)p(C)e_1=q(C)[p]$ by Lemma \ref{coord}. The remaining columns of $pq(C)$ are $p(C)q(C)e_i=p(C)C^i[q]=C^ip(C)[q]$, $1\leq i\leq n-1$, so $p(C)[q]=0 \Leftrightarrow pq(C)=0 \Leftrightarrow f|pq$. $\quad \blacksquare$ \begin{lem} \label{esescalar} Let $A\in R[C]$. Assume $A_{n,1}=\cdots= A_{n,n-1}=0$. Then $A$ is scalar. \end{lem} \smallskip \noindent {\sl Proof.} By Lemma \ref{coord} and hypothesis we have \begin{equation} \label{holaR} A=A_{1,1}I+A_{2,1}C+\cdots+A_{n-1,1}C^{n-2}. \end{equation} If $n=2$ we are done. 
Otherwise, applying both sides to $e_2$ gives $$Ae_2=A_{1,1}e_2+A_{2,1}e_3+\cdots+A_{n-1,1}e_n.$$ By hypothesis $e_n$ does not appear in the second column of $A$, namely $Ae_2$. Therefore $A_{n-1,1}=0$. Going back to (\ref{holaR}), eliminating $A_{n-1,1}C^{n-2}$, and repeating the argument with $e_3,\dots,e_{n-1}$ yields $A=A_{1,1}I$, as required. \quad $\blacksquare$ \begin{lem} \label{inter} Suppose that $R$ is an integral domain and that $f\neq g$. Then the only matrices in $M_n(R)$ that commute with $C$ and $D$ are the scalar matrices. \end{lem} \noindent{\sl Proof.} Suppose $A\in M_n(R)$ commutes with $C$ and $D$. Then $A$ commutes with $Z=C-D$. But the first $n-1$ columns of $Z$ are equal to zero and, by hypothesis, at least one entry of the last column of $Z$ is not zero. Applying these facts to the equation $AZ=ZA$ immediately gives $A_{n,1}=\cdots= A_{n,n-1}=0$. Thus $A$ is scalar by Lemma \ref{esescalar}. \quad $\blacksquare$ \section{Common invariant subspaces under companion matrices} The invariant subspaces under a single cyclic transformation are well-known and easily determined. We gather all relevant information below. \begin{lem} \label{subespacio} Let $F$ be a field and $V$ a vector space over $F$ of finite dimension $n$. Let $T:V\to V$ be a cyclic linear transformation with cyclic vector $v$ and minimal polynomial~$f$. Then the distinct $T$-invariant subspaces of $V$ are of the form $$ V(g)=V(g,T)=\{g(T)x\,|\, x\in V\}, $$ where $g$ runs through the monic factors of $f$. Moreover, $V(g)$ has dimension $n-\mathrm{deg}\,g$ and the $T$-conductor of $V$ into $V(g)$ is precisely $g$. \end{lem} \begin{lem} \label{coprimo} Let $F$ be a field. Let $f_1, \dots , f_m$ be monic polynomials in $F[X]$ of degree $n$. Then their companion matrices $C_{f_1}, \dots , C_{f_m}$ have a common invariant subspace different from 0 and $V=F^n$ if and only if $f_1, \dots , f_m$ have a common monic factor whose degree is strictly between 0 and $n$.
\end{lem} \noindent{\sl Proof.} Suppose $h$ is a common monic factor of all $f_1, \dots ,f_m$ of degree strictly between~0 and $n$. By Lemma \ref{subespacio}, if $1\leq i\leq m$ then $V(h,C_{f_i})$ is a $C_{f_i}$-invariant subspace of $V$ of dimension $d=n-\mathrm{deg}\,h$, which is strictly between 0 and $n$. Now a basis for $V(h,C_{f_i})$ is $$ h(C_{f_i})e_1, C_{f_i}h(C_{f_i})e_1, \dots , C_{f_i}^{d-1}h(C_{f_i})e_1, $$ which by Lemma \ref{coord} equals $$ [h(X)], [Xh(X)], \dots , [X^{d-1}h(X)]. $$ Therefore all these subspaces are equal to each other. Suppose conversely that $W$ is a subspace of $V$ different from 0 and $V$ and invariant under $C_{f_1}, \dots , C_{f_m}$. By Lemma \ref{subespacio} we have $W=V(h_i,C_{f_i})$, where $h_i$ is a monic factor of $f_i$ for each $i$. All $h_i$ have the same degree and this degree is strictly between 0 and $n$, also by Lemma \ref{subespacio}. We claim that the $h_i$ are all equal to $h_1$. Indeed if $i>1$ then $$[h_i]=h_i(C_{f_i})e_1\in V(h_i,C_{f_i})=W=V(h_1,C_{f_1}),$$ and therefore $$[h_i]=s(C_{f_1})h_1(C_{f_1})e_1$$ for some $s\in F[X]$ of degree less than $n-\mathrm{deg}\, h_1$. Hence by Lemma \ref{coord} $[h_i]=[sh_1]$ and therefore $h_i=sh_1$. But $h_i$ and $h_1$ are monic of the same degree, so $s=1$. \quad $\blacksquare$ \section{Generation of $M_n(R)$ by companion matrices} \label{gen} \begin{thm} Let $f_1, \dots , f_m$, $m\geq 2$, be monic polynomials in $R[X]$ of degree $n$ with companion matrices $C_{f_1}, \dots , C_{f_m}$. Then $R\langle C_{f_1},\dots, C_{f_m}\rangle=M_n(R)$ if and only if $f_1,\dots,f_m$ are relatively prime when reduced modulo every maximal ideal of $R$. \end{thm} \noindent{\sl Proof.} By Theorem \ref{local}, $R\langle C_{f_1},\dots, C_{f_m}\rangle=M_n(R)$ if and only if this equality is preserved when $f_1,\dots,f_m$ and $R$ are reduced modulo every maximal ideal.
But at the field level, generation is equivalent to the given polynomials being relatively prime, by Theorem \ref{dense} and Lemmas \ref{inter} and \ref{coprimo}. \quad $\blacksquare$ \begin{cor} \label{unidad} $R\langle C,D\rangle=M_n(R)$ if and only if $R(f,g)$ is a unit. \end{cor} \noindent{\bf Remark.} This does not generalize to arbitrary matrices. Indeed, if $F$ is a field then two Jordan blocks in $M_n(F)$ with distinct eigenvalues have relatively prime minimal polynomials, yet they share a common eigenvector, so they cannot generate the full matrix algebra. \section{The identity $R[C,D]=R\langle C,D\rangle=R[D,C]$} \label{cesd} \begin{lem} \label{tecnico} Let $R\langle A,B\rangle$ be an $R$-algebra, where $B$ is integral over $R$ of degree at most $n$. Then the following three statements are equivalent: \begin{enumerate} \item[(a)] $B^jA\in R[A,B]$ for all $1\leq j\leq n-1$. \item[(b)] $R[A,B]=R\langle A,B\rangle$. \item[(c)] $(A-B)B^j(A-B)\in R[A,B]$ for all $0\leq j\leq n-2$. \end{enumerate} \end{lem} \noindent{\sl Proof.} As $B$ is integral over $R$ of degree at most $n$, condition (a) ensures that $R[A,B]$ is invariant under right multiplication by $A$, which easily implies (b). On the other hand, it is clear that (b) implies (c). Suppose finally that (c) holds. We wish to prove that $BA,B^2A,\dots,B^{n-1}A$ are in $R[A,B]$. We show this by induction. Clearly $A\in R[A,B]$. Suppose $0<j\leq n-1$ and $B^{j-1}A\in R[A,B]$. By (c) $$ B^{j+1}-B^jA-AB^j+AB^{j-1}A=(A-B)B^{j-1}(A-B)\in R[A,B]. $$ By definition $B^{j+1},AB^j\in R[A,B]$, while $A(B^{j-1}A)\in R[A,B]$ by inductive hypothesis. Hence $B^jA\in R[A,B].\quad \blacksquare$ \begin{lem} \label{Z} Suppose the first $n-1$ columns of $Z\in M_n(R)$ are equal to 0 and its last column has entries $z_1,\dots,z_n$. Let $Q\in M_n(R)$ have entries $q_1,\dots,q_n$ in its last row. Then $$ ZQZ=(q_1z_1+\cdots+q_nz_n)Z.
$$ \end{lem} \smallskip \noindent{\sl Proof.} We have $$ \begin{aligned} ZQZ &=\left(% \begin{array}{cccc} 0 & \cdots & 0 & z_1 \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & z_n \\ \end{array}% \right)\left(% \begin{array}{ccc} * & \cdots & * \\ \vdots & & \vdots \\ q_1 & \cdots & q_n \\ \end{array}% \right)\left(% \begin{array}{cccc} 0 & \cdots & 0 & z_1 \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & z_n \\ \end{array}% \right)\\ &=\left(% \begin{array}{ccc} z_1q_1 & \cdots & z_1q_n \\ \vdots & & \vdots \\ z_n q_1& \cdots & z_n q_n\\ \end{array}% \right)\left(% \begin{array}{cccc} 0 & \cdots & 0 & z_1 \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & z_n \\ \end{array}% \right)=(q_1z_1+\cdots+q_nz_n)Z.\quad \blacksquare \end{aligned} $$ \begin{cor} \label{iguales} Suppose that $A,B\in M_n(R)$ share the first $n-1$ columns. Then $R[A,B]=R\langle A,B\rangle=R[B,A]$. In particular, this holds when $A=C$ and $B=D$. \end{cor} \noindent{\sl Proof.} This follows at once from Lemmas \ref{tecnico} and \ref{Z}.$\quad \blacksquare$ \bigskip \noindent{\bf Remark.} In general it is false that $R\langle A, B\rangle=R[A,B]$ for arbitrary matrices $A$ and $B$, even when $M_n(R)=R\langle A, B\rangle$. Indeed, consider the case when $R=F$ is a field, $n\geq 3$, $A$ is a diagonal matrix with distinct diagonal entries and $B$ is the all-ones matrix. The only matrices commuting with $A$ must be diagonal and the only diagonal matrices commuting with $B$ are scalar. Moreover, the only non-zero subspaces of $V=F^n$ invariant under $A$ are spanned by non-empty subsets of $e_1,\dots,e_n$ and none of them is $B$-invariant except for $V$ itself. It follows from Burnside's Theorem that $M_n(F)=F\langle A,B\rangle$. If we had $F\langle A,B\rangle=F[A,B]$ then the $n^2$ matrices $A^iB^j$, with $0\leq i,j\leq n-1$, spanning $F[A,B]$, would necessarily be linearly independent, but they are not since $B^2=nB$. 
\section{The polynomials $p_0,p_1,\dots,p_{n-1}\text{ behind }R[C,D]=R\langle C,D\rangle$} \label{pj} Since $R\langle C, D\rangle=R[C,D]$ or, equivalently, $R[C,D]$ is invariant under right multiplication by $C$, there must exist $n-1$ polynomials $P_1, \dots , P_{n-1}\in R[X,Y]$ satisfying: \begin{equation} \label{polyecu} D^jC=P_j(C,D),\quad j=1,\dots , n-1. \end{equation} In this section we define and explore an explicit sequence of polynomials satisfying (\ref{polyecu}). For the remainder of the paper we let $s=g-f\in R_n[X]$. Write $a_{j}$ for the $(n,j)$-entry of $s(D)$ and set $u_n=e_n^t$. Using first Lemma \ref{Z} and then Lemma \ref{coord} we see that \begin{equation} \label{aji} (C-D)D^{j-1}(C-D)\! = \! u_n D^{j-1}[s](C-D)= u_ns(D)e_j(C-D)=a_j(C-D),\, 1\leq j\leq n. \end{equation} \begin{thm} \label{polinomios} Define $p_0,p_1,\dots,p_{n-1}\in R_n[X]$ and $P_0,P_1,\dots,P_{n-1}\in R[X,Y]$ by $$p_0(X)=1,\quad p_j(X) = X^j - a_{1}X^{j-1} - \cdots - a_{j-1}X - a_{j}, \quad j=1, \dots , n-1, $$ $$ P_j(X,Y) = p_j(X)(X-Y) + Y^{j+1}, \quad j=0, \dots , n-1. $$ Then \begin{enumerate} \item [(a)] $p_j(C)(C-D)=D^j(C-D)$ for all $0\leq j\leq n-1$. \item[(b)] The polynomials $P_1,\dots,P_{n-1}\in R[X,Y]$ satisfy (\ref{polyecu}). \item[(c)] If $P=([p_0]\,[p_1]\dots [p_{n-1}])\in M_n(R)$ then $g(C)P=-f(D)$. \item[(d)] If $q_0, q_1, \dots , q_{n-1}\in R_n[X]$ and $Q=([q_0]\,[q_1]\,\dots \,[q_{n-1}])\in M_n(R)$ then $$D^j(C-D)\!=\!q_j(C)(C-D)\text{ for all }0\leq j\leq n-1\Leftrightarrow g(C)Q\!=\!-f(D)\Leftrightarrow g(C)(Q-P)\!=\!0.$$ \item[(e)] If $R(f,g)$ is a unit then $p_0,p_1,\dots,p_{n-1}$ is the only sequence in $R_n[X]$ satisfying (a). \item [(f)] If $s$ is a constant then $P_j(X,Y) = X^{j+1} + Y^{j+1}- X^jY$ for all $0\leq j\leq n-1$. \end{enumerate} \end{thm} \noindent{\sl Proof.} It is clear that $p_0(C)(C-D)=D^0(C-D)$. Let $0< j\le n-1$ and suppose that $p_{j-1}(C)(C-D)=D^{j-1}(C-D)$. 
Then (\ref{aji}) and the identity $p_j(X) = Xp_{j-1}(X) - a_{j}$ yield \begin{align*} p_j(C)(C-D) & = (Cp_{j-1}(C)-a_jI)(C-D)= Cp_{j-1}(C)(C-D)-a_j(C-D)\\ &=CD^{j-1}(C-D)-(C-D)D^{j-1}(C-D)=D^j(C-D). \end{align*} This proves (a), which clearly implies (b). Note that $q_j(C)(C-D)=D^j(C-D)$ can be written as $q_j(C)[s]=D^j[s]$, where $0 \leq j\leq n-1$, which by Lemma \ref{columnspq} translates into $s(C)Q=s(D)$, that is, $g(C)Q=-f(D)$. The sequence $p_0,p_1,\dots,p_{n-1}$ does satisfy (a), so (c) is true, whence $g(C)Q=-f(D)\Leftrightarrow g(C)(Q-P)=0$, completing the proof of (d). If $R(f,g)$ is a unit then $g(C)$ is invertible, in which case $g(C)(Q-P)=0$ implies $Q=P$. This gives (e). If $s$ is a constant then $a_j=0$ for all $1\leq j\leq n-1$, so $p_j=X^j$ and a fortiori $P_j(X,Y)=X^j(X-Y)+Y^{j+1}=X^{j+1} + Y^{j+1}- X^jY$ for all $0\leq j\leq n-1$.$\quad \blacksquare$ \section{A presentation of $M_n(R)$} \label{presmn} \begin{thm} \label{thA} Suppose $R(f, g)$ is a unit. Let $P_1,\dots, P_{n-1}$ be polynomials in $R[X,Y]$ as defined in Theorem \ref{polinomios} or, more generally, arbitrary as long as they satisfy (\ref{polyecu}). Then the matrix algebra $M_n(R)$ has presentation: $$ \langle X,Y\,|\, f(X)=0,\, g(Y)=0,\, Y^jX=P_j(X,Y),\quad j=1,\dots,n-1\rangle. $$ In the particular case when $g-f$ is a unit in $R$, the matrix algebra $M_n(R)$ has presentation $$ \langle X,Y\,|\, f(X)=0,\, g(Y)=0,\, Y^jX+X^jY=X^{j+1}+Y^{j+1},\quad j=1,\dots,n-1\rangle. $$ \end{thm} \noindent{\sl Proof.} Write $\Omega:R\langle X,Y\rangle\to R\langle C,D\rangle$ for the natural $R$-algebra epimorphism that sends $X$ to $C$ and $Y$ to $D$. Let $K$ be the ideal of $R\langle X,Y\rangle$ generated by the stated relations; $K$ is contained in the kernel of $\Omega$. Set $S=R\langle X,Y\rangle/K$ and let $A$ and $B$ be the images of $X$ and $Y$ in $S$. We have $S=R[A,B]$ by Lemma \ref{tecnico}, and it is clear that $S$ is $R$-spanned by $A^iB^j$, $0\leq i,j\leq n-1$. If $t\in S$ lies in the kernel of the map $S\to M_n(R)$ induced by $\Omega$, then $t$ is a linear combination of the $A^iB^j$.
The images of these under $\Omega$ are linearly independent, as $M_n(R)$ is free of rank $n^2$ and, by Corollary \ref{unidad}, the $n^2$ matrices $C^iD^j$ span $M_n(R)$. Hence $t=0$. If $g-f$ is a unit in $R$ then so is $R(f,g)$. Therefore the last statement of the theorem follows from the above and part (f) of Theorem \ref{polinomios}. $\quad \blacksquare$ \medskip As an illustration, let $R=\mathbb Q$, $f=X^n-2$, $g=X^n-3$. Let $\alpha,\beta$ stand for the real $n$-th roots of 2 and 3, respectively. Theorem \ref{thA} says that $M_n(\mathbb Q)=\mathbb Q(\alpha)\mathbb Q(\beta)$, where $\mathbb Q(\alpha)$ and $\mathbb Q(\beta)$ are embedded as maximal subfields of $M_n(\mathbb Q)$ which intersect only at $\mathbb Q$ and multiply according to the rules: $$\beta^j\alpha+\alpha^j\beta=\alpha^{j+1}+\beta^{j+1},\quad 1\leq j\leq n-1.$$ \section{A presentation of $R\langle C,D\rangle$} \label{prescd} \begin{lem} \label{h} Suppose that $d\in R[X]$ is a common monic factor of $f$ and $g$. Let $f=hd$, where $h\in R_n[X]$. Then $h(C)C=h(C)D$. \end{lem} \medskip \noindent{\sl Proof.} By hypothesis $d|s$, whence $hd|hs$. But $hd=f$, so Lemma \ref{columnspq} gives $h(C)[s]=0$, which implies $h(C)(C-D)=(0,\dots,0,h(C)[s])=0$. $\quad \blacksquare$ \begin{thm} \label{basisfg} Let $R$ be a unique factorization domain and let $m=\mathrm{deg}\,\gcd(f,g)$. Then $R\langle C,D\rangle=R[C,D]$ is a free $R$-module of rank $n+(n-m)(n-1)$ with basis: $$ I,C,\dots,C^{n-1},D,CD,\dots,C^{n-m-1}D,\dots,D^{n-1},CD^{n-1},\dots,C^{n-m-1}D^{n-1}. $$ \end{thm} \noindent{\sl Proof.} Suppose that \begin{equation} \label{yu} p_0(C)I+p_1(C)D+\cdots+p_{n-1}(C)D^{n-1}=0, \end{equation} with $p_i\in R[X]$. We need to show that $f$ divides $p_0$ and that $h=f/\gcd(f,g)$ divides $p_1,\dots,p_{n-1}$. Clearly $$ p_1(C)D+\cdots+p_{n-1}(C)D^{n-1}=-p_0(C)I, $$ so $p_1(C)D+\cdots+p_{n-1}(C)D^{n-1}$ commutes with $C$, which means \begin{equation} \label{crucial} p_1(C)(CD-DC)+p_2(C)(CD^2-D^2C)+\cdots+p_{n-1}(C)(CD^{n-1}-D^{n-1}C)=0.
\end{equation} Now $$ (CD-DC)e_1=\cdots=(CD^{n-2}-D^{n-2}C)e_1=0, $$ so $$ 0=p_{n-1}(C)(CD^{n-1}-D^{n-1}C)e_1=p_{n-1}(C)[s]. $$ By Lemma \ref{columnspq}, $f$ divides $p_{n-1}s$ and hence $p_{n-1}g$. It follows that $h=f/\gcd(f,g)$ divides $p_{n-1}$. Thus by Lemma \ref{h} the last summand of (\ref{crucial}) is 0 and can be eliminated. Proceeding like this with $e_2,\dots, e_{n-1}$ we see that $h$ divides $p_{n-2},p_{n-3},\dots,p_1$ and all these terms can be eliminated from (\ref{crucial}). Going back to (\ref{yu}) shows that $f$ must divide $p_0$. $\quad \blacksquare$ \begin{thm} \label{preF} Let $R$ be a unique factorization domain and set $h=f/\gcd(f,g)$. Let the polynomials $P_1,\dots,P_{n-1}\in R[X,Y]$ be defined as in section \ref{pj} or, more generally, be arbitrary while satisfying (\ref{polyecu}). Then the algebra $R\langle C,D\rangle$ has presentation $$ \langle X,Y\,|\, f(X)=0,\, g(Y)=0,\, h(X)(X-Y)=0,\, Y^jX=P_j(X,Y),\quad j=1,\dots,n-1\rangle. $$ \end{thm} \noindent{\sl Proof.} The proof of Theorem \ref{thA} works as well, except that the relation $h(A)(A-B)=0$ allows $R[A,B]$ to be spanned by the reduced list of $n+(n-m)(n-1)$ matrices: $$ I,A,\dots,A^{n-1}, B,AB,\dots,A^{n-m-1}B,\dots,B^{n-1},AB^{n-1},\dots,A^{n-m-1}B^{n-1}. $$ As their images under $\Omega$ are linearly independent by Theorem \ref{basisfg}, the result follows.$\quad \blacksquare$ \section{A determinantal identity} \label{D} The following remarkable identity is valid for any commutative ring $R$ with identity. \begin{thm} \label{thmD} Let the columns of $M_{f, g}\in M_{n^2}(R)$ be the coordinates of $C^iD^j$, with $0\leq i,j\leq n-1$, relative to the canonical basis of $M_n(R)$ formed by all basic matrices $E^{kl}$, where $1\leq k,l\leq n$, and the lists of matrices $C^iD^j$ and $E^{kl}$ are ordered as indicated below. Let $M(f,g)=\mathrm{det}\, M_{f, g}$. Then $M(f,g)=R(f,g)^{n-1}$.
\end{thm} \noindent{\sl Proof.} We order the matrices $C^iD^j$ in the following manner: $$D^{n-1}, CD^{n-1}, \dots , C^{n-1}D^{n-1}, \,\,D^{n-2}, CD^{n-2}, \dots , C^{n-1}D^{n-2}, \,\,\dots , \,\,I, C , \dots , C^{n-1}.$$ The basic matrices $E^{kl}$ are ordered first by column and then by row as follows: $$ E^{11},E^{21},\dots,E^{n1},\dots, E^{1n},E^{2n},\dots, E^{nn}. $$ The proof consists of a sequence of reductive steps. \begin{enumerate} \item[(1)] Let $a\mapsto a'$ be a ring homomorphism $R\to R'$. Let $p\to p'$ and $A\to A'$ stand for corresponding ring homomorphisms $R[X]\to R'[X]$ and $M_n(R)\to M_n(R')$. Then $M(f,g)=R(f,g)^{n-1}$ implies $M(f',g')=R(f',g')^{n-1}$. This follows from the fact that $M(f,g)$ and $R(f,g)$ are defined in such a way as to be compatible with the above ring homomorphisms. \item[(2)] If $R$ is an integral domain then $M(f,g)=0$ if and only if $R(f,g)=0$. Indeed, $M(f,g)=0$ means that the matrices $C^iD^j$ are linearly dependent over the field of fractions of $R$, which is equivalent to $R(f,g)=0$ by Theorem \ref{basisfg}. \item[(3) ] $M(f,g)$ belongs to a prime ideal $P$ of $R$ if and only if $R(f,g)$ belongs to $P$. This follows from (2) by using (1) with the ring homomorphism $R\to R/P$. \item[(4) ] If $R$ is a unique factorization domain then $M(f,g)$ and $R(f,g)$ are both zero, both a unit, or both share the same irreducible factors in their prime factorization. This follows from (3). \item[(5) ] It suffices to prove the result for the ring $S=\mathbb Z[Y_1,\dots,Y_{n},Z_1,\dots,Z_{n}]$. Given $f'=a_0+\cdots+a_{n-1}X^{n-1}+X^n$ and $g'=b_0+\cdots+b_{n-1}X^{n-1}+X^n$ in $R[X]$ we consider the ring homomorphism $S\to R$ that restricts to the canonical map $\mathbb Z\to R$, and sends $Y_1,\dots,Y_{n}$ to $a_0, \dots,a_{n-1}$ and $Z_1,\dots,Z_{n}$ to $b_0, \dots,b_{n-1}$. Now use (1). \item[(6) ] It suffices to prove the result for the field ${\mathbb C}$ of complex numbers. 
Clearly, to prove the result for an integral domain it is sufficient to prove it for any field extension of its field of fractions. In our case, $\mathbb C$ is an extension of the field of fractions of ${\mathbb Z}[Y_1,\dots,Y_{n},Z_1,\dots,Z_{n}]$, so our claim follows from (5). \item[(7) ] It suffices to prove the result for the ring $S=\mathbb Z[Y_1,\dots,Y_{n},Z_1,\dots,Z_{n}]$ and the polynomials $f=(X-Y_1)\cdots (X-Y_n)$ and $g=(X-Z_1)\cdots (X-Z_n)$. Let $f',g'\in \mathbb C[X]$ be monic of degree $n$. Then $f'=(X-a_1)\cdots(X-a_n)$ and $g'=(X-b_1)\cdots(X-b_n)$ for some complex numbers $a_i,b_j$. First use (1) to derive the result for $f'$ and $g'$ from the one for $f$ and $g$. Then apply (6). \end{enumerate} \medskip We will now show that indeed $M(f,g)=R(f,g)^{n-1}$ for $f=(X-Y_1)\cdots (X-Y_n)$ and $g=(X-Z_1)\cdots (X-Z_n)$ in $S[X]$. This will complete the proof. We have $R(f,g)=\Pi(Y_i-Z_j)$, with $1\leq i,j\leq n$, which is a product of $n^2$ non-associate prime elements in the unique factorization domain $S$. By (4) these are the prime factors of $M(f,g)$. In particular, $R(f,g)$ divides $M(f,g)$. Let $\sigma$ and $\tau$ be permutations of $1,\dots,n$. Let $\Omega$ be the automorphism of $S$ corresponding to them via $Y_i\mapsto Y_{\sigma(i)}$ and $Z_j\mapsto Z_{\tau(j)}$. This naturally extends to automorphisms of $S[X]$ and $M_n(S)$, also denoted by $\Omega$. As $f$ and $g$ are $\Omega$-invariant, so are $M_{f,g}$ and $M(f,g)$. Now if $Y_i-Z_j$ has multiplicity $a_{ij}$ in $M(f,g)$ then $Y_{\sigma(i)}-Z_{\tau(j)}$ will have multiplicity $a_{ij}$ in $\Omega (M(f,g))$. Since $\Omega (M(f,g))=M(f,g)$, it follows that all prime factors of $M(f,g)$ have the same multiplicity, say $m\geq 1$. Since the only units in $S$ are 1 and -1, we see that $$ M(f,g)=\epsilon R(f,g)^m,\quad \epsilon\in\{1,-1\}.$$ Let $T={\mathbb Z}[Z_1,\dots,Z_n]$ and let $p\in T[X]$ be the generic polynomial $$ p=(X-Z_1)\cdots(X-Z_n). 
$$ Then $$ R(f,g)=p(Y_1)\cdots p(Y_n)\in T[Y_1,\dots,Y_n]. $$ From these equations we see that the total degree of $R(f,g)$ is $n^2$ and the only monomial of such a degree in $R(f,g)$ is $Y_1^n\cdots Y_n^n$, which appears with coefficient 1. Therefore $M(f,g)$ has degree $n^2m$ and the only monomial of that degree in $M(f,g)$ is $(Y_1\cdots Y_n)^{nm}$, which appears with coefficient $\epsilon$. Substituting all $Z_1,\dots,Z_n$ by 0 yields $$ M(f,X^n)=\epsilon R(f,X^n)^m, $$ where $$ R(f,X^n)=(Y_1\cdots Y_n)^n=(\mathrm{det}\, C)^n. $$ We are thus reduced to proving that $M(f, X^n)= (\text{det} \,C)^{n(n-1)}$. This we do now. Set $g=X^n$ and refer to the order of the matrices $C^iD^j$ and $E^{k l}$ given at the beginning of the proof. Expressing each vector $C^iD^j$ in the canonical basis of $M_n(S)$ as the column vector $$\begin{pmatrix} C^iD^je_1\\ \vdots \\ C^iD^je_n \end{pmatrix}\in S^{n^2}, $$ we get the block decomposition $M_{f,\, g}\!=\! (C_{k\,j})$, where the columns of $C_{k,\, j}\in M_n(S)$ are $$C_{k,\, j} = (D^{n-j}e_k \,\,CD^{n-j}e_k \,\,\dots \,\,C^{n-1}D^{n-j}e_k), \quad 1\le k, j\le n.$$ Let $0\le i \le n-1$, $1\le k, j \le n$. Then $$C^iD^{n-j}e_k=\begin{cases} C^{n-j+k-1}e_{i+1} &\text{ if $k \le j$}\\ 0 &\text{ otherwise.}\end{cases}$$ Therefore, $$C_{k, j} =\begin{cases} C^{n-j+k-1} &\text{ if $k\le j$}\\ 0 &\text{ otherwise.}\end{cases}$$ In other words, we have $$M_{f,\, g} = \begin{pmatrix} C^{n-1} & C^{n-2} & \dots & C & I\\ 0& C^{n-1} & \dots & C^2 & C\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \dots & C^{n-1} & C^{n-2} \\ 0 & 0 & \dots & 0 & C^{n-1} \end{pmatrix}.$$ Hence, $$M(f, g) = (\text{det} \,C)^{n(n-1)}.\quad \blacksquare$$ \section{The index of $R\langle C,D\rangle$ in $M_n(R)$} \label{indic} Let $R$ be a principal ideal domain where each maximal ideal has finite index. Any non-zero ideal $Ra$ is easily seen to have finite index, which will be denoted by $N(a)$. 
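Theorem~\ref{thmD} also lends itself to a direct numerical sanity check for small $n$. The Python sketch below is an illustration only; the function names are ours, and $R(f,g)$ is computed as the determinant of the standard Sylvester matrix, which for monic $f$ and $g$ equals $\prod(Y_i-Z_j)$. It builds $M_{f,g}$ with the orderings fixed in the proof of Theorem~\ref{thmD} and compares $\det M_{f,g}$ with $R(f,g)^{n-1}$ over $\mathbb Z$:

```python
from fractions import Fraction

def companion(a):
    """Companion matrix of the monic polynomial a[0] + a[1]X + ... + X^n,
    acting by C e_i = e_{i+1} for i < n (ones on the subdiagonal)."""
    n = len(a)
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1
    for i in range(n):
        C[i][n - 1] = -a[i]
    return C

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def powers(A, n):
    """[I, A, A^2, ..., A^{n-1}]."""
    P = [[[int(i == j) for j in range(len(A))] for i in range(len(A))]]
    for _ in range(n - 1):
        P.append(matmul(P[-1], A))
    return P

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            t = M[r][c] / M[c][c]
            M[r] = [M[r][j] - t * M[c][j] for j in range(n)]
    return d

def resultant(f, g):
    """R(f,g) for monic f, g of equal degree n, via the Sylvester matrix."""
    n = len(f)
    frev, grev = [1] + f[::-1], [1] + g[::-1]   # descending coefficients
    S = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n + 1):
            S[i][i + j] = frev[j]
            S[n + i][i + j] = grev[j]
    return det(S)

def M_det(f, g):
    """det M_{f,g}: columns are C^i D^j (j = n-1, ..., 0; i = 0, ..., n-1),
    written in the basis E^{kl} ordered first by column, then by row."""
    n = len(f)
    Cp, Dp = powers(companion(f), n), powers(companion(g), n)
    cols = []
    for j in range(n - 1, -1, -1):
        for i in range(n):
            A = matmul(Cp[i], Dp[j])
            cols.append([A[k][l] for l in range(n) for k in range(n)])
    return det([list(row) for row in zip(*cols)])

# e.g. f = X^2 + X + 1, g = X^2 - 2 (n = 2): both sides equal R(f,g) = 7
assert M_det([1, 1], [-2, 0]) == resultant([1, 1], [-2, 0])
```

For $f=X^2+X+1$ and $g=X^2-2$ (so $n=2$) both quantities equal $7$; taking $g=X^n$ recovers the case $M(f,X^n)=(\det C)^{n(n-1)}$ treated in the proof.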
As an example, we may take $R$ to be the ring of integers of an algebraic number field $\mathbb K$ of class number one, in which case $N(a)=|N_{\mathbb K/\mathbb Q}(a)|$. In particular, $N(a)=|a|$ when $R=\mathbb Z$. \begin{thm} Let $R$ be a principal ideal domain where each maximal ideal has finite index in $R$. Then $R\langle C,D\rangle$ has maximal rank in $M_n(R)$ if and only if $R(f,g)\neq 0$, in which case $[M_n(R):R\langle C,D\rangle]=N(R(f,g))^{n-1}$. \end{thm} \noindent{\sl Proof.} Let $R^*$ be the monoid of non-zero elements of $R$ and write $\mathbb N$ for the monoid of natural numbers. By hypothesis each maximal ideal $Rp$ has finite index, denoted by $N(p)$. If $a\in R$ is not zero or a unit then $a=p_1^{a_1}\cdots p_m^{a_m}$, where the $p_i$ are non-associate primes in $R$ and $a_i\geq 1$. Using the Chinese Remainder Theorem and the fact that $p^iR/p^{i+1}R$ is a one-dimensional vector space over $R/p$ for every prime $p$, it follows at once that $Ra$ also has finite index, say $N(a)$, in $R$, where $N(a)=N(p_1)^{a_1}\cdots N(p_m)^{a_m}$. Thus $N:R^*\to\mathbb N$ is a homomorphism of monoids whose kernel is the unit group of $R$. We have the free $R$-module $M_n(R)$ of rank $n^2$ and its submodule $R\langle C,D\rangle$, which is free of rank $\leq n^2$. By Corollary \ref{iguales} the matrices $C^iD^j$, with $0\leq i,j\leq n-1$, span $R\langle C,D\rangle$. The matrix expressing the coordinates of these generators in the basis of $M_n(R)$ formed by all $E^{ij}$ is the matrix $M_{f, g}$ of Theorem \ref{thmD}. Let $a_1,\dots , a_{n^2}$ be the invariant factors of $M_{f, g}$. Then $M_n(R)$ has a basis $u_1,\dots,u_{n^2}$ such that $a_1u_1,\dots,a_{n^2}u_{n^2}$ span $R\langle C, D\rangle$. Hence $R\langle C, D\rangle$ has rank $n^2$ if and only if $M(f,g)=a_1\cdots a_{n^2}\neq 0$.
Since $M_n(R)/R\langle C, D\rangle\cong R/Ra_1\times\cdots \times R/Ra_{n^2}$ as $R$-modules, if all $a_1,\dots,a_{n^2}$ are non-zero then $[M_n(R):R\langle C, D\rangle]=N(a_1\cdots a_{n^2})=N(M(f, g)).$ Now apply Theorem \ref{thmD}.$\quad \blacksquare$ \bigskip \noindent{\bf {\Large Acknowledgements}} \bigskip The authors thank D. Stanley for useful conversations and D. Djokovic for writing a computer program to verify that Theorem \ref{thmD} was indeed true when $n=3,4$. \end{document}
% arXiv:1008.1286 (2010-08-10), "Subalgebras of Matrix Algebras Generated by Companion Matrices", https://arxiv.org/abs/1008.1286
https://arxiv.org/abs/2105.08391
A Steiner general position problem in graph theory
Let $G$ be a graph. The Steiner distance of $W\subseteq V(G)$ is the minimum size of a connected subgraph of $G$ containing $W$. Such a subgraph is necessarily a tree called a Steiner $W$-tree. The set $A\subseteq V(G)$ is a $k$-Steiner general position set if $V(T_B)\cap A = B$ holds for every set $B\subseteq A$ of cardinality $k$, and for every Steiner $B$-tree $T_B$. The $k$-Steiner general position number ${\rm sgp}_k(G)$ of $G$ is the cardinality of a largest $k$-Steiner general position set in $G$. Steiner cliques are introduced and used to bound ${\rm sgp}_k(G)$ from below. The $k$-Steiner general position number is determined for trees, cycles and joins of graphs. Lower bounds are presented for split graphs, infinite grids and lexicographic products. The lower bound for the latter products leads to an exact formula for the general position number of an arbitrary lexicographic product.
\section{Introduction} \label{sec:intro} In this work, $G = (V(G), E(G))$ denotes a simple graph. The \emph{distance} $d_G(u,v)$ between two vertices $u$ and $v$ of $G$ is the minimum number of edges on a $u,v$-path in $G$. If there is no such path, then we set $d_G(u,v)=\infty$. A $u,v$-path of length $d_G(u,v)$ is called a {\em $u,v$-geodesic}. A {\em general position set} of a graph $G$ is a set of vertices $S\subseteq V(G)$ such that no three vertices from $S$ lie on a common geodesic. The cardinality of a largest possible general position set is the {\em general position number} ${\rm gp}(G)$ of $G$. The problem of finding the general position number was independently introduced in~\cite{manuel-2018a, ullas-2016} and earlier studied on hypercubes in~\cite{korner-1995}. The paper~\cite{manuel-2018a} sparked wide interest in the topic; the articles~\cite{anand-2019, ghorbani-2019, klavzar-2019+, klavzar-2021, klavzar-2019, manuel-2018b, neethu-2021, patkos-2020, thomas-2020, tian-2020, tian-2021} bring many different results on general position sets and numbers. General position sets were also generalized to general $d$-position sets, where $d$ is a threshold on the length of geodesics on which triples of vertices are not allowed to lie~\cite{klavzar-2021+}. For a nonempty set $W\subseteq V(G)$, the {\em Steiner distance} of $W$, denoted by $d_G(W)$, is the minimum size of a connected subgraph of $G$ containing $W$~\cite{Chartrand-1989}. Such a subgraph is clearly a tree called a \emph{Steiner $W$-tree}. If $G$ is not connected and the vertices of $W$ lie in at least two components of $G$, then no Steiner $W$-tree exists and we set $d_G(W) = \infty$. Papers~\cite{dankelmann-2021, li-2016, martinez-2018, nielsen-2009, zhang-2019} represent a selection of studies on Steiner trees. We now introduce the key new concept. Let $k\in {\mathbb N}$ and let $G$ be a graph.
Then $A\subseteq V(G)$ is a {\em $k$-Steiner general position set} if for every set $B\subseteq A$ of cardinality $k$ (from now on a $k$-set), and for every Steiner $B$-tree $T_B$, it follows that $V(T_B)\cap A = B$. In other words, $A$ is a $k$-Steiner general position set if no $k+1$ distinct vertices from $A$ lie on a common Steiner $B$-tree, where $B\subseteq A$ and $|B| = k$. Clearly, if $|A|\le k$, then $A$ is a $k$-Steiner general position set. Hence we may define the {\em $k$-Steiner general position number} of $G$, denoted by ${\rm sgp}_k(G)$, as the cardinality of a largest $k$-Steiner general position set in $G$. A $k$-Steiner general position set of cardinality ${\rm sgp}_k(G)$ will be called a {\em $k$-sgp-set}. Note that ${\rm sgp}_2(G) = {\rm gp}(G)$. Recall that if $G$ is a graph and $A\subseteq V(G)$, then $A$ is {\em Steiner convex} if for any subset $B\subseteq A$, all vertices in every Steiner tree $T_B$ belong to $A$, see~\cite{caceras-2008, gologranc-2015, gologranc-2018}. The new concept of the Steiner general position set is hence a concept dual to the Steiner convex set. We proceed as follows. In the rest of this section some further definitions are listed. In the next section we present basic bounds for ${\rm sgp}_k(G)$ and settle the extreme case when ${\rm sgp}_k(G)$ is equal to the order of $G$. The section devoted to ${\rm sgp}_k(G)$ for trees and cycles follows. In the fourth section we present an exact result for ${\rm sgp}_k(G\vee H)$, where $G\vee H$ represents the join of graphs $G$ and $H$. This enables us to present several exact results for some known families of graphs. After that comes a section with the $k$-Steiner general position number of lexicographic products. The penultimate section presents lower bounds for ${\rm sgp}_k(G)$ for split graphs and infinite grid graphs. We present several open problems and questions in the last section. We conclude the introduction with some necessary terminology and notation.
By $n(G)$ we denote the order of a graph $G$. As usual, $\delta(v)$ represents the \emph{degree} of a vertex $v\in V(G)$, \emph{i.e.}, the number of neighbors of $v$. If $T$ is a tree, then $L(T)$ is the set of its leaves and $\ell(T)=|L(T)|$. The \emph{clique number} $\omega(G)$ of $G$ is the cardinality of a largest complete subgraph of $G$. If $A\subseteq V(G)$, then the subgraph of $G$ induced by $A$ is denoted by $G[A]$. A graph $G$ is $d$-\emph{connected}, where $1\le d < n(G)$, if the removal of fewer than $d$ vertices from $G$ always yields a connected graph. By $\overline{G}$ we denote the \textit{complement graph} of $G$; it is defined by $V(\overline{G})=V(G)$ and $uv\in E(\overline{G})$ if and only if $uv\notin E(G)$. For positive integers $i < j$ we use the notation $[i:j] = \{i,i+1,\ldots ,j\}$. Other definitions that will be needed, like the join of graphs, the lexicographic product and others, will be given along the way. \section{Bounding the $k$-Steiner general position number} \label{sec:bounding} To bound the $k$-Steiner general position number from below, we introduce in this section $k$-Steiner cliques, a concept that might be of interest also elsewhere. We characterize graphs $G$ with ${\rm sgp}_k(G) = n(G)$ and discuss monotonicity of ${\rm sgp}_k(G)$ with respect to the parameter $k$. Let $G$ be a connected graph. Since every nonempty set $A\subseteq V(G)$ is trivially a $1$-Steiner general position set and because ${\rm sgp}_{n(G)}(G) = n(G)$, in the rest of the paper we restrict our considerations to the $k$-Steiner general position sets of $G$ with $k\in [2:n(G)-1]$. Induced subgraphs of cliques are cliques, hence ${\rm sgp}_k(G) \ge \omega(G)$. If $k\ge 2$ is a fixed integer, then $A\subseteq V(G)$ is a {\em $k$-Steiner clique} if $G[B]$ is connected for every $k$-set $B\subseteq A$. The cardinality of a largest $k$-Steiner clique will be denoted by $s\omega_k(G)$. Note that $s\omega_2(G)=\omega(G)$. 
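Since the condition defining $s\omega_k$ involves only finitely many subsets, it can be computed by exhaustive search on small graphs. The following is a minimal brute-force Python sketch (the helper names are ours; the search is exponential in $n(G)$ and intended only for experimentation):

```python
from itertools import combinations

def induced_connected(adj, B):
    """Is the subgraph induced by the vertex set B connected? (DFS check)"""
    B = set(B)
    stack, seen = [next(iter(B))], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend((adj[v] & B) - seen)
    return seen == B

def is_k_steiner_clique(adj, A, k):
    """A is a k-Steiner clique iff G[B] is connected for every k-subset B of A."""
    return all(induced_connected(adj, B) for B in combinations(A, k))

def steiner_clique_number(adj, k):
    """s-omega_k(G) by brute force; adj maps each vertex to its neighbor set."""
    V = list(adj)
    return max(size for size in range(1, len(V) + 1)
               if any(is_k_steiner_clique(adj, A, k)
                      for A in combinations(V, size)))

# e.g. the path P_5: s-omega_3(P_5) = 3
P5 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert steiner_clique_number(P5, 3) == 3
```

For instance, it reports $s\omega_3(P_5)=3$ and $s\omega_3(K_4-M)=4$ (that is, of $C_4$), in line with the sharpness examples discussed in the sequel.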
We make the assumption that every set on $k-1$ or fewer vertices is a $k$-Steiner clique, since there are no $k$-sets as subsets of such a set, and we can use this fact to deal with disconnected graphs as well. If $G$ is not connected, and the order of each component of $G$ is smaller than $k$, then every set on $\min\{n(G),k-1\}$ vertices of $G$ represents a $k$-Steiner clique. Hence, in such a case we have $s\omega_k(G)=\min \{n(G),k-1\}$. Notice that if $G$ is a connected graph with $n(G)\ge k$, then $s\omega_k(G)\geq k$. Further, a clique is a $k$-Steiner clique for every $k\ge 2$, thus $s\omega_k(G) \ge \omega(G)$. Moreover, if $A$ is a $k$-Steiner clique of $G$, then it is also a $k$-Steiner general position set, hence ${\rm sgp}_k(G) \ge s\omega_k(G)$. This discussion can be summarized as follows. \begin{remark} \label{rem:trivial} If $G$ is a connected graph and $k \in [2:n(G)-1]$, then $$\max\{k,\omega(G)\}\le s\omega_k(G)\le {\rm sgp}_k(G)\le n(G)\,.$$ \end{remark} If $n>k$, then $s\omega_k(P_n)=k$, hence the first inequality in Remark~\ref{rem:trivial} is sharp. If $n>k\ge 3$, then $s\omega_k(K_n-M)=n$, where $M$ is a matching of $K_n$. Hence the last two inequalities in Remark~\ref{rem:trivial} are also sharp. On the other hand, ${\rm sgp}_k(G)$ can be arbitrarily larger than $s\omega_k(G)$. For instance, if $r\ge 4$, then $s\omega_3(K_{1,r}) = 3$ and ${\rm sgp}_3(K_{1,r}) = r$. The graphs attaining the equality in the rightmost inequality of Remark~\ref{rem:trivial} can be described as follows. \begin{proposition} \label{prop:charact-n-d-connected} Let $G$ be a graph and let $k\in [2:n(G)-1]$. Then, ${\rm sgp}_k(G)=n(G)$ if and only if $G$ is $(n(G)-k+1)$-connected. \end{proposition} \noindent{\bf Proof.\ } The statement ${\rm sgp}_k(G)=n(G)$ is equivalent to the fact that for every $k$-set $W$, every Steiner $W$-tree contains exactly $k$ vertices.
That every such subgraph $G[W]$ is connected is in turn equivalent to the fact that removing an arbitrary set of cardinality $n(G)-k$ does not disconnect the graph $G$, that is, $G$ is $(n(G)-k+1)$-connected. \hfill $\square$ \bigskip Proposition~\ref{prop:charact-n-d-connected} in particular asserts that ${\rm sgp}_2(G)=n(G)$ if and only if $G$ is $(n(G)-1)$-connected, that is, if and only if $G$ is a complete graph. This fact was earlier observed in~\cite[Theorem 2.1]{ullas-2016}. By Proposition~\ref{prop:charact-n-d-connected}, this fact can be extended to the statement that the equality chain ${\rm sgp}_2(G)= \cdots ={\rm sgp}_{n(G)-1}(G)=n(G)$ holds if and only if $G$ is a complete graph. We next show that, a bit surprisingly, it is in general not true that a $k$-Steiner general position set is also a $k'$-Steiner general position set for some $k' > k$. \begin{proposition} \label{prop:k-1-and-not-k} For every $k\ge 3$ there exist a graph $G^{(k)}$ and a set $A_k\subseteq V(G^{(k)})$ such that $A_k$ is a $(k-1)$-Steiner general position set and is not a $k$-Steiner general position set of $G^{(k)}$. \end{proposition} \noindent{\bf Proof.\ } Let $k\ge 3$ and construct $G^{(k)}$ as follows. First take the join of the cycle $C_k$ with consecutive vertices $v_1, v_2, \ldots, v_k$ and the one vertex graph $K_1$ with the vertex $w$. This creates a wheel $W_{k+1}$ with the center $w$, cf.\ Section~\ref{sec:joins}. Then subdivide each of the edges of the cycle $C_k$ by $k-1$ vertices and subdivide each of the spokes of the wheel with $k-2$ vertices. See Fig.~\ref{figure} where the graph $G^{(3)}$ is drawn. 
\begin{figure}[htb] \begin{center} \begin{tikzpicture}[scale=1.1,style=thick,x=1cm,y=1cm] \def\vr{2.5pt} \path (0,0) coordinate (a); \path (1,1) coordinate (x); \path (2,2) coordinate (u1); \path (3,3) coordinate (c); \path (4,2) coordinate (y); \path (5,1) coordinate (u2); \path (6,0) coordinate (b); \path (2,0) coordinate (u3); \path (4,0) coordinate (z); \path (3,1) coordinate (d); \path (3,2) coordinate (u4); \path (1.5,0.5) coordinate (u5); \path (4.5,0.5) coordinate (u6); \draw (a)--(c); \draw (a)--(d); \draw (c)--(d); \draw (b)--(d); \draw (a)--(b); \draw (b)--(c); \draw (a)[fill=black] circle (\vr); \draw (b)[fill=black] circle (\vr); \draw (c)[fill=black] circle (\vr); \draw (d)[fill=black] circle (\vr); \draw (x)[fill=black] circle (\vr); \draw (y)[fill=black] circle (\vr); \draw (z)[fill=black] circle (\vr); \draw (u1)[fill=black] circle (\vr); \draw (u2)[fill=black] circle (\vr); \draw (u3)[fill=black] circle (\vr); \draw (u4)[fill=black] circle (\vr); \draw (u5)[fill=black] circle (\vr); \draw (u6)[fill=black] circle (\vr); \draw[anchor = north] (a) node {$v_1$}; \draw[anchor = north] (b) node {$v_2$}; \draw[anchor = south] (c) node {$v_3$}; \draw[anchor = north] (d) node {$w$}; \draw[anchor = east] (x) node {$x$}; \draw[anchor = west] (y) node {$y$}; \draw[anchor = north] (z) node {$z$}; \end{tikzpicture} \end{center} \caption{The graph $G^{(3)}$. Also, the 2-sgp set $\{v_1,v_2,v_3,w\}$ is not 3-sgp set, and $\{w,x,y,z\}$ is a $k$-sgp set for $k\in \{2,3\}$.}\label{figure} \end{figure} We now claim that the set $A_k = \{w, v_1, v_2, \ldots, v_k\}$ is a $(k-1)$-Steiner general position set of $G^{(k)}$ and is not a $k$-Steiner general position set of $G^{(k)}$. Clearly, $|A_k| = k+1$. Let $B_k = A_k \setminus \{w\}$, so that $|B_k| = k$. A smallest spanning tree that contains the vertices from $B_k$ and does not contain the vertex $w$ proceeds along the subdivided cycle $C_k$ and is of size $k(k-1)$. 
On the other hand, a tree that contains the vertices from $B_k$ as well as the vertex $w$ contains all the subdivided spokes and is of size $k(k-1)$. Hence $d_{G^{(k)}}(B_k) = k(k-1)$, and the second described tree implies that $A_k$ is not a $k$-Steiner general position set of $G^{(k)}$. Consider next $(k-1)$-subsets $B$ of $A_k$. By symmetry, there are only two cases to consider. Suppose first that $w\in B$, so that $B$ contains $w$ and $k-2$ vertices of $C_k$. Then it is clear that the unique Steiner $B$-tree contains $w$ and the subdivided spokes between $w$ and the other vertices from $B$. (Its size is $(k-1)(k-2)$.) Suppose second that $B$ consists of $k-1$ vertices from $C_k$. Then a smallest tree that contains the vertices of $B$ and does not contain $w$ is of size $(k-2)k$. On the other hand, a tree that contains $w$ and the subdivided spokes between $w$ and the vertices from $B$ is of size $(k-1)(k-1)$. Since $k(k-2) = k^2 - 2k < k^2 - 2k + 1 = (k-1)(k-1)$, we see that $d_{G^{(k)}}(B) = k(k-2)$ and conclude that $A_k$ is a $(k-1)$-Steiner general position set of $G^{(k)}$. \hfill $\square$ \bigskip The proposition above asserts that there exist graphs containing $(k-1)$-sgp sets that are not $k$-Steiner general position sets for $k\ge 2$. However, there could yet exist other sets in such graphs that are $(k-1)$-sgp sets as well as $k$-Steiner general position sets, as for instance the set $\{w,x,y,z\}$ of Fig.~\ref{figure}, which is a $k$-sgp set for $k\in \{2,3\}$. In consequence, although there is no monotonicity with respect to inclusion for $k$-Steiner general position sets, there could still be a monotonicity relation with respect to the value of ${\rm sgp}_k(G)$ for every connected graph $G$. Proving or disproving a general monotonicity relation like this one seems to be a challenging problem.
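For machine experiments with examples such as $G^{(3)}$, the $k$-Steiner general position property can be tested by brute force, using the observation that a vertex $v$ lies on some Steiner $B$-tree if and only if $d_G(B\cup\{v\})=d_G(B)$. The Python sketch below (our own helper names; exponential-time, suitable for small graphs only) computes ${\rm sgp}_k$ in this way:

```python
from itertools import combinations

def connected(adj, S):
    """Is the subgraph induced by the vertex set S connected? (DFS check)"""
    S = set(S)
    stack, seen = [next(iter(S))], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend((adj[v] & S) - seen)
    return seen == S

def steiner_distance(adj, W):
    """d_G(W): minimum number of edges of a connected subgraph containing W,
    found by trying vertex supersets of W in order of increasing size."""
    W = set(W)
    rest = [v for v in adj if v not in W]
    for extra in range(len(rest) + 1):
        for add in combinations(rest, extra):
            S = W | set(add)
            if connected(adj, S):
                return len(S) - 1      # size of a spanning tree of G[S]
    return None                        # W meets several components

def is_sgp_set(adj, A, k):
    """A is a k-Steiner general position set iff for every k-set B in A and
    every vertex v of A outside B we have d_G(B + v) > d_G(B)."""
    for B in combinations(A, k):
        dB = steiner_distance(adj, B)
        if any(steiner_distance(adj, set(B) | {v}) == dB
               for v in set(A) - set(B)):
            return False
    return True

def sgp(adj, k):
    V = list(adj)
    return max(size for size in range(1, len(V) + 1)
               if any(is_sgp_set(adj, A, k) for A in combinations(V, size)))

# e.g. sgp_2(C_5) = gp(C_5) = 3
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert sgp(C5, 2) == 3
```

For instance, it reports ${\rm sgp}_2(C_5)=3$ and ${\rm sgp}_3(K_{1,4})=4$, in agreement with the value ${\rm sgp}_3(K_{1,r})=r$ mentioned earlier.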
The graph $G^{(3)}$ from Fig.~\ref{figure} was presented in~\cite{changat-2010} as the first in a family of graphs that have a 2-Steiner convex set which is not 3-Steiner convex. Later it was also mentioned in~\cite{ACKP}. \section{Trees and cycles} \label{sec:trees-cycles} In this section we determine the $k$-Steiner general position number of trees and cycles. \begin{theorem} If $T$ is a tree with $n(T)\ge 3$ and $k\in [2:n(T)-1]$, then $${\rm sgp}_k(T)=\left\{\begin{array}{ll} \ell(T); & k\leq \ell(T), \\[0.1cm] k; & k>\ell(T). \end{array} \right.$$ \end{theorem} \noindent{\bf Proof.\ } Let $T$ be a tree of order at least $3$, and let $k\in [2:n(T)-1]$. Clearly, $L(T)$ is a $k$-Steiner general position set. From Remark~\ref{rem:trivial} we also know that ${\rm sgp}_k(T) \ge k$. Therefore, ${\rm sgp}_k(T) \ge \max \{\ell(T), k \}$. To establish the corresponding upper bound, we distinguish two cases. Assume first that $k\le \ell(T)$. Let us suppose that ${\rm sgp}_k(T) > \ell(T)$, and let $S$ be a $k$-sgp-set of $T$. Since $|S| > \ell(T)$, there exist three vertices $u,v,w\in S$ such that the $u,v$-path in $T$ contains $w$. By taking a set $A\subseteq S$ of $k$ vertices, including $u,v$ and not including the vertex $w$, we obtain that the Steiner $A$-tree contains the vertex $w$, which is not possible. Hence, we conclude that ${\rm sgp}_k(T) \le \ell(T)$, and thus ${\rm sgp}_k(T) = \ell(T)$, in the case when $k\le \ell(T)$. Assume now that $k > \ell(T)$ and suppose that ${\rm sgp}_k(T) > k$. Let $S$ be a $k$-sgp-set of $T$, so that $|S| \ge k+1 > \ell(T)$. As in the previous paragraph, there exist three vertices $u,v,w\in S$ such that the $u,v$-path in $T$ contains $w$. Taking a $k$-set $A\subseteq S$ with $u,v\in A$ and $w\notin A$, the Steiner $A$-tree contains the vertex $w$, which is not possible. Therefore, ${\rm sgp}_k(T)\le k$, and so ${\rm sgp}_k(T) = k$ when $k > \ell(T)$.
\hfill $\square$ \bigskip \begin{theorem} \label{thm:cycle} If $n\ge 3$ and $k\in[2:n-1]$, then $${\rm sgp}_k(C_n)=\left\{\begin{array}{ll} k; & k\in\left[\left\lfloor\frac{2n}{3}\right\rfloor :n-2\right], \\ k+1; & \mbox{otherwise}. \end{array} \right.$$ \end{theorem} \noindent{\bf Proof.\ } Let $C_n=v_0\cdots v_{n-1}v_0$. In the rest of the proof all the operations with the indices of the vertices of $C_n$ are done modulo $n$. Consider a set $A\subseteq V(C_n)$ with $|A| = k+2$. Let $a,b,c$ be three consecutive vertices of $A$, where consecutive refers to their order on $C_n$. Then $B=A-\{a,c\}$ is a $k$-set. Since every Steiner $B$-tree contains at least one of the vertices $a$ or $c$, we infer that $A$ is not a $k$-Steiner general position set. By Remark~\ref{rem:trivial} we have ${\rm sgp}_k(C_n)\ge k$, hence ${\rm sgp}_k(C_n) \in \{k, k+1\}$ follows. As $C_n$ is $2$-connected, Proposition \ref{prop:charact-n-d-connected} implies that ${\rm sgp}_{n-1}(C_n)=n$. In the rest we may thus assume that $k\le n-2$. We claim that if $S$ is a $k$-Steiner general position set of $C_n$ of cardinality $k+1$, then $S$ contains no three consecutive vertices of $C_n$. Suppose on the contrary that, without loss of generality, $S$ contains $v_0$, $v_1$, and $v_2$. Then for the set $S'=S\setminus\{v_1\}$, there is at least one Steiner $S'$-tree which contains the vertex $v_1$, a contradiction proving the claim. From it and by the pigeonhole principle, if ${\rm sgp}_k(C_n)=k+1$, then $k+1\le \left\lfloor\frac{2n}{3}\right\rfloor$. We have thus proved that ${\rm sgp}_k(C_n)=k$ for every $k\in \left[\left\lfloor\frac{2n}{3}\right\rfloor :n-2\right]$. In the rest of the proof let $k\in [2:\left\lfloor\frac{2n}{3}\right\rfloor-1]$. We want to show that in these cases ${\rm sgp}_k(C_n) = k+1$. Since we already know that ${\rm sgp}_k(C_n) \le k+1$ it remains to construct a $k$-Steiner general position set of cardinality $k+1$. 
For this sake write $n$ as $n=q(k+1)+r$, where $r< k+1$, and consider the following two cases. Assume first that $r<q$. Let $$A=\{v_0,v_{q},v_{2q},\dots,v_{kq}\}$$ and note that $|A| = k+1$. If $v_{i_1},v_{i_2},v_{i_3}$ are any three consecutive vertices of $A$, then $d_{C_n}(v_{i_1},v_{i_3})\in \{2q, 2q+r\}$, which is strictly larger than $d_{C_n}(v_{j},v_{l})\in \{q, q+r\}$, where $v_j$ and $v_l$ are arbitrary consecutive vertices of $A$. In consequence, we deduce that for any vertex $v_j\in A$, no Steiner $(A\setminus\{v_j\})$-tree contains the vertex $v_j$. Thus, $A$ is a $k$-Steiner general position set of $C_n$ and so ${\rm sgp}_k(C_n)\ge k+1$ as required. Assume now that $r\ge q$. The idea is then to make a partition of $V(C_n)$ into $k+2$ sets of consecutive vertices starting with $v_0$: $r-q+1$ sets of cardinality $q+1$; $k+1-r+q-1$ sets of cardinality $q$; and one set of cardinality $q-1$. Take the first vertex from each of the first $k+1$ sets (and no vertex from the set of cardinality $q-1$). The set obtained is: $$A'=\{v_0,v_{q+1},\dots , v_{(r-q+1)(q+1)}, v_{(r-q+1)(q+1)+q}, \dots, v_{(r-q+1)(q+1)+(k+1-r+q-2)q}\}\,.$$ Notice that the last element of $A'$ is $v_{(r-q+1)(q+1)+(k+1-r+q-2)q}=v_{n-2q+1}$. If $v_{i_1}$, $v_{i_2}$, and $v_{i_3}$ are any three consecutive vertices of $A'$, then $d_{C_n}(v_{i_1},v_{i_3})\in \{2q, 2q+1, 2q+2, 3q\}$. Moreover, if $v_j$ and $v_l$ are two consecutive vertices of $A'$, then $d_{C_n}(v_{j},v_{l})\in \{q, q+1, 2q-1\}$. Thus, $d_{C_n}(v_{i_1},v_{i_3})$ is always strictly larger than $d_{C_n}(v_{j},v_{l})$. As a consequence, we again deduce that for any vertex $v_j\in A'$, no Steiner $(A'\setminus\{v_j\})$-tree contains the vertex $v_j$. Therefore, $A'$ is a $k$-Steiner general position set of $C_n$. Hence, also in this second case we have ${\rm sgp}_k(C_n)\ge k+1$ and we are done.
\hfill $\square$ \bigskip \section{Joins of graphs} \label{sec:joins} In this section we give a formula for the $k$-Steiner general position number of graph joins. For the Steiner diameter of joins see~\cite{wang-2019}, and for the Steiner diameter of several other graph operations~\cite{mao-2017, wang-2019}. The join $G\vee H$ of disjoint graphs $G$ and $H$ is the graph with vertex set $V(G\vee H)=V(G)\cup V(H)$ and edge set $$E(G\vee H)=E(G)\cup E(H)\cup \{gh:g\in V(G),h\in V(H)\}.$$ Known families of graphs that can be presented as the join of two graphs include complete graphs $K_n=K_p\vee K_{n-p}$, complete bipartite graphs $K_{s,t}=\overline{K}_s\vee \overline{K}_t$, wheel graphs $W_r=K_1\vee C_{r-1}$, $r\geq 4$, and fan graphs $F_n=K_1\vee P_{n-1}$, $n\geq 2$. It is well known that $\omega(G\vee H) =\omega(G) + \omega(H)$, and it is also not difficult to extend this result to $s\omega_k(G\vee H)=s\omega_k(G)+s\omega_k(H)$ for connected graphs $G$ and $H$. In the general case the corresponding result still holds, as shown next. \begin{lemma} \label{lem:join} If $G$ and $H$ are graphs and $k\in [2:n(G)+n(H)]$, then $$s\omega_k(G\vee H)=s\omega_k(G)+s\omega_k(H).$$ \end{lemma} \noindent{\bf Proof.\ } If $A\subseteq V(G)$ and $B\subseteq V(H)$ are $k$-Steiner cliques of $G$ and $H$, respectively, then $A\cup B$ is a $k$-Steiner clique of $G\vee H$, even if $G$ or $H$ is disconnected with every component having fewer than $k$ vertices. In this latter case, one selects any $\min\{n(G),k-1\}$ vertices in $G$, which together with a $k$-Steiner clique in $H$ form a $k$-Steiner clique of $G\vee H$. A similar argument can be used symmetrically for $H$. Thus, $s\omega_k(G\vee H)\ge s\omega_k(G)+s\omega_k(H)$. Suppose that $s\omega_k(G\vee H)>s\omega_k(G)+s\omega_k(H)$ and let $S$ be a largest $k$-Steiner clique of $G\vee H$. Hence, it must happen that $|S\cap V(G)|>s\omega_k(G)$ or $|S\cap V(H)|>s\omega_k(H)$.
Without loss of generality, we may assume that $|S\cap V(G)|>s\omega_k(G)$. Then $S\cap V(G)$ is not a $k$-Steiner clique of $G$, and there exists a $k$-subset $B$ of $S\cap V(G)$, where $G[B]$ is not connected. This also means $(G\vee H)[B]$ is not connected, and $S$ is not a $k$-Steiner clique of $G\vee H$, a contradiction. Hence, $s\omega_k(G\vee H)\leq s\omega_k(G)+s\omega_k(H)$ and the equality follows. \hfill $\square$ \bigskip Let $G$ be a graph and let $A\subseteq V(G)$ be a set of vertices of cardinality at least $k$. Then we say that $A$ is a $k$-\textit{Steiner join-critical set} of $G$ if for each $k$-subset $B$ of $A$ we have $d_{G[A]}(B)\neq k$. That is, $A$ is a $k$-Steiner join-critical set if there exists no $k$-set $B\subseteq A$ such that a Steiner $B$-tree in $G[A]$ contains $k+1$ vertices. By ${\rm sjc}_k(G)$ we denote the cardinality of a largest $k$-Steiner join-critical set. For a given set $D\subseteq V(G)$, if every connected component of $G[D]$ is of order at most $k$, then $D$ is $k$-Steiner join-critical. For the particular case in which $D=V(G)$, if every connected component of $G$ has order at most $k$, then ${\rm sjc}_k(G) = n(G)$. Note also that, by definition, if $k \ge n(G)$, then ${\rm sjc}_k(G) = n(G)$. It seems that determining ${\rm sjc}_k(G\vee H)$ is a hard problem, but one can express the exact value for ${\rm sgp}_k(G\vee H)$ in terms of ${\rm sjc}_k(G)$, ${\rm sjc}_k(H)$, and $s\omega_k(G\vee H)$. In addition, by Lemma~\ref{lem:join}, the latter invariant can also be expressed by related invariants of $G$ and $H$. \begin{theorem}\label{join} If $G$ and $H$ are graphs and $k\in[2:n(G\vee H)-1]$, then $${\rm sgp}_k(G\vee H)=\max\{s\omega_k(G\vee H),{\rm sjc}_k(G), {\rm sjc}_k(H)\}.$$ \end{theorem} \noindent{\bf Proof.\ } Let $G$ and $H$ be graphs and let $M=\max\{s\omega_k(G\vee H),{\rm sjc}_k(G), {\rm sjc}_k(H)\}$. If $M=s\omega_k(G\vee H)$, then ${\rm sgp}_k(G\vee H)\geq M$ by Remark \ref{rem:trivial}. 
Suppose next that $M={\rm sjc}_k(G)$. Let $A\subseteq V(G)$ be a $k$-Steiner join-critical set of $G$ of cardinality ${\rm sjc}_k(G)$. We wish to show that $A$ is a $k$-Steiner general position set of $G\vee H$. Let $B$ be any $k$-subset of $A$. If $G[B]$ is connected, then we are done. Hence assume that $G[B]$ is not connected. Let $h$ be an arbitrary vertex of $H$. Then the set $B_h=B\cup\{h\}$ induces a connected subgraph of $G\vee H$, which means that $d_{G\vee H}(B)\leq k$. On the other hand, there does not exist a Steiner $B$-tree of order $k+1$ that contains only vertices of $A$, because $A$ is $k$-Steiner join-critical in $G$. So, no Steiner $B$-tree in $G\vee H$ contains an additional vertex from $A$, and consequently $A$ is a $k$-Steiner general position set of $G\vee H$, meaning that ${\rm sgp}_k(G\vee H)\geq M$ also holds in this case. By the symmetry of $G$ and $H$ in $G\vee H$, we also get that ${\rm sgp}_k(G\vee H)\geq M$ when $M={\rm sjc}_k(H)$. Suppose now that there exists a $k$-Steiner general position set $A$ of $G\vee H$ of cardinality greater than $M$. Assume first that $A_H=A\cap V(H)\neq \emptyset$ and $A_G=A\cap V(G)\neq \emptyset$. Since $|A|>M\geq s\omega_k(G\vee H)$, Lemma \ref{lem:join} implies that $|A_G|>s\omega_k(G)$ or $|A_H|>s\omega_k(H)$. Assume without loss of generality that $|A_G|>s\omega_k(G)$. Recall that for any graph $G$ either \begin{itemize} \item $s\omega_k(G)=\min\{n(G), k-1\}$ (which means that every connected component of $G$ has order smaller than $k$), or \item $s\omega_k(G)\geq k$. \end{itemize} Let $M_1=\min\{n(G), k-1\}$. If $M_1=k-1$, then in both situations above, $|A_G|>M_1$ implies that $|A_G|\geq k$. This means that there exists a $k$-subset $B$ of $A_G$ such that $G[B]$ is not connected and $d_{G\vee H}(B)\geq k$.
The set $B_h=B\cup\{h\}$, with $h\in A_H$, induces a connected subgraph of $G\vee H$, hence $d_{G\vee H}(B)=k$. So, there is a Steiner $B$-tree on the vertex set $B_h$, and it contains the vertex $h$ from $A\setminus B$, a contradiction with $A$ being a $k$-Steiner general position set of $G\vee H$. So, let now $M_1 = n(G) < k$. In this case we have $|A_G|> M_1 = n(G)$, a contradiction again. It remains to consider the case when $A_G=\emptyset$ or $A_H=\emptyset$. We may, without loss of generality, assume that $A_H=\emptyset$. Because $|A|>M\geq {\rm sjc}_k(G)$, $A$ is not $k$-Steiner join-critical and there exists a $k$-subset $B$ of $A$ for which $d_{G[A]}(B)=k$. Clearly, $G[B]$ is not connected and $d_{G\vee H}(B)=k$, a contradiction with $A$ being a $k$-Steiner general position set. Hence ${\rm sgp}_k(G\vee H)\leq M$ and the equality follows. \hfill $\square$ \bigskip To give some applications of Theorem~\ref{join}, we first determine exact results for ${\rm sjc}_k(P_n)$ and ${\rm sjc}_k(C_n)$. \begin{proposition}\label{pathcycle} Let $n\ge 3$. If $k\in[2:n-1]$, then ${\rm sjc}_k(P_n)=n-\left\lfloor \frac{n}{k+1}\right\rfloor$. If $\ell\in[2:n-2]$, then ${\rm sjc}_{\ell}(C_n)=n-1-\left\lfloor \frac{n-1}{\ell+1}\right\rfloor$. Moreover, ${\rm sjc}_{n-1}(C_n)=n$. \end{proposition} \noindent{\bf Proof.\ } Divide the vertex set of $P_n$ into $\left\lfloor \frac{n}{k+1}\right\rfloor$ sets of $k+1$ consecutive vertices and a remainder set with at most $k$ vertices. Let $Q$ be the set consisting of the last vertex from each of the $\left\lfloor \frac{n}{k+1}\right\rfloor$ sets of $k+1$ consecutive vertices. Every connected component of the subgraph induced by $B=V(P_n)-Q$ has at most $k$ vertices, therefore $B$ is a $k$-Steiner join-critical set of cardinality $n-\left\lfloor \frac{n}{k+1}\right\rfloor$.
Suppose that ${\rm sjc}_k(P_n)>n-\left\lfloor \frac{n}{k+1}\right\rfloor$ and let $B$ be a $k$-Steiner join-critical set of cardinality ${\rm sjc}_k(P_n)$. Then the subgraph induced by $B$ contains a connected component $C$ with at least $k+1$ vertices. Let $P$ be a subpath of $C$ on $k+1$ vertices and let $x$ be an internal vertex of $P$. Then $d_{P_n[B]}(V(P)-\{x\})=k$, a contradiction. So, the equality holds for paths. For cycles we can use the same steps for $\ell\in[2:n-2]$, only that we also need to put the last vertex into $Q$, and then the result follows. If $\ell=n-1$, then any $n-1$ vertices of $C_n$ induce a connected graph and we are done. \hfill $\square$ \bigskip We can now apply Theorem~\ref{join} to specific families of graphs as follows. \begin{corollary}\label{exact} The following assertions hold for positive integers $k,n,r,s$. \begin{itemize} \item[(i)] If $k\in[2:r+s-1]$, then ${\rm sgp}_k(K_{r+s})={\rm sgp}_k(K_r\vee K_s)=r+s$. \item[(ii)] If $n\geq 6$ and $k\in[2:n-1]$, then ${\rm sgp}_k(W_n)={\rm sgp}_k(K_1\vee C_{n-1})=\max\{k+1,n-2-\left\lfloor \frac{n-2}{k+1}\right\rfloor\}$. \item[(iii)] If $n\geq 4$ and $k\in[2:n-1]$, then ${\rm sgp}_k(F_n)={\rm sgp}_k(K_1\vee P_{n-1})=\max\{k+1,n-1-\left\lfloor \frac{n-1}{k+1}\right\rfloor\}$. \item[(iv)] If $r\leq s$ and $k\in[2:r+s-1]$, then $${\rm sgp}_k(K_{r,s})={\rm sgp}_k(\overline{K}_r\vee \overline{K}_s)=\left\{\begin{array}{ll} \max\{s,\min\{k-1,r\}+k-1\}; & k\leq s, \\[0.1cm] r+s; & k>s. \end{array} \right.$$ \item[(v)] If $k\in[2:r+s-1]$, then $${\rm sgp}_k(K_r\vee \overline{K}_s)=\left\{\begin{array}{ll} r+\min\{s,k-1\}; & k>\min\{r,s\}, \\[0.1cm] \max\{r+k-1,s\}; & k\leq\min\{r,s\}. \end{array} \right.$$ \end{itemize} \end{corollary} \noindent{\bf Proof.\ } The results are obtained by straightforward applications of Lemma~\ref{lem:join} and Theorem~\ref{join}. For items $(ii)$ and $(iii)$, Proposition~\ref{pathcycle} is also needed. For some cases, Proposition \ref{prop:charact-n-d-connected} can also be applied.
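The closed formula for paths in Proposition~\ref{pathcycle} can be cross-checked by brute force on small instances. The following sketch (helper name `path_sjc` is ours) relies on the fact used in the proof above: the components of an induced subgraph of $P_n$ are runs of consecutive vertices, so a $k$-set inside one run has its unique Steiner tree spanning the interval between its extremes.

```python
from itertools import combinations

def path_sjc(n, k):
    """Brute-force sjc_k(P_n), vertices 0..n-1.  A set A is k-Steiner
    join-critical if no k-subset B of A has Steiner distance exactly k
    in P_n[A].  Components of P_n[A] are maximal runs of consecutive
    integers; for B inside one run the Steiner tree spans [min(B), max(B)]."""
    best = 0
    for size in range(k, n + 1):
        for A in combinations(range(n), size):
            # label each vertex of A with its run (component of P_n[A])
            run, runs = 0, {}
            for idx, v in enumerate(A):
                if idx and v == A[idx - 1] + 1:
                    runs[v] = run
                else:
                    run += 1
                    runs[v] = run
            critical = all(
                len({runs[v] for v in B}) > 1 or max(B) - min(B) != k
                for B in combinations(A, k)
            )
            if critical:
                best = max(best, size)
    return best

# compare with the formula n - floor(n/(k+1)) for all small cases
for n in range(3, 8):
    for k in range(2, n):
        assert path_sjc(n, k) == n - n // (k + 1)
```

The check confirms, for instance, ${\rm sjc}_3(P_7)=7-\lfloor 7/4\rfloor=6$, realized by deleting the middle vertex.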
\hfill $\square$ \bigskip \section{Lexicographic products} \label{sec:lexico} Let $G$ and $H$ be two graphs. The lexicographic product $G\circ H$ is the graph with $V(G\circ H)=V(G)\times V(H)$, in which two vertices $(g,h)$ and $(g',h')$ are adjacent if $gg'\in E(G)$, or $g=g'$ and $hh'\in E(H)$. The lexicographic product can be seen as a generalization of the join because $K_2\circ G\cong G\vee G$ for any graph $G$. The map $p_G:(g,h)\mapsto g$ is the projection of $V(G\circ H)$ to $V(G)$. The set $G^h=\{(g,h):g\in V(G)\}$ is called the $G$-\emph{layer} (through $h$). Similarly, $^gH=\{(g,h):h\in V(H)\}$ is called the $H$-\emph{layer} (through $g$). The subgraphs of $G\circ H$ induced by $G^h$ and by $^gH$ are clearly isomorphic to $G$ and $H$, respectively. Steiner trees behave relatively nicely with respect to the first factor $G$ of the lexicographic product. More accurately, the following lemma was proved in \cite[Lemma 3.1]{ACKP}. \begin{lemma} \label{simple} Let $g_{1},\ldots ,g_{k}$ be different vertices of a connected graph $G$. Then for any (not necessarily different) vertices $h_{1},\ldots ,h_{k}$ of a graph $H$, a Steiner tree of $g_{1},\ldots ,g_{k}$ (in $G$) and a Steiner tree of $(g_{1},h_{1}),\ldots ,(g_{k},h_{k})$ (in $G\circ H$) have the same size. \end{lemma} This describes the situation for any set $B=\{(g_{1},h_{1}),\ldots ,(g_{k},h_{k})\}$ of vertices that project to at least two different vertices of $G$. Namely, the size of a Steiner $B$-tree equals the size of a Steiner tree of $p_G(B)=\{g_1,\ldots, g_k\}$ plus $\sum_i (m_i-1)$, where $m_i$ denotes the number of vertices from $B$ that project to $g_i$. So, the only case that is not governed by Steiner trees in $G$ occurs when all the vertices of $B$ project to the same vertex $g_1$. Let $k$ and $\ell $ be two positive integers with $k\leq\ell$.
A set $A\subseteq V(G)$ is a $[k:\ell]$-\emph{Steiner general position set} of a graph $G$, or a $[k:\ell]$-sgp set for short, if it is a $j$-Steiner general position set for every $j\in[k:\ell]$. The cardinality of a largest $[k:\ell]$-sgp set of $G$ is denoted by ${\rm sgp}_{[k:\ell]}(G)$. The family of all $[k:\ell]$-sgp sets of a graph $G$ is denoted by ${\cal G}_{k,\ell}$. For every $k$-Steiner general position set $S$, we partition $S$ into two sets ${\cal I}_S$ and ${\cal J}_S$, where ${\cal I}_S$ contains all isolated vertices of the subgraph $G[S]$ and ${\cal J}_S=S\setminus {\cal I}_S$. Every set of vertices of cardinality at most $k$ is a $[k:\ell]$-sgp set. On the other hand, as can be seen from the proof of Theorem~\ref{thm:cycle}, for a cycle $C_n$, no set with $k+2$ vertices is a $k$-sgp set. So, any $[k:\ell]$-sgp set of $C_n$ contains at most $k+1$ vertices. Fig.~\ref{figure} shows a graph where the set $S_1=\{v_1,v_2,v_3,w\}$ is a 2-sgp set, but not a 3-sgp set. However, the set $S_2=\{x,y,z,w\}$ is a $[2:13]$-sgp set. Clearly, ${\cal I}_{S_1}=S_1$ and ${\cal I}_{S_2}=S_2$, while ${\cal J}_{S_1}=\emptyset={\cal J}_{S_2}$. \begin{theorem}\label{lex} Let $G$ and $H$ be nontrivial graphs, let $G$ be connected, and let $k\in[2:n(G)\cdot n(H)-1]$. If $j=\left\lceil \frac{k}{n(H)}\right\rceil$ and $\ell=\min\{k,n(G)\}$, then $${\rm sgp}_k(G\circ H)\geq\left\{\begin{array}{ll} \max_{S\in {\cal G}_{2,k}}\{|{\cal I}_S|{\rm sjc}_k(H)+|{\cal J}_S|s\omega_k(H)\}; & k\leq n(H), \\[0.1cm] {\rm sgp}_{[j:\ell]}(G)n(H); & n(H)<k<n(G)\cdot n(H). \end{array} \right.$$ Moreover, the equality holds if $k>(n(G)-1)n(H)$. \end{theorem} \noindent{\bf Proof.\ } Assume first that $k\leq n(H)$ and let $M=\max_{S\in {\cal G}_{2,k}}\{|{\cal I}_S|{\rm sjc}_k(H)+|{\cal J}_S|s\omega_k(H)\}$. Fix an arbitrary $S\in {\cal G}_{2,k}$ together with ${\cal I}_S$ and ${\cal J}_S$. In addition, let $D_1$ be a ${\rm sjc}_k(H)$-set and let $D_2$ be an $s\omega_k(H)$-set.
We will show that $A=({\cal I}_S\times D_1)\cup ({\cal J}_S\times D_2)$ is a $k$-sgp set of $G\circ H$. Let $B$ be any $k$-subset of $A$. Suppose first that all vertices of $B$ belong to one layer $^gH$. Clearly, any Steiner $B$-tree contains either $k$ or $k+1$ vertices. Moreover, we have $d_{G\circ H}(B)=k-1$ when $p_H(B)$ induces a connected subgraph of $H$. If $g\in {\cal J}_S$, then $B$ induces a connected subgraph since $D_2$ is an $s\omega_k(H)$-set and we are done. If $g\in {\cal I}_S$, then again we are done when $d_H(p_H(B))=k-1$, since $B$ induces a connected graph. Otherwise, $d_H(p_H(B))\geq k$, and the additional vertex of a Steiner $B$-tree, say $(x,y)$, belongs either to $^gH$ or to $^{g'}H$ for some neighbor $g'$ of $g$. If $x=g$, then $y\notin D_1$ by the definition of a ${\rm sjc}_k(H)$-set and $(x,y)\notin A$. If $x=g'$, then $(x,y)\notin A$ because $g\in {\cal I}_S$. Thus, the only vertices from $A$ that can be included in a Steiner $B$-tree are those from $B$. Suppose now that $|p_G(B)|=j>1$, that is, $j\in [2:k]$. As $S\in {\cal G}_{2,k}$, any Steiner $p_G(B)$-tree in $G$ contains no vertices of $S$ other than those of $p_G(B)$. As a consequence of Lemma \ref{simple}, every Steiner $B$-tree in $G\circ H$ contains no vertices of $A$ other than those of $B$. Hence, $A$ is a $k$-sgp set and we have ${\rm sgp}_k(G\circ H)\geq M$. Let now $n(H)<k<n(G)\cdot n(H)$, $j=\left\lceil \frac{k}{n(H)}\right\rceil$, and $\ell=\min\{k,n(G)\}$. We will show that ${\rm sgp}_k(G\circ H)\geq {\rm sgp}_{[j:\ell]}(G)n(H)$ by proving that $A=A_G\times V(H)$ is a $k$-Steiner general position set of $G\circ H$, where $A_G$ is a ${\rm sgp}_{[j:\ell]}(G)$-set. Let $B$ be any $k$-subset of $A$ and let $B_G=p_G(B)$. Clearly, $j\leq|B_G|\leq \ell$, and no Steiner $B_G$-tree contains an additional vertex from $A_G$ because $A_G$ is a ${\rm sgp}_{[j:\ell]}(G)$-set.
But then, by Lemma \ref{simple} and the comment following it, no Steiner $B$-tree contains an additional vertex from $A$. Thus, $A$ is a $k$-Steiner general position set of $G\circ H$ and the first inequality follows. If $k>(n(G)-1)n(H)$, then $p_G(B) = V(G)$ for any $k$-set $B$. Since $G$ is connected, $(G\circ H)[B]$ is also connected and every Steiner $B$-tree contains only vertices from $B$. Therefore, the equality holds. \hfill $\square$ \bigskip If we set $k=2$, then we can show the equality in Theorem \ref{lex}. For this, notice that in any 2-Steiner join-critical set $A$, two nonadjacent vertices cannot have any of their common neighbors in $A$. Hence, $G[A]$ is a disjoint union of complete graphs. Moreover, $S\in{\cal G}_{2,2}$ simply means that $S$ is a general position set of $G$. The study of the general position number of the lexicographic product of graphs was initiated in \cite{klavzar-2019}, but only a connection between this invariant and other related structures was established there. We next give a formula for the general position number of this product. \begin{theorem}\label{lexgp} If $G$ and $H$ are nontrivial graphs where $G$ is connected, then $${\rm gp}(G\circ H)=\max_{S\in {\cal G}_{2,2}}\{|{\cal I}_S|{\rm sjc}_2(H)+|{\cal J}_S|\omega(H)\}.$$ \end{theorem} \noindent{\bf Proof.\ } By Theorem \ref{lex} we have ${\rm gp}(G\circ H)\geq \max_{S\in {\cal G}_{2,2}}\{|{\cal I}_S|{\rm sjc}_2(H)+|{\cal J}_S|\omega(H)\}$. To show the equality, let $A$ be a ${\rm gp}(G\circ H)$-set and let $A_G=p_G(A)$. We denote by ${\cal I}_{A_G}$ the set of isolated vertices of $G[A_G]$, and let ${\cal J}_{A_G}=A_G\setminus{\cal I}_{A_G}$. Suppose first that there exists $g\in {\cal I}_{A_G}$ such that $|^gH\cap A|>{\rm sjc}_2(H)\geq 2$. By the definition of ${\rm sjc}_2(H)$, there exists a $2$-subset $B$ of $^gH\cap A$ such that some Steiner $B$-tree contains a vertex from $(^gH\cap A)-B$, a contradiction with $A$ being a ${\rm gp}(G\circ H)$-set.
Thus, $|^gH\cap A|\leq{\rm sjc}_2(H)$ for every $g\in {\cal I}_{A_G}$. Assume now that there exists $g\in {\cal J}_{A_G}$ such that $|^gH\cap A|>\omega(H)\geq 1$. By the definition of $\omega(H)$, there exists a $2$-subset $B$ of $^gH\cap A$ such that the two vertices of $p_H(B)$ are not adjacent in $H$. Therefore, $B\cup\{(g',h)\}$ induces a Steiner $B$-tree, where $g'\in A_G$ is a neighbor of $g$ and $(g',h)\in A$, the same contradiction again. Hence, for every ${\rm gp}(G\circ H)$-set $A$ we have $|A|\leq|{\cal I}_{A_G}|{\rm sjc}_2(H)+|{\cal J}_{A_G}|\omega(H)$. We still need to show that $A_G\in {\cal G}_{2,2}$. If not, then there exists a $2$-subset $B_G=\{g,g'\}$ of $A_G$ such that a $g,g'$-geodesic contains another vertex $g_0$ from $A_G$. Let $h,h',h_0\in V(H)$ be such that $(g,h),(g',h'),(g_0,h_0)\in A$. Clearly, $(g_0,h_0)$ belongs to a $(g,h),(g',h')$-geodesic in $G\circ H$, a contradiction with $A$ being a ${\rm gp}(G\circ H)$-set. \hfill $\square$ \bigskip Notice that the first two paragraphs of the above proof also apply for every $2< k\leq n(H)$. However, several obstacles arise in the last paragraph when trying to obtain the equality in Theorem \ref{lex}. We next illustrate that we indeed need sets from ${\cal G}_{2,k}$ when $2< k\leq n(H)$. For this, notice that, by Theorem~\ref{thm:cycle}, the cardinality of a $[2:k]$-sgp set $S$ of $C_n$ is at most three for $n\geq 5$. By Theorem~\ref{lex}, we have $${\rm sgp}_k(C_n\circ K_\ell)\geq 3\ell.$$ On the other hand, if a $k$-Steiner general position set $S$ of $C_n\circ K_\ell$ projects to more than three vertices of $C_n$, then let $g_1,\dots,g_t$ be the consecutive vertices of $p_G(S)$ on $C_n$. By $S_{i,j}$ we denote the subset of $S$ that projects to $p_G(S)-\{g_i,g_j\}$, where $|i-j|>1$ and $\{i,j\}\neq\{1,t\}$. If $|S_{i,j}|\geq k$, then there exists a $k$-subset $B$ of $S_{i,j}$ that projects to some vertices $g_p,g_r$, where $i<p<j$ and ($r<i$ or $r>j$).
Clearly, there exists a Steiner $B$-tree that contains a vertex from $S\cap {}^{g_i}H$ or from $S\cap {}^{g_j}H$, a contradiction. So, $|S_{i,j}|<k$ and consequently $|S|<2k-1$. Hence $S$ is not a ${\rm sgp}_k(C_n\circ K_\ell)$-set. With this we have also shown the following. \begin{corollary}\label{lexCK} If $n\geq 5$ and $\ell\geq 2$, then $${\rm sgp}_k(C_n\circ K_\ell)=3\ell.$$ \end{corollary} \section{Split graphs and infinite grids} \label{sec:grids} A connected graph $G$ is a \emph{split graph} if $V(G)$ can be partitioned into two sets $Q$ and $I$ such that $G[Q]$ is a clique $K_r$ and $G[I]$ is the edgeless graph $\overline{K}_s$. By $G_{r,s}$ we denote a split graph with clique $K_r$ and independent set $\overline{K}_s$. Among joins, $K_r\vee \overline{K}_s$ is a split graph (see Corollary \ref{exact}\,$(v)$ for its $k$-Steiner general position number). We order the vertices of $I=\{v_1,\ldots,v_s\}$ by their degrees, that is, $\delta(v_1)\geq\cdots\geq\delta(v_s)$. Let $i(G)$ be defined as follows. If $\delta(v_1)\le r-k+1$, then $i(G)=0$. Otherwise, $i(G)$ denotes the largest integer $i\in[s]$ such that $\delta(v_i)>r-k+i$. A \emph{universal vertex} of a graph $G$ is a vertex of degree $n(G)-1$. The set of all universal vertices of a graph $G_{r,s}$ is denoted by $U$. We also define $$u_k(G_{r,s})=\left\{\begin{array}{ll} |U|; & k>s, \\[0.1cm] 0; & k\leq s. \end{array} \right.$$ \begin{theorem}\label{split} If $G_{r,s}$ is a split graph, then $${\rm sgp}_k(G_{r,s})\geq\max\{r+i(G_{r,s}),s+u_k(G_{r,s}),k\}.$$ \end{theorem} \noindent{\bf Proof.\ } Let $G_{r,s}$ be a split graph with the clique $Q$ on $r$ vertices and the independent set $I$ on $s$ vertices, and let $M=\max\{r+i(G_{r,s}),s+u_k(G_{r,s}),k\}$. If $M=k$, then the inequality holds by Remark \ref{rem:trivial} and we are done. Suppose next that $M=s+u_k(G_{r,s})$. Then $s+u_k(G_{r,s})>k$, for otherwise we are in the case $M=k$ above.
Let $A$ consist of all vertices of $I$, together with all universal vertices if $k>s$. Select any $k$-subset $B$ of $A$. If $u_k(G_{r,s})=0$, then $A\subseteq I$ and every Steiner $B$-tree contains, besides the vertices of $B$, only vertices from $Q$. Moreover, if there is a universal vertex in $A$, then there is at least one universal vertex also in $B$ because $k>s$. In this case, $B$ induces a connected subgraph and no Steiner $B$-tree contains a vertex from $A\setminus B$. Thus, $A$ is a $k$-Steiner general position set of $G_{r,s}$ and ${\rm sgp}_k(G_{r,s})\geq M$. Finally, let $M=r+i(G_{r,s})$. Then $r+i(G_{r,s})>k$, for otherwise we are in the earlier situation $M=k$. If $i(G_{r,s})=0$, then $M=r$, we set $A=Q$, and the conclusion is clear because the vertices of $Q$ induce a clique. Suppose now that $A$ contains all $r$ vertices of $Q$ and the first $i(G_{r,s})$ vertices of $I$ with respect to the descending degree ordering. Let $B$ be any $k$-subset of $A$. First notice that $i(G_{r,s})<k$. Indeed, if $i(G_{r,s})\geq k$, then $\delta(v_k)>r-k+k=r$, which is not possible, as every vertex from $I$ can have at most $r$ neighbors. This means that $B_Q=B\cap Q\neq \emptyset$ and there are at least $k-i(G_{r,s})$ vertices from $Q$ in $B$. Since $\delta(v_i)\geq\delta(v_{i(G_{r,s})})>r-k+i(G_{r,s})$ for every $i\in[i(G_{r,s})]$, every vertex from $B_I=B\cap I$ has more than $r-k+i(G_{r,s})$ neighbors in $Q$, hence fewer than $k-i(G_{r,s})$ non-neighbors in $Q$. In other words, every vertex from $B_I$ has a neighbor in $B_Q$. But then $B$ induces a connected subgraph and every Steiner $B$-tree contains only vertices from $B$. Therefore, $A$ is a $k$-Steiner general position set of $G_{r,s}$ and ${\rm sgp}_k(G_{r,s})\geq M$ follows. \hfill $\square$ \bigskip By $P_\infty$ we denote the two-way infinite path. Let $V(P_{\infty})=\{\dots,-2,-1,0,1,2,\dots\}$, where $i$ is adjacent to $j$ if and only if $|i-j|=1$.
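Before moving on, note that the quantities $i(G_{r,s})$ and $u_k(G_{r,s})$ in Theorem~\ref{split} can be evaluated mechanically from the data of $G_{r,s}$. The following sketch (function name and interface are ours) implements the definitions and evaluates the bound; for $K_4\vee \overline{K}_3$ with $k=3$ it returns $6$, which agrees with the exact value $\max\{r+k-1,s\}=6$ of Corollary~\ref{exact}\,$(v)$.

```python
def split_bound(r, degrees, universal, k):
    """Lower bound max{r + i(G), s + u_k(G), k} of the theorem for a split
    graph G_{r,s}: r = clique size, degrees = degrees of the vertices of the
    independent set I, universal = number of universal vertices."""
    s = len(degrees)
    degs = sorted(degrees, reverse=True)      # delta(v_1) >= ... >= delta(v_s)
    i_g = 0                                   # largest i with delta(v_i) > r - k + i
    for i, d in enumerate(degs, start=1):
        if d > r - k + i:
            i_g = i
    u_k = universal if k > s else 0
    return max(r + i_g, s + u_k, k)

# K_4 join the empty graph on 3 vertices: every I-vertex has degree 4,
# and all 4 clique vertices are universal.
print(split_bound(4, [4, 4, 4], 4, 3))        # i(G)=2, u_3=0 -> max{6,3,3} = 6
```

Since the degrees of $I$ are sorted non-increasingly while $r-k+i$ increases with $i$, the loop correctly returns the largest index satisfying the defining inequality.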
The infinite grid $P_{\infty}\,\Box\, P_{\infty}$ is the Cartesian product of two infinite paths, that is, $V(P_{\infty}\,\Box\, P_{\infty})=\{(i,j)\,:\,i,j\in \mathbb{Z}\}$ and $(i,j)(k,\ell)\in E(P_{\infty}\,\Box\, P_{\infty})$ when $|i-k|+|j-\ell|=1$, see Fig.~\ref{fig:grid-k-4}. \begin{theorem}\label{grid} ${\rm sgp}_k(P_{\infty}\,\Box\, P_{\infty})\ge 2k$. \end{theorem} \noindent{\bf Proof.\ } Let $S=S_1\cup S_2$ be a set of vertices with $S_1= \{(k-1,0),(k-2,1),\dots,(1,k-2),(0,k-1)\}$ and $S_2=\{(-k+1,0),(-k+2,-1),\dots,(-1,-k+2),(0,-k+1)\}$. See Fig.~\ref{fig:grid-k-4} for an example when $k=4$. Notice that $|S_1|=|S_2|=k$. We will prove that $S$ is a $k$-Steiner general position set of $P_{\infty}\,\Box\, P_{\infty}$. Let $B\subset S$ be a $k$-subset. If $B=S_1$ or $B=S_2$, then it is easy to see that no Steiner $B$-tree contains a vertex of $S\setminus B$. Hence, we may assume $B_1=B\cap S_1\ne \emptyset$ and $B_2=B\cap S_2\ne \emptyset$. \begin{figure}[ht!]
\centering \begin{tikzpicture}[scale=.55, transform shape] \node [draw, shape=circle,scale=0.7,fill=black] (a1) at (0,0) {}; \node [draw, shape=circle,scale=0.7,fill=black] (a2) at (0,1) {}; \node [draw, shape=circle,scale=0.7,fill=black] (a3) at (0,2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (a4) at (0,3) {}; \node [draw, shape=circle,scale=0.7,fill=black] (a5) at (0,4) {}; \node [draw, shape=circle,scale=0.7,fill=black] (a6) at (0,5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (a7) at (0,6) {}; \node [draw, shape=circle,scale=0.7,fill=black] (a8) at (0,7) {}; \node [draw, shape=circle,scale=0.7,fill=black] (a9) at (0,8) {}; \node [draw, shape=circle,scale=0.7,fill=black] (b1) at (1,0) {}; \node [draw, shape=circle,scale=0.7,fill=black] (b2) at (1,1) {}; \node [draw, shape=circle,scale=0.7,fill=black] (b3) at (1,2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (b4) at (1,3) {}; \node [draw, shape=rectangle,scale=1.2,fill=black] (b5) at (1,4) {}; \node [draw, shape=circle,scale=0.7,fill=black] (b6) at (1,5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (b7) at (1,6) {}; \node [draw, shape=circle,scale=0.7,fill=black] (b8) at (1,7) {}; \node [draw, shape=circle,scale=0.7,fill=black] (b9) at (1,8) {}; \node [draw, shape=circle,scale=0.7,fill=black] (c1) at (2,0) {}; \node [draw, shape=circle,scale=0.7,fill=black] (c2) at (2,1) {}; \node [draw, shape=circle,scale=0.7,fill=black] (c3) at (2,2) {}; \node [draw, shape=rectangle,scale=1.2,fill=black] (c4) at (2,3) {}; \node [draw, shape=circle,scale=0.7,fill=black] (c5) at (2,4) {}; \node [draw, shape=circle,scale=0.7,fill=black] (c6) at (2,5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (c7) at (2,6) {}; \node [draw, shape=circle,scale=0.7,fill=black] (c8) at (2,7) {}; \node [draw, shape=circle,scale=0.7,fill=black] (c9) at (2,8) {}; \node [draw, shape=circle,scale=0.7,fill=black] (d1) at (3,0) {}; \node [draw, shape=circle,scale=0.7,fill=black] (d2) at (3,1) {}; \node [draw, 
shape=rectangle,scale=1.2,fill=black] (d3) at (3,2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (d4) at (3,3) {}; \node [draw, shape=circle,scale=0.7,fill=black] (d5) at (3,4) {}; \node [draw, shape=circle,scale=0.7,fill=black] (d6) at (3,5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (d7) at (3,6) {}; \node [draw, shape=circle,scale=0.7,fill=black] (d8) at (3,7) {}; \node [draw, shape=circle,scale=0.7,fill=black] (d9) at (3,8) {}; \node [draw, shape=circle,scale=0.7,fill=black] (e1) at (4,0) {}; \node [draw, shape=rectangle,scale=1.2,fill=black] (e2) at (4,1) {}; \node [draw, shape=circle,scale=0.7,fill=black] (e3) at (4,2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (e4) at (4,3) {}; \node [draw, shape=circle,scale=0.7,fill=black] (e5) at (4,4) {}; \node [draw, shape=circle,scale=0.7,fill=black] (e6) at (4,5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (e7) at (4,6) {}; \node [draw, shape=rectangle,scale=1.2,fill=black] (e8) at (4,7) {}; \node [draw, shape=circle,scale=0.7,fill=black] (e9) at (4,8) {}; \node [draw, shape=circle,scale=0.7,fill=black] (f1) at (5,0) {}; \node [draw, shape=circle,scale=0.7,fill=black] (f2) at (5,1) {}; \node [draw, shape=circle,scale=0.7,fill=black] (f3) at (5,2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (f4) at (5,3) {}; \node [draw, shape=circle,scale=0.7,fill=black] (f5) at (5,4) {}; \node [draw, shape=circle,scale=0.7,fill=black] (f6) at (5,5) {}; \node [draw, shape=rectangle,scale=1.2,fill=black] (f7) at (5,6) {}; \node [draw, shape=circle,scale=0.7,fill=black] (f8) at (5,7) {}; \node [draw, shape=circle,scale=0.7,fill=black] (f9) at (5,8) {}; \node [draw, shape=circle,scale=0.7,fill=black] (g1) at (6,0) {}; \node [draw, shape=circle,scale=0.7,fill=black] (g2) at (6,1) {}; \node [draw, shape=circle,scale=0.7,fill=black] (g3) at (6,2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (g4) at (6,3) {}; \node [draw, shape=circle,scale=0.7,fill=black] (g5) at (6,4) {}; \node [draw, 
shape=rectangle,scale=1.2,fill=black] (g6) at (6,5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (g7) at (6,6) {}; \node [draw, shape=circle,scale=0.7,fill=black] (g8) at (6,7) {}; \node [draw, shape=circle,scale=0.7,fill=black] (g9) at (6,8) {}; \node [draw, shape=circle,scale=0.7,fill=black] (h1) at (7,0) {}; \node [draw, shape=circle,scale=0.7,fill=black] (h2) at (7,1) {}; \node [draw, shape=circle,scale=0.7,fill=black] (h3) at (7,2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (h4) at (7,3) {}; \node [draw, shape=rectangle,scale=1.2,fill=black] (h5) at (7,4) {}; \node [draw, shape=circle,scale=0.7,fill=black] (h6) at (7,5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (h7) at (7,6) {}; \node [draw, shape=circle,scale=0.7,fill=black] (h8) at (7,7) {}; \node [draw, shape=circle,scale=0.7,fill=black] (h9) at (7,8) {}; \node [draw, shape=circle,scale=0.7,fill=black] (i1) at (8,0) {}; \node [draw, shape=circle,scale=0.7,fill=black] (i2) at (8,1) {}; \node [draw, shape=circle,scale=0.7,fill=black] (i3) at (8,2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (i4) at (8,3) {}; \node [draw, shape=circle,scale=0.7,fill=black] (i5) at (8,4) {}; \node [draw, shape=circle,scale=0.7,fill=black] (i6) at (8,5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (i7) at (8,6) {}; \node [draw, shape=circle,scale=0.7,fill=black] (i8) at (8,7) {}; \node [draw, shape=circle,scale=0.7,fill=black] (i9) at (8,8) {}; \draw(a1)--(a9);\draw(b1)--(b9);\draw(c1)--(c9);\draw(d1)--(d9);\draw(e1)--(e9); \draw(f1)--(f9);\draw(g1)--(g9);\draw(h1)--(h9);\draw(i1)--(i9); \draw(a1)--(i1);\draw(a2)--(i2);\draw(a3)--(i3);\draw(a4)--(i4);\draw(a5)--(i5); \draw(a6)--(i6);\draw(a7)--(i7);\draw(a8)--(i8);\draw(a9)--(i9); \draw[dotted](0,8)--(0,9);\draw[dotted](1,8)--(1,9);\draw[dotted](2,8)--(2,9);\draw[dotted](3,8)--(3,9);\draw[dotted](4,8)--(4,9); \draw[dotted](5,8)--(5,9);\draw[dotted](6,8)--(6,9);\draw[dotted](7,8)--(7,9);\draw[dotted](8,8)--(8,9); 
\draw[dotted](0,0)--(0,-1);\draw[dotted](1,0)--(1,-1);\draw[dotted](2,0)--(2,-1);\draw[dotted](3,0)--(3,-1);\draw[dotted](4,0)--(4,-1); \draw[dotted](5,0)--(5,-1);\draw[dotted](6,0)--(6,-1);\draw[dotted](7,0)--(7,-1);\draw[dotted](8,0)--(8,-1); \draw[dotted](-1,0)--(0,0);\draw[dotted](-1,1)--(0,1);\draw[dotted](-1,2)--(0,2);\draw[dotted](-1,3)--(0,3);\draw[dotted](-1,4)--(0,4); \draw[dotted](-1,5)--(0,5);\draw[dotted](-1,6)--(0,6);\draw[dotted](-1,7)--(0,7);\draw[dotted](-1,8)--(0,8); \draw[dotted](8,0)--(9,0);\draw[dotted](8,1)--(9,1);\draw[dotted](8,2)--(9,2);\draw[dotted](8,3)--(9,3);\draw[dotted](8,4)--(9,4); \draw[dotted](8,5)--(9,5);\draw[dotted](8,6)--(9,6);\draw[dotted](8,7)--(9,7);\draw[dotted](8,8)--(9,8); \node [scale=1.2] at (0,-1) {$-4$};\node [scale=1.2] at (1,-1) {$-3$};\node [scale=1.2] at (2,-1) {$-2$}; \node [scale=1.2] at (3,-1) {$-1$};\node [scale=1.2] at (4,-1) {$0$};\node [scale=1.2] at (5,-1) {$1$}; \node [scale=1.2] at (6,-1) {$2$};\node [scale=1.2] at (7,-1) {$3$};\node [scale=1.2] at (8,-1) {$4$}; \node [scale=1.2] at (-1.2,0) {$-4$};\node [scale=1.2] at (-1.2,1) {$-3$};\node [scale=1.2] at (-1.2,2) {$-2$}; \node [scale=1.2] at (-1.2,3) {$-1$};\node [scale=1.2] at (-1.2,4) {$0$};\node [scale=1.2] at (-1.2,5) {$1$}; \node [scale=1.2] at (-1.2,6) {$2$};\node [scale=1.2] at (-1.2,7) {$3$};\node [scale=1.2] at (-1.2,8) {$4$}; \end{tikzpicture} \caption{For $k=4$, the vertices of the set $S$ are drawn as squares.} \label{fig:grid-k-4} \end{figure} Let $j_1,i_2$ and $j_2,i_1$ be the largest and the smallest indices, respectively, such that the vertices $(i_1,j_1),(i_2,j_2)$ belong to $S_1$. Analogously, let $j_4,i_3$ and $j_3,i_4$ be the largest and the smallest indices, respectively, such that the vertices $(i_3,j_3),(i_4,j_4)$ belong to $S_2$. We remark that it could happen that $(i_1,j_1)=(i_2,j_2)$ or $(i_3,j_3)=(i_4,j_4)$. Let $X=\{(i_1,j_1),(i_2,j_2),(i_3,j_3),(i_4,j_4)\}$ and consider a Steiner $X$-tree.
According to the structure of the set $S$, we notice that only those vertices of $Y$ given below could belong to a Steiner $X$-tree. See Fig.~\ref{fig:Steiner-tree-shape} for a sketch of $Y$. \begin{align*} Y= & \left(\{i_3,\dots,i_1\}\times \{j_4,\dots,j_2\}\right)\cup \left(\{i_1,\dots,i_2\}\times\{j_2\}\right)\cup \left(\{i_1\}\times\{j_2,\dots,j_1\}\right) \\ &\cup \left(\{i_4,\dots,i_3\}\times\{j_4\}\right) \cup \left(\{i_3\}\times\{j_3,\dots,j_4\}\right). \end{align*} \begin{figure}[ht!] \centering \begin{tikzpicture}[scale=.6, transform shape] \draw(0,-4)--(0,4); \draw(-4.5,0)--(4,0); \node [draw, shape=circle,scale=0.7,fill=black] (a) at (-4,-2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (b) at (-2.5,-3.5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (c) at (-2.5,-2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (d) at (3,2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (e) at (1.5,3.5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (f) at (1.5,2) {}; \draw[thick] (c) rectangle (f); \draw[thick] (a)--(c)--(b); \draw[thick] (d)--(f)--(e); \draw[dotted] (d)--(3,0); \draw[dotted] (a)--(-4,0); \draw[dotted] (b)--(0,-3.5); \draw[dotted] (e)--(0,3.5); \node [scale=1.2] at (0,-4.4) {$0$}; \node [scale=1.2] at (-5,0) {$0$}; \node [scale=1.2] at (3.2,-0.3) {$i_2$}; \node [scale=1.2] at (1.85,-0.3) {$i_1$}; \node [scale=1.2] at (-4,0.3) {$i_4$}; \node [scale=1.2] at (-2.15,0.3) {$i_3$}; \node [scale=1.2] at (0.35,-3.5) {$j_3$}; \node [scale=1.2] at (0.35,-1.7) {$j_4$}; \node [scale=1.2] at (-0.35,3.5) {$j_1$}; \node [scale=1.2] at (-0.35,1.7) {$j_2$}; \end{tikzpicture} \begin{tikzpicture}[scale=.6, transform shape] \draw(0,-4)--(0,4); \draw(-4.5,0)--(4,0); \node [draw, shape=circle,scale=0.7,fill=black] (a) at (-4,-2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (b) at (-2.5,-3.5) {}; \node [draw, shape=circle,scale=0.7,fill=black] (c) at (-2.5,-2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (d) at (1.5,2) {}; \node 
[draw, shape=circle,scale=0.7,fill=black] (e) at (1.5,2) {}; \node [draw, shape=circle,scale=0.7,fill=black] (f) at (1.5,2) {}; \draw[thick] (c) rectangle (f); \draw[thick] (a)--(c)--(b); \draw[thick] (d)--(f)--(e); \draw[dotted] (a)--(-4,0); \draw[dotted] (b)--(0,-3.5); \node [scale=1.2] at (0,-4.4) {$0$}; \node [scale=1.2] at (-5,0) {$0$}; \node [scale=1.2] at (2.35,-0.3) {$i_1=i_2$}; \node [scale=1.2] at (-4,0.3) {$i_4$}; \node [scale=1.2] at (-2.15,0.3) {$i_3$}; \node [scale=1.2] at (0.35,-3.5) {$j_3$}; \node [scale=1.2] at (0.35,-1.7) {$j_4$}; \node [scale=1.2] at (-0.85,2.3) {$j_2=j_1$}; \end{tikzpicture} \caption{Two sketches of possibilities for the set $Y$. In the second one, $i_1=i_2$ and $j_2=j_1$, or equivalently, $(i_1,j_1)=(i_2,j_2)$.}\label{fig:Steiner-tree-shape} \end{figure} Now, we first observe that every vertex $(i,j)\in S$ such that $i\notin [i_1:i_2]\cup [i_4:i_3]$ and $j\notin [j_2:j_1]\cup [j_3:j_4]$ does not belong to $Y$, and so, it is not included in any Steiner $X$-tree. On the other hand, if $(i,j)\in B_1\setminus X$, then to obtain a Steiner $X\cup \{(i,j)\}$-tree we can only add to a Steiner $X$-tree some vertices of the set $\{i_1,\dots,i\}\times \{j_2,\dots,j\}$. This allows us to claim that if $(i',j')\notin B_1$, then it will not belong to any Steiner $X\cup \{(i,j)\}$-tree for every $(i,j)\in B_1\setminus X$. This procedure can be iterated for all the remaining vertices of $B_1$, and consequently, we obtain that no Steiner $X\cup B_1$-tree contains vertices from $S\setminus B_1$. By symmetric arguments, we obtain that no Steiner $X\cup (B_1\cup B_2)$-tree contains vertices from $S\setminus (B_1\cup B_2)$, which means precisely that no Steiner $B$-tree contains vertices from $S\setminus B$. Therefore, $S$ is a $k$-Steiner general position set for $P_{\infty}\,\Box\, P_{\infty}$, and the lower bound follows.
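The set $S$ from the proof above is easy to generate programmatically; the following sketch (helper name `grid_sgp_set` is ours) builds $S=S_1\cup S_2$ for a given $k$ and checks that $|S|=2k$ and that the two parts lie on the anti-diagonals $i+j=\pm(k-1)$, matching Fig.~\ref{fig:grid-k-4}.

```python
def grid_sgp_set(k):
    """The set S = S_1 ∪ S_2 from the proof: two diagonals of k vertices
    each in the infinite grid, S_1 on i + j = k - 1 with i, j >= 0 and
    S_2 on i + j = -(k - 1) with i, j <= 0."""
    S1 = {(k - 1 - t, t) for t in range(k)}
    S2 = {(-(k - 1) + t, -t) for t in range(k)}
    return S1, S2

for k in range(2, 8):
    S1, S2 = grid_sgp_set(k)
    assert len(S1) == len(S2) == k and not S1 & S2, "|S| = 2k"
    assert all(i + j == k - 1 and i >= 0 and j >= 0 for i, j in S1)
    assert all(i + j == -(k - 1) and i <= 0 and j <= 0 for i, j in S2)
```

This only reproduces the construction and its cardinality; the general position property itself is what the proof establishes.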
\hfill $\square$ \bigskip \section{Concluding remarks and problems} In Proposition~\ref{prop:k-1-and-not-k} we have demonstrated that a $k$-Steiner general position set need not be a $k'$-Steiner general position set for $k' > k$. Hence, with respect to inclusion, there is no monotonicity for $k$-Steiner general position sets in any graph $G$; this, however, does not rule out a monotonicity relation for the value of the parameter ${\rm sgp}_k(G)$. We therefore pose the following problem. \begin{problem} Is there any monotonicity relation between ${\rm sgp}_k(G)$ and ${\rm sgp}_{k+1}(G)$? \end{problem} It is already known that computing the general position number of graphs is NP-hard in general. In this light, the answer to the following problem seems predictable. \begin{problem} Determine the complexity of computing the $k$-Steiner general position number. \end{problem} We are not aware of any lexicographic product for which the bound of Theorem~\ref{lex} is not sharp, hence we pose: \begin{problem} Is the bound of Theorem~\ref{lex} sharp for all lexicographic products? \end{problem} It is easy to construct several split graphs such that the bound of Theorem~\ref{split} is sharp. It remains to describe all split graphs for which the equality holds. \begin{problem} For which split graphs is the bound of Theorem~\ref{split} sharp? \end{problem} The bound from Theorem~\ref{grid} is tight because ${\rm sgp}_2(P_{\infty}\,\Box\, P_{\infty})={\rm gp}(P_{\infty}\,\Box\, P_{\infty})=4$ was proved in~\cite{manuel-2018b}. We wonder whether a parallel result holds for each $k > 2$: \begin{problem} Does the equality ${\rm sgp}_k(P_{\infty}\,\Box\, P_{\infty})=2k$ hold for $k>2$? \end{problem} Finally, in~\cite{klavzar-2021} the general position number of arbitrary integer lattices was determined, that is, of the Cartesian product of finitely many factors $P_{\infty}$. Hence we also pose: \begin{problem} Investigate ${\rm sgp}_k(P_{\infty}\,\Box\, \cdots \,\Box\, P_{\infty})$ for $k>2$.
\end{problem} \section*{Acknowledgements} Sandi Klav\v{z}ar and Iztok Peterin acknowledge the financial support from the Slovenian Research Agency (research core funding No.\ P1-0297 and projects J1-9109, J1-1693, N1-0095, N1-0108). Dorota Kuziak and Ismael G. Yero have been partially supported by the Spanish Ministry of Science and Innovation through the grant PID2019-105824GB-I00.
https://arxiv.org/abs/1308.5459
Unseparated pairs and fixed points in random permutations
In a uniform random permutation \Pi of [n] := {1,2,...,n}, the set of elements k in [n-1] such that \Pi(k+1) = \Pi(k) + 1 has the same distribution as the set of fixed points of \Pi that lie in [n-1]. We give three different proofs of this fact using, respectively, an enumeration relying on the inclusion-exclusion principle, the introduction of two different Markov chains to generate uniform random permutations, and the construction of a combinatorial bijection. We also obtain the distribution of the analogous set for circular permutations that consists of those k in [n] such that \Pi(k+1 mod n) = \Pi(k) + 1 mod n. This latter random set is just the set of fixed points of the commutator [\rho, \Pi], where \rho is the n-cycle (1,2,...,n). We show for a general permutation \eta that, under weak conditions on the number of fixed points and 2-cycles of \eta, the total variation distance between the distribution of the number of fixed points of [\eta,\Pi] and a Poisson distribution with expected value 1 is small when n is large.
\section{Introduction} The goal of any procedure for shuffling a deck of $n$ cards labeled with, say, $[n] := \{1,2,\ldots,n\}$ is to take the cards in some specified original order, which we may take to be $(1,2,\ldots,n)$, and re-arrange them randomly in such a way that all $n!$ possible orders are close to being equally likely. A natural approach to checking empirically whether the outcomes of a given shuffling procedure deviate from uniformity is to apply some fixed numerical function to each of the permutations produced by several independent instances of the shuffle and determine whether the resulting empirical distribution is close to the distribution of the random variable that would arise from applying the chosen function to a uniformly distributed permutation. {\em Smoosh} shuffling (also known as {\em wash, corgi, chemmy} or {\em Irish} shuffling) is a simple physical mechanism for randomizing a deck of cards -- see \cite{You11} for an article that has a brief discussion of smoosh shuffling and a link to a video of the first author carrying it out, and \cite{Dia13, wiki_shuffle} for other short descriptions. In their forthcoming analysis of this shuffle, \cite{BaCoDi13} use the approach described above with the function that takes a permutation $\pi \in \mathfrak{S}_n$, the set of permutations of $[n]$, and returns the cardinality of the set of labels $k \in [n-1]$ such that $\pi(k+1) = \pi(k)+1$. That is, they count the number of pairs of cards that were adjacent in the original deck and aren't separated or in a different relative order at the completion of the shuffle. For example, the permutation $\pi$ of $[7]$ given by \[ \pi = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 5 & 6 & 7 & 4 & 1 & 2 & 3 \end{pmatrix} \] has $\{k \in [6]: \pi(k+1) = \pi(k)+1\} = \{1,2,5,6\}$.
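As a quick sanity check (this snippet is ours, not part of the original analysis), the unseparated set of the example permutation can be computed mechanically in Python, storing $\pi$ as a tuple with $\pi(k)$ at index $k-1$:

```python
def unseparated(pi):
    """Labels k in [n-1] with pi(k+1) = pi(k) + 1; pi[k-1] holds pi(k)."""
    return {k for k in range(1, len(pi)) if pi[k] == pi[k - 1] + 1}

pi = (5, 6, 7, 4, 1, 2, 3)  # the example permutation of [7]
print(unseparated(pi))      # -> {1, 2, 5, 6}
```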
If we write $\Pi_n$ for a random permutation that is uniformly distributed on $\mathfrak{S}_n$ and $\mathbf{S}_n \subseteq [n-1]$ for the set of labels $k \in [n-1]$ such that $\Pi_n(k+1) = \Pi_n(k) + 1$, then, in order to support the contention that the smoosh shuffle is producing a random permutation with a distribution close to uniform, it is necessary to know, at least approximately, the distribution of the integer-valued random variable $\#\mathbf{S}_n$. Banklader et al. \cite{BaCoDi13} use Stein's method (see, for example, \cite{MR2121796} for a survey) to show that the distribution of $\#\mathbf{S}_n$ is close to a Poisson distribution with expected value $1$ when $n$ is large. The problem of computing $\mathbb{P}\{\#\mathbf{S}_n = 0\}$ (or, more correctly, the integer $n! \mathbb{P}\{\#\mathbf{S}_n = 0\}$) appears in various editions of the 19th century textbook on combinatorics and probability, {\em Choice and Chance} by William Allen Whitworth. For example, Proposition XXXII in Chapter IV of \cite{Whi63} gives \begin{equation} \label{E:Whitworth} \mathbb{P}\{\#\mathbf{S}_n = 0\} = \sum_{k=0}^n \frac{(-1)^k}{k!} + \frac{1}{n} \sum_{k=0}^{n-1} \frac{(-1)^k}{k!}. \end{equation} This formula is quite suggestive. The probability that $\Pi_n$ has no fixed points is $\sum_{k=0}^n \frac{(-1)^k}{k!}$ by de Montmort's \cite{deM_13} celebrated enumeration of derangements, and so if we write $\mathbf{T}_n \subseteq [n-1]$ for the set of labels $k \in [n-1]$ such that $\Pi_n(k) = k$ (that is, $\mathbf{T}_n$ is the set of fixed points of $\Pi_n$ that fall in $[n-1]$), then $\mathbb{P}\{\#\mathbf{T}_n = 0\} = \mathbb{P}\{\#\mathbf{S}_n = 0\}$ because in order for the set $\mathbf{T}_n$ to be empty either the permutation $\Pi_n$ has no fixed points or it has $n$ as a fixed point (an event that has probability $\frac{1}{n}$) and the resulting restriction of $\Pi_n$ to $[n-1]$ is a permutation of $[n-1]$ that has no fixed points.
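Whitworth's formula \eqref{E:Whitworth} is easy to confirm by exhaustive enumeration for small $n$; the following sketch (ours, for illustration only) does so exactly using rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def count_no_unseparated(n):
    # number of permutations pi of [n] with no k in [n-1] such that
    # pi(k+1) = pi(k) + 1
    return sum(all(pi[k] != pi[k - 1] + 1 for k in range(1, n))
               for pi in permutations(range(1, n + 1)))

def whitworth(n):
    # n! * P{#S_n = 0} according to Whitworth's formula, computed exactly
    s = lambda m: sum(Fraction((-1) ** k, factorial(k)) for k in range(m + 1))
    return factorial(n) * (s(n) + Fraction(1, n) * s(n - 1))

for n in range(2, 8):
    assert count_no_unseparated(n) == whitworth(n)
print("Whitworth's formula confirmed for n = 2, ..., 7")
```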
The following theorem and the remark after it were pointed out to us by Jim Pitman; they show that much more is true. Pitman's proof was similar to the enumerative one we present in Section~\ref{S:enumerative} and he asked if there are other, more ``conceptual'' proofs. We present two further proofs in Section~\ref{S:Markov} and Section~\ref{S:bijective} that we hope make it clearer ``why'' the result is true. \begin{theorem} \label{T:main} For all $n \in \mathbb{N}$, the random sets $\mathbf{S}_n$ and $\mathbf{T}_n$ have the same distribution. In particular, for $0 \le m \le n-1$, \[ \begin{split} \mathbb{P}\{\#\mathbf{S}_n = m\} & = \mathbb{P}\{\#\mathbf{T}_n = m\} \\ & = \left(\frac{1}{m!} \sum_{k=0}^{n-m} \frac{(-1)^k}{k!}\right) \frac{n-m}{n} + \left(\frac{1}{(m+1)!} \sum_{k=0}^{n-m-1} \frac{(-1)^k}{k!}\right) \frac{m+1}{n}. \\ \end{split} \] \end{theorem} \begin{remark} \label{R:exchangeable} Perhaps the most surprising consequence of this result is that the random set $\mathbf{S}_n \subseteq [n-1]$ is exchangeable; that is, conditional on $\# \mathbf{S}_n = m$, the conditional distribution of $\mathbf{S}_n$ is that of $m$ random draws without replacement from the set $[n-1]$. This follows because the same observation holds for the random set $\mathbf{T}_n$ by a symmetry that does not at first appear to have a counterpart for $\mathbf{S}_n$. For example, it does not seem obvious {\em a priori} that $\mathbb{P}\{\{i,i+1\} \subseteq \mathbf{S}_n\}$ for some $i \in [n-2]$ should be the same as $\mathbb{P}\{\{j,k\} \subseteq \mathbf{S}_n\}$ for some $j,k \in [n-1]$ with $|j-k| > 1$. 
\end{remark} \begin{remark} \label{R:count_distribution} Once we know that $\mathbf{S}_n$ and $\mathbf{T}_n$ have the same distribution, the formula given in Theorem~\ref{T:main} for the common distribution of $\#\mathbf{S}_n$ and $\#\mathbf{T}_n$ follows from the well-known fact that the probability that $\Pi_n$ has $m$ fixed points is $\frac{1}{m!} \sum_{k=0}^{n-m} \frac{(-1)^k}{k!}$ (something that follows straightforwardly from the formula above for the probability that $\Pi_n$ has no fixed points) coupled with the observation that $\#\mathbf{T}_n = m$ if and only if either $\Pi_n$ has $m$ fixed points and all of these fall in $[n-1]$ or $\Pi_n$ has $m+1$ fixed points and one of these is $n$. \end{remark} We present an enumerative proof of Theorem~\ref{T:main} in Section~\ref{S:enumerative}. Although this proof is simple, it is not particularly illuminating. We show in Section~\ref{S:Markov} that the result can be derived with essentially no computation from a comparison of two different ways of iteratively generating uniform random permutations. Theorem~\ref{T:main} is, of course, equivalent to the statement that for every subset $S \subseteq [n-1]$ the set $\{\pi \in \mathfrak{S}_n : \{k \in [n-1] : \pi(k+1) = \pi(k)+1\} = S\}$ has the same cardinality as $\{\pi \in \mathfrak{S}_n : \{k \in [n-1] : \pi(k) = k\} = S\}$, and so if the theorem holds then there must exist a bijection $\mathcal{H}: \mathfrak{S}_n \to \mathfrak{S}_n$ such that $\{k \in [n-1] : \pi(k+1) = \pi(k)+1\} = \{k \in [n-1] : \mathcal{H} \pi(k) = k\}$. Conversely, exhibiting such a bijection proves the theorem, and we present a natural construction of one in Section~\ref{S:bijective}. The analogue of $\mathbf{S}_n$ for circular permutations is the random set consisting of those $k \in [n]$ such that $\Pi(k+1 \mod n) = \Pi(k) + 1 \mod n$.
We obtain the distribution of this random set via an enumeration in Section~\ref{S:circular} and then present some bijective proofs of facts suggested by the enumerative results. Note that the latter random set is just the set of fixed points of the commutator $[\rho, \Pi]$, where $\rho$ is the $n$-cycle $(1,2,\ldots,n)$. In Section~\ref{S:commutator} we show for a general permutation $\eta$ that, under weak conditions on the number of fixed points and $2$-cycles of $\eta$, the total variation distance between the distribution of the number of fixed points of $[\eta,\Pi]$ and a Poisson distribution with expected value $1$ is small when $n$ is large. \begin{remark} It is clear from Theorem~\ref{T:main} that the common distribution of $\#\mathbf{S}_n$ and $\#\mathbf{T}_n$ is approximately Poisson with expected value $1$ when $n$ is large. Write $\mathbf{F}_n := \{k \in [n] : \Pi_n(k) = k\}$ for the set of fixed points of the uniform random permutation $\Pi_n$ and $\mathbb{Q}$ for the Poisson probability distribution with expected value $1$. It is well-known that the total variation distance between the distribution of $\# \mathbf{F}_n$ and $\mathbb{Q}$ is amazingly small: \[ d_{\mathrm{TV}}(\mathbb{P}\{\# \mathbf{F}_n \in \cdot\}, \mathbb{Q}) \le \frac{2^n}{n!}, \] and so it is natural to ask whether the common distribution of $\#\mathbf{S}_n$ and $\#\mathbf{T}_n$ is similarly close to $\mathbb{Q}$. Because $\mathbb{P}\{\#\mathbf{T}_n \ne \#\mathbf{F}_n\} = \frac{1}{n}$, we might suspect that the total variation distance between the distributions of $\#\mathbf{T}_n$ and $\#\mathbf{F}_n$ is on the order of $\frac{1}{n}$, and so the total variation distance between the distribution of $\#\mathbf{S}_n$ and $\mathbb{Q}$ is also of that order. 
Indeed, it follows from \eqref{E:Whitworth} that \[ \begin{split} \mathbb{P}\{\#\mathbf{S}_n = 0\} & = \mathbb{P}\{\#\mathbf{F}_n = 0\} + \frac{1}{n} \sum_{k=0}^{n-1} \frac{(-1)^k}{k!} \\ & \ge \mathbb{Q}\{0\} - \frac{2^n}{n!} + \frac{1}{n} \sum_{k=0}^{n-1} \frac{(-1)^k}{k!}, \\ \end{split} \] and so \[ d_{\mathrm{TV}}(\mathbb{P}\{\# \mathbf{S}_n \in \cdot\}, \mathbb{Q}) \ge \frac{e^{-1}}{n} + \mathrm{o}\left(\frac{1}{n}\right). \] \end{remark} \section{An enumerative proof} \label{S:enumerative} Our first approach to proving Theorem~\ref{T:main} is to compute $\#\{\pi \in \mathfrak{S}_n: \pi(k_i + 1) = \pi(k_i)+1, \, 1 \le i \le m\}$ for a subset $\{k_1, \ldots, k_m\} \subseteq [n-1]$ and show that this number is $(n-m)! = \#\{\pi \in \mathfrak{S}_n: \pi(k_i) = k_i, \, 1 \le i \le m\}$. This establishes that \[ \mathbb{P}\{\{k_1, \ldots, k_m\} \subseteq \mathbf{S}_n\} = \mathbb{P}\{\{k_1, \ldots, k_m\} \subseteq \mathbf{T}_n\}, \] and an inclusion-exclusion argument completes the proof. We begin by noting that we can build up a permutation of $[n]$ by first taking the elements of $[n]$ in any order and then imagining that we lay elements down successively so that the $h^{\mathrm{th}}$ element goes in one of the $h$ ``slots'' defined by the $h-1$ elements that have already been laid down, that is, the slot before the first element, the slot after the last element, or one of the $h-2$ slots between elements. Consider first the set $\{\pi \in \mathfrak{S}_n: \pi(k+1) = \pi(k)+1\}$ for some fixed $k \in [n-1]$. We can count this set by imagining that we first put down $k$ and $k+1$ next to each other in that order and then successively lay down the remaining elements $[n] \setminus \{k,k+1\}$ in such a way that no element is ever laid down in the slot between $k$ and $k+1$. It follows that \[ \#\{\pi \in \mathfrak{S}_n : \pi(k+1) = \pi(k)+1\} = 2 \times 3 \times \cdots \times (n-1) = (n-1)!, \] as required.
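The count just obtained, its general form $\#\{\pi : \pi(k_i + 1) = \pi(k_i)+1,\, 1 \le i \le m\} = (n-m)!$ stated as the goal of this section, and the resulting distributional identity of Theorem~\ref{T:main} can all be confirmed by brute force for small $n$; a sketch of ours in Python:

```python
from collections import Counter
from itertools import combinations, permutations
from math import factorial

n = 6
perms = list(permutations(range(1, n + 1)))

def unsep(pi):   # {k in [n-1] : pi(k+1) = pi(k) + 1}
    return frozenset(k for k in range(1, n) if pi[k] == pi[k - 1] + 1)

def fixed(pi):   # {k in [n-1] : pi(k) = k}
    return frozenset(k for k in range(1, n) if pi[k - 1] == k)

# for every subset {k_1,...,k_m} of [n-1], exactly (n-m)! permutations
# satisfy pi(k_i + 1) = pi(k_i) + 1 for 1 <= i <= m
for m in range(n):
    for K in combinations(range(1, n), m):
        assert sum(set(K) <= unsep(pi) for pi in perms) == factorial(n - m)

# inclusion-exclusion then gives the theorem: S_n and T_n agree as random sets
assert Counter(map(unsep, perms)) == Counter(map(fixed, perms))
print("counts and distributional identity verified for n =", n)
```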
Now consider the set $\{\pi \in \mathfrak{S}_n : \pi(k+1) = \pi(k)+1 \, \& \, \pi(k+2) = \pi(k+1)+1\}$ for some fixed $k \in [n-2]$. We can count this set by imagining that we first put down $k$, $k+1$ and $k+2$ next to each other in that order and then successively lay down the remaining elements $[n] \setminus \{k,k+1,k+2\}$ in such a way that no element is ever laid down in the slot between $k$ and $k+1$ or the slot between $k+1$ and $k+2$. The number of such permutations is thus $2 \times 3 \times \cdots \times (n-2) = (n-2)!$, again as required. On the other hand, suppose we fix $k,\ell \in [n-1]$ with $|k-\ell| > 1$ and consider the set $\{\pi \in \mathfrak{S}_n : \pi(k+1) = \pi(k)+1 \, \& \, \pi(\ell+1) = \pi(\ell)+1\}$. We imagine that we first put down $k$ and $k+1$ next to each other in that order and then $\ell$ and $\ell+1$ next to each other in that order either before or after the pair $k$ and $k+1$. There are two ways to do this. Then we successively lay down the remaining elements $[n] \setminus \{k,k+1, \ell, \ell+1\}$ in such a way that no element is ever laid down in the slot between $k$ and $k+1$ or the slot between $\ell$ and $\ell+1$. There are $3 \times 4 \times \cdots \times (n-2)$ ways to do this second part of the construction, and so the number of permutations we are considering is $2 \times 3 \times 4 \times \cdots \times (n-2) = (n-2)!$, once again as required. It is clear how this argument generalizes. Suppose we have a subset $\{k_1, \ldots, k_m\} \subseteq [n-1]$ and we wish to compute $\#\{\pi \in \mathfrak{S}_n : \pi(k_i + 1) = \pi(k_i)+1, \, 1 \le i \le m\}$. We can break $\{k_1, \ldots, k_m\}$ up into $r$ ``blocks'' of consecutive labels for some $r$. There are $r!$ ways to lay down the blocks and then $(r+1) \times (r+2) \times \cdots \times (n-m)$ ways of laying down the remaining $n-m-r$ labels (those of $[n]$ that do not belong to any block) so that no label is inserted into a slot within one of the blocks.
Thus, the cardinality we wish to compute is indeed $r! \times (r+1) \times (r+2) \times \cdots \times (n-m) = (n-m)!$. \section{A Markov chain proof} \label{S:Markov} The following proof proceeds by first showing that the random set $\mathbf{S}_n$ is exchangeable and then establishing that the distribution of $\# \mathbf{S}_n$ is the same as the distribution of $\# \mathbf{T}_n$ without explicitly calculating either distribution. Suppose that we build the uniform random permutations $\Pi_1, \Pi_2, \ldots$ sequentially in the following manner: $\Pi_{n+1}$ is obtained from $\Pi_n$ by inserting $n+1$ uniformly at random into one of the $n+1$ ``slots'' defined by the ordered list $\Pi_n$ (i.e. as in Section~\ref{S:enumerative}, we have slots before and after the first and last elements of the list and $n-1$ slots between successive elements). The choice of slot is independent of $\mathcal{F}_n$, where $\mathcal{F}_n$ is the $\sigma$-field generated by $\Pi_1, \ldots, \Pi_n$. It is clear that the set-valued stochastic process $(\mathbf{S}_n)_{n \in \mathbb{N}}$ is Markovian with respect to the filtration $(\mathcal{F}_n)_{n \in \mathbb{N}}$. In fact, if we write $\mathbf{S}_n = \{X_1^n, \ldots, X_{M_n}^n\}$, then \[ \mathbb{P}\{\mathbf{S}_{n+1} = \{X_1^n, \ldots, X_{M_n}^n\} \setminus \{X_i^n\} \, | \, \mathcal{F}_n\} = \frac{1}{n+1}, \quad 1 \le i \le M_n, \] corresponding to $n+1$ being inserted in the slot between the successive elements $X_i^n$ and $X_i^n + 1$ in the list, \[ \mathbb{P}\{\mathbf{S}_{n+1} = \{X_1^n, \ldots, X_{M_n}^n\} \cup \{n\} \, | \, \mathcal{F}_n\} = \frac{1}{n+1}, \] corresponding to $n+1$ being inserted in the slot to the right of $n$, and \[ \mathbb{P}\{\mathbf{S}_{n+1} = \{X_1^n, \ldots, X_{M_n}^n\} \, | \, \mathcal{F}_n\} = \frac{(n+1) - M_n - 1}{n+1}. \] Moreover, it is obvious from the symmetry inherent in these transition probabilities and induction that $\mathbf{S}_n$ is an exchangeable random subset of $[n-1]$ for all $n$. 
Furthermore, the nonnegative integer valued process $(M_n)_{n \in \mathbb{N}} = (\# \mathbf{S}_n)_{n \in \mathbb{N}}$ is also Markovian with respect to the filtration $(\mathcal{F}_n)_{n \in \mathbb{N}}$ with the following transition probabilities: \[ \mathbb{P}\{M_{n+1} = M_{n} - 1 \, | \, \mathcal{F}_n\} = \frac{M_{n}}{n+1}, \] \[ \mathbb{P}\{M_{n+1} = M_{n} \, | \, \mathcal{F}_n\} = \frac{(n+1) - M_n - 1}{n+1}, \] and \[ \mathbb{P}\{M_{n+1} = M_{n} + 1 \, | \, \mathcal{F}_n\} = \frac{1}{n+1}. \] Because the conditional distribution of $\mathbf{S}_n$ given $\# \mathbf{S}_n =m$ is, by exchangeability, the same as that of $\mathbf{T}_n$ given $\# \mathbf{T}_n =m$ for $0 \le m \le n-1$, it will suffice to show that the distribution of $\# \mathbf{S}_n$ is the same as that of $\# \mathbf{T}_n$. Moreover, because $\# \mathbf{T}_n$ has the same distribution as $\#\{2 \le k \le n : \Pi_n(k) = k\}$ for all $n \in \mathbb{N}$ and $\# \mathbf{S}_1 = \# \mathbf{T}_1 = 0$, it will certainly be enough to build another sequence $(\Sigma_n)_{n \in \mathbb{N}}$ such that \begin{itemize} \item $\Sigma_n$ is a uniform random permutation of $[n]$ for all $n \in \mathbb{N}$, \item $(\Sigma_n)_{n \in \mathbb{N}}$ is Markovian with respect to some filtration $(\mathcal{G}_n)_{n \in \mathbb{N}}$, \item $(N_n)_{n \in \mathbb{N}} := (\#\{2 \le k \le n : \Sigma_n(k) = k\})_{n \in \mathbb{N}}$ is also Markovian with respect to the filtration $(\mathcal{G}_n)_{n \in \mathbb{N}}$ with the following transition probabilities \[ \mathbb{P}\{N_{n+1} = N_{n} - 1 \, | \, \mathcal{G}_n\} = \frac{N_{n}}{n+1}, \] \[ \mathbb{P}\{N_{n+1} = N_{n} \, | \, \mathcal{G}_n\} = \frac{(n+1) - N_n - 1}{n+1}, \] and \[ \mathbb{P}\{N_{n+1} = N_{n} + 1 \, | \, \mathcal{G}_n\} = \frac{1}{n+1}. \] \end{itemize} We recall the simplest instance of the {\em Chinese restaurant process} that iteratively generates uniform random permutations (see, for example, \cite{MR2245368}). 
Individuals labeled $1,2,\ldots$ successively enter a restaurant equipped with an infinite number of round tables. Individual $1$ sits at some table. Suppose that after the first $n-1$ individuals have entered the restaurant we have a configuration of individuals sitting around some number of tables. When individual $n$ enters the restaurant he is equally likely to sit to the immediate left of one of the individuals already present or to sit at an unoccupied table. The permutation $\Sigma_n$ is defined in terms of the resulting seating configuration by setting $\Sigma_n(i) = j$, $i \ne j$, if individual $j$ is sitting immediately to the left of individual $i$ and $\Sigma_n(i) = i$ if individual $i$ is sitting by himself at some table. Each occupied table corresponds to a cycle of $\Sigma_n$ and, in particular, tables with a single occupant correspond to fixed points of $\Sigma_n$. It is clear that if we let $\mathcal{G}_n$ be the $\sigma$-field generated by $\Sigma_1, \ldots, \Sigma_n$, then all of the requirements listed above for $(\Sigma_n)_{n \in \mathbb{N}}$ and $(N_n)_{n \in \mathbb{N}}$ are met. \section{A bijective proof} \label{S:bijective} As we remarked in the Introduction, in order to prove Theorem~\ref{T:main} it suffices to find a bijection $\mathcal{H}: \mathfrak{S}_n \to \mathfrak{S}_n$ such that $\{k \in [n-1] : \pi(k+1) = \pi(k)+1\} = \{k \in [n-1] : \mathcal{H} \pi(k) = k\}$ for all $\pi \in \mathfrak{S}_n$. Not only will we find such a bijection, but we will prove an even more general result that requires we first set up some notation. Fix $1 \le h < n$. Let $\rho \in \mathfrak{S}_n$ be the permutation that maps $i \in [n]$ to $i + h \mod n \in [n]$. Next define the following bijection of $\mathfrak{S}_n$ to itself that is essentially the {\em transformation fondamentale} of \cite[Section 1.3]{MR0272642} (such bijections seem to have been first introduced implicitly in \cite[Chapter 8]{MR0096594}). 
Take a permutation $\pi$ and write it in cycle form $(a_1, a_2, \ldots, a_r)(b_1, b_2, \ldots, b_s) \cdots (c_1, c_2, \ldots, c_t)$, where in each cycle the leading element is the least element of the cycle and these leading elements form a decreasing sequence. That is, $a_1 > b_1 > \cdots > c_1$. Next, remove the parentheses to form an ordered listing $(a_1, a_2, \ldots, a_r, b_1, b_2 , \ldots , b_s, c_1, c_2, \ldots, c_t)$ of $[n]$ and define $\hat \pi \in \mathfrak{S}_n$ by taking $(\hat \pi(1), \hat \pi(2), \ldots, \hat \pi(n))$ to be this ordered listing. The following result for $h=1$ provides a bijection that establishes Theorem~\ref{T:main}. \begin{theorem} \label{T:bijection} For every $\pi \in \mathfrak{S}_n$, \[ \{k \in [n-h] : \widehat{\rho \pi}^{-1}(k+h) = \widehat{\rho \pi}^{-1}(k) + 1\} = \{k \in [n-h] : \pi(k) = k\}. \] \end{theorem} \begin{proof} Suppose for some $k \in [n-h]$ that $\pi(k) = k$. Then, $\rho \pi(k) = k+h$, because no reduction modulo $n$ takes place. If we write the cycle decomposition of $\rho \pi$ in the canonical form described above, then there will be a cycle of the form $(\ldots, k, k+h, \ldots)$ because of the convention that each cycle begins with its least element. After the parentheses are removed to form $\widehat{\rho \pi}$, we will have $\widehat{\rho \pi}(j) = k$ and $\widehat{\rho \pi}(j + 1) = k+h$ for some $j \in [n]$. Hence, $\widehat{\rho \pi}^{-1}(k) = j$ and $\widehat{\rho \pi}^{-1}(k+h) = j+1 = \widehat{\rho \pi}^{-1}(k)+1$. Conversely, suppose for some $k \in [n-h]$ that $\widehat{\rho \pi}^{-1}(k+h) = \widehat{\rho \pi}^{-1}(k) + 1$, so that $\widehat{\rho \pi}^{-1}(k)=j$ and $\widehat{\rho \pi}^{-1}(k+h) = j+1$ for some $j \in [n]$. Then, $\widehat{\rho \pi}(j)=k$ and $\widehat{\rho \pi}(j+1) = k+h$. 
The canonical cycle decomposition of $\rho \pi$ is obtained by taking the ordered listing $(\widehat{\rho \pi}(1), \widehat{\rho \pi}(2), \ldots, \widehat{\rho \pi}(n))$, placing left parentheses before each element that is smaller than its predecessors to the left, and then inserting right parentheses as necessary to produce a legal bracketing. It follows that $\rho \pi$ must have a cycle of the form $(\ldots, k, k+h, \ldots)$, and hence $\rho \pi(k) = k+h$. Thus, $\pi(k) = k$, as required. \end{proof} \begin{remark} We give the following example of the construction of $\widehat{\rho \pi}^{-1}$ from $\pi$ for the benefit of the reader. Suppose that $n=7$ and \[ \pi = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 7 & 2 & 6 & 4 & 1 & 3 & 5 \end{pmatrix}, \] so that $\pi$ has canonical cycle decomposition \[ (4) (3,6) (2) (1,7,5). \] For $h=1$, \[ \rho \pi = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 1 & 3 & 7 & 5 & 2 & 4 & 6 \end{pmatrix}. \] The canonical cycle decomposition of $\rho \pi$ is \[ (2,3,7,6,4,5) (1). \] Thus, \[ \widehat{\rho \pi} = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 2 & 3 & 7 & 6 & 4 & 5 & 1 \end{pmatrix} \] and \[ \widehat{\rho \pi}^{-1} = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 7 & 1 & 2 & 5 & 6 & 4 & 3 \end{pmatrix}. \] Note that it is indeed the case that \[ \{k \in [6] : \pi(k) = k\} = \{2,4\} = \{k \in [6] : \widehat{\rho \pi}^{-1}(k+1) = \widehat{\rho \pi}^{-1}(k) + 1\}. \] \end{remark} \begin{remark} It follows from Theorem~\ref{T:bijection} and the argument outlined in Remark~\ref{R:count_distribution} that the probability that the random variable $\#\{k \in [n-h] : \Pi_n(k+h) = \Pi_n(k) + 1\}$ takes the value $m$ is \[ \sum_{\ell = m}^{m+h} \left(\frac{1}{\ell!} \sum_{k=0}^{n-\ell} \frac{(-1)^k}{k!}\right) \frac{\binom{n-h}{m} \binom{h}{\ell - m}}{\binom{n}{\ell}} \] for $0 \le m \le n-h$.
\end{remark} \section{Circular permutations} \label{S:circular} A question closely related to the ones we have been considering so far is to ask for the distribution of the random set \[ \mathbf{U}_n := \{k \in [n] : \Pi_n(k+1 \mod n) = \Pi_n(k)+1 \mod n\}. \] That is, we think of our deck $[n]$ as being ``circularly ordered'', with $n$ followed by $1$, and ask for the distribution of the number of cards that are followed immediately by their original successor when we lay the shuffled deck out around the circumference of a circle. \begin{proposition} \label{P:circular} The random set $\mathbf{U}_n$ is exchangeable with \[ \mathbb{P}\{\#\mathbf{U}_n = m\} = \frac{1}{m!} \left(\sum_{h=0}^{n-m-1} (-1)^h \frac{1}{h!} \frac{n}{(n-m-h)} + (-1)^{n-m} \frac{1}{(n-m)!} n \right) \] for $0 \le m \le n$. \end{proposition} \begin{proof} Consider a subset $\{k_1, \ldots, k_m\} \subseteq [n]$. We wish to compute \[ \#\{\pi \in \mathfrak{S}_n: \pi(k_i + 1 \mod n) = \pi(k_i) + 1 \mod n, \, 1 \le i \le m\}. \] When $m=n$ this number is clearly $n$ and when $m=0$ it is $n!$. Consider $1 \le m \le n-1$. For some positive integer $r$ we can break $\{k_1, \ldots, k_m\}$ up into $r$ ``runs'' of labels that are ``consecutive'' modulo $n$; that is, we can write $\{k_1, \ldots, k_m\}$ as the disjoint union of sets $\{\ell_1, \ell_1+1, \ldots, \ell_1 + s_1-1\}$, $\{\ell_2, \ell_2 +1, \ldots, \ell_2 + s_2-1\}$, $\ldots$, $\{\ell_r, \ell_r +1, \ldots, \ell_r + s_r-1\}$, where all additions are $\mod n$ and $\ell_i + s_i \ne \ell_j$ for $i \ne j$. This leads to $r$ disjoint ``blocks'' $\{\ell_1, \ell_1+1, \ldots, \ell_1 + s_1\}$, $\ldots$, $\{\ell_r, \ell_r +1, \ldots, \ell_r + s_r\}$ of labels that must be kept together if we take the permutation and join up the last element of the resulting ordered listing of $[n]$ with the first to produce a circularly ordered list. There are $(r-1)!$ ways to circularly order the blocks.
Initially this leaves $r$ slots between the $r$ blocks when we think of them as being ordered around a circle. Also, there are initially $n-m-r$ labels that are not contained in some block. It follows that there are then $r \times (r+1) \times \cdots \times (n-m-1)$ ways of laying down the remaining $n-m-r$ elements of $[n]$ that aren't in a block so that no element is inserted into a slot within one of the blocks. Finally, there are $n$ places between the $n$ circularly ordered elements of $[n]$ where we can cut to produce a permutation of $[n]$. Thus, the cardinality we wish to compute is $(r-1)! \times r \times (r+1) \times \cdots \times (n-m-1) \times n = (n-m-1)! \times n$. We see that \[ \mathbb{P}\{\{k_1, \ldots, k_m\} \subseteq \mathbf{U}_n\} = \begin{cases} 1,& \quad m = 0, \\ \frac{1}{(n-m) (n-m+1) \cdots (n-1)},& \quad 1 \le m \le n-1, \\ \frac{1}{(n-1)!},& \quad m = n. \end{cases} \] Consequently, by inclusion-exclusion, \[ \begin{split} \mathbb{P}\{\mathbf{U}_n = \{k_1, \ldots, k_m\}\} & = \sum_{h=0}^{n-m-1} (-1)^h \binom{n-m}{h} \frac{1}{(n-m-h) (n-m-h+1) \cdots (n-1)} \\ & \quad + (-1)^{n-m} \frac{1}{(n-1)!}\\ & = \frac{(n-m)!}{(n-1)!} \sum_{h=0}^{n-m-1} (-1)^h \frac{1}{h!} \frac{1}{(n-m-h)} + (-1)^{n-m} \frac{1}{(n-1)!}.\\ \end{split} \] In particular, $\mathbf{U}_n$ is exchangeable and \[ \begin{split} \mathbb{P}\{\# \mathbf{U}_n = m\} & = \binom{n}{m} \left( \frac{(n-m)!}{(n-1)!} \sum_{h=0}^{n-m-1} (-1)^h \frac{1}{h!} \frac{1}{(n-m-h)} + (-1)^{n-m} \frac{1}{(n-1)!}\right)\\ & = \frac{1}{m!} \left( \sum_{h=0}^{n-m-1} (-1)^h \frac{1}{h!} \frac{n}{(n-m-h)} + (-1)^{n-m} \frac{1}{(n-m)!} n \right).\\ \end{split} \] \end{proof} \begin{remark} As expected, $\mathbb{P}\{\#\mathbf{U}_n = m\}$ converges to the Poisson probability $e^{-1} \frac{1}{m!}$ as $n \to \infty$.
\end{remark} The exchangeability of $\mathbf{U}_n$ implies that there is at least one bijection (and hence many) between the sets \[ \{\pi \in \mathfrak{S}_n: \pi(k_i' + 1 \mod n) = \pi(k_i') + 1 \mod n, \, 1 \le i \le m\} \] and \[ \{\pi \in \mathfrak{S}_n: \pi(k_i'' + 1 \mod n) = \pi(k_i'') + 1 \mod n, \, 1 \le i \le m\} \] for two subsets $\{k_1', \ldots, k_m'\}$ and $\{k_1'', \ldots, k_m''\}$ of $[n]$. This leads to the question of whether there is a bijection with a particularly nice description. Rather than pursue this question directly, we give a bijective explanation of the following interesting consequence of Proposition~\ref{P:circular} from which the desired bijection can be readily derived. Observe that \[ \begin{split} \mathbb{P}\{\mathbf{U}_n = \{k_1, \ldots, k_m\}\} & = \sum_{h=0}^{n-m-1} (-1)^h \binom{n-m}{h} \frac{1}{(n-m-h) \cdots (n-1)} \\ & \quad + (-1)^{n-m} \frac{1}{(n-1)!}, \\ \end{split} \] whereas \[ \begin{split} \mathbb{P}\{\mathbf{U}_{n-m} = \emptyset\} & = \sum_{h=0}^{n-m-1} (-1)^h \binom{n-m}{h} \frac{1}{(n-m-h) \cdots (n-m-1)} \\ & \quad + (-1)^{n-m} \frac{1}{(n-m-1)!}, \\ \end{split} \] so that \begin{equation} \label{E:circular_count} (n-1)! \mathbb{P}\{\mathbf{U}_n = \{k_1, \ldots, k_m\}\} = (n-m-1)! \mathbb{P}\{\mathbf{U}_{n-m} = \emptyset\}. \end{equation} Let $\rho \in \mathfrak{S}_n$ be the permutation that maps $i \in [n]$ to $i + 1 \mod n \in [n]$. Define an equivalence relation on $\mathfrak{S}_n$ by declaring that $\pi'$ and $\pi''$ are equivalent if and only if $\pi' \rho^k = \pi''$ for some $k \in \{0,1,\ldots,n-1\}$. We call the set of equivalence classes the circular permutations of $[n]$ and denote this set by $\mathfrak{C}_n$. Note that $\# \mathfrak{C}_n = (n-1)!$.
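Both Proposition~\ref{P:circular} and the count $\#\mathfrak{C}_n = (n-1)!$ can be confirmed by exhaustive enumeration for small $n$; a sketch of ours in Python, using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

n = 6

def U(pi):  # {k in [n] : pi(k+1 mod n) = pi(k) + 1 mod n}, labels taken in [n]
    return {k for k in range(1, n + 1) if pi[k % n] == pi[k - 1] % n + 1}

counts = [0] * (n + 1)
for pi in permutations(range(1, n + 1)):
    counts[len(U(pi))] += 1

def formula(m):  # P{#U_n = m} from the proposition, computed exactly
    s = sum(Fraction((-1) ** h * n, factorial(h) * (n - m - h))
            for h in range(n - m))
    s += Fraction((-1) ** (n - m) * n, factorial(n - m))
    return s / factorial(m)

for m in range(n + 1):
    assert Fraction(counts[m], factorial(n)) == formula(m)

# each equivalence class has n members, so #C_n = n!/n = (n-1)!
assert factorial(n) // n == factorial(n - 1)
print("circular distribution verified for n =", n)
```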
We will write $\sigma \in \mathfrak{C}_n$ as an ordered listing $(\sigma(1), \ldots, \sigma(n))$ of $[n]$, with the understanding that the listings produced by a cyclic permutation of the coordinates also represent $\sigma$: a permutation $\pi \in \mathfrak{S}_n$ is in the equivalence class $\sigma$ if for some $k \in \{0,1,\ldots,n-1\}$ we have $\pi(i) = \sigma(i + k \mod n)$ for $i \in [n]$. We can also think of $(\sigma(1), \ldots, \sigma(n))$ as the cycle representation of a permutation $\tilde \sigma$ of $[n]$ consisting of a single $n$-cycle (that is, the permutation $\tilde \sigma$ sends $\sigma(i)$ to $\sigma(i+1 \mod n)$). Hence we can also regard $\mathfrak{C}_n$ as the set of $n$-cycles in $\mathfrak{S}_n$. If $\pi \in \mathfrak{S}_n$, then the set \[ \{j \in [n] : \pi^{-1}(j+1 \mod n) = \pi^{-1}(j) + 1 \mod n\} \] is unchanged if we replace $\pi$ by an equivalent permutation. We denote the common value for the equivalence class $\sigma \in \mathfrak{C}_n$ to which $\pi$ belongs by $\Theta_n(\sigma)$. In terms of the $n$-cycle $\tilde \sigma \in \mathfrak{S}_n$ associated with $\sigma$, \[ \Theta_n(\sigma) = \{j \in [n]: \tilde \sigma(j) = j + 1 \mod n\}. \] The identity \eqref{E:circular_count} is equivalent to the identity \begin{equation} \label{E:circular_count_2} \#\{\tau \in \mathfrak{C}_n : \Theta_n(\tau) = \{k_1, \ldots, k_m\}\} = \#\{\sigma \in \mathfrak{C}_{n-m} : \Theta_{n-m}(\sigma) = \emptyset\} \end{equation} for any subset $\{k_1, \ldots, k_m\} \subseteq [n]$, and we will give a bijective proof of this fact. Consider $\sigma \in \mathfrak{C}_{n-m}$ with $\Theta_{n-m}(\sigma) = \emptyset$. Suppose that we have indexed $\{k_1, \ldots, k_m\}$ so that $k_1 < k_2 < \ldots < k_m$. Note that $k_i \in [n-m+i]$ for $1 \le i \le m$. We are going to recursively build circular permutations $\sigma = \sigma_0, \sigma_1, \ldots, \sigma_m$ with $\sigma_i \in \mathfrak{C}_{n-m+i}$ and $\Theta_{n-m+i}(\sigma_i) = \{k_1, k_2, \ldots, k_i\}$ for $1 \le i \le m$.
Suppose that $\sigma = \sigma_0, \ldots, \sigma_i$ have been built. Write $\sigma_i \in \mathfrak{C}_{n-m+i}$ as $(h_1, \ldots, h_{n-m+i})$, where $h_1, \ldots, h_{n-m+i}$ is a listing of $[n-m+i]$ in some order and we recognize two such representations as describing the same circular permutation if each can be obtained from the other by a circular permutation of the entries. We first add one to each entry of $(h_1, \ldots, h_{n-m+i})$ that is greater than $k_{i+1}$, thereby producing a vector that is still of length $n-m+i$ and has entries that are a listing of $\{1,\ldots,k_{i+1}\} \cup \{k_{i+1} + 2, \ldots, n-m+i+1\}$. Now insert $k_{i+1} + 1$ immediately to the right of $k_{i+1}$, thereby producing a vector that is now of length $n-m+i+1$ and has entries that are a listing of $[n-m+i+1]$. We can describe the procedure more formally as follows. Either $k_{i+1} \le n-m+i$ or $k_{i+1} = n-m+i+1$. In the first case, let $j^* \in [n-m+i]$ be such that $\sigma_i(j^*) = k_{i+1}$ and define $\sigma_{i+1} = (\sigma_{i+1}(1), \ldots, \sigma_{i+1}(n-m+i+1))$ by setting \[ \sigma_{i+1}(j) = \begin{cases} \sigma_i(j),& \quad \text{if $j \le j^*$ and $\sigma_i(j) \le k_{i+1}$,}\\ \sigma_i(j)+1,& \quad \text{if $j \le j^*$ and $\sigma_i(j) > k_{i+1}$,}\\ k_{i+1}+1,& \quad \text{if $j=j^*+1$,}\\ \sigma_i(j-1),& \quad \text{if $j > j^*+1$ and $\sigma_i(j-1) \le k_{i+1}$,}\\ \sigma_i(j-1)+1,& \quad \text{if $j > j^*+1$ and $\sigma_i(j-1) > k_{i+1}$.}\\ \end{cases} \] On the other hand, if $k_{i+1} = n-m+i+1$, then let $j^* \in [n-m+i]$ be such that $\sigma_i(j^*) = 1$ and define $\sigma_{i+1} = (\sigma_{i+1}(1), \ldots, \sigma_{i+1}(n-m+i+1))$ by setting \[ \sigma_{i+1}(j) = \begin{cases} \sigma_i(j),& \quad \text{if $j < j^*$,}\\ k_{i+1}=n-m+i+1,& \quad \text{if $j=j^*$,}\\ \sigma_i(j-1),& \quad \text{if $j > j^*$.}\\ \end{cases} \] It is not difficult to check in either case that a cyclic permutation of the coordinates in the chosen representation of $\sigma_i$ induces a
cyclic permutation in the coordinates of $\sigma_{i+1}$, and so $\sigma_i \mapsto \sigma_{i+1}$ is a well-defined map from $\mathfrak{C}_{n-m+i}$ to $\mathfrak{C}_{n-m+i+1}$. It is clear that $\Theta_{n-m+i+1}(\sigma_{i+1}) = \{k_1, \ldots, k_{i+1}\}$. \begin{example} \label{E:circular_perm} Here are two examples of the construction just described. Suppose that $n=10$ and $m=3$, so that $\sigma \in \mathfrak{C}_7$, with $\sigma = (3,1,6,5,7,2,4)$ and $\{k_1,k_2,k_3\} = \{3,5,6\}$. We begin by adding one to each entry of $\sigma$ greater than $k_1 = 3$. This gives us \[ (3,1,7,6,8,2,5). \] We then insert $4 = k_1 + 1$ immediately to the right of $k_1 = 3$ to get \[ \sigma_1 = (\underline{3},4,1,7,6,8,2,5). \] Now we add one to each entry greater than $k_2 = 5$. This gives us \[ (3,4,1,8,7,9,2,5). \] We then insert $6 = k_2 + 1$ immediately to the right of $k_2 = 5$ to get \[ \sigma_2 = (\underline{3},4,1,8,7,9,2,\underline{5},6). \] We next add one to each entry greater than $k_3 = 6$. This gives us \[ (3,4,1,9,8,10,2,5,6). \] Lastly, we insert $7 = k_3 + 1$ immediately to the right of $k_3 = 6$ to get \[ \sigma_3 = (\underline{3},4,1,9,8,10,2,\underline{5},\underline{6},7). \] Suppose that $n=10$ and $m=3$, so that $\sigma \in \mathfrak{C}_7$, with $\sigma = (6,1,3,5,4,7,2)$ and $\{k_1, k_2, k_3\} = \{5,8,9\}$. Then, \[ \sigma_1 = (7,1,3,\underline{5},6,4,8,2), \] \[ \sigma_2 = (7,1,3,\underline{5},6,4,\underline{8},9,2), \] and \[ \sigma_3 = (7,1,3,\underline{5},6,4,\underline{8},\underline{9},10,2). \] \end{example} It remains to show that each of the maps $\sigma_i \mapsto \sigma_{i+1}$ is invertible. Suppose we have the circular permutation $\sigma_{i+1} \in \mathfrak{C}_{n-m+i+1}$ with $\Theta_{n-m+i+1}(\sigma_{i+1}) = \{k_1, \ldots, k_{i+1} \}$. The circular permutation $\sigma_i \in \mathfrak{C}_{n-m+i}$ is recovered as follows.
If $k_{i+1} \leq n-m+i$, then let $j_* \in [n-m+i+1]$ be such that $\sigma_{i+1}(j_*) = k_{i+1}$ and define $\sigma_i = (\sigma_i(1), \ldots, \sigma_i(n-m+i))$ by setting \[ \sigma_i(j) = \begin{cases} \sigma_{i+1}(j),& \quad \text{if $j \le j_*$ and $\sigma_{i+1}(j) \le k_{i+1}$,}\\ \sigma_{i+1}(j)-1,& \quad \text{if $j \le j_*$ and $\sigma_{i+1}(j) > k_{i+1}+1$,}\\ \sigma_{i+1}(j+1),& \quad \text{if $j > j_*$ and $\sigma_{i+1}(j+1) \le k_{i+1}$,}\\ \sigma_{i+1}(j+1)-1,& \quad \text{if $j > j_*$ and $\sigma_{i+1}(j+1) > k_{i+1}+1$.}\\ \end{cases} \] On the other hand, if $k_{i+1} = n-m+i+1$, then let $j_* \in [n-m+i+1]$ be such that $\sigma_{i+1}(j_*) = k_{i+1} = n-m+i+1$ and define $\sigma_i = (\sigma_i(1), \ldots, \sigma_i(n-m+i))$ by setting \[ \sigma_i(j) = \begin{cases} \sigma_{i+1}(j),& \quad \text{if $j < j_*$,}\\ \sigma_{i+1}(j+1),& \quad \text{if $j \ge j_*$.}\\ \end{cases} \] \begin{example} \label{E:circular_perm_inverse} We illustrate the inversion procedure just outlined with the second example in Example~\ref{E:circular_perm}. We start with \[ \sigma_3 = (7,1,3,\underline{5},6,4,\underline{8},\underline{9},10,2). \] We remove the entry $9$ and subtract $1$ from every entry greater than $9$ to produce \[ \sigma_2 = (7,1,3,\underline{5},6,4,\underline{8},9,2). \] We then remove the entry $8$ and subtract $1$ from every entry greater than $8$ to produce \[ \sigma_1 = (7,1,3,\underline{5},6,4,8,2). \] Lastly, we remove the entry $5$ and subtract $1$ from every entry greater than $5$ to produce \[ \sigma = (6,1,3,5,4,7,2). \] \end{example} \begin{remark} Note that \[ \begin{split} & \#\{\sigma \in \mathfrak{C}_n : \Theta_n(\sigma) = \emptyset\} \\ & \quad = (n-1)! \mathbb{P}\{\mathbf{U}_n = \emptyset\} \\ & \quad = \sum_{h=0}^{n-1} (-1)^h \binom{n}{h} (n-h-1)! + (-1)^{n}. \\ \end{split} \] The values of this quantity for $1 \le n \le 10$ are \[ 0, 0, 1, 1, 8, 36, 229, 1625, 13208, 120288.
\] Recall that the number of permutations of $[n]$ with no fixed points (that is, the number of derangements of $n$) is given by \[ D(n)=n!\sum_{j=0}^n \frac{(-1)^j}{j!} \] and values of this quantity for $1 \le n \le 10$ are \[ 0, 1, 2, 9, 44, 265, 1854, 14833, 133496, 1334961. \] A comparison of these sequences suggests that \begin{equation} \label{E:derangements_Theta} D(n) = \#\{\sigma \in \mathfrak{C}_n : \Theta_n(\sigma) = \emptyset\} + \#\{\sigma \in \mathfrak{C}_{n+1} : \Theta_{n+1}(\sigma) = \emptyset\}, \end{equation} and this follows readily from the observation that \[ \begin{split} & \binom{n+1}{h+1}(n-h-1)! - \binom{n}{h}(n-h-1)! \\ & \quad = \frac{n!}{h! (n-h)!} (n-h-1)! \left[\frac{n+1}{h+1}-1\right] \\ & \quad = \frac{n!}{h! (n-h)!} (n-h-1)! \frac{n-h}{h+1} \\ & \quad = \frac{n!}{(h+1)!}. \\ \end{split} \] A bijective proof of \eqref{E:derangements_Theta} follows from Corollary 2 of \cite{MR2992405}, where it is shown via a bijection that \begin{equation} \label{E:derangements_quasi-fixed_points} D(n) = \#\{\sigma \in \mathfrak{C}_{n+1} : \tilde \sigma(j) \ne j+1, \, j \in [n]\}. \end{equation} If $\sigma \in \mathfrak{C}_{n+1}$ is such that $\tilde \sigma(j) \ne j+1$ for $j \in [n]$, then either $\tilde \sigma(j) \ne j+1 \mod n+1$ for $j \in [n+1]$, so that $\Theta_{n+1}(\sigma) = \emptyset$, or $\tilde \sigma(j) \ne j+1$ for $j \in [n]$ and $\tilde \sigma(n+1) = 1$.
The set of $\sigma$ in the latter category is in bijective correspondence with the set of $\tau \in \mathfrak{C}_n$ such that $\Theta_n(\tau) = \emptyset$ via the bijection that sends a $\sigma\in \mathfrak{C}_{n+1}$ to the $\tau \in \mathfrak{C}_n$ given by \[ \tilde \tau(j) = \begin{cases} \tilde \sigma(j),& \quad \text{if $\tilde \sigma(j) \ne n+1$}, \\ \tilde \sigma(n+1) = 1,& \quad \text{if $\tilde \sigma(j) = n+1$}.\\ \end{cases} \] The identity \eqref{E:derangements_quasi-fixed_points} has the following probabilistic interpretation: if $\Pi_n$ is a uniform random permutation of $[n]$ and $\Gamma_{n+1}$ is a uniform random $(n+1)$-cycle in $\mathfrak{S}_{n+1}$, then \[ \mathbb{P}\{\#\{k \in [n] : \Pi_n(k) = k\}=0\} = \mathbb{P}\{\#\{k \in [n] : \Gamma_{n+1}(k) = k+1\}=0\}. \] It is, in fact, the case that the two random sets $\mathbf{F}_n := \{k \in [n] : \Pi_n(k) = k\}$ and $\mathbf{G}_n := \{k \in [n] : \Gamma_{n+1}(k) = k+1\}$ have the same distribution. We will show this using an argument similar to that in Section~\ref{S:Markov}. Suppose that $\Pi_1, \Pi_2, \ldots$ are generated using the Chinese Restaurant Process and $\Gamma_2, \Gamma_3, \ldots$ are generated recursively by constructing $\Gamma_{n+1}$ from $\Gamma_n$ by picking $K$ uniformly at random from $[n]$ and replacing $(\ldots,K,\Gamma_n(K),\ldots)$ in the cycle representation of $\Gamma_n$ by $(\ldots,K,n+1,\Gamma_n(K),\ldots)$. It is clear that the random set $\mathbf{F}_n$ is exchangeable.
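Before giving the argument, we note that the distributional identity between $\mathbf{F}_n$ and $\mathbf{G}_n$ can be confirmed exhaustively for small $n$: there are $n!$ permutations of $[n]$ and $n!$ distinct $(n+1)$-cycles in $\mathfrak{S}_{n+1}$, so the two empirical distributions of random subsets can be compared directly. A Python sketch (the encoding of cycles and the function names are ours):

```python
from collections import Counter
from itertools import permutations

def fixed_point_sets(n):
    # Distribution of F_n = {k in [n] : pi(k) = k} over all pi in S_n.
    return Counter(frozenset(k for k in range(1, n + 1) if p[k - 1] == k)
                   for p in permutations(range(1, n + 1)))

def succession_sets(n):
    # Distribution of G_n = {k in [n] : Gamma(k) = k+1} over all
    # (n+1)-cycles Gamma in S_{n+1}; each cycle is encoded by the order
    # (n+1, c_1, ..., c_n) in which it visits the elements of [n+1].
    dist = Counter()
    for c in permutations(range(1, n + 1)):
        cyc = (n + 1,) + c
        gamma = {cyc[i]: cyc[(i + 1) % (n + 1)] for i in range(n + 1)}
        dist[frozenset(k for k in range(1, n + 1) if gamma[k] == k + 1)] += 1
    return dist
```

For each $n$ tried, `fixed_point_sets(n)` and `succession_sets(n)` coincide as multisets of subsets of $[n]$, as claimed.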
The process $\mathbf{G}_1, \mathbf{G}_2, \ldots$ is Markovian: writing $N_n := \# \mathbf{G}_n$ and $\mathbf{G}_n = \{Y_1^n, \ldots, Y_{N_n}^n\}$, and noting that $\mathbf{G}_{n+1}$ is determined from $\mathbf{G}_n$ by the construction of $\Gamma_{n+2}$ from $\Gamma_{n+1}$, we have \[ \mathbb{P}\{\mathbf{G}_{n+1} = \{Y_1^n, \ldots, Y_{N_n}^n\} \setminus \{Y_i^n\} \, | \, \mathbf{G}_n\} = \frac{1}{n+1}, \quad 1 \le i \le N_n, \] corresponding to $n+2$ being inserted immediately to the right of $Y_i^n$, \[ \mathbb{P}\{\mathbf{G}_{n+1} = \{Y_1^n, \ldots, Y_{N_n}^n\} \cup \{n+1\} \, | \, \mathbf{G}_n\} = \frac{1}{n+1}, \] corresponding to $n+2$ being inserted immediately to the right of $n+1$, and \[ \mathbb{P}\{\mathbf{G}_{n+1} = \{Y_1^n, \ldots, Y_{N_n}^n\} \, | \, \mathbf{G}_n\} = \frac{n - N_n}{n+1}. \] It is obvious from the symmetry inherent in these transition probabilities and induction that $\mathbf{G}_n$ is an exchangeable random subset of $[n]$ for all $n$. It therefore suffices to show that $N_n$ has the same distribution as $M_n := \# \mathbf{F}_n$. Observe that $M_1 = N_1 = 1$. It is clear that $N_1, N_2, \ldots$ is a Markov chain with the following transition probabilities: \[ \mathbb{P}\{N_{n+1} = N_n - 1 \, | \, N_n\} = \frac{N_n}{n+1}, \] \[ \mathbb{P}\{N_{n+1} = N_n \, | \, N_n\} = \frac{n - N_n}{n+1}, \] and \[ \mathbb{P}\{N_{n+1} = N_n + 1 \, | \, N_n\} = \frac{1}{n+1}. \] It follows from the Chinese Restaurant construction that \[ \mathbb{P}\{M_{n+1} = M_n - 1 \, | \, M_n\} = \frac{M_n}{n+1}, \] \[ \mathbb{P}\{M_{n+1} = M_n \, | \, M_n\} = \frac{(n+1) - M_n - 1}{n+1}, \] and \[ \mathbb{P}\{M_{n+1} = M_n + 1 \, | \, M_n\} = \frac{1}{n+1}, \] and so $M_n$ and $N_n$ do indeed have the same distribution for all $n$. \end{remark} \section{Random commutators} \label{S:commutator} If we write $\rho$ for the permutation of $[n]$ given by $\rho(i) = i+1 \mod n$, then the random set $\mathbf{U}_n$ of Section~\ref{S:circular} is nothing other than \[ \{i \in [n] : \rho \Pi_n(i) = \Pi_n \rho(i)\} \] or, equivalently, the set \[ \{i \in [n] : \rho^{-1} \Pi_n^{-1} \rho \Pi_n(i) = i\}.
\] This is just the set of fixed points of the commutator $[\rho,\Pi_n] = \rho^{-1} \Pi_n^{-1} \rho \Pi_n$. In this section we investigate the asymptotic behavior of the distribution of the set of fixed points of the commutators $[\eta_n,\Pi_n]$ for a sequence of permutations $(\eta_n)_{n \in \mathbb{N}}$, where $\eta_n \in \mathfrak{S}_n$. Write $\chi_n : \mathfrak{S}_n \to \{0,1,\ldots,n\}$ for the function that gives the number of fixed points (i.e. $\chi_n$ is the character of the defining representation of $\mathfrak{S}_n$). It follows from \cite[Corollary 1.2]{MR1300595} (see also \cite[Theorem 25]{MR2674623}) that if $\Pi_n'$ and $\Pi_n''$ are independent uniformly distributed permutations of $[n]$, then the distribution of $\chi_n([\Pi_n',\Pi_n''])$ is approximately Poisson with expected value $1$ when $n$ is large. The results of \cite{MR1300595, MR2674623} suggest that if $n$ is large and $\eta_n$ is a ``generic'' element of $\mathfrak{S}_n$, then the distribution of $\chi_n([\eta_n, \Pi_n])$ should be close to Poisson with expected value $1$. Of course, such a result will not hold for arbitrary sequences $(\eta_n)_{n \in \mathbb{N}}$. For example, if $\eta_n$ is the identity permutation, then $\chi_n([\eta_n,\Pi_n]) = n$. The behavior of $\chi_n([\eta_n,\Pi_n])$ for a deterministic sequence $(\eta_n)_{n \in \mathbb{N}}$ does not appear to have been investigated in the literature. However, we note that if $\tilde \Pi_n$ is an independent uniform permutation of $[n]$, then \[ \begin{split} \chi_n([\eta_n,\Pi_n]) & = \chi_n(\eta_n^{-1} \Pi_n^{-1} \eta_n \Pi_n) \\ & = \chi_n(\tilde \Pi_n^{-1} \, \eta_n^{-1} \Pi_n^{-1} \eta_n \Pi_n \, \tilde \Pi_n) \\ & = \chi_n(\tilde \Pi_n^{-1} \eta_n^{-1} \tilde \Pi_n \; \tilde \Pi_n^{-1} \, \Pi_n^{-1} \eta_n \Pi_n \, \tilde \Pi_n)
\\ & = \#\{i \in [n] : U_n(i) = V_n(i)\}, \end{split} \] where \[ U_n := \tilde \Pi_n^{-1} \eta_n \tilde \Pi_n \] and \[ V_n := \tilde \Pi_n^{-1} \, \Pi_n^{-1} \eta_n \Pi_n \, \tilde \Pi_n \] are independent random permutations of $[n]$ that are uniformly distributed on the conjugacy class of $\eta_n$. Since $U_n$ has the same distribution as $U_n^{-1}$, we see that $\chi_n([\eta_n,\Pi_n])$ is distributed as the number of fixed points of the random permutation $U_n V_n$ and we could, in principle, determine the distribution of $\chi_n([\eta_n,\Pi_n])$ if we knew the distribution of the conjugacy class to which $U_n V_n$ belongs. Given a partition $\lambda \vdash n$, write $C_\lambda$ for the conjugacy class of $\mathfrak{S}_n$ consisting of permutations with cycle lengths given by $\lambda$ and let $K_\lambda$ be the element $\sum_{\pi \in C_\lambda} \pi$ of the group algebra of $\mathfrak{S}_n$. If $C_\mu$ is another conjugacy class with cycle lengths $\mu \vdash n$, then, writing $\ast$ for the multiplication in the group algebra, $K_\lambda \ast K_\mu = \sum_{\nu \vdash n} c_{\lambda \mu}^\nu K_\nu$ for nonnegative integer coefficients $c_{\lambda \mu}^\nu$. Denote by $\gamma_n \vdash n$ the partition of $n$ given by the cycle lengths of $\eta_n$. If we knew $c_{\gamma_n \gamma_n}^\nu$ for all $\nu \vdash n$, then we would know the distribution of the conjugacy class to which $U_n V_n$ belongs and hence, in principle, the distribution of $\chi_n([\eta_n,\Pi_n])$. Unfortunately, the determination of the coefficients $c_{\lambda \mu}^\nu$ appears to be a rather difficult problem. The special case when $\lambda = \mu = n$ (that is, the conjugacy class of $n$-cycles is being multiplied by itself) is treated in \cite{MR558612, MR676430, MR1032633} and fairly explicit formulae for some other simple cases are given in \cite{MR1165162, MR1273294}, but in general there do not seem to be usable expressions.
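In the absence of usable formulae, small cases are easy to enumerate directly. The following Python sketch (the representation and names are ours) tabulates the distribution of $\chi_n([\eta_n,\Pi_n])$ for $\eta_n = \rho$, the $n$-cycle $(1,2,\ldots,n)$, with $n=5$; by the discussion at the start of this section, this is exactly the distribution of $\# \mathbf{U}_5$.

```python
from collections import Counter
from itertools import permutations

def commutator_fixed_points(eta, pi):
    # Fixed points of [eta, pi] = eta^{-1} pi^{-1} eta pi: these are exactly
    # the i with eta(pi(i)) = pi(eta(i)); permutations are dicts on {1,...,n}.
    return frozenset(i for i in eta if eta[pi[i]] == pi[eta[i]])

n = 5
rho = {i: i % n + 1 for i in range(1, n + 1)}   # the n-cycle (1,2,...,n)
dist = Counter(len(commutator_fixed_points(rho, dict(enumerate(p, 1))))
               for p in permutations(range(1, n + 1)))
```

For instance, `dist[0]` recovers $5 \times 8 = 40$ (the value $8$ being the $n=5$ entry of the sequence of counts of circular permutations with $\Theta_n(\sigma) = \emptyset$ tabulated earlier), and `dist[5]` is $5$, one for each power of $\rho$.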
In order to get a better feeling for what sort of conditions we will need to impose on $(\eta_n)_{n \in \mathbb{N}}$ to get the hoped for Poisson limit, we make a couple of simple observations. Firstly, it follows that if we write $f_n := \chi_n(\eta_n)$ for the number of fixed points of $\eta_n$, then \[ \mathbb{E}[\chi_n([\eta_n,\Pi_n])] = n \mathbb{P}\{U_n(i) = V_n(i)\} = n \left[ \left(\frac{n - f_n}{n}\right)^2 \frac{1}{n-1} + \left(\frac{f_n}{n}\right)^2 \right], \] and so it appears that we will at least require some control on the sequence $(f_n)_{n \in \mathbb{N}}$. A second, and somewhat more subtle, potential difficulty becomes apparent if we consider the permutation $\eta_n$ that is made up entirely of $2$-cycles (so that $n$ is necessarily even). In this case, $U_n(i) = V_n(i)$ if and only if $U_n(U_n(i)) = i = V_n(V_n(i))$, and so $\chi_n([\eta_n,\Pi_n])$ is even. Going a little further, we may write $m = n/2$, take $\eta_n$ to have the cycle decomposition $(1,m+1)(2,m+2) \cdots (m,2m)$, and note that $\chi_n([\eta_n,\Pi_n]) = \#\{i \in [n] : U_n(i) = V_n(i)\}$ has the same distribution as $\#\{i \in [n] : U_n(i) = \eta_n(i)\} = 2 \#\{i \in [m] : U_n(i) = \eta_n(i)\} = 2 M_n$, where $M_n:= \sum_{i=1}^m I_{ni}$, with $I_{ni}$ the indicator of the event $\{U_n(i) = \eta_n(i)\}$. It is not difficult to show that \[ \begin{split} \mathbb{E}[M_n(M_n-1)\cdots (M_n-k+1)] & = \frac{m(m-1) \cdots (m-k+1)}{(2m-1)(2m-3) \cdots (2m - 2k + 1)} \\ & \rightarrow \frac{1}{2^k} \quad \text{as $m \to \infty$}, \\ \end{split} \] and so the distribution of $\chi_n([\eta_n,\Pi_n])/2$ converges to a Poisson distribution with expected value $\frac{1}{2}$. 
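The fixed-point-free involution case just discussed can also be checked by enumeration for $n = 6$ (a sketch with our own naming; $\eta$ is the involution $(1,4)(2,5)(3,6)$): every value of $\chi_n([\eta_n,\Pi_n])$ is even, and the mean agrees with the exact formula above, namely $(n-f_n)^2/(n(n-1)) = 6/5$ when $f_n = 0$.

```python
from collections import Counter
from itertools import permutations

def commutator_fixed_points(eta, pi):
    # {i : eta(pi(i)) = pi(eta(i))}, the fixed points of [eta, pi];
    # permutations are represented as dicts on {1, ..., n}.
    return [i for i in eta if eta[pi[i]] == pi[eta[i]]]

n, m = 6, 3
eta = {i: (i + m - 1) % n + 1 for i in range(1, n + 1)}  # (1,4)(2,5)(3,6)
counts = Counter(len(commutator_fixed_points(eta, dict(enumerate(p, 1))))
                 for p in permutations(range(1, n + 1)))
```

All observed counts are even, in line with the parity argument, and the average over the $720$ permutations is exactly $6/5$.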
Returning to the case of a general permutation $\eta_n$ and writing $t_n$ for the number of $2$-cycles in the cycle decomposition of $\eta_n$, it seems that in order for the distribution of the random variable $\chi_n([\eta_n,\Pi_n])$ to be close to that of a Poisson random variable with expected value $1$ when $n$ is large we will need to at least impose suitable conditions on $f_n$ and $t_n$. It will, in fact, suffice to suppose that $f_n$ and $t_n$ are bounded as $n$ varies, as the following result shows. \begin{theorem} Suppose that $a,b > 0$. There exists a constant $K$ that depends on $a$ and $b$ but not on $n \in \mathbb{N}$ such that if $\Pi$ is uniformly distributed on $\mathfrak{S}_n$ and $\eta \in \mathfrak{S}_n$ has at most $a$ fixed points and at most $b$ $2$-cycles, then the total variation distance between the distribution of the number of fixed points of the commutator $[\eta,\Pi]$ and a Poisson distribution with expected value $1$ is at most $\frac{K}{n}$. \end{theorem} \begin{proof} As we have observed above, the number of fixed points of $[\eta,\Pi]$ has the same distribution as $\# \{i \in [n] : U(i) = V(i) \}$, where $U$ and $V$ are independent random permutations that are uniformly distributed on the conjugacy class of $\eta$. We will write $\chi$ for $\chi_n$ to simplify notation. Similarly, we write $f$ for the number of fixed points of $\eta$ and $t$ for the number of $2$-cycles. We assume that $f \le a$ and $t \le b$. Let $F_U$ and $T_U$ be the random subsets of $[n]$ that are, respectively, the fixed points of $U$ and the elements that belong to the $2$-cycles of $U$. Define $F_V$ and $T_V$ similarly. Set \[ N := \# \{i \in [n] : U(i) = V(i), \; i \notin F_U \cup T_U \cup F_V \cup T_V\}. 
\] Observe that \[ \mathbb{P}\{U(i) = V(i), \; i \notin F_U \cup T_U \cup F_V \cup T_V\} =\left(\frac{n - f - 2t}{n}\right)^2 \frac{1}{n-1}, \] so \[ \begin{split} \mathbb{P}\{\chi([\eta,\Pi]) \ne N\} & \le \mathbb{E}[\chi([\eta,\Pi])] - \mathbb{E}[N] \\ & = n \left[ \left(\frac{n - f}{n}\right)^2 \frac{1}{n-1} + \left(\frac{f}{n}\right)^2 - \left(\frac{n - f - 2t}{n}\right)^2 \frac{1}{n-1} \right]. \\ \end{split} \] In particular, $n \mathbb{P}\{\chi([\eta,\Pi]) \ne N\}$ is bounded in $n$. Let $I,J$ be elements chosen uniformly at random without replacement from $[n]$ and independent of the permutations $U$ and $V$. Set \[ A := \{\{I,J\} \cap (F_U \cup T_U \cup F_{V} \cup T_{V}) = \emptyset\} \] and \[ W := N \mathbbold{1}_A. \] Note that \[ \mathbb{P}\{W \ne N\} \le \mathbb{P}(A^c) = 1 - \left(\frac{n - f - 2t}{n} \frac{n - f - 2t - 1}{n-1} \right)^2, \] so that $n \mathbb{P}\{W \ne N\}$, and hence $n \mathbb{P}\{W \ne \chi([\eta,\Pi])\}$, is bounded in $n$. It will therefore suffice to show that the total variation distance between the distribution of $W$ and a Poisson distribution with expected value $1$ is at most a constant multiple of $\frac{1}{n}$. We will do this using Stein's method. More precisely, we will use the version in \cite[Section 1]{MR2121796} that depends on the construction of an {\em exchangeable pair}; that is, another random variable $W'$ such that $(W,W')$ has the same distribution as $(W',W)$. Build another random permutation $V'$ by interchanging $I$ and $J$ in the cycle representation of $V$. If, using a similar notation to that above, we set \[ N' := \#\{i \in [n] : U(i) = V'(i), \; i \notin F_U \cup T_U \cup F_{V'} \cup T_{V'}\} \] and \[ W' := N' \mathbbold{1}_A, \] then $(W,W')$ is clearly an exchangeable pair. We can represent the permutations $U$ and $V$ when the event $A$ occurs as in Figure~\ref{fig:general_position}.
\begin{figure}[htbp] \centering \includegraphics[width=1.00\textwidth]{general_position.pdf} \caption{The effect of the permutations $U$ and $V$ on the elements $I$ and $J$ when the event $A$ occurs. The solid arrows depict the action of $U$ and the dashed arrows depict the action of $V$. The components of the triple $(U^{-1}(I), I, U(I))$ are distinct. The same is true of the components of the triples $(V^{-1}(I), I, V(I))$, $(U^{-1}(J), J, U(J))$, and $(V^{-1}(J), J, V(J))$. However, it may happen that $U(I) = V(I)$, $U(I) = V(J)$, etc.} \label{fig:general_position} \end{figure} We have \begin{equation} \label{E:diff_Wprime_W_full} \begin{split} W' & = W \\ & \quad - \mathbbold{1}(\{U^{-1}(I) = V^{-1}(I)\} \cap A) - \mathbbold{1}(\{U(I) = V(I)\} \cap A) \\ & \quad - \mathbbold{1}(\{U^{-1}(J) = V^{-1}(J)\} \cap A) - \mathbbold{1}(\{U(J) = V(J)\} \cap A) \\ & \quad + \mathbbold{1}(\{U^{-1}(I) = V^{-1}(J)\} \cap A) + \mathbbold{1}(\{U(J) = V(I)\} \cap A)\\ & \quad + \mathbbold{1}(\{U^{-1}(J) = V^{-1}(I)\} \cap A) + \mathbbold{1}(\{U(I) = V(J)\} \cap A). \\ \end{split} \end{equation} Note that \[ \begin{split} & \mathbb{P}\left(\{U^{-1}(I) = V^{-1}(I)\} \cap A \, | \, (U,V)\right) = \mathbb{P}\left(\{U(I) = V(I)\} \cap A \, | \, (U,V)\right) \\ & \quad = \mathbb{P}\left(\{U^{-1}(J) = V^{-1}(J)\} \cap A \, | \, (U,V)\right) = \mathbb{P}\left(\{U(J) = V(J) \} \cap A \, | \, (U,V)\right) \\ & \quad = \left(\frac{n - f - 2t}{n} \frac{n - f - 2t - 1}{n-1} \right)^2 \frac{W}{n-1} \\ & \quad = \frac{W}{n} + X_n, \end{split} \] where $X_n$ is a random variable such that if we set $b_n := \mathbb{E}[|X_n|]$, then $n^2 b_n$ is bounded in $n$.
Furthermore, \[ \begin{split} & \mathbb{P}\left(\{U^{-1}(I) = V^{-1}(J)\} \cap A \, | \, (U,V) \right) = \sum_{k=1}^n \mathbb{P}\left(\{U^{-1}(I) = V^{-1}(J) = k\} \cap A \, | \, (U,V) \right) \\ & \quad = \sum_{k=1}^n \mathbb{P}\left(\{I = U(k), \, J = V(k)\} \cap A \, | \, (U,V) \right) \\ & \quad = n \left(\frac{n - f - 2t}{n} \frac{n - f - 2t - 1}{n-1} \right)^2 \left(\frac{n-1}{n} \frac{1}{n - f - 2t - 1} \right)^2 \\ & \quad = \frac{1}{n} + c_n, \\ \end{split} \] where $c_n$ is a constant such that $n^2 c_n$ is bounded in $n$, and similar arguments show that \[ \begin{split} & \mathbb{P}\left(\{U(J) = V(I)\} \cap A \, | \, (U,V) \right) \\ & \quad = \mathbb{P}\left(\{U^{-1}(J) = V^{-1}(I)\} \cap A \, | \, (U,V) \right) = \mathbb{P}\left(\{U(I) = V(J)\} \cap A \, | \, (U,V) \right) \\ & \quad = n \left(\frac{n - f - 2t}{n} \frac{n - f - 2t - 1}{n-1} \right)^2 \left(\frac{n-1}{n} \frac{1}{n - f - 2t - 1} \right)^2 \\ & \quad = \frac{1}{n} + c_n. \\ \end{split} \] Suppose we can show that the probability of the intersection of any two of the events whose indicators appear on the right-hand side of \eqref{E:diff_Wprime_W_full} is at most a constant $d_n$, where $n^2 d_n$ is bounded in $n$. Then \[ \mathbb{E} \left[\left|W - \frac{n}{4} \mathbb{P}\{W' = W-1 \, | \, (U,V)\}\right|\right] \le n b_n + 7 n d_n \] and \[ \mathbb{E} \left[\left|1 - \frac{n}{4} \mathbb{P}\{W' = W+1 \, | \, (U,V)\}\right|\right] \le n |c_n| + 7 n d_n. \] It will follow from the main result of \cite[Section 1]{MR2121796} that the total variation distance between the distribution of $W$ and a Poisson distribution with expected value $1$ is at most $\frac{C}{n}$ for a suitable constant $C$, and hence, as we have already remarked, the same is true (with a larger constant) for the distribution of $\chi([\eta,\Pi])$. Consider the event $\{U^{-1}(I) = V^{-1}(I)\} \cap \{U(I) = V(I)\} \cap A$, which we represent diagrammatically in Figure~\ref{fig:A_cccccccc_intersection_1}.
\begin{figure}[htbp] \centering \includegraphics[width=1.00\textwidth]{A_cccccccc_intersection_1.pdf} \caption{Diagram for the event \[\{U^{-1}(I) = V^{-1}(I)\} \cap \{U(I) = V(I)\} \cap A.\]} \label{fig:A_cccccccc_intersection_1} \end{figure} The probability of this event is \[ \left( \frac{n - f - 2 t}{n} \frac{n - f - 2t - 1}{n-1} \right)^2 \frac{1}{n-2} \frac{1}{n-3}. \] As another example, consider the event $\{U^{-1}(J) = V^{-1}(I)\} \cap \{U(I) = V(J)\} \cap A$, which we represent diagrammatically in Figure~\ref{fig:A_cccccccc_intersection_2}. \begin{figure}[htbp] \centering \includegraphics[width=1.00\textwidth]{A_cccccccc_intersection_2.pdf} \caption{Diagram for the event \[\{U^{-1}(J) = V^{-1}(I)\} \cap \{U(I) = V(J)\} \cap A.\]} \label{fig:A_cccccccc_intersection_2} \end{figure} The probability of this event is also \[ \left( \frac{n - f - 2t}{n} \frac{n - f - 2 t - 1}{n-1} \right)^2 \frac{1}{n-2} \frac{1}{n-3}. \] Continuing in this way, we see that, as required, the probability of the intersection of any two of the events whose indicators appear on the right-hand side of \eqref{E:diff_Wprime_W_full} is at most a constant $d_n$, where $n^2 d_n$ is bounded in $n$. \end{proof} \bigskip \noindent {\bf Acknowledgments:} We thank Jim Pitman for getting us interested in the area investigated in this paper, sharing with us the contents of his results Theorem~\ref{T:main} and Remark~\ref{R:exchangeable}, and telling us about the results in \cite{Whi63}. We thank Steve Butler for helpful discussions about circular permutations and an anonymous referee for several helpful suggestions.
https://arxiv.org/abs/1308.5459
Unseparated pairs and fixed points in random permutations
In a uniform random permutation $\Pi$ of $[n] := \{1,2,\ldots,n\}$, the set of elements $k \in [n-1]$ such that $\Pi(k+1) = \Pi(k) + 1$ has the same distribution as the set of fixed points of $\Pi$ that lie in $[n-1]$. We give three different proofs of this fact using, respectively, an enumeration relying on the inclusion-exclusion principle, the introduction of two different Markov chains to generate uniform random permutations, and the construction of a combinatorial bijection. We also obtain the distribution of the analogous set for circular permutations that consists of those $k \in [n]$ such that $\Pi(k+1 \mod n) = \Pi(k) + 1 \mod n$. This latter random set is just the set of fixed points of the commutator $[\rho, \Pi]$, where $\rho$ is the $n$-cycle $(1,2,\ldots,n)$. We show for a general permutation $\eta$ that, under weak conditions on the number of fixed points and $2$-cycles of $\eta$, the total variation distance between the distribution of the number of fixed points of $[\eta,\Pi]$ and a Poisson distribution with expected value $1$ is small when $n$ is large.
https://arxiv.org/abs/2206.08914
Graphs with Sudoku number $n-1$
Recently Lau-Jeyaseeli-Shiu-Arumugam introduced the concept of the "Sudoku colourings" of graphs -- partial $\chi(G)$-colourings of $G$ that have a unique extension to a proper $\chi(G)$-colouring of all the vertices. They introduced the Sudoku number of a graph as the minimal number of coloured vertices in a Sudoku colouring. They conjectured that a connected graph has Sudoku number $n-1$ if, and only if, it is complete. In this note we prove that this is true.
\section{Introduction} All colourings in this note are vertex-colourings. A vertex colouring of a graph is \emph{proper} if adjacent vertices receive different colours. The chromatic number, $\chi(G)$, is the minimal number of colours in a proper colouring of $G$. A \emph{partial} proper colouring of a graph is a colouring of some subset of $V(G)$ which doesn't give adjacent vertices the same colour. We say that a colouring $\phi$ extends a partial colouring $\psi$ if all vertices coloured in $\psi$ receive the same colour in $\phi$. A variety of tasks can be encoded as taking a partial proper colouring of a graph and then extending it to a full proper colouring of all the vertices. For example the well known ``Sudoku puzzle'' can be encoded in this form. Consider a graph $G_{\mathrm{Sudoku}}$ on $81$ vertices that are identified with the cells in a $9\times 9$ grid. Add an edge between any two vertices in the same column, between any two vertices in the same row, and between any two vertices in the same $3\times 3$ box. It is easy to see that a proper $9$-colouring of $G_{\mathrm{Sudoku}}$ exactly corresponds to filling in a $9\times 9$ array according to the rules of Sudoku puzzles. Thus a Sudoku puzzle can be summarized as ``you are given a partial colouring of $G_{\mathrm{Sudoku}}$ and need to complete it to a proper colouring of all the vertices of $G_{\mathrm{Sudoku}}$''. One of the conventions for designing Sudoku puzzles is that there should always be precisely one way of filling in the $9\times 9$ array i.e. there should always exist one solution, and there shouldn't exist multiple solutions. This motivates the definition of a Sudoku colouring of a graph. Lau-Jeyaseeli-Shiu-Arumugam defined it as follows in~\cite{Sudoku_colourings}. \begin{definition} A Sudoku colouring of a graph $G$ is a partial proper $\chi(G)$-colouring of $G$ which has precisely one extension to a $\chi(G)$-colouring of $G$.
\end{definition} Sudoku colourings of $G_{\mathrm{Sudoku}}$ are thus in one-to-one correspondence with uncompleted Sudoku puzzles (that have unique completion). But we can also investigate Sudoku colourings of general graphs. Lau-Jeyaseeli-Shiu-Arumugam~\cite{Sudoku_colourings} defined the Sudoku number of $G$, $sn(G)$, as the smallest number of coloured vertices in a Sudoku colouring of $G$. The motivation for this is that $sn(G_{\mathrm{Sudoku}})$ now asks for the minimum number of clues (i.e. non-blank entries) in a Sudoku puzzle with unique solution. This number has been determined as $sn(G_{\mathrm{Sudoku}})=17$ by McGuire, Tugemann, and Civario using a computer-assisted proof~\cite{mcguire2014there}. For general graphs, Lau-Jeyaseeli-Shiu-Arumugam~\cite{Sudoku_colourings} determined $sn(G)$ for various classes of graphs $G$ and obtained bounds for other classes. For example, they showed that $sn(G)=1$ if, and only if, $G$ is connected and bipartite. On the other extreme, they showed that $sn(G)\leq |G|-1$ for all graphs and conjectured that for connected graphs, equality holds if, and only if, $G$ is complete. Here we show that this is the case. \begin{theorem}\label{Main_Theorem} A connected graph $G$ has $sn(G)=|G|-1$ if, and only if, $G$ is complete. \end{theorem} The backwards direction already appears in~\cite{Sudoku_colourings} (see Corollary 3.4), so we focus on proving the statement ``let $G$ be a connected graph with $sn(G)=|G|-1$. Then $G$ is complete''. This amounts to showing that connected non-complete graphs have partial proper $\chi(G)$-colourings with $\geq 2$ uncoloured vertices that have a unique extension to a full proper $\chi(G)$-colouring of $G$. \section{Proofs} In a $k$-coloured graph our set of colours will always be $[k]=\{1, \dots, k\}$. For a colouring $\phi$ and vertices $u_1, \dots, u_k$, we use $\phi-u_1-\dots-u_k$ to mean the partial colouring formed by uncolouring the vertices $u_1, \dots, u_k$.
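Both directions of Theorem~\ref{Main_Theorem} can be explored by brute force on small examples. The following Python sketch (the function names and the two small test graphs are ours, not from the paper) counts the proper $\chi(G)$-colourings extending a partial colouring: in $K_4$ any partial colouring with two uncoloured vertices has two extensions (the two unused colours can be swapped), while in the non-complete graph $C_5$ one can uncolour two vertices and still have a unique extension.

```python
from itertools import product

def extensions(n_vertices, edges, k, partial):
    # Count the proper k-colourings of the graph that extend the partial
    # colouring, given as a dict vertex -> colour in {1, ..., k}.
    free = [v for v in range(n_vertices) if v not in partial]
    total = 0
    for colours in product(range(1, k + 1), repeat=len(free)):
        c = {**partial, **dict(zip(free, colours))}
        if all(c[u] != c[v] for u, v in edges):
            total += 1
    return total

# K_4 with two vertices uncoloured: the two unused colours can always be
# swapped between the two uncoloured (mutually adjacent) vertices.
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(extensions(4, k4, 4, {0: 1, 1: 2}))          # prints 2

# C_5 (chi = 3): uncolouring two non-adjacent vertices of the proper
# colouring (1,2,1,2,3), each of which sees all other colours, leaves a
# partial colouring with a unique extension.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(extensions(5, c5, 3, {1: 2, 2: 1, 4: 3}))    # prints 1
```

The $C_5$ computation illustrates the strategy pursued in the proofs below: uncoloured vertices whose neighbourhoods already exhibit all but one colour are forced.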
For a partial colouring $\phi$ and a set of vertices $S$, we define $\phi(S):=\{\phi(s):s\in S\}$ to mean the set of colours appearing in $S$. We use $N(v)$ to denote the set of vertices connected to $v$ by an edge (our graphs are simple, so this never includes $v$ itself). The following is the tool we use for constructing Sudoku colourings in this note. It gives two kinds of Sudoku colourings with two uncoloured vertices. \begin{lemma}\label{Lemma_sudoku_colouring} Let $\psi$ be a partial proper $\chi(G)$-colouring with exactly two uncoloured vertices $u,v$. Suppose that either of the following holds: \begin{enumerate}[(i)] \item $uv$ is a nonedge and $|\psi(N(u))|=|\psi(N(v))|=\chi(G)-1$. \item $uv$ is an edge, $|\psi(N(u))|=\chi(G)-1$, $|\psi(N(v))|=\chi(G)-2$, and $\psi(N(v))\subset \psi(N(u))$. \end{enumerate} Then $\psi$ is a Sudoku colouring. \end{lemma} \begin{proof} In case (i), there is precisely one colour missing from $N(u)$ and precisely one colour missing from $N(v)$. In a proper $\chi(G)$-colouring, $u$ and $v$ must receive exactly these colours, and so the extension is unique. In case (ii), there are two colours $c,d$ missing from $N(v)$ and one of these colours (say $c$) is missing from $N(u)$. To complete the colouring, $u$ must receive colour $c$; then, since $uv$ is an edge, $v$ must receive colour $d$, so the extension is unique. \end{proof} The following definition is crucial for us. \begin{definition} Let $\phi$ be a proper $\chi(G)$-colouring of $G$. A vertex is \textbf{full} if it is adjacent to vertices of all colours (aside from its own) i.e. if $\phi(N(v))=[\chi(G)]\setminus \phi(v)$ (or equivalently if $|\phi(N(v))|=\chi(G)-1$). \end{definition} In graphs with Sudoku number $|G|-1$, it turns out that the full vertices form a complete subgraph. \begin{lemma}\label{Lemma_full_vertices_complete} Let $sn(G)=|G|-1$ and let $\phi$ be a proper $\chi(G)$-colouring of $G$. Then any two full vertices are connected by an edge.
\end{lemma} \begin{proof} Let $u,v$ be full and suppose for contradiction that $uv$ is not an edge in $G$. Consider the partial colouring $\psi:=\phi-u-v$. Since $u,v$ are full, we have $|\phi(N(u))|=|\phi(N(v))|=\chi(G)-1$. Since $uv$ is a non-edge, the neighbours of $u,v$ all remain coloured in $\psi$ and so $|\psi(N(u))|=|\psi(N(v))|=\chi(G)-1$. Thus, by Lemma~\ref{Lemma_sudoku_colouring} (i), $\psi$ is a Sudoku colouring. It has two uncoloured vertices, contradicting $sn(G)=|G|-1$. \end{proof} We say that a proper $k$-colouring of $G$ is $c$-minimal if it has as few colour $c$ vertices as possible (for a $k$-colouring of $G$). The following lemma shows that there are a lot of full vertices around all $c$-coloured vertices in a $c$-minimal colouring. \begin{lemma}\label{Lemma_1_minimal_colouring} Let $\phi$ be a $1$-minimal proper $\chi(G)$-colouring of $G$. Let $v$ be a vertex with $\phi(v)=1$. Then for every colour $c=2, \dots, \chi(G)$, the vertex $v$ has at least one colour $c$ neighbour $u$ with $u$ full. In particular, $v$ is full. \end{lemma} \begin{proof} First notice that it is impossible that $v$ has no colour $c$ neighbours --- indeed otherwise, we could recolour $v$ with colour $c$ to get a proper colouring with one fewer colour 1 vertex (contradicting $1$-minimality). This proves the ``in particular $v$ is full'' part --- since we've shown that every colour other than $\phi(v)=1$ appears on $N(v)$. Now let the set of colour $c$ neighbours of $v$ be $\{u_1, \dots, u_k\}$. Suppose for contradiction that none of these are full --- equivalently there are colours $c_1, \dots, c_k \in [\chi(G)]\setminus c$ with $c_i$ missing from $N(u_i)$. Note that $c_i\neq 1$ for all $i$, since $v\in N(u_i)$ and $\phi(v)=1$. Note that $\{u_1, \dots, u_k\}$ is an independent set since all these vertices have colour $c$ and the colouring is proper. Now recolour $u_i$ by $c_i$ for each $i$ and recolour $v$ by $c$. Notice that this colouring is proper.
To show this, we need to check that the recoloured vertices $v, u_1, \dots, u_k$ have different colours to all their neighbours (everywhere else the colouring remains proper just because $\phi$ was proper). Indeed $v$ has no colour $c$ neighbours since $\{u_1, \dots, u_k\}$ was the set of all colour $c$ neighbours of $v$ and these have all been recoloured by colours $c_i\neq c$. Vertex $u_i$ has no colour $c_i$ neighbour since it initially had no colour $c_i$ neighbours and the only neighbour of $u_i$ that was recoloured was $v$ (which received colour $c\neq c_i$). But the new colouring we have has one fewer colour $1$ vertex, contradicting $1$-minimality. \end{proof} Applying the above lemma to a graph with Sudoku number $|G|-1$ gives even more structure in a minimal colouring. \begin{lemma}\label{Lemma_1_minimal_sudoku} Let $sn(G)=|G|-1$ and let $\phi$ be a $1$-minimal proper $\chi(G)$-colouring of $G$. Then there is precisely one colour $1$ vertex. Additionally, this vertex $v$ is full and has $|N(v)|=\chi(G)-1$. \end{lemma} \begin{proof} Let $v$ be a colour $1$ vertex. By Lemma~\ref{Lemma_1_minimal_colouring} $v$ is full. There can't be another colour $1$ vertex $z$ --- because otherwise $z$ would also be full, which would give two non-adjacent full vertices (contradicting Lemma~\ref{Lemma_full_vertices_complete}). Suppose that $|N(v)|>\chi(G)-1$. Then, by the pigeonhole principle there must be some colour $c$ which occurs more than once on $N(v)$. By Lemma~\ref{Lemma_1_minimal_colouring} $v$ has some full colour $c$ neighbour $u$. Let $w$ be some other colour $c$ neighbour of $v$. Now let $\psi:=\phi-u-v$. Note that $\psi(N(v))=\phi(N(v))=[\chi(G)]\setminus 1$ (the first equality holds because precisely one neighbour $u$ of $v$ was uncoloured, but that neighbour $u$ had colour $c$ which is still present at $w$. The second equality holds because $v$ is full and has colour $1$).
Also $\psi(N(u))=\phi(N(u))\setminus 1=[\chi(G)]\setminus\{1,c\}$ (the first equality holds because precisely one neighbour $v$ of $u$ was uncoloured, and that neighbour had colour $1$ which isn't present anywhere else in the graph. The second equality holds because $\phi(N(u))=[\chi(G)]\setminus c$ since $u$ is full and has colour $c$). Thus by Lemma~\ref{Lemma_sudoku_colouring} (ii), $\psi$ is a Sudoku colouring with two uncoloured vertices, contradicting $sn(G)=|G|-1$. \end{proof} We are ready to prove our main theorem. \begin{proof}[Proof of Theorem~\ref{Main_Theorem}] The backwards direction already appears in~\cite{Sudoku_colourings} (see Corollary 3.4), so it remains to prove that every connected $G$ with $sn(G)=|G|-1$ is complete. To that end, let $G$ be connected with $sn(G)=|G|-1$. Consider a 1-minimal proper $\chi(G)$-colouring of $G$. By Lemma~\ref{Lemma_1_minimal_sudoku}, there is precisely one colour 1 vertex, call it $v$. Also by Lemma~\ref{Lemma_1_minimal_sudoku}, $v$ is full and $|N(v)|=\chi(G)-1$ --- which, using the definition of ``full'', implies that all neighbours of $v$ have different colours. By Lemma~\ref{Lemma_1_minimal_colouring}, we have that all neighbours of $v$ are full. Now we have that all vertices in $\{v\}\cup N(v)$ are full, and so Lemma~\ref{Lemma_full_vertices_complete} tells us that $\{v\}\cup N(v)$ is complete. If $G$ is not complete then, by connectedness, there is some vertex $w$ outside $\{v\}\cup N(v)$ with a neighbour $u$ in $N(v)$. Now construct a colouring $\psi$ by recolouring $w$ by colour $1$ and uncolouring $u,v$. First notice that this is a proper (partial) colouring --- indeed $w$ has no colour $1$ neighbours since $v$ is the unique colour $1$ vertex and $w\not\in N(v)$.
We have that $\psi(N(v))=\phi(N(v))\setminus \phi(u)=[\chi(G)]\setminus\{1, \phi(u)\}$ (the first equality holds because the neighbour $u$ of $v$ was uncoloured and $\phi(u)$ doesn't appear anywhere else on $N(v)$ since the neighbours of $v$ have different colours. The second equality holds because $\phi(N(v))=[\chi(G)]\setminus 1$ since $v$ is full and has colour $1$). Also $\psi(N(u))=[\chi(G)]\setminus\{\phi(u)\}$ (colour $1$ is present on $N(u)$ in $\psi$ because $\psi(w)=1$ and $w\in N(u)$. All colours in $[\chi(G)]\setminus\{1, \phi(u)\}$ are present on $N(u)$ in $\psi$ because the vertices of $N(v)\setminus u$ have exactly these colours in $\phi$, $u$ is connected to all of them, and their colours don't change going from $\phi$ to $\psi$). Thus by Lemma~\ref{Lemma_sudoku_colouring} (ii), $\psi$ is a Sudoku colouring with two uncoloured vertices, contradicting $sn(G)=|G|-1$. \end{proof}
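The statements above are easy to check by exhaustive search on small graphs. The sketch below (ours, not part of the paper; all function names are our own) builds the graph $G_{\mathrm{Sudoku}}$ described in the introduction and computes $sn(G)$ for tiny graphs by brute force:

```python
from itertools import combinations, product

def sudoku_graph_edges():
    # Edges of G_Sudoku: the 81 cells of a 9x9 grid, joined when they share
    # a row, a column, or a 3x3 box.
    cells = [(r, c) for r in range(9) for c in range(9)]
    edges = set()
    for i in range(81):
        for j in range(i + 1, 81):
            (r1, c1), (r2, c2) = cells[i], cells[j]
            if r1 == r2 or c1 == c2 or (r1 // 3, c1 // 3) == (r2 // 3, c2 // 3):
                edges.add((i, j))
    return edges

def chromatic_number(n, edges):
    # Smallest k admitting a proper k-colouring (exhaustive; tiny graphs only).
    for k in range(1, n + 1):
        if any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n)):
            return k

def num_extensions(n, edges, k, partial):
    # Number of proper k-colourings agreeing with the partial colouring.
    return sum(
        1
        for c in product(range(k), repeat=n)
        if all(c[v] == col for v, col in partial.items())
        and all(c[u] != c[v] for u, v in edges)
    )

def sudoku_number(n, edges):
    # sn(G): fewest clues in a partial proper chi(G)-colouring with a unique
    # extension.  Improper partials have zero extensions, so they never count.
    k = chromatic_number(n, edges)
    for size in range(n + 1):
        for verts in combinations(range(n), size):
            for cols in product(range(k), repeat=size):
                if num_extensions(n, edges, k, dict(zip(verts, cols))) == 1:
                    return size
```

For instance, `sudoku_number` returns $3=|K_4|-1$ on $K_4$, returns $1$ on the path $P_3$ (connected and bipartite), and returns $3<|C_5|-1$ on the non-complete cycle $C_5$, in line with the theorem. ($G_{\mathrm{Sudoku}}$ itself is far beyond exhaustive search: its Sudoku number $17$ required the computer-assisted proof cited above.)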
https://arxiv.org/abs/1703.06316
On the linear polarization constants of finite dimensional spaces
We study the linear polarization constants of finite dimensional Banach spaces. We obtain the correct asymptotic behaviour of these constants for the spaces $\ell_p^d$: they behave as $\sqrt[p]{d}$ if $1\le p\le 2$ and as $\sqrt{d}$ if $2\le p<\infty$. For $p=\infty$ we get the asymptotic behavior up to a logarithmic factor
\section*{Introduction} Given a Banach space $X$, its \textit{$n$th linear polarization constant} is defined as the smallest constant $\mathbf c_n(X)$ such that for any set of $n$ linear functionals $\{\psi_j\}_{j=1}^n\subseteq X^*$, we have \begin{equation}\label{problempolarizacion} \Vert \psi_1 \Vert \cdots \Vert \psi_n \Vert \le \mathbf c_n(X) \, \Vert \psi_1 \cdots \psi_n\Vert, \end{equation} where $\psi_1 \cdots \psi_n$ is the $n$-homogeneous polynomial given by the pointwise product of $\psi_1,\ldots,\psi_n,$ and $\Vert \cdot \Vert$ is the supremum norm over the unit sphere of $X.$ Related to this concept, the \textit{linear polarization constant }$\mathbf c(X)$ of $X$ is defined as $$\mathbf c(X) = \displaystyle\lim_{{n\rightarrow \infty}} (\mathbf c_n(X))^{\frac 1 n}.$$ The existence of this limit is a result of \cite{RS}. These constants have been studied by several authors. Among the works on this topic, in~\cite{RT} the authors proved that for each $n$ there is a constant $K_n$ such that $\mathbf c_n(X) \leq K_n$ for every Banach space $X$. As a corollary of Theorem 3 from \cite{BST}, the best possible constant $K_n$, for \textit{complex} Banach spaces, is $n^n$. Arias-de-Reyna proved in~\cite{A} that if $X$ is a \textit{complex }Hilbert space, of dimension greater than or equal to $n$, then $$\mathbf c_n(X)=n^{\frac n 2}.$$ This result holds for real Hilbert spaces and $n\leq 5$ (see Theorem 4.6 in \cite{PPT}), but it is not known if it is true for every natural number $n$. We recall that the linear polarization constant is infinite for infinite dimensional Banach spaces (see Theorem 12 in \cite{RS}). As a consequence, an interesting problem is to understand how this constant behaves as the dimension of the involved spaces varies.
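The case $n=2$, $d=2$ over $\mathbb R$ (covered by the $n\leq 5$ result just cited) can be verified numerically. A small sketch (ours; the function name is our own) fixes $\psi_1=e_1$ by rotation invariance and maximizes $\Vert\psi_1\Vert\Vert\psi_2\Vert/\Vert\psi_1\psi_2\Vert$ over the angle of $\psi_2$:

```python
import numpy as np

def c2_real_hilbert_2d(num_angles=180, num_points=2000):
    # c_2 of the real 2-dimensional Hilbert space: the largest value of
    # 1 / sup_{|x|=1} |<x, psi_1><x, psi_2>| over unit functionals psi_1, psi_2.
    # By rotation invariance we may fix psi_1 = e_1 and sweep the angle of psi_2.
    ts = np.linspace(0.0, np.pi, num_points, endpoint=False)  # half circle suffices
    best = 0.0
    for a in np.linspace(0.0, np.pi, num_angles, endpoint=False):
        sup = np.max(np.abs(np.cos(ts) * np.cos(ts - a)))
        best = max(best, 1.0 / sup)
    return best
```

The maximum is attained at an orthonormal pair, where $\sup_x|\langle x,\psi_1\rangle\langle x,\psi_2\rangle|=1/2$, so the returned value is $2=2^{2/2}$ up to grid resolution.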
For example, the linear polarization constant of a real $d$-dimensional Hilbert space $\mathcal{H}_d$ was obtained by \mbox{Garc\'ia-V\'azquez} and Villa in \cite{GV}, where they proved that $\mathbf c(\mathcal{H}_d)$ behaves like $\sqrt{d}$ as $d$ goes to infinity. This result was later extended to complex Hilbert spaces by A. Pappas and S. G. R\'ev\'esz in \cite{PR}. For the spaces $\ell_1^d(\mathbb C)$ it is known that $\mathbf c(\ell_1^d(\mathbb C))=d$ (see Proposition 17 of \cite{RS}). In this article we study the $n$th linear polarization constants, as well as the linear po\-la\-ri\-za\-tion constant of finite dimensional Banach spaces. In the first section we develop a method to estimate the linear polarization constant of a finite dimensional space (see Theorem~\ref{mainprop}). In Section~\ref{sec-elepe}, we apply this method to the finite dimensional spaces $\ell_p^d(\mathbb K)$, obtaining in Theorem~\ref{teo polarizacion} the following asymptotically optimal results on $d$ (the \emph{asymptotic} notation is explained in Section~\ref{sec-elepe}): $$\mathbf c(\ell_p^d(\mathbb K)) \asymp \sqrt[p]{d} \quad \mbox{ if } 1\le p \leq 2 \quad \mbox{ and } \quad \mathbf c(\ell_p^d(\mathbb K)) \asymp \sqrt{d} \quad \mbox{ if } 2\le p<\infty . $$ For $p=\infty$ we obtain $\sqrt{d} \prec \mathbf c(\ell_\infty^d(\mathbb K)) \prec \sqrt{d \log d}.$ In Section~\ref{seccion infinito} we use a probabilistic approach to estimate the norm of the product of linear functionals with coefficients $\pm 1$ (in the canonical basis) over the spaces $\ell_\infty^d(\mathbb C)$. This allows us to give in Proposition~\ref{prop infinito} some estimates for their $n$th linear polarization constants. \section{Linear polarization constants of finite dimensional spaces} Throughout this work, given a Banach space $X$, $B_X$ and $S_X$ will stand for the unit ball and the unit sphere respectively. In this section we present a general method for es\-ti\-ma\-ting linear polarization constants.
In order to state our results, which give lower and upper bounds for these constants, we will define the so-called \emph{admissible} measures, which are measures that satisfy a rather mild condition. \begin{definition}\label{admisible} Let $X$ be a Banach space and $\lambda$ a Borel measure over a Borel subset $K\subseteq B_X$. We say that $\lambda$ is \textit{admissible} if $$\int_K \log|\langle x,\psi\rangle|\,\,\,d\lambda(x)$$ is finite for every $\psi\in S_{X^*}$ and the functions $g_m:S_{X^*}\rightarrow \mathbb R$ defined as \[ g_m(\psi)=\int_K \max\{\log|\langle x,\psi\rangle |,-m\} \,\,\,d\lambda(x), \] converge uniformly to the function $g:S_{X^*}\rightarrow \mathbb R$, defined as \[ g(\psi)=\int_K \log|\langle x,\psi\rangle |\,\,\,d\lambda(x). \] \end{definition} For example, for $\mathcal{H}$ a finite dimensional Hilbert space, the Lebesgue measure over $S_\mathcal{H}$ is admissible, since the functions $g_m$ are constant functions that converge to the constant function $g$. The main result of this section is the following. \begin{thm}\label{mainprop} Given a finite dimensional Banach space $X$, let $\mu$ and $\eta$ be admissible probability measures over $S_X$ and $S_{X^*}$ respectively. Then there are $\psi_0 \in S_{X^*}$ and $x_0\in S_X$, depending on $\mu$ and $\eta$, such that $$ \exp\left\{-\int_{S_{X^*}} \log |\langle x_0,\psi\rangle | \,\,\,d\eta(\psi) \right\} \leq \mathbf c(X) \leq \exp\left\{- \int_{S_X} \log |\langle x,\psi_0\rangle | \,\,\,d\mu(x) \right\}.$$ \end{thm} We will treat separately the lower and the upper bound, and state both as propositions. Let us first sketch some of the ideas behind the proof, specifically for the lower bound. Since $X$ is finite dimensional, by a compactness argument there exist, for each natural number $n$, linear functionals $\psi_1^n,\ldots,\psi_n^n\in S_{X^*}$ such that \begin{equation}\label{peores} \Vert \psi_1^n\cdots \psi_n^n \Vert =\mathbf c_n(X)^{-1}.
\end{equation} Take now $x_n\in B_X$ a point where the function $\psi_1^n\cdots \psi_n^n$ attains its norm, i.e., $\Vert \psi_1^n\cdots \psi_n^n \Vert = \vert \psi_1^n\cdots \psi_n^n (x_n)\vert$. Then, $$ \Vert \psi_1^n\cdots \psi_n^n \Vert^{\frac{1}{n}}= \exp \left\{\frac{1}{n} \sum_{i=1}^n \log |\psi_i^n(x_n)|\right\}.$$ If we consider the functions $f_n:S_{X^*}\rightarrow \mathbb R\cup\{-\infty\}$ defined as $f_n(\varphi)=\log|\varphi(x_n)|$ and $\eta_n$ the probability measure over $S_{X^*}$ defined as $$\eta_n = \frac{1}{n} \sum_{i=1}^n \delta_{\psi_i^n},$$ then we have: $$\frac{1}{n}\sum_{i=1}^n \log\left |\psi_i^n(x_n)\right|= \int_{S_{X^*}} f_n(\psi) \,\,\,d\eta_n .$$ The idea now is to take a subsequence $\{n_k\}$ such that $\eta_{n_k}$ $w^*$-converges to some probability measure $\eta$ and such that $x_{n_k}$ converges to some $x_0\in S_X$. All this will give us an estimate of $\mathbf c(X)$ in terms of $\eta$ and the function $f_0:S_{X^*}\rightarrow \mathbb R\cup\{-\infty\}$, defined as $f_0(\varphi)=\log|\varphi(x_0)|$. Since it is not clear how to find a set of functionals satisfying \eqref{peores} (and then, it is not clear that we can obtain $\eta$ and the estimate for $\mathbf c$), the following alternative procedure gives a lower bound for it: we fix a measure $\eta$ beforehand and choose the sets of linear functionals $\psi_1^n,\ldots , \psi_n^n$ to obtain this particular $\eta$ as the $w^*$-limit of the measures $\eta_n$. These sets of linear functionals may not satisfy \eqref{peores}, but we clearly have $$ \Vert \psi_1^n\cdots \psi_n^n \Vert \geq \mathbf c_n(X)^{-1},$$ which is precisely what we need to obtain the desired lower bounds. The sharpness of the bounds thus obtained will depend on the good choice of the probability measure $\eta$.
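The empirical measures $\eta_n$ above can be simulated directly: drawing the functionals $\psi_j^n$ independently from $\eta$ and averaging a fixed continuous $f$ reproduces $\eta(f)$, which is the strong-law mechanism behind the lemma of Pappas and R\'ev\'esz used below. A minimal sketch (ours, not from the paper), with $\eta$ the uniform measure on the unit circle of $(\mathbb R^2)^*$ and $f(\psi)=\log|\langle x_0,\psi\rangle|$, whose exact integral is $-\log 2$:

```python
import numpy as np

def empirical_vs_integral(n=200_000, seed=0):
    # eta: uniform probability measure on the unit circle of (R^2)*;
    # f(psi) = log|<x_0, psi>| with x_0 = e_1, so f = log|cos(angle)|.
    # Exact value of eta(f) is -log 2; the empirical average approaches it.
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.mean(np.log(np.abs(np.cos(angles))))
```

Here the sampled $\psi_j^n$ are i.i.d. draws rather than the optimizers of \eqref{peores}, exactly as in the alternative procedure just described.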
\subsection*{Lower bounds in Theorem \ref{mainprop}} In the sequel, for a measure space $(K, \nu)$ and an integrable function $f:K\rightarrow \mathbb R$ we will use the notation $$\nu(f) = \int_K f(\omega) \,\,\,d\nu(\omega).$$ We need the following auxiliary lemma due to A. Pappas and S. G. R\'ev\'esz (see \cite[Lemma~4]{PR}). \begin{lem}\label{stronglaw} Let $\eta$ be any probability measure over $S_{X^*}.$ There is a sequence of sets of norm one linear functionals $\{\psi_1^n,\ldots, \psi_n^n\}_{n\in \mathbb N}$ over $X$ such that $$\displaystyle\lim_{n\rightarrow \infty} \frac 1 n \sum_{j=1}^n f(\psi_j^n) = \int_{S_{X^*}} f(\psi) \,\,\,d\eta(\psi)$$ for any continuous function $f:S_{X^*} \rightarrow \mathbb R$. In other words, if we consider the measures $\eta_n=\frac{1}{n}\sum_{j=1}^n \delta_{\psi_j^n}$, the sequence $\{\eta_n\}_{n\in \mathbb N}$ $w^*$-converges to $\eta$. \end{lem} We remark that, although the result in \cite{PR} is stated for $X$ a Hilbert space and $\eta$ the normalized Lebesgue measure, the proof works in the more general setting of our statement. Now we are ready to prove the lower estimates for $\mathbf c(X)$. \begin{prop} Given a finite dimensional Banach space $X$ and an admissible probability measure $\eta$ over $S_{X^*}$, there is a point $x_0 \in S_{X}$, depending on $\eta$, such that $$\mathbf c(X) \geq \exp\left\{-\int_{S_{X^*}} \log |\langle x_0,\psi\rangle | \,\,\, d\eta(\psi) \right\}.$$ \end{prop} \begin{proof} Take a sequence of sets of norm one linear functionals $\{\psi_1^n,\ldots,\psi_n^n\}_{n\in \mathbb N}$ as in Lemma~\ref{stronglaw}, and consider the measures $\eta_n=\frac{1}{n}\sum_{j=1}^n \delta_{\psi_j^n}$. Let $x_n\in S_X$ be a point where $\prod_{j=1}^n \psi_j^n$ attains its norm. We may assume $\Big\Vert \prod_{j=1}^n \psi_j^n \Big\Vert^\frac{1}{n}$ converges, otherwise we work with a subsequence. With the same argument we may assume that there is $x_0\in S_X$ such that $x_n\rightarrow x_0$.
Since $$\mathbf c_n(X) \Big\Vert \prod_{j=1}^n \psi_j^n \Big\Vert \geq 1,$$ we need an upper bound for $ \displaystyle\lim_{n\rightarrow \infty} \Big\Vert \prod_{j=1}^n \psi_j^n \Big\Vert^\frac 1 n$. For every $n,m \in \mathbb N_0$ consider the functions $f_n: S_{X^*} \rightarrow \mathbb R\cup \{-\infty\}$ and $f_{n,m}:S_{X^*} \rightarrow \mathbb R$ defined as $$f_n(\psi) = \log |\langle x_n,\psi\rangle |$$ $$f_{n,m}(\psi)=\max \{f_n(\psi), -m\}.$$ Using that $f_{n,m} \geq f_n$ we obtain \begin{eqnarray} \Big\Vert \prod_{j=1}^n \psi_j^n \Big\Vert^{\frac 1 n} &=& \prod_{j=1}^n |\langle x_n,\psi_j^n\rangle | ^{\frac 1 n} = \exp \left\{ \frac 1 n \sum_{j=1}^n \log \left|\langle x_n,\psi_j^n\rangle \right| \right\}\nonumber \\ &= &\exp \left\{ \frac 1 n \sum_{j=1}^n f_n(\psi_j^n) \right\}= \exp \left\{ \eta_n(f_n) \right\}\nonumber \\ &\leq &\exp \left\{ \eta_n(f_{n,m}) \right\}.\nonumber \ \end{eqnarray} For fixed $m$, since $x_n\to x_0$, it is easy to check that the functions $f_{n,m}$ converge uniformly to $f_{0,m}$ as $n\to \infty$. Also, we know that $\eta_n$ $w^*$-converges to $\eta$. This altogether gives that $\eta_n(f_{n,m})$ converges to $\eta(f_{0,m})$ and then $$\displaystyle\lim_{n\rightarrow \infty} \Big\Vert \prod_{j=1}^n \psi_j^n \Big\Vert^\frac 1 n \leq \exp \left\{ \eta(f_{0,m}) \right\}.$$ This holds for arbitrary $m$. Since $\eta$ is admissible, taking the limit in $m$, we obtain $$ \displaystyle\lim_{n\rightarrow \infty}\Big\Vert \prod_{j=1}^n \psi_j^n \Big\Vert^\frac 1 n \leq \exp\{\eta(f_{0})\} = \exp\left\{ \int_{S_{X^*}} \log |\langle x_0,\psi\rangle | \,\,\,d\eta(\psi) \right\},$$ as desired.
\end{proof} \begin{rem} In the previous proof we only used from Definition \ref{admisible} that $$\int_{S_{X^*}} \max\{\log|\langle x_0,\varphi\rangle |,-m\} \,\,\,d\eta(\varphi)\rightarrow \int_{S_{X^*}} \log|\langle x_0,\varphi\rangle |\,\,\,d\eta(\varphi),$$ that is, we only needed pointwise convergence for the point $x_0 \in S_{(X^*)^*}$, rather than uniform convergence on $S_{(X^*)^*}.$ To see this, it is enough to have $$\int_{S_{X^*}} \log |\langle x_0,\psi\rangle | \,\,\, d\eta(\psi) > -\infty$$ and apply the Dominated Convergence Theorem. \end{rem} \subsection*{Upper bounds in Theorem \ref{mainprop}} For the upper bounds we will obtain a slightly better result, since we will get upper bounds for $\mathbf c_n(X)$ rather than for $\mathbf c(X)$. Setting $K$ as the sphere $S_X$ in the following proposition we obtain the upper bounds of Theorem \ref{mainprop}. \begin{prop}\label{prop polarizacion nesima} Given a finite dimensional Banach space $X$, $K\subseteq B_X$, and an admissible probability measure $\mu$ over $K$, there is a point $\psi_0\in S_{X^*}$, depending on $\mu$, such that $$\mathbf c_n(X)\leq \exp\left\{-n \int_{K} \log |\langle x,\psi_0\rangle | \,\,\,d\mu(x) \right\}.$$ \end{prop} \begin{proof} Consider the function $g:S_{X^*}\rightarrow \mathbb R$ defined as $$g(\psi)=\int_{K} \log |\langle x,\psi\rangle | \,\,\,d\mu(x).$$ We start by showing that $g$ is continuous. For every natural number $m$ define $g_m:S_{X^*}\rightarrow \mathbb R$ by $$g_m(\psi) =\int_{K} \max\{-m,\log |\langle x,\psi\rangle |\} \,\,\,d\mu(x).$$ Given that $\mu$ is admissible, $\{g_m\}_{m\in \mathbb N}$ converges uniformly to $g$ and therefore, since each $g_m$ is con\-ti\-nuous, $g$ is con\-ti\-nuous. Given that $g$ is continuous and $S_{X^*}$ is compact, there is a global minimum $\psi_0\in S_{X^*}$ of $g$.
Recall that $\mathbf c_n(X)$ is the smallest constant such that $$1=\prod_{j=1}^n \Vert \psi_j \Vert \leq \mathbf c_n(X) \left\Vert \prod_{j=1}^n \psi_j \right\Vert$$ for any set of linear functionals $\psi_1,\ldots, \psi_n \in S_{X^*}$. So we need to prove that $$\exp\left\{n \int_{K} \log |\langle x,\psi_0\rangle | \,\,\,d\mu(x) \right\} \leq \left\Vert \prod_{j=1}^n \psi_j \right\Vert.$$ Using that $\mu$ is a probability measure and that $\psi_0$ minimizes $g$, we obtain \begin{eqnarray} \left\Vert \prod_{j=1}^n \langle \cdot,\psi_j\rangle \right\Vert &\geq & \exp\left\{\log\left( \displaystyle\sup_{x\in K} \prod_{j=1}^n |\langle x,\psi_j\rangle |\right)\right\} \nonumber \\ &=& \exp\left\{ \displaystyle\sup_{x\in K} \sum_{j=1}^n \log |\langle x,\psi_j\rangle |\right\} \nonumber \\ &\geq& \exp\left\{ \int_{K} \sum_{j=1}^n \log |\langle x,\psi_j\rangle | \,\,\,d\mu(x)\right\} \nonumber \\ &=& \exp\left\{\sum_{j=1}^n \int_{K} \log |\langle x,\psi_j\rangle | \,\,\,d\mu(x) \right\}\geq \exp\left\{n \int_{K} \log |\langle x,\psi_0\rangle | \,\,\,d\mu(x) \right\},\nonumber \ \end{eqnarray} as desired. \end{proof} \begin{rem} In the previous proof we used that $\mu$ is admissible only to prove that $g$ has a global minimum. \end{rem} \section{Linear polarization constants of $\ell_p^d$ spaces}\label{sec-elepe} In this section we apply the method developed in the previous section and stated in Theorem \ref{mainprop}, to estimate the asymptotic behaviour of the linear polarization constants $\mathbf c(\ell_p^d(\mathbb K))$. To describe the asymptotic behaviour of two sequences of positive numbers $\{a_d\}_{d\in \mathbb N}$ and $\{b_d\}_{d\in \mathbb N}$ we use the notation $a_d \prec b_d$ to indicate that there is a constant $L>0$ such that $a_d \leq L b_d$ for every $d$. The notation $a_d \asymp b_d$ means that $a_d \prec b_d$ and $a_d \succ b_d$. In the following we write $dS$ for the normalized surface (Lebesgue) measure over the sphere $S_{\ell_2^d}$.
When we consider a $d$-dimensional (real or complex) Hilbert space $\mathcal{H}_d$, taking in Theorem~\ref{mainprop} both measures $\mu$ and $\eta$ to be the normalized Lebesgue measure over $S_{\mathcal{H}_d}=S_{{\mathcal{H}^*_d}}$, we recover the following result from \cite{PR}: \begin{equation} \mathbf c(\mathcal{H}_d) = \exp\left\{ -\int_{S_{\mathcal{H}_d}} \log|\langle x,\psi_0\rangle | dS(x) \right\}. \label{hilbert} \end{equation} Note that, by symmetry, this expression does not depend on $\psi_0$. If we call $$L(d,\mathbb K )=\int_{S_{\mathcal{H}_d}} \log|\langle x,\psi_0\rangle | dS(x),$$ a standard computation (see \cite{PR}) gives: \[ -L(d,\mathbb R)= \left\{ \begin{array}{lcl} \sum_{j=1}^{(d-2)/2} \frac{1}{2j} +\log 2 & \mbox{ if } & d\equiv 0(2) \\ & & \\ \sum_{j=0}^{(d-3)/2} \frac{1}{2j+1} & \mbox{ if } & d\equiv 1(2) \end{array} \right. \mbox{ and } -L(d,\mathbb C)=\frac{1}{2} \sum_{j=1}^{d-1} \frac 1 j. \] In particular $\mathbf c({\mathcal{H}_d}) \asymp \sqrt{d}$. Moreover, using the fact that $ \sum_{j=1}^{d-1} \frac 1 j - 2 \log(\sqrt{d})$ increases monotonically to the Euler-Mascheroni constant $\gamma$ it is easy to see that for $\mathbb K =\mathbb R$ and $d$ even $$\mathbf c({\mathcal{H}_d})= e^{-L(d,\mathbb R )} \leq e^\frac{\gamma}{2}\sqrt{2d},$$ while for the rest of the cases we get $$\mathbf c({\mathcal{H}_d})\leq e^\frac{\gamma}{2}\sqrt{d}. $$ In order to apply our results to a $d$-dimensional Banach space $X$, we need good candidates for the measures $\eta$ and $\mu$. Ideally, the measure $\eta$ on $S_{X^*}$ should be induced by a sequence of sets of norm one linear functionals $\{\psi_1^n,\ldots,\psi_n^n\}_{n\in \mathbb N}$ such that $$\Vert \psi_1^n\cdots \psi_n^n \Vert = \mathbf c_n(X)^{-1}.$$ Since it is not easy to find such functionals, a good guess of their distribution on $S_{X^*}$ would be helpful.
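The closed formulas for $L(d,\mathbb K)$ are easy to evaluate. The sketch below (ours; function names are our own) computes $\mathbf c(\mathcal H_d)=e^{-L(d,\mathbb K)}$, reading the odd-$d$ real sum over $j=0,\dots,(d-3)/2$ (for $d=3$ the direct integral gives $-L(3,\mathbb R)=1$), and checks the $\sqrt d$ behaviour:

```python
import math

def minus_L(d, field):
    # -L(d, K) from the closed formulas; the odd real sum is taken over
    # j = 0, ..., (d-3)/2, which matches the direct integral for d = 3.
    if field == "C":
        return 0.5 * sum(1.0 / j for j in range(1, d))
    if d % 2 == 0:  # real, d even
        return sum(1.0 / (2 * j) for j in range(1, (d - 2) // 2 + 1)) + math.log(2.0)
    return sum(1.0 / (2 * j + 1) for j in range((d - 3) // 2 + 1))  # real, d odd

def polarization_constant_hilbert(d, field):
    # c(H_d) = exp(-L(d, K))
    return math.exp(minus_L(d, field))
```

For example, `polarization_constant_hilbert(2, "R")` returns $2$, and for complex $d=1000$ the ratio against $\sqrt d$ is about $1.33\approx e^{\gamma/2}$, consistent with $\mathbf c(\mathcal H_d)\asymp\sqrt d$.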
When $X$ is a Hilbert space, due to the symmetry of the sphere, it is natural to believe that they are uniformly distributed across the sphere. And that is a good choice: the measure induced by uniformly distributed functionals is the normalized Lebesgue measure which, as we observed, is an optimal choice of $\eta$. But this argument is no longer valid for the spaces $\ell_p^d$ with $p\neq 2$. If $\frac{1}{p}+\frac{1}{q}=1$, the lack of symmetry of $S_{\ell_q^d}$ for $q\ne 2$ suggests that the linear functionals will not be uniformly distributed on the sphere. After some reflection, by the geometry of the sphere, one may expect that if $p<2$, the linear functionals should be more concentrated around the points $e_1,\ldots,e_d$ than around points of the form $\sum \lambda_i e_i$, with $|\lambda_i|=\frac{1}{d^{1/q}}$. This is the case for $n\leq d$ (see \cite[Theorem 2.4]{CPR}), or for $n=dk$, as we will see below (see proof of Theorem \ref{teo polarizacion}, Step II). For $p>2$ we expect the reverse situation. Then, for the spaces $\ell_p^d$ we will choose a measure $\eta$ reflecting the previous reasoning and try to obtain the best possible lower bound, taking into consideration that we will not have control over the vector $x_0$ mentioned in Theorem \ref{mainprop}. The following is our main result and gives the asymptotic behaviour of the linear polarization constants $\mathbf c(\ell_p^d(\mathbb K))$ as $d$ goes to infinity. This extends results of \cite{GV} and \cite{PR} to non-Euclidean spaces. We devote the rest of this section to its proof. \begin{thm}\label{teo polarizacion} Let $1\leq p < \infty$. Then, \[ \mathbf c(\ell_p^d(\mathbb K)) \asymp \left\{ \begin{array}{lcl} \sqrt{d} & \mbox{ if } & p\geq 2 \\ & & \\ \sqrt[p]{d} & \mbox{ if } & p \leq 2. \end{array} \right.
\] For $p=\infty$ we have the following estimate: $$\sqrt{d} \prec \mathbf c(\ell_\infty^d(\mathbb K)) \prec \sqrt{d \log d}.$$ \end{thm} In order to prove Theorem \ref{teo polarizacion} we need some auxiliary calculations. The next lemma is essentially contained in Lemma 2.8 from \cite{CGP}, but we state it and say a few words about the proof for completeness. \begin{lem}\label{concentration} Given $1\leq p <\infty$ we have $$\int_{S_{\ell_2^d(\mathbb K)}} \| t \|_p^p dS(t)\asymp d^{1-\frac{p}{2}},$$ and for $p=\infty$ we have $$\int_{S_{\ell_2^d(\mathbb K)}} \| t \|_\infty dS(t)\asymp \left(\frac{\log d}{d}\right)^{\frac 1 2}.$$ \end{lem} \begin{proof} The complex case can be easily deduced from the real case, since the norms of $\ell_p^d(\mathbb C)$ and $\ell_p^{2d}(\mathbb R)$ are equivalent up to a factor which is independent of $d$. The case $p<\infty$ and $\mathbb K=\mathbb R$ is a particular case of Lemma 2.8 from \cite{CGP}. The case $p=\infty$ and $\mathbb K =\mathbb R$ follows just as in the case $p<\infty$, considering the Gaussian measure $\gamma$ over $\mathbb R^d$ and using the well known behaviour of the maximum of $d$ standard Gaussian variables $$\int_{\mathbb R^d} \Vert z \Vert _\infty \, d\gamma(z) \asymp \sqrt{\log d}. \qedhere$$ \end{proof} \noindent With this lemma we are able to prove the following. \begin{lem}\label{lemaux} Let $ 1\leq p < \infty$. Then $$ \exp\left\{\int_{S_{\ell_2^d(\mathbb K)}} \log \left(\frac{1}{\Vert z \Vert_p}\right) dS(z) \right\}\asymp d^{\frac 1 2 - \frac 1 p} ,$$ and for $p=\infty$ we have $$ \left(\frac{d}{\log d} \right)^{\frac 1 2} \prec \exp\left\{\int_{S_{\ell_2^d(\mathbb K)}} \log \left(\frac{1}{\Vert z \Vert_\infty}\right) dS(z) \right\}\prec d^{\frac 1 2}.$$ \end{lem} \begin{proof} We prove only the real case, since the complex case follows from the real one as in Lemma \ref{concentration}.
Let us start with the upper bound for $p<\infty$. Using Jensen's inequality and relation (6.2) from the proof of Theorem 6.1 in \cite{Pi}, $$\int_{S_{\ell_2^d(\mathbb R)}} \frac{1}{\Vert z \Vert_p^d}dS(z) =\frac{|B_{\ell_p^d}|}{|B_{\ell_2^d}|},$$ we have \begin{eqnarray} \int_{S_{\ell_2^d(\mathbb R)}} \log \left(\frac{1}{\Vert z \Vert_p}\right) dS(z)&=&\frac{1}{d} \int_{S_{\ell_2^d(\mathbb R)}} \log \left(\frac{1}{\Vert z \Vert_p^d}\right) dS(z) \nonumber \\ &\leq &\frac{1}{d} \log\left(\int_{S_{\ell_2^d(\mathbb R)}} \frac{1}{\Vert z \Vert_p^d}dS(z)\right)\nonumber \\ &=& \frac{1}{d}\log \left(\frac{|B_{\ell_p^d}|}{|B_{\ell_2^d}|} \right) = \log \left( \left(\frac{|B_{\ell_p^d}|}{|B_{\ell_2^d}|} \right)^{\frac 1 d} \right). \nonumber \ \end{eqnarray} Therefore, by \cite[Equation (1.18)]{Pi} $$|B_{\ell_p^d}|^\frac{1}{d}\asymp \frac{1}{d^{1/p}},$$ we obtain $$\exp\left\{\int_{S_{\ell_2^d(\mathbb R)}} \log \left(\frac{1}{\Vert z \Vert_p}\right) dS(z) \right\} \leq \left(\frac{|B_{\ell_p^d}|}{|B_{\ell_2^d}|} \right)^{\frac 1 d} \prec d^{\frac 1 2 - \frac 1 p}. $$ The upper bound for $p=\infty$ follows using the obvious modifications to the previous reasoning. For the lower bound and $p<\infty$, we will use again Jensen's inequality to get \begin{eqnarray} \int_{S_{\ell_2^d(\mathbb R)}} \log \left(\frac{1}{\Vert z \Vert_p}\right) dS(z) &=& \int_{S_{\ell_2^d(\mathbb R)}} -\frac{1}{p}\log \left(\Vert z \Vert_p^p\right) dS(z) \nonumber \\ &\geq & -\frac{1}{p}\log \left(\int_{S_{\ell_2^d(\mathbb R)}} \Vert z \Vert_p^p dS(z)\right). \nonumber \ \end{eqnarray} Then, using Lemma \ref{concentration}, we obtain \begin{eqnarray} \exp\left\{\int_{S_{\ell_2^d(\mathbb R)}} \log \left(\frac{1}{\Vert z \Vert_p}\right) dS(z) \right\} &\geq & \left(\int_{S_{\ell_2^d(\mathbb R)}} \Vert z \Vert_p^p dS(z)\right)^{-\frac{1}{p}} \nonumber \\ &\succ & d^{\frac 1 2 - \frac 1 p}.
\nonumber \ \end{eqnarray} As before, using the obvious modifications to the previous reasoning, we obtain the lower bound for the case $p=\infty$. \end{proof} Now we are ready to prove our main result. \begin{proof}[Proof of Theorem \ref{teo polarizacion}] For better organization, we divide the proof into several parts. Since the proof is the same for $\mathbb K=\mathbb C$ and $\mathbb R$, for simplicity we omit the scalar field from the notation. Throughout this proof $q$ will be the conjugate exponent of $p$. \paragraph{\textbf{Step I:} $\mathbf c(\ell_p^d) \succ \sqrt{d}$ for $2 < p \leq \infty$.} As mentioned before, we want to consider a measure related to the geometry of the sphere $S_{\ell_p^d}$. That being said, we also want a measure that can be easily related to the Lebesgue measure of $S_{\ell_2^d}$, given that for Hilbert spaces the linear polarization constant is known. Consider, then, the measure $\eta$ on $S_{\ell_q^d}$ defined by $$\eta(A) = \int_{H(A)} \frac{1}{|DH^{-1}(\varphi)|} dS(\varphi),$$ where $H:S_{\ell_q^d}\rightarrow S_{\ell_2^d}$ is defined as $H(\psi)=\frac{\psi}{\Vert \psi \Vert_2}$. That is, we choose $\eta$ such that for any integrable function $f:S_{\ell_q^d}\rightarrow \mathbb K$, we have \begin{equation}\label{ecuacion eta} \int_{S_{\ell_q^d}} f(\psi) \,\,\,d\eta(\psi) = \int_{S_{\ell_2^d}} f\left(\frac{\varphi}{\Vert \varphi\Vert_q}\right) dS(\varphi). \end{equation} Using that the normalized Lebesgue measure is admissible, and its close relation with $\eta$, it is easy to see that $\eta$ is admissible. Then, by Theorem \ref{mainprop}, there is $x_0\in S_{\ell_p^d}$ such that $$\mathbf c(\ell_p^d) \geq \exp\left\{ -\int_{S_{\ell_q^d}} \log (|\langle x_0,\psi\rangle |) \,\,\,d\eta(\psi) \right\}.$$ Let us find an upper bound for the integral.
By \eqref{ecuacion eta}, we have \begin{eqnarray} \int_{S_{\ell_q^d}} \log (|\langle x_0,\psi\rangle |) \,\,\,d\eta(\psi) &= & \int_{S_{\ell_2^d}} \log \left(\left|\langle x_0,\frac{\varphi}{\Vert \varphi \Vert_q}\rangle \right|\right) dS(\varphi) \nonumber \\ &= & \int_{S_{\ell_2^d}} \log \left(\left|\langle x_0\frac{\Vert x_0 \Vert_2}{\Vert x_0\Vert_2},\frac{\varphi}{\Vert \varphi \Vert_q}\rangle \right|\right) dS(\varphi) \nonumber \\ &= &\int_{S_{\ell_2^d}} \log \left(\left|\langle \frac{x_0}{\Vert x_0 \Vert_2},\varphi\rangle \right|\right) dS(\varphi) \nonumber \\ & & +\int_{S_{\ell_2^d}} \log \left(\frac{1}{\Vert \varphi \Vert_q}\right) dS(\varphi)+ \log (\Vert x_0 \Vert_2). \nonumber \ \end{eqnarray} Then, using \eqref{hilbert}, Lemma \ref{lemaux} and that $x_0 \in S_{\ell_p^d}$, with $p>2$, we obtain \begin{eqnarray} \mathbf c(\ell_p^d) &\geq & \mathbf c(\ell_2^d) \exp\left\{ -\int_{S_{\ell_2^d}} \log \left(\frac{1}{\Vert \varphi \Vert_q}\right) dS(\varphi) \right\} \frac{1}{\Vert x_0 \Vert_2}\nonumber \\ &\succ &\mathbf c(\ell_2^d) d^{\frac 1 q - \frac 1 2} d^{\frac 1 p - \frac 1 2} = \mathbf c(\ell_2^d) \nonumber\\ &\asymp & \sqrt{d}.\nonumber\ \end{eqnarray} \paragraph{\textbf{Step II:} $\mathbf c(\ell_p^d) \succ \sqrt[p]{d}$ for $p < 2$.} Note that in this case, the previous procedure would lead us to $\mathbf c(\ell_p^d) \succ \sqrt[q]{d},$ so we need an alternative way. It is enough to find a subsequence of natural numbers $\{n_k\}_{k\in \mathbb N}$ such that $$\mathbf c_{n_k} (\ell_p^d) \succ d^{\frac{n_k}{ p}}.$$ Let us consider the subsequence $n_k=dk$. For each $k$ consider the following set of norm one linear functionals $$\{\underbrace{e_1,\ldots, e_1,}_{k \mbox{ times}}\ldots ,\underbrace{e_d,\ldots ,e_d}_{k \mbox{ times}}\}\subseteq S_{\ell_q^d},$$ that is, we consider $k$ copies of each vector of the canonical basis. 
Then we have \begin{equation} \mathbf c_{n_k} (\ell_p^d) \geq \Vert (e_1)^k\cdots (e_d)^k \Vert^{-1} = \sqrt[p]{ \frac{ \left( k+\cdots +k \right)^{k+\cdots +k}}{ k^k\cdots k^k } }= \sqrt[p] { d^{n_k} }, \nonumber \end{equation} since $(e_1)^k\cdots (e_d)^k $ attains its maximum at $\left(\frac{1}{d^{\frac{1}{p}}},\ldots, \frac{1}{d^{\frac{1}{p}}} \right)$. Note that in this case we proved that $\mathbf c(\ell_p^d) \geq \sqrt[p]{d}$, rather than $\mathbf c(\ell_p^d) \succ \sqrt[p]{d}.$ We also remark that the strategy followed in this step would not give useful information in the previous case. \paragraph{\textbf{Step III:} $\mathbf c(\ell_p^d) \prec \sqrt{d}$ for $2<p <\infty$.} As before, define the measure $\mu$ on $S_{\ell_p^d}$ by $$\mu(A)=\int_{G(A)} \frac{1}{|DG^{-1}(z)|} dS(z),$$ where $G:S_{\ell_p^d}\rightarrow S_{\ell_2^d}$ is defined as $G(z)=\frac{z}{\Vert z \Vert_2}$. Proceeding as in the previous case, we obtain \begin{equation}\label{ecuacion cota sup del teo} \mathbf c(\ell_p^d) \leq \mathbf c(\ell_2^d) \exp\left\{ -\int_{S_{\ell_2^d}} \log \left(\frac{1}{\Vert z \Vert_p}\right) dS(z) \right\} \frac{1}{\Vert \psi_0 \Vert_2}, \end{equation} where $\psi_0$ is some point in $S_{\ell_q^d}$. Note that so far the fact that $2<p <\infty$ has not been used. Using Lemma \ref{lemaux} and the fact that $q<2$, we conclude \begin{eqnarray} \mathbf c(\ell_p^d) &\prec & \mathbf c(\ell_2^d) d^{\frac 1 p - \frac 1 2} d^{\frac 1 q - \frac 1 2} = \mathbf c(\ell_2^d) \nonumber \\ & \asymp & \sqrt{d} .\nonumber \end{eqnarray} \paragraph{\textbf{Step IV:} $\mathbf c(\ell_\infty^d) \prec \sqrt{d \log d}$.} Combining \eqref{ecuacion cota sup del teo} with Lemma \ref{lemaux} for $p=\infty$ we obtain $$ \mathbf c(\ell_\infty^d) \prec \mathbf c(\ell_2^d) \left(\frac{\log d}{d} \right)^{\frac 1 2} d^{\frac 1 2} = \sqrt{d \log d} .
$$ \paragraph{\textbf{Step V:} $\mathbf c(\ell_p^d) \prec \sqrt[p]{d}$ for $p < 2$.} By \eqref{ecuacion cota sup del teo}, the fact that in this case $\psi_0$ is some point in $S_{\ell_q^d}$ with $q>2$, and Lemma~\ref{lemaux}, we obtain $$ \mathbf c(\ell_p^d) \prec \mathbf c(\ell_2^d) d^{\frac 1 p - \frac 1 2} \cdot 1 \asymp \sqrt[p]{d}. $$ \end{proof} \section{On the $n$th linear polarization constant of $\ell_\infty^d(\mathbb C)$}\label{seccion infinito} In this section we study the $n$th linear polarization constant of the complex finite dimensional spaces $\ell_\infty^d(\mathbb C)$. Although we do not close the gap in Theorem \ref{teo polarizacion}, we obtain a more precise result on the lower bounds. We use a probabilistic approach to prove the existence of li\-near functionals $\varphi_1,\ldots,\varphi_n :\ell_\infty^d\rightarrow \mathbb C$ such that the norm of the product is small in comparison with the product of the norms. The probabilistic techniques we use in this section are an adaptation to our problem of techniques used, for example, by H. Boas in \cite{Bo}. The aim of this section is then to prove the following. \begin{prop}\label{prop infinito} The $n$th linear polarization constant of $\ell_\infty^d(\mathbb C)$ satisfies $$\mathbf c_n(\ell_\infty^d(\mathbb C)) \geq \frac{1}{2}\sqrt{\frac{ d^n}{(24n)^d}}.$$ \end{prop} \begin{rem} Note that in particular, the result above ensures that $$ \mathbf c(\ell_\infty^d(\mathbb C)) \geq \sqrt{d}. \nonumber $$ This improves the bound from Theorem \ref{teo polarizacion}, where we had $\mathbf c(\ell_\infty^d(\mathbb C)) \succ \sqrt{d}.$ \end{rem} We begin by fixing some notation. Let $\{\varepsilon_k^j:\Omega \rightarrow \mathbb R\}_{j,k}$, with $j\in\{1,\ldots,n\}$ and $k\in \{1,\ldots, d\}$, be a family of independent Rademacher functions over a probability space $(\Omega,\Sigma,P)$.
That is, $\{\varepsilon_k^j\}_{j,k}$ are independent random variables such that $P(\varepsilon_k^j=1)=P(\varepsilon_k^j=-1)=\frac 1 2$ for $j=1,\ldots,n$ and $k=1,\ldots, d$. For any $t\in \Omega$ and $j\in\{1,\ldots,n\}$ we define the linear function $\varphi_j(\cdot,t):\ell_\infty^d\rightarrow \mathbb C$ as $\varphi_j(z,t) =\sum_{k=1}^d \varepsilon_k^j(t) z_k$ and $F:\ell_\infty^d\times \Omega \rightarrow \mathbb C$ by $$F(z,t) =\prod_{j=1}^n\varphi_j(z,t)=\sum_{k_1,\ldots,k_n=1}^d \varepsilon_{k_1}^1\cdots \varepsilon_{k_n}^nz_{k_1}\cdots z_{k_n}.$$ We will show the existence of some $t_0 \in \Omega$ such that the norm $\left\Vert \prod_{j=1}^n\varphi_j(\cdot,t_0) \right\Vert =\Vert F(\cdot,t_0)\Vert$ is small. To do this we need some auxiliary lemmas related to the function $F$, the geometry of the $d$-dimensional torus $\mathbb{T}^d=\{z\in \ell_\infty^d(\mathbb C) : |z_k| = 1 \mbox{ for all } k\}$ and the space $\ell_\infty^d(\mathbb C)$. \begin{lem}\label{aux primero} For any natural number $N$, the $d$-dimensional torus $\mathbb{T}^d$ can be covered by $N^d$ balls of $\ell_\infty^d(\mathbb C)$, with centers on $\mathbb{T}^d$ and radius $\frac{\pi}{N}$. \end{lem} \begin{proof} It is enough to consider the balls with centers $(e^{2\pi i\frac{j_1}{N}},\ldots, e^{2\pi i\frac{j_d}{N}})$, with $j_1,\ldots,j_d \in \{1,\ldots,N\}$. \end{proof} \begin{lem}\label{aux segundo} Given $z \in \mathbb{T}^d$ and a positive number $R$, we have $$P(|F(z,t)|>R)\leq \frac{1}{R^2}d^n.$$ \end{lem} \begin{proof} If we write $z=(z_1,\ldots,z_d)$, then the expected value of $\vert F(z, \cdot) \vert^2$ is \begin{eqnarray} \mathbb E(|F(z,\cdot)|^2)&=&\mathbb E\left(\left|\sum_{k_1,\ldots,k_n=1}^d \varepsilon_{k_1}^1\cdots \varepsilon_{k_n}^n z_{k_1}\cdots z_{k_n}\right|^2\right) \nonumber \\ &=&\sum_{k_1,\ldots,k_n=1}^d | z_{k_1}\cdots z_{k_n}|^2 =d^n, \nonumber \ \end{eqnarray} where we used the independence of the family $\{\varepsilon_k^j\}_{j,k}$. The result now follows from Chebyshev's inequality.
\end{proof} \begin{lem}\label{aux tercero} For any pair of norm one vectors $z,w\in \ell_\infty^d(\mathbb C)$ and any $t\in \Omega$, we have $$|F(w,t)-F(z,t)| \leq n \, e\, \|F(\cdot,t)\| \|w-z\|.$$ \end{lem} \begin{proof} If we define $\gamma(s)=ws +z (1-s)$ for $0\leq s\leq 1$, there is $0\leq c\leq 1$ such that \begin{eqnarray} |F(w,t)-F(z,t)|&= & |F(\gamma(1),t)-F(\gamma(0),t)| \nonumber \\ & \leq &|D F(\gamma(c),t)\circ D\gamma (c)| \nonumber \\ & \leq &\frac{n^n}{(n-1)^{n-1}}\Vert F(\cdot,t)\Vert \Vert \gamma(c) \Vert^{n-1} \Vert D\gamma(c) \Vert \label{h inequality} \\ & \leq & n e \Vert F(\cdot,t)\Vert \Vert w-z \Vert \nonumber \ \end{eqnarray} where in \eqref{h inequality} we have used the following inequality, which is a particular case of a result by Harris~\cite[Corollary~1]{H}: if $P:X\rightarrow \mathbb C$ is an $n$-homogeneous polynomial over a complex Banach space $X$, then $$\Vert DP \Vert \leq \frac{n^n}{(n-1)^{n-1}} \Vert P \Vert.$$ \end{proof} \begin{lem}\label{aux cuarto} For any positive number $R$, $$P(\| F(\cdot,t) \| > 2R) < (24n)^d\frac{d^n}{R^2} .$$ \end{lem} \begin{proof} By Lemma \ref{aux primero}, there is a family of points $\{w_1,\ldots,w_{(24n)^d}\}\subseteq \mathbb{T}^d$ such that for any $z\in \mathbb{T}^d$, we have $$\Vert w_i-z\Vert\leq\frac{\pi}{24n} < \frac{1}{2ne} $$ for some $i=1,\ldots, (24n)^d$. For any fixed $t\in \Omega$, by the maximum modulus principle, there is $z_0\in \mathbb{T}^d$ such that $$\|F(\cdot,t)\| =|F(z_0,t)|.$$ Let $i$ be such that $\| w_i - z_0 \| \leq \frac{1}{2ne}$.
By Lemma \ref{aux tercero} $$|F(w_i,t)-F(z_0,t)| \leq \|F(\cdot,t)\| n e \|w_i-z_0\| < \|F(\cdot,t)\| \frac{1}{2}.$$ Therefore, for each $t$ we have $$\frac{\|F(\cdot,t)\| }{2} < |F(w_i,t)|$$ for some $i$, and then we conclude that $$\|F(\cdot,t)\| < 2 \max_i \{|F(w_i,t)|:i=1,\ldots,(24n)^d\}.$$ Since $t\in \Omega$ was arbitrary, using Lemma \ref{aux segundo}, we have \begin{eqnarray} P(\| F(\cdot,t) \| > 2R) &<& P(\max_i \{ |F(w_i,t)|:i=1,\ldots,(24n)^d\} > R) \nonumber \\ &\leq& \sum_{i=1}^{(24n)^d} P(|F(w_i,t)| > R) \nonumber \\ &\leq& (24n)^d\frac{d^n}{R^2} , \nonumber \end{eqnarray} as desired. \end{proof} \noindent Now we are ready to prove the main result of this section. \begin{proof}[Proof of Proposition~\ref{prop infinito}] Take in Lemma \ref{aux cuarto} $$R= \sqrt{(24n)^d d^n}.$$ Then $$ P (\| F(\cdot,t) \| > 2R) < 1. $$ Therefore, there is $t_0 \in \Omega$, such that \begin{eqnarray} \| \prod_{j=1}^n \varphi_j(\cdot, t_0) \| &=& \| F(\cdot,t_0) \| \leq 2R \nonumber \\ &=& 2\sqrt{(24n)^d d^n} = 2\sqrt{\frac{(24n)^d}{ d^n}}d^n \nonumber \\ &=& 2\sqrt{\frac{(24n)^d}{ d^n}}\prod_{j=1}^n \|\varphi_j(\cdot, t_0) \|, \ \end{eqnarray} which ends the proof. \end{proof}
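The expectation computed in the proof of Lemma \ref{aux segundo}, $\mathbb E(|F(z,\cdot)|^2)=d^n$, can be checked exactly for small parameters by enumerating all $2^{nd}$ Rademacher sign patterns. The following Python sketch (a standalone illustration, not part of the argument; the encoding of the signs and the choice of the torus point are ours) does this for $n=2$ and $d=3$:

```python
# Enumerate all Rademacher sign patterns and average |F(z)|^2 at a fixed
# torus point z; by the computation in the lemma, the average equals d^n.
import cmath
import itertools

n, d = 2, 3                                  # n linear forms on l_inf^d(C)
z = [cmath.exp(2j * cmath.pi * k / 7) for k in range(d)]  # a point of T^d

patterns = list(itertools.product([-1, 1], repeat=n * d))
total = 0.0
for signs in patterns:
    F = 1 + 0j
    for j in range(n):                       # F(z) = prod_j sum_k eps_k^j z_k
        F *= sum(signs[j * d + k] * z[k] for k in range(d))
    total += abs(F) ** 2

mean_sq = total / len(patterns)              # exact expectation: d**n
```

Since $|z_k|=1$, all cross terms cancel and the average equals $d^n=9$ exactly, up to floating-point rounding.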
https://arxiv.org/abs/1901.00587
On bounded elementary generation for $SL_n$ over polynomial rings
Let $F[X]$ be the polynomial ring over a finite field $F$. It is shown that, for $n\geq 3$, the special linear group $SL_n(F[X])$ is boundedly generated by the elementary matrices.
\section{Introduction} The special linear group $\mathrm{SL}_n(\mathbb{Z})$ is generated by the elementary matrices, that is, matrices which differ from the identity by at most one non-zero off-diagonal entry. Far more remarkable is the following fact: \begin{thm}[Carter - Keller \cite{CK}]\label{thm: ck} Let $n\geq 3$. Then $\mathrm{SL}_n(\mathbb{Z})$ is boundedly generated by the elementary matrices. \end{thm} This means that, for some positive integer $\nu_n$, every matrix in $\mathrm{SL}_n(\mathbb{Z})$ is a product of at most $\nu_n$ elementary matrices. The Carter - Keller theorem provides, in fact, the explicit bound $\nu_n=\tfrac{1}{2}(3n^2-n)+36$. See \cite{AM} for a variation on the Carter - Keller argument, with a slightly worse bound. In \cite{CK2}, Carter and Keller extend their argument to rings of integers in algebraic number fields. A different approach to bounded elementary generation for $\mathrm{SL}_n$ over rings of integers, based on unpublished work of Carter, Keller, and Paige, can be found in \cite{WM}. The novelty is the use of model-theoretic ideas. Unlike the original Carter - Keller approach, this is a non-explicit argument. One proves the \emph{existence} of a bound on the number of elementary matrices needed to express a matrix in $\mathrm{SL}_n$. Elementary generation of $\mathrm{SL}_n$ also holds for polynomial rings over fields. However, bounded elementary generation may fail. In \cite{K} van der Kallen shows, by means of algebraic K-theory, that $\mathrm{SL}_n(\mathbb{C}[X])$, $n\geq 2$, is not boundedly generated by the elementary matrices. From an arithmetical viewpoint, however, the closest relative of $\mathbb{Z}$ is a polynomial ring over a \emph{finite} field. The purpose of this note is to show the following analogue of Theorem~\ref{thm: ck}, which appears to be new (cf., e.g., \cite[p.523]{S}). \begin{thm}\label{thm: F} Let $\mathbb{F}$ be a finite field, and let $n\geq 3$. 
Then $\mathrm{SL}_n(\mathbb{F}[X])$ is boundedly generated by the elementary matrices. \end{thm} The proof is an adaptation of the Carter - Keller argument. Here is one technical difference. A crucial role in \cite{CK}, and also in \cite{AM}, is played by a `power lemma' \cite[Lemma 1]{CK} whose origins lie in properties of the so-called Mennicke symbols. We use instead a simple `swindle lemma', Lemma~\ref{lem: s} herein. A version of this swindle was used in \cite[\S 2.3]{N}. The proof of Theorem~\ref{thm: F} yields the explicit bound $\nu_n=\tfrac{1}{2}(3n^2-n)+29$. As this bound does not depend on the size of the finite field $\mathbb{F}$, it follows that Theorem~\ref{thm: F} holds, more generally, whenever $\mathbb{F}$ is an algebraic extension of a finite field. One cannot take $n=2$ in Theorem~\ref{thm: F}: $\mathrm{SL}_2(\mathbb{F}[X])$ is not boundedly generated by the elementary matrices. This fact, and the reason behind it, are analogous to what happens for $\mathrm{SL}_2(\mathbb{Z})$. The principal congruence subgroup of $\mathrm{SL}_2(\mathbb{F}[X])$ corresponding to the ideal $(X)$, in other words the kernel of the surjective homomorphism $\mathrm{SL}_2(\mathbb{F}[X])\to \mathrm{SL}_2(\mathbb{F})$ given by the evaluation $X=0$, has a free product structure. \section{Proof of Theorem~\ref{thm: F}}\label{sec: Z} Throughout, an elementary operation will be called, simply, a move. We allow the degenerate move of multiplying by the identity matrix. We write $\sim$ for the equivalence relation of being connected by a finite number of moves. \subsection{Reduction to a framed $\mathrm{SL}_2$ matrix} \label{sec: one} The first step is to reduce a matrix in $\mathrm{SL}_n$, $n\geq 3$, to a matrix of the following form: \begin{align*} \begin{pmatrix} a & b &\\ c & d &\\ & & I_{n-2} \end{pmatrix} \end{align*} This is a standard reduction which works over any principal domain $A$. The general concept underpinning this procedure is Bass's stable range \cite{Ba}. 
For the sake of completeness, let us sketch the argument for $n=3$. Let $(u,v,w)$ be the last row of a matrix in $\mathrm{SL}_3(A)$. Thus, $u$, $v$, and $w$ are relatively prime, and we may assume that either $u$ or $v$ is non-zero. A suitable move takes us to a matrix whose last row is $(u',v',w)$, and such that $u'$ and $v'$ are relatively prime. The key fact here is that, if $\gcd (u,v,w)=1$ and $u$ is non-zero, then $\gcd(u,v+tw)=1$ for some $t\in A$. (An explicit choice for $t$ is the product of all primes dividing $u$ but not $v$. More precisely, we take one prime per associate class. We set $t=1$ if there are no such primes.) Now $w-1$ is a combination of $u'$ and $v'$, so two moves turn $w$ into $1$. Four additional moves clear the last row and the last column. In summary, we have reached a framed $\mathrm{SL}_2$ matrix, as desired, in $7$ moves. More generally, this argument reduces an $\mathrm{SL}_n$ matrix to a framed $\mathrm{SL}_2$ matrix in $\tfrac{1}{2}(3n^2-n)-5$ moves. The remainder of the argument is devoted to showing that $34$ moves are sufficient in order to reduce, in $\mathrm{SL}_3$, a framed $\mathrm{SL}_2$ matrix to the identity. For the purposes of the next step, we assume that $a\neq 0$; otherwise, the reduction is trivial and quick, in only $3$ moves. \subsection{A convenient anti-diagonal} The second step will use the following analogue of Dirichlet's theorem on primes in arithmetic progressions. \begin{thm}[Kornblum - Artin]\label{thm: KA} If $a,b\in \mathbb{F}[X]$ are relatively prime and $a\neq 0$, then there are infinitely many primes congruent to $b$ mod $a$. Furthermore, such a prime can have arbitrary degree, provided the degree is sufficiently large. \end{thm} The first part is due to Kornblum (1919). The second part is a sharpening due to Artin (1921). See \cite[Chapter 4]{R} for a modern treatment. Consider a matrix \begin{align*} \begin{pmatrix} a & b\\ c & d \end{pmatrix}\in \mathrm{SL}_2(\mathbb{F}[X]). 
\end{align*} As $a$ and $b$ are relatively prime, the first part of Theorem~\ref{thm: KA} ensures that there is a prime $b'\in \mathbb{F}[X]$ satisfying $b'\equiv b$ mod $a$. Similarly, there is a prime $c'\in \mathbb{F}[X]$ satisfying $c'\equiv c$ mod $a$. Thus \begin{align*} \begin{pmatrix} a & b\\ c & d \end{pmatrix} \;\sim\; \begin{pmatrix} a & b'\\ c' & d' \end{pmatrix} \end{align*} in $2$ moves. Furthermore, we can assume that $b'$ and $c'$ have relatively prime degrees: once $b'$ has been chosen, we use the second part of Theorem~\ref{thm: KA} to pick $c'$ of suitable degree. \subsection{The main step} Let \begin{align*} \begin{pmatrix} a & b\\ c & d \end{pmatrix}\in \mathrm{SL}_2(\mathbb{F}[X]) \end{align*} be a matrix enjoying the property granted by the previous step: the anti-diagonal entries $b$ and $c$ are prime, with relatively prime degrees. Let $q$ denote the number of elements in $\mathbb{F}$. Then the integers \begin{align*} \delta(b):=\frac{q^{\deg(b)}-1}{q-1}, \qquad \delta(c):=\frac{q^{\deg(c)}-1}{q-1} \end{align*} are relatively prime, as well. Let $x$ and $y$ be positive integers satisfying $x\delta(b)-y\delta(c)=1$. We write \begin{align*} \begin{pmatrix} a & b\\c & d \end{pmatrix}=XY^{-1}, \end{align*} where \begin{align*} X=\begin{pmatrix} a & b\\ c & d \end{pmatrix}^{x\delta(b)},\quad Y=\begin{pmatrix} a & b\\ c & d \end{pmatrix}^{y\delta(c)}. \end{align*} We aim to reduce $X$ and $Y$ independently in $\mathrm{SL}_3$. More precisely, we will show that \begin{align*}\tag{$\dagger$} \begin{pmatrix} Y & \\ & 1 \end{pmatrix} \;\sim\; \begin{pmatrix} D(u) & \\ & -1 \end{pmatrix}, \qquad D(u):=\begin{pmatrix} -u & \\ & u^{-1} \end{pmatrix} \end{align*} in $14$ moves, for some $u\in \mathbb{F}^*$. The same will hold for $Y^{-1}$ in place of $Y$, and $u^{-1}$ in place of $u$, by inverting. It also holds for $X$ in place of $Y$, by interchanging $b$ and $c$, and then transposing, with respect to some other unit $v\in \mathbb{F}^*$. 
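Theorem~\ref{thm: KA} can be observed computationally for $\mathbb{F}=\mathbb{F}_2$. The following Python sketch (our own illustrative helper code, not from the paper; polynomials over $\mathbb{F}_2$ are encoded as bitmasks, bit $k$ being the coefficient of $X^k$) searches the progression $b+at$ for an irreducible polynomial, testing irreducibility by trial division:

```python
# Search the arithmetic progression b + a*t in F_2[X] for a "prime"
# (irreducible) polynomial, as guaranteed by the Kornblum-Artin theorem
# when gcd(a, b) = 1.  Addition in characteristic 2 is XOR.

def deg(f):
    return f.bit_length() - 1

def pmul(f, g):                      # product in F_2[X]
    r = 0
    while g:
        if g & 1:
            r ^= f
        f <<= 1
        g >>= 1
    return r

def pmod(f, g):                      # remainder of f divided by g
    while f and deg(f) >= deg(g):
        f ^= g << (deg(f) - deg(g))
    return f

def irreducible(f):                  # trial division by lower-degree polys
    if deg(f) < 1:
        return False
    return all(pmod(f, g) != 0
               for g in range(2, 1 << (deg(f) // 2 + 1)))

def prime_in_progression(a, b, max_deg=12):
    for t in range(1 << (max_deg - deg(a) + 1)):
        cand = b ^ pmul(a, t)        # cand = b + a*t
        if irreducible(cand):
            return cand
    return None

a, b = 0b111, 0b1                    # a = X^2 + X + 1, b = 1 (coprime)
p = prime_in_progression(a, b)
```

With these inputs the search returns $X^4+X+1$ (bitmask \texttt{0b10011}), which is irreducible and congruent to $1$ mod $a$.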
However, $D(v)\sim D(u)$ in $\mathrm{SL}_2$, in $4$ additional moves. We can then deduce that \begin{align*} \begin{pmatrix} XY^{-1} & \\ & 1 \end{pmatrix}=\begin{pmatrix} X & \\ & 1 \end{pmatrix}\begin{pmatrix} Y^{-1} & \\ & 1 \end{pmatrix} \;\sim\; \begin{pmatrix} D(u) & \\ & -1 \end{pmatrix}\begin{pmatrix} D(u^{-1}) & \\ & -1 \end{pmatrix}=I_3 \end{align*} in $14+4+14=32$ moves. Along the way, we are using the fact that diagonal matrices normalize the elementary matrices. Taking into account the second step, we conclude that $34$ moves are sufficient in order to reduce a framed $\mathrm{SL}_2$ matrix to the identity. Let us turn to proving $(\dagger)$. By the Cayley - Hamilton theorem, there are $e,f\in \mathbb{F}[X]$ such that: \begin{align*} \begin{pmatrix} a & b\\ c & d \end{pmatrix}^{y\delta(c)}=eI_2+f \begin{pmatrix} a & b\\ c & d \end{pmatrix} = \begin{pmatrix} e+fa & fb\\ fc & e+fd \end{pmatrix} \end{align*} Modulo $c$, the above matrices become upper triangular. So $e+fa\equiv a^{y\delta(c)}$ mod $c$. On the other hand, $a^{\delta(c)}$ mod $c$ is in $\mathbb{F}^*$. This follows by viewing the finite field $\mathbb{F}[X]/(c)$ as an extension of $\mathbb{F}$ of degree $\deg(c)$. Thus $e+fa\equiv u\in \mathbb{F}^*$ mod $c$. A similar argument applies to the lower diagonal entry. Keeping in mind that the determinant is $1$, we find that $e+fd\equiv u^{-1}\in \mathbb{F}^*$ mod $c$. At this point, we would like to replace the lower entry, $fc$, by $c$ so as to be able to perform reductions. These considerations motivate the following lemma. Roughly speaking, it provides a way of swindling factors across the diagonal. \begin{lem}\label{lem: s} Let $A$ be a principal domain, and let \begin{align*} \begin{pmatrix} a & b\\ sc & d \end{pmatrix}\in \mathrm{SL}_2(A) \end{align*} where $a\equiv d$ mod $s$. 
Then \begin{align*} \begin{pmatrix} a & b &\\ sc & d &\\ & & 1 \end{pmatrix} \;\sim\; \begin{pmatrix} \pm a & -sb &\\ c & \mp d &\\ & & -1 \end{pmatrix} \end{align*} in $11$ moves. \end{lem} \begin{proof} The degenerate case $s=0$ is easily seen to hold, so let us assume that $s\neq 0$. The hypotheses imply that $a^2\equiv ad\equiv 1$ mod $s$. So there are $s_1, s_2$ and $k_1,k_2$ in $A$ such that \begin{align*} s=s_1s_2, \qquad a=k_1s_1+1=k_2s_2-1. \end{align*} Now \begin{align*} \begin{pmatrix} a & b & 0\\ sc & d & 0\\ 0 & 0 & 1 \end{pmatrix} \;\sim\;\begin{pmatrix} a & b & 0\\ sc & d & 0\\ s_1 & 0 & 1 \end{pmatrix} \;\sim\; \begin{pmatrix} 1 & b & -k_1\\ 0 & d & -s_2c\\ s_1 & 0 & 1 \end{pmatrix} \;\sim\; \begin{pmatrix} 1 & b & -k_1\\ 0 & d & -s_2c\\ 0 & -s_1b & a \end{pmatrix} \end{align*} by a column move, two row moves, and one more row move. We have basically swindled $s_1$ across the diagonal, and we now go for $s_2$. Firstly, \begin{align*} \begin{pmatrix} 1 & b & -k_1\\ 0 & d & -s_2c\\ 0 & -s_1b & a \end{pmatrix} \;\sim\; \begin{pmatrix} 1 & 0 & s_2\\ 0 & d & -s_2c\\ 0 & -s_1b & a \end{pmatrix} \end{align*} by two column moves. Next, \begin{align*} \begin{pmatrix} 1 & 0 & s_2\\ 0 & d & -s_2c\\ 0 & -s_1b & a \end{pmatrix} &\;\sim\; \begin{pmatrix} 1 & 0 & s_2\\ c & d & 0\\ -k_2 & -s_1b & -1 \end{pmatrix}\\ &\;\sim\; \begin{pmatrix} -a & -sb & 0\\ c & d & 0\\ -k_2 & -s_1b & -1 \end{pmatrix} \;\sim\; \begin{pmatrix} -a & -sb & 0\\ c & d & 0\\ 0 & 0 & -1 \end{pmatrix} \end{align*} by two row moves, another row move, and two column moves. Overall, we have performed $11$ moves, as claimed. For the other choice of signs on the diagonal, one could `pivot' around $d$ instead of $a$. Alternatively, start from the above choice of signs, invert both matrices, interchange $a$ and $d$, and switch the signs of $b$ and $c$. 
\end{proof} Applying the above lemma, we obtain \begin{align*} \begin{pmatrix} e+fa & fb &\\ fc & e+fd &\\ & & 1 \end{pmatrix} \;\sim\; \begin{pmatrix} -(e+fa) & -f^2b &\\ c & e+fd &\\ & & -1 \end{pmatrix}\;\sim\; \begin{pmatrix} -u & \dots &\\ c & u^{-1} &\\ & & -1 \end{pmatrix} \end{align*} in $11+2=13$ moves. Taking the determinant reveals that the missing entry of the last matrix is $0$. One additional move, bringing the total to $14$, clears out the entry $c$. This completes the argument for $(\dagger)$, and so for Theorem~\ref{thm: F} as well. \section{Further remarks} \subsection{}\label{sury} The notion of bounded generation is commonly used for the property that a group is a product of finitely many cyclic subgroups. For $\mathrm{SL}_n(\mathbb{Z})$, $n\geq 3$, bounded cyclic generation follows from bounded elementary generation. This is no longer the case over $\mathbb{F}[X]$. In fact, bounded cyclic generation fails for $\mathrm{SL}_n(\mathbb{F}[X])$, $n\geq 3$. The idea that bounded cyclic generation is essentially a characteristic $0$ phenomenon is crystallized by the following result from \cite{A+}: if a linear group in positive characteristic enjoys bounded cyclic generation, then the group is virtually abelian. \subsection{} Lemma~\ref{lem: s} can also be used over $A=\mathbb{Z}$. In this case, it leads to a simplification of the original Carter - Keller argument for $\mathrm{SL}_n(\mathbb{Z})$, and to the better bound $\nu_n=\tfrac{1}{2}(3n^2-n)+25$. The improved bound is irrelevant from an asymptotic perspective, but it becomes interesting in the case $n=3$. The question, which seems to us quite appealing, is this: how many elementary operations are needed to reduce a matrix in $\mathrm{SL}_3(\mathbb{Z})$ to the identity? Carter and Keller have shown that $48$ operations are sufficient. We have reduced this number to $37$. We challenge the reader to reduce this bound even further.
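The $11$-move sequence in the proof of Lemma~\ref{lem: s} can be machine-checked. The following Python sketch (our illustration, not from the paper) replays the moves for a concrete instance over $A=\mathbb{Z}$, with $s=6=2\cdot 3$ and $a=5=2\cdot 2+1=2\cdot 3-1$:

```python
# Replay the 11 elementary moves from the proof of the swindle lemma on
# the matrix [[a, b, 0], [s*c, d, 0], [0, 0, 1]] with a == d (mod s).
def row(M, i, j, t):      # move: row_i += t * row_j
    M[i] = [x + t * y for x, y in zip(M[i], M[j])]

def col(M, i, j, t):      # move: col_i += t * col_j
    for r in M:
        r[i] += t * r[j]

s1, s2 = 2, 3
s = s1 * s2
a, b, c, d = 5, 9, 1, 11              # det [[a, b], [s*c, d]] = 55 - 54 = 1
k1, k2 = (a - 1) // s1, (a + 1) // s2  # a = k1*s1 + 1 = k2*s2 - 1

M = [[a, b, 0], [s * c, d, 0], [0, 0, 1]]
col(M, 0, 2, s1)                      # 1: put s1 in the corner
row(M, 0, 2, -k1)                     # 2-3: clear the first column
row(M, 1, 2, -s2 * c)
row(M, 2, 0, -s1)                     # 4: swindle s1 across the diagonal
col(M, 1, 0, -b)                      # 5-6: clean up the first row
col(M, 2, 0, k1 + s2)
row(M, 1, 0, c)                       # 7-9: swindle s2
row(M, 2, 0, -k2)
row(M, 0, 2, s2)
col(M, 0, 2, -k2)                     # 10-11: clear the last row
col(M, 1, 2, -s1 * b)
```

Each call to \texttt{row} or \texttt{col} is a single elementary move; there are exactly eleven calls, and the final matrix is the one promised by the lemma, with first row $(-a,-sb,0)$ and bottom-right entry $-1$.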
\subsection{} There is an interesting issue of effectiveness in using the Kornblum - Artin theorem. The usual Dirichlet theorem, used in \cite{CK}, is made effective by a result of Linnik and its modern improvements (see, for instance, \cite{HB}). Theorem~\ref{thm: KA} is also effective, since the Riemann hypothesis in the function field context is already known. See \cite{BS} for further instances of Dirichlet-type theorems over polynomial rings. \bigskip \noindent\textbf{Acknowledgements.} I would like to thank Dave Witte Morris for thoughtful comments, and for pointing out Theorem~\ref{thm: KA} and reference \cite{R}. I am also grateful to B. Sury for many valuable remarks--notably, \ref{sury} herein.
https://arxiv.org/abs/2209.15493
Rainbow triangles in families of triangles
We prove that a family $\mathcal{T}$ of distinct triangles on $n$ given vertices that does not have a rainbow triangle (that is, three edges, each taken from a different triangle in $\mathcal{T}$, that form together a triangle) must be of size at most $\frac{n^2}{8}$. We also show that this result is sharp and characterize the extremal case. In addition, we discuss a version of this problem in which the triangles are not necessarily distinct, and show that in this case, the same bound holds asymptotically. After posting the original arXiv version of this paper, we learned that the sharp upper bound of $\frac{n^2}{8}$ was proved much earlier by Győri (2006) and independently by Frankl, Füredi and Simonyi (unpublished). Győri also obtained a stronger version of our result for the case when repetitions are allowed.
\section{Introduction} Let $\mathcal{F}$ be a family of sets. A rainbow set (with respect to $\mathcal{F}$) is a subset $R\subseteq\cup\mathcal{F}$, together with an injection $\sigma:R\to\mathcal{F}$ such that $e\in\sigma(e)$ for all $e\in R$. We view every member of $\mathcal{F}$ as a different color, and every $e\in R$ as colored by $\sigma(e)$, hence the ``rainbow'' terminology. Note that, in general, we use the term ``family'' in the sense of ``multiset'', allowing repeated members and treating them as different colors. If each set in $\mathcal{F}$ satisfies some property $\mathcal{P}$ of interest, how large does $\mathcal{F}$ need to be in order to guarantee the existence of a rainbow set $R$ that also satisfies $\mathcal{P}$? A classic result of this kind is B{\'a}r{\'a}ny's colorful Carath{\'e}odory theorem \cite{barany1982Caratheodory}: Every family of $n+1$ subsets of $\mathbb{R}^n$, each containing a point $a$ in its convex hull, has a rainbow set satisfying the same property. There are several results of this type in extremal graph theory. For example, generalizing a theorem of Drisko \cite{drisko1998transversals}, Aharoni and Berger \cite{aharoniberger2009rainbowmatchings} proved that $2n - 1$ matchings of size $n$ in any bipartite graph have a rainbow matching of size $n$. Another example, and the motivation for this paper, is a theorem by Aharoni et al.~\cite{aharoni2021rainbowoddcycles}: Every family of $n$ odd cycles (viewed as edge sets) on $n$ vertices has a rainbow odd cycle. Here we are interested in the case when both the cycles in the given family and the desired rainbow one are not just any odd-length cycles but triangles. The existence of rainbow triangles has been studied in the literature in a different context: Given a graph and a coloring of its edges, under what conditions must there be a triangle whose edges have distinct colors?
A fundamental structural result is due to Gallai \cite{gallai1967transitiv}: If $G$ is an edge-colored complete graph on at least two vertices without a rainbow triangle, there is a nontrivial partition $\mathscr{P}$ of $V(G)$ such that the edges between different parts are colored with at most two colors, and the edges between each pair of parts $P,Q\in\mathscr{P}$ all have the same color. Erd\H{o}s et al.~\cite{erdos1975anti} observed that every coloring of the edges of the complete graph $K_n$ with at least $n$ colors has a rainbow triangle; this was the starting point of their ``anti-Ramsey theory''. Several more detailed conditions guaranteeing rainbow triangles were found in \cite{gyarfas2004tricolored,gyarfas2010noncomplete,li2012hetero,li2013rainbowc3c4,li2014rainbowtriangles,aharoni2019caccetta} and other studies. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{extremal_family.png} \caption{\label{fig:graphexample1}The family $\mathcal{T}^*_8$.} \end{figure} Let us return to our problem of determining the largest size of a family of triangles on $n$ vertices without a rainbow triangle. A cheap upper bound of $\frac{n^2}{4}$ can be proved by combining Mantel's theorem and the general (and easy) Proposition~3.1 in~\cite{aharoni2021rainbowoddcycles} concerning rainbow sets. A lower bound of $\frac{n^2}{8}$ can be achieved by taking $\frac{n}{4}$ disjoint pairs of vertices, and connecting each pair to each of the remaining $\frac{n}{2}$ vertices with a triangle. We denote this family by $\mathcal{T}^*_n$ (see Figure~\ref{fig:graphexample1}). These observations indicate that the answer to our question about triangles is quadratic in $n$, in contrast to the answer to the similar problem about arbitrary odd cycles being linear in $n$, as mentioned above.
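For small $n$, the size of $\mathcal{T}^*_n$ and the absence of a rainbow triangle can be confirmed by brute force. The following Python sketch (an illustrative script of ours, not part of the proofs) builds $\mathcal{T}^*_8$ and searches over all choices of three distinct triangles together with one edge from each:

```python
# Build T*_8 (2 disjoint pairs, each joined to the 4 remaining vertices)
# and verify: |T*_8| = n^2/8 = 8, and there is no rainbow triangle.
from itertools import combinations, permutations, product

n = 8
pairs = [(0, 1), (2, 3)]                 # n/4 disjoint pairs
others = list(range(n // 2, n))          # the remaining n/2 vertices
family = [frozenset({u, v, w}) for (u, v) in pairs for w in others]

def edges(tri):
    return [frozenset(e) for e in combinations(sorted(tri), 2)]

def has_rainbow(triangles):
    # Look for three *distinct* triangles and one edge taken from each
    # whose union is itself a triangle.
    for t1, t2, t3 in permutations(triangles, 3):
        for e1, e2, e3 in product(edges(t1), edges(t2), edges(t3)):
            chosen = {e1, e2, e3}
            verts = set().union(*chosen)
            if len(chosen) == 3 and len(verts) == 3:
                return True
    return False

size_ok = (len(family) == n * n // 8)
rainbow = has_rainbow(family)
```

Three distinct edges spanning exactly three vertices necessarily form a triangle, which is the condition \texttt{has\_rainbow} tests.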
In Section 2, using a method inspired by a folkloric proof of Mantel's theorem~\cite{aigner1995turan}, we prove that $\frac{n^2}{8}$ is the right answer when the family consists of distinct triangles. Moreover, we show that $\mathcal{T}^*_n$ is the only extremal such family. In Section 3 we use the result of Ruzsa and Szemer{\'e}di \cite{ruzsa1978triple} about the famous $(6,3)$-problem, to show that if triangles are allowed to repeat, the same upper bound holds asymptotically. Namely, when counting triangles with repetitions, we still cannot have more than $\frac{n^2}{8}(1+o(1))$ of them. As mentioned in the abstract, we have recently learned that our results (except for the characterization of the extremal family) had been discovered earlier. \section{Families of distinct triangles} In this section, the family $\mathcal{T}$ of triangles will be a set. Given $n$ vertices, with $n$ divisible by $4$, the family $\mathcal{T}^*_n$ introduced above is a set of $\frac{n^2}{8}$ triangles without a rainbow triangle. The following theorem says that this is the largest possible size of such a family. It turns out to be a rediscovery of Theorem~2 in Gy\H{o}ri~\cite{gyori2006}, which was also independently proved by Frankl, F\"uredi and Simonyi (unpublished). \begin{theorem} Let $V$ be a set of $n$ vertices and $\mathcal{T}$ a set of triangles on $V$ having no rainbow triangle. Then $|\mathcal{T}| \le \frac{n^2}{8}$. \end{theorem} \begin{proof} Let $V$ and $\mathcal{T}$ be as in the statement of the theorem. Consider the graph $G$ on the vertex set $V$ having the union of the edge sets of the triangles in $\mathcal{T}$ as its edge set. Let $A\subseteq V$ be a largest independent set of vertices in $G$. Let $B = V\setminus A$, and denote by $E(B)$ the set of edges in the subgraph induced by $G$ on $B$. Note that, by the independence of $A$, each triangle in $\mathcal{T}$ must have at least one edge in $E(B)$. 
Also note that, by the absence of rainbow triangles, each triangle in $\mathcal{T}$ has at most one edge that it shares with other triangles in $\mathcal{T}$. We define a function $\beta:\mathcal{T}\longrightarrow E(B)$ as follows. For each $t\in \mathcal{T}$, we set $\beta(t)$ to be one of its edges that is in $E(B)$; if $t$ has more than one edge (and hence all its edges) in $E(B)$, and one of them is shared with another triangle, we choose that edge to be $\beta(t)$; otherwise, $\beta(t)$ is chosen arbitrarily. For an edge $e\in E(B)$, we denote $d(e)\vcentcolon= |\beta^{-1}(e)|$. Then, we have \begin{equation} \sum_{b\in B}{\sum_{e\in E(B): b\in e}{d(e)}} = 2|\mathcal{T}|. \end{equation} Indeed, each triangle in $\mathcal{T}$ corresponds to one edge in $E(B)$ under $\beta$, and thus it is counted once in $d(e)$ for a single $e\in E(B)$. Each edge $e=\{u,v\}\in E(B)$ is considered twice in the inner sum - once with respect to $u$ and once with respect to $v$, and so we get $2|\mathcal{T}|$ in total. We will show below that for each $b\in B$, \begin{equation} \sum_{e\in E(B): b\in e}{d(e)} \leq |A|. \end{equation} By summing over all $b\in B$, this implies that \begin{equation} \sum_{b\in B}{\sum_{e\in E(B): b\in e}{d(e)}} \leq |B||A|. \end{equation} From (1) and (3) we get what we need: \begin{equation} 2|\mathcal{T}| \leq |B||A|\leq (\frac{|A|+|B|}{2})^2 = \frac{n^2}{4} \Rightarrow |\mathcal{T}| \leq \frac{n^2}{8}. \end{equation} It remains to show (2). Fix a vertex $b\in B$. It is enough to exhibit an independent set $I^b \subseteq V$ of size $\sum_{e\in E(B): b\in e}{d(e)}$, and from $A$'s maximality, the desired inequality will follow. We construct the independent set $I^b$ as follows. For each edge $e\in E(B)$ such that $b\in e$, and for each triangle $t\in\beta^{-1}(e)$, we pick a vertex of $t$ according to the rules below, and place it in $I^b$: \begin{itemize} \item If $\beta^{-1}(e) = \{t\}$, we pick the vertex of $e = \beta(t)$ that is not $b$. 
\item Else, we pick the vertex of $t$ that is not in $e = \beta(t)$. \end{itemize} \begin{figure} \centering \includegraphics[width=0.25\textwidth]{theorem_2_1.png} \caption{\label{fig:graphexample2}The set $I^b$ (in green) for a given $b$. In each triangle $t$, the edge $\beta(t)$ is marked in red.} \end{figure} See Figure 2 for an illustration. We have to make sure that: \begin{itemize} \item The vertices chosen that way are distinct (and thus indeed $|I^b|=\sum_{e\in E(B): b\in e}{d(e)}$). \\Suppose for the sake of contradiction that two distinct triangles $t_1, t_2$ contribute the same vertex $v$ to $I^b$. Then $t_1, t_2$ share the edge $\{b,v\}$, so we have $V(t_1)=\{b,v,v_1\}$ and $V(t_2)=\{b,v,v_2\}$ with $v_1\neq v_2$. If $v\notin B$ then $\beta(t_1)=\{b,v_1\}$. Because $t_1$ shares $\{b,v\}$ with another triangle, $\{b,v_1\}$ is not shared with another triangle, and so the first case in the definition of $I^b$ applies: $t_1$ contributes $v_1$ to $I^b$, not $v$ as assumed. If $v \in B$ then, by the definition of $\beta$, we have $\beta(t_1)=\beta(t_2)=\{b,v\}$. Thus, the second case in the definition of $I^b$ applies: $t_1$ contributes $v_1$ to $I^b$, again contradicting our assumption. \item $I^b$ is independent. \\Suppose for the sake of contradiction that two vertices $v_1\neq v_2\in I^b$ that were chosen from triangles $t_1,t_2$ respectively form an edge $\{v_1,v_2\}$, taken from some triangle $t\in \mathcal{T}$. If $t\neq t_1, t_2$, a rainbow triangle arises on the vertices $b, v_1, v_2$. So, we may assume w.l.o.g. that $t=t_1$, i.e., $V(t_1)=\{b,v_1,v_2\}$ and $V(t_2)=\{b,v_2,v_3\}$ for some $v_3\neq v_1$. Now, an argument similar to the one in the previous paragraph shows that, whether or not $v_2$ belongs to $B$, the vertex contributed by $t_2$ to $I^b$ is $v_3$, not $v_2$ as assumed. \end{itemize} That finishes the proof. \end{proof} We now characterize the equality case. 
That is, families $\mathcal{T}$ of exactly $\frac{n^2}{8}$ distinct triangles on $n$ vertices without a rainbow triangle. Such a construction must in particular have $4|n$, and the following theorem shows that it must coincide with the family $\mathcal{T}^*_n$. \begin{theorem} Let $4|n\in\mathbb{N}$. Let $\mathcal{T}^*_n$ be the family of triangles constructed by taking $\frac{n}{4}$ disjoint pairs of vertices, and connecting each of them to each of the remaining $\frac{n}{2}$ vertices with a triangle. Then $\mathcal{T}^*_n$ is the unique set of $\frac{n^2}{8}$ triangles on $n$ vertices without a rainbow triangle, up to isomorphism. \end{theorem} \begin{proof} Let $\mathcal{T}$ be a set of $\frac{n^2}{8}$ triangles on $n$ vertices without a rainbow triangle. We will show that $\mathcal{T} \cong \mathcal{T}^*_n$. To this end, we refer to the notations and arguments in the proof of Theorem 2.1, where we showed: \begin{equation*} 2|\mathcal{T}| = \sum_{b\in B} \sum_{e\in E(B): b\in e}{d(e)} \leq \sum_{b\in B} |A| = |B||A|\leq (\frac{|A|+|B|}{2})^2 = \frac{n^2}{4}. \end{equation*} By our assumption, $2|\mathcal{T}| = \frac{n^2}{4}$ and thus all the inequalities are equalities. Our proof proceeds in two steps. \textbf{Step 1.} We show that $\mathcal{T}$ does not have a triangle with all three vertices in $B$. Assume for the sake of contradiction that $t$ is a triangle in $\mathcal{T}$ contained in $B$. We denote by $b\in B$ the vertex of $t$ that is not in $\beta(t)=\{u,v\}$. Thus $V(t)=\{b,u,v\}$. Then, both $\{b,u\}$ and $\{b,v\}$ are not shared with other triangles (otherwise by our choice of $\beta(t)$, we would have picked one of them instead of $\{u,v\}$). We claim that $v \notin I^b$ and $I^b\cup \{v\}$ is independent, where $I^b$ is the independent set of vertices we chose for the vertex $b$ in the proof of Theorem 2.1. 
This will yield the desired contradiction for Step~1, because equality in the inequalities (2) gives in particular $|I^b| = \sum_{e\in E(B): b\in e}d(e) = |A|$, and $A$ is a largest independent set. \begin{itemize} \item First, we show that $v \notin I^b$. \\Note that $t$ itself does not contribute a vertex to $I^b$, because $\beta(t)$ does not contain $b$. Thus, if $v \in I^b$ then it must be contributed by some triangle $t' \ne t$. But then $t$ and $t'$ share the edge $\{b,v\}$, contradicting the above. \item The set $I^b\cup \{v\}$ is independent.\\ $I^b$ itself is independent and so it remains to rule out an edge of the form $\{v,v'\}$ where $v'\in I^b$. Suppose that $\{v,v'\}$ is such an edge. Then $v'$ is contributed to $I^b$ by some triangle $t'\ne t$, and the edge $\{v,v'\}$ belongs to some triangle $t''$. Note that $v' \ne u$ (or else $t$ and $t'$ would share the edge $\{b,u\}$) and hence $t'' \ne t$. Also $t'' \ne t'$, otherwise their common vertex set would be $\{b,v,v'\}$, and this triangle shares the edge $\{b,v\}$ with $t$. Thus, $t$, $t'$ and $t''$ are three distinct triangles which form together a rainbow triangle on $\{b,v,v'\}$, which is forbidden. \end{itemize} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{multigraph_extremal.png} \caption{\label{fig:graphexample3}The colored multigraph $(\mathcal{T}^*_8)^B$.} \end{figure} \textbf{Step 2.} We show that $\mathcal{T}$ is of the form $\mathcal{T}^*_n$. Writing $m\vcentcolon=\frac{n}{2}$, we note that equality in the inequalities above requires that $|A|=|B|=m$. From Step 1, we know that every triangle in $\mathcal{T}$ must have one vertex in $A$ and two vertices in $B$. Now, we look at the colored multigraph $\mathcal{T}^B$ induced by $\mathcal{T}$ on $B$ by taking from each triangle its only edge in $B$, and coloring it with its only vertex in $A$. We say that an edge in $\mathcal{T}^B$ is simple if it has no parallel edges, and otherwise it is non-simple. 
So $\mathcal{T}^B$ has $m$ vertices and its edges are colored with $m$ colors. In addition, it has the following properties: \begin{enumerate} \item $\mathcal{T}^B$ is triangle-free. \item If $e_1,e_2,e_3$ is a path of three edges, then $e_1$ and $e_3$ have distinct colors. \item If $e_1,e_2$ are two edges sharing a vertex and having the same color, then both $e_1$ and $e_2$ are simple. \item $\mathcal{T}^B$ is $m$-regular. \end{enumerate} Properties 1-3 hold due to the fact that $\mathcal{T}$ is rainbow-triangle-free (actually, one can check that 1-3 are also enough to preclude rainbow triangles in $\mathcal{T}$). Property 4 holds because equality in (2) means that for each $b\in B$ we have $\sum_{e\in E(B): b \in e}d(e)=m$, and this counts exactly the number of edges in $\mathcal{T}^B$ incident with $b$. We will show below that a multigraph with these properties must consist of $\frac{m}{2}$ disjoint pairs of vertices, each having $m$ edges of all colors between them (see Figure 3). Such $\mathcal{T}^B$ corresponds to a family $\mathcal{T}\cong \mathcal{T}^*_n$, which concludes the proof. First, if every edge in $\mathcal{T}^B$ is non-simple, then by property 3, every two edges of the same color are disjoint. That is, the edges of each color $a$ form a matching $M_a$ in $B$. Moreover, property 4 implies that every vertex in $B$ is incident with edges of all $m$ colors, so the $M_a$'s are perfect matchings. Finally, due to property 2, all the $M_a$'s must induce the same partition of $B$ into $\frac{m}{2}$ pairs of vertices, which gives us the desired multigraph. Thus, we may assume that there is a simple edge $e=\{v,v'\}$ in $\mathcal{T}^B$. Suppose there are $l$ simple edges incident with $v$ (excluding $e$) and $k$ non-simple ones. Similarly, there are $l'$ and $k'$ simple and non-simple edges respectively incident with $v'$ (see Figure 4). Now, from property 4 applied to both $v$ and $v'$ we have \begin{equation*} k+l+1 = k'+l'+1 = m. 
\end{equation*} Note that $k>0$, because otherwise $v$ has $m$ distinct neighbors, which is impossible as $|B|=m$. Symmetrically, $k'>0$. Therefore, there are at least $l+1$ and $l'+1$ vertices in the two respective sides of $e$, all different due to property 1. Along with $\{v,v'\}$, there are at least $l+l'+4$ vertices in $B$ and thus \begin{equation*} l+l'+4\leq m. \end{equation*} Finally, by property 3, the $k$ non-simple edges incident with $v$ have pairwise distinct colors, and likewise the $k'$ non-simple edges incident with $v'$; by property 2 (applied to paths through $e$), the colors appearing on the two sides are disjoint. This gives \begin{equation*} k+k'\leq m. \end{equation*} By summing the last two inequalities, we obtain \begin{equation*} (k+k')+(l+l'+4)\leq 2m. \end{equation*} This can be rearranged as \begin{equation*} (k+l+1)+(k'+l'+1)\leq 2m-2, \end{equation*} which is a contradiction because the left-hand side equals $2m$. \end{proof} \begin{figure} \centering \includegraphics[width=0.35\textwidth]{proof_2_2_notations.png} \caption{\label{fig:graphexample4}The notations in the proof of Step 2.} \end{figure} \section{Families of not necessarily distinct triangles} We now treat another version of this problem, where the triangles are not necessarily distinct. Suppose $\mathcal{T}$ is a rainbow-triangle-free family of triangles on $n$ vertices, where duplicates are allowed. We want to bound the size of $\mathcal{T}$ as a multiset. Of course, each triangle must appear at most twice in $\mathcal{T}$, as three copies of the same triangle create a rainbow triangle. Therefore, the triangles in $\mathcal{T}$ can be of two types - those that appear twice in $\mathcal{T}$, and those that appear only once. Let $\mathcal{T}_1$ be the set of all triangles in $\mathcal{T}$ (one copy of each), and let $\mathcal{T}_2 \subseteq \mathcal{T}_1$ be the set of those triangles that appear twice. Then $|\mathcal{T}|=|\mathcal{T}_1|+|\mathcal{T}_2|$ and we can bound each summand separately.
First, from Theorem 2.1 it follows that $|\mathcal{T}_1|\leq \frac{n^2}{8}$. To bound $|\mathcal{T}_2|$, we observe that the triangles in $\mathcal{T}_2$ are edge-disjoint, due to the absence of rainbow triangles. For the same reason, the graph $G_2$ whose edge set is the disjoint union of the edge sets of the triangles in $\mathcal{T}_2$ has no triangles other than those in $\mathcal{T}_2$. Thus, in $G_2$ every edge belongs to a unique triangle, and we can apply the following version of a theorem of Ruzsa and Szemerédi \cite{ruzsa1978triple}: \begin{theorem*}[Ruzsa and Szemerédi] A graph $G$ on $n$ vertices in which every edge belongs to a unique triangle, has $o(n^2)$ edges. \end{theorem*} We note that Ruzsa and Szemer{\'e}di treated the equivalent $(6,3)$-problem, which asks for the maximal number of triples of points that one can select from $n$ given points, in such a way that no six points contain three of the selected triples. A stronger form of the bound, namely $\frac{n^2}{e^{\Omega(\log^*n)}}$, was proved by Fox~\cite{fox2011removallemma}. We observe that the graph $G_2$ has $3|\mathcal{T}_2|$ edges, so we get $|\mathcal{T}_2|=o(n^2)$. Together with the bound on $|\mathcal{T}_1|$ we obtain $|\mathcal{T}_1|+|\mathcal{T}_2|\leq \frac{n^2}{8}(1+o(1))$. Thus we conclude the following: \begin{theorem} Let $V$ be a set of $n$ vertices and $\mathcal{T}$ a family of (not necessarily distinct) triangles on $V$ having no rainbow triangle. Then $|\mathcal{T}|\leq\frac{n^2}{8}(1+o(1))$. \end{theorem} We remark that for small values of $n$, there do exist constructions of families of triangles with repetitions having no rainbow triangle, which beat the $\frac{n^2}{8}$ bound. An example for $n=9$ is shown in Figure~5. We conjecture, however, that for sufficiently large $n$, the $\frac{n^2}{8}$ bound holds in its exact form even if repetitions are allowed. As it turns out, Theorem~1 in Gy\H{o}ri~\cite{gyori2006} established this, in a more general form, for $n \ge 100$. 
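The two facts used above, that three copies of a triangle form a rainbow triangle and that a triangle appearing twice cannot share an edge with any other member of the family (so in particular the triangles in $\mathcal{T}_2$ are edge-disjoint), can be checked mechanically with a small brute-force search (a sketch; all names are ours):

```python
from itertools import combinations, product

def edges(t):
    return [frozenset(e) for e in combinations(sorted(t), 2)]

def has_rainbow_triangle(family):
    """family is a list of triangles, repetitions allowed; a rainbow triangle
    uses three distinct edges coming from three different list positions."""
    for t1, t2, t3 in combinations(family, 3):
        for e1, e2, e3 in product(edges(t1), edges(t2), edges(t3)):
            es = {e1, e2, e3}
            if len(es) == 3 and len(frozenset().union(*es)) == 3:
                return True
    return False

t = frozenset({0, 1, 2})
print(has_rainbow_triangle([t, t, t]))     # three copies: rainbow
s = frozenset({0, 1, 3})                   # shares the edge {0,1} with t
print(has_rainbow_triangle([t, t, s]))     # doubled t plus edge-sharing s: rainbow
u = frozenset({3, 4, 5})                   # edge-disjoint from t
print(has_rainbow_triangle([t, t, u, u]))  # edge-disjoint doubled triangles: none
```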
\begin{figure} \centering \includegraphics[width=0.18\textwidth]{small_case.png} \caption{\label{fig:graphexample5}A family of $12$ triangles on $9$ vertices, built of two copies of each of the $6$ triangles in the drawing, without a rainbow triangle.} \end{figure} \subsection*{Acknowledgment} We thank Gonen Gazit for his help in creating computer-generated explicit solutions for small values of $n$, which provided us with hints for the general case. We are grateful to Zolt\'an F\"uredi for drawing our attention to~\cite{gyori2006}. \printbibliography \end{document}
arXiv:2209.15493, Combinatorics (math.CO): Rainbow triangles in families of triangles.
Abstract: We prove that a family $\mathcal{T}$ of distinct triangles on $n$ given vertices that does not have a rainbow triangle (that is, three edges, each taken from a different triangle in $\mathcal{T}$, that form together a triangle) must be of size at most $\frac{n^2}{8}$. We also show that this result is sharp and characterize the extremal case. In addition, we discuss a version of this problem in which the triangles are not necessarily distinct, and show that in this case, the same bound holds asymptotically. After posting the original arXiv version of this paper, we learned that the sharp upper bound of $\frac{n^2}{8}$ was proved much earlier by Győri (2006) and independently by Frankl, Füredi and Simonyi (unpublished). Győri also obtained a stronger version of our result for the case when repetitions are allowed.
https://arxiv.org/abs/1402.5129
On a Cohen-Lenstra Heuristic for Jacobians of Random Graphs
In this paper, we make specific conjectures about the distribution of Jacobians of random graphs with their canonical duality pairings. Our conjectures are based on a Cohen-Lenstra type heuristic saying that a finite abelian group with duality pairing appears with frequency inversely proportional to the size of the group times the size of the group of automorphisms that preserve the pairing. We conjecture that the Jacobian of a random graph is cyclic with probability a little over .7935. We determine the values of several other statistics on Jacobians of random graphs that would follow from our conjectures. In support of the conjectures, we prove that random symmetric matrices over the p-adic integers, distributed according to Haar measure, have cokernels distributed according to the above heuristic. We also give experimental evidence in support of our conjectures.
\section{Introduction} Jacobians of graphs are often cyclic. A similar phenomenon has been observed in class groups of imaginary quadratic fields, where it is conjecturally explained by the classical Cohen-Lenstra heuristic, in which a group $\Gamma$ appears with frequency proportional to $1/ \# {\operatorname{Aut \;}} \Gamma $. Jacobians of Erd\H{o}s--R\'{e}nyi random graphs also seem to exhibit some of the deeper properties predicted by this heuristic. For instance, we have observed empirically that the average size of the Jacobian of a random graph modulo $p$ tends to 2, for all primes $p$, matching \cite{CohenLenstra84}. However, we have also observed that the odd part of the Jacobian of a random graph is cyclic with probability close to $.946$, which does not match the classical Cohen-Lenstra heuristic prediction that the odd part of a random abelian group should be cyclic with probability a little over $.9775$. This paper shows how these and other observed phenomena are explained by a natural variation on the Cohen-Lenstra heuristic, proposed in \cite{CLP13}, based on the fact that the Jacobian of a graph carries a canonical duality pairing. \begin{heuristic}[\!\!\cite{CLP13}]\label{H:Main} A group $\Gamma$ with pairing $\delta$ occurs as a Jacobian of a random graph with frequency proportional to $$ \frac{1}{ \# \Gamma \cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)} \; , $$ where ${\operatorname{Aut \;}}(\Gamma,\delta)$ denotes the group of automorphisms of $\Gamma$ that respect the pairing $\delta$. \end{heuristic} \noindent In the present paper, we use this heuristic to make precise conjectures and compute predicted averages based on these conjectures for many specific statistics.
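Under Heuristic~\ref{H:Main}, the probability that the Sylow $p$-subgroup of the Jacobian is cyclic works out to $\prod_{i=1}^{\infty}(1-p^{-1-2i})$ (Proposition~\ref{P:cyclic} below), and multiplying over all primes $p$ (respectively, over the odd primes) recovers the constants $.7935$ and $.946$ quoted above. A quick numerical sketch, truncating the infinite products (all names are ours):

```python
def cyclic_factor(p, terms=60):
    """prod_{i>=1} (1 - p^{-(1+2i)}): the heuristic probability that the
    Sylow p-subgroup of the Jacobian is cyclic."""
    out = 1.0
    for i in range(1, terms + 1):
        out *= 1.0 - p ** (-(1.0 + 2 * i))
    return out

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for k in range(2, int(n ** 0.5) + 1):
        if sieve[k]:
            sieve[k * k :: k] = [False] * len(sieve[k * k :: k])
    return [k for k, flag in enumerate(sieve) if flag]

all_cyclic = 1.0
for p in primes_up_to(10 ** 4):
    all_cyclic *= cyclic_factor(p)
odd_cyclic = all_cyclic / cyclic_factor(2)  # drop the p = 2 factor
print(round(all_cyclic, 4), round(odd_cyclic, 4))  # approximately 0.7935 and 0.946
```

The tail of primes above $10^4$ contributes less than $10^{-8}$ to the product, so the truncation is harmless at this precision.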
The Jacobian of a graph is the torsion part of the cokernel of its Laplacian matrix, which is a symmetric matrix, and we also prove results about random symmetric matrices distributed according to Haar measure in ${\mathbb Z}_p$, showing their cokernels are distributed according to Heuristic~\ref{H:Main}. We also present empirical data to support the conjectures, in Section~\ref{S:Data}. \subsection{The pairing} Recall that a duality pairing on a finite abelian group $\Gamma$ is a symmetric bilinear map $\delta:\ \Gamma \times \Gamma \rightarrow {\mathbb Q}/{\mathbb Z}$ such that the induced map $g\mapsto \langle g,\,\cdot\,\rangle$ is an isomorphism from $\Gamma$ to ${\operatorname{Hom}}(\Gamma,{\mathbb Q}/{\mathbb Z})$. The cokernel of a nonsingular symmetric integer matrix $A$ carries a canonical duality pairing, induced by \[ \langle x , y \rangle = y^t A^{-1} x. \] More generally, the torsion part of the cokernel of any symmetric integer matrix carries a canonical duality pairing. The Jacobian of a graph occurs naturally in this way, as the torsion subgroup of the cokernel of the combinatorial Laplacian. See \cite{Shokrieh10} for a detailed discussion of the duality pairing on graph Jacobians and its relation to the Grothendieck pairing, or monodromy pairing, on component groups of N\'eron models. \subsection{Conjectures} Let $G(n,q)$ be the Erd\H{o}s--R\'{e}nyi random graph on $n$ vertices, where each edge is included independently with probability $q$, for some fixed probability $0 < q < 1$. In other words, $G(n,q)$ is the probability space on graphs with $n$ vertices in which a graph $G$ with $e$ edges appears with probability $q^{e} \cdot (1-q)^{{n \choose 2} - e}$. Here we study the associated probability space on isomorphism classes of finite abelian groups with duality pairing, \[ \Gamma(n,q) = {\operatorname{Jac}}(G(n,q)), \] in which the measure of a subset is the probability that the Jacobian of a random graph in $G(n,q)$ lies in that subset.
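The definitions above are directly computable: the Smith normal form of the Laplacian exhibits the cokernel as a direct sum of cyclic groups, and the invariant factors greater than $1$ give the Jacobian. A minimal, unoptimized sketch (all names are ours), checked against the standard facts that ${\operatorname{Jac}}(K_4)\cong({\mathbb Z}/4{\mathbb Z})^2$ and that the Jacobian of a cycle of length $5$ is ${\mathbb Z}/5{\mathbb Z}$:

```python
def smith_diagonal(mat):
    """Diagonal of the Smith normal form of an integer matrix, computed by
    elementary row/column operations over Z (simple, unoptimized)."""
    A = [list(row) for row in mat]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        # pick a nonzero entry of minimal absolute value in A[t:, t:]
        piv = min(((i, j) for i in range(t, m) for j in range(t, n) if A[i][j]),
                  key=lambda ij: abs(A[ij[0]][ij[1]]), default=None)
        if piv is None:
            break  # the remaining submatrix is zero
        i, j = piv
        A[t], A[i] = A[i], A[t]
        for row in A:
            row[t], row[j] = row[j], row[t]
        if A[t][t] < 0:
            A[t] = [-x for x in A[t]]
        dirty = False
        for i in range(t + 1, m):          # clear column t
            q = A[i][t] // A[t][t]
            if q:
                A[i] = [a - q * b for a, b in zip(A[i], A[t])]
            dirty |= A[i][t] != 0
        for j in range(t + 1, n):          # clear row t
            q = A[t][j] // A[t][t]
            if q:
                for row in A:
                    row[j] -= q * row[t]
            dirty |= A[t][j] != 0
        if dirty:
            continue  # a smaller pivot appeared; redo this step
        # enforce the divisibility d_t | (every remaining entry)
        bad = next((i for i in range(t + 1, m)
                    if any(A[i][j] % A[t][t] for j in range(t + 1, n))), None)
        if bad is not None:
            A[t] = [a + b for a, b in zip(A[t], A[bad])]
            continue
        t += 1
    return [A[k][k] for k in range(min(m, n))]

def laplacian(n, edge_list):
    L = [[0] * n for _ in range(n)]
    for u, v in edge_list:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    return L

def jacobian(n, edge_list):
    """Invariant factors of the torsion part of coker(L)."""
    return [d for d in smith_diagonal(laplacian(n, edge_list)) if d > 1]

K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
C5 = [(i, (i + 1) % 5) for i in range(5)]
print(jacobian(4, K4), jacobian(5, C5))  # [4, 4] and [5]
```

For a connected graph, the full Laplacian has cokernel ${\mathbb Z}\oplus{\operatorname{Jac}}(G)$, which is why exactly one diagonal entry is zero.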
Let ${\mathcal A}(m)$ be the set of all isomorphism classes of pairs $(\Gamma,\delta)$, where $\Gamma$ is an abelian group of order $m$ and $\delta$ is a duality pairing on $\Gamma$. Our first conjecture is the analog of Cohen and Lenstra's Fundamental Assumption 8.1 for their heuristics on class groups of number fields \cite{CohenLenstra84}. \begin{conj}\label{C:basic} Let $F$ be a function on isomorphism classes of finite abelian groups with duality pairings that is either bounded or depends only on the Sylow $p$-subgroups of $\Gamma$ for finitely many $p$. Then \[ \lim_{n \rightarrow \infty} \mathbb{E}(F(\Gamma(n,q))) = \lim_{n\rightarrow \infty} \frac{\sum_{m=1}^n \sum_{(\Gamma,\delta)\in {\mathcal A}(m)} \frac{F(\Gamma,\delta)}{ \# \Gamma \cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}}{\sum_{m=1}^n \sum_{(\Gamma,\delta)\in {\mathcal A}(m)} \frac{1}{\# \Gamma \cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}} \; . \] \end{conj} Since we include all bounded test functions $F$, Conjecture~\ref{C:basic} thus includes a claim in the spirit of weak convergence. However, note that neither side can be expressed as the evaluation of $F$ against some measure $\nu$ on the set of finite abelian groups, since when $F$ is the characteristic function of a group, the left-hand side of Conjecture~\ref{C:basic} is $0$ \cite[Corollary 9.3]{Wood14}, and from the product of Proposition~\ref{P:normalizing} over all $p$, it follows that the right-hand side is $0$, which would contradict countable additivity of $\nu$. Even though Cohen and Lenstra said any non-negative test function $F$ should ``probably'' be included in their analogous conjecture on class groups \cite[8.1]{CohenLenstra84}, it is likely that this is too much to hope for in the case of class groups, and it is definitely too much to hope for in the case of Jacobians of random graphs. For example, \cite[Theorem 5]{GJRWW} shows that $({\mathbb Z}/2{\mathbb Z})^k$ is never a Jacobian of a graph.
So if we take a function $F$ supported on these groups and growing fast enough that the limit on the right-hand side is positive, Conjecture~\ref{C:basic} would fail for that $F$. Any finite abelian group with pairing splits as an orthogonal direct sum of its Sylow $p$-subgroups, and many interesting functions, such as the indicator function of the set of cyclic groups with pairing, depend only on their values on the Sylow $p$-subgroups. One important special case is where the function $F$ depends only on the Sylow $p$-subgroup of $\Gamma$ with its restricted pairing, for a single fixed prime $p$. In this case, Conjecture~\ref{C:basic} implies the following, as in \cite[Proposition 5.6]{CohenLenstra84}: \begin{equation}\label{E:pSylow} \lim_{n \rightarrow \infty} \mathbb{E}(F(\Gamma(n,q))) = \frac{\sum_{m=0}^\infty \sum_{(\Gamma,\delta)\in {\mathcal A}(p^m)} \frac{F(\Gamma,\delta)}{\# \Gamma\cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}}{\sum_{m=0}^\infty \sum_{(\Gamma,\delta)\in {\mathcal A}(p^m)} \frac{1}{\# \Gamma\cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}} . \end{equation} In Proposition~\ref{P:normalizing}, we show that the denominator of the right-hand side above converges to $\prod_{i=1}^\infty (1-p^{1-2i})^{-1}$. Therefore, we can put a measure $\mu$ on \[ {\mathcal A}_p =\bigcup_m {\mathcal A}(p^m) \] so that $$\mu(\Gamma,\delta) =\frac{\prod_{i=1}^\infty (1-p^{1-2i})}{\# \Gamma\cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}.$$ (Note that as explained above, there is no way to have an analogous measure on the set of all finite abelian groups.) Then, if $F$ depends only on the Sylow $p$-subgroup with its restricted pairing, Conjecture~\ref{C:basic} says that \begin{equation}\label{E:int} \lim_{n \rightarrow \infty} \mathbb{E}(F(\Gamma(n,q))) = \int_{(\Gamma,\delta)\in {\mathcal A}_p} F(\Gamma,\delta) d\mu.
\end{equation} As with the classical Cohen-Lenstra heuristic, different functions $F$ give rise to estimates for various statistics on random finite abelian $p$-groups with duality pairings, distributed according to Heuristic~\ref{H:Main}. In Section~\ref{S:averages}, we compute the integral on the right-hand side above for several interesting functions $F$: the indicator function for the set of groups with trivial $p$-part, the indicator function for groups with cyclic $p$-part, the number of surjections onto a fixed group, and the functions $p^{k r_p(\Gamma)}$, where $r_p(\Gamma)$ is the $p$-rank of $\Gamma$, i.e., the rank of the free ${\mathbb Z}/p{\mathbb Z}$-module $\Gamma \otimes {\mathbb Z}/p{\mathbb Z}$. For example, in Theorem~\ref{T:momint}, we show that if $\Gamma'=\prod_{i=1}^r {\mathbb Z}/p^{e_i}{\mathbb Z}$ with $e_1\le e_2 \le \cdots \le e_r$ then $$\int_{(\Gamma,\delta)\in{\mathcal A}_p} \#{\operatorname{Sur}}(\Gamma,\Gamma') d\mu =p^{(r-1)e_1 + (r-2)e_2+\cdots + e_{r-1}}.$$ \begin{rem} While this paper was in preparation, the fifth author proved Conjecture~\ref{C:basic} for many functions $F$ that depend on only finitely many Sylow $p$-subgroups of $\Gamma$ \cite{Wood14}. In particular, our predictions are now confirmed for $F$ the indicator function for the set of groups with a given Sylow $p$-subgroup, the indicator function for groups with cyclic $p$-part, the number of surjections onto a fixed group, and $p^{k r_p(\Gamma)}$. Theorem~\ref{thm:pparts} from this paper is an ingredient in the proof. \end{rem} In Proposition~\ref{P:cyclic}, we show that Conjecture~\ref{C:basic} implies that the asymptotic probability that the Sylow $p$-subgroup of the Jacobian of a random graph is cyclic is $\prod_{i=1}^{\infty} (1-p^{-1-2i})$. Taking the product over all primes $p$, we are led to the following conjecture.
\begin{conj}\label{C:cyc} The probability that the Jacobian of $G(n,q)$ is cyclic tends to \[ \prod_{p} {\prod_{i=1}^\infty (1 - p^{-1-2i})}=\zeta(3)^{-1}\zeta(5)^{-1}\zeta(7)^{-1}\zeta(9)^{-1}\cdots \] as $n$ goes to infinity, where $\zeta(s)$ is the Riemann zeta function. \end{conj} \noindent This product converges to approximately $.7935$. Note that Wagner has made other conjectures on statistics of Jacobians of random graphs \cite[Conjectures 4.2, 4.3, 4.4]{Wagner00}, and his conjecture for the asymptotic probability that the Jacobian is cyclic differs from Conjecture~\ref{C:cyc}. \begin{rem} One could also consider stronger versions of these conjectures, allowing the probability $q$ to vary with $n$. If $q$ is too close to 0, then the graph will have very few edges. In particular, if it has no cycles, then the Jacobian will be trivial. Similarly, if $q$ is too close to $1$, then $G(n,q)$ will be very close to the complete graph, whose Jacobian is $({\mathbb Z}/n{\mathbb Z})^{n-2}$. Since the $p$-rank of a group changes by at most 1 when an edge is added or deleted \cite[Lemma~5.3]{Lorenzini89}, the Jacobian of $G(n,q)$ will rarely be cyclic when $q$ is too close to 1. It is natural to expect that versions of these conjectures will hold for $G(n,q(n))$ provided that $q(n) \log(n)$ and $(1- q(n)) \log (n)$ both go to infinity. \end{rem} \begin{rem} The idea of a variation of the classical Cohen-Lenstra heuristic that involves $\# \Gamma$ and the number of automorphisms preserving a bilinear form has appeared earlier, in other contexts. Cohen and Lenstra already considered a heuristic for class groups of real quadratic fields in which $\Gamma$ appears with frequency proportional to $1/(\# \Gamma\cdot \# {\operatorname{Aut \;}} \Gamma)$ \cite{CohenLenstra84, CohenLenstra84b}, and Delaunay studied variations on this heuristic for groups with alternating pairings \cite{Delaunay01, Delaunay07}. 
He proposed a heuristic for Tate-Shafarevich groups, which are only conjecturally finite but, when finite, carry a canonical non-degenerate alternating bilinear form $\beta$ \cite{Cassels62}. In Delaunay's heuristic for Tate-Shafarevich groups of rank $0$ elliptic curves, a finite abelian group with non-degenerate alternating bilinear form $(\Gamma,\beta)$ appears with frequency proportional to $\# \Gamma/ \# {\operatorname{Aut \;}}(\Gamma,\beta)$ where ${\operatorname{Aut \;}}(\Gamma,\beta)$ is the subgroup of ${\operatorname{Aut \;}}(\Gamma)$ consisting of automorphisms that respect the alternating form. One slight increase in subtlety in our case is that a finite abelian group can carry several non-isomorphic duality pairings \cite{BannaiMunemasa98}, while a group with a non-degenerate alternating bilinear form carries exactly one isomorphism class of such forms. \end{rem} \begin{rem} We do not know any obvious explanation for the exact form of Heuristic~\ref{H:Main}, but some natural analogies are at least suggestive of a relation to the heuristic of Cohen and Lenstra for class groups of real quadratic fields. Delaunay's heuristic for Tate-Shafarevich groups is motivated by an analogy relating the Mordell-Weil group of an elliptic curve $E$ over ${\mathbb Q}$ to the group of units in a number field. The analogy readily extends to graphs, with the Mordell-Weil group of an elliptic curve corresponding to the full cokernel of the combinatorial Laplacian, which is a finitely generated abelian group of rank $1$. The rank $1$ case for elliptic curves corresponds to the case of real quadratic fields, where Cohen and Lenstra predict that a group $\Gamma$ appears with frequency proportional to $1/ ( \# \Gamma \cdot \# {\operatorname{Aut \;}} \Gamma)$. The difference for Jacobians of graphs is that we consider only automorphisms that respect the duality pairing. 
\end{rem} \subsection{Result for Haar random symmetric matrices} In support of the above conjectures, we prove the following, where a random $p$-adic symmetric matrix stands in for the graph Laplacian. For a ring $R$, let ${\operatorname{Sym}}_n(R)$ denote the additive group of symmetric $n\times n$ matrices with coefficients in $R$. \begin{thm} \label{thm:pparts} Let $\Gamma$ be a finite abelian $p$-group of rank $r$ with a duality pairing $\delta$, and let $A$ be a random $n \times n$ symmetric matrix with respect to additive Haar measure on ${\operatorname{Sym}}_n({\mathbb Z}_p)$. Then the probability that the cokernel of $A$ with its given duality pairing is isomorphic to $(\Gamma, \delta)$ is \[ \mu_n(\Gamma,\delta) = \frac{\prod_{j=n-r+1}^n (1-p^{-j}) \prod_{i=1}^{\lceil(n-r)/2\rceil} (1-p^{1-2i})}{\# \Gamma \cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}, \] where ${\operatorname{Aut \;}}(\Gamma,\delta)$ is the set of automorphisms of $\Gamma$ that preserve the pairing $\delta$. In particular, \[ \lim_{n\rightarrow \infty} \mu_n(\Gamma,\delta) = \frac{\prod_{i=1}^\infty (1-p^{1-2i})}{\# \Gamma \cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}. \] \end{thm} \medskip \noindent In other words, the probability that the cokernel of a random symmetric $n \times n$ matrix over ${\mathbb Z}_p$ is isomorphic to $(\Gamma, \delta)$ tends to a limit that is inversely proportional to $\# \Gamma \cdot \# {\operatorname{Aut \;}}(\Gamma, \delta)$, with constant of proportionality $\prod_{i = 1}^\infty (1-p^{1-2i})$. \begin{rem} Note that a random matrix is nonsingular with probability 1, but the Laplacian of a random graph is always singular because its row sums and column sums are zero. However, we also prove that the same distribution holds for the torsion part of the cokernel of a random matrix with all row sums and column sums equal to zero, with respect to Haar measure on this subgroup of ${\operatorname{Sym}}_n({\mathbb Z}_p)$. See Theorem~\ref{thm:zerosum}. 
\end{rem} \begin{rem} Theorem \ref{thm:pparts} gives the probability that a random matrix in ${\operatorname{Sym}}_n({\mathbb Z}_p)$ has cokernel with its given duality pairing isomorphic to a particular pair $(\Gamma,\delta)$. It is not obvious from the form of this result that the sum taken over all possible pairs of a $p$-group and a duality pairing on it is equal to $1$. In \cite{Fulman14}, Fulman shows that this defines a probability measure by relating it to the theory of Hall-Littlewood polynomials. He further shows how this measure occurs as one specialization of a two-parameter family of probability measures. A $p$-group naturally defines a partition $\lambda$, so this result gives a probability measure on the set of all partitions. Fulman uses two more of these specializations to compute the probability that a partition chosen from this distribution has a given size and the probability that a randomly chosen partition has a specified number of parts, which is also proven algebraically in \cite{Wood14}. \end{rem} \section{Distribution of Cokernels of Haar Random Symmetric Matrices}\label{sec:cokernels} In this section, we determine the distribution of cokernels of random symmetric matrices over ${\mathbb Z}_p$, chosen according to Haar measure, and prove Theorem~\ref{thm:pparts}. In particular, we see that these cokernels are distributed according to Heuristic~\ref{H:Main} as the size of the matrix goes to infinity. This is an analog of a result of Friedman and Washington \cite{FriedmanWashington89}, that cokernels of Haar random square matrices over ${\mathbb Z}_p$ are distributed according to the Cohen-Lenstra heuristics as their size goes to infinity, as well as an analog of a result of Bhargava, Kane, Lenstra, Poonen, and Rains \cite{BKLPR13}, that cokernels of skew-symmetric random matrices over ${\mathbb Z}_p$ are distributed according to Delaunay's heuristics as their size goes to infinity.
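As a concrete sanity check on Theorem~\ref{thm:pparts} (not part of the argument), the smallest case $n = 1$, $p = 3$, $\Gamma = {\mathbb Z}/3{\mathbb Z}$ can be verified exhaustively: the matrix is a single entry $a$, the cokernel is ${\mathbb Z}_3/(a)$, and the pairing class is the unit part of $a$ modulo squares. The following Python sketch (our own; variable names are ours) works modulo $3^3$, which determines everything needed for cokernels of order dividing $3$:

```python
from fractions import Fraction

# n = 1, p = 3: A = (a), coker(A) = Z_3/(a), pairing delta(1,1) = 1/a mod Z_3.
# Working modulo p^3 suffices to detect valuation-1 cokernels and their
# pairing class, namely the unit part of a modulo squares (here, mod 3).
p, k = 3, 3
counts = {}
for a in range(1, p ** k):        # a = 0 is a measure-zero boundary case
    u, v = a, 0
    while u % p == 0:
        u //= p
        v += 1
    if v == 1:                    # tally only the cokernel Z/3
        cls = u % p               # pairing class: 1 or 2 modulo squares
        counts[cls] = counts.get(cls, 0) + 1

# The theorem's prediction for n = 1, r = 1:
#   (1 - p^{-1}) / (#Gamma * #Aut(Gamma, delta)),  #Gamma = 3, #Aut = 2.
predicted = (1 - Fraction(1, p)) / (3 * 2)
observed = {cls: Fraction(c, p ** k) for cls, c in counts.items()}
```

Each of the two pairing classes on ${\mathbb Z}/3{\mathbb Z}$ is observed with probability $1/9$, matching $(1 - 3^{-1})/(3 \cdot 2)$.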
Our strategy and proofs are closely analogous to those in \cite[Section~3]{BKLPR13}, to which we refer the reader for a beautiful treatment of similar results in which the duality pairing is replaced by a nondegenerate alternating form. \begin{rem} Theorem~\ref{thm:pparts} is used by the fifth author in \cite{Wood14}, in combination with a universality result that says cokernels of much more general random symmetric matrices, including Laplacians of random graphs, are distributed in the same way as cokernels of Haar random symmetric matrices. \end{rem} Let $A$ be a nonsingular symmetric $n \times n$ matrix with entries in ${\mathbb Z}_p$. Then $A$ induces a natural symmetric bilinear pairing \[ {\langle\ ,\ \rangle}_A: {\mathbb Z}_p^n \times {\mathbb Z}_p^n \rightarrow {\mathbb Q}_p/{\mathbb Z}_p, \] given by \[ \langle x, y \rangle_A = y^t A^{-1} x. \] Let $\Gamma$ be the cokernel of $A$. If $x$ or $y$ is in the image of $A$, then $\langle x, y \rangle_A = 0$, so there is an induced symmetric pairing $\delta_A : \Gamma \times \Gamma \rightarrow {\mathbb Q}_p / {\mathbb Z}_p$. The image of $\delta_A$ is a subgroup of $\frac{1}{|\Gamma|} {\mathbb Z}_p/{\mathbb Z}_p$, which is naturally identified with $\frac{1}{|\Gamma|} {\mathbb Z}/{\mathbb Z} \subset {\mathbb Q}/{\mathbb Z}$. The resulting map from $\Gamma$ to ${\operatorname{Hom}}(\Gamma, {\mathbb Q}/{\mathbb Z})$ is injective, and hence an isomorphism, since both groups are finite of the same order. In particular, $(\Gamma, \delta_A)$ is a finite abelian $p$-group with duality pairing. The goal of this section is to prove Theorem~\ref{thm:pparts}, which gives the distribution on the groups with duality pairing appearing as the cokernel of $A$, when $A$ is chosen randomly with respect to additive Haar measure on ${\operatorname{Sym}}_n({\mathbb Z}_p)$.
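For a concrete instance of this construction (our own illustrative example, not drawn from the argument), take $p = 3$ and $A = \operatorname{diag}(3, 9)$, so that $\Gamma \cong {\mathbb Z}/3 \times {\mathbb Z}/9$. A short Python sketch with exact rational arithmetic:

```python
import math
from fractions import Fraction

# Illustrative example: p = 3, A = diag(3, 9), cokernel Z/3 x Z/9.
# The pairing is <x, y>_A = y^t A^{-1} x, read modulo Z.
A = [[3, 0],
     [0, 9]]

def pairing(A, x, y):
    """Evaluate y^t A^{-1} x for a 2x2 integer matrix A, reduced mod Z."""
    det = Fraction(A[0][0] * A[1][1] - A[0][1] * A[1][0])
    Ainv = [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]
    v = [Ainv[0][0] * x[0] + Ainv[0][1] * x[1],
         Ainv[1][0] * x[0] + Ainv[1][1] * x[1]]
    val = y[0] * v[0] + y[1] * v[1]
    return val - math.floor(val)  # representative in [0, 1) of the class in Q/Z

# On the generators e_1, e_2 of the cokernel:
#   delta(e_1, e_1) = 1/3,  delta(e_2, e_2) = 1/9,  delta(e_1, e_2) = 0,
# and any column of A pairs to 0, so the pairing descends to the cokernel.
```

The image of the pairing lies in $\frac{1}{27}{\mathbb Z}/{\mathbb Z}$, with $27 = |\Gamma|$, as the text describes.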
More concretely, a random element $x \in {\mathbb Z}_p$ chosen with respect to Haar measure is given by independently choosing a random element of ${\mathbb Z}/p{\mathbb Z}$ for each digit of the $p$-adic expansion of $x$. To choose a random $A \in {\operatorname{Sym}}_n({\mathbb Z}_p)$ with respect to this measure, we simply choose a random element for each entry $a_{i,j}$ of $A$ with $i \le j$. The structure of the argument is similar to the proof of \cite[Theorem~3.9]{BKLPR13}. One key ingredient in the argument is the following characterization of pairs of matrices that determine the same pairings ${\langle\ ,\ \rangle}: {\mathbb Z}_p^n \times {\mathbb Z}_p^n \rightarrow {\mathbb Q}_p/{\mathbb Z}_p$. For a matrix $M$ with entries in ${\mathbb Z}_p$, we write $\overline M$ for the matrix with entries in ${\mathbb Z}/p{\mathbb Z}$ given by reducing the entries of $M$ modulo $p$. \begin{lemma}[Lemma~3.2 of \cite{BKLPR13}] \label{BKLemma} Let $A$ and $M$ be nonsingular $n \times n$ matrices with entries in ${\mathbb Z}_p$. Then the pairings ${\langle\ ,\ \rangle}_A$ and ${\langle\ ,\ \rangle}_M$ from ${\mathbb Z}_p^n \times {\mathbb Z}_p^n$ to ${\mathbb Q}_p/{\mathbb Z}_p$ are the same if and only if \begin{enumerate} \item the matrix $A$ is equal to $M + MRM$, for some $R \in M_{n\times n}({\mathbb Z}_p)$, and \item the rank of $\overline A$ is equal to the rank of $\overline M$. \end{enumerate} \end{lemma} We also use the following formula for counting pairings. For an arbitrary symmetric bilinear pairing ${[ \ , \ ]}: {\mathbb Z}_p^n \times {\mathbb Z}_p^n \rightarrow {\mathbb Q}_p/{\mathbb Z}_p$, we write ${\operatorname{coker}} {[ \ , \ ]}$ for the finite abelian $p$-group \[ {\operatorname{coker}} {[ \ , \ ]} = {\mathbb Z}_p^n / \{x \in {\mathbb Z}_p^n \ | \ [x,y] = 0 \mbox{ for all } y \in {\mathbb Z}_p^n \}. \] Note that ${\operatorname{coker}} {[ \ , \ ]}$ carries a canonical duality pairing, induced by ${[ \ , \ ]}$. 
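The digit-by-digit description of Haar measure above translates directly into code: reducing Haar measure modulo $p^k$ gives the uniform measure on symmetric matrices over ${\mathbb Z}/p^k{\mathbb Z}$. A minimal sampling sketch (our own, for illustration):

```python
import random

def random_symmetric_mod(n, p, k, rng=random):
    """Sample a Haar-random element of Sym_n(Z_p), truncated mod p^k.

    Each entry a_{i,j} with i <= j is an independent uniform element
    of {0, 1, ..., p^k - 1}; the entries below the diagonal are then
    forced by symmetry.
    """
    q = p ** k
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            A[i][j] = A[j][i] = rng.randrange(q)
    return A
```

Sampling more digits (larger $k$) refines the same matrix, mirroring the independent choice of each digit of the $p$-adic expansion.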
\begin{lemma}\label{lem:numpairings} The number of symmetric bilinear pairings ${[ \ , \ ]}: {\mathbb Z}_p^n \times {\mathbb Z}_p^n \rightarrow {\mathbb Q}_p / {\mathbb Z}_p$ such that ${\operatorname{coker}} {[ \ , \ ]}$ with its canonical duality pairing is isomorphic to $(\Gamma, \delta)$ is \[ \frac{\# \Gamma^n \cdot \prod_{j = n-r+1}^n (1 - p^{-j})} {\# {\operatorname{Aut \;}}(\Gamma, \delta)}, \] where $r$ is the rank of $\Gamma$. \end{lemma} \begin{proof} A symmetric bilinear pairing ${[ \ , \ ]}: {\mathbb Z}_p^n \times {\mathbb Z}_p^n \rightarrow {\mathbb Q}_p/{\mathbb Z}_p$ with a choice of an isomorphism ${\operatorname{coker}} {[ \ , \ ]} \xrightarrow{\sim} \Gamma$ respecting the pairing is equivalent to a surjection ${\mathbb Z}_p^n \rightarrow \Gamma$. In each instance, there are $\# {\operatorname{Aut \;}}(\Gamma,\delta)$ choices of isomorphisms from ${\operatorname{coker}} {[ \ , \ ]}$ to $\Gamma$ that respect the pairing, so the number of distinct pairings with cokernel isomorphic to $(\Gamma, \delta)$ is \[ \#{\operatorname{Sur}}({\mathbb Z}_p^n,\Gamma)/\# {\operatorname{Aut \;}}(\Gamma,\delta). \] Note that a homomorphism ${\mathbb Z}_p^n{\rightarrow} \Gamma$ is surjective if and only if it is a surjection modulo $p$, by Nakayama's Lemma. Therefore, \[ \#{\operatorname{Sur}}({\mathbb Z}_p^n,\Gamma)=\frac{\#\Gamma^n \cdot \prod_{i=0}^{r-1}(p^n-p^i)}{p^{nr}}, \] and the lemma follows. \end{proof} The final key ingredient in our argument is the classification of symmetric bilinear forms over ${\mathbb Q}_p$ up to ${\operatorname{GL}}_n({\mathbb Z}_p)$ equivalence. When $p$ is odd, any symmetric bilinear form over ${\mathbb Q}_p$ is diagonalizable by a $p$-adic integral change of coordinates. More precisely, if $M$ is a symmetric matrix with entries in ${\mathbb Q}_p$, then we can choose $H \in {\operatorname{GL}}_n({\mathbb Z}_p)$ such that $HMH^t$ is diagonal \cite[Section~15.4.4]{ConwaySloane99}. 
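The surjection count in the proof of Lemma~\ref{lem:numpairings} can be checked by brute force for small groups. Since every homomorphism ${\mathbb Z}_p^n {\rightarrow} \Gamma$ factors through $({\mathbb Z}/p^e{\mathbb Z})^n$ for $e$ at least the exponent of $\Gamma$, it suffices to count generating $n$-tuples of elements of $\Gamma$. A Python sketch (ours, with hypothetical helper names):

```python
from fractions import Fraction
from itertools import product

def count_surjections(n, invariants):
    """#Sur(Z_p^n, Gamma) for Gamma = prod Z/m_i, by counting the
    n-tuples of elements of Gamma that generate it."""
    G = list(product(*[range(m) for m in invariants]))
    zero = tuple(0 for _ in invariants)

    def generates(gens):
        span = {zero}
        grew = True
        while grew:  # close up under repeatedly adding the chosen generators
            grew = False
            for g in gens:
                for e in list(span):
                    s = tuple((a + b) % m for a, b, m in zip(e, g, invariants))
                    if s not in span:
                        span.add(s)
                        grew = True
        return len(span) == len(G)

    return sum(generates(gens) for gens in product(G, repeat=n))

def sur_formula(n, p, invariants):
    """#Gamma^n * prod_{i=0}^{r-1} (p^n - p^i) / p^{nr}, as in the lemma."""
    r, size = len(invariants), 1
    for m in invariants:
        size *= m
    out = Fraction(size) ** n
    for i in range(r):
        out *= Fraction(p ** n - p ** i, p ** n)
    return out
```

For example, for $\Gamma = {\mathbb Z}/4 \times {\mathbb Z}/2$ and $n = 3$ both counts give $336$, reflecting the fact (via Nakayama's Lemma) that surjectivity can be tested modulo $p$.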
The classification of symmetric bilinear forms over ${\mathbb Q}_2$ is more complicated, so we treat this case separately at the end of the section. \begin{proof}[Proof of Theorem~\ref{thm:pparts} for odd $p$.] Let $A$ be a symmetric $n \times n$ matrix with entries in ${\mathbb Z}_p$, chosen randomly with respect to additive Haar measure. The singular matrices have Haar measure zero, so we may assume $A$ is nonsingular. We want to determine the probability that the cokernel of $A$ with duality pairing $\delta_A$ is isomorphic to $(\Gamma, \delta)$. Note that $\langle x, y \rangle_A$ is zero for all $y \in {\mathbb Z}_p^n$ if and only if $x \in A {\mathbb Z}_p^n$, so the cokernel of $A$ with its duality pairing is canonically isomorphic to ${\operatorname{coker}} {\langle\ ,\ \rangle}_A$ with its induced pairing. We now compute the probability that ${\langle\ ,\ \rangle}_A$ is equal to a fixed symmetric bilinear pairing ${[ \ , \ ]}: {\mathbb Z}_p^n \times {\mathbb Z}_p^n \rightarrow {\mathbb Q}_p / {\mathbb Z}_p$ such that ${\operatorname{coker}} {[ \ , \ ]}$ with its induced pairing is isomorphic to $(\Gamma, \delta)$. Since $A$ is chosen randomly with respect to Haar measure, this probability is invariant under a change of basis on ${\mathbb Z}_p^n$. Assume $p$ is odd, and let $N \in {\operatorname{Sym}}_n({\mathbb Q}_p)$ be a symmetric matrix whose $(i,j)$ entry is a lift of $[e_i,e_j]$ to ${\mathbb Q}_p$. By the classification of symmetric bilinear forms over ${\mathbb Q}_p$ up to ${\operatorname{GL}}_n({\mathbb Z}_p)$ \cite[Section~15.4.4]{ConwaySloane99}, after a change of basis on ${\mathbb Z}_p^n$ we may assume $N$ is diagonal. Possibly changing the lift to ${\mathbb Q}_p$, we may further ensure that $N$ has diagonal entries with valuations $-d_i$, where \[ d_1=\dots =d_{n-r}=0 \mbox{ \ and \ } 1\leq d_{n-r+1}\leq\dots\leq d_n. \] We therefore assume that $N$ is in this form. 
Let $M = N^{-1}$ and note that, by construction, we have ${[ \ , \ ]} = {\langle\ ,\ \rangle}_M$. We have shown that the probability that ${\langle\ ,\ \rangle}_A = {[ \ , \ ]}$ is the same as the probability that ${\langle\ ,\ \rangle}_A = {\langle\ ,\ \rangle}_M$, where $M$ is a diagonal matrix whose $i$th diagonal entry has valuation $d_i$. By Lemma~\ref{BKLemma}, the pairings ${\langle\ ,\ \rangle}_A$ and ${\langle\ ,\ \rangle}_M$ are equal if and only if $A = M + MRM$ for some $R \in M_{n \times n}({\mathbb Z}_p)$ and ${\operatorname{rank}} (\overline A) = {\operatorname{rank}} (\overline M)$. We now determine the probability that ${\langle\ ,\ \rangle}_A = {\langle\ ,\ \rangle}_M$. The condition that $A$ be of the form $M + MRM$ is equivalent to requiring that $M^{-1}(A-M)M^{-1}$ is in $M_{n\times n}({\mathbb Z}_p).$ Given $M$, this is equivalent to the entries $a_{i,j}$ of $A$ satisfying certain divisibility conditions. Let $p^{d_i}u_i$ be the $i$th diagonal entry of $M$, with $u_i\in{\mathbb Z}_p^*$. For $i<j$, the $(i,j)$ entry of $M^{-1}(A-M)M^{-1}$ is in ${\mathbb Z}_p$ if and only if $p^{d_i+d_j}\ \mid a_{i,j}$, and the $(i,i)$ entry is in ${\mathbb Z}_p$ if and only if $p^{2d_i} \mid (a_{i,i} - p^{d_i}u_i)$. Therefore, for all $i \leq j$, the condition $A = M + MRM$ is equivalent to fixing the first $d_i+ d_j$ digits in the $p$-adic expansion of $a_{i,j}$. Note, in particular, that when $d_i = d_j = 0$, there is no condition at all on the entry $a_{i,j}$. The probability that a random symmetric matrix satisfies these conditions is then \[ \prod_{1\le i \le j \le n} p^{-(d_i + d_j)} = \prod_{i=1}^n p^{-(n+1) d_i} = \frac{1}{\# \Gamma^{n+1}}. \] Furthermore, given the above valuation conditions, we see that $\overline A$ is zero outside the upper left $(n-r) \times (n-r)$ minor. The matrix $\overline M$ is also zero outside the upper left $(n-r) \times (n-r)$ minor, and this minor is a diagonal matrix with non-zero entries. 
In particular, ${\operatorname{rank}} (\overline M)=n-r$. The condition that ${\operatorname{rank}}( \overline A) = {\operatorname{rank}} (\overline M)$ is independent from the divisibility conditions, and holds with probability equal to the proportion of invertible matrices in ${\operatorname{Sym}}_{n-r}({\mathbb F}_p)$. In particular, the probability that ${\langle\ ,\ \rangle}_A = {\langle\ ,\ \rangle}_M$, and hence the probability that ${\langle\ ,\ \rangle}_A = {[ \ , \ ]}$, is \[ \frac{\# ({\operatorname{GL}}_{n-r}({\mathbb F}_p) \cap {\operatorname{Sym}}_{n-r}({\mathbb F}_p))}{\# {\operatorname{Sym}}_{n-r}({\mathbb F}_p) \cdot \# \Gamma^{n+1}}. \] Note that this probability depends only on $\# \Gamma$, and is independent of all other choices. Lemma~\ref{lem:numpairings} gives the number of symmetric bilinear pairings ${[ \ , \ ]}: {\mathbb Z}_p^n \times {\mathbb Z}_p^n \rightarrow {\mathbb Q}_p / {\mathbb Z}_p$ such that ${\operatorname{coker}} {[ \ , \ ]}$ with its induced duality pairing is isomorphic to $(\Gamma, \delta)$. We conclude that the probability that the cokernel of $A$ with its duality pairing is isomorphic to $(\Gamma, \delta)$ is the product \begin{equation} \label{eqn:probability} \frac{\# ({\operatorname{GL}}_{n-r}({\mathbb F}_p) \cap {\operatorname{Sym}}_{n-r}({\mathbb F}_p))}{\# {\operatorname{Sym}}_{n-r}({\mathbb F}_p) \cdot \# \Gamma^{n+1}} \cdot \frac{\# \Gamma^n \cdot \prod_{j = n-r+1}^n (1 - p^{-j})} {\# {\operatorname{Aut \;}}(\Gamma, \delta)} . \end{equation} By \cite[Theorem~2]{MacWilliams69}, the number of invertible matrices in ${\operatorname{Sym}}_{k}({\mathbb F}_p)$ is \[ p^{\binom{k+1}{2}} \prod_{j=1}^{\lceil \frac{k}{2} \rceil} (1-p^{1-2j}). \] Therefore, the expression in (\ref{eqn:probability}) can be rewritten as \[ \frac{\prod_{i=1}^{\lceil(n-r)/2\rceil} (1-p^{1-2i}) \cdot \prod_{j=n-r+1}^n (1-p^{-j}) }{\# \Gamma \cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}, \] and the theorem follows. 
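As an aside, MacWilliams' count of invertible symmetric matrices over ${\mathbb F}_p$ can be confirmed by exhaustive enumeration for small sizes; a self-contained Python sketch (ours, independent of the argument):

```python
from itertools import product

def rank_mod_p(A, p):
    """Rank of a matrix over F_p by Gaussian elimination."""
    M = [row[:] for row in A]
    rank = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)  # inverse mod p, valid for p prime
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col] % p:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def count_invertible_symmetric(k, p):
    """Exhaustively count invertible symmetric k x k matrices over F_p."""
    idx = [(i, j) for i in range(k) for j in range(i, k)]
    total = 0
    for vals in product(range(p), repeat=len(idx)):
        A = [[0] * k for _ in range(k)]
        for (i, j), v in zip(idx, vals):
            A[i][j] = A[j][i] = v
        total += rank_mod_p(A, p) == k
    return total

def macwilliams(k, p):
    """p^{k(k+1)/2} * prod_{j=1}^{ceil(k/2)} (1 - p^{1-2j}), as an integer."""
    count = p ** (k * (k + 1) // 2)
    for j in range(1, k // 2 + k % 2 + 1):
        count = count * (p ** (2 * j - 1) - 1) // p ** (2 * j - 1)
    return count
```

For instance, of the $27$ symmetric $2 \times 2$ matrices over ${\mathbb F}_3$, exactly $18$ are invertible, matching $3^3(1 - 3^{-1})$.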
\end{proof} \bigskip We now return to the case $p=2$ and show that, for a given symmetric pairing ${[ \ , \ ]}:{\mathbb Z}_p^n\times{\mathbb Z}_p^n {\rightarrow} {\mathbb Q}_p/{\mathbb Z}_p$, the probability that ${\langle\ ,\ \rangle}_A={[ \ , \ ]}$ is again \[ \frac{\# ({\operatorname{GL}}_{n-r}({\mathbb F}_p) \cap {\operatorname{Sym}}_{n-r}({\mathbb F}_p))}{\# {\operatorname{Sym}}_{n-r}({\mathbb F}_p) \cdot \# \Gamma^{n+1}}. \] \noindent The classification of symmetric bilinear forms over ${\mathbb Q}_2$ is due to Wall \cite{Wall64}. It is given in the following form in \cite[Theorem 2, Section~15.4.4]{ConwaySloane99}. \begin{thm}[\!\!\cite{ConwaySloane99}]\label{CS} Suppose that $A \in {\operatorname{Sym}}_n({\mathbb Z}_2)$. Then there exists a matrix $H \in {\operatorname{GL}}_n({\mathbb Z}_2)$ such that $H A H^t$ is a block diagonal matrix consisting of: \begin{enumerate} \item diagonal blocks $u_i 2^{d_i}$, where each $d_i \ge 0$ and $u_i$ is a unit in ${\mathbb Z}_2$, \item $2\times 2$ blocks of the form $2^{e_i} \left(\begin{smallmatrix} a & b \\ b & c \end{smallmatrix} \right)$, where $e_i \ge 0,\ b$ is a unit in ${\mathbb Z}_2$ and $a,c \in 2{\mathbb Z}_2$. \end{enumerate} \end{thm} \begin{proof}[Proof of Theorem~\ref{thm:pparts} for $p = 2$] As in the case for odd $p$, we let $A$ be a symmetric $n\times n$ matrix with entries in ${\mathbb Z}_2$ chosen randomly with respect to additive Haar measure. We will again consider the probability that the cokernel of $A$ with duality pairing $\delta_A$ is isomorphic to $(\Gamma,\delta)$. We compute the probability that ${\langle\ ,\ \rangle}_A$ is equal to a fixed symmetric bilinear pairing ${[ \ , \ ]}: {\mathbb Z}_2^n \times {\mathbb Z}_2^n \rightarrow {\mathbb Q}_2 / {\mathbb Z}_2$ such that ${\operatorname{coker}} {[ \ , \ ]}$ with its induced pairing is isomorphic to $(\Gamma, \delta)$. 
Just as for odd $p$, since $A$ is chosen randomly with respect to Haar measure, this probability is invariant under a change of basis on ${\mathbb Z}_2^n$. Let $N \in {\operatorname{Sym}}_n({\mathbb Q}_2)$ be a symmetric matrix whose $(i,j)$ entry is a lift of $[e_i,e_j]$ to ${\mathbb Q}_2$. We first note that there exists an element $2^m \in {\mathbb Z}_2$ such that $2^m N \in {\operatorname{Sym}}_n({\mathbb Z}_2)$. It is no longer true that we can change basis so that $2^m N$ is diagonal, but Theorem \ref{CS} shows that there exists $H\in {\operatorname{GL}}_n({\mathbb Z}_2)$ such that $HNH^t$ is a block diagonal matrix with entries in ${\mathbb Q}_2$ that has two types of blocks, diagonal blocks given by a unit times $2^{-d_i}$, and $2\times 2$ blocks of the form $2^{-e_i} \left(\begin{smallmatrix} a_i & b_i \\ b_i & c_i \end{smallmatrix} \right)$, where $b_i \in {\mathbb Z}_2^*$ and $a_i,c_i \in 2{\mathbb Z}_2$. By possibly changing the lift to ${\mathbb Q}_2$, we can suppose that each $d_i, e_i \ge 0$. Since $a_i c_i -b_i^2 \in {\mathbb Z}_2^*$, the inverse of a $2\times 2$ block of this form is a unit times $2^{e_i} \left(\begin{smallmatrix} c_i & -b_i \\ -b_i & a_i \end{smallmatrix}\right)$. Since $-b_i$ is again a unit in ${\mathbb Z}_2^*$, after renaming we can write this block as a unit times $2^{e_i} \left(\begin{smallmatrix} c_i & b_i \\ b_i & a_i \end{smallmatrix}\right)$ where $b_i \in {\mathbb Z}_2^*$ and $a_i,c_i \in 2{\mathbb Z}_2$. We say that $e_i$ is the valuation of this block. Therefore, $M = (HNH^t)^{-1}$ has entries in ${\mathbb Z}_2$ and by construction we have ${[ \ , \ ]} = {\langle\ ,\ \rangle}_M$. As above, Lemma~\ref{BKLemma} implies that ${\langle\ ,\ \rangle}_A$ and ${\langle\ ,\ \rangle}_M$ are equal if and only if $A = M + MRM$ for some $R \in M_{n \times n}({\mathbb Z}_2)$ and ${\operatorname{rank}} (\overline A) = {\operatorname{rank}} (\overline M)$.
The condition that $A$ be of the form $M + MRM$ is equivalent to $M^{-1} (A-M) M^{-1} \in M_{n\times n}({\mathbb Z}_2)$, and gives divisibility conditions on the entries $a_{i,j}$ of $A$. We determine these conditions by explicitly computing the relevant entries of $M+MRM$. For odd $p$ we could suppose that $M$ was a diagonal matrix such that the diagonal entry in row $i$ had valuation $d_i$. There were two kinds of divisibility conditions, one for $a_{i,j}$ with $i<j$, and another for $a_{i,i}$. Combining these constraints, we saw that there exists a matrix $R \in M_{n \times n}({\mathbb Z}_p)$ such that $A = M + MRM$ if and only if for each $i\le j$ the first $d_i+d_j$ digits in the $p$-adic expansion of $a_{i,j}$ are given by particular values determined by $M$. For $p=2$ there are more kinds of constraints to consider. However, we will show that the same statement is true, that there exists an $R \in M_{n \times n}({\mathbb Z}_2)$ such that $A = M + MRM$ if and only if for each $i\le j$ the first $d$ digits in the $2$-adic expansion of $a_{i,j}$ are given by a specific set of values determined by $M$, where $d$ is the sum of the valuations of the blocks of $M$ in rows $i$ and $j$. After permuting coordinates we may suppose that the initial rows of $M$ have only diagonal nonzero entries $u_i 2^{d_i}$ where each $u_i \in {\mathbb Z}_2^*$ and $0 \le d_1 \le \cdots \le d_{k_1}$, followed by a set of $2\times 2$ diagonal blocks $2^{e_i} \left(\begin{smallmatrix} a_i & b_i \\ b_i & c_i \end{smallmatrix} \right)$, where $b_i \in {\mathbb Z}_2^*,\ a_i,c_i \in 2{\mathbb Z}_2$, and $0\le e_1\le \cdots \le e_{k_2}$. By invariance of Haar measure, we see that the probability that ${\langle\ ,\ \rangle}_A = {[ \ , \ ]}$ is the same as the probability that ${\langle\ ,\ \rangle}_A = {\langle\ ,\ \rangle}_M$, where $M$ is a block diagonal matrix of this form. Let $R \in M_{n\times n}({\mathbb Z}_2)$ have entries $r_{i,j}$.
We explicitly compute the entries of the matrix $MRM$. Row $i$ of $M$ can correspond either to a diagonal $1\times 1$ block, the first row of a $2\times 2$ block of the type described in the previous paragraph, or the second row of such a block. Suppose that rows $i$ and $j$ of $M$ each correspond to $1\times 1$ blocks with valuations $d_i$ and $d_j$, respectively. The $(i,j)$ entry of $MRM$ is then a unit times $2^{d_i + d_j} r_{i,j}$. An appropriate choice of $r_{i,j} \in {\mathbb Z}_2$ shows that this entry can be any element of $2^{d_i+d_j} {\mathbb Z}_2$. Now suppose that row $i$ corresponds to a $1\times 1$ block with valuation $d_i$ and that rows $j$ and $j+1$ correspond to the $2\times 2$ block $2^{e_j} \cdot \left(\begin{smallmatrix} a & b\\ b & c \end{smallmatrix} \right)$. Then the $(i,j)$ entry of $MRM$ is a unit times \[ 2^{d_i+e_j} \left( r_{i,j} a + r_{i,j+1} b \right) \] and the $(i,j+1)$ entry is a unit times \[ 2^{d_i+e_j} \left( r_{i,j} b + r_{i,j+1} c \right). \] Since $b \in {\mathbb Z}_2^*$, an appropriate choice of $r_{i,j+1}$ shows that this first entry can be any element of $2^{d_i + e_j} {\mathbb Z}_2$, and similarly, an appropriate choice of $r_{i,j}$ shows that the second entry can be any element of $2^{d_i + e_j} {\mathbb Z}_2$. Now suppose that rows $i$ and $i+1$ correspond to the $2\times 2$ block $2^{e_i} \cdot \left(\begin{smallmatrix} a_i & b_i \\ b_i & c_i \end{smallmatrix} \right)$, and rows $j$ and $j+1$ correspond to the $2\times 2$ block $2^{e_j} \cdot \left(\begin{smallmatrix} a_j & b_j \\ b_j & c_j \end{smallmatrix} \right)$. 
Then the four entries $(i,j)$, $(i,j+1)$, $(i+1,j)$, and $(i+1,j+1)$ of $MRM$ are given by \begin{eqnarray*} 2^{e_i+e_j} & \left( a_i a_j r_{i,j} + b_i a_j r_{i+1,j} + a_i b_j r_{i,j+1} + b_i b_j r_{i+1,j+1}\right) & \\ 2^{e_i+e_j} & \left( a_i b_j r_{i,j} + b_i b_j r_{i+1,j} + a_i c_j r_{i,j+1} + b_i c_j r_{i+1,j+1}\right) & \\ 2^{e_i+e_j} & \left( b_i a_j r_{i,j} + c_i a_j r_{i+1,j} + b_i b_j r_{i,j+1} + c_i b_j r_{i+1,j+1}\right) & \\ 2^{e_i+e_j} & \left( b_i b_j r_{i,j} + c_i b_j r_{i+1,j} + b_i c_j r_{i,j+1} + c_i c_j r_{i+1,j+1}\right), & \end{eqnarray*} respectively. Since $b_i b_j \in {\mathbb Z}_2^*$, appropriate choices of $r_{i+1,j+1}, r_{i+1,j}, r_{i,j+1}$ and $r_{i,j}$, respectively, show that these four entries can each be any element of $2^{e_i+e_j} {\mathbb Z}_2$. Note that we are allowing the case where $i=j$ and these two blocks are identical. We conclude from this analysis that, by an appropriate choice of $R$, the matrix $MRM$ can be any matrix where the valuation of the $(i,j)$ entry is at least the sum of the valuations of the blocks corresponding to rows $i$ and $j$. We see that this fixes the initial $2$-adic digits of every entry of the matrix $A-M$. The probability that a matrix $A$ satisfies all of these conditions is given by the product of the probabilities for individual entries. Suppose that row $k$ of $A$ corresponds to a block with valuation $d$. Considering the divisibility constraints from all entries $a_{k,j}$ in this row with $k\le j$ and all $a_{l,k}$ with $l<k$ contributes a factor of $2^{-(n+1)d}$ to this probability. Taking the product over all rows gives the total probability that there exists an $R \in M_{n \times n}({\mathbb Z}_2)$ such that $A = M + MRM$. The determinant of $M$ is the product of the determinants of the diagonal blocks. For $1\times 1$ diagonal blocks the determinant is a unit times $2^{d_i}$. A $2\times 2$ block has determinant equal to a unit times $2^{2e_i}$.
As in the case for odd $p,\ \# \Gamma = 2^v$, where $v$ is the $2$-adic valuation of $\det(M)$. By permuting rows and columns of $M$, we may suppose that $\overline{M}$ is zero outside of the upper left $(n-r)\times (n-r)$ minor, and is a block diagonal matrix of rank $n-r$. The divisibility conditions also imply that $\overline{A}$ is zero outside of its upper left $(n-r)\times (n-r)$ minor. The condition that ${\operatorname{rank}}(\overline{A}) = {\operatorname{rank}}(\overline{M})$ is again independent of the divisibility conditions. Taking the product over the rows of $A$ shows that the probability that ${\langle\ ,\ \rangle}_A={[ \ , \ ]}$ is \[ \frac{\# ({\operatorname{GL}}_{n-r}({\mathbb F}_2) \cap {\operatorname{Sym}}_{n-r}({\mathbb F}_2))}{\# {\operatorname{Sym}}_{n-r}({\mathbb F}_2) \cdot \# \Gamma^{n+1}}. \] The rest of the argument is the same as for odd $p$. \end{proof} \subsection{A generalization of Theorem~\ref{thm:pparts} for matrices with row and column sums equal to zero.} The combinatorial Laplacian of a graph is a symmetric matrix with each row and column sum equal to zero. We now generalize Theorem~\ref{thm:pparts} to random matrices satisfying this condition. Let ${\operatorname{Sym}}_n^0 \subset {\operatorname{Sym}}_n$ be the symmetric matrices with each row and column sum equal to $0$. We adapt the construction of a finite abelian group with duality pairing associated to a symmetric nonsingular matrix to the case where the matrix may be singular, as follows. Let $A \in {\operatorname{Sym}}_n({\mathbb Z}_p)$ be a possibly singular matrix, and define \[ L_A = {\mathbb Z}_p^n \cap A{\mathbb Q}_p^n. \] Then $A$ determines a natural symmetric pairing $L_A \times L_A \rightarrow {\mathbb Q}_p/{\mathbb Z}_p$, where if $x=\frac{1}{m} Az$ for $z\in {\mathbb Z}_p^n$ and nonzero $m\in {\mathbb Z}_p$, then $(x,y)\mapsto \frac{1}{m} z^ty$. We define the finite cokernel of $A$ to be $\Gamma =L_A/ A{\mathbb Z}_p^n$.
This is exactly the torsion subgroup of the cokernel of $A$. The above pairing descends to \[ \Gamma \times \Gamma \rightarrow {\mathbb Q}_p/{\mathbb Z}_p, \] which, as in the nonsingular case discussed at the beginning of the section, is a duality pairing on $\Gamma$. \begin{thm} \label{thm:zerosum} Let $\Gamma$ be a finite abelian $p$-group and $\delta$ a duality pairing on $\Gamma$. Choose an $A\in {\operatorname{Sym}}_n^0({\mathbb Z}_p)$ randomly with respect to additive Haar measure. Let $\mu_n(\Gamma,\delta)$ be the probability that the finite cokernel of $A$ with its duality pairing is isomorphic to $(\Gamma,\delta)$. Let $r$ be the $p$-rank $\dim_{{\mathbb Z}/p{\mathbb Z}} \Gamma/p\Gamma$. Then \[ \mu_n(\Gamma,\delta) = \frac{\prod_{j=n-r}^{n-1} (1-p^{-j}) \prod_{i=1}^{\lceil(n-1-r)/2\rceil} (1-p^{1-2i})}{\# \Gamma \cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}. \] In particular, \[ \lim_{n\rightarrow \infty} \mu_n(\Gamma,\delta) = \frac{\prod_{i=1}^\infty (1-p^{1-2i})}{\# \Gamma \cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}. \] \end{thm} \begin{proof} Given a matrix $A\in {\operatorname{Sym}}_n^0({\mathbb Z}_p)$, deleting the last row and column gives a matrix $A'$ in ${\operatorname{Sym}}_{n-1}({\mathbb Z}_p)$. Clearly $A'$ determines $A$ as well, and the correspondence respects Haar measure. Let $Z_0$ be the subset of ${\mathbb Z}_p^n$ where the coordinates sum to $0$. We have that $L_A\subseteq Z_0$, and if equality does not hold then the $n-1$ columns of $A'$ are linearly dependent over ${\mathbb Q}_p$. In particular $A'$ has determinant $0$, which happens with probability $0$. Therefore, $L_A = Z_0$ with probability $1$, which we assume from now on. Let $e_1,\dots,e_n$ be the standard basis for ${\mathbb Z}_p^n$. We can explicitly check that the matrix $A'$ gives the same pairing on ${\mathbb Z}_p^{n-1}$ that $A$ gives on $Z_0$ by using the basis $e_1 - e_n, \dots, e_{n-1} - e_n$ for $Z_0$. The theorem now follows by applying Theorem~\ref{thm:pparts} to the $(n-1)\times(n-1)$ matrix $A'$.
\end{proof} \section{Computing Averages}\label{S:averages} In this section, we make theoretical computations of the exact values that are predicted by Conjecture~\ref{C:basic}. These are (unconditional) results of group theory about the values taken by the right-hand side of Conjecture~\ref{C:basic} for several important functions $F$. Recall that ${\mathcal A}(m)$ is the set of all isomorphism classes of pairs $(\Gamma,\delta)$ where $\Gamma$ is an abelian group of order $m$ and $\delta$ is a duality pairing on $\Gamma$. \subsection{The normalizing constant} Here we compute a constant that will be important in our later computations, as it arises when computing the denominator of our averages. \begin{prop}\label{P:normalizing} For any prime $p$, we have \[ \sum_{m = 0}^{\infty} \sum_{(\Gamma,\delta)\in {\mathcal A}(p^m)} \frac{1}{\# \Gamma\cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)} = \prod_{i=1}^{\infty} (1-p^{1-2i})^{-1}. \] \end{prop} \begin{proof} Let $(\Gamma,\delta)$ be a finite abelian $p$-group with duality pairing. From Theorem~\ref{thm:pparts}, we know $\mu_n(\Gamma,\delta)$, the probability that a random matrix in ${\operatorname{Sym}}_n({\mathbb Z}_p)$ has cokernel with its duality pairing isomorphic to $(\Gamma,\delta)$. For each $n$ we have that \[ \sum_{m = 0}^{\infty} \, \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \mu_n(\Gamma,\delta) = 1. \] Recall that \[ \lim_{n\rightarrow \infty} \mu_n(\Gamma,\delta) = \frac{\prod_{i=1}^{\infty} (1-p^{1-2i})}{\# \Gamma\cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}. \] Let $\mu_{\max}(\Gamma,\delta)=\max_{n} \mu_n(\Gamma,\delta)$. It is not hard to see that there is an absolute constant $c$ such that for any $n$ at least the rank of $\Gamma$, we have $\mu_{\max}(\Gamma,\delta)\leq c \mu_n(\Gamma,\delta)$, e.g. take $c=\prod_{i\geq 1} (1-2^{-i})^{-2}$.
So for any $k$, \begin{align*} \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m) \atop r_p(\Gamma) \le k} \mu_{\max}(\Gamma,\delta) &\le \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m) \atop r_p(\Gamma) \le k} c \mu_{k}(\Gamma,\delta) \\ &\le \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} c \mu_k(\Gamma,\delta)\\ &\leq c, \end{align*} where the last inequality holds because $\sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \mu_k(\Gamma,\delta) = 1$. Taking the limit as $k$ goes to infinity shows that $$ \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \mu_{\max}(\Gamma,\delta) $$ converges. Then by the Lebesgue Dominated Convergence Theorem, we have \[ \lim_{n\rightarrow \infty} \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \mu_n(\Gamma,\delta) = \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \lim_{n\rightarrow \infty} \mu_n(\Gamma,\delta) . \] The left side equals $1$ for every $n$, while the right side equals $\prod_{i=1}^{\infty} (1-p^{1-2i})$ times the sum in the statement, so the proposition follows. \end{proof} \subsection{The probability that $\Gamma$ has trivial or cyclic $p$-part} For each prime $p$, we define \[ C_p = \prod_{i=1}^{\infty} (1-p^{1-2i}). \] We have the following, using the measure $\mu$ defined in the introduction. \begin{prop} Let $F_{p-trivial}(\Gamma) = 1$ if $r_p(\Gamma) = 0$, and $0$ otherwise. Then \[ \int_{(\Gamma,\delta)\in \mathcal{A}_p}F_{p-trivial}(\Gamma)\,d\mu= C_p. \] \end{prop} \begin{proof} The measure $\mu$ of the trivial group is $C_p$. \end{proof} \noindent This shows, for example, that the probability that a group obeying our conjectures has trivial $2$-part is a little over $.4194$, and the probability that it has trivial $17$-part is a little over $.9409$. \begin{prop} \label{P:cyclic} Let $F_{cyclic}(\Gamma) = 1$ if $\Gamma$ is cyclic, and $0$ otherwise. Then \[ \int_{(\Gamma,\delta)\in \mathcal{A}_p}F_{cyclic}(\Gamma)\,d\mu = \frac{C_p}{1-p^{-1}}. \] \end{prop} \begin{proof} By definition, we have \[ \int_{(\Gamma,\delta)\in \mathcal{A}_p}F_{cyclic}(\Gamma)\,d\mu =C_p \sum_{m=0}^\infty \sum_{\substack{(\Gamma,\delta) \in {\mathcal A}(p^m)\\ \Gamma \textrm{ cyclic}}} \frac{1}{\# \Gamma\cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}.
\] \noindent For each $m$ we show that \begin{equation}\label{E:tocompute} \sum_{\substack{(\Gamma,\delta) \in {\mathcal A}(p^m)\\ \Gamma \textrm{ cyclic}}} \frac{1}{\# \Gamma\cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)} = \frac{1}{p^m}. \end{equation} There is only one cyclic group of a given order, so we only need to take the sum over the different pairings. We claim that the number of isomorphism classes of $({\mathbb Z}/p^m{\mathbb Z},\delta)$ is equal to the size of ${\operatorname{Aut \;}}({\mathbb Z}/p^m{\mathbb Z},\delta)$ for each duality pairing $\delta$. A duality pairing $\delta$ on the cyclic group ${\mathbb Z}/p^m{\mathbb Z}$ is determined by its value on $(1,1)$. Changing basis for ${\mathbb Z}/p^m{\mathbb Z}$, replacing $1$ by a generator $u \in ({\mathbb Z}/p^m{\mathbb Z})^*$, multiplies this value by a factor of $u^2$. Therefore, the isomorphism classes of duality pairings on ${\mathbb Z}/p^m{\mathbb Z}$ correspond naturally to the cosets in $({\mathbb Z}/p^m{\mathbb Z})^*$ modulo squares, and the number of automorphisms of each pairing is the number of square roots of $1$. These are equal, since one is the index of the image and one is the size of the kernel for the endomorphism $u \mapsto u^2$ of $({\mathbb Z}/p^m{\mathbb Z})^*$. This proves the claim, and the proposition follows. \end{proof} \subsection{Moments} The Cohen-Lenstra distribution is a measure on abelian $p$-groups such that the expected number of surjections to any finite abelian $p$-group $\Gamma$ is $1$. Such expected numbers of surjections are often called the \emph{moments} of the distribution, and we now explain why. Let $\Gamma \cong \prod_{i=1}^r {\mathbb Z}/p^{f_i}{\mathbb Z}$ with $f_1\le f_2 \le \cdots \le f_r$ and $\Gamma' \cong \prod_{i=1}^r {\mathbb Z}/p^{e_i}{\mathbb Z}$ with $e_1\le e_2 \le \cdots \le e_r$. Let $f'_1\geq \cdots$ be the transpose of the partition $f_r\geq \dots \geq f_1$, and let $e'_1\geq \cdots$ be the transpose of the partition $e_r\geq \dots \geq e_1$.
Note that the $f_i'$ are a complete set of invariants for the finite abelian $p$-group $\Gamma$ (and thus so are $p^{f_i'}$). It is a standard fact that $\# {\operatorname{Hom}} ( \Gamma, \Gamma')=p^{\sum_i f_i'e_i'}$. Thus, for any measure $\nu$ on finite abelian $p$-groups $$ \int_{\Gamma} \#{\operatorname{Hom}}(\Gamma,\Gamma') d\nu=\int_{\Gamma} \prod_i (p^{f_i'})^{e_i'} d\nu $$ is the $e'_1,e'_2,\dots$ mixed moment (in the usual sense) of the variables $p^{f_1'}, p^{f_2'}, \dots$. The averages $$ \int_{\Gamma} \#{\operatorname{Hom}}(\Gamma,\Gamma') d\nu $$ for all subgroups $\Gamma'$ of an abelian $p$-group $A$ are related to the averages $$ \int_{\Gamma} \#{\operatorname{Sur}}(\Gamma,\Gamma') d\nu $$ for all subgroups $\Gamma'$ of $A$ by an upper-triangular linear transformation (only depending on $A$). In other words, the ${\operatorname{Hom}}$ averages, taken together, are equivalent data to the ${\operatorname{Sur}}$ averages. However, in practice, the ${\operatorname{Sur}}$ averages often seem to capture more basic algebraic data, and the ${\operatorname{Hom}}$ averages are usually best understood as the sum of the ${\operatorname{Sur}}$ averages over subgroups (e.g. in the case of the Cohen-Lenstra measure all the ${\operatorname{Sur}}$ averages are $1$). Hence the ${\operatorname{Sur}}$ averages are often studied and called the moments. We now turn to computing the moments of the measure $\mu$ defined in the introduction. \begin{thm}\label{T:momint} Let $\Gamma' \cong \prod_{i=1}^r {\mathbb Z}/p^{e_i}{\mathbb Z}$ with $e_1\le e_2 \le \cdots \le e_r$. Then $$\int_{(\Gamma,\delta)\in{\mathcal A}_p} \#{\operatorname{Sur}}(\Gamma,\Gamma') d\mu =p^{(r-1)e_1 + (r-2)e_2+\cdots + e_{r-1}}.$$ \end{thm} \noindent We prove this result after first establishing the following analogous result in the random matrix case. \begin{thm}\label{T:mom} Suppose $\Gamma' \cong \prod_{i=1}^r {\mathbb Z}/p^{e_i}{\mathbb Z}$ with $e_1\le e_2 \le \cdots \le e_r$.
Let $A$ be a random matrix in ${\operatorname{Sym}}_n({\mathbb Z}_p)$ with respect to (additive) Haar measure. As $n$ goes to infinity, the expected number of surjections from the cokernel of $A$ to $\Gamma'$ approaches $p^{(r-1)e_1 + (r-2)e_2+\cdots + e_{r-1}}$. \end{thm} \begin{proof} Let $A \in {\operatorname{Sym}}_n({\mathbb Z}_p)$ be chosen randomly with respect to Haar measure. With probability $1,\ \det(A) \neq 0$, so we assume that from now on. There are $p^{(e_1+e_2+\cdots + e_r) n}$ distinct maps ${\mathbb Z}_p^n \rightarrow \Gamma'$. As $n$ goes to infinity, the probability that such a map is a surjection goes to $1$. We choose such a map at random and compute the probability that it factors through the cokernel of $A$. The kernel of a surjection from ${\mathbb Z}_p^n$ to $\Gamma'$ is given by the column space of a matrix $B\in M_{n\times n}({\mathbb Z}_p)$. Given $B$, we determine the probability that $A {\mathbb Z}_p^n \subset B {\mathbb Z}_p^n$. This probability is unchanged by a change of basis on ${\mathbb Z}_p^n$. We put $B$ into Smith normal form, first multiplying on the right and then on the left by matrices in ${\operatorname{GL}}_n({\mathbb Z}_p)$. We choose $G, H \in {\operatorname{GL}}_n({\mathbb Z}_p)$ so that $GBH$ is diagonal. Since $G^t {\mathbb Z}_p^n = H {\mathbb Z}_p^n = {\mathbb Z}_p^n$, the probability that $A{\mathbb Z}_p^n \subset B{\mathbb Z}_p^n$ is equal to the probability that $G A G^t {\mathbb Z}_p^n \subset G B H {\mathbb Z}_p^n$. By the properties of Haar measure, $GAG^t$ is drawn from the same distribution as $A$. We now need only compute the probability that $A \in D {\mathbb Z}_p^n$, where $D$ is a diagonal matrix with $r$ diagonal entries of positive valuation, $u_i p^{e_i}$ where each $u_i$ is a unit in ${\mathbb Z}_p$ and $1\le e_1 \le e_2 \le \cdots \le e_r$. Note that ${\mathbb Z}_p^n/D{\mathbb Z}_p^n \cong \Gamma'$. 
The condition that $A \in D {\mathbb Z}_p^n$ can now be phrased in terms of divisibility conditions on the entries of $A$ that are determined by the $e_i$. Suppose that the $k$th row of $D$ has diagonal entry equal to a unit times $p^{e_i}$. Then this condition implies that every entry of the $k$th row and column of $A$ must have valuation at least $e_i$. We count the number of independent entries affected and see that a random symmetric matrix satisfies all of these conditions with probability $$p^{-e_r n - e_{r-1}(n-1)- \cdots - e_1 (n-(r-1))}.$$ We multiply by the $p^{(e_1+e_2+\cdots + e_r) n}$ maps ${\mathbb Z}_p^n \rightarrow \Gamma'$, which are almost all surjections as $n$ goes to infinity. This shows that the expected number of surjections is $p^{(r-1)e_1+(r-2) e_2+\cdots + e_{r-1}}$. \end{proof} For a partition $\lambda$ given by $\lambda_1\geq \lambda_2\geq \dots$, we let $\Gamma'_\lambda =\oplus_i {\mathbb Z}/p^{\lambda_i}{\mathbb Z}$. Let $\lambda'$ denote the transpose of $\lambda$. Note that $\sum_i (i-1)\lambda_i$ is the sum over boxes in the partition diagram of $\lambda$ of $i-1$, where $i$ is the row in which the box appears. Summing by column, we obtain $\sum_i (i-1)\lambda_i= \sum_j \frac{\lambda'_j(\lambda'_j-1)}{2}.$ So we have, when $n-1\geq \lambda'_1$, the expected number of surjections from the cokernel of $A\in {\operatorname{Sym}}_n({\mathbb Z}_p)$ to $\Gamma'_\lambda$ is given by \begin{equation}\label{E:ESur} p^{\sum_i (i-1)\lambda_i}=p^{\sum_j \frac{\lambda'_j(\lambda'_j-1)}{2}}.
\end{equation} \begin{proof}[Proof of Theorem~\ref{T:momint}] In Theorem~\ref{T:mom}, we have computed \[ \lim_{n\rightarrow \infty} \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \mu_n(\Gamma,\delta) \#{\operatorname{Sur}}(\Gamma,\Gamma'), \] and so it remains to show that \[ \lim_{n\rightarrow\infty} \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \mu_n(\Gamma,\delta) \#{\operatorname{Sur}}(\Gamma,\Gamma')= \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \lim_{n\rightarrow \infty} \mu_n(\Gamma,\delta) \#{\operatorname{Sur}}(\Gamma,\Gamma'). \] We have \[ \mu_n(\Gamma,\delta) = \frac{\prod_{j=n-r+1}^n (1-p^{-j}) \prod_{i=1}^{\lceil(n-r)/2\rceil} (1-p^{1-2i})}{\# \Gamma \cdot \# {\operatorname{Aut \;}}(\Gamma,\delta)}, \] where $r$ is the rank of $\Gamma$. We closely follow the proof of Proposition~\ref{P:normalizing}. Let $\mu_{\max}(\Gamma,\delta):=\max_{n} \mu_n(\Gamma,\delta)$. It is not hard to see that there is an absolute constant $c$ such that for any $n$ at least the rank of $\Gamma$, we have $\mu_{\max}(\Gamma,\delta)\leq c \mu_n(\Gamma,\delta),$ e.g. take $c=\prod_{i\geq 1} (1-2^{-i})^{-2}$. So for any $k$ \begin{align*} \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m) \atop r_p(\Gamma) \le k} \mu_{\max}(\Gamma,\delta) \#{\operatorname{Sur}}(\Gamma,\Gamma') &\le \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m) \atop r_p(\Gamma) \le k} c \mu_m(\Gamma,\delta)\#{\operatorname{Sur}}(\Gamma,\Gamma')\\ &\le \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} c \mu_m(\Gamma,\delta)\#{\operatorname{Sur}}(\Gamma,\Gamma'). \end{align*} Taking the limit as $k\rightarrow\infty$, we see that the last term is bounded by Theorem~\ref{T:mom}. So we have that $$ \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \mu_{\max}(\Gamma,\delta) \#{\operatorname{Sur}}(\Gamma,\Gamma') $$ converges.
Then by the Lebesgue Dominated Convergence Theorem, we have \[ \lim_{n\rightarrow \infty} \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \mu_n(\Gamma,\delta) \#{\operatorname{Sur}}(\Gamma,\Gamma') \] is equal to \[ \sum_{m=0}^\infty \sum_{(\Gamma,\delta) \in {\mathcal A}(p^m)} \lim_{n\rightarrow \infty} \mu_n(\Gamma,\delta) \#{\operatorname{Sur}}(\Gamma,\Gamma'), \] as desired. \end{proof} We now use this result to deduce the expectation of $p^{k\cdot r_p(\Gamma)}$. We recall the definition of the Gaussian binomial coefficient. Let \[ \binom{k}{j}_p:= \prod_{i=0}^{j-1} \frac{p^k-p^i}{p^j-p^i}. \] This counts the number of $j$-dimensional subspaces of $\left({\mathbb Z}/p{\mathbb Z}\right)^k$. Note that $\binom{k}{0}_p = 1$ for any $k$ and $p$ since both the numerator and the denominator are empty products. \begin{thm} For a finite abelian $p$-group $\Gamma$, let $r_p(\Gamma)$ denote the $p$-rank of $\Gamma$, so $p^{r_p(\Gamma)}$ is the size of $\Gamma / p\Gamma$. We have \[ \int_{(\Gamma,\delta)\in{\mathcal A}_p} p^{k\cdot r_p(\Gamma)} d\mu = \prod_{j=0}^{k-1} (p^j+1). \] \end{thm} \begin{proof} We show that the left hand side of the statement is equal to \[ \sum_{j=0}^{k} p^{j(j-1)/2} \binom{k}{j}_p. \] Applying the $q$-binomial theorem, formula (1.87) in \cite{Stanley12}, with $x=1$ completes the proof. Let $\Gamma' = \left({\mathbb Z}/p{\mathbb Z}\right)^k$. The number of maps from $\Gamma$ to $\Gamma'$ is $p^{k\cdot r_p(\Gamma)}$. Such a map surjects onto a subgroup of $\Gamma'$ of size $p^j$ for some $j \in [0,k]$. By the above theorem, the expected number of surjections onto a particular subgroup isomorphic to $\left({\mathbb Z}/p{\mathbb Z}\right)^j$ is $p^{j(j-1)/2}$. The number of subgroups of $\Gamma'$ isomorphic to $\left({\mathbb Z}/p{\mathbb Z}\right)^j$ is equal to $\binom{k}{j}_p$.
\end{proof} The expectation of $p^{k\cdot r_p(\Gamma)}$ takes a particularly nice form, which suggests that the $p$-rank of the Jacobian of a random graph also has a nice distribution. This distribution has been determined by the fifth author as Corollary 9.4 of \cite{Wood14}. \section{Empirical evidence}\label{S:Data} Here we present some computational evidence for the conjectures stated in the introduction. The code used to generate the data is available as part of the arXiv version of this paper \cite{arxiv}, after the {\textbackslash}end{\textbraceleft}document{\textbraceright} line in the source file. \subsection{The probability that the Jacobian of a random graph is cyclic} We computed the Jacobians of $10^6$ connected random graphs with $n$ vertices and edge probability $q$, for $n \in \{ 15, 30, 45, 60\}$ and $q \in \{.3, .5, .7\}$. (When disconnected graphs appeared, we discarded them without computing the Jacobians.) The following table displays the proportions of these graphs with cyclic Jacobians. \vspace{.25 cm} \begin{center} \begin{tabular}{ |c | c | c | c | } \hline n$\setminus$ q & .3 & .5 & .7 \\ \hline 15 & .784255 & .792895 & .775746 \\ \hline 30 & .793807 & .793570 & .793375 \\ \hline 45 & .793308 & .793962 & .793637 \\ \hline 60 & .793436 & .793694 & .79354 \\ \hline \end{tabular} \end{center} \vspace{.25 cm} \noindent We believe these data support Conjecture~\ref{C:cyc}, which predicts that the probability that $\Gamma(n,q)$ is cyclic tends to a limit slightly higher than $.7935$ as $n$ tends to infinity. \subsection{Relative frequencies of $2$-groups with duality pairings} We computed the Sylow $2$-subgroups with duality pairing for $10^5$ random graphs on $20$ vertices with edge probability $.5$, and compared the frequency of each group of size at most 8 with the frequency of the trivial group.
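One of these statistics is easy to replicate with a short stdlib-only script: the Jacobian has trivial $2$-part exactly when the reduced Laplacian has full rank mod $2$ (equivalently, when the number of spanning trees is odd), and the heuristic predicts this happens with limiting probability $C_2=\prod_{i\geq 1}(1-2^{1-2i})\approx .4194$. The sketch below is an independent illustration, not the code distributed with the paper:

```python
import random

def reduced_laplacian_mod2(n, q, rng):
    """Reduced Laplacian (last row and column deleted) of G(n, q), mod 2."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < q:
                A[i][j] = A[j][i] = 1
    # diagonal = degree mod 2; off-diagonal = -A[i][j] = A[i][j] mod 2
    return [[(sum(A[i]) if i == j else A[i][j]) % 2 for j in range(n - 1)]
            for i in range(n - 1)]

def rank_mod2(M):
    """Rank of a square 0/1 matrix over F_2, by Gaussian elimination."""
    M = [row[:] for row in M]
    m, rank = len(M), 0
    for col in range(m):
        piv = next((r for r in range(rank, m) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(m):
            if r != rank and M[r][col]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# prediction: P(trivial 2-part) -> C_2 = prod_{i>=1} (1 - 2^{1-2i}) ~ .4194
C2 = 1.0
for i in range(1, 40):
    C2 *= 1 - 2.0 ** (1 - 2 * i)

rng = random.Random(0)
n, trials = 24, 1000
trivial = sum(rank_mod2(reduced_laplacian_mod2(n, .5, rng)) == n - 1
              for _ in range(trials))
print(round(C2, 4), trivial / trials)  # the two values should be close
```

With these sample sizes the empirical proportion agrees with $C_2$ to within ordinary Monte Carlo error.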
In order to distinguish between the pairings that we observed, we recall the classification of finite abelian $p$-groups with duality pairing, following the presentation in \cite{Miranda84}, in which these results are attributed to Wall \cite{Wall64}. Let $\mathcal{B}_p$ denote the semigroup of isomorphism classes of finite abelian $p$-groups with duality pairing under orthogonal direct sum. The classification of symmetric bilinear forms on abelian $2$-groups is more complicated than for other $p$-groups \cite{Miranda84}. \begin{prop}\label{mirandap2} The semigroup $\mathcal{B}_2$ is generated by forms of the following types: \begin{align*} A_{2^r} & \text{ on } {\mathbb Z}/2^r{\mathbb Z},\ r\ge 1; \langle 1,1\rangle = 2^{-r}, \\ B_{2^r} & \text{ on } {\mathbb Z}/2^r{\mathbb Z},\ r\ge 2; \langle 1,1 \rangle = -2^{-r}, \\ C_{2^r} & \text{ on } {\mathbb Z}/2^r{\mathbb Z},\ r\ge 3; \langle 1,1 \rangle = 5 \cdot 2^{-r},\\ D_{2^r} & \text{ on } {\mathbb Z}/2^r{\mathbb Z},\ r\ge 3; \langle 1,1 \rangle = -5\cdot 2^{-r},\\ E_{2^r} & \text{ on } {\mathbb Z}/2^r{\mathbb Z} \times {\mathbb Z}/2^r {\mathbb Z},\ r\ge 1; \langle e_i, e_j \rangle = \begin{cases} 0 & \text{if } i =j, \\ 2^{-r}& \text{if } i\neq j, \end{cases}\\ F_{2^r} & \text{ on } {\mathbb Z}/2^r{\mathbb Z} \times {\mathbb Z}/2^r {\mathbb Z},\ r\ge 2; \langle e_i, e_j \rangle = \begin{cases} 2^{-(r-1)} & \text{if } i =j, \\ 2^{-r}& \text{if } i\neq j \end{cases}. \end{align*} \end{prop} \noindent The relations between these generators for $\mathcal{B}_2$ are complicated and we will not need them here. 
\bigskip \begin{center} $\begin{array}{|c|c|c|c|c|} \hline \mathbf{(\Gamma, \delta)}&$\#$\textbf{Aut}\mathbf{(\Gamma, \delta)}&\textbf{Proportion}& \textbf{Observed}&\textbf{Expected}\\ & &\textbf{of Sample}&\textbf{Ratio}&\textbf{Ratio} \\ \hline 1&1&.419161&1&1\\ \hline A_2&1&.210371&1.99249&2\\ \hline A_4&2&.0518826&8.07903&8\\ \hline B_4&2&.0522326&8.02489&8\\ \hline A_2\oplus A_2 &2&.0517726&8.0962&8 \\ \hline E_4 &6&.0170709&24.5542&24 \\ \hline A_8 &4&.0129706&32.3161&32\\ \hline B_8&4&.0131007&31.9953&32 \\ \hline C_8&4&.0132507&31.6332&32\\ \hline D_8&4 &.0129206&32.4412&32\\ \hline A_2 \oplus A_4 &2&.0259813&16.1332&16\\ \hline A_2 \oplus A_2 \oplus A_2 &6&.00888044&47.2005&48 \\ \hline \end{array}$ \end{center} \bigskip \noindent Here, the observed ratio is the observed frequency of the trivial group $.419161$ divided by the observed frequency of $(\Gamma, \delta)$, while the expected ratio is the factor $\# \Gamma \cdot \# \text{Aut}\;(\Gamma,\delta)$ predicted by Conjecture~\ref{C:basic}, in the special case where $F$ depends only on the Sylow $2$-subgroup of $\Gamma$ with its restricted pairing. \subsection{Relative frequencies of $3$-groups with duality pairings} We now recall the classification of finite abelian $p$-groups with duality pairing for odd primes $p$ \cite{Miranda84}. \begin{prop}\label{miranda} If $p$ is odd, the semigroup $\mathcal{B}_p$ is generated by cyclic groups with pairings of the following two types: \begin{align*} A_{p^r} & \text{ on } {\mathbb Z}/p^r{\mathbb Z},\ r\ge 1; \langle 1,1\rangle = p^{-r}, \text{ and} \\ B_{p^r} & \text{ on } {\mathbb Z}/p^r{\mathbb Z},\ r\ge 1; \langle 1,1 \rangle = \alpha p^{-r}, \end{align*} where $\alpha$ is a quadratic non-residue mod $p$. \end{prop} \noindent The semigroup $\mathcal{B}_p$ is not free on these generators; the relations are generated by $A_{p^r} \oplus A_{p^r} = B_{p^r} \oplus B_{p^r}$.
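The counting behind these classifications can be sanity-checked by brute force: on ${\mathbb Z}/p^m{\mathbb Z}$, isomorphism classes of pairings biject with units modulo squares, and each class has as many automorphisms as there are square roots of $1$ (so four classes on ${\mathbb Z}/8{\mathbb Z}$, matching $A_8,B_8,C_8,D_8$ above, and two on ${\mathbb Z}/9{\mathbb Z}$). A small illustrative script, not part of the paper:

```python
from math import gcd

def pairing_data(p, m):
    """(index of squares in units of Z/p^m, number of square roots of 1)."""
    q = p ** m
    units = [u for u in range(1, q) if gcd(u, q) == 1]
    squares = {u * u % q for u in units}
    classes = len(units) // len(squares)        # index of the image of u -> u^2
    roots_of_one = sum(1 for u in units if u * u % q == 1)  # size of its kernel
    return classes, roots_of_one

for p in (2, 3, 5):
    for m in (1, 2, 3):
        c, k = pairing_data(p, m)
        assert c == k  # index of image equals size of kernel for u -> u^2
```

This is exactly the "index of the image equals size of the kernel" step used in the proof of Proposition~\ref{P:cyclic}.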
We computed the Sylow $3$-subgroups of the Jacobians of a sample of $10^5$ random graphs on 20 vertices with edge probability $.5$, and compared the relative frequency of each group with pairing of size at most 9 with the frequency of the trivial group. \bigskip \begin{center}$\begin{array}{|c|c|c|c|c|} \hline \mathbf{(\Gamma, \delta)}&$\#$\textbf{Aut}\mathbf{(\Gamma,\delta)}&\textbf{Proportion}& \textbf{Observed}&\textbf{Expected}\\ &&\textbf{of Sample}&\textbf{Ratio}&\textbf{Ratio} \\ \hline 1&1&.638566&1&1\\ \hline A_3&2&.106104&6.01828&6\\ \hline B_3&2&.106324&6.00583&6\\ \hline A_9&2&.0349714&18.2597&18 \\ \hline B_9&2&.0355414&17.9668&18\\ \hline A_3 \oplus A_3 &8&.00905036&70.5569&72\\ \hline A_3 \oplus B_3 &4&.0177307&36.0147&36 \\ \hline \end{array}$ \end{center} \bigskip \noindent We found similar data for Sylow 5-subgroups and Sylow 7-subgroups of Jacobians of random graphs. \subsection{Relative frequencies at two places} Any duality pairing on a finite abelian group decomposes as an orthogonal direct sum of duality pairings on the Sylow $p$-subgroups. Here we give data supporting the hypothesis that the $p$-parts of Jacobians of random graphs are independent for distinct primes. We computed the Sylow 2-subgroups and 3-subgroups of $2\cdot 10^5$ random graphs on 30 vertices with edge probability $.5$. In 53366 cases, both of these subgroups were trivial. The following table displays the ratio of the frequency of the trivial group to the frequency of each of the duality pairings on ${\mathbb Z}/2{\mathbb Z} \times {\mathbb Z}/2{\mathbb Z} \times {\mathbb Z}/3{\mathbb Z}$.
\bigskip \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $\mathbf{(\Gamma, \delta)}$&\textbf{$\#\text{Aut}(\Gamma,\delta)$}& \textbf{Proportion} & \textbf{Observed} & \textbf{Expected} \\ & & \textbf{of Sample}& \textbf{Ratio} & \textbf{Ratio} \\ \hline $A_2 \oplus A_2 \oplus A_3$&4& .005655 &47.1848&48\\ \hline $A_2 \oplus A_2 \oplus B_3$&4& .005420 & 49.2306&48\\ \hline $A_3 \oplus E_4$&12& .001865 &143.072&144\\ \hline $B_3 \oplus E_4$&12& .001965 &135.791&144\\ \hline \end{tabular} \end{center} \bigskip \section{Acknowledgments} The authors thank Matt Baker, Wei Ho, Matt Kahle, and the referees. The fourth author was supported in part by NSF grant DMS-1068689 and NSF CAREER grant DMS-1149054. The fifth author was supported by an American Institute of Mathematics Five-Year Fellowship and National Science Foundation grants DMS-1147782 and DMS-1301690. \newcommand{\etalchar}[1]{$^{#1}$} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}} \providecommand{\href}[2]{#2}
https://arxiv.org/abs/0801.2115
A study of counts of Bernoulli strings via conditional Poisson processes
\def\sect#1{\section{#1}\setcounter{equation}{0}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \begin{document} \author{Fred W. Huffer, Jayaram Sethuraman, and Sunder Sethuraman} \address{\noindent Department of Statistics, Florida State University, Tallahassee, FL \ 32306. \newline e-mail: \rm \texttt{huffer@stat.fsu.edu} } \address{\noindent Department of Statistics, Florida State University, Tallahassee, FL \ 32306. \newline e-mail: \rm \texttt{sethu@stat.fsu.edu} } \address{\noindent Department of Mathematics, 396 Carver Hall, Iowa State University, Ames, IA \ 50011, USA. \newline e-mail: \rm \texttt{sethuram@iastate.edu}} \title[Bernoulli strings and Poisson processes] {A study of counts of Bernoulli strings via conditional Poisson processes} \begin{abstract} A sequence of random variables, each taking values $0$ or $1$, is called a Bernoulli sequence. We say that a string of length $d$ occurs, in a Bernoulli sequence, if a success is followed by exactly $(d-1)$ failures before the next success. The counts of such $d$-strings are of interest, and in specific independent Bernoulli sequences are known to correspond to asymptotic $d$-cycle counts in random permutations. In this note, we give a new framework, in terms of conditional Poisson processes, which allows for a quick characterization of the joint distribution of the counts of all $d$-strings, in a general class of Bernoulli sequences, as certain mixtures of the product of Poisson measures. In particular, this general class includes all Bernoulli sequences considered in the literature, as well as a host of new sequences.
\end{abstract} \subjclass[2000]{primary 60C05; secondary 60K99} \keywords{Bernoulli, cycles, strings, spacings, nonhomogeneous, Poisson processes, random permutations} \thanks{Research partially supported by ARO-W911NF-04-1-0333, NSA-H982300510041, and NSF-DMS-0504193.} \maketitle \sect{Introduction} \label{intro_sect} In this note, we study the joint distribution of the counts of certain $d$-strings of all orders $d>1$ arising in Bernoulli sequences. Previous work has used several different methods, including combinatorial, factorial moment, and P\'olya and Hoppe urn model methods to identify the joint count distribution with respect to a class of independent Bernoulli sequences. In this context, our main contribution is to introduce a new framework, using conditional Poisson processes, which allows for a concise derivation of the joint count distribution as a mixture of the product of Poisson measures with respect to all Bernoulli sequences considered before, as well as many others in a general class, including some dependent Bernoulli sequences. A Bernoulli sequence ${\bf Y} = \{Y_n\}_{n\geq 1}$ is a sequence of $\{0,1\}$-valued random variables. For $d\geq 1$, we say that a $d$-string occurs if a $1$ is followed by exactly $(d-1)$ $0$'s before the next $1$ in the Bernoulli sequence. Specifically, a $d$-string occurs at time $n\geq 1$ if $Y_{n,d}=1$ where $$Y_{n,d} \ {=}\ \left\{\begin{array}{rl} Y_nY_{n+1}& \ {\rm for \ }d=1\\ Y_n(1-Y_{n+1})\cdots (1-Y_{n+d-1})Y_{n+d}&\ {\rm for \ }d\geq 2,\end{array}\right. $$ that is, if $\langle Y_n,\dots,Y_{n+d}\rangle = \langle 1,\underbrace{0,\dots,0}_{d-1},1\rangle.$ Let $Z_d=\sum_{n\geq 1}Y_{n,d}$ be the count of all $d$-strings, for $d\geq 1$, and ${\bf Z} = \langle Z_d: d\geq 1\rangle$ be the ``count vector'' of strings. 
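In computational terms, ${\bf Z}$ records the gaps between successive $1$'s: if $1$'s occur at positions $i<j$ with no $1$ in between, a $(j-i)$-string is counted. A minimal sketch of this bookkeeping (illustrative only, not from the paper):

```python
from collections import Counter

def count_vector(y):
    """Z_d = number of d-strings: a 1 followed by exactly d-1 zeros, then a 1."""
    ones = [i for i, v in enumerate(y) if v == 1]
    # the gap between consecutive 1's at positions i < j contributes to Z_{j-i}
    return Counter(j - i for i, j in zip(ones, ones[1:]))

z = count_vector([1, 0, 0, 1, 1, 0, 1])
print(dict(z))  # {3: 1, 1: 1, 2: 1}
```

Note that an initial run of $0$'s before the first $1$ contributes to no $d$-string, consistent with the definition of $Y_{n,d}$.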
[In general, ${\bf Z}$ may have divergent components, but for the Bernoulli sequences considered in this article it is easily shown (by taking expectations) that all components $Z_k$ are finite with probability $1$.] In this notation, the general problem is to understand the distribution of ${\bf Z}$ and its connection to the underlying sequence ${\bf Y}$. Aside from the problem's basic interest, $d$-strings and their counts from specific independent Bernoulli sequences have interpretations with respect to random permutations, record values, Bayesian nonparametrics, and species allocation models through Ewens sampling formula. We will use ``$\stackrel{d}{=}$'' to signify ``equals in distribution,'' and ${\mathcal L}(X)$ to denote the law or distribution of the random variable $X$. Denote also ${\rm Po}(\lambda)$ as the Poisson measure on $\mathbb R$ with intensity $\lambda$, and $I(B)$ as the indicator of a set $B$. \begin{example} \label{ex_1} \rm Let $\mathbb S_n = \{1,2,\ldots,n\}$, and consider the Feller algorithm to generate a permutation $\pi:\mathbb S_n\rightarrow \mathbb S_n$ uniformly among the $n!$ choices (cf. Feller~(1945)): \begin{itemize} \item[1.] Draw an element uniformly from $\mathbb S_n$, and call it $\pi(1)$. If $\pi(1)=1$, a $1$-cycle is completed. If $\pi(1)\neq 1$, make another draw uniformly from $\mathbb S_n\setminus\{\pi(1)\}$, and call it $\pi(\pi(1))$. Continue drawing from $\mathbb S_n\setminus\{\pi(1), \pi(\pi(1))\},\ldots$ naming them $\pi(\pi(\pi(1)))$, and so on, until a cycle (of some length) is finished. \item[2.] From the elements left in $\mathbb S_n\setminus\{\pi(1),\pi(\pi(1)), \ldots,1\}$ after the first cycle is completed, follow the process in step $1$ with the smallest remaining number taking the role of ``$1$'' to finish a second cycle. Repeat until all elements of $\mathbb S_n$ are exhausted. \end{itemize} Let $I^{(n)}_k$ be the indicator that a cycle is completed at the $k$th Feller draw from $\mathbb S_n$. 
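The two steps above are straightforward to simulate; the following sketch (illustrative, not from the paper) builds $\pi$ by Feller draws and records the cycle-completion indicators $I^{(n)}_k$:

```python
import random

def feller_permutation(n, rng):
    """Uniform permutation of {1..n} via Feller's algorithm, together with
    the indicators I_1..I_n that the k-th draw completes a cycle."""
    remaining = set(range(1, n + 1))   # elements not yet used as images
    pi, indicators = {}, []
    while remaining:
        start = cur = min(remaining)   # smallest element not yet in a cycle
        while True:
            nxt = rng.choice(sorted(remaining))
            remaining.discard(nxt)
            pi[cur] = nxt
            if nxt == start:           # cycle closed
                indicators.append(1)
                break
            indicators.append(0)
            cur = nxt
    return pi, indicators

pi, ind = feller_permutation(8, random.Random(1))
assert sorted(pi) == sorted(pi.values()) == list(range(1, 9))
assert len(ind) == 8 and ind[-1] == 1  # the n-th draw always closes a cycle
```

At the $k$th draw, $n-k+1$ elements remain and exactly one of them (the current cycle's starting element) closes the cycle, which is the source of $P(I^{(n)}_k=1)=1/(n-k+1)$.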
A moment's thought shows that $\{I^{(n)}_k\}_{k=1}^n$ are independent Bernoulli random variables with $P(I^{(n)}_k=1)=1/(n-k+1)$ as, independent of the past, exactly one choice at time $1\leq k\leq n$ from the remaining $n-k+1$ members left in $\mathbb S_n$ completes the cycle. Denote $C^{(n)}_k$ as the number of $k$-cycles in $\pi$, $$C^{(n)}_k = \left\{\begin{array}{rl} I^{(n)}_1 + \sum_{i=1}^{n-1} I^{(n)}_iI^{(n)}_{i+1}& {\rm for \ }k=1\\ \prod_{l=1}^{k-1}(1-I^{(n)}_l)I^{(n)}_k + \sum_{i=1}^{n-k}I^{(n)}_i\prod_{l=i+1}^{i+k-1}(1-I^{(n)}_l)I^{(n)}_{i+k}& {\rm for \ }2\leq k\leq n.\end{array}\right.$$ Now let ${\bf Y}$ be the independent sequence where $P(Y_k=1)=1/k$ for $k\geq 1$, so that $Y_k\stackrel{d}{=}I^{(n)}_{n-k+1}$ for $1\leq k\leq n$. Then, as $Y_n$ and $Y_{n-k+1}\prod_{l=n-k+2}^n(1-Y_l)$ for $2\leq k\leq n$ all vanish in probability as $n\uparrow \infty$, we conclude for each $k\geq 1$ that $\lim_{n\rightarrow \infty}C^{(n)}_k \stackrel{d}{=} Z_k$. Finally, as is well-known, the asymptotic cycle counts $\{\lim_n C^{(n)}_k\}_{k\geq 1}$ are distributed as independent Poisson random variables with respective means $1/k$ for $k\geq 1$ (cf. Kolchin~(1971)). Hence, ${\bf Z}\stackrel{d}{=}\prod_{k\geq 1}{\rm Po}(1/k)$. [Example~\ref{ex_3}, in section 2, gives a derivation in our Poisson process framework. See also Arratia-Barbour-Tavar\'e~(1992,\ 2003) for more discussion with Ewens sampling formula.] \end{example} \begin{example} \label{ex_2} \rm Consider the standard nonparametric problem of estimating the unknown distribution function $F$ from independent and identically distributed observations $\{X_i\}_{i\geq 1}$. A Bayesian may place on $F$ a Dirichlet prior with parameters $a\mu$ where $a>0$ and $\mu$ is a non-atomic probability measure. Let $Y_1=1$ and for $n\ge 2$ define $Y_n =1$ if $X_n$ is a new observation, that is, if $X_n \not \in \{X_1,\dots,X_{n-1}\}$, and $Y_n=0$ otherwise.
Then, it can be shown that ${\bf Y}$ is an independent Bernoulli sequence with $P(Y_n=1) = a/(a+n-1)$ for $n\geq 1$ and that $(\log n)^{-1}\sum_{i=1}^n Y_i \rightarrow a$ a.s. The latter result can be interpreted in terms of counts of strings in this Bernoulli sequence. See Korwar-Hollander~(1973) for more details, and also Ghosh-Ramamoorthi~(2003). \end{example} In the literature, to our knowledge, only the count vectors of the following class of underlying independent Bernoulli sequences have been investigated. Denote the independent Bernoulli sequence ${\bf Y}$ where $P(Y_n=1)=a/(a+b+n-1)$ for $n\geq 1$ as ${\bf Y}={\rm Bern}(a,b)$. The case $a=1$, $b=0$ is Example~\ref{ex_1} (see also Arratia-Tavar\'e~(1992)). The case $a>0$, $b=0$ is Example~\ref{ex_2}. For this case, Arratia-Barbour-Tavar\'e~(1992) observe that the associated ${\bf Z}\stackrel{d}{=}\prod_{k\geq 1}{\rm Po}(a/k)$ through connections with Ewens sampling formula. When $a=1$, $b>0$, Sethuraman-Sethuraman~(2004), employing factorial moments, show that, given the value $x_0$ of a Beta$(b,1)$ random variable, ${\bf Z}\stackrel{d}{=}\prod_{k\geq 1} {\rm Po}((1-x_0^k)/k)$. Such a distribution will be called a ``mixture of independent Poisson factors.'' When $a>0$ and $b> 0$, Holst~(2007) extends further, using P\'olya and Hoppe urns, and establishes that, given the value $x_0$ of a Beta$(b,a)$ random variable, ${\bf Z}\stackrel{d}{=}\prod_{k\geq 1} {\rm Po}(a(1-x_0^k)/k)$, again a mixture of independent Poisson factors. We note also that several interesting studies of $1$-strings preceded some of the above work, e.g. an unpublished manuscript of Diaconis, Chern-Hwang-Yeh~(2000), M\'ori~(2001), Joffe-Marchand-Perron-Popadiuk~(2004), and the references in these and the above papers. With this background, our main idea is that it is easier to study ${\bf Z}$ starting from an extrinsic ``conditional marked Poisson process model'' (CMPP) rather than directly from the Bernoulli sequence.
Namely, we prove that when the underlying Bernoulli sequence ${\bf Y}$ is generated through a CMPP model, the count vector ${\bf Z}$ is distributed as a mixture of independent Poisson factors in terms of model parameters (Theorem~\ref{cmpp_thm}). As remarked earlier, the Poisson process techniques used here are different from previous methods and allow quick derivations. Perhaps interestingly, the sequences ${\bf Y}$ found in our model include many dependent Bernoulli sequences (some explicit examples are in section~\ref{dep_sect}). However, the most general sequence studied until now, the independent sequence ${\rm Bern}(a,b)$ with $a>0$ and $b\geq 0$, can also be realized in our framework (Proposition \ref{holst_prop}), yielding a new proof of its count vector distribution. Our conditional marked Poisson process model also yields a new class of independent Bernoulli sequences which we call ${\rm Bern}_1(a,b)$. Denote the independent Bernoulli sequence ${\bf Y}$ where $P(Y_1=1)=1$, and $P(Y_n=1)=a/(a+b+n-2)$ for $n\geq 2$ as ${\bf Y}={\rm Bern}_1(a,b)$. The ${\rm Bern}_1(a,b)$ sequence prepends a $1$ to the ${\rm Bern}(a,b)$ sequence and picks up one more $d$-string contributed by any leading $0$'s in ${\rm Bern}(a,b)$. We show that the distribution of the count vector ${\bf Z}$ for ${\rm Bern}_1(a,b)$ for $a>0,b \geq 1$ is a mixture of independent Poisson factors (Proposition~\ref{new_prop}). This result fails for $0 \leq b<1$, and in this case even the distribution of $Z_1$, the count of $1$-strings in ${\rm Bern}_1(a,b)$, is not a mixture of Poisson distributions (Proposition~\ref{cont_prop}). However, the distribution of ${\bf Z}$ in ${\rm Bern}_1(a,b)$ can be expressed through a recurrence relation for all values of $b$ including $0\le b <1$ (Proposition~\ref{decomp_prop}). The plan of the article is to discuss the CMPP model, and prove the main theorem in section~\ref{cmpp_sect}.
In sections~\ref{holst_sect} and \ref{new_sect}, the main theorem is applied to independent sequences ${\rm Bern}(a,b)$ and ${\rm Bern}_1(a,b)$ respectively. Last, in section \ref{dep_sect}, two explicit dependent Bernoulli sequences, arising from the CMPP model, are given. \sect{CMPP models} \label{cmpp_sect} The following ``Poisson process'' derivation of the distribution of ${\bf Z}$ with respect to ${\rm Bern}(1,0)$ (cf. Example \ref{ex_1}) motivates subsequent development. \begin{example} \label{ex_3} \rm Consider the following standard way to generate a ${\rm Bern}(1,0)$ sequence. Let $\{\beta_i\}_{i\geq 1}$ be independent, identically distributed (iid) Uniform$[0,1]$ random variables, and define $Y_n = I(\beta_n {\rm\ is \ a \ record}),\ n\ge 1$. R\'enyi's theorem shows that $\{Y_n\}_{n\geq 1}$ are independent and $P(Y_n =1)=1/n$ for $n\geq 1$, that is, ${\bf Y} = {\rm Bern}(1,0)$. Let $\{X_i\}_{i\geq 1}$ be the record values among $\{\beta_i\}_{i \ge 1}$. Notice that the point process $N$ on $[0,1]$ defined by $N(A) = \sum_{i\geq 1}\delta_{X_i}(A)$ is a nonhomogeneous Poisson process on $[0,1]$ with intensity $1/(1-x)$ (cf. Resnick~(1994)). To each point $X_i$, we can associate a Geometric$(1-X_i)$ variable $L_i$ (a ``mark'') corresponding to the number of uniform random variables in $\{\beta_i\}_{i\geq 1}$ to the next record. Then, by thinning decompositions, $Z_k= \sum_{i\geq 1}I(L_i=k) = \sum_{i\geq 1} \delta_{X_i}([0,1])I(L_i=k)$ for $k\geq 1$ are independent Poisson variables with respective means $\int_0^1 (1-x)^{-1}x^{k-1}(1-x)dx = 1/k$ for $k\geq 1$. \end{example} In a sense, the thrust of the following CMPP model and our main result (Theorem \ref{cmpp_thm}) below is to reverse the procedure in Example~\ref{ex_3}. By beginning with a given Poisson process and spacing variables, which themselves determine the count vector ${\bf Z}$, we then see what associated Bernoulli sequence ${\bf Y}$ arises.
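The record-based construction in Example~\ref{ex_3} is also easy to check by direct simulation: counting adjacent records among $N$ iid uniforms produces $Z_1$ values that are approximately ${\rm Po}(1)$, with truncated mean $\sum_{n=1}^{N-1}\frac{1}{n(n+1)}=1-1/N$. An illustrative Monte Carlo sketch (not the paper's code):

```python
import random

rng = random.Random(3)

def z1_from_records(N):
    """Z_1 = number of adjacent record pairs among N iid uniforms."""
    best, prev_rec, z1 = -1.0, False, 0
    for _ in range(N):
        b = rng.random()
        is_rec = b > best         # the first draw is always a record
        if is_rec:
            best = b
            if prev_rec:
                z1 += 1
        prev_rec = is_rec
    return z1

trials, N = 2000, 1000
mean = sum(z1_from_records(N) for _ in range(trials)) / trials
print(mean)  # Z_1 is approximately Po(1), so the sample mean should be near 1
```

Checks of $Z_2,Z_3,\dots$ against ${\rm Po}(1/2),{\rm Po}(1/3),\dots$ run the same way, counting record pairs separated by the appropriate gap.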
Consider a sequence of random variables ${({\bf X,L})} =\{(X_i,L_i)\}_{i\geq 0}$ on $\mathbb R\times \mathbb N$, where $\mathbb N = \{1,2,\ldots\}$, and the point process $N$ on $\mathbb R$ given by $N(A) = \sum_{i\geq 1}\delta_{X_i}(A)$. Let also $g:\mathbb R\rightarrow [0,\infty)$ be a probability density function (pdf) and, for each $x\in \mathbb R$, let $r(x,\cdot),q(x,\cdot):\mathbb N\rightarrow [0,1]$ be probability mass functions and $\lambda_{x}: \mathbb R \rightarrow [0,\infty)$ be an intensity function. Then, we say ${({\bf X,L})}$ is the conditional marked Poisson process ${\mathcal M}(g,r,\lambda,q)$ if the following hold: \begin{itemize} \item [{1.}] $X_0$ has pdf $g$, \item [{ 2.}] conditional on $X_0=x_0$, $N$ is a nonhomogeneous Poisson process with intensity function $\lambda_{x_0}(\cdot)$, \item [{ 3.}] $P(L_0=k|{\bf X}) = r(X_0,k)$ for $k\geq 1$, and \item [{ 4.}] $P(L_n=k|{\bf X},L_0,L_1,\dots,L_{n-1}) = q(X_n,k)$ for $k,n\geq 1$. \end{itemize} Let $L_0^*=L_0$, and $L^*_r=L^*_{r-1}+L_r$ for $r\geq 1$. We now define a Bernoulli sequence ${\bf Y}$ based on $({\bf X,L})$ as follows: $Y_n=1$ if $n$ is of the form $L^*_r$ for some $r\geq 0$, and $Y_n=0$ otherwise. Another way to say this is \begin{equation} \label{bernoulli_seq} Y_n \ =\ \left\{\begin{array}{rl} 0 & {\rm \ when \ } n< L_0^*, {\rm \ or \ }L_r^* < n < L_{r+1}^* {\rm \ for \ } r\geq 0\\ 1 & {\rm \ when \ } n=L_r^* {\rm \ for \ }r\geq 0.\end{array}\right. \end{equation} Then, the count vector ${\bf Z}$ is given by \begin{equation} \label{count_rep} Z_k \ =\ \sum_{n\geq 1} I(L_n=k),\ \ \ \ {\rm for \ } k\geq 1. \end{equation} We note that the zeroth mark $L_0$ is not included in the above summation since any $Y_i$ with $i<L_0$ is part of an initial segment of zeros of the sequence not preceded by a $1$, and so does not contribute to any $d$-string, for $d\geq 1$. \begin{theorem} \label{cmpp_thm} Suppose $\int \lambda_{w}(x) q(x,k) dx<\infty$ for all $w\in \mathbb R$ and $k\geq 1$. 
Then, the count vector ${\bf Z}$ associated with sequence ${\bf Y}$, defined through CMPP $({\bf X,L})={\mathcal M}(g,r,\lambda,q)$, is distributed as follows. Given the value $X_0=x_0$, $${\bf Z}\ \stackrel{d}{=}\ \prod_{k\geq 1}{\rm Po}\bigg(\int \lambda_{x_0}(x) q(x,k) dx\bigg).$$ \end{theorem} \begin{remark} \label{cmpp_rmk}\rm The distribution of ${\bf Z}$ does not depend on the transition function $r$, consistent with the discussion of $L_0$ before the theorem. Also, for a given $k\geq 1$, $Z_k$ is infinite with positive probability exactly when there is a set $B$ such that $P(X_0\in B)>0$ and $\int \lambda_{w}(x) q(x,k) dx=\infty$ for $w\in B$. \end{remark} {\it Proof of Theorem~\ref{cmpp_thm}.} Recall the count vector representation (\ref{count_rep}). Conditional on $X_0=x_0$, the point process $M$ on $\mathbb R\times \mathbb N$ given by $M(A\times \{k\}) = \sum_{i\geq 1} \delta_{X_i}(A) I(L_i=k)$ is a Poisson process on $\mathbb R \times \mathbb N$ with intensity function $\lambda_{x_0}(x) q(x,k)$ (cf. Proposition 4.10.1~(b) Resnick~(1994)). Hence, it follows that, given $X_0=x_0$, the variables $M(\mathbb R\times \{k\})= \sum_{n\geq 1} I(L_n=k) = Z_k$ are independent Poisson variables with respective means $\int \lambda_{x_0}(x)q(x,k) dx$, for $k\geq 1$. \hfill$\blacksquare$\vskip .2cm \sect{The sequence ${\rm Bern}(a,b)$} \label{holst_sect} We now derive the count vector distribution for the sequence ${\rm Bern}(a,b)$ using a CMPP model. Denote, as usual, for $\alpha,\beta>0$, the Beta function \begin{equation} \label{beta} B(\alpha,\beta) \ =\ \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)},\end{equation} and let \begin{itemize} \item [1.] $\bar{g}(x) = x^{b-1}(1-x)^{a-1}/B(b,a)$ on $0<x<1$, the Beta$(b,a)$ pdf, \item [2.] $\bar{r}(x,k) = x^{k-1}(1-x)$ for $k\geq 1$, \item [3.] $\bar{\lambda}_{w}(x) = [{a}/{(1-x)}]I(w<x<1)$, and \item [4.] $\bar{q}(x,k) = x^{k-1}(1-x)$ for $k\geq 1$. 
\end{itemize} \begin{proposition} \label{holst_prop} The model $({\bf X,L})={\mathcal M}(\bar{g},\bar{r},\bar{\lambda},\bar{q})$ produces an independent\break Bernoulli sequence ${\bf Y}\stackrel{d}{=}{\rm Bern}(a,b)$ for $a>0$ and $b>0$ whose count vector ${\bf Z}$, conditional on the value $x_0$ of a ${\rm Beta}(b,a)$ random variable, is distributed as $\prod_{k\geq 1}{\rm Po}(a(1-x_0^k)/k)$. \end{proposition} \begin{remark} \label{holst_rmk}\rm As a corollary, by taking $b \downarrow 0$, we recover the count vector distribution for ${\rm Bern}(a,0)$ already considered in the literature as simply ${\bf Z}\stackrel{d}{=}\prod_{k\geq 1} {\rm Po}(a/k)$. Note that $(X_0,L_0) \rightarrow (0,1)$ in distribution as $b \downarrow 0$. The Poisson process in the above CMPP model with intensity $\bar{\lambda}_{w}(\cdot)$ can be generated in the following way. First, the point process formed by the record values from an iid sequence of Beta$(1,a)$ random variables is a Poisson process with intensity $a/(1-x)$, the Beta$(1,a)$ failure rate (cf. Resnick~(1994) Proposition 4.11.1~(b)). Next, we thin this process as follows. Let $X_0 \stackrel{d}{=} {\rm Beta}(b,a)$, and $\{X_i\}_{i\geq 1}$ be the record values from an iid sequence of ${\rm Beta}(1,a)$ random variables, subject to $X_i>X_0$ for $i\geq 1$. Then, conditional on $X_0=x_0$, the point process $\bar{N}$ defined by $\bar{N}(A) = \sum_{i\geq 1}\delta_{X_i}(A)$ is the desired Poisson process with intensity function $\bar{\lambda}_{x_0}(x)=[a/(1-x)]I(x_0<x<1)$. 
\end{remark} {\it Proof of Proposition~\ref{holst_prop}.} The second part on the count vector distribution follows from Theorem~\ref{cmpp_thm}, noting for $k\geq 1$, that \begin{equation} \label{indep_mean_comp} \int_0^1\bar{\lambda}_{x_0}(x)\bar{q}(x,k)dx \ = \ \int_{x_0}^1 ax^{k-1}dx \ = \ \frac{a(1-x_0^k)}{k}.\end{equation} For the first part, we observe that the distribution of $\{Y_i\}_{i\geq 1}$ given through (\ref{bernoulli_seq}) is uniquely determined by the probabilities of cylinder sets of the form \begin{eqnarray} \label{cylinder} &&E(k_0,\dots,k_n) = (L_0=k_0,L_1=k_1,\dots, L_n=k_n)\\\nonumber &=& \Big(Y_{t} = 1 {\rm \ for \ } t \in \{K_0,K_1,\dots,K_n\}, {\rm and \ } Y_t = 0 {\rm \ otherwise \ for \ } 1 \le t \le K_n\Big) \nonumber \end{eqnarray} where $k_0,k_1,\dots,k_n$ are positive integers and $K_0=k_0,K_1=K_0+k_1,\dots,K_n=K_{n-1}+k_n$ are their partial sums. If the probability of sets of the form $E\stackrel{def}{=}E(k_0,\dots,k_n)$ is a product of appropriate marginal probabilities then $\{Y_n, n\ge 1\}$ will be the Bernoulli sequence ${\rm Bern}(a,b)$. We will proceed to establish this. Let $A_n = \{0<x_0<x_1<\cdots<x_n<1\}$. Using the Beta variables representation in Remark~\ref{holst_rmk}, write \begin{eqnarray*} P(E) &=&\int_{A_n} \bar{g}(x_0)\bar{r}(x_0,k_0) \prod_{i= 1}^n\Big[ P(X_i\in dx_i|X_i>x_{i-1}) \bar{q}(x_i,k_i)\Big] dx_0. 
\end{eqnarray*} Since $P(X_i\in dx_i|X_i>x_{i-1}) = a(1-x_i)^{a-1}/(1-x_{i-1})^a\,dx_i$ for $1\leq i\leq n$, we have further that the last line equals \begin{eqnarray} \label{calc} &&\frac{a^n}{B(b,a)} \int_{A_n} x_0^{b+k_0-2}\prod_{i= 1}^n x_i^{k_i-1} (1-x_n)^a dx_0\dots dx_n \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = \ \frac{B(b+K_n-1,a+1)}{B(b,a)}\cdot\frac{a^n}{\prod_{s=0}^{n-1}(b+K_s-1)}\nonumber \end{eqnarray} and, noting (\ref{beta}) and $\alpha\Gamma(\alpha) = \Gamma(\alpha+1)$, that (\ref{calc}) becomes \begin{eqnarray*} \frac{a\prod_{r=0}^{K_n-2}(b+r)}{\prod_{r=0}^{K_n-1}(a+b+r)}\cdot \frac{a^{n}}{\prod_{s=0}^{n-1}(b+K_s-1)} & =& \ \prod_{i=1}^{K_n}\frac{b+i-1}{a+b+i-1} \prod_{r=0}^n \frac{a}{b+K_r -1}\end{eqnarray*} which is exactly $\prod_{i=1}^{K_n} P(Y_i =0) \prod_{r=0}^n [P(Y_{K_r}=1)/P(Y_{K_r}=0)]$ with ${\bf Y}$ specified as ${\rm Bern}(a,b)$. \hfill$\blacksquare$\vskip .2cm \sect{The sequence ${\rm Bern}_1(a,b)$} \label{new_sect} We will derive the count vector distribution for the sequence ${\rm Bern}_1(a,b)$, and show a dichotomy depending on whether $b\geq 1$ or $b<1$. We first consider the case where $a>0$ and $b>1$. Define \begin{itemize} \item [1.] $g^*(x) = x^{b-2}(1-x)^{a}/B(b-1,a+1)$ on $0<x<1$, \mbox{the Beta$(b-1,a+1)$ pdf,} \item [2.] $r^*(x,1) = 1$, \item [3.] $\lambda^*_{w}(x) = [{a}/{(1-x)}]I(w<x<1)$, and \item [4.] $q^*(x,k) = x^{k-1}(1-x)$ for $k\geq 1$. \end{itemize} \begin{proposition} \label{new_prop} The CMPP model $({\bf X,L})={\mathcal M}(g^*,r^*,\lambda^*,q^*)$ produces an independent Bernoulli sequence ${\bf Y}\stackrel{d}{=}{\rm Bern}_1(a,b)$ for $a>0$ and $b>1$, and, conditional on a Beta$(b-1,a+1)$ variable $X_0=x_0$, the distribution of its count vector ${\bf Z}$ is $\prod_{k\geq 1}{\rm Po}(a(1-x_0^k)/k)$. 
\end{proposition} \begin{remark}\label{new_rmk} \rm As a corollary, by taking $b\downarrow 1$, we find the count vector distribution for ${\rm Bern}_1(a,1)$ to be simply ${\bf Z}\stackrel{d}{=}\prod_{k\geq 1} {\rm Po}(a/k)$. [In fact, ${\rm Bern}_1(a,1)$ coincides with the sequence ${\rm Bern}(a,0)$ mentioned earlier in Remark \ref{holst_rmk}.] Also, we note the Poisson process in the above CMPP model with intensity $\lambda^*$ can be generated, as in Proposition~\ref{holst_prop}, by taking $X_0 \stackrel{d}{=} {\rm Beta}(b-1,a+1)$, and $\{X_i\}_{i\geq 1}$ as the sequence of records from an iid sequence of ${\rm Beta}(1,a)$ random variables, subject to the condition $X_1 > X_0$. \end{remark} {\it Proof of Proposition~\ref{new_prop}.} We need only establish the distribution of ${\bf Y}$, as the last statement follows from Theorem~\ref{cmpp_thm} and the computation (\ref{indep_mean_comp}). The calculations are similar to the proof of Proposition~\ref{holst_prop}. Let $k_0=1,k_1,k_2,\dots,k_n$ be positive integers, and $K_0=k_0=1,K_1=K_0+k_1,\dots,K_n=K_{n-1}+k_n$ be their partial sums. Recall the cylinder set defined in (\ref{cylinder}) and let \begin{eqnarray*} E_1 \ \stackrel{def}{=}\ E(1,k_1,\ldots,k_n) \ = \ (L_0=1,L_1=k_1,\dots, L_n=k_n),\end{eqnarray*} and set $A_n = \{0<x_0<x_1<\cdots <x_n<1\}$. Write, using the construction in Remark~\ref{new_rmk}, that \begin{eqnarray*} P(E_1) &=&\frac{1}{B(b-1,a+1)} \int_{A_n} \big[x_0^{b-2} (1-x_0)^{a}\big]\cdot 1 \\ &&\ \ \ \ \ \ \ \times\prod_{i=1}^n \big[a(1-x_i)^{a-1}/(1-x_{i-1})^a\big]\big[x_i^{k_i-1}(1-x_i)\big] dx_0\dots dx_n\\ &=&\frac{a^n}{B(b-1,a+1)} \int_{A_n} x_0^{b-2}\prod_{i=1}^n x_i^{k_i-1} (1-x_n)^a dx_0\dots dx_n. 
\end{eqnarray*} Then, with (\ref{beta}) and $\alpha\Gamma(\alpha) = \Gamma(\alpha +1)$, the last line equals \begin{eqnarray*} &&\frac{B(b+K_n-2, a+1)}{B(b-1,a+1)}\cdot \frac{a^n}{(b-1)\prod_{s=1}^{n-1}(b+K_s-2)}\\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ = \ \frac{\prod_{r=0}^{K_n-2}(b-1+r)}{\prod_{r=0}^{K_n-2}(a+b+r)}\cdot \frac{a^{n}}{(b-1)\prod_{s=1}^{n-1}(b+K_s-2)}\\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ = \ \prod_{i=1}^{K_n-1}\frac{b+i-1}{a+b+i-1} \prod_{r=1}^n \frac{a}{b+K_r -2}\end{eqnarray*} which is exactly $P(Y_1=1)\prod_{i=2}^{K_n} P(Y_i =0) \prod_{r=1}^n [ P(Y_{K_r}=1)/P(Y_{K_r}=0)]$ with ${\bf Y}$ specified as ${\rm Bern}_1(a,b)$. \hfill$\blacksquare$\vskip .2cm We now give the distribution of the count vector under ${\rm Bern}_1(a,b)$ for all $a>0$ and $b \ge 0$ by conditioning on the location of the second $1$ in the sequence ${\bf Y}$. Denote by ${\bf Z}(a,b)$ the count vector with respect to ${\rm Bern}_1(a,b)$ for $a>0$ and $b\geq 0$. Let ${\bf W}_n$ be the sequence whose $n$th coordinate is $1$ and all other coordinates are zero, for $n\geq 1$. Let also \[p_n \ = \ \left\{\begin{array}{rl} \frac{a}{a+b}& \ {\rm for \ }n=2\\ \frac{a}{a+b+n-2}\prod_{r=0}^{n-3}\frac{b+r}{a+b+r}& \ {\rm for \ } n\geq 3\end{array}\right.\] be the probability that the second $1$ in ${\rm Bern}_1(a,b)$ occurs at time $n\geq 2$, and note $\sum_{n\geq 2} p_n = 1$. \begin{proposition} \label{decomp_prop} For $a>0$ and $b\geq 0$, we have \begin{equation} \label{general_b} \mathcal{L}\left( {\bf Z}(a,b)\right) \ =\ \sum_{n\geq 2} p_n \,\mathcal{L}\Big({\bf Z}(a,b+n-1) + {\bf W}_{n-1}\Big), \end{equation} and ${\bf Z}(a,b+n-1)$, conditional on the value $x_0$ of a ${\rm Beta}(b+n-2,a+1)$ random variable, is distributed as $\prod_{k\geq 1}{\rm Po}(a(1-x_0^k)/k)$, for $b>0$ and $n\geq 2$. \end{proposition} \begin{remark} \rm The special case $b=0$ is interesting. The sequence ${\rm Bern}_1(a,0)$ is the independent sequence where $Y_1=Y_2=1$ and $P(Y_n=1) = a/(a+n-2)$ for $n\geq 3$. 
That is, starting from time $n=2$, the sequence is ${\rm Bern}_1(a,1)={\rm Bern}(a,0)$. Hence, by Proposition~\ref{holst_prop} (see Remark \ref{holst_rmk}), ${\bf Z}(a,0)$ is distributed as $\hat{{\bf Z}}+{\bf W}_1$ where $\hat{{\bf Z}}\stackrel{d}{=} \prod_{k\geq 1}{\rm Po}(a/k)$ is the count vector for ${\rm Bern}(a,0)$. This agrees with (\ref{general_b}), since $p_2=1$ (when $b=0$) and ${\bf Z}(a,1)= \hat{\bf Z}$. \end{remark} {\it Proof of Proposition~\ref{decomp_prop}.} The distribution of ${\bf Z}(a,b)$ follows by conditioning on the first time that $Y_n =1$ for $n\ge 2$. The distributions of ${\bf Z}(a,b+n-1)$ are completely specified by Proposition~\ref{new_prop} and Remark~\ref{new_rmk}, since $b+n-1\geq 1$ for $n\geq 2$. \hfill$\blacksquare$\vskip .2cm From (\ref{general_b}), it is not clear whether the distribution of ${\bf Z}(a,b)$ is a mixture of product Poisson factors or not for $0\le b < 1$. We show now that even the first component $Z_1(a,b)$ is not a mixture of Poissons when $0\le b < 1$. \begin{proposition} \label{cont_prop} The distribution of $Z_1 \equiv Z_1(a,b)$, the count of\ $1$-strings in the\break ${\rm Bern}_1(a,b)$ sequence, is not a mixture of Poissons when $0\leq b<1$, that is, there is no measure $\mu$ on $[0,\infty)$ such that \begin{equation} \label{mixture_cond} E\Big[\exp\{tZ_1\}\Big] \ = \ \int_{[0,\infty)} e^{v(e^t-1)}d\mu(v). \end{equation} \end{proposition} {\it Proof.} It is well known that when (\ref{mixture_cond}) holds, the variable $Z_1$ is over-dispersed, that is $O(Z_1) \stackrel{def}{=} \textrm{Var}(Z_1) - E(Z_1) \ge 0$. The proof now follows by the expression for $O(Z_1)$ in \rr{eq:overdisp} below. Let ${\bf Y} = {\rm Bern}_1(a,b)$. Then, \begin{equation} \label{decomp_Z} Z_1 \ =\ Y_2 + \hat{Z}_1 = Y_2 + Y_2Y_3 + Z_1^{+} \end{equation} where $\hat{Z}_1 =\sum_{i\ge 2} Y_iY_{i+1}$ and $Z_1^{+} = \sum_{i\ge 3} Y_i Y_{i+1}$, and the latter is independent of $Y_2$. 
Furthermore, $\hat{Z}_1$ and $Z_1^{+}$ are the counts of strings of order $1$ from ${\rm Bern}(a,b)$ and ${\rm Bern}(a,b+1)$, respectively, and their distributions are known from Proposition~\ref{holst_prop}. Hence, by straightforward calculations, \[E(\hat{Z}_1) = \frac{a^2}{(a+b)},\, E(Z_1^{+}) = \frac{a^2}{(a+b+1)},\, E(\hat{Z}_1^2) = \frac{a^3(a+1)}{(a+b)(a+b+1)} + \frac{a^2}{(a+b)}.\] From the identities in (\ref{decomp_Z}), we have \[E(Z_1) = \frac{a(a+1)}{(a+b)},\; E(Z_1^2)= \frac{a(a+1)}{(a+b)} + \frac{a^2(a+1)(a+2)}{(a+b)(a+b+1)}.\] This leads to \begin{equation} \label{eq:overdisp} O(Z_1) \ =\ \frac{a^2(a+1)(b-1)}{(a+b)^2(a+b+1)} \end{equation} which is negative for $b<1$ and positive for $b>1$. \hfill$\blacksquare$\vskip .2cm \sect{Some dependent Bernoulli sequences} \label{dep_sect} We give two examples of dependent Bernoulli sequences, arising from CMPP models with simple structures, whose count vector distributions are mixtures of independent Poisson factors. \vskip .2cm {\bf First Sequence.} For $a>0$ and $b> 0$, denote by $P_{a,b}$ the probability distribution of the CMPP ${\mathcal M}(\bar{g},\bar{r},\bar{\lambda},\bar{q})$ described in Proposition~\ref{holst_prop}, which gives rise to the Bernoulli sequence ${\rm Bern}(a,b)$. Now let $r^+(x,k) = kx^{k-1}(1-x)^2$ for $k\geq 1$. Consider the associated CMPP model ${\mathcal M}(\bar{g},r^+,\bar{\lambda},\bar{q})$ with $\bar{g},\bar{\lambda},\bar{q}$ the same as in Proposition~\ref{holst_prop}. Denote the probability measure under this model by $P^+=P^+_{a,b}$. Note that $r^+(x,k) =k[\bar{r}(x,k) - \bar{r}(x,k+1)]$ where $\bar{r}(x,k)=x^{k-1}(1-x)$. Recall the cylinder set $E\stackrel{def}{=}E(k_0,\ldots, k_n)$ from (\ref{cylinder}), where $k_0,k_1,\dots,k_n$ are positive integers and $K_0,K_1,\dots,K_n$ are their partial sums. It is easy to see that \begin{eqnarray*} P^+(E) &=& k_0\Big[P_{a,b} \Big(E(k_0,\dots,k_n)\Big) - P_{a,b}\Big(E(k_0+1,k_1,\dots,k_n)\Big)\Big]. 
\end{eqnarray*} From this expression, the distribution of ${\bf Y}$ can be recovered, and shown not to be that of independent Bernoulli variables. For instance, $$ P^+(Y_1=1) \ =\ P_{a,b}(Y_1=1) - P_{a,b}(Y_1=0,Y_2=1) \ =\ \frac{a(a+1)}{(a+b)(a+b+1)}, $$ and analogously \[P^+(Y_2=1) \ = \ \frac{a^2(a+2) + 2ba(a+1)}{(a+b)(a+b+1)(a+b+2)}. \] Thus \[ P^+(Y_1=1)P^+(Y_2=1) \ =\ \frac{a^2(a+1)(a^2+2a+2ba+2b)}{(a+b)^2(a+b+1)^2(a+b+2)}, \] which does not match \[P^+(Y_1=1,Y_2=1) \ = \ \frac{a^2 (a+2)}{(a+b)(a+b+1)(a+b+2)}\] for $a,b>0$. Finally, by Remark~\ref{cmpp_rmk}, we note the count vectors under $P_{a,b}$ and $P^+$ have the same distribution, and by Proposition~\ref{holst_prop}, conditional on the value $x_0$ of a ${\rm Beta}(b,a)$ variable, the count vectors are distributed as $\prod_{k\geq 1}{\rm Po}(a(1-x^k_0)/k)$. \vskip .2cm {\bf Second Sequence.} Consider $P_{1,0}$, the measure for the CMPP model discussed in Example~\ref{ex_3} and Remark~\ref{holst_rmk}, with respect to the Bernoulli sequence ${\rm Bern}(1,0)$, where $(X_0, L_0) \equiv (0,1)$, $\{X_i\}_{i\geq 1}$ are the records from an iid Uniform$[0,1]$ sequence, and the $L_i$ are Geometric$(1-X_i)$ for $i\geq 1$. Let $P'$ stand for the measure under the ``switched'' CMPP model where $(X_1,L_1)$ and $(X_2,L_2)$ are interchanged. The probabilities of ${\bf Y}$ on cylinder sets (cf.\ (\ref{cylinder})), under $P'$, are given by \begin{eqnarray*} P'\Big(E(1,k_1,\ldots,k_n)\Big) &=& P'(L_1=k_1, \ldots, L_n=k_n)\\ &=& P_{1,0}(L_2=k_1, L_1=k_2,\ {\rm and \ }L_i=k_i {\rm \ for \ }3\leq i\leq n)\end{eqnarray*} for positive integers $k_0=1,k_1,\ldots, k_n$, with $K_0=1,K_1=K_0+k_1,\ldots, K_n=K_{n-1} + k_n$ as their partial sums. Under both models $P_{1,0}$ and $P'$, since only two terms ($L_1,L_2$) exchange places, the associated count vectors are the same, and by Proposition~\ref{holst_prop} distributed as $\prod_{k\geq 1}{\rm Po}(1/k)$. We now show that $\{Y_i\}_{i\geq 1}$ is not an independent sequence under $P'$. 
From the calculation in (\ref{calc}) with $(X_0,L_0)\equiv (0,1)$, $Y_1\equiv 1$ and $\bar{r}(x,1)=1$ (take $b\downarrow 0$), and $a=1$, we can write \begin{eqnarray*} P'(Y_2=1) &=& P_{1,0}(L_2=1) \ = \ \sum_{k\geq 1} P_{1,0}(L_1=k,L_2=1)\\ &=& \sum_{k\geq 1}\int_{0<x_1<x_2<1} x_1^{k-1} (1-x_2) dx_1 dx_2 \ = \ 1/4. \end{eqnarray*} Also, \begin{eqnarray*} P'(Y_2=1,Y_3=1)&=& P_{1,0}(L_1=1,L_2=1)\ = \ P_{1,0}(Y_2=1,Y_3=1) \ = \ 1/6,\\ P'(Y_2=0,Y_3=1) &=& P_{1,0}(L_2=2)\\ &=& \sum_{k\geq 1} \int_{0<x_1<x_2<1} x_1^{k-1} x_2 (1-x_2) dx_1 dx_2 \ =\ 5/36, \end{eqnarray*} which give $P'(Y_3=1)= 11/36$. However, $P'(Y_2=1) P'(Y_3=1) = 11/144 \ \neq \ 1/6 = P'(Y_2=1,Y_3=1)$. \bibliographystyle{plain}
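After summing the geometric series $\sum_{k\geq 1} x_1^{k-1} = 1/(1-x_1)$ and integrating out $x_1$ (which yields $-\ln(1-x_2)$), the two double integrals above reduce, under the substitution $u=1-x_2$, to $\int_0^1 u(-\ln u)\,du = 1/4$ and $\int_0^1 (1-u)u(-\ln u)\,du = 1/4 - 1/9 = 5/36$. A quick numerical check (the midpoint-rule discretization is our own choice):

```python
import math

def midpoint_integral(f, n=200_000):
    """Midpoint-rule approximation of the integral of f over (0, 1)."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

# P'(Y_2 = 1):          int_0^1 u (-ln u) du        = 1/4
i1 = midpoint_integral(lambda u: -u * math.log(u))
# P'(Y_2 = 0, Y_3 = 1): int_0^1 (1-u) u (-ln u) du  = 5/36
i2 = midpoint_integral(lambda u: -(1 - u) * u * math.log(u))
```

Both values agree with the exact constants $1/4$ and $5/36$ used in the display above.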
% https://arxiv.org/abs/0802.0316
\title{Fourier series and approximation on hexagonal and triangular domains}
\begin{abstract}
Several problems on Fourier series and trigonometric approximation on a hexagon and a triangle are studied. The results include Abel and Ces\`aro summability of Fourier series, degree of approximation and best approximation by trigonometric functions, both direct and inverse theorems. One of the objectives of this study is to demonstrate that Fourier series on spectral sets enjoy a rich structure that allows an extensive theory of Fourier expansions and approximation.
\end{abstract}
\section{Introduction} \setcounter{equation}{0} A theorem of Fuglede \cite{F} states that a set tiles ${\mathbb R}^n$ by lattice translation if and only if it has an orthonormal basis of exponentials $e^{i {\langle} \a, x\ra}$ with $\a$ in the dual lattice. Such a set is called a spectral set. The theorem suggests that one can study Fourier series and approximation on a spectral set. For the simplest spectral sets, cubes in ${\mathbb R}^d$, we are in the familiar territory of classical (multiple) Fourier series. For other spectral sets, such a study has mostly remained at the level of structural $L^2$ properties, and it seems to have attracted little attention among researchers in approximation theory. Besides the usual rectangular domain, the simplest spectral set is a regular hexagon on the plane, which has been studied in connection with Fourier analysis in \cite{AB, Sun}. Recently in \cite{LSX}, discrete Fourier analysis on lattices was developed and the case of the hexagonal lattice was studied in detail; in particular, Lagrange interpolation and cubature formulas by trigonometric functions on a regular hexagon and on an equilateral triangle were studied. Here we follow the setup in \cite{LSX} to study the summability of Fourier series and approximation. The purpose of this paper is to show, using the hexagonal domain as an example, that Fourier series on a spectral set have a rich structure that permits an extensive theory of Fourier expansions and approximation. It is our hope that this work may stimulate further studies in this area. It should be mentioned that, in response to a problem on the construction and analysis of hexagonal optical elements, orthogonal polynomials on the hexagon were studied in \cite{Dunkl}, in which a method of generating an orthogonal polynomial basis was developed. We study orthogonal expansion and approximation by trigonometric functions on the hexagonal domain. 
In comparison to the usual Fourier series for periodic functions in both variables on the plane, the periodicity of the Fourier series on a hexagonal domain is defined in terms of the hexagonal lattice, which has the symmetry of the reflection group ${\mathcal A}_2$ with reflections along the edges of the hexagon. The functions that we consider are periodic under translations of the hexagonal lattice. It turns out (\cite{LSX, Sun}) that it is convenient to use homogeneous coordinates that satisfy $t_1 + t_2 +t_3 =0$ in ${\mathbb R}^3$ rather than coordinates in ${\mathbb R}^2$. Using homogeneous coordinates allows us to treat the three directions equally and it reveals symmetries in various formulas that are not obvious in ${\mathbb R}^2$ coordinates. As we shall show below, many results and formulas closely resemble those of Fourier analysis and approximation for $2\pi$-periodic functions. Fourier analysis on the hexagonal domain can be approached from several directions. Orthogonal exponentials on the hexagonal domain are related to trigonometric functions on an equilateral triangle, upon considering symmetric exponentials on the hexagonal domain. These trigonometric functions arise from solutions of the Laplacian on the equilateral triangle, as seen in \cite{K} and developed extensively in \cite{Sun, LS}, and they are closely related to orthogonal algebraic polynomials on the domain bounded by Steiner's hypercycloid \cite{K, LSX}, much as Chebyshev polynomials arise from exponentials. In fact, the trigonometric functions that arise from the exponentials by symmetry are called generalized cosine functions in \cite{LSX,Sun}, and there are also generalized sine functions that are anti-symmetric. Our results on the hexagonal domain can be easily translated to results in terms of generalized cosines. 
Our results on summability can also be translated to orthogonal expansions of algebraic polynomials on the domain bounded by the hypercycloid, but the same cannot be said of our results on best approximation. In fact, just as in the case of best approximation by polynomials on the interval, the approximation should be better at the boundary for polynomial approximation on the hypercycloid domain. For example, our Bernstein type inequality (Theorem 4.8) can be translated into a Markov type inequality for algebraic polynomials. A modulus of smoothness will need to be defined to take into account the boundary effect of the hypercycloid domain, which is not trivial and will not be considered in this paper. Some of our results, especially those on the best approximation, can be extended to higher dimensions. We choose to stay with the hexagonal domain to keep the paper uniform and to avoid overwhelming notation. The paper is organized as follows. Definitions and background materials will be given in Section 2. In Section 3 we study the Abel summability, also known as the Poisson integral, and the Ces\`aro $(C,\delta)$ means of the Fourier series on the hexagon, where several compact formulas for the kernel functions will be deduced. One interesting result shows that the $(C,2)$ means are nonnegative, akin to the Fej\'er means for the classical Fourier series. In Section 4 we study best approximation by trigonometric functions on the hexagonal domain and establish both direct and inverse theorems in terms of a modulus of smoothness. \section{Fourier series on the regular hexagon} \setcounter{equation}{0} Below we briefly sum up what we need from Fourier analysis on the hexagonal domain. We refer to \cite{LSX} for further details. 
The hexagonal lattice is given by $H {\mathbb Z}^2$, where the matrix $H$ and the spectral set $\Omega_H$ are given by $$ H=\begin{pmatrix} \sqrt{3} & 0\\ -1 & 2\end{pmatrix}, \qquad \Omega_H =\left\{(x_1,x_2):\ -1\leq x_2, \tfrac{\sqrt{3}}{2}x_1 \pm \tfrac{1}{2} x_2 < 1 \right\}, $$ respectively. The reason that $\Omega_H$ contains only half of its boundary is given in \cite{LSX}. We will use homogeneous coordinates $(t_1,t_2,t_3)$ that satisfy $t_1 + t_2 +t_3 =0$, for which the hexagonal domain $\Omega_H$ becomes \begin{align*} \Omega=\left\{(t_1,t_2,t_3)\in {\mathbb R}^3:\ -1\le t_1,t_2,-t_3<1;\, t_1+t_2+t_3=0 \right\}, \end{align*} which is the intersection of the plane $t_1+t_2+t_3=0$ with the cube $[-1,1]^3$, as seen in Figure 1. \begin{figure}[h] \centerline{\includegraphics[width=0.4\textwidth]{hexagon} \qquad\quad \includegraphics[width=0.375\textwidth]{Hex3d}} \caption{Regular hexagon in ${\mathbb R}^2$ and in ${\mathbb R}^3$.} \end{figure} The relation between $(x_1,x_2) \in \Omega_H$ and ${\mathbf t} \in \Omega$ is given by \begin{align}\label{coordinates} t_1= -\frac{x_2}{2} +\frac{\sqrt{3}x_1}{2},\quad t_2 = x_2, \quad t_3 = -\frac{x_2}{2} -\frac{\sqrt{3}x_1}{2}. \end{align} For convenience, we adopt the convention of using bold letters, such as ${\mathbf t}$, to denote points in the space $$ {\mathbb R}_H^3 : = \{{\mathbf t} = (t_1,t_2,t_3)\in {\mathbb R}^3: t_1+t_2 +t_3 =0\}. $$ In other words, bold letters such as ${\mathbf t}$ stand for homogeneous coordinates. If we treat $x \in {\mathbb R}^2$ and ${\mathbf t} \in {\mathbb R}_H^3$ as column vectors, then it follows from \eqref{coordinates} that \begin{equation} \label{x-t-x} x = \tfrac13 H (t_1 - t_3, t_2-t_3)^{\mathsf {tr}} = \tfrac13 H (2 t_1 + t_2, t_1+2t_2)^{\mathsf {tr}} \end{equation} upon using the fact that $t_1 + t_2 + t_3 =0$. Computing the Jacobian of the change of variables shows that $d x = \frac{2 \sqrt{3}} {3} d t_1 dt_2$. 
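The correspondence \eqref{coordinates} and its inverse \eqref{x-t-x} can be checked directly; below is a small sketch (the function names and the sample point are ours) verifying $t_1+t_2+t_3=0$ and the round trip $x \mapsto {\mathbf t} \mapsto x$:

```python
import math

def to_homogeneous(x1, x2):
    """Change of variables: t1 = -x2/2 + sqrt(3) x1 / 2, t2 = x2,
    t3 = -x2/2 - sqrt(3) x1 / 2."""
    t1 = -x2 / 2 + math.sqrt(3) * x1 / 2
    t2 = x2
    t3 = -x2 / 2 - math.sqrt(3) * x1 / 2
    return t1, t2, t3

def from_homogeneous(t1, t2, t3):
    """Inverse map x = (1/3) H (t1 - t3, t2 - t3)^tr with H = [[sqrt(3), 0], [-1, 2]]."""
    u, v = t1 - t3, t2 - t3
    return math.sqrt(3) * u / 3, (-u + 2 * v) / 3

x1, x2 = 0.3, -0.7
t = to_homogeneous(x1, x2)
y1, y2 = from_homogeneous(*t)
# sum(t) vanishes and (y1, y2) recovers (x1, x2) up to rounding
```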
A function $f$ is called {\it periodic} with respect to the hexagonal lattice if $$ f(x) = f (x + H k), \qquad k \in {\mathbb Z}^2. $$ We call such a function $H$-periodic. In homogeneous coordinates, $x\equiv y \pmod{H}$ becomes, as easily seen using \eqref{x-t-x}, ${\mathbf t} \equiv {\mathbf s} \mod 3$, where we define $$ {\mathbf t} \equiv {\mathbf s} \mod 3 \quad \Longleftrightarrow \quad t_1-s_1 \equiv t_2-s_2 \equiv t_3-s_3 \mod 3. $$ Thus, a function $f ({\mathbf t})$ is $H$-periodic if $f ({\mathbf t}) = f({\mathbf t} + {\mathbf j})$ whenever ${\mathbf j} \equiv 0 \mod 3$. If $f$ is $H$-periodic, then it can be verified directly that \begin{equation} \label{IntPeriod} \int_{\Omega} f({\mathbf t} + {\mathbf s}) d{\mathbf t} = \int_{\Omega} f({\mathbf t}) d{\mathbf t}, \qquad {\mathbf s} \in {\mathbb R}_H^3. \end{equation} We define the inner product on the hexagonal domain by \begin{align*} \langle f, g\rangle_H := \frac{1}{|\Omega_H|} \int_{\Omega_H} f(x_1,x_2) \overline{g(x_1,x_2)} d x_1 dx_2 = \frac{1}{|\Omega|} \int_{\Omega} f({\mathbf t}) \overline{g({\mathbf t})} d {\mathbf t}, \end{align*} where $|\Omega|$ denotes the area of $\Omega$. Furthermore, let ${\mathbb Z}_H^3: = {\mathbb Z}^3 \cap {\mathbb R}_H^3$. We define $$ \phi_{\mathbf j}({\mathbf t}) : = \mathrm{e}^{\frac{2 \pi i}{3}{\mathbf j} \cdot {\mathbf t}}, \qquad {\mathbf j} \in {\mathbb Z}_H^3, \quad {\mathbf t} \in {\mathbb R}_H^3. $$ These exponential functions are orthogonal with respect to ${\langle} f,g\ra_H$. \begin{thm} \cite{F}. For ${\mathbf k},{\mathbf j} \in {\mathbb Z}_H^3$, ${\langle}\phi_{\mathbf k}, \phi_{\mathbf j} \ra_H = \delta_{{\mathbf k},{\mathbf j}}$. Moreover, the set $\{\phi_{\mathbf j}: {\mathbf j} \in {\mathbb Z}_H^3\}$ is an orthonormal basis of $L^2(\Omega)$. \end{thm} It is easy to see that $\phi_{\mathbf j}$ are $H$-periodic functions. We consider a special collection of them indexed by ${\mathbf j}$ inside a hexagon. 
Define $$ {\mathbb H}_n : = \{{\mathbf j} \in {\mathbb Z}_H^3: -n \le j_1,j_2,j_3 \le n\} \quad\hbox{and} \quad {\mathbb J}_n: = {\mathbb H}_n \setminus {\mathbb H}_{n-1}. $$ Notice that ${\mathbf j} \in {\mathbb H}_n$ satisfies $j_1 + j_2 + j_3 =0$, so that ${\mathbb H}_n$ contains all integer points inside the hexagon $n\, \overline{\Omega}$, whereas ${\mathbb J}_n$ contains exactly those integer points in ${\mathbb Z}_H^3$ that are on the boundary of $n\, \Omega$, the $n$-th hexagonal line. We then define \footnote{Here our notation differs from those in \cite{LSX}. Our ${\mathbb H}_n$ and ${\mathcal H}_n$ are in fact ${\mathbb H}_n^*$ and ${\mathcal H}_n^*$ there.} \begin{equation} \label{Hn-space} {\mathcal H}_n: = \operatorname{span} \left \{ \phi_{\mathbf j}: {\mathbf j} \in {\mathbb H}_n \right \}. \end{equation} It follows that $\dim {\mathcal H}_n = 3 n^2 +3n+1$. As we shall see below, the class ${\mathcal H}_n$ shares many properties of the class of trigonometric polynomials of one variable. As a result, we shall call functions in ${\mathcal H}_n$ trigonometric polynomials over $\Omega$. We will study the best approximation by trigonometric polynomials in ${\mathcal H}_n$ in Section 4. By Theorem 2.1 and standard Hilbert space theory, an $H$-periodic function $f\in L^2(\Omega)$ can be expanded into a Fourier series \begin{equation}\label{FourierSeries} f = \sum_{{\mathbf j} \in {\mathbb Z}_H^3} \widehat f_{\mathbf j} \phi_{\mathbf j} \quad \hbox{in $L^2(\Omega)$}, \quad \hbox{where} \quad \widehat f_{\mathbf j} := \frac{1}{|\Omega|} \int_{\Omega} f({\mathbf s}) \mathrm{e}^{- \frac{2\pi i}{3} {\mathbf j} \cdot {\mathbf s}} d{\mathbf s}. 
\end{equation} We consider the $n$-th hexagonal Fourier partial sum defined by \begin{equation} \label{PartialSum} S_n f({\mathbf t}) := \sum_{{\mathbf j} \in {\mathbb H}_n} \widehat f_{\mathbf j} \phi_{\mathbf j}({\mathbf t}) = \frac{1}{|\Omega|} \int_{\Omega} f({\mathbf t} - {\mathbf s}) D_n({\mathbf s}) d{\mathbf s}, \end{equation} where the second equality follows from \eqref{IntPeriod} with the kernel $D_n$ defined by $$ D_n({\mathbf t}) : = \sum_{{\mathbf j} \in {\mathbb H}_n} \phi_{\mathbf j}({\mathbf t}). $$ The kernel is an analogue of the Dirichlet kernel for the ordinary Fourier series. It enjoys a compact formula, given in \cite{Sun} (see also \cite{LSX}): \begin{equation}\label{D-kernel} D_n({\mathbf t} ) = \Theta_n({\mathbf t}) - \Theta_{n-1}({\mathbf t}), \end{equation} where \begin{equation}\label{Theta} \Theta_n({\mathbf t}) = \frac{\sin \frac{(n+1) (t_1-t_2)\pi}{3}\sin \frac{(n+1) (t_2-t_3)\pi}{3} \sin \frac{(n+1) (t_3-t_1)\pi}{3}} {\sin \frac{(t_1-t_2)\pi}{3}\sin \frac{(t_2-t_3)\pi}{3}\sin \frac{(t_3-t_1)\pi}{3}}. \end{equation} This last formula is our starting point for studying summability of Fourier series in the following section. We will also need to use the Poisson summation formula associated with the hexagonal lattice. This formula takes the form (see, for example, \cite{AB, H, LSX}) \begin{equation} \label{Poisson-pre} \sum_{k \in {\mathbb Z}^2} f(x + H k) = \frac{1}{\det (H)} \sum_{k \in {\mathbb Z}^2} \widehat f ( H^{-{\mathsf {tr}}} k) e^{2 \pi i k^{\mathsf {tr}} H^{-1} x}, \end{equation} where for $f\in L^1({\mathbb R}^2)$ the Fourier transform $\widehat f$ and its inverse are defined by $$ \widehat f(\xi) = \int_{{\mathbb R}^2} f(x) \mathrm{e}^{-2\pi i \xi \cdot x} dx \quad \hbox{and} \quad f(x) = \int_{{\mathbb R}^2} \widehat f(\xi) \mathrm{e}^{2\pi i \xi \cdot x} dx, $$ and the formula holds under the usual assumption on the convergence of the series on both sides. 
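The compact formula \eqref{D-kernel}--\eqref{Theta} can be verified numerically against the defining sum $D_n({\mathbf t})=\sum_{{\mathbf j}\in{\mathbb H}_n}\phi_{\mathbf j}({\mathbf t})$. A sketch (the test point and the index $n$ are our own choices; the point must avoid the zeros of the denominator of $\Theta_n$):

```python
import cmath
import math

def dirichlet_sum(n, t):
    """D_n(t) as the direct sum of phi_j(t) over j in H_n
    (j1 + j2 + j3 = 0 with each |j_i| <= n)."""
    t1, t2, t3 = t
    total = 0
    for j1 in range(-n, n + 1):
        for j2 in range(-n, n + 1):
            j3 = -j1 - j2
            if -n <= j3 <= n:
                total += cmath.exp(2j * math.pi / 3 * (j1 * t1 + j2 * t2 + j3 * t3))
    return total

def theta(n, t):
    """Compact kernel Theta_n(t): product of sin((n+1)(t_a - t_b) pi / 3)
    over the three differences, divided by the same product with n = 0."""
    t1, t2, t3 = t
    num = den = 1.0
    for a, b in ((t1, t2), (t2, t3), (t3, t1)):
        num *= math.sin((n + 1) * (a - b) * math.pi / 3)
        den *= math.sin((a - b) * math.pi / 3)
    return num / den

t = (0.31, 0.45, -0.76)          # satisfies t1 + t2 + t3 = 0
lhs = dirichlet_sum(4, t)
rhs = theta(4, t) - theta(3, t)  # D_4 = Theta_4 - Theta_3
# lhs is real up to rounding and agrees with rhs
```

Since ${\mathbb H}_n$ is symmetric under ${\mathbf j}\mapsto -{\mathbf j}$, the direct sum is real, which the check also confirms.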
Define the Fourier transform in homogeneous coordinates by $\widehat f({\mathbf t}) := \widehat f(H^{-{\mathsf {tr}}} t)$ with $t = (t_1,t_2)$ as a column vector. Using \eqref{x-t-x}, it is easy to see that $\widehat f$ and its inverse become, in homogeneous coordinates, \begin{equation} \label{FT} \widehat f({\mathbf s}) = \int_{{\mathbb R}_H^3} f({\mathbf t}) \mathrm{e}^{- \frac{2\pi i}{3} {\mathbf s} \cdot {\mathbf t}} d{\mathbf t} \qquad \hbox{and} \qquad f({\mathbf t}) = \int_{{\mathbb R}_H^3} \widehat f({\mathbf s}) \mathrm{e}^{\frac{2\pi i}{3} {\mathbf s} \cdot {\mathbf t}} d{\mathbf s}. \end{equation} Next we reformulate the Poisson summation formula in homogeneous coordinates. The left hand side of \eqref{Poisson-pre} is an $H$-periodic function over the hexagonal lattice, which becomes the summation of $f({\mathbf t} + 3 {\mathbf j})$ over all ${\mathbf j} \in {\mathbb Z}_H^3$, as $x \equiv y \mod H$ becomes ${\mathbf t} \equiv {\mathbf s} \mod 3$. For the right hand side, using the fact that $(k_1,k_2) H^{-1} x = \frac{1}{3} (k_1,k_2) (t_1-t_3, t_2-t_3)^{\mathsf {tr}} = \frac{1}{3} {\mathbf k} \cdot {\mathbf t}$ by \eqref{x-t-x}, we obtain $$ \widehat f(H^{-{\mathsf {tr}}} k) = \int_{{\mathbb R}^2} f(x) \mathrm{e}^{-2 \pi i k^{\mathsf {tr}} H^{-1} x} dx = \frac{2 \sqrt{3}}{3} \int_{{\mathbb R}_H^3} f({\mathbf t}) \mathrm{e}^{-\frac{2 \pi i}{3} {\mathbf k} \cdot {\mathbf t}} d{\mathbf t} = \frac{2 \sqrt{3}}{3} \widehat f({\mathbf k}). $$ Consequently, we conclude that the Poisson summation formula in \eqref{Poisson-pre} becomes, in homogeneous coordinates, \begin{equation} \label{Poisson} \sum_{{\mathbf k} \in {\mathbb Z}_H^3} f({\mathbf t} + 3 {\mathbf k}) = \frac{1}{3} \sum_{{\mathbf j} \in {\mathbb Z}_H^3} \widehat f({\mathbf j}) \mathrm{e}^{\frac{2\pi i}{3} {\mathbf j}\cdot {\mathbf t}}. \end{equation} Throughout this paper we will reserve the letter $c$ for a generic constant, whose value may change from line to line. 
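As a sanity check on the constant in \eqref{Poisson}, one may integrate both sides over $\Omega$. Since $\Omega$ is a fundamental domain of the translations involved, the left hand side integrates to $\int_{{\mathbb R}_H^3} f({\mathbf t})\, d{\mathbf t} = \widehat f({\mathbf 0})$ by \eqref{FT}, while on the right hand side only the term ${\mathbf j} = {\mathbf 0}$ survives by the orthogonality of the $\phi_{\mathbf j}$, contributing $$ \frac{1}{3}\, \widehat f({\mathbf 0}) \int_\Omega d{\mathbf t} = \frac{|\Omega|}{3}\, \widehat f({\mathbf 0}) = \widehat f({\mathbf 0}), $$ since $|\Omega| = 3$ in the coordinates $(t_1,t_2)$; the two sides agree. 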
By $A \sim B$ we mean that there are two positive constants $c$ and $c'$ such that $c A \le B \le c' A$. \section{Summability of Fourier series on Hexagon} \setcounter{equation}{0} We consider hexagonal summability of the Fourier series \eqref{FourierSeries}; that is, we write the Fourier series \eqref{FourierSeries} as blocks whose indices are grouped according to ${\mathbb J}_n$: $$ f({\mathbf t}) = \sum_{n=0}^\infty \sum_{{\mathbf j} \in {\mathbb J}_n} \widehat f_{\mathbf j} \phi_{{\mathbf j}}({\mathbf t}). $$ From now on we call such a series a hexagonal Fourier series. Its $n$-th partial sum is exactly $S_n f$ given in \eqref{PartialSum}. \subsection{Abel summability} We consider the Poisson integral $P_r f({\mathbf t})$ of the hexagonal Fourier series, which is defined by $$ P_r f( {\mathbf t}):=\sum_{n =0}^\infty \sum_{{\mathbf j} \in {\mathbb J}_n} \widehat f_{\mathbf j} \phi_{\mathbf j}({\mathbf t}) r^n = \frac{1}{|\Omega|} \int_{\Omega} P(r; {\mathbf t} - {\mathbf s}) f({\mathbf s}) d{\mathbf s}, $$ where $P(r; {\mathbf t})$ denotes the Poisson kernel $$ P(r; {\mathbf t}) := \sum_{n=0}^\infty \sum_{{\mathbf j} \in {\mathbb J}_n} \phi_{\mathbf j}({\mathbf t}) r^n, \qquad 0 \le r < 1. $$ Just as for the classical Fourier series, the kernel is nonnegative and admits a closed-form expression. \begin{prop} (1) The Poisson kernel $P(r; {\mathbf t})$ is nonnegative for all ${\mathbf t} \in \Omega$ and $$ \frac{1}{|\Omega|} \int_{\Omega} P(r; {\mathbf t}) d {\mathbf t} =1. $$ (2) Let $q(r,t) = 1 - 2r \cos t + r^2$. 
Then \begin{align*} P(r; {\mathbf t}) = & \frac{ (1-r)^3(1-r^3) } { q\left(r,\frac{2 \pi(t_1-t_2)}{3} \right) q\left(r,\frac{2 \pi (t_2-t_3)}{3}\right) q\left(r,\frac{2 \pi (t_3-t_1)}{3}\right) } \\ & + \frac{r (1-r)^2 } { q\left(r,\frac{2 \pi (t_1-t_2)}{3}\right) q\left(r,\frac{2 \pi (t_2-t_3)}{3}\right) } + \frac{ r(1-r)^2 } { q\left(r,\frac{2 \pi(t_2-t_3)}{3} \right) q\left(r,\frac{2 \pi (t_3-t_1)}{3}\right) } \\ & + \frac{r (1-r)^2} { q\left(r,\frac{2 \pi(t_3-t_1)}{3} \right) q\left(r,\frac{2 \pi (t_1-t_2)}{3}\right) } . \end{align*} \end{prop} \begin{proof} The fact that the integral of $P(r;{\mathbf t})$ is 1 follows from the definition and the orthogonality of $\phi_{\mathbf j}({\mathbf t})$, whereas $P(r;{\mathbf t}) \ge 0$ is an immediate consequence of the compact formula in part (2). To prove part (2), we start from the compact formula for $D_n({\mathbf t})$, from which it follows readily that $$ P(r;{\mathbf t}) = (1-r) \sum_{n=0}^\infty D_n({\mathbf t}) r^n = (1-r)^2 \sum_{n=0}^\infty \Theta_n({\mathbf t}) r^n. $$ If $t_1+t_2+t_3 =0$ then it is easy to verify that \begin{align} \label{elementary} \begin{split} & \sin 2 t_1+ \sin 2 t_2+ \sin 2 t_3 = - 4 \sin t_1 \sin t_2 \sin t_3,\\ & \cos 2 t_1+ \cos 2 t_2+ \cos 2 t_3 = 4 \cos t_1 \cos t_2 \cos t_3 -1. \end{split} \end{align} Using the first equation in \eqref{elementary} and the fact that $$ \sum_{n=0}^\infty \sin \big( (n+1) s \big)\, r^n = \frac{\sin s} {1 - 2r \cos s + r^2}, $$ we then conclude that \begin{align*} \sum_{n=0}^\infty \Theta_n({\mathbf t}) r^n = & - \frac{1} {4 \sin \frac{\pi(t_1-t_2)}{3}\sin \frac{\pi(t_2-t_3)}{3}\sin \frac{\pi (t_3-t_1)}{3} } \\ & \times \left[ \frac{\sin \frac{2\pi (t_1-t_2)}{3}} { q\left(r,\frac{2 \pi(t_1-t_2)}{3} \right) } +\frac{\sin\frac{2\pi (t_2-t_3)}{3} }{q\left(r,\frac{2 \pi (t_2-t_3)}{3}\right) }+ \frac{\sin\frac{2\pi(t_3-t_1)}{3}} {q\left(r,\frac{2 \pi (t_3-t_1)}{3}\right) } \right]. 
\end{align*} Putting the three terms together and simplifying the numerator, we conclude, after a tedious computation, that \begin{align*} \sum_{n=0}^\infty \Theta_n({\mathbf t}) r^n = \frac{1 + 2 r + 2 r^3 + r^4 - 2 r^2 \left[ \cos \frac{2 \pi(t_1 - t_2)}{3} + \cos \frac{2 \pi( t_1 - t_3)}{3} + \cos \frac{2 \pi (t_2 - t_3) }{3} \right]} { q\left(r,\frac{2 \pi(t_1-t_2)}{3} \right) q\left(r,\frac{2 \pi (t_2-t_3)}{3}\right) q\left(r,\frac{2 \pi (t_3-t_1)}{3}\right) }. \end{align*} The numerator of the right hand side can be written as \begin{align*} (1-r) (1-r^3)+ r\left[ q\left(r,\tfrac{2 \pi(t_1-t_2)}{3} \right) + q\left(r,\tfrac{2 \pi (t_2-t_3)}{3}\right) + q\left(r,\tfrac{2 \pi (t_3-t_1)}{3}\right) \right], \end{align*} from which the stated compact formula follows. \end{proof} Next we consider the convergence of $P_r f$ as $r\to 1^-$. For the classical Fourier series, if $P_r f$ converges to $f$ as $r\to 1^-$ then the series is called Abel summable. \begin{thm} If $f$ is an $H$-periodic function, bounded on $\overline{\Omega}$ and continuous at ${\mathbf t} \in \Omega^\circ$, then $P_r f({\mathbf t})$ converges to $f({\mathbf t})$ as $r \to 1^-$. Furthermore, if $f$ is continuous on $\overline{\Omega}$, then $P_r f$ converges to $f$ uniformly on $\Omega$ as $r \to 1^-$. \end{thm} \begin{proof} Since $f$ is continuous at ${\mathbf t} \in \Omega^\circ$, for any ${\varepsilon} > 0$ we may choose ${\delta} > 0$ such that $$ |f ({\mathbf t}- {\mathbf s}) - f({\mathbf t})| < {\varepsilon} \qquad \hbox{whenever ${\mathbf s} \in \Omega_{\delta}: = \{{\mathbf s} \in \Omega: |s_i| \le {\delta}, \, 1 \le i \le 3\}$}. 
$$ Since $P(r; {\mathbf s})$ has unit mean over $\Omega$, it follows that \begin{align*} |P_r f({\mathbf t}) - f({\mathbf t}) |& \le \frac{{\varepsilon}}{|\Omega|} \int_{\Omega_{\delta}}P(r;{\mathbf s}) d{\mathbf s} + \frac{1}{|\Omega|} \int_{\Omega \setminus \Omega_{\delta}} |f ({\mathbf t}- {\mathbf s}) - f({\mathbf t})|P(r;{\mathbf s}) d{\mathbf s}\\ & \le {\varepsilon} + \frac{2 \|f\|_\infty}{|\Omega|} \int_{\Omega \setminus \Omega_{\delta}}P(r;{\mathbf s}) d{\mathbf s}. \end{align*} Thus, it suffices to show that the last integral goes to 0 as $r \to 1^-$. Since $q(r,t) = (1-r)^2 + 4 r \sin^2 \frac{t}{2} \ge (1-r)^2$ and $1 - r^3 \le 3(1-r)$, the closed formula of the Poisson kernel shows that $P(r;{\mathbf t})$ is bounded by \begin{align*} P(r; {\mathbf t}) & \le \frac{4 (1-r)^2 } { q\left(r,\frac{2 \pi (t_1-t_2)}{3}\right) q\left(r,\frac{2 \pi (t_2-t_3)}{3}\right) } + \frac{ 4(1-r)^2 } { q\left(r,\frac{2 \pi(t_2-t_3)}{3} \right) q\left(r,\frac{2 \pi (t_3-t_1)}{3}\right) } \\ & + \frac{4 (1-r)^2} { q\left(r,\frac{2 \pi(t_3-t_1)}{3} \right) q\left(r,\frac{2 \pi (t_1-t_2)}{3}\right) } . \end{align*} Clearly we only need to consider one of the three terms on the right hand side, say the second one, so that the essential task reduces to showing that, as $q(r,t)$ is even in $t$, \begin{equation} \label{limit1} \lim_{r \to 1^-} \int_{\Omega \setminus \Omega_{\delta}} \frac{ (1-r)^2 } { q\left(r,\frac{2 \pi (t_1-t_3)}{3}\right) q\left(r,\frac{2 \pi (t_2-t_3)}{3}\right)} d{\mathbf t} =0. \end{equation} Set $P(r; t): = \frac{1-r^2}{q(r,t)}$ for $- \pi \le t \le \pi$; then $P(r;t)$ is the Poisson kernel of the classical Fourier series. It is known \cite{Z} that \begin{equation} \label{limit2} \frac{1}{2\pi} \int_{-\pi}^\pi P(r;t) dt =1 \qquad\hbox{and}\qquad 0 \le P(r;t) \le c \frac{1-r} {t^2}, \end{equation} where $c$ is a constant independent of $r$ and $t$. 
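For completeness, the second bound in \eqref{limit2} can be verified in one line: since $q(r,t) = (1-r)^2 + 4r \sin^2 \frac{t}{2}$, we have, for $\frac12 \le r < 1$ and $0 < |t| \le \pi$, $$ P(r;t) = \frac{1-r^2}{(1-r)^2 + 4 r \sin^2 \frac{t}{2}} \le \frac{2(1-r)}{4 r \sin^2 \frac{t}{2}} \le \frac{\pi^2 (1-r)}{t^2}, $$ using $1-r^2 \le 2(1-r)$ and $\sin \frac{t}{2} \ge \frac{|t|}{\pi}$ on $[-\pi,\pi]$; the range $0 \le r \le \frac12$ is covered by adjusting the constant $c$. 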
Evidently, the kernel in the left hand side of \eqref{limit1} can be written as $$ \frac{ (1-r)^2 } { q\left(r,\frac{2 \pi (t_1-t_3)}{3}\right) q\left(r,\frac{2 \pi (t_2-t_3)}{3}\right)} =\frac{1}{(1+r)^2} P\left(r;\tfrac{2 \pi (t_1-t_3)}{3}\right) P\left(r;\tfrac{2 \pi (t_2-t_3)}{3}\right). $$ Recall that $t_1 + t_2 + t_3 =0$ for ${\mathbf t} = (t_1,t_2,t_3) \in \Omega$. It is easy to see that ${\mathbf t} \in \overline{\Omega}$ means that $-1 \le t_1, t_2, t_1+t_2 \le 1$. Let \begin{equation} \label{t-s} s_1 := \frac{t_1 - t_3}{3} = \frac{2t_1 + t_2} {3} , \qquad s_2 := \frac{t_2 - t_3}{3} = \frac{t_1 +2 t_2} {3}. \end{equation} A simple geometric consideration shows that the image of the domain $\Omega_{\delta}$ under the affine mapping $(t_1,t_2) \mapsto (s_1,s_2)$ in \eqref{t-s} contains the square $[-{\delta}/6,{\delta}/6]^2$. Consequently, the image of $\overline{\Omega} \setminus \Omega_{\delta}$ is a subset of $[-1,1]^2 \setminus [-{\delta}/6,{\delta}/6]^2$. Hence, changing variables $(t_1,t_2) \mapsto (s_1,s_2)$, we obtain by \eqref{limit2} that \begin{align*} & \int_{\Omega \setminus \Omega_{\delta}} \frac{ (1-r)^2 } {q\left(r,\frac{2 \pi (t_1-t_3)}{3}\right) q\left(r,\frac{2 \pi (t_2-t_3)}{3}\right)} d{\mathbf t} \\ & \qquad\qquad \le \frac{3}{(1+r)^2} \int_{[-1,1]^2 \setminus [-{\delta}/6,{\delta}/6]^2} {P\left(r;2 \pi s_1\right) P\left(r;2 \pi s_2\right)} d{\mathbf s} \\ & \qquad\qquad \le c \int_{1 \ge |s| \ge \delta/6} P(r; 2 \pi s)ds \le c\, \frac{1-r}{{\delta}}, \end{align*} which converges to zero as $r \to 1^-$. This proves \eqref{limit1} and the convergence of $P_r f({\mathbf t})$ to $f({\mathbf t})$. Clearly the argument also yields the uniform convergence of $P_rf$ when $f \in C(\overline{\Omega})$. \end{proof} \subsection{Ces\`aro summability} Let us denote by $S_n^{({\delta})} f$ and $K_n^{({\delta})}$ the Ces\`aro $(C,{\delta})$ means of the hexagonal Fourier series and its kernel, respectively. 
Then $$ S_n^{({\delta})} f({\mathbf t}) := \frac{1}{|\Omega|} \int_{\Omega} f({\mathbf s}) K_n^{({\delta})}({\mathbf t} - {\mathbf s}) d{\mathbf s}, $$ where $$ K_n^{({\delta})}({\mathbf t}):= \frac{1}{A_n^{\delta}} \sum_{k=0}^n A_{n-k}^{{\delta}-1} D_k({\mathbf t}), \qquad A_n^{\delta} = \binom{n+{\delta}}{{\delta}}. $$ It is evident that the case ${\delta} =0$ corresponds to $S_n f$ and $D_n({\mathbf t})$, respectively. By \eqref{D-kernel} and \eqref{Theta}, the $(C,1)$ kernel is given by $$ K_n^{(1)}({\mathbf t}) = \frac{1}{n+1} \sum_{k=0}^n D_k({\mathbf t}) = \frac{1}{n+1} \Theta_n({\mathbf t}) = \frac{1}{n+1} \frac{\sin \frac{(n+1) (t_1-t_2)\pi}{3}\sin \frac{(n+1) (t_2-t_3)\pi}{3} \sin \frac{(n+1) (t_3-t_1)\pi}{3}} {\sin \frac{(t_1-t_2)\pi}{3}\sin \frac{(t_2-t_3)\pi}{3}\sin \frac{(t_3-t_1)\pi}{3}}. $$ For the classical Fourier series in one variable, the $(C,1)$ kernel is the well-known Fej\'er kernel, which is given by $$ \frac{1}{n+1} \sum_{k=0}^n D_k(t) = \frac{1}{n+1} \left(\frac{\sin \frac{(n+1)t}{2}}{\sin \frac{t}{2} } \right)^2 $$ and is, in particular, nonnegative. For the hexagonal Fourier series, it turns out that $K_n^{(2)}$ is nonnegative. \begin{lem} For $n \ge 0$, $$ \binom{n+2}{2} K_n^{(2)}({\mathbf t}) = \frac{1}{16} \frac{A_n({\mathbf t})^2 + B_n({\mathbf t})^2} {\left(\sin \frac{(t_1-t_2)\pi}{3}\right)^2 \left( \sin \frac{(t_2-t_3)\pi}{3}\right)^2\left(\sin \frac{(t_3-t_1)\pi}{3}\right)^2}, $$ where \begin{align*} A_n({\mathbf t}) &\, : = \cos\tfrac{n t_1 \pi}{3} \sin \tfrac{(n+2) (t_2 - t_3)\pi}{3} + \cos\tfrac{n t_2 \pi}{3} \sin \tfrac{(n+2) (t_3-t_1)\pi}{3} + \cos\tfrac{n t_3\pi}{3} \sin\tfrac{(n+2) (t_1 - t_2)\pi}{3}, \\ B_n({\mathbf t}) &\, : = \sin\tfrac{n t_1 \pi}{3} \sin \tfrac{(n+2) (t_2 - t_3)\pi}{3} + \sin\tfrac{n t_2 \pi}{3} \sin \tfrac{(n+2) (t_3-t_1)\pi}{3}+ \sin\tfrac{n t_3\pi}{3} \sin\tfrac{(n+2) (t_1 - t_2)\pi}{3}. 
\end{align*} \end{lem} \begin{proof} Using \eqref{elementary} and the elementary formula $$ \sum_{k=0}^n \sin [2 (k+1) s] = \frac{\sin [(n+1)s] \sin[(n+2)s]}{\sin s}, $$ we obtain $$ \sum_{k=0}^n \Theta_k({\mathbf t}) = \frac{- 4 E_n({\mathbf t})} {\left(\sin \frac{(t_1-t_2)\pi}{3}\right)^2 \left( \sin \frac{(t_2-t_3)\pi}{3}\right)^2\left(\sin \frac{(t_3-t_1)\pi}{3}\right)^2}, $$ where \begin{align*} E_n({\mathbf t}) := & \sin \tfrac{(n+1) (t_1-t_2)\pi}{3} \sin \tfrac{(n+2) (t_1-t_2)\pi}{3} \sin \tfrac{(t_2-t_3)\pi}{3} \sin \tfrac{(t_3-t_1)\pi}{3} \\ + & \sin \tfrac{(n+1) (t_2-t_3)\pi}{3}\sin \tfrac{(n+2) (t_2-t_3)\pi}{3} \sin \tfrac{(t_1-t_2)\pi}{3} \sin \tfrac{(t_3-t_1)\pi}{3} \\ + & \sin \tfrac{ (n+1) (t_3-t_1)\pi}{3}\sin \tfrac{ (n+2) (t_3-t_1)\pi}{3} \sin \tfrac{(t_1-t_2)\pi}{3} \sin \tfrac{(t_2-t_3)\pi}{3}. \end{align*} From here, the difficulty lies in recognizing that the numerator is a sum of squares. Once the form is identified, the verification that $- 4 E_n({\mathbf t}) = \frac{1}{16}\left( A_n({\mathbf t})^2 + B_n({\mathbf t})^2\right)$ is a straightforward, though tedious, exercise. \end{proof} An immediate consequence of the above lemma is the following: \begin{thm} The $(C,2)$ means of the Fourier expansion with respect to the hexagon domain define a positive linear operator. \end{thm} As a comparison, let us mention that for the usual Fourier series on the torus, if the partial sums are defined with respect to the $\ell^\infty$ ball, that is, if the Dirichlet kernel is defined as $$ D_n({\theta}_1,{\theta}_2)= \sum_{- n \le k_1,k_2\le n} e^{i(k_1 {\theta}_1 + k_2 {\theta}_2)}, \quad ({\theta}_1,{\theta}_2) \in [-\pi,\pi]^2, $$ then it is proved in \cite{BX} that the corresponding $(C,3)$ means are nonnegative and ${\delta} =3$ is sharp. In fact, the results in \cite{BX} are established for the partial sums defined with respect to the $\ell^1$ ball for the $d$-dimensional torus. 
For $d=2$, it is easy to see that the result on the $\ell^1$ ball implies the result on the $\ell^\infty$ ball. If the $(C,{\delta})$ means converge to $f$, then so do the $(C,{\delta}')$ means for ${\delta}' > {\delta}$. The positivity of the kernel shows immediately that the $(C,2)$ means of the hexagonal Fourier series converge. It turns out that the $(C,1)$ means already suffice. \begin{thm} \label{thm:(C,1)} If $f \in C(\overline{\Omega})$, then the $(C,1)$ means $S_n^{(1)} f$ converge uniformly to $f$ on $\overline{\Omega}$. \end{thm} \begin{proof} A standard argument shows that it suffices to prove that $S_n^{(1)}$ is a bounded operator, which amounts to showing that $$ I_n: = \int_\Omega \left| \Theta_n({\mathbf t}) \right| d{\mathbf t} \le c\, n. $$ Since $\Theta_n({\mathbf t})$ is a symmetric function in $t_1,t_2,t_3$, we only need to consider the integral over the triangle $$ \Delta: = \{{\mathbf t} \in {\mathbb R}_{\mathbb H}: 0 \le t_1,t_2,-t_3 \le 1\} = \{(t_1,t_2): t_1 \ge 0, t_2 \ge 0, t_1+t_2 \le 1\}, $$ which is one of the six triangles in $\Omega$ (see Figure 1). Let $s_1,s_2$ be defined as in \eqref{t-s} and let $\widetilde \Delta$ denote the image of $\Delta$ in the $(s_1,s_2)$ plane. Then $$ \widetilde \Delta =\{(s_1,s_2): 0 \le s_1 \le 2 s_2, 0 \le s_2 \le 2 s_1, s_1+s_2 \le 1\} $$ and it follows, as the Jacobian of the change of variables is equal to 3, that $$ I_n = 3 \int_{\widetilde \Delta} \left| \frac{\sin((n+1)\pi s_1) \sin( (n+1)\pi s_2) \sin ((n+1)\pi(s_1-s_2))}{\sin (\pi s_1) \sin (\pi s_2) \sin(\pi (s_1-s_2))} \right| ds_1ds_2. $$ Since the integrand in the right hand side is clearly a symmetric function of $(s_1,s_2)$, it is equal to twice the integral over half of $\widetilde \Delta$, say over $$ \widetilde \Delta^* =\{(s_1,s_2) \in \widetilde \Delta: s_1 \le s_2\} =\{(s_1,s_2): s_1 \le s_2 \le 2 s_1, s_1+s_2 \le 1\}. 
$$ Making another change of variables $s_1 = (u_1-u_2)/2$ and $s_2 = (u_1+u_2)/2$, the domain $\widetilde \Delta^*$ becomes $\Gamma: = \{(u_1,u_2): 0 \le u_2 \le u_1/3, 0 \le u_1 \le 1\}$ and we conclude that \begin{align*} I_n = 3 \int_{\Gamma} \left| \frac{\sin \frac{(n+1)(u_1+u_2)\pi}{2} \sin \frac{(n+1)(u_1-u_2) \pi}{2} \sin\frac{(n+1)u_2 \pi}{2}} {\sin \frac{(u_1+u_2)\pi}{2} \sin\frac{(u_1-u_2)\pi}{2} \sin \frac{u_2\pi}{2}} \right| du_1 du_2 =: 3 \int_{\Gamma} |\Theta_n^*(u)|du. \end{align*} To estimate the last integral, we partition $\Gamma$ as $\Gamma = \Gamma_1 \cup \Gamma_2 \cup \Gamma_3$ and consider the three cases separately. \medskip\noindent {\it Case 1.} $\Gamma_1 =\{u \in \Gamma: u_1 \le 3/n\}$. Using the fact that $|\sin n t / \sin t| \le n$, we obtain $$ \int_{\Gamma_1} |\Theta_n^*(u)|du \le (n+1)^3 \int_{\Gamma_1} du_1 du_2 = (n+1)^3\, \frac{3}{2n^2} \le c\, n. $$ \medskip\noindent {\it Case 2.} $\Gamma_2 =\{u \in \Gamma: 3/ n \le u_1, \, u_2\le 1/n\}$. In this region we have $u_1 - u_2 \ge 2 u_1 /3$ and $u_1 + u_2 \ge u_1$. Hence, upon using $\sin u \ge (2/\pi) u$, we obtain $$ \int_{3/n}^1 \left| \frac{\sin \frac{(n+1)(u_1+u_2)\pi}{2} \sin \frac{(n+1)(u_1-u_2) \pi}{2} } {\sin \frac{(u_1+u_2)\pi}{2} \sin\frac{(u_1-u_2)\pi}{2}} \right| du_1 \le c \int_{3/n}^1 \frac{1}{u_1^2} du_1 \le c\, n. $$ Consequently, it follows that $$ \int_{\Gamma_2} |\Theta_n^*(u)|du = \int_0^{1/n} \int_{3/n}^1|\Theta_n^*(u)|\, du_1 du_2 \le c \,n \int_{0}^{1/n} \left| \frac{\sin\frac{(n+1)u_2 \pi}{2}} {\sin \frac{u_2\pi}{2}} \right| du_2 \le c \, n, $$ upon using $|\sin n u / \sin u| \le n$ again. \medskip\noindent {\it Case 3.} $\Gamma_3 =\{u \in \Gamma: 3/n \le u_1, \, u_2 \ge 1/n\}$. In this region we have $u_1 \ge 3 u_2$, which implies that $u_1 - u_2 \ge (2/3) u_1$ and $u_1 + u_2 \ge u_1$. 
Thus, using $\sin u \ge (2/\pi) u$ again, we obtain $$ \int_{3 u_2}^1 \left| \frac{\sin \frac{(n+1)(u_1+u_2)\pi}{2} \sin \frac{(n+1)(u_1-u_2) \pi}{2} } {\sin \frac{(u_1+u_2)\pi}{2} \sin\frac{(u_1-u_2)\pi}{2}} \right| du_1 \le c \int_{3u_2}^1 \frac{1}{u_1^2} du_1 \le \frac{c}{u_2}. $$ Consequently, using $\sin u_2 \ge (2/\pi)u_2$, we conclude that $$ \int_{\Gamma_3} |\Theta_n^*(u)|du = \int_{1/n}^{1/3} \int_{3u_2}^1|\Theta_n^*(u)|\, du_1 du_2 \le c \int_{1/n}^{1/3} \frac{1}{u_2^2} d u_2 \le c \, n. $$ Putting these estimates together completes the proof. \end{proof} This theorem shows that the $(C,{\delta})$ summability of the hexagonal Fourier series behaves just like that of the classical Fourier series. In particular, it is natural to conjecture that the $(C,\delta)$ means of the hexagonal Fourier series converge whenever ${\delta} > 0$. Such a condition would be sharp, as ${\delta} =0$ corresponds to $S_nf$, whose norm is known to be of order $(\log n)^2$ (\cite{P, Sun}). \section{Best Approximation on Hexagon} \setcounter{equation}{0} For $1 \le p \le \infty$ we define the $L^p$ space to be the space of Lebesgue integrable $H$-periodic functions on $\Omega$ with the norm $$ \|f\|_p : = \left(\int_{\Omega} |f({\mathbf t})|^p d {\mathbf t} \right)^{1/p}, \qquad 1 \le p < \infty, $$ and we take $L^\infty$ to be $C(\overline{\Omega})$, equipped with the uniform norm on $\overline{\Omega}$. For $f \in L^p$, we define the error of best approximation to $f$ from ${\mathcal H}_n$ by $$ E_n (f)_p: = \inf_{ S\in {\mathcal H}_n} \|f- S\|_p. $$ We shall prove the direct and inverse theorems using a modulus of smoothness. \subsection{A simple fact} We start with the observation that $E_n (f)$ can be related to the error of best approximation by trigonometric polynomials of two variables on $[-1,1]^2$. 
To see this, let us denote by $\CT_n$ the space of trigonometric polynomials of two variables of degree $n$ in each variable, whose elements are of the form $$ T (u) = \sum_{k_1=-n}^n \sum_{k_2=-n}^n b_{k_1,k_2} \mathrm{e}^{2\pi i (k_1 u_1 + k_2 u_2)}. $$ Furthermore, if $f \in C([-1,1]^2)$ is periodic in both variables, then define $$ {\mathcal E}_n(f) : = \inf_{T \in \CT_n} \max_{u \in [-1,1]^2} |f(u) - T(u)|. $$ \begin{prop} Let $f$ be $H$-periodic and continuous over the regular hexagon $\overline{\Omega}_H$. Assume that $f^*$ is a continuous extension of $f$ to $[-1,1]^2$. Then $$ E_n (f) \le {\mathcal E}_{\lfloor \frac{n}{2} \rfloor} (f^*). $$ \end{prop} \begin{proof} We again work with homogeneous coordinates. Using the fact that $t_1+t_2+t_3=0$ and $j_1+j_2+j_3=0$, we can write $$ \phi_{{\mathbf j}}({\mathbf t}) = e^{\frac{2\pi i}{3} {\mathbf j} \cdot {\mathbf t}} = e^{2 \pi i (j_1 s_1 + j_2 s_2)}, \qquad s_1 = \tfrac{2 t_1 + t_2 }{3}, \quad s_2 = \tfrac{ t_1 + 2 t_2 }{3}. $$ Clearly ${\mathbf t} \in \overline{\Omega}$ implies that $(s_1,s_2) \in \Omega^*: = \{s: -1 \le s_1, s_2, s_1+s_2 \le 1\}$. Furthermore, ${\mathbf j} \in {\mathbb H}_n$ implies that $- n \le j_1,j_2, j_1+j_2 \le n$. Consequently, we have that $$ S_n ({\mathbf t}): = \sum_{{\mathbf j} \in {\mathbb H}_n} c_{\mathbf j} \phi_{\mathbf j}({\mathbf t}) = \sum_{\substack {-n \le j_1,j_2 \le n \\ -n \le j_1 + j_2 \le n} } c_{j_1,j_2,-j_1-j_2} e^{2 \pi i (j_1s_1 + j_2 s_2)} =: T_n(s). $$ Let $g(s) = f({\mathbf t}) = f(2 s_1-s_2, 2s_2-s_1,-s_1-s_2)$ and let its continuous extension to $[-1,1]^2$ be $g^*$. It follows that \begin{align*} \|f - S_n\|_\infty = \max_{s \in \Omega^*} | g(s) - T_n(s)| \le \max_{-1 \le s_1,s_2 \le 1} | g^*(s) - T_n(s)|. 
\end{align*} Minimizing over all $S_n \in {\mathcal H}_n$ amounts to minimizing over all coefficients $c_{\mathbf j}$; consequently, we conclude that \begin{align*} E_n (f) &\, \le \min_{c_{\mathbf j}} \max_{-1 \le s_1,s_2 \le 1} \left | g^*(s) - \sum_{\substack {-n \le j_1,j_2 \le n \\ -n \le j_1 + j_2 \le n} } c_{\mathbf j} \mathrm{e}^{2 \pi i (j_1 s_1 + j_2 s_2)} \right| \\ &\, \le \min_{c_{\mathbf j}} \max_{-1 \le s_1,s_2 \le 1} \left | g^*(s) - \sum_{-\lfloor \frac{n}{2}\rfloor \le j_1,j_2 \le \lfloor \frac{n}{2}\rfloor } c_{\mathbf j} \mathrm{e}^{2 \pi i (j_1 s_1 + j_2 s_2)} \right| = {\mathcal E}_{\lfloor \frac{n}{2}\rfloor} (g^*). \end{align*} If we work with $f$ defined on $\Omega_H$, then $g^*$ becomes $f^*$ in the statement of the proposition. \end{proof} For smooth functions, this result can be used to derive the convergence order of $E_n (f)$. The argument, however, clearly goes in one direction only. Below we prove both direct and inverse theorems using a modulus of smoothness. \subsection{Modulus of smoothness} On the set $\Omega$, we can define a modulus of smoothness by following precisely the definition for periodic functions on the real line. Thus, we define for $r \in {\mathbb N}_0$, $$ \Delta_{\mathbf t} f({\mathbf x}) = f({\mathbf x} + {\mathbf t}) - f({\mathbf x}), \qquad \Delta_{\mathbf t}^r f({\mathbf x}) = \Delta_{\mathbf t} \Delta_{\mathbf t}^{r-1} f({\mathbf x}). $$ It is well known that $$ \Delta_{\mathbf t}^r f ({\mathbf x}) = \sum_{k=0}^r (-1)^{r-k} \binom{r}{k} f({\mathbf x} + k {\mathbf t}). $$ For ${\mathbf t} \in {\mathbb R}_{\mathbb H}$, let $\|{\mathbf t}\| := (t_1^2 +t_2^2)^{1/2}$, the usual Euclidean norm of $(t_1,t_2)$. We could also take another norm of $(t_1,t_2)$ instead, for example, $\|{\mathbf t}\|_\infty := \max\{|t_1|, |t_2|\}$. Since $t_1+t_2+t_3 =0$, we have evidently $\|{\mathbf t}\|_\infty \le \max\{|t_1|, |t_2|, |t_3|\} \le 2 \|{\mathbf t}\|_\infty$, the second inequality because $|t_3| = |t_1+t_2| \le |t_1| + |t_2| \le 2 \|{\mathbf t}\|_\infty$. 
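For example, for $r = 2$ the difference formula above reads $$ \Delta_{\mathbf t}^2 f({\mathbf x}) = f({\mathbf x} + 2{\mathbf t}) - 2 f({\mathbf x} + {\mathbf t}) + f({\mathbf x}), $$ the familiar second-order forward difference along the direction ${\mathbf t}$. 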
The modulus of smoothness of an $H$-periodic function $f$ is then defined as $$ \omega_r (f;h)_p := \sup_{\|{\mathbf t}\|\le h} \|\Delta_{\mathbf t}^r f \|_p, \qquad 1 \le p \le \infty. $$ \begin{prop} \label{prop:modulus} The modulus of smoothness satisfies the following properties: \begin{enumerate} \item For $\lambda > 0$, $\omega_r(f; \lambda h)_p \le (1+\lambda)^r \omega_r(f;h)_p$. \item Let $\partial_i$ denote the partial derivative with respect to $t_i$ and $\partial^k = \partial_1^{k_1}\partial_2^{k_2}\partial_3^{k_3}$ for $k = (k_1,k_2,k_3)$. Then $$ \omega_r(f;h)_p \le h^r \sum_{k_1+k_2+ k_3 = r} \frac{r!}{k_1!k_2! k_3!} \|\partial^k f\|_p. $$ \end{enumerate} \end{prop} \begin{proof} The proof of (1) follows exactly as in the one-variable case. For part (2), it is easy to show by induction that $$ \Delta_{\mathbf t}^r f({\mathbf x}) = \int_{[0,1]^r} \partial_{u_1} \ldots \partial_{u_r} f({\mathbf x} + u_1 {\mathbf t} + \ldots + u_r {\mathbf t}) du_1 \cdots du_r. $$ The integrand of the right hand side is easily seen, by another induction, to be $$ \sum_{k_1+k_2+k_3 = r} \frac{r!}{k_1!k_2!k_3!} \partial^k f({\mathbf x} + u_1 {\mathbf t} + \ldots + u_r {\mathbf t}) {\mathbf t}^k, $$ from which the stated result follows from \eqref{IntPeriod} and the fact that $|{\mathbf t}^k| \le h^{|k|} = h^r$. \end{proof} \subsection{Direct theorem} For the proof of the direct theorem, we use an analogue of the Jackson integral. Let $r$ be a positive integer. We consider the kernel $$ K_{n,r}({\mathbf t}):= \lambda_{n,r} \left[\Theta_n({\mathbf t})\right]^{2r}, \qquad \hbox{where $\lambda_{n,r}$ is chosen so that} \qquad \int_\Omega K_{n,r}({\mathbf t}) d{\mathbf t} =1. $$ Since $(n+1)^{-1} \Theta_n({\mathbf t})$ is the $(C,1)$ kernel of the hexagonal Fourier series, we see that $\Theta_n \in {\mathcal H}_n$ and, thus, $K_{n,r} \in {\mathcal H}_{r n}$. \begin{lem} \label{lem:main} For $\nu \in {\mathbb N}$ and $\nu \le 2 r -2$, $$ \int_{\Omega} \|{\mathbf t}\|^\nu K_{n,r} ({\mathbf t}) d {\mathbf t} \le c n^{-\nu}. 
$$ \end{lem} \begin{proof} First we estimate the constant $\lambda_{n,r}$. We claim that \begin{equation} \label{lambda} \left( \lambda_{n,r} \right)^{-1} = \int_\Omega \left[\Theta_n({\mathbf t})\right]^{2r} d{\mathbf t} \sim n^{6 r -2}. \end{equation} We derive the lower bound first. As in the proof of Theorem \ref{thm:(C,1)}, we change variables from ${\mathbf t} \in \Omega$ to $(s_1,s_2) \in \Omega^*$, then use symmetry and change variables to $(u_1, u_2) \in \Gamma$. The result is that $$ \int_\Omega \left[\Theta_n({\mathbf t})\right]^{2r} d{\mathbf t} = 3 \int_\Gamma \left[\Theta_n^*({\mathbf u})\right]^{2r} d{\mathbf u} \ge 3 \int_{\Gamma^*} \left[\Theta_n^*({\mathbf u})\right]^{2r} d{\mathbf u}, $$ where we choose $\Gamma^* =\{(u_1,u_2): \frac{1}{16 (n+1)} \le u_2 \le u_1/3 \le \frac{1}{8(n+1)} \}$, which is a subset of $\Gamma$ whose area is of order $n^{-2}$. For $u \in \Gamma^*$, each of the three sine factors in the numerator of $\Theta_n^*$ is bounded below by a positive absolute constant, whereas each of the three sine factors in the denominator is bounded above by $c/n$. Consequently, we conclude that $$ \int_\Omega \left[\Theta_n({\mathbf t})\right]^{2r} d{\mathbf t} \ge c\, n^{6 r} \int_{\Gamma^*} d u \ge c\, n^{6 r -2}. $$ This proves the lower bound of \eqref{lambda}. The upper bound will follow as the special case $\nu=0$ of the estimate of the integral $I_n^{r,\nu}$ below. We now estimate the integral $$ I_n^{r,\nu} : = \int_\Omega \|{\mathbf t}\|^\nu \left[\Theta_n({\mathbf t})\right]^{2r} d{\mathbf t}. $$ Again we follow the proof of Theorem \ref{thm:(C,1)} and make a change of variables from ${\mathbf t} \in \Omega$ to $(u_1, u_2) \in \Gamma$. 
The change of variables shows that $t_1 = \tfrac{1}{2} \left[2(u_1-u_2)-(u_1+u_2)\right]$ and $t_2 = \tfrac{1}{2} \left[2(u_1+u_2)-(u_1-u_2)\right]$, which implies that, for $u \in \Gamma$, $$ \|{\mathbf t}\|_\infty = \max \{ |t_1|, |t_2| \} \le \max \left \{|u_1-u_2|, |u_1+u_2| \right \}. $$ Consequently, we end up with $$ I_n^{r,\nu} \le c \int_\Gamma \max \left \{|u_1-u_2|, |u_1+u_2| \right \}^\nu |\Theta_n^*(u)|^{2 r} du. $$ Since $\max\{|a|,|b|\} \le |a|+|b|$, we can replace the $\max\{ ... \}$ term in the integrand by the sum of the two terms. The fact that $\nu \le 2 r -2$ shows that we can cancel $|u_1-u_2|^\nu$, or $|u_1+u_2|^\nu$, with the denominator of $\Theta_n^*$. After this cancellation, the integral can be estimated by considering three cases as in the proof of Theorem \ref{thm:(C,1)}. In fact, the proof follows almost verbatim. For example, in Case 2, we end up, using $u_1 - u_2 \ge 2 u_1/3 $ and $u_1+u_2 \ge u_1$, with \begin{align*} & \int_{3/n}^1 \left[ \frac{\sin \frac{(n+1)(u_1+u_2)\pi}{2}} {\sin \frac{(u_1+u_2)\pi}{2}} \right]^{2r} \left[ \frac{\sin \frac{(n+1)(u_1-u_2) \pi}{2}} { \sin\frac{(u_1-u_2)\pi}{2}} \right]^{2r -\nu} du_1 \\ & \qquad\qquad \qquad\qquad \qquad\qquad \le c \int_{3/n}^1 \frac{1}{u_1^{4r-\nu}} du_1 \le c\, n^{4r-\nu-1}. \end{align*} Consequently, it follows that \begin{align*} \int_{\Gamma_2} |u_1-u_2|^\nu |\Theta_n^*(u)|^{2r} du & = \int_0^{1/n} \int_{3/n}^1 |u_1-u_2|^\nu |\Theta_n^*(u)|^{2r}\, du_1 du_2 \\ & \le c \,n^{4r-\nu -1} \int_{0}^{1/n} \left| \frac{\sin\frac{(n+1)u_2 \pi}{2}} {\sin \frac{u_2\pi}{2}} \right|^{2r} du_2 \le c \, n^{6 r - \nu -2}, \end{align*} upon using $|\sin n u / \sin u| \le n$. The other two cases can be handled similarly. As a result, we conclude that $I_n^{r,\nu} \le c n^{6 r - \nu -2}$. The case $\nu = 0$ gives the upper bound estimate of \eqref{lambda}. The desired estimate is that of the quantity $\lambda_{n,r} I_n^{r,\nu} \le c\, n^{-\nu}$, which follows from the two estimates just established. 
\end{proof} Using the kernel $K_{n,r}$ we can now prove a Jackson estimate: \begin{thm} For $1 \le p \le \infty$ and for each $r = 1,2,\ldots$, there is a constant $c_r$ such that if $f\in L^p$ then $$ E_n (f)_p \le c_r \omega_r (f, \tfrac{1}{n})_p, \qquad n = 1, 2, \ldots . $$ \end{thm} \begin{proof} As in the proof of the classical Jackson estimate for trigonometric polynomials on $[0, 2\pi]$, we consider the following operator $$ F_n^{\rho,r} f ({\mathbf x}):= \int_{\Omega} J_{n,\rho}({\mathbf t}) \sum_{k=1}^r (-1)^{k-1} \binom{r}{k} f({\mathbf x} + k {\mathbf t}) d{\mathbf t}, $$ where $J_{n,\rho}({\mathbf t}) = K_{n^*,\rho}({\mathbf t})$ with $n^* = \lfloor \frac{n}{\rho} \rfloor +1$, and $\rho \ge (r+2)/2$. Evidently, $J_{n,\rho}(-{\mathbf t}) = J_{n,\rho}({\mathbf t})$. Using the fact that $J_{n,\rho} \in {\mathcal H}_n$, we see that $F_n^{\rho,r} f$ can be written as a linear combination of \begin{equation} \label{periodJ} \int_{\Omega} f({\mathbf x} + k {\mathbf t}) \phi_{{\mathbf j}} ({\mathbf t}) d{\mathbf t}, \qquad {\mathbf j} \in {\mathbb H}_n, \quad k =1,\ldots, r. \end{equation} As $f$ is $H$-periodic, so is $f({\mathbf x} + k {\mathbf t})$ as a function of ${\mathbf t}$. Let $F_m = \sum_{{\mathbf j} \in {\mathbb H}_m} a_{\mathbf j} \phi_{\mathbf j}$ denote the $(C,1)$ means of the Fourier series of $f$ over $\Omega$. Then $F_m$ converges to $f$ uniformly on $\Omega$. 
If ${\mathbf j} \ne - k {\mathbf l}$ for every ${\mathbf l} \in {\mathbb Z}_H^3$, then, using the fact that $\phi_{\mathbf l} ({\mathbf x} + k {\mathbf t}) = \phi_{\mathbf l}({\mathbf x}) \phi_{k{\mathbf l}}({\mathbf t})$, we obtain that \begin{align*} \int_\Omega f({\mathbf x} + k {\mathbf t}) \phi_{{\mathbf j}} ({\mathbf t}) d{\mathbf t} &\,= \lim_{m\to \infty} \int_\Omega F_m({\mathbf x} + k {\mathbf t}) \phi_{{\mathbf j}} ({\mathbf t}) d{\mathbf t} \\ & \, = \lim_{m\to \infty} \sum_{{\mathbf l} \in {\mathbb H}_m} a_{\mathbf l} \phi_{\mathbf l}({\mathbf x}) \int_{\Omega} \phi_{k{\mathbf l}}({\mathbf t})\phi_{{\mathbf j}} ({\mathbf t}) d{\mathbf t} = 0. \end{align*} If ${\mathbf j} = - k {\mathbf l}$ for some ${\mathbf l}$, then making the change of variables ${\mathbf x} + k {\mathbf t} = {\mathbf s}$ shows that $$ \int_{\Omega} f({\mathbf x} + k {\mathbf t}) \phi_{{\mathbf j}} ({\mathbf t}) d{\mathbf t} = \int_{\Omega} f({\mathbf x} + k {\mathbf t}) \phi_{{\mathbf l}} (- k {\mathbf t}) d{\mathbf t} = \int_{\Omega} f({\mathbf s}) \phi_{{\mathbf l}} ({\mathbf x} - {\mathbf s}) d{\mathbf s}, $$ which is a trigonometric polynomial in ${\mathbf x}$ in ${\mathcal H}_n$. Consequently, we conclude that $F_n^{\rho,r} f$ is indeed a trigonometric polynomial in ${\mathcal H}_n$. Since $J_{n,\rho}({\mathbf t}) = J_{n,\rho}(-{\mathbf t})$, it follows from (1) of Proposition \ref{prop:modulus} and Minkowski's inequality that \begin{align*} \|F_n^{\rho,r} f -f \|_p &\, \le \left \| \int_\Omega J_{n,\rho} ({\mathbf t}) \Delta_{\mathbf t}^r f(\cdot)d {\mathbf t} \right \|_p \le \int_\Omega J_{n,\rho} ({\mathbf t}) \omega_r(f; \|{\mathbf t}\|)_p d {\mathbf t} \\ &\, \le \omega_r(f; \tfrac{1}{n})_p \int_\Omega J_{n,\rho} ({\mathbf t}) (1+n \|{\mathbf t}\|)^r d {\mathbf t} \le c\, \omega_r(f; \tfrac{1}{n})_p, \end{align*} where the last step follows from Lemma \ref{lem:main}. 
\end{proof} For $1 \le p \le \infty $ and $r=1,2,\ldots$, define $W_p^r$ as the space of $H$-periodic functions all of whose $r$-th order derivatives belong to $L^p$. \begin{cor} For $1 \le p \le \infty$ and $r = 1, 2, \ldots$, if $f \in W_p^{r}$ then $$ E_n (f)_p \le c n^{-r} \sum_{|{\mathbf k}| = r} \|\partial^{\mathbf k} f\|_p, \qquad n = 1,2, \ldots. $$ \end{cor} \subsection{Inverse Theorem} As in the classical approximation theory, the main task for proving an inverse theorem lies in the proof of a Bernstein inequality. For this, we introduce an operator that is of interest in its own right. Let $\eta$ be a nonnegative $C^\infty$ function on ${\mathbb R}$ such that $$ \eta(t) = 1, \quad \hbox{if $0\le t \le 1$}, \quad \hbox{and}\quad \eta(t) = 0, \quad \hbox{if $t\ge 2$}. $$ We then define an operator $\eta_n f$ on $\Omega$ by $$ \eta_n f({\mathbf x}) := \int_\Omega f({\mathbf x}-{\mathbf t}) \eta_n({\mathbf t}) d{\mathbf t}, \qquad \hbox{where}\quad \eta_n({\mathbf t}):= \sum_{k=0}^{2n} \eta\left(\frac{k}{n} \right) D_k ({\mathbf t}), $$ where $D_k$ is the Dirichlet kernel in \eqref{D-kernel}. Evidently, $\eta_n f \in {\mathcal H}_{2n}$ and $\eta_n f = f$ if $f\in {\mathcal H}_n$. Such an operator has been used by many authors, starting from \cite{Kam}. It is applicable to orthogonal expansions in many different settings; see, for example, \cite{X05}. A standard procedure of summation by parts shows that $\|\eta_n f \|_p \le c\, \|f\|_p$ for all $f \in L^p$. Consequently, using the fact that $\eta_n f = f$ for $f \in {\mathcal H}_n$, we have the following result. \begin{prop} For $1 \le p \le \infty$, if $f \in L^p$ then $$ \|\eta_n f - f \|_p \le c E_n (f)_p, \qquad n = 1, 2, \ldots . $$ \end{prop} This shows that for all practical purposes, $\eta_n f$ is as good as the polynomial of best approximation. For our purpose, however, the more important fact is the following near exponential estimate of the kernel function $\eta_n({\mathbf t})$.
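Before stating that estimate, we note that the two defining properties of $\eta_n$ used above, reproduction of polynomials in ${\mathcal H}_n$ and annihilation of frequencies beyond $2n$, are easy to check numerically in the one-dimensional trigonometric analogue, where $\eta_n$ acts as the Fourier multiplier $\eta(|j|/n)$. The following sketch (illustrative code, not part of the text; the particular smooth bump realizing $\eta$ is an arbitrary choice) demonstrates both properties:

```python
import numpy as np

def eta(t):
    """A C-infinity cutoff: 1 on [0, 1], 0 on [2, inf), smooth in between."""
    t = np.asarray(t, dtype=float)
    out = np.ones_like(t)
    mid = (t > 1) & (t < 2)
    s = t[mid] - 1.0                      # s in (0, 1) on the transition
    f0, f1 = np.exp(-1.0 / s), np.exp(-1.0 / (1.0 - s))
    out[mid] = f1 / (f0 + f1)             # standard smooth step, 1 -> 0
    out[t >= 2] = 0.0
    return out

def eta_n(f_vals, n):
    """1-D analogue of eta_n f: the multiplier eta(|j|/n) on Fourier data."""
    M = len(f_vals)
    j = np.fft.fftfreq(M, d=1.0 / M)      # integer frequencies of the grid
    return np.fft.ifft(np.fft.fft(f_vals) * eta(np.abs(j) / n)).real

M, n = 256, 16
t = 2 * np.pi * np.arange(M) / M
f_low  = np.cos(3 * t) + 0.5 * np.sin(11 * t)   # degree 11 <= n: reproduced
f_high = np.cos(40 * t)                          # frequency 40 >= 2n: killed

assert np.max(np.abs(eta_n(f_low, n) - f_low)) < 1e-10
assert np.max(np.abs(eta_n(f_high, n))) < 1e-10
```

Only the values $\eta = 1$ on $[0,1]$ and $\eta = 0$ on $[2,\infty)$ matter for these two identities; the smooth transition is what produces the kernel localization discussed next.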
For $\a \in {\mathbb N}_0^d$, write $|\a|=\a_1+\ldots+\a_d$. \begin{lem} For each $k= 1, 2, \ldots$, there exists a constant $c_k$ that depends on $k$, such that $$ \left|\partial^\alpha \eta_n ({\mathbf t})\right| \le c_k \frac{n^{|\alpha|+2}}{(1+ n \|{\mathbf t}\|)^k}, \qquad {\mathbf t} \in \overline{\Omega}. $$ \end{lem} \begin{proof} The main tool of the proof is the Poisson summation formula \eqref{Poisson} as used in the case of trigonometric series on the real line in \cite{PX}. Let us introduce the notation $|{\mathbf t}|_H := \max\{|t_1|, |t_2|, |t_3|\}$. Since ${\mathbb J}_n = \{ {\mathbf j} \in {\mathbb H}_n: \hbox{ $|j_1| = n$ or $|j_2|=n$ or $|j_3| =n$} \}$, we can write $\eta_n({\mathbf t})$ as $$ \eta_n ({\mathbf t}) = \sum_{k=0}^{2n} \eta\left(\frac{k}{n} \right) \sum_{{\mathbf j} \in {\mathbb J}_k} \phi_{\mathbf j}({\mathbf t}) = \sum_{{\mathbf j} \in {\mathbb H}_{2n}} \eta\left( \frac{|{\mathbf j}|_H}{n} \right) \phi_{\mathbf j}({\mathbf t}). $$ In particular, for $\a \in {\mathbb N}_0^3$, we have $$ \partial^\a \eta_n({\mathbf t}) = \left(\tfrac{2 \pi i}{3}\right)^{|\a|} \sum_{{\mathbf j} \in {\mathbb H}_{2n}} \eta\left( \frac{|{\mathbf j}|_H}{n} \right) {\mathbf j}^\a \phi_{\mathbf j}({\mathbf t}). $$ An immediate consequence of this expression is that \begin{equation} \label{eta-bound1} \left |\partial^\a \eta_n({\mathbf t}) \right| \le c \|\eta\|_\infty \sum_{{\mathbf j} \in {\mathbb H}_{2n}} |{\mathbf j}^\a| \le c \|\eta\|_\infty n^{|\a|+2}. \end{equation} Define $\Phi_n$ such that $\widehat \Phi_n ({\mathbf t}) = \eta \left(\frac{|{\mathbf t}|_H}{n}\right) {\mathbf t}^\a$. Then $ \Phi_n(\xi) = \int_{{\mathbb R}^3_H} \widehat \Phi_n({\mathbf t}) e^{\frac{2\pi i}{3} \xi \cdot {\mathbf t}} d{\mathbf t}.
$ The definition of $\Phi_n$ and the Poisson summation formula show that \begin{equation} \label{eta-bound2} \left(\tfrac{3}{2\pi i}\right)^{|\a|} \partial^\a \eta_n ({\mathbf t}) = \sum_{{\mathbf j} \in {\mathbb H}_{2n}} \widehat \Phi_n({\mathbf j}) \phi_{\mathbf j}({\mathbf t}) = 2 \sqrt{3} \sum_{{\mathbf j} \in {\mathbb Z}_H^3} \Phi_n ({\mathbf t} + 3 {\mathbf j}). \end{equation} In order to estimate the right hand side we first derive an upper bound for $\Phi_n$. Since $\eta$ is a $C^\infty$ function and $|{\mathbf t}|_H$ is differentiable except when one of the variables is zero, integration by parts gives $$ \left(\tfrac{2 \pi i}{3}\right)^{|{\beta}|} {\mathbf t}^{\beta} \Phi_n({\mathbf t}) = \int_{{\mathbb R}^3_H} \partial^{\beta} \widehat\Phi_n({\mathbf s}) e^{\frac{2\pi i}{3} {\mathbf s} \cdot {\mathbf t}} d{\mathbf s} = \int_{{\mathbb R}^3_H} \partial^{\beta} \left [ \eta \left(\tfrac{|{\mathbf s}|_H}{n}\right) {\mathbf s}^\a \right ] e^{\frac{2\pi i}{3} {\mathbf s} \cdot {\mathbf t}} d{\mathbf s}. $$ Each derivative of $\eta\left(\frac{|{\mathbf t}|_H}{n}\right)$ yields a factor of $n^{-1}$. For $\beta \in {\mathbb N}_0^3$ and $k:=|{\beta}| > |\a|$, we have \begin{align*} \partial^{\beta} \left [ \eta \left(\tfrac{|{\mathbf t}|_H}{n}\right) {\mathbf t}^\a \right ] & = \sum_{|{\gamma}| \le k} \binom{k}{{\gamma}} n^{-k+|{\gamma}|} \eta^{(k-|{\gamma}|)} \left(\tfrac{|{\mathbf t}|_H}{n}\right) \partial^{{\gamma}} {\mathbf t}^\a \\ & = \sum_{|{\gamma}| \le |\a|} \binom{k}{{\gamma}} n^{-k+|{\gamma}|} \eta^{(k-|{\gamma}|)} \left(\tfrac{|{\mathbf t}|_H}{n}\right) \frac{\a!}{{\gamma}!} {\mathbf t}^{\a -{\gamma}}, \end{align*} where we have used the multi-index notations that for $k \in {\mathbb N}$ and $\a \in {\mathbb N}_0^d$, $\alpha! = \a_1! \ldots \a_d!$ and $\binom{k}{\a} = k!/ (\a! (k-|\a|)!)$.
Hence, as $\eta^{(j)}$ is supported on $[1,2]$ for $j \ge 1$, we deduce that \begin{align*} \left| \left(\tfrac{2 \pi i}{3}\right)^{k} {\mathbf t}^{\beta} \Phi_n({\mathbf t}) \right | & \le c \, n^{-|{\beta}|+|\a|} \sum_{|{\gamma}|\le |\a|}\| \eta^{(k-|{\gamma}|)}\|_\infty \int_{n \le |{\mathbf s}|_H \le 2 n}d{\mathbf s} \\ & \le c \, n^{-k+|\a| +2} \sum_{j= k-|\a|}^{k} \|\eta^{(j)}\|_\infty. \end{align*} Together with \eqref{eta-bound1}, we then conclude that $$ \left| \Phi_n({\mathbf t}) \right | \le c_k \frac{n^{|\a|+2}}{(1+ n \|{\mathbf t}\|)^{k} }, \qquad c_k = c \max_{k-|\a| \le j\le k} \|\eta^{(j)}\|_\infty. $$ As a result of this estimate, we conclude from \eqref{eta-bound2} that $$ \left| \partial^\a \eta_n({\mathbf t})\right| \le c \sum_{{\mathbf j} \in {\mathbb Z}_H^3} |\Phi_n({\mathbf t} + 3 {\mathbf j})| \le c_k \sum_{{\mathbf j} \in {\mathbb Z}_H^3}\frac{n^{|\a|+2}}{(1+ n \|{\mathbf t}+ 3 {\mathbf j}\|)^{k} }. $$ Since $\|{\mathbf t}\| = \max\{|t_1|, |t_2|\} \le 1$ for ${\mathbf t} \in \Omega$, we have $\|{\mathbf t} + 3 {\mathbf j}\| \ge 3 \|{\mathbf j}\| - 1 \ge 2\|{\mathbf j}\|$ if ${\mathbf j} \ne 0$, and thus $1+ n \|{\mathbf t} + 3 {\mathbf j}\| \ge (1+ n) \|{\mathbf j}\|$. Consequently $$ \left| \partial^\a \eta_n({\mathbf t})\right| \le c_k \frac{n^{|\a|+2}}{(1+ n \|{\mathbf t}\|)^{k} } + c_k \frac{n^{|\a|+2}}{(1+n)^k} \sum_{0 \ne {\mathbf j} \in {\mathbb Z}_H^3}\frac{1}{\|{\mathbf j}\|^{k} } \le c_k \frac{n^{|\a|+2}}{(1+ n \|{\mathbf t}\|)^{k} } $$ which completes the proof. \end{proof} \begin{rem} The proof of the above estimate relies essentially on the Poisson summation formula, which is known to hold for every lattice $A{\mathbb Z}^d$ (see, for example, \cite{H, LSX}). Thus, there is a straightforward extension of the above result with an appropriate definition of partial sums. For relevant results on lattices, see \cite{CS}. \end{rem} As an application of this estimate, we can now establish the Bernstein inequality for hexagonal trigonometric polynomials.
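In the univariate case the resulting inequality is the classical Bernstein inequality $\|S_n'\|_\infty \le n \|S_n\|_\infty$. As a quick numerical sanity check of that one-dimensional statement (a sketch on a fine sampling grid, not part of any proof here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 12, 4096
t = 2 * np.pi * np.arange(M) / M

# a random real trigonometric polynomial of degree n and its derivative,
# both sampled on a fine uniform grid of [0, 2*pi)
a = rng.standard_normal(n + 1)
b = rng.standard_normal(n + 1)
S  = sum(a[k] * np.cos(k * t) + b[k] * np.sin(k * t) for k in range(n + 1))
dS = sum(k * (b[k] * np.cos(k * t) - a[k] * np.sin(k * t)) for k in range(n + 1))

# Bernstein: ||S'||_infty <= n ||S||_infty (small slack for grid sampling)
ratio = np.max(np.abs(dS)) / np.max(np.abs(S))
assert 0 < ratio <= 1.05 * n
```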
\begin{thm} If $\a \in {\mathbb N}_0^3$, then for $1 \le p \le \infty$ there is a constant $c_p$ such that $$ \left \| \partial^\a S_n \right \|_p \le c_p n^{|\a|} \|S_n\|_p, \qquad \hbox{for all $S_n \in {\mathcal H}_n$}. $$ \end{thm} \begin{proof} Recall that $\eta_n f \in {\mathcal H}_{2n}$ and $\eta_n f =f $ for $f \in {\mathcal H}_n$. We then have $$ S_n({\mathbf t}) = (\eta_n S_n)({\mathbf t}) = \int_{\Omega} S_n({\mathbf s}) \eta_n( {\mathbf t} - {\mathbf s}) d {\mathbf s}. $$ For $ p =1$ and $p = \infty$, we apply the previous lemma with $k =4$ to obtain \begin{align*} \left\|\partial^\a S_n \right\|_p & \le \| S_n\|_p \int_{\Omega} |\partial^\a \eta_n({\mathbf t})| d {\mathbf t} \le c \|S_n\|_p \int_{\Omega} \frac{n^{|\a|+2}}{(1+ n\|{\mathbf t}\|)^4} d{\mathbf t} \\ & \le c n^{|\a|} \|S_n\|_p \int_{{\mathbb R}^2} \frac{1}{(1+ \|{\mathbf t}\|)^4} dt_1d t_2 \le c n^{|\a|} \|S_n\|_p, \end{align*} which establishes the stated inequality for $p =1$ and $p=\infty$. The case $1 < p < \infty$ follows from the case of $p =1$ and $p=\infty$ by interpolation. \end{proof} It is a standard argument by now that the Bernstein inequality yields the inverse theorem. \begin{thm} There exists a constant $c_r$ such that for each function $f\in L^p$, $1 \le p \le \infty$, $$ \omega_r(f; h)_p \le c_r h^r \sum_{0 \le n\le h^{-1}} (n+1)^{r-1} E_n (f)_p. $$ \end{thm} \section{Approximation on Triangle} \setcounter{equation}{0} The hexagon is invariant under the reflection group ${\mathcal A}_2$, generated by the reflections in its three pairs of parallel edges. In homogeneous coordinates, the three reflections $\sigma_1$, $\sigma_2$ and $\sigma_3$ are given by $$ {\mathbf t} \sigma_1 := -(t_1,t_3,t_2), \quad {\mathbf t} \sigma_2 := -(t_2,t_1,t_3), \quad {\mathbf t}\sigma_3:= -(t_3,t_2,t_1).
$$ Indeed, for example, the reflection in the direction $(\sqrt{3},1)$ becomes the reflection in the direction of $\alpha = (1,-2,1)$ in ${\mathbb R}_{\mathbb H}^3$, which is easy to see, using $t_1+t_2+t_3 =0$, as given by ${\mathbf t}- 2 \frac{\langle \alpha,{\mathbf t} \rangle}{\langle \alpha,\alpha \rangle} \alpha = -(t_2,t_1,t_3) = {\mathbf t} \sigma_2$. The reflection group ${\mathcal A}_2$ is given by ${\mathcal A}_2 =\{1, \sigma_1,\sigma_2, \sigma_3, \sigma_1\sigma_2,\sigma_2\sigma_1\}$. Define operators $\CP^+$ and $\CP^-$ acting on functions $f({\mathbf t})$ by \begin{equation} \label{CP^+} \CP^\pm f({\mathbf t}) = \frac{1}{6} \left[f({\mathbf t}) + f({\mathbf t} \sigma_1\sigma_2)+ f({\mathbf t} \sigma_2\sigma_1) \pm f({\mathbf t} \sigma_1) \pm f({\mathbf t} \sigma_2) \pm f({\mathbf t} \sigma_3) \right]. \end{equation} They are projections from the class of $H$-periodic functions onto the classes of invariant and anti-invariant functions under ${\mathcal A}_2$, respectively. The action of these operators on elementary exponential functions was studied in \cite{K}, and more recently in \cite{Sun, LS} and in \cite{LSX}. Applied to $\phi_{\mathbf k}({\mathbf t})$, they yield the functions $$ {\mathsf {TC}}_{\mathbf k}({\mathbf t}) := \CP^+ \phi_{\mathbf k}({\mathbf t}), \qquad \hbox{and} \qquad {\mathsf {TS}}_{\mathbf k}({\mathbf t}): = \frac{1}{i} \CP^- \phi_{\mathbf k}({\mathbf t}) $$ which we call a generalized cosine and a generalized sine, respectively. For invariant functions, we can translate the results over the regular hexagon to those over one of its six equilateral triangles. We choose the triangle as \begin{align} \label{Delta} \Delta := & \{(t_1,t_2,t_3) : t_1 + t_2 + t_3 =0, 0 \le t_1, t_2, -t_3 \le 1\}\\ = & \{(t_1,t_2): t_1, t_2 \ge 0, \, t_1+t_2 \le 1\}.
\notag \end{align} It is known that the generalized cosines ${\mathsf {TC}}_{\mathbf k}$ are orthogonal with respect to the inner product $$ \langle f, g \rangle_\Delta := \frac{1}{|\Delta|}\int_\Delta f({\mathbf t})\overline{g({\mathbf t})} d{\mathbf t} = 2 \int_\Delta f(t_1,t_2) \overline{g(t_1,t_2)} dt_1 dt_2, $$ so that we can consider orthogonal expansions in terms of generalized cosine functions, $$ f \sim \sum_{{\mathbf k} \in \Lambda} \widehat f_{\mathbf k} {\mathsf {TC}}_{\mathbf k}, \qquad \widehat f_{\mathbf k} = \langle f, {\mathsf {TC}}_{\mathbf k} \rangle_\Delta, $$ where $\Lambda: = \{{\mathbf k} \in {\mathbb H}: k_1 \ge 0, k_2 \ge 0, k_3 \le 0\}$. It is known that ${\langle} f,g\ra_H = {\langle} f, g \ra_\Delta$ if $f \bar g$ is invariant. If $f$ is ${\mathcal A}_2$ invariant and $H$-periodic, then it can be expanded into the generalized cosine series and it can be approximated from the space $$ {\mathcal T\!C}_n^* = \operatorname{span} \{{\mathsf {TC}}_{\mathbf k}: {\mathbf k} \in \Lambda, - k_3 \le n\}. $$ This is similar to the situation in classical Fourier series, in which even functions can be expanded in cosine series and approximated by polynomials of cosine. We state one theorem as an example. \begin{thm} \label{thm:trig} If $f \in C(\Delta)$ is the restriction of an ${\mathcal A}_2$ invariant function in $C(\overline{\Omega})$, then the $(C,1)$ means of its generalized cosine series converge uniformly to $f$ on $\Delta$. \end{thm} We take this theorem as an example because the meaning of the $(C,1)$ means of the generalized cosine series should be clear and we do not need to introduce any new definition or notation. The requirement on $f$ in the theorem may look redundant, but a moment of reflection shows that merely $f \in C(\Delta)$ is not enough.
Indeed, in classical Fourier analysis, the even extension of a function $f$ defined on $[0,\pi]$ (by $f(-x) = f(x)$) satisfies $f(-\pi) = f(\pi)$, so that it is automatically a continuous $2\pi$-periodic function if $f$ is continuous. The $H$-periodicity, however, imposes a much stronger restriction on the function. Indeed, for a function $f$ defined on $\Delta$, we can extend it to a function $F$ defined on $\Omega$ by ${\mathcal A}_2$ symmetry. That is, we define $$ F({\mathbf t}) = f ({\mathbf t} \sigma), \qquad {\mathbf t} \in \Delta \sigma, \quad \sigma \in {\mathcal A}_2. $$ It is evident that $\overline{\Omega} = \cup_{{\sigma} \in {\mathcal A}_2} \Delta {\sigma}$. In order for $F$ to be continuous and $H$-periodic, the restrictions of $F$ to opposite boundary edges of $\Omega$ must be equal. Let $\partial_\Omega \Delta $ denote the part of the boundary of $\Delta$ that is also a part of the boundary of $\Omega$. Then $\partial_\Omega \Delta : = \{(t_1,t_2, -1): t_1, t_2 \ge 0, t_1+t_2 =1\}$. Upon examining the explicit formulas of the elements ${\sigma} \in {\mathcal A}_2$, we see that $F$ being continuous and $H$-periodic requires that $$ F({\mathbf t}) = F({\mathbf t} {\sigma}_2), \quad F({\mathbf t} {\sigma}_1) = F({\mathbf t} {\sigma}_2{\sigma}_1), \quad F({\mathbf t} {\sigma}_3) = F({\mathbf t} {\sigma}_1{\sigma}_2), \quad {\mathbf t} \in \partial_\Omega \Delta. $$ In terms of $f$ this means \begin{align} \label{f-periodic} & f(t_1,t_2, -1) = f(-t_2,-t_1,1), \quad f(t_2, -1,t_1) = f(-t_1, 1, -t_2), \\ & f(-1,t_1,t_2) = f(1,-t_2,-t_1), \qquad \hbox{for $t_1+t_2 =1$ and $t_1,t_2 \ge 0$}. \notag \end{align} Hence, only functions that satisfy the restrictions \eqref{f-periodic} can be extended to ${\mathcal A}_2$ invariant functions on $\Omega$ that are also $H$-periodic and continuous on $\Omega$.
As a result, we can replace the assumption on $f$ in Theorem \ref{thm:trig} by {\it $f \in C(\Delta)$ and $f$ satisfies \eqref{f-periodic}}. It does not seem easy to classify all functions that satisfy \eqref{f-periodic}; the conditions are clearly satisfied, for example, if $f$ is the restriction of a function with ${\mathcal A}_2$ symmetry.
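The orthogonality of the generalized cosines over $\Delta$ stated above can also be checked numerically. The sketch below is an illustration, not a proof; it assumes the normalization $\phi_{\mathbf j}({\mathbf t}) = e^{2\pi i\, {\mathbf j}\cdot{\mathbf t}/3}$ in homogeneous coordinates and writes out the six group elements of ${\mathcal A}_2$ explicitly:

```python
import numpy as np

# The six elements of A_2 on homogeneous coordinates (t1, t2, t3) with
# t3 = -t1 - t2: identity, the two rotations, and the three reflections.
GROUP = [
    lambda t: t,
    lambda t: (t[2], t[0], t[1]),
    lambda t: (t[1], t[2], t[0]),
    lambda t: (-t[0], -t[2], -t[1]),
    lambda t: (-t[1], -t[0], -t[2]),
    lambda t: (-t[2], -t[1], -t[0]),
]

def TC(k, t1, t2):
    """Generalized cosine TC_k = P^+ phi_k, phi_j(t) = exp(2*pi*i*j.t/3)."""
    t = (t1, t2, -t1 - t2)
    total = 0.0j
    for g in GROUP:
        s = g(t)
        total = total + np.exp(2j * np.pi / 3 * (k[0]*s[0] + k[1]*s[1] + k[2]*s[2]))
    return total / 6.0

# midpoint rule on the triangle t1, t2 >= 0, t1 + t2 <= 1
N = 400
xs = (np.arange(N) + 0.5) / N
X, Y = np.meshgrid(xs, xs)
mask = X + Y <= 1.0
T1, T2 = X[mask], Y[mask]

def inner(j, k):
    """<f, g>_Delta = 2 * integral over the triangle of f * conj(g)."""
    vals = TC(j, T1, T2) * np.conj(TC(k, T1, T2))
    return 2.0 * vals.sum() / N**2

# indices from distinct orbits give (numerically) orthogonal functions
assert abs(inner((1, 0, -1), (2, 0, -2))) < 2e-2
assert inner((1, 0, -1), (1, 0, -1)).real > 0.1
```

The tolerance absorbs the midpoint-rule error along the hypotenuse; refining $N$ drives the off-diagonal inner product toward zero.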
https://arxiv.org/abs/2108.12034
Optimal Point Sets Determining Few Distinct Angles
We characterize the largest point sets in the plane which define at most 1, 2, and 3 angles. For $P(k)$ the largest size of a point set admitting at most $k$ angles, we prove $P(2)=5$ and $P(3)=5$. We also provide the general bounds of $k+2 \leq P(k) \leq 6k$, although the upper bound may be improved pending progress toward the Weak Dirac Conjecture. Notably, it is surprising that $P(k)=\Theta(k)$ since, in the distance setting, the best known upper bound on the analogous quantity is quadratic and no lower bound is well-understood.
\section{Introduction} \subsection{Background} In 1946, Erd\H{o}s introduced the problem of finding asymptotic bounds on the minimum number of distinct distances among sets of $n$ points in the plane \cite{ErOg}. The Erd\H{o}s distance problem, as it has become known, proved infamously difficult and was only finally (essentially) resolved by Guth and Katz in 2015 \cite{GuthKatz}. The Erd\H{o}s distance problem has also spawned a wide variety of related questions, including the problem of finding maximal point sets with at most $k$ distinct distances. Characterizing the largest possible point sets satisfying a given property in this way is a classic problem in discrete geometry. As another example, Erd\H{o}s introduced the problem of finding maximal point sets in which all triangles are isosceles in 1947 \cite{ErKeIsosSets}. Ionin completely answers this question in Euclidean space of dimension at most $7$ \cite{Io}. Erd\H{o}s and Fishburn determine maximal planar sets with at most $k$ distinct distances \cite{ErFi}. Recent results by Sz\"oll\H{o}si and \"Osterg\aa rd classify the maximal 3-distance sets in $\mathbb{R}^4$, 4-distance sets in $\mathbb{R}^3$, and 6-distance sets in $\mathbb{R}^2$ \cite{SzOs}. In \cite{ELMP, BrDePaSe, BrDePaSt}, point sets with a low number of distinct triangles in Euclidean space are investigated. Along these lines, we consider the related problem of maximal planar point sets admitting at most $k$ distinct angles in $(0, \pi)$. We ignore angles of $0$ and $\pi$ so as to align our convention with related research (see \cite{PaSha}, for example), although we provide results including the $0$ angle as corollaries. We completely answer this question for $k=2$ and $k=3$, and provide asymptotically tight linear bounds for $k > 3$. In answering this question for $k=2$ and $k=3$, we systematically consider all possible triangles in such configurations and then reduce, by geometric casework, to adding points in a finite number of positions.
We provide linear asymptotic bounds using bounds on the related problem of the minimum number of distinct angles among $n$ non-collinear points in the plane. \subsection{Definitions and Results} By convention, we only count angles of magnitude strictly between $0$ and $\pi$. Our computations still answer the related optimal point configuration questions including $0$ angles (see Corollaries \ref{cor: p2-with-0}, \ref{cor: p3-with-0}). We begin by introducing convenient notation: \begin{definition} Let $\mathcal{P} \subset \mathbb{R}^2$. Then \[ A(\mathcal{P}) \coloneqq \#\{|\angle abc|\in(0,\pi) \,:\, a,b,c \text{ distinct, } a,b,c \in \mathcal{P}\}. \] \end{definition} Now we define the quantity we are interested in studying. \begin{definition} \[ P(k) \coloneqq \max \{\# \mathcal{P} \,:\, \mathcal{P} \subseteq \mathbb{R}^2, \text{ not all points in }\mathcal{P}\text{ are collinear, } A(\mathcal{P}) \leq k\}. \] \end{definition} We first provide general linear lower and upper bounds for $P(k)$. In particular, we have the following theorem. \begin{theorem}\label{thm: linear bounds on P(k)} For all $k \geq 1$, \begin{alignat*}{2} 2k+3 \, &\leq \, P(2k) &&\leq \, 12k \\ 2k+3 \, &\leq \, P(2k+1) &&\leq \, 12k + 6. \end{alignat*} \end{theorem} In the distance setting, the best known upper bound on the analogous parameter is the quadratic $(2+k)(1+k)$, and no lower bound is well-understood \cite{SzOs}. It is therefore interesting and surprising that we find $P(k)=\Theta(k)$ in the angle setting. We prove Theorem \ref{thm: linear bounds on P(k)} in Section \ref{sec: gen bds}. Furthermore, we explicitly compute $P(1), P(2)$, and $P(3)$ and exhaustively identify all maximal point configurations for each. \begin{prop} \label{thm: P(1) = 3} We have $P(1) = 3$, and the equilateral triangle is the unique maximal configuration. \end{prop} In order to have only a single angle, every triangle formed by three points of the configuration must be equilateral.
As this is impossible for any non-collinear configuration other than the vertices of an equilateral triangle, $P(1) = 3$. $P(2)$ and $P(3)$ are considerably less trivial quantities. We calculate $P(2)$ and $P(3)$ via exhaustive casework, simultaneously characterizing all of the unique optimal point configurations up to rigid motions and dilations about the center of the configuration. We proceed by first considering sets of three points and then searching for what additional points may be added without determining too many angles. We prove Theorem \ref{thm: P(2) = 5} in Section \ref{sec: P(2)} and Theorem \ref{thm: P(3) = 5} in Section \ref{sec: P(3)}. \begin{theorem} \label{thm: P(2) = 5} We have $P(2) = 5$. Moreover, the unique optimal point configuration is four vertices in a square with a fifth point at its center (see A in Figure \ref{fig: 3.thm}). \end{theorem} \begin{theorem} \label{thm: P(3) = 5} We have $P(3) = 5$. There are 5 unique optimal configurations, shown in Figure \ref{fig: 3.thm}. \end{theorem} \begin{figure}[h!] \centering \includegraphics{diagram-20210822.pdf} \caption{Optimal Two and Three Angle Configurations. $\alpha = \frac{\pi}{5}, \beta = \frac{2\pi}{5}, \gamma = \frac{3\pi}{5}$.} \label{fig: 3.thm}\end{figure} \section{General Bounds} \label{sec: gen bds} Although one may in principle calculate $P(k)$ for any $k$ by extensive casework (as we later calculate $P(2)$ and $P(3)$), it quickly becomes overwhelming. As such, we instead provide general bounds on $P(2k)$ and $P(2k+1)$. Note that the construction of a square with a point in the center is no accident in the case of $P(2)$. Indeed, adding a point in the center of a regular $2k$-gon introduces no additional angles. This is because, in a regular $2k$-gon, the line from the center through any vertex passes through the opposite vertex. As such, the only additional angles that may be added from the center point are those with the center point as the center of the angle.
For the other angles including it as an endpoint, choosing the point on the other end of the line through the center gives an equal angle. Moreover, the angles formed with the center point as the center of the angle are precisely $i\pi/k$ for $1 \leq i \leq k-1$, which are already achieved among the other points of the regular $2k$-gon. So, using the regular $(2k+2)$-gon with a point added in the center yields the following lemma. \begin{lemma} \label{lem: bd on P(2k)} We have $P(2k) \geq 2k+3$. \end{lemma} Moreover, in the case of $P(2k+1)$, the regular $(2k+3)$-gon and the projection of a regular $(2k+3)$-gon onto a line (via a stereographic-like projection from a cap vertex) both achieve $2k+1$ angles, providing a bound on $P(2k+1)$. \begin{lemma} \label{lem: bd on P(2k+1)} We have $P(2k+1) \geq 2k+3$. \end{lemma} \begin{prop} If we wish to also count the 0-angle, then we may not add the center to an even polygon, and in general we reach a bound of $P(k)\geq k+2$. \end{prop} We conjecture that both of these lower bounds are tight in general. Nonetheless, we provide a linear upper bound. We achieve this bound as a corollary of a lower bound on the number of distinct angles, using progress on the Weak Dirac Conjecture. In 1961, Erd\H{o}s \cite{ErConj} conjectured the following, based on an earlier, more difficult conjecture of Dirac: \begin{conj}[Erd\H{o}s, 1961] \label{conj: weak dirac} Every set $\mathcal{P}$ of $n$ non-collinear points in the plane contains a point incident to at least $\left \lceil n/2 \right \rceil$ lines of $\mathcal{L(P)}$, where $\mathcal{L(P)}$ is the set of lines formed by points in $\mathcal{P}$. \end{conj} While Conjecture \ref{conj: weak dirac} has not been proven, significant progress has been made. Let $\ell(n)$ be the largest proven lower bound for Conjecture \ref{conj: weak dirac}; that is, every set $\mathcal{P}$ of $n$ non-collinear points in the plane contains a point incident to at least $\ell(n)$ lines of $\mathcal{L(P)}$.
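The constructions behind Lemmas \ref{lem: bd on P(2k)} and \ref{lem: bd on P(2k+1)} are easy to verify by brute force. The sketch below (illustrative code, not part of the argument) counts the distinct angles in $(0,\pi)$ determined by a regular $(2k+2)$-gon together with its center, confirming that the configuration has $2k+3$ points but only $2k$ angles:

```python
import itertools, math

def num_distinct_angles(pts, tol=1e-7):
    """Count distinct angles in (0, pi) determined by a planar point set."""
    vals = []
    for b in pts:
        for a, c in itertools.combinations([p for p in pts if p != b], 2):
            v = (a[0] - b[0], a[1] - b[1])
            w = (c[0] - b[0], c[1] - b[1])
            cosang = (v[0]*w[0] + v[1]*w[1]) / (math.hypot(*v) * math.hypot(*w))
            th = math.acos(max(-1.0, min(1.0, cosang)))
            if tol < th < math.pi - tol:      # drop 0 and pi (collinear)
                vals.append(th)
    vals.sort()
    # cluster nearly-equal floating-point values
    count, prev = 0, None
    for th in vals:
        if prev is None or th - prev > tol:
            count += 1
        prev = th
    return count

def gon_with_center(m):
    """Vertices of a regular m-gon on the unit circle, plus its center."""
    pts = [(math.cos(2*math.pi*i/m), math.sin(2*math.pi*i/m)) for i in range(m)]
    return pts + [(0.0, 0.0)]

# regular (2k+2)-gon plus center: 2k+3 points, exactly 2k distinct angles
assert num_distinct_angles(gon_with_center(4)) == 2   # k = 1: square + center
assert num_distinct_angles(gon_with_center(6)) == 4   # k = 2
assert num_distinct_angles(gon_with_center(8)) == 6   # k = 3
```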
We have $\ell(n) \geq \left \lceil n/3 \right \rceil +1$ from \cite{Ha}. Let $A(n)$ be the minimum number of distinct angles among $n$ non-collinear points in the plane. We have the following lemma. \begin{lemma} \label{thm: lower bd distinct angles, no res} For $n > 3$, $A(n) \geq \frac{\ell(n) - 1}{2} \geq n/6$. \end{lemma} \begin{proof} Fix a set $\mathcal{P}$ of $n$ non-collinear points in the plane. Let $p$ be a point in $\mathcal{P}$ incident to at least $\ell(n)$ lines of $\mathcal{L(P)}$. Fix a point $q \neq p$ in $\mathcal{P}$. It shares exactly one line with $p$. Note that for a fixed nonzero angle $\theta < \pi$, there are exactly two lines through $p$ on which a point $r$ must lie in order for $\angle qpr = \theta$. As such, since $p$ is incident to $\ell(n) - 1$ lines not containing $q$, $p$ is the center of at least $(\ell(n) - 1)/2$ distinct angles. Therefore \[ A(n) \geq \frac{\ell(n) - 1}{2}. \] We have $\ell(n) \geq \left \lceil n/3 \right \rceil +1$ from \cite{Ha}. As such, we have $A(n) \geq n/6$, as desired. \end{proof} Note that such a use of the Weak Dirac Conjecture is known; see \cite{BMP}, Section 6.2. \begin{cor} \label{cor: bound on P(k)} We have $P(k) \leq 6k$. \end{cor} \begin{proof} Since $A(n) \geq n/6$ by Lemma \ref{thm: lower bd distinct angles, no res}, we have $P(k) \leq 6k$, as any point configuration with at least $6k + 1$ points defines at least $k+1$ angles. \end{proof} \section{Computing $P(2) = 5$} \label{sec: P(2)} \begin{proof}[Proof of Theorem \ref{thm: P(2) = 5}] In any point configuration with at least three points, there are triangles. For any point configuration with at most two angles, all triangles must be isosceles. We divide into two cases, based on whether or not there is an equilateral triangle. \subsection{There is an equilateral triangle} We consider adding a fourth point in cases (Figure \ref{2.equil}). \begin{figure}[h!]
\centering \includegraphics[scale=0.8]{diagram-20210822-2.pdf} \caption{Equilateral Triangle Regions} \label{2.equil} \end{figure} \begin{description} \item[\textit{Case 1}] $p \in A$.\\ Then $\angle acp < \pi/3$ and $\angle cap > \pi/3$, leading to more than two angles. \item[\textit{Case 2}] $p \in \overline{ab}$.\\ Then $\angle bcp < \pi/3$ and at least one of $\angle cpb$ and $\angle apc$ is at least $\pi/2$, leading to more than two angles. \item[\textit{Case 3}] $p \in \dvec{ac}$ to the upper-right of $a$.\\ Then $\angle cbp > \pi/3$ and $\angle cpb < \pi/3$, again leading to more than two angles. \item[\textit{Case 4}] $p \in B$.\\ In this case, $\angle cbp > \pi/3$ and $\angle cpb < \pi/3$, leading to more than two angles. \item[\textit{Case 5}] $p \in \triangle{abc}$.\\ In this case, at least one of $\angle apb, \angle bpc, \angle cpa$ is at least $2\pi/3$ and $\angle acp < \pi/3$, leading to more than two angles. \end{description} Up to symmetry, these cases are exhaustive. Thus if there is an equilateral triangle in the configuration, the configuration contains at most three points. \subsection{There is no equilateral triangle} Now, let $a$, $b$, and $c$ be the vertices of an isosceles triangle with vertex angle $\alpha$, base angle $\beta$, and $a$ the apex vertex. We reduce the number of possibilities for additional points by partitioning the plane into regions $A_i$ (Figure \ref{2.regions}). \begin{figure}[h!] \centering \includegraphics{diagram-20210822-3.pdf} \caption{Isosceles Triangle Regions.} \label{2.regions} \end{figure} Note that we may assume without loss of generality that no fourth point lies within $\triangle abc$, as we could then choose that triangle as our initial triangle. Also note that $A_1'$ and $A_3'$ are equivalent to $A_1$ and $A_3$, respectively, up to symmetry. \begin{description} \item[\textit{Case 1}] $p \in A_1$.\\ In this case, $\angle pab > \alpha$ and $\angle pcb > \beta$. So, regardless of whether $\alpha$ or $\beta$ is greater, adding $p$ introduces an additional angle.
So, no additional points can be in $A_1$ or $A_1'$. \item[\textit{Case 2}] $p \in A_2$.\\ In this case, $\angle pcb$ and $\angle pbc$ are greater than $\beta$, so both must be $\alpha$ to not add additional angles. But then $\angle cpb = \pi - 2\alpha$ must equal $\alpha$ or $\beta$, and either case (using $\alpha + 2\beta = \pi$) forces $3 \alpha = \pi$, making $\triangle pcb$ an equilateral triangle. Thus no points may be added in this case. \item[\textit{Case 3}] $p \in A_3$ (or $A_3'$ by symmetry). \\ In this case, $\angle bap > \alpha$ and $\angle abp > \beta$, so there is an additional angle added regardless and no additional points are possible. \item[\textit{Case 4}] $p \in A_4$. \\ In this case, $\angle cap, \angle bap < \alpha$, so both must equal $\beta$. Therefore, $2 \beta = \alpha$, which implies $\beta = \pi/4$ and $\alpha = \pi /2$. Moreover, since $\angle acp$ and $\angle abp$ are greater than $\beta$, they must both equal $\alpha = \pi/2$. So, the only possibility for an addable point in this case is for $p$ to be the fourth vertex of the square $acpb$. \item[\textit{Case 5}] $p \in \dvec{bc}$. \\ If $p$ is on $\dvec{bc}$ between $b$ and $c$, then $\angle cap, \angle bap < \alpha$. In order for these not to introduce additional angles, they must both be equal to $\beta$. This implies $\beta = \pi/4$ and $\alpha = \pi/2$, and $p$ is the midpoint of the side $bc$. If $p \in \dvec{bc}$ to the left of $c$ (or by symmetry, right of $b$), $\angle bap > \alpha$ and thus $\angle bap = \beta$, so that $\beta > \alpha$. Since $2\beta + \alpha = \pi$, $\beta < \pi/2$. But then $\angle acp > \pi/2 > \beta > \alpha$. Thus there is exactly one point possible on line $\dvec{bc}$, the midpoint of the edge between $b$ and $c$. \item[\textit{Case 6}] $p \in \dvec{ac}$ (or $p \in \dvec{ab}$).\\ If $p$ is between $a$ and $c$, then $\angle cbp < \beta$ and thus $\angle cbp = \alpha$. But, as before, $\beta < \pi/2$. Moreover, one of $\angle bpc$ or $\angle bpa$ is at least $\pi/2 > \beta > \alpha$.
Thus there are too many angles in this case. If $p$ is to the bottom left of $c$, $\angle apb < \beta$ and thus $\angle apb = \alpha$. But, again, either $\angle bca$ or $\angle bcp$ is greater than $\pi/2 > \beta$, creating too many angles in this case. If $p$ is on $\dvec{ac}$ to the upper right of $a$, $\angle pbc > \beta$ and thus equals $\alpha$. Then $\angle pba < \alpha$ and must equal $\beta$, and thus $2\beta = \alpha$. This implies $\beta = \pi/4$ and $\alpha = \pi/2$, and $\triangle cbp$ is an isosceles right triangle with $b$ the apex vertex, $p$ on $\dvec{ac}$ to the upper right of $a$, and $a$ at the midpoint of side $\overline{pc}$. \end{description} As such, in order to add additional points to an isosceles triangle point configuration without adding additional angles, we must have $\alpha = \pi/2$ and $\beta = \pi/4$. The four additional possible points are marked in Figure \ref{2.points}. \begin{figure}[h!] \centering \includegraphics{diagram-20210822-4.pdf} \caption{Compatible Points with the Right Triangle.} \label{2.points} \end{figure} Note that $\angle x_4ax_1, \angle x_4 a x_2 > \pi/2$. So, $x_4$ cannot be in the same point configuration as $x_1$ or $x_2$. By symmetry the same follows for $x_3$. However, we may have both $x_1$ and $x_2$ or both $x_3$ and $x_4$, either of which gives the unique optimal configuration $A$ in Figure \ref{fig: 3.thm}. \end{proof} \begin{cor}\label{cor: p2-with-0} One might also wish to include the trivial 0-angle in our count. In this case, $P(2)=4$, and the unique configuration is the square. \end{cor} \begin{proof} The only 5-point configuration no longer holds when we count the 0-angle. Figure \ref{2.points} displays all valid four-point configurations that define only two angles other than $0$, as detailed in the proof of $P(2)$. All the shown points but $x_2$ define a 0-angle, so the only valid four-point configuration is the square.
\end{proof} \section{Computing $P(3) = 5$.} \label{sec: P(3)} In this section we prove the surprising result that $P(3) = 5$. That is, increasing the number of allowable distinct angles from two to three does not increase the maximum number of points in an optimal point configuration. \begin{proof} We divide the casework for this section into four parts based on the triangles exhibited in the point configuration: \begin{enumerate} \item There is a scalene triangle. \item All triangles are isosceles with at least one with base angle larger than vertex angle. \item All triangles are isosceles with the base angle at most the vertex angle with at least one non-equilateral triangle. \item All triangles are equilateral. \end{enumerate} \subsection{There is a scalene triangle.} Let $a, b,$ and $c$ be the vertices of a scalene triangle in the configuration. We assume without loss of generality that $\alpha < \beta < \gamma$ (Figure \ref{3.1.regions}). \begin{figure}[h!] \centering \includegraphics{diagram-20210822-5.pdf} \caption{Scalene Triangle Regions.} \label{3.1.regions} \end{figure} As in the proof of Theorem \ref{thm: P(2) = 5}, we begin by reducing the number of possible points to a finite number by region-based casework. \begin{description} \item[\textit{Case 1}] $p \in A_1$. \\ As $\angle bap < \alpha$, no points may be added in $A_1$. \item[\textit{Case 2}] $p \in A_2$.\\ In this case, $\angle abp > \beta$ and thus must equal $\gamma$. Moreover, $\angle bap > \alpha$. If $\angle bap = \gamma$, then $\angle bpa < \alpha$. Thus $\angle bap = \beta$, which implies $\angle bpa = \alpha$. But, $\angle bpc < \angle bpa = \alpha$, so we define a fourth angle. Therefore no points can be added in $A_2$. \item[\textit{Case 3, Case 4}] $p \in A_3 \cup A_4$. \\ As $\angle bcp > \gamma$, no points may be added in $A_3$ or $A_4$. \item[\textit{Case 5}] $p \in A_5$. \\ In this case, $\angle cbp > \beta$ and thus $\angle cbp = \gamma$.
Moreover, $\angle cap > \alpha$ and is thus $\beta$ or $\gamma$ (which implies $\angle pab = \alpha$ or $\beta$, respectively). If $\angle cap = \beta$, we have that $\angle pab < \beta$ and thus is equal to $\alpha$. So, $\beta = 2\alpha$. Then $\angle abp$ cannot be $\alpha$ because then $\angle apb > \gamma$. So, $\angle abp = \beta$. So, $2\alpha = \beta$ and $2\beta = \gamma$, so the angles are $\pi/7, 2\pi/7,$ and $4\pi/7$. We then have Figure \ref{3.1.5}. \begin{figure}[h!] \centering \includegraphics[scale=.7]{diagram-20210822-6.pdf} \caption{Four Point Kite Configuration. $\displaystyle \alpha \ =\ \frac{\pi }{7} ,\ \beta \ =\ 2\alpha ,\ \gamma =4\alpha $.} \label{3.1.5} \end{figure} Alternatively, we have $\angle pab = \beta$ and thus $\angle cap = \gamma$. Then $\gamma = \beta + \alpha$, so $\angle abp = \alpha$. $\gamma = \beta + \alpha$ and $\alpha + \beta + \gamma = \pi$ implies $\gamma = \pi/2$. I.e., $a,b,c, p$ are the vertices of a rectangle. As such, we reach Figure \ref{3.1.5.2}. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{diagram-20210822-7.pdf} \caption{Four Point Rectangular Configuration.} \label{3.1.5.2} \end{figure} Therefore, there are \textbf{exactly two possible points} to add in $A_5$, with each choice exactly determining the angles $\alpha$, $\beta$, and $\gamma$. \item[\textit{Case 6}] $p \in A_6$. \\ As $\angle acp > \gamma$, no points may be added in $A_6$. \item[\textit{Case 7}] $p \in \dvec{ab}$. \\ If $p$ is to the right of $b$, then $\angle acp > \gamma$. Note that $\beta, \alpha < \pi/2$. Then, if $p$ is to the left of $a$, $\angle pac > \pi/2$ and must equal $\gamma$. But then $\alpha$ and $\gamma$ are supplementary, implying $\beta = 0$. Finally, if $p$ is between $a$ and $b$, one of $\angle cpa$ and $\angle cpb$ is at least $\pi/2$. So, $\gamma \geq \pi/2$. Moreover, as neither $\alpha$ nor $\beta$ can be supplementary to $\gamma$, we have that the supplement of $\gamma$ is $\gamma$, and hence $\gamma = \pi/2$. 
This yields a diagram like Figure \ref{3.1.7}. \begin{figure}[h!] \centering \includegraphics[scale=.8]{diagram-20210822-8.pdf} \caption{Four Point Configuration from Case 7.} \label{3.1.7} \end{figure} So, \textbf{exactly one point} may be added on $\dvec{ab}$ and it forces $\gamma = \pi/2$. \item[\textit{Case 8}] $p \in \dvec{bc}$.\\ If $p$ is between $b$ and $c$ then $\angle bap < \alpha$. Recall that $\beta,\alpha < \pi/2$. Then, if $p$ is not on $\overline{bc}$ and is closer to $b$, $\angle abp > \pi/2$ and thus must equal $\gamma$. So, $\gamma + \beta = \pi$. But, this implies $\alpha = 0$. Finally, if $p$ is not on $\overline{bc}$ and is closer to $c$, then $\angle acp$ is supplementary to $\gamma$. Since neither $\beta$ nor $\alpha$ can be supplementary to $\gamma$ without implying the other is 0, this implies $\gamma = \pi/2$. We then have the configurations in Figure \ref{3.1.8} since $\angle cap = \beta$ or $\alpha$. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{diagram-20210822-9.pdf} \caption{Four Point Configurations from Case 8.} \label{3.1.8} \end{figure} So, exactly \textbf{two points} can be added on $\dvec{bc}$ and both force $\gamma = \pi/2$. \item[\textit{Case 9}] $p \in \dvec{ac}$. \\ If $p$ is between $a$ and $c$, then one of $\angle bpa$ and $\angle bpc$ is at least $\pi/2$ and must thus be $\gamma$. Since neither $\alpha$ nor $\beta$ can be supplementary to $\gamma$, this implies $\gamma = \pi/2$. However, $\angle abp < \beta$ and thus must be $\alpha$. This then yields $2\alpha = \pi/2$ from $\triangle abp$, contradicting $\alpha + \beta = \pi/2$ since $\alpha\neq\beta$. If $p$ is left of $a$, we have $\angle pab > \pi/2$ and thus must be $\gamma$. But, this implies $\gamma$ and $\alpha$ are supplementary, forcing $\beta = 0$. If $p$ is right of $c$, then $\angle bcp$ is supplementary to $\gamma$. Since neither $\alpha$ nor $\beta$ can be, this implies $\gamma = \pi/2$. This leads to the allowable point configuration in Figure \ref{3.1.9.1}.
\begin{figure}[h!] \centering \includegraphics[scale=0.8]{diagram-20210822-10.pdf} \caption{Four Point Configuration from Case 9.} \label{3.1.9.1}\end{figure} So, exactly \textbf{one point} may be added in this case, with $\gamma = \pi/2$ being forced.\\ \item[\textit{Case 10}] $p$ in the interior of $\triangle abc$. \\ In this case, $\angle pac < \alpha$, so no points may be added. \end{description} It now remains to show that all six addable points are mutually incompatible. Suppose we add $p \in A_5$ as in the first case of a kite (Figure \ref{3.1.5}). Since this configuration has $\gamma \neq \pi/2$ while every other addable point forces $\gamma = \pi/2$, no additional points may be added. When $\gamma=\pi/2$, we have five point placements to consider (presented in Figure \ref{3.1.points}). \begin{figure} \centering \includegraphics{diagram-20210822-11.pdf} \caption{Compatible Points with the Right Triangle} \label{3.1.points} \end{figure} Suppose we add $x_1$. Adding $x_2$ yields an angle $\angle x_1x_2b > \pi/2$ or $\angle x_1x_2a > \pi/2$ (the diagonals cannot intersect at right angles lest $\alpha = \beta$). Adding $x_3$ adds $\angle x_1ax_3 > \pi/2$, and similarly for $x_4$. Adding $x_5$ adds $\angle x_1bx_5 > \pi/2$. Suppose we add $x_2$. Adding $x_3$ yields $\angle x_2cx_3 > \pi/2$, and similarly for $x_4$. Adding $x_5$ adds $\angle ax_2x_5 > \pi/2$. Suppose we add $x_3$. Adding $x_4$ creates $\angle ax_3x_4 > \pi/2$. Adding $x_5$ forces $\angle ax_3x_5=\angle bx_5x_3=\angle abx_5=\pi/2$. But $\angle bax_3=\beta<\pi/2$, so the angles in quadrilateral $abx_5x_3$ do not sum to $2\pi$. Finally, suppose we add $x_4$ and $x_5$. In this case, $\angle cx_4x_5 < \angle cx_3x_5 = \alpha.$ So, if there is a scalene triangle in the point configuration, there can be at most four points.
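The case analysis above can be sanity-checked numerically. The following sketch (ours, not part of the original argument) counts the distinct angles, excluding $0$ and $\pi$, determined by a planar point set; the rectangle from Case 5 determines exactly three distinct angles, and the square with its center (configuration $A$) determines exactly two.

```python
from itertools import combinations
from math import atan2, pi, isclose

def distinct_angles(points, tol=1e-9):
    """Sorted distinct angles strictly between 0 and pi determined by
    the point set (the middle point of each triple is the vertex)."""
    found = []
    for b in points:
        rest = [q for q in points if q != b]
        for a, c in combinations(rest, 2):
            u = (a[0] - b[0], a[1] - b[1])
            v = (c[0] - b[0], c[1] - b[1])
            # angle between u and v, in [0, pi]
            theta = atan2(abs(u[0]*v[1] - u[1]*v[0]), u[0]*v[0] + u[1]*v[1])
            if tol < theta < pi - tol:
                if not any(isclose(theta, t, abs_tol=tol) for t in found):
                    found.append(theta)
    return sorted(found)

# Rectangle from Case 5 (gamma = pi/2): exactly three distinct angles.
print(len(distinct_angles([(0, 0), (2, 0), (2, 1), (0, 1)])))  # 3
# Square with its center (configuration A): exactly two distinct angles.
print(len(distinct_angles([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])))  # 2
```

The same routine confirms that a regular pentagon (configuration $F$ type) determines exactly three distinct angles, $\pi/5$, $2\pi/5$, and $3\pi/5$.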
\vspace{3mm} \subsection{All triangles are isosceles with at least one with base angle larger than vertex angle.}\label{section:3.2} Let $a, b$, and $c$ be the vertices of an isosceles triangle with the base angle larger than the vertex angle (Figure \ref{3.2.regions}). \begin{figure}[h!] \centering \includegraphics{diagram-20210822-12.pdf} \caption{Isosceles Triangle with Small Vertex Angle.} \label{3.2.regions} \end{figure} Specifically, $\alpha = \angle bac < \angle abc = \angle acb = \beta$. \begin{description} \item[\textit{Case 1}] $p \in A_1$.\\ Note that $\angle bcp > \beta > \alpha$. Let this new angle be $\gamma$. Then, since only three angles are admissible and $\angle pca + \angle acb = \gamma$, we have $\angle pca = \alpha$ or $\angle pca = \beta$. Suppose $\angle pca = \beta$. Then $\angle pcb = \gamma = 2\beta$. Additionally, since $\angle pbc < \beta$, $\angle pbc = \alpha$. Since $2\beta + \alpha = \pi$, $\angle bpc < \gamma$ lest the angles in $\triangle pbc$ be too large. Then as $\triangle pbc$ must be isosceles, $\angle bpc = \alpha$. Thus the angles in $\triangle bpc$ sum to $\gamma + 2\alpha = 2\beta + 2\alpha > 2\beta + \alpha = \pi$, a contradiction. Thus $\angle pca = \alpha$. In this case, $\gamma = \alpha + \beta$. Observe that $\angle pbc < \beta$, and so $\angle pbc = \alpha$. Similarly, $\angle pba = \alpha$, thus giving $\angle abc = \beta = 2\alpha$. Then $\gamma = \alpha + \beta = 3\alpha$. The angles in $\triangle abc$ must add to $\pi$, so $2\beta + \alpha = 5\alpha = \pi$, which implies $\alpha = \pi/5$. Moreover, as $\triangle pbc$ must be isosceles, $\angle cpb = \alpha$. Then as $\angle bpa < \angle cpa$, we have $\angle bpa = \alpha$ or $\angle bpa = \beta$, both of which determine $\angle pac$. Thus in this case we have two points in $A_1$ which are contenders to give acceptable four-point configurations (Figure \ref{3.2.1}). \begin{figure}[h!]
\centering \includegraphics{diagram-20210822-13.pdf} \caption{Configurations of Interest from Case 1.} \label{3.2.1} \end{figure} Consider further the case where $\angle bpa = \alpha$. In this case, $pabc$ is a parallelogram. We have deduced above that $\dvec{pb}$ bisects $\angle abc$, but this is only true if $abcp$ is a rhombus. However, this would mean that $\triangle abc$ is equilateral, since then $|\overline{ab}|=|\overline{ac}|=|\overline{bc}|$, contradicting our original assumption that $\alpha < \beta$. Thus this case is impossible and $\angle bpa = \beta$. So, there is exactly \textbf{one point} we may add in $A_1$ (thus forming the left configuration of Figure \ref{3.2.1}), and, by symmetry, an additional \textbf{one point} in $A'_1$. \item[\textit{Case 2}] $p\in A_2$. \\ Note that $\angle bcp > \angle bca = \beta$, and $\angle cpb < \angle cab = \alpha$. Thus no points may be added in $A_2$. \item[\textit{Case 3}] $p\in A_3$. \\ If $p\in A_3$, then $\gamma = \angle abp > \beta$. Then, as $\angle pab > \alpha$, we have $\angle apb < \beta$ and thus $\angle apb = \alpha$ so that we may still have only three angles. But then $\angle apc < \angle apb = \alpha$, yielding a fourth angle. Thus no new points may be added in $A_3$ or $A'_3$. \item[\textit{Case 4}] $p\in A_4$. \\ In this case, $\angle pac < \angle bac = \alpha < \beta = \angle bca < \angle pca$. Thus there are already four angles, and no points can be added in $A_4$. \item[\textit{Case 5}] $p$ inside $\triangle abc$. \\ In this case, $\angle pab, \angle pac < \angle cab = \alpha$, so let $\angle pab = \angle pac = \gamma$. As $\triangle abp$ must be isosceles and $\gamma < \alpha$, $\gamma$ cannot be the vertex angle of $\triangle abp$. Since $\gamma < \alpha < \beta$, this implies the vertex angle of $\triangle abp$ must be $\beta$. So, $2 \gamma = \alpha$ and $2\gamma + \beta = \pi$. But, this is a contradiction as $\alpha + 2\beta = \pi$. No points are addable in this case.
\item[\textit{Case 6}] $p\in \dvec{bc}$. \\ First, assume $p$ is between $b$ and $c$; that is, $p$ is located on the base of $\triangle abc$. Then, $\angle pab < \alpha$ and one of $\angle bpa$ or $\angle cpa$ is at least $\pi/2 > \beta$. Thus no points can be added on the base of $\triangle abc$. Now suppose that $p$ is not between $b$ and $c$. By symmetry, we may assume that $p$ is on the left (i.e., it is closer to $c$ than to $b$). Now, since $\beta < \pi/2$, $\angle acp = \gamma > \pi/2$. As $\triangle acp$ must be isosceles, $\gamma$ cannot be a base angle, and $\gamma > \alpha$, we have $\angle cpa = \angle pac = \alpha$. This implies $2 \alpha + \gamma = 2 \beta + \alpha$ and $\alpha + \gamma = 2\beta$. Along with $\pi - \gamma = \beta$, this yields $\alpha = \pi/5, \beta = 2\alpha, \gamma = 3 \alpha$. So, \textbf{two points} are addable in this case (one on either side of the edge $bc$). See Figure \ref{3.2.6}. \begin{figure}[h!] \centering \includegraphics[scale=0.8]{diagram-20210822-14.pdf} \caption{Four Point Configuration from Case 6.} \label{3.2.6} \end{figure} \item[\textit{Case 7}] $p\in \dvec{ac}$ or $p\in \dvec{ab}$.\\ By symmetry, we may assume $p\in\dvec{ac}$. First, assume $p$ is between $a$ and $c$; that is, $p$ is located on a leg of $\triangle abc$. In this case, one of $\angle apb$ and $\angle bpc \geq \pi/2 > \beta$. Let this angle be $\gamma$. Since $\angle cbp < \beta$, it must equal $\alpha$; as $\triangle cbp$ must be isosceles, we have $\angle bpc = \beta$. So, $\angle bpa = \gamma$ and $\beta + \gamma = \pi$. Moreover, we have $\gamma + 2\alpha = \pi$. This again implies $\alpha = \pi/5, \beta = 2\alpha, \gamma = 3\alpha$. This is a legal configuration. Next, assume that $p\in\dvec{ac}$ is not on the triangle's side, and that it is closer to $a$ than to $c$. In this case, $\angle bap = \pi - \alpha > \beta$ and $\angle cpb < \angle cab = \alpha$. Thus no points may be added in this case.
Now, assume that $p\in\dvec{ac}$ is not on the triangle's side, and that it is closer to $c$ than to $a$. Then we have a new angle $\gamma = \angle pba > \angle cba = \beta$. We also have $\angle pcb = \pi - \beta > \beta$ because $\beta < \pi/2$. So, to maintain only three angles, we must have $\angle pcb = \angle pba = \gamma$. Then, as $\triangle cbp$ must be isosceles and $\angle bcp$ must be the vertex angle, $\angle cbp = \angle cpb = \alpha$. As before, we conclude that $\alpha = \pi/5, \beta = 2\pi/5, \gamma = 3\pi/5$. This is a legal configuration. So, in this case, all our angles are exactly determined and there are \textbf{two points} addable on $\dvec{ac}$, and, by symmetry, \textbf{two points} addable on $\dvec{ab}$. \end{description} So there are only eight addable points in the case of there being an isosceles triangle with vertex angle smaller than base angle (Figure \ref{fig:smaller vertex angle isos}). We consider each of these points up to symmetry and determine which are compatible. \begin{figure}[h!] \centering \includegraphics{diagram-20210822-15.pdf} \caption{Compatible Points with the Isosceles Triangle with Small Vertex Angle} \label{fig:smaller vertex angle isos} \end{figure} \begin{description} \item[\textit{Combination Case 1}] Including $x_1$.\\ \label{case: 1} We cannot add $x_4$ as $\angle x_4x_1a > \angle cx_1a = \gamma$. Additionally, note that $\angle bx_1x_4', \angle bx_1x_2', \angle ax_3'x_1 < \alpha$ and thus none of $x_4'$, $x_3'$, or $x_2'$ may be added. Each of $x_1'$, $x_2$, and $x_3$ is individually compatible with $x_1$, leading to \textbf{three} valid five point configurations including $x_1$ (see $B, D, E$ of Figure \ref{fig: 3.thm}). By symmetry, $x_1'$ is then individually compatible with $x_1$, $x_2'$, and $x_3'$. \item[\textit{Combination Case 2}] Including $x_2$.\\ Note that $\angle x_2x_3c, \angle x_2x_3'c, \angle ax_4'x_2 < \alpha$. So none of $x_3, x_3'$, or $x_4'$ may be added in this case.
Adding $x_2'$ is analogous to adding both $x_1$ and $x_3$ to $\triangle abc$, so $x_2'$ is addable. And $x_4$ is addable as the projection of a regular pentagon onto a line via one of its vertices. So, there are \textbf{three} valid five point configurations including $x_2$ (see $E, D, C$ of Figure \ref{fig: 3.thm}). By symmetry, $x_2'$ is compatible with exactly $x_1'$, $x_2$, and $x_4'$. \item[\textit{Combination Case 3}] Including $x_3$.\\ In this case, $\angle cx_4'x_3 < \alpha$, so $x_4'$ is not addable in this case. Adding $x_3'$ creates a projected pentagon, and adding $x_4$ creates a trapezoid with a point in the middle, and both are individually compatible with $x_3$. So, each of $x_1, x_3',$ and $x_4$ may be individually added alongside $x_3$, leading to \textbf{three} valid five point configurations including $x_3$ (see $E, C, D$ of Figure \ref{fig: 3.thm}). By symmetry, $x_3'$ is compatible with $x_1', x_3,$ and $x_4'$. \item[\textit{Combination Case 4}] Including $x_4$.\\ $x_4'$ is compatible with $x_4$. In combination with the above casework, we have that $x_4$ is individually compatible with exactly $x_2$, $x_3$, and $x_4'$. So, there are \textbf{three} valid five point configurations including $x_4$ (see $C, D, E$ of Figure \ref{fig: 3.thm}). By symmetry, $x_4'$ is compatible with exactly $x_2', x_3'$, and $x_4$. \end{description} At this point, we have exhaustively identified all our five point configurations for this case of $\alpha < \beta$. From our casework, we see there are no compatible, addable points which share an additional compatible point. Therefore, there are at most five points in this case, with $B$, $C$, $D$, and $E$ of Figure \ref{fig: 3.thm} as the possible five point configurations. \subsection{All triangles are isosceles with the base angle at most the vertex angle with at least one non-equilateral triangle.} As before, we proceed by region-based casework.
Fortunately, whenever we encounter a scalene triangle or an isosceles triangle with the base angle larger than the vertex angle, we reduce to the previous cases. Our diagram for this section is Figure \ref{3.3.regions}, where $\beta < \alpha$. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{diagram-20210822-16.pdf} \caption{Isosceles Triangle with Large Vertex Angle.} \label{3.3.regions}\end{figure} \begin{description} \item[\textit{Case 1}] $p \in A_1$.\\ In this case, $\angle bpc < \alpha$ since $\angle pcb$ and $\angle pbc > \beta$. We then have two cases. If $\angle bpc = \beta$, then $\angle pcb$ and $\angle pbc$ cannot both be $\alpha$ (as $2\alpha + \beta > \pi$). Moreover, $\triangle cpb$ must be isosceles and neither $\angle pcb$ nor $\angle pbc$ can be $\beta$, so both must be $\gamma$. This implies $2\gamma + \beta = 2\beta + \alpha$ and $\beta < \gamma < \alpha$. But, $\angle bpa < \beta < \gamma$, creating more than three angles. If $\angle bpc = \gamma < \alpha$, then, since $\angle cpa, \angle bpa < \gamma$, both must be $\beta$. So, $\gamma = 2\beta$. Now, $\triangle pcb$ must be isosceles. Since $\alpha + 2\beta = \alpha + \gamma = \pi$, this implies both $\angle pbc$ and $\angle pcb$ must be $\gamma$. Then $\gamma = \pi/3$, $\beta = \pi/6$, and $\alpha = 2\pi/3$ (Figure \ref{3.3.1.2}). \begin{figure}[h!] \centering \includegraphics[scale=0.7]{diagram-20210822-17.pdf} \caption{Four Point Configuration from Case 1.} \label{3.3.1.2} \end{figure} As such, there is \textbf{exactly one point} addable in this case and it forces the choice of $\alpha$, $\beta$, and $\gamma$. \item[\textit{Case 2}] $p \in A_2$ (or $A_2'$). \\ In this case, $\angle bcp < \beta$ and $\angle cap > \alpha$, so no points may be added. \item[\textit{Case 3}] $p \in A_3$.
\\ As both $\angle cap, \angle bap < \alpha$, we have three cases: \begin{itemize} \item[(1)] $\angle cap = \beta = \angle bap$ \item[(2)] $\angle cap = \beta$, $\angle bap = \gamma \neq \beta$ (and swapping $\angle cap$ and $\angle bap$ by symmetry), and \item[(3)] $\angle bap, \angle cap = \gamma \neq \beta$ \end{itemize} Case $(1)$ implies $\alpha = 2\beta$, and hence $\beta = \pi/4, \alpha = \pi/2$. Moreover, $\triangle bcp$ must be isosceles and must include one of $\alpha$ or $\beta$. Note $\triangle bcp$ cannot be equilateral as then $\angle acp = 7\pi/12$, and we have four angles. Note that $\pi/2$ cannot be a base angle in $\triangle bcp$ and $\beta$ being the vertex angle reduces to the previous casework in section \ref{section:3.2}. Thus the only option is $\angle cbp = \angle bcp = \pi/4$ and $\angle bpc = \pi/2$, yielding a \textbf{valid} configuration, the square $abpc$. Case $(2)$ implies $\beta + \gamma = \alpha$. This is illustrated in Figure \ref{3.3.3.2}. \begin{figure}[h!] \centering \includegraphics{diagram-20210822-18.pdf} \caption{Four Point Configuration from Case 3.} \label{3.3.3.2} \end{figure} First, we show that $\gamma > \beta$. If $\gamma < \beta$, then $\angle abp > \beta$ forces $\angle abp = \alpha$, which implies $\angle cbp = \gamma$. We similarly have $\angle bcp = \gamma$. But then, $\angle bpc = \pi - 2\gamma > \alpha$, yielding four angles $\gamma<\beta<\alpha<\pi-2\gamma$. Thus $\gamma > \beta$. We then have two sub-subcases to consider: \begin{itemize} \item[(2i)] $\angle abp = \alpha$. \item[(2ii)] $\angle abp = \gamma \neq \alpha$. \end{itemize} In case $(2i)$, note that $\angle cbp = \gamma$. Since $\triangle abp$ must be isosceles with largest angle the vertex angle, $\angle apb = \gamma$. Then, we have $\alpha + 2\gamma = \alpha + 2\beta$, contradicting the assumption of case (2) that $\gamma \neq \beta$. In case $(2ii)$ we have $\beta < \gamma < \alpha$. Since $\angle cbp < \gamma$, it must equal $\beta$. So, $\gamma = 2\beta$.
This implies $\alpha = 3\beta$. Thus $\beta = \pi/5$, $\gamma = 2\pi/5$, and $\alpha = 3 \pi /5$. Since $\triangle acp$ must be isosceles with smaller base angle, $\angle apc = \beta$. Completing $\triangle abp$, we have $\angle apb = \beta$. But then $\triangle abp$ is isosceles with smaller vertex angle. Thus this case reduces to the previous casework. In case $(3)$, we have $2\gamma = \alpha$. Now, $\angle cbp$ cannot be $\gamma$ since $\beta + \gamma \neq \alpha$. As $\gamma < \alpha$, it cannot be $\alpha$ either. Thus it must be $\beta$. Since $\beta \neq \gamma$, we must then have $2 \beta = \gamma$. So, we have $\alpha = 4\beta$. Thus $\beta = \pi/6$, $\gamma = \pi/3$, $\alpha = 2\pi/3$. This yields the \textbf{valid} point configuration in Figure \ref{3.3.3.3.2}. \begin{figure}[h!] \centering \includegraphics{diagram-20210822-19.pdf} \caption{Four Point Configuration from Case 3.} \label{3.3.3.3.2}\end{figure} So, there are exactly \textbf{two} addable points $p \in A_3$ and each forces a choice of $\alpha, \beta$, and $\gamma$. \item[\textit{Case 4}] $p \in A_4$ (or $A_4')$.\\ In this case, $\angle cap > \alpha$ and $\angle apc < \beta$. Thus no points are addable in this case. \item[\textit{Case 5}] $p \in \dvec{ac}$ (or $\dvec{ab}$ by symmetry).\\ If $p$ is to the left of $a$, $\angle pab = \pi - \alpha \neq \beta$. We then have two subcases: \begin{enumerate} \item $\angle pab = \alpha$ \item $\angle pab = \gamma \neq \alpha$ \end{enumerate} In case (1), $\angle pab = \alpha$ implies $\alpha = \pi/2$ and $\beta = \pi/4$. As $\triangle pab$ must be isosceles, $\angle bpa = \angle pba = \pi/4$. So, we get the \textbf{valid} configuration in Figure \ref{3.3.5.1}. \begin{figure}[h!] \centering \includegraphics[scale=.8]{diagram-20210822-20.pdf} \caption{Four Point Configuration from Case 5.} \label{3.3.5.1}\end{figure} In case $(2)$, we have $\angle pab = \gamma = \pi - \alpha > \beta$. 
Note $\triangle pbc$ needs to be isosceles with the vertex angle at least as large as the base angle, and that $\angle pcb = \beta$. Thus since $\angle pbc > \beta$, $\angle pbc = \alpha$ and $\angle bpc = \beta$. Then $\angle abp = \gamma$, since $\triangle pab$ must be isosceles and $\angle pab \neq \alpha$. So, $2\gamma + \beta = \pi$, $2\beta + \alpha = \pi$, and $\alpha + \gamma = \pi$. This implies $\beta = \pi/5, \gamma = 2\pi/5$, and $\alpha = 3\pi/5$. But then $\triangle pba$ has angles $2\pi/5, 2\pi/5, \pi/5$, and so the vertex angle is smaller than the base angles. Thus this case reduces to the previous section. Now, if $p$ is between $a$ and $c$ then $\gamma = \angle abp = \angle cbp < \beta$. Furthermore, $\angle bpc > \alpha$, giving four angles, so no points can be added in this subcase. Finally, if $p$ is to the bottom right of $c$ then $\gamma = \angle bcp = \pi - \beta > \alpha$. But, $\triangle bcp$ must be isosceles with largest angle not repeated. Thus $\angle cbp < \beta$, yielding a fourth angle, so no points are addable in this subcase. So, exactly \textbf{two points} are addable in this case and they exactly determine $\alpha$ and $\beta$ (the second by symmetry). \item[\textit{Case 6}] $p \in \dvec{bc}$.\\ In this case, left of $b$ is equivalent to right of $c$ by symmetry. So, without loss of generality, we consider $p$ left of $b$. In this case, $\angle pba = \pi - \beta > \alpha$. Then, $\triangle pba$ must be isosceles with largest angle non-repeated. So, $\angle pab < \beta$, and no points are addable in this subcase. It remains to consider $p$ between $b$ and $c$. In this case, $\triangle acp$ and $\triangle abp$ are isosceles triangles including $\beta$. However, $\beta$ cannot be the vertex angle, so another of their angles must be $\beta$ and the third $\alpha$. Since $\angle cap$, $\angle pab < \alpha$, we must then have $2 \beta = \alpha$. Thus we have $\beta = \pi/4$ and $\alpha = \pi/2$. The resulting \textbf{valid} point configuration is displayed in Figure \ref{3.3.6}. \begin{figure}[h!]
\centering \includegraphics[scale=0.8]{diagram-20210822-21.pdf} \caption{Four Point Configuration from Case 6.} \label{3.3.6} \end{figure} So, there is a \textbf{single addable} point in this case and it forces the choice of $\alpha$ and $\beta$. \item[\textit{Case 7}] $p$ in the interior of $\triangle abc$.\\ In this case, $\angle pbc$, $\angle pcb < \beta$ and thus $\angle cpb$ is greater than $\alpha$. Thus no points are addable in this case. \item[\textit{Combinations}] Adding more than one point.\\ Now we determine which of the six addable points are mutually compatible. As exactly two force $\alpha = 2\pi/3, \beta = \pi/6$, and $\gamma = \pi/3$, those two (from Case 1 and Case 3) can be compatible only with each other. This is displayed in Figure \ref{3.3.combo.1}. \begin{figure}[h!] \centering \includegraphics{diagram-20210822-22.pdf} \caption{Attempting to Combine Cases 1 and 3.} \label{3.3.combo.1} \end{figure} However, this adds an additional angle, $\angle qcp = 3\beta = \pi/2$. So, we need only consider the addable points which force $\alpha = \pi/2$ and $\beta = \pi/4$. Such addable points are shown in Figure \ref{3.3.points}. \begin{figure} \centering \includegraphics{diagram-20210822-23.pdf} \caption{Compatible Points with the Right Triangle} \label{3.3.points} \end{figure} Both pairs $x_1,x_2$ and $x_3,x_4$ are compatible, and both yield a square with a point in the center (see A of Figure \ref{fig: 3.thm}). However, $x_3$ is not compatible with either of $x_1,x_2$. With $x_1$, the triangle $\triangle x_1x_3b$ is not isosceles, and similarly for $x_2$. Also, $x_4$ is not compatible with either of $x_1,x_2$ as then we have angles $\alpha$, $\beta$, $\alpha+\beta$, and $\angle bx_1x_4 < \beta$ (or $\angle bx_2x_4 < \beta$). Therefore, the only 5-point configurations are $\{a,b,c,x_1,x_2\}$ and $\{a,b,c,x_3,x_4\}$. Thus only 5 points are allowed in this case, and the only acceptable 5-point configuration is the square with its center (see A in Figure \ref{fig: 3.thm}).
\end{description} \subsection{There are only equilateral triangles.} Since an equilateral triangle has all equal side lengths, every distance between two points in the configuration must be equal. That is, we need a 1-distance set. The largest 1-distance set in the plane is the equilateral triangle. Thus no configuration of four or more points can exist defining only equilateral triangles. The maximal number of points in this case is thus three. Then across all cases, we find that the largest configurations of points on the plane which define at most three angles contain exactly five points. As such, $P(3) = 5$, and the complete list of configurations is shown in Figure \ref{fig: 3.thm}. \end{proof} \begin{cor}\label{cor: p3-with-0} One might also wish to include the trivial 0-angle in our count. In this case, $P(3)=5$, but the square with the center-point and the pentagon are now the only valid configurations. \end{cor} \begin{proof} The set of valid five-point configurations when we count the 0-angle must be a subset of the valid five-point configurations we identified above. By direct inspection, the square with the center-point and the pentagon are the only ones of the five in Figure \ref{fig: 3.thm} which define only three angles. All the others define three angles greater than zero and also the 0-angle by collinearity. \end{proof} \section{Future Work} While it seems possible to compute $P(k)$ by exhaustive casework for higher values of $k$, the casework quickly becomes overwhelming. Additionally, while it is potentially possible to repeat such methods in higher dimensions, the visualization of the proofs played a crucial role in this analysis. In combination with the added degrees of freedom from adding dimensions, this would make this method of computation quickly intractable. Future work may tighten our upper bound on $P(k)$. However, we make the following conjecture. \begin{conj} The lower bound on $P(k)$ in Theorem \ref{thm: linear bounds on P(k)} is tight.
Namely, $P(2k)=2k+3$ and $P(2k+1)=2k+3$ for all $k\geq1$. \end{conj} Therefore, we believe that future work should improve the upper bound of $P(k)\leq 6k$, either via progress towards the Weak Dirac Conjecture (which would still fall short of our conjecture) or by some other means. Alternatively, future research may find a more efficient method of constructing viable point sets without the need for the exhaustive search we perform. We propose the related problem of characterizing optimal point sets in higher dimensions with a low number of \textit{solid angles}. \begin{definition}[Solid Angles] Given $d + 1$ points in $\mathbb{R}^d$, fix one of the points $p$. Let $S$ be a unit $d$-dimensional hypersphere about $p$. Project the remaining $d$ points onto the surface of the sphere along the lines connecting them to $p$. The \textbf{solid angle} formed by the $d+1$ points with center $p$ is the surface area of $S$ enclosed by the geodesics connecting the projections of the other points on $S$. \end{definition} Solid angles have applications in physics and have not been extensively studied in the context of discrete geometry. They provide an exciting new avenue for angle-related problems. They also motivate the following problem. For a fixed $d \geq 3$, what is the maximum number of non-coplanar points in a configuration yielding at most $k$ solid angles?
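For $d = 3$, the solid angle defined above can be computed in closed form. The sketch below (ours, not from the paper) uses the Van Oosterom-Strackee formula, a standard formula for the solid angle subtended at the origin by a triangle; the vertex $p$ of the definition is taken as the origin and $a$, $b$, $c$ are the other three points.

```python
from math import atan2, pi, isclose

def solid_angle(a, b, c):
    """Solid angle at the origin subtended by the triangle with
    vertices a, b, c (Van Oosterom-Strackee formula)."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    def norm(u):
        return dot(u, u) ** 0.5
    # scalar triple product a . (b x c)
    triple = (a[0] * (b[1]*c[2] - b[2]*c[1])
              - a[1] * (b[0]*c[2] - b[2]*c[0])
              + a[2] * (b[0]*c[1] - b[1]*c[0]))
    denom = (norm(a)*norm(b)*norm(c) + dot(a, b)*norm(c)
             + dot(a, c)*norm(b) + dot(b, c)*norm(a))
    return abs(2 * atan2(triple, denom))

# Corner of a cube subtends one octant of the sphere: 4*pi/8 = pi/2.
print(solid_angle((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1.5707963267948966
```

A degenerate (coplanar) triple gives solid angle $0$, which is one way the "non-coplanar" restriction in the closing question manifests computationally.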
https://arxiv.org/abs/1506.07941
Two classes of modular $p$-Stanley sequences
Consider a set $A$ with no $p$-term arithmetic progressions for $p$ prime. The $p$-Stanley sequence of a set $A$ is generated by greedily adding successive integers that do not create a $p$-term arithmetic progression. For $p>3$ prime, we give two distinct constructions for $p$-Stanley sequences which have a regular structure and satisfy certain conditions in order to be modular $p$-Stanley sequences, a set of particularly nice sequences defined by Moy and Rolnick which always have a regular structure. Odlyzko and Stanley conjectured that the 3-Stanley sequence generated by $\{0,n\}$ only has a regular structure if $n=3^k$ or $n=2\cdot 3^k$. For $p>3$ we find a substantially larger class of integers $n$ such that the $p$-Stanley sequence generated by $\{0,n\}$ is a modular $p$-Stanley sequence, and numerical evidence given by Moy and Rolnick suggests that these are the only $n$ for which the $p$-Stanley sequence generated by $\{0,n\}$ is a modular $p$-Stanley sequence. Our second class is a generalization of a construction of Rolnick for $p=3$ and is thematically similar to that construction.
\section{Introduction} A set is called \textit{$p$-free} if it contains no $p$-term arithmetic progression. The study of $p$-free sets has been of much interest. Szekeres conjectured that for $p$ an odd prime, the maximum number of elements in a $p$-free subset of $\{0,1,\ldots,n-1\}$ grew as $n^{\log_p(p-1)}$ \cite{erd2}. This conjecture has since been disproved. The lower bound for $p=3$ is $n^{1-o(1)}$ with the best known bound due to Elkin \cite{lower}. The upper bound lies at $O(n(\log\log n)^5/\log n)$ due to Sanders \cite{upper}. Szekeres's conjecture is based on the fact that starting with 0 and greedily adding each subsequent integer that does not create a $p$-term arithmetic progression will produce exactly the numbers that have no digit of $(p-1)$ in their base $p$ expansion. Despite the fact that this construction does not produce maximally dense sets, it still exhibits many interesting structures. In 1978, Odlyzko and Stanley generalized this construction to arbitrary sets \cite{stan}. \begin{defn} For $p$ an odd prime, the \textbf{$p$-Stanley sequence} $S_p(A)$ generated by a $p$-free subset of the nonnegative integers $A=\{a_1,\ldots,a_k\}$ with $a_1<a_2<\cdots<a_k$ is constructed such that if $a_1<a_2<\cdots<a_n$ have been defined, then $a_{n+1}>a_n$ is defined to be the smallest integer such that the set $\{a_1,a_2,\ldots,a_{n+1}\}$ is $p$-free. \end{defn} Odlyzko and Stanley noticed that for some sets $A$, the Stanley sequence $S_3(A)$ displays a regular pattern in terms of the ternary representations of its terms and these sequences grow as $n^{\log_23}$. In particular, they explicitly computed $S_3(0,3^k)$ and $S_3(0,2\cdot3^k)$ and showed that these sequences satisfy the above properties. However, for other values of $n$, the sequence $S_3(0,n)$ seems to grow chaotically and at the rate $n^2/\log n$. 
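For concreteness, the greedy construction defining these sequences can be carried out directly. Here is a minimal brute-force sketch in Python (our illustration, not part of the original paper); since a new element is always larger than everything already chosen, any new $p$-term arithmetic progression must have it as its largest term:

```python
def is_p_free_with(seq, x, p):
    """Check that appending x to the sorted p-free set seq creates no
    p-term arithmetic progression; any new AP has x as its largest term."""
    s = set(seq)
    for d in range(1, x // (p - 1) + 1):
        if all(x - i * d in s for i in range(1, p)):
            return False
    return True

def stanley(A, p, n_terms):
    """First n_terms of the p-Stanley sequence S_p(A), built greedily."""
    seq = sorted(A)
    x = seq[-1]
    while len(seq) < n_terms:
        x += 1
        if is_p_free_with(seq, x, p):
            seq.append(x)
    return seq

# S_3(0) consists of the integers with no digit 2 in base 3:
print(stanley([0], 3, 8))  # -> [0, 1, 3, 4, 9, 10, 12, 13]
```

For $p=3$ and $A=\{0\}$ this reproduces the base-3 digit description above.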
Odlyzko and Stanley provided a heuristic argument why a randomly chosen sequence should grow at this rate and conjectured that these two behaviors are the only possible ones. Further work on the growth of chaotic $p$-Stanley sequences for $p>3$ can be found in \cite{t2}. This leads to the following conjecture. \begin{conj}[Based on \cite{stan},\cite{t2}] A $p$-Stanley sequence $a_1,a_2,\ldots$ with $p$ an odd prime satisfies either: \begin{itemize} \item[Type I:]$a_n=\Theta(n^{\log_{(p-1)}p})$ \item[Type II:]$a_n=\Theta\left(n^{(p-1)/(p-2)}/(\log n)^{1/(p-2)}\right)$ \end{itemize} \end{conj} Though we now know many more $A$ such that $S_3(A)$ is Type I, almost no progress has been made since 1978 on showing even the $p=3$ case of this conjecture. In fact, to this date, no Stanley sequence has been proven to be Type II. In this paper we prove an analogous result to Odlyzko and Stanley's original result. For each $p>3$ prime, we find a set $A_p$ such that for $n\in A_p$, the $p$-Stanley sequence $S_p(0,n)$ is modular, a stronger condition than Type I, which was developed by Moy and Rolnick in \cite{mod}. We find that for $p>3$ there are many more $n$ such that $S_p(0,n)$ is modular than in the $p=3$ case. Furthermore, numerical evidence suggests that we have found all numbers $n$ with this property. It is still an interesting open problem to show that the other (or any) sets generate Type II Stanley sequences. \section{Definitions} This section provides the definitions and basic results on modular $p$-Stanley sequences necessary to prove our result. For further exposition, see \cite{mod}. \begin{defn} A set $A$ is said to \textit{$p$-cover} $x$ if there exists $x_1,x_2,\ldots,x_{p-1}\in A$ such that $x_1<x_2<\cdots<x_{p-1}<x$ is an arithmetic progression. \end{defn} \begin{prop} \label{thm:alt} The $p$-Stanley sequence $S_p(A)$ is the unique sequence that starts with $A$, is $p$-free, and covers all $x\not\in S_p(A)$ with $x>\max(A)$. 
\end{prop} \begin{proof} For each $x>\max(A)$, either adjoining $x$ keeps the sequence $p$-free, in which case the greedy construction places $x$ in $S_p(A)$, or it does not, in which case $x$ is the largest term of a $p$-term arithmetic progression whose other terms lie in $S_p(A)$, i.e., $x$ is $p$-covered. Conversely, any sequence with these three properties must agree with the greedy construction at every step. \end{proof} \begin{defn} A set $A\subseteq\{0,1,\ldots,N-1\}$ is said to \textit{$p$-cover $x$ mod $N$} if there exist $x_1,x_2,\ldots,x_{p-1}\in A$ such that $x_1<x_2<\cdots<x_{p-1}<x$ or $x_1<x_2<\cdots<x_{p-1}<x+N$ is an arithmetic progression. \end{defn} \begin{defn} A set $A\subseteq\{0,1,\ldots,N-1\}$ is a \textit{modular $p$-free set mod $N$} if it contains 0, is $p$-free mod $N$, and $p$-covers mod $N$ every $x$ with $0\leq x<N$ and $x\not\in A$. A $p$-Stanley sequence is modular if it can be written as $S_p(A)$ for $A$ a modular $p$-free set. \end{defn} We will refer to $p$-covering and modular $p$-free simply as covering and modular when $p$ is obvious. We write $A+B$ for $\{a+b\mid a\in A, b\in B\}$ and $c\cdot A$ for $\{c\cdot a\mid a\in A\}$. The following is the main theorem on modular $p$-Stanley sequences, proved in \cite{mod}. It implies that a modular Stanley sequence grows asymptotically as $S_p(0)$. \begin{thm}[Theorem 6.5 in \cite{mod}] If $A$ is a modular $p$-free set mod $N$, then $S_p(A)=A+N\cdot S_p(0)$. \end{thm} \begin{cor} Any modular $p$-Stanley sequence exhibits Type I growth. \end{cor} \section{Results} In this section, \textit{digits} refers to the digits of a number in base $p$. \begin{defn} Define the set $A_p$ by $n\in A_p$ if and only if $p^{k-1}<n\leq p^k$ for some integer $k\geq 1$, with $p^k-n\in S_p(0)$ and $p^k-n<(p-2)p^{k-1}$. \end{defn} This is equivalent to saying that $p^k-n$ must have first digit less than $p-2$ and all other digits not equal to $p-1$. \begin{defn} Write $S_p^k$ for the set $\{x\mid x\in S_p(0),x<p^k\}$. \end{defn} It is known that $S_p^k$ is $p$-free mod $p^k$ and that it covers $\{0,1,\ldots,p^k-1\}\setminus S_p^k$. For a proof, see Lemma 6.4 in \cite{mod}.
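As a quick digit-level sanity check (our addition; the helper names are ours, not the paper's): $x\in S_p(0)$ exactly when no base-$p$ digit of $x$ equals $p-1$, and membership in $A_p$ can be tested through $m=p^k-n$:

```python
def in_S_p0(x, p):
    """x is in S_p(0) iff no base-p digit of x equals p-1."""
    while x:
        if x % p == p - 1:
            return False
        x //= p
    return True

def in_A_p(n, p):
    """n is in A_p iff, for the k with p^(k-1) < n <= p^k, the number
    m = p^k - n lies in S_p(0) and m < (p-2) p^(k-1)."""
    k = 0
    while p ** k < n:          # smallest k with n <= p^k
        k += 1
    m = p ** k - n
    return in_S_p0(m, p) and m < (p - 2) * p ** (k - 1)

# For p = 5 and k = 1: the n in (1, 5] with 5 - n in S_5(0) and 5 - n < 3.
print([n for n in range(2, 6) if in_A_p(n, 5)])  # -> [3, 4, 5]
```

This matches the description above: the first digit of $p^k-n$ must be less than $p-2$ and no digit may equal $p-1$.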
\begin{defn} For $0\leq x<p^k$ with $x\not\in S_p^k$, write $x=\sum_i d_ip^i$. Define $S=\{i\mid d_i=p-1\}$. The \textbf{canonical covering} of $x$ is $x_j=\sum_i d_i^{(j)}p^i$ for $0\leq j<p-1$, where $d_i^{(j)}=d_i$ if $i\not\in S$ and $d_i^{(j)}=j$ otherwise. \end{defn} The canonical covering is contained in $S_p^k$ and, as suggested by its name, $p$-covers $x$. An example is shown in Table \ref{fig:can}. Now we prove our main result. Showing that a $p$-Stanley sequence is modular requires guessing the modular set that generates it and then proving that the set is modular and does in fact generate the right sequence. We show that for $n\in A_p$, the sequence $S_p(0,n)$ is generated by a set which is almost equal to $S_p^k$ translated by $n$, and that the modifications we make to it do not change the properties we require of it. \begin{thm} For $p>3$ a prime, $S_p(0,n)$ is modular for all $n\in A_p$. \end{thm} \begin{proof} Say $p^{k-2}<n\leq p^{k-1}$. Define $A=\{0\}\cup(n+S_p^k)\setminus\{p^{k-1}(p-1)\}$. We will show that $S_p(0,n)=S_p(A)$ and that $A$ is modular mod $p^k$. The first requires showing that $A$ is $p$-free and covers all $n<x<p^k$ with $x\not\in A$, by Proposition \ref{thm:alt}. The second requires showing that $A$ is $p$-free mod $p^k$ and covers all $0\leq x<p^k$ with $x\not\in A$ mod $p^k$. Note that for $0<x<n$, the set $A$ cannot $p$-cover $x$, so it must $p$-cover $x+p^k$ instead. Thus it is sufficient to show that $A$ is $p$-free mod $p^k$ and covers all $n<x<p^k+n$ with $x\not\in A$ and $x\neq p^k$. Define $ A'=-n+A=\{-n\}\cup S_p^k\setminus\{p^{k-1}(p-1)-n\}$. We will show that $A'$ is $p$-free mod $p^k$ and covers all $0\leq x<p^k$ with $x\not\in A'$ and $x\neq p^k-n$. \textbf{$A'$ is $p$-free mod $p^k$:} Since $S_p^k$ is $p$-free mod $p^k$, any arithmetic progression must contain $-n$.
Furthermore, for any arithmetic progression $\{a_i\}$ mod $p^k$, it must be the case that $\{a_i\pmod{p^{k-1}}\}$ is an arithmetic progression mod $p^{k-1}$. However, by the definition of $A_p$, we know that $p^{k-1}-n\in S_p^k$, so the progression $\{a_i\pmod{p^{k-1}}\}$ is in fact an arithmetic progression mod $p^{k-1}$ in $S_p^{k-1}$, so it must be the constant arithmetic progression. Therefore $a_0\equiv a_1\equiv\cdots\equiv a_{p-1}\equiv -n \pmod{p^{k-1}}$. Then the only possible arithmetic progression mod $p^k$ in $A'$ is $ip^{k-1}-n$ for $0\leq i <p$, but since $(p-1)p^{k-1}-n\not\in A'$, in fact $A'$ is $p$-free mod $p^k$. \textbf{$A'$ covers its complement:} If $x=p^{k-1}(p-1)-n$, then $x$ is covered by $\{ip^{k-1}-n\}$ for $0\leq i<p-1$. Otherwise, $x\not\in S_p^k$. We already know that $x$ is covered by its canonical covering, so the only cases we have to consider are those in which the canonical covering of $x$ contains $p^{k-1}(p-1)-n$. Write \[p^{k-1}(p-1)-n=\sum_{i=0}^{k-1}d_ip^i,\]where $0\leq d_i<p$. Our hypotheses on $A_p$ tell us that $d_{k-1}=p-2$ and $d_{k-2}<p-2$ and $d_i\neq p-1$ for all $i$. For $S\subseteq\{0,1,\ldots,k-1\}$ define \[x(S)=\sum_{i=0}^{k-1}d_i(S)p^i,\] where $d_i(S)=d_i$ if $i\not\in S$ and $d_i(S)=p-1$ otherwise. The only numbers that contain $p^{k-1}(p-1)-n$ in their canonical covering are those that can be expressed as $x(S)$ where $|S|>0$ and $d_i$ is constant for all $i\in S$. Note that $S\neq\{k-1\}$ since then $x(S)=p^k-n$. Furthermore, the fact that $d_{k-1}=p-2> d_{k-2}$ implies that $\{k-1,k-2\}\not\subseteq S$. We will use the following notation. Let $j$ be the largest element of $S\setminus\{k-1\}$. The above reasoning shows that $j$ exists and that if $j=k-2$, then $k-1\not\in S$. We know that $d_j(S)=p-1$. Define \[a=d_j,\] and \[b=d_{j+1}=d_{j+1}(S).\] We know that $0\leq a,b<p-1$. We now have four cases. 
\begin{table} \centering \begin{subtable}[b]{0.4\textwidth} \centering \begin{tabular}{ccccccc} 11 & 6 & 12 & 12 & 0 & 3 & 12 \\ 11 & 6 & 11 & 11 & 0 & 3 & 11\\ 11 & 6 & 10 & 10 & 0 & 3 & 10\\ 11 & 6 & 9 & 9 & 0 & 3 & 9\\ 11 & 6 & 8 & 8 & 0 & 3 & 8\\ 11 & 6 & 7 & 7 & 0 & 3 & 7\\ 11 & 6 & 6 & 6 & 0 & 3 & 6 \\ 11 & 6 & 5 & 5 & 0 & 3 & 5 \\ 11 & 6 & 4 & 4 & 0 & 3 & 4 \\ 11 & 6 & 3 & 3 & 0 & 3 & 3 \\ 11 & 6 & 2 & 2 & 0 & 3 & 2 \\ 11 & 6 & 1 & 1 & 0 & 3 & 1 \\ 11 & 6 & 0 & 0 & 0 & 3 & 0 \\ \end{tabular} \caption{The canonical covering} \label{fig:can} \end{subtable} \qquad \begin{subtable}[b]{0.4\textwidth} \centering \begin{tabular}{ccccccc} 11 & 6 & 12 & 12 & 0 & 3 & 12 \\ 10 & 6 & 11 & 11 & 0 & 3 & 11\\ 9 & 6 & 10 & 10 & 0 & 3 & 10\\ 8 & 6 & 9 & 9 & 0 & 3 & 9\\ 7 & 6 & 8 & 8 & 0 & 3 & 8\\ 6 & 6 & 7 & 7 & 0 & 3 & 7\\ 5 & 6 & 6 & 6 & 0 & 3 & 6 \\ 4 & 6 & 5 & 5 & 0 & 3 & 5 \\ 3 & 6 & 4 & 4 & 0 & 3 & 4 \\ 2 & 6 & 3 & 3 & 0 & 3 & 3 \\ 1 & 6 & 2 & 2 & 0 & 3 & 2 \\ 0 & 6 & 1 & 1 & 0 & 3 & 1 \\ $(-1)$ & 6 & 0 & 0 & 0 & 3 & 0 \\ \end{tabular} \caption{An alternative covering} \end{subtable} \caption{An example of Case 1 with $p=13$. The first line gives the base-13 representation of $x(S)$ and the subsequent lines give the base-13 representations of the sequence which covers it. The canonical covering of $x(S)$ containing $p^{k-1}(p-1)-n$ and an alternative covering which contains $-n$ instead.} \label{fig:c1} \end{table} \textbf{Case 1:} $a=0$ Let $\Delta=\sum_{i\in S}p^i$. Then $\{p^{k-1}(p-1)-n+i\Delta\}$ for $0\leq i<p-1$ is the canonical covering of $x(S)$. However $\{ip^{k-1}-n+i\Delta\}$ for $0\leq i<p-1$ also covers $x(S)$. Table \ref{fig:c1} shows an example of both of these sequences. We can check that all of these terms are in $A'$. Since $p^{k-1}(p-1)-n+i\Delta\in S_p^k$ with first digit $p-2$, then $ip^{k-1}-n+i\Delta$ is the same except that it has first digit 0 through $p-2$ for $0<i<p-1$. Finally, for $i=0$, then $ip^{k-1}-n+i\Delta=-n\in A'$. 
\FloatBarrier \begin{table} \centering \begin{tabular}{ccc|ccccc|cc} 12 & 6 & 3 & 6 & 4 & 5 & 4 & 12 & 10 & 12 \\ 11 & 6 & 3 & 5 & 10 & 11 & 11 & 6 & 10 & 11\\ 10 & 6 & 3 & 5 & 4 & 5 & 5 & 0 & 10 & 10\\ 9 & 6 & 3 & 4 & 10 & 11 & 11 & 7 & 10 & 9\\ 8 & 6 & 3 & 4 & 4 & 5 & 5 & 1 & 10 & 8\\ 7 & 6 & 3 & 3 & 10 & 11 & 11 & 8 & 10 & 7\\ 6 & 6 & 3 & 3 & 4 & 5 & 5 & 2 & 10 & 6\\ 5 & 6 & 3 & 2 & 10 & 11 & 11 & 9 & 10 & 5\\ 4 & 6 & 3 & 2 & 4 & 5 & 5 & 3 & 10 & 4\\ 3 & 6 & 3 & 1 & 10 & 11 & 11 & 10 & 10 & 3\\ 2 & 6 & 3 & 1 & 4 & 5 & 5 & 4 & 10 & 2\\ 1 & 6 & 3 & 0 & 10 & 11 & 11 & 11 & 10 & 1\\ 0 & 6 & 3 & 0 & 4 & 5 & 5 & 5 & 10 & 0\\ \end{tabular} \caption{An example of Case 2 with $p=13$. An alternative covering of $x(S)$ with common difference $1000666601_{13}$. Outside of the vertical lines, the covering matches the canonical covering.} \label{fig:c2} \end{table} \textbf{Case 2:} $0<a<p-1$ and $0\leq b<(p-3)/2$. Let $j'>j$ be the smallest integer such that $d_{j'}\geq(p-1)/2$. Note that $j'$ exists since $d_{k-1}\geq p-2\geq (p-1)/2$. Table \ref{fig:c2} shows an example of this case with the range $j$ through $j'$ marked off. Now let \begin{align*} \Delta&=\sum_{i=j}^{j'-1}p^i(p-1)/2+\sum_{i\in S\setminus\{j,j+1,\ldots,j'\}}p^i\\ &=(p^{j'}-p^j)/2+\sum_{i\in S\setminus\{j,j+1,\ldots,j'\}}p^i. \end{align*} Consider the arithmetic progression $\{x(S)-i\Delta\}$ for $0<i\leq p-1$. We claim this is contained in $A'$. We can compute the digits of each of these numbers. Say $x(S)-i\Delta=\sum_l d_l^{(i)}p^l$. For $l\not\in\{j,j+1,\ldots,j'\}$, the digit $d_l^{(i)}$ matches the canonical covering: $d_l^{(i)}=d_l$ if $l\not\in S$, otherwise $d_l^{(i)}=p-1-i$. Now it is not too hard to check that $d_{j'}^{(i)}=d_{j'}-\lceil i/2\rceil$. For $j+1<l<j'$, the digit $d_{l}^{(i)}=d_l$ for $i$ even and $d_l^{(i)}=d_l+(p-1)/2$ for $i$ odd. The digit $d_{j+1}^{(i)}=d_{j+1}+1$ for $i>0$ even and $d_{j+1}^{(i)}=d_{j+1}+1+(p-1)/2$ for $i$ odd.
Finally, $d_j^{(i)}=i/2-1$ for $i>0$ even and $d_j^{(i)}=(p-1)/2+(i-1)/2$ for $i$ odd. Lastly, we check that all of these terms are in $A'$. The $j$th digit cycles through each value, so it is never equal to $p-1$ again. Since $d_{j'}\geq(p-1)/2$, it never goes below 0, and even if $d_{j'}=p-1$, it is immediately decreased by 1, so $d_{j'}^{(i)}<p-1$ for $i>0$. Since $d_l<(p-1)/2$ for $j<l<j'$, neither of the two values that this digit takes is $p-1$. Finally, the $j+1$st digit only takes on 3 values, none of which are $p-1$ since $d_{j+1}=b<(p-3)/2$. Furthermore, $d_{j+1}^{(i)}\neq d_{j+1}$ for $i>0$. Since this digit never takes on its original value again, none of the terms in this sequence are $p^{k-1}(p-1)-n$. \begin{table} \centering \begin{tabular}{c|cc|cccc} 11 & 6 & 12 & 12 & 0 & 3 & 12\\ 11 & 6 & 7 & 11 & 0 & 3 & 11\\ 11 & 6 & 2 & 10 & 0 & 3 & 10\\ 11 & 5 & 10 & 9 & 0 & 3 & 9\\ 11 & 5 & 5 & 8 & 0 & 3 & 8\\ 11 & 5 & 0 & 7 & 0 & 3 & 7\\ 11 & 4 & 8 & 6 & 0 & 3 & 6 \\ 11 & 4 & 3 & 5 & 0 & 3 & 5 \\ 11 & 3 & 11 & 4 & 0 & 3 & 4 \\ 11 & 3 & 6 & 3 & 0 & 3 & 3 \\ 11 & 3 & 1 & 2 & 0 & 3 & 2 \\ 11 & 2 & 9 & 1 & 0 & 3 & 1 \\ 11 & 2 & 4 & 0 & 0 & 3 & 0 \\ \end{tabular} \caption{An example of Case 3 with $p=13$. An alternative covering of $x(S)$ with common difference $51001_{13}$. Outside of the vertical lines, the covering matches the canonical covering. To ensure that $p^{k-1}(p-1)-n$ is not contained in this covering, we need $d=5\nmid p-a-1$.} \label{fig:c3} \end{table} \textbf{Case 3:} $0<a<p-1$ and $(p-3)/2\leq b<p-1$ and $(a,b,p)\neq(2,1,5)$. We claim we can find $1\leq d\leq b+1$ such that $d\nmid p-a-1$ given the conditions on this case. If $p>5$, we will show below (Lemma \ref{thm:lem}) that $\lcm(1,2,\ldots,(p-1)/2)\geq p-1$, so some number in this range must not divide $p-a-1<p-1$. If $p=5$, we can use $d=2$ unless $p-a-1=4-a=2$. In the case that $p=5$ and $a=2$, we can use $d=3$ if $b\geq 2$.
Let \[\Delta=dp^j+\sum_{i\in S\setminus\{j\}}p^i.\] Then consider the arithmetic progression $\{x(S)-i\Delta\}$ for $0<i\leq p-1$. We claim this is contained in $A'$. None of the digits of $x(S)-i\Delta$ are equal to $p-1$ except possibly the $j$th and $j+1$st digits. The $j$th digit decreases by $d\pmod{p}$, so it only takes on the value $p-1$ when $i=0$. Furthermore, the $j$th digit ``borrows'' from the $j+1$st digit exactly $d-1$ times, so since $p-1>b\geq(p-3)/2\geq d-1$, the $j+1$st digit never takes on the value $p-1$ and never ``borrows'' from the $j+2$nd digit. Finally, the only problem is if one of the terms in this arithmetic progression is equal to $p^{k-1}(p-1)-n$. This must occur before the $j+1$st digit has changed its value from $d_{j+1}$. In this range, the $j$th digit has value $d_j(S)-id=(p-1)-id$. If $(p-1)-id=a$, then $d\mid p-a-1$, a contradiction. Thus this arithmetic progression is contained in $A'$, as desired. \FloatBarrier \begin{table} \centering \begin{subtable}[b]{0.4\textwidth} \centering \begin{tabular}{ccccc|ccc|c} 3 & 1 & 0 & 2 & 3 & 1 & 1 & 4 & 2\\ 3 & 1 & 0 & 2 & 3 & 1 & 0 & 1 & 2\\ 3 & 1 & 0 & 2 & 3 & 0 & 3 & 3 & 2\\ 3 & 1 & 0 & 2 & 3 & 0 & 2 & 0 & 2\\ 3 & 1 & 0 & 2 & 3 & 0 & 0 & 2 & 2\\ \end{tabular} \caption{$d_{j+2}\geq1$} \label{fig:c4a} \end{subtable} \qquad \begin{subtable}[b]{0.4\textwidth} \centering \begin{tabular}{cc|cccccc|c} 3 & 1 & 3 & 0 & 1 & 0 & 1 & 4 & 2\\ 3 & 1 & 2 & 3 & 2 & 3 & 0 & 1 & 2\\ 3 & 1 & 2 & 1 & 0 & 0 & 3 & 3 & 2\\ 3 & 1 & 1 & 3 & 2 & 3 & 2 & 0 & 2\\ 3 & 1 & 1 & 1 & 0 & 1 & 0 & 2 & 2\\ \end{tabular} \caption{$d_{j+2}=0$} \label{fig:c4b} \end{subtable} \caption{An example of Case 4. An alternate covering for $x(S)$ in the cases that $d_{j+2}\geq1$ and $d_{j+2}=0$.} \label{fig:c4} \end{table} \textbf{Case 4:} $a=2$, $b=1$, and $p=5$. This case is similar to Case 2.
Note that for $j<j'<k$, it is not the case that $j'\in S$: by definition the only possibility is $j'=k-1$, but $\{j,k-1\}\subseteq S$ would imply that $d_{j}=d_{k-1}$, while $d_j=a=2$ and $d_{k-1}=p-2=3$. Also note that $j+1\neq k-1$ since $d_{k-1}=3\neq1=d_{j+1}$. If $p-1>d_{j+2}\geq 1$, we will use\[\Delta=5^{j+1}+3\cdot5^j+\sum_{i\in S\setminus\{j\}}5^i.\] It is easy to check that $\{x(S)-i\Delta\}$ for $0<i\leq 4$ is in $A'$. See Table \ref{fig:c4a} for the computation. Otherwise, $d_{j+2}=0$. Let $j'>j+2$ be the smallest integer such that $d_{j'}\geq 2$. This exists for the same reason as in Case 2. Now let \begin{align*} \Delta&=\left(\sum_{i=j+2}^{j'-1}2\cdot5^i\right)+5^{j+1}+3\cdot5^j+\sum_{i\in S\setminus\{j,j+1,\ldots,j'\}}5^i\\ &=(5^{j'}-5^{j+2})/2+5^{j+1}+3\cdot5^j+\sum_{i\in S\setminus\{j,j+1,\ldots,j'\}}5^i. \end{align*} Now we cover $x(S)$ by $\{x(S)-i\Delta\}$ for $0<i\leq 4$. As in Case 2, this covering is in $A'$. An example can be found in Table \ref{fig:c4b}. \end{proof} \begin{lem} If $p>5$ is prime, then $\lcm(1,2,\ldots,(p-1)/2)\geq p-1$. \label{thm:lem} \end{lem} \begin{proof} Say $p-1=\prod_i p_i^{e_i}$. If $p-1$ is not a prime power, then $p_i^{e_i}\in\{1,\ldots,(p-1)/2\}$ for all $i$, so $\lcm(1,2,\ldots,(p-1)/2)$ is divisible by each $p_i^{e_i}$ and hence by their product $p-1$. Otherwise $p-1$ is a prime power; since $p-1$ is even, we can write $p-1=2^k$. Since $p>5$, we have $k>2$, so $2^{k-1},3\in\{1,2,\ldots,(p-1)/2\}$ and the $\lcm$ is at least $3\cdot2^{k-1}\geq 2^k=p-1$. \end{proof} \section*{Acknowledgments} This research was conducted at the University of Minnesota Duluth REU and was supported by NSF grant 1358695 and NSA grant H98230-13-1-0273. The author thanks Joe Gallian for suggesting the problem and Ben Gunby and Levent Alpoge for helpful comments on the manuscript.
https://arxiv.org/abs/math/0609827
On A. Zygmund differentiation conjecture
Let $v$ be a Lipschitz unit vector field on $\mathbb{R}^n$ and $K$ its Lipschitz constant. We show that the maps $S_s\colon S_s(X) = X + sv(X)$ are invertible for $0\leq |s|<1/K$ and define nonsingular point transformations. We use these properties first to prove differentiation in $L^p$ norm for $1\leq p<\infty$. Then we show the existence of a universal set of values $s\in [-1/(2K),1/(2K)]$ of measure $1/K$ for which the Lipschitz unit vector fields $v\circ S_s^{-1}$ satisfy Zygmund's conjecture for all functions in $L^p(\mathbb{R}^n)$, for each $p$, $1\leq p< \infty$.
\section{Introduction} The Lebesgue differentiation theorem states that given a function $f\in L^1(\mathbb{R})$ the averages $\displaystyle \frac{1}{2t}\int_{-t}^t f(x+u) du$ converge a.e. to $f(x)$ when $t$ tends to zero. The differentiation for functions $F$ defined on $\mathbb{R}^2$ is more subtle. Actually it is a longstanding problem to find an analogue of the Lebesgue differentiation theorem for averages of the form $$ M_t(F)(x,y)= \frac{1}{2t}\int_{-t}^t F[(x,y)+ \beta v(x,y)]d\beta$$ for a measurable function $v$. One would expect these averages to converge a.e. to $F(x,y).$ In other words, one looks at the differentiation along the vector field $v$ (or the direction $v$); see for instance \cite{Stein}, \cite{DeGuzman}. Because of the geometry of $\mathbb{R}^2$, many directions are possible. In fact the example of the Nikodym set \cite{DeGuzman} shows that conditions on $v$ must be imposed if one expects the differentiation to hold. J. Bourgain \cite{Bourgain} established the differentiation of the averages $M_t(F)$ for functions $F\in L^2$ and $v$ a real analytic vector field. N.H. Katz \cite{Katz} has some partial results for Lipschitz vector fields. A longstanding conjecture attributed to A. Zygmund (see the paper by M. Lacey and X. Li, \cite{LaceyLi}) is the following. \vskip1ex \noindent{\bf {Zygmund's conjecture}} \vskip1ex {\em Let $v$ be a Lipschitz unit vector field and let $F\in L^2(\mathbb{R}^2)$. Do the averages $$ M_t(F)(x,y)=\frac{1}{2t}\int_{-t}^t F[(x,y)+ \beta v(x,y)]d\beta$$ converge a.e. to $F(x,y)$?} \vskip1ex First we will observe that for $s$ small enough (if $K$ is the Lipschitz constant of $v$ we will require $|s|< 1/(2K)$), the maps $\displaystyle S_s: S_s(x)= x + sv(x)$ are invertible.
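As a quick numerical illustration of this invertibility (our sketch; the particular vector field below is an arbitrary choice, not from the paper), the fixed-point iteration $X\mapsto Z-s\,v(X)$ recovers $S_s^{-1}(Z)$ whenever $|s|K<1$, since that map is a contraction with constant $|s|K$:

```python
import math

def v(x, y):
    """An arbitrary Lipschitz unit vector field (illustrative choice)."""
    th = 0.3 * math.sin(x + y)   # its Lipschitz constant is well below 1
    return (math.cos(th), math.sin(th))

def S(s, X):
    """The map S_s(X) = X + s v(X)."""
    x, y = X
    vx, vy = v(x, y)
    return (x + s * vx, y + s * vy)

def S_inv(s, Z, iters=60):
    """Invert S_s by iterating the contraction X <- Z - s v(X)."""
    X = Z
    for _ in range(iters):
        vx, vy = v(*X)
        X = (Z[0] - s * vx, Z[1] - s * vy)
    return X

Z = S(0.4, (1.0, 2.0))
print(S_inv(0.4, Z))   # recovers (1.0, 2.0) up to rounding
```

The iteration converges geometrically at rate $|s|K$, which is the same mechanism used in the proof of Lemma 1 below.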
This observation will allow us to derive the norm convergence of the averages $$M_t(F)= \frac{1}{2t}\int_{-t}^t F(x+ \beta v(x))d\beta$$ to the function $F$ in all $L^p$ spaces, $1\leq p <\infty.$ This norm convergence result was apparently an open problem (see \cite{Bourgain}). Then we will show that Zygmund's conjecture holds in all $L^p$ spaces, $1\leq p <\infty$, for the unit vector fields $v\circ S_s^{-1}$ when $s\in \mathcal{T},$ a universal subset of $[-1/2K, 1/2K]$ with measure $1/K.$ The method we use extends to $\mathbb{R}^n.$ \vskip1ex \noindent{\bf Acknowledgments} We thank C. Thiele and C. Demeter for bringing this problem to our attention. Thanks also to C. Demeter for his comments on a preliminary version of the paper. \vskip1ex \section{Differentiation in $\mathbb{R}^2$} The main steps are as follows. First we show that for $s$ small enough the maps $S_s:S_s(x,y) = (x, y) + sv(x,y)$ are invertible and nonsingular in the sense that $\mu(A)=0$ if and only if $\mu(S_s(A))= 0.$ A more precise statement is given in Lemma 1, where we prove that the operators induced by these maps are uniformly bounded on $L^p(\mathbb{R}^2)$ for $1\leq p\leq \infty.$ From this we derive the norm convergence of the averages $M_t(F)$ to $F$. Two consequences are derived from Lemma 1. First we obtain a ``weak'' version of our main result, Proposition 2, where we show that given a function $F\in L^1(\mathbb{R}^2)$ the differentiation occurs along the vector fields $v\circ S_s^{-1}$ as long as $s$ belongs to a set of measure $1/K$ depending a priori on $F.$ Then we use the Hardy--Littlewood maximal inequality on $L^1(\mathbb{R})$ to derive a first maximal inequality for the differentiation problem (Theorem 3). Our main result is proved by showing that the set where the differentiation occurs can in fact be taken independently of any $F\in L^1(\mathbb{R}^2).$ Finally we establish a ``local'' maximal inequality for the maximal operator associated with these averages.
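Before turning to the proofs, a small numerical experiment (our illustration; the smooth $F$ and the vector field are arbitrary choices, not part of the argument) shows the averages $M_t(F)(x,y)$ approaching $F(x,y)$ as $t\to 0$:

```python
import math

def v(x, y):
    """An arbitrary Lipschitz unit vector field (illustrative choice)."""
    th = 0.3 * math.sin(x + y)
    return (math.cos(th), math.sin(th))

def F(x, y):
    """A smooth test function."""
    return math.exp(-(x * x + y * y))

def M(t, x, y, n=2000):
    """Midpoint-rule approximation of (1/2t) * int_{-t}^{t} F[(x,y) + beta v(x,y)] dbeta."""
    vx, vy = v(x, y)
    total = 0.0
    for i in range(n):
        beta = -t + (i + 0.5) * (2 * t / n)
        total += F(x + beta * vx, y + beta * vy)
    return total / n

x, y = 0.5, -0.3
for t in (0.5, 0.1, 0.01):
    print(t, abs(M(t, x, y) - F(x, y)))   # the error shrinks with t
```

For a smooth $F$ the error behaves like $t^2$ times the second directional derivative, which is why the printed errors decay quickly; the theorems below address the far harder case of general $L^p$ functions.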
\subsection{Convergence in $L^p$ norm} \begin{lemma} Assume that $v$ is a unit vector field (i.e.\ $\|v(x,y)\|_2= 1$ for all $(x,y) \in \mathbb{R}^2$) and a Lipschitz map with constant $K$. Then for each $t$ with $|t|\leq T<\frac{1}{K}$, the map $S_t$ from $\mathbb{R}^2$ to $\mathbb{R}^2$ such that $S_t(x, y) = (x,y) + tv(x,y)$ is one to one and onto. Furthermore, if we denote by $\mu$ Lebesgue measure on $\mathbb{R}^2$, then for all measurable sets $A\subset \mathbb{R}^2$ and all $|s|\leq T$ we have $$\frac{1}{2\pi(1+ |s|K)^2}\mu(S_s(A)) \leq \mu(A)\leq 2\pi \big(\frac{1}{1-|s|K}\big)^2\mu(S_s(A)).$$ \end{lemma} \begin{proof} \vskip1ex First, if $S_t(x_1,y_1)= S_t(x_2, y_2)$ then we have $$\|(x_1,y_1)- (x_2, y_2)\|= \|t(v(x_1, y_1)- v(x_2, y_2))\|\leq KT \|(x_1,y_1)- (x_2, y_2)\|.$$ As $KT<1$ this shows that $S_t$ is one to one. \vskip1ex The equation $Z=(z_1, z_2)= (x,y) + tv(x,y)= X + tv(X)$ has a solution in $X=(x,y)$ that can be found by applying the fixed point theorem to the function $R_{Z}(X)= Z + X - S_t(X)$, which is a contraction since $\|R_Z(X)-R_Z(Y)\|=|t|\,\|v(X)-v(Y)\|\leq KT\|X-Y\|$. \vskip1ex To establish the second part of the lemma we can observe that it is enough to prove it for cubes $A.$ For any two points $Z_1= X_1 + sv(X_1)$ and $Z_2= X_2 + sv(X_2)$ we have $\|Z_1- Z_2\|\leq (1+ |s|K)\|X_1- X_2\|,$ and $\|X_1-X_2\|\leq \frac{1}{1-|s|K} \|S_s(X_1)-S_s(X_2)\|.$ Also, for each measurable compact set $B\subset \mathbb{R}^2$ we have $\mu (B) \leq \pi \operatorname{diam}(B)^2,$ where $\operatorname{diam}(B)$ is the diameter of the bounded set $B.$ As $\|S_s(X)- S_s(Y)\| \leq (1+ |s|K)\|X-Y\|$ we have $\operatorname{diam}(S_s(A)) \leq (1+|s|K)\operatorname{diam}(A).$ Therefore if we denote by $r$ the side length of the cube $A$ we have $$\mu (S_s(A))\leq \pi \operatorname{diam}(S_s(A))^2\leq \pi (1+|s|K)^2 \operatorname{diam}(A)^2= \pi (1+|s|K)^2\, 2r^2\leq 2\pi (1+|s|K)^2\mu(A).$$ By approximation we conclude that for any measurable set $A$ we have the same inequality.
From the inequality $\|X_1-X_2\|\leq \frac{1}{1-|s|K} \|S_s(X_1)-S_s(X_2)\|,$ we can conclude that $$\|S_s^{-1}(Y_1)- S_s^{-1}(Y_2)\| \leq \frac{1}{1-|s|K}\|Y_1- Y_2\|$$ for all $Y_1, Y_2\in \mathbb{R}^2.$ The same argument then leads to the inequality $$\mu(S_s^{-1}(B)) \leq 2\pi \big(\frac{1}{1-|s|K}\big)^2 \mu (B)$$ for all measurable sets $B\subset \mathbb{R}^2.$ From this we can derive the second inequality in the lemma. \end{proof} \vskip1ex Using the notations of Lemma 1 we can obtain the convergence in $L^p$ norm. \begin{proposition} For $0<|t|\leq T$ and for $1\leq p \leq \infty$ the operators $M_t$ defined pointwise by $$M_t(F)(x,y) = \frac{1}{2t}\int_{-t}^t F[(x,y) + sv(x,y)]ds $$ map $L^p$ into $L^p.$ Furthermore for each $1\leq p<\infty$ we have $$\lim_{t\rightarrow 0}\|M_t(F)- F\|_p = 0.$$ \end{proposition} \begin{proof} It follows immediately from Lemma 1. Indeed the case $p= \infty$ is obvious. For the other values of $p$, consider a nonnegative simple $L^p$ integrable function $\displaystyle F= \sum_{n=1}^N \alpha_n\mathbf{1}_{A_n}$ with disjoint measurable sets $A_n$. By Jensen's inequality and the disjointness of the sets $A_n$ we have \[ \begin{aligned} &\|M_t(F)\|_p^p = \int_{\mathbb{R}^2}\bigg|\frac{1}{2t}\int_{-t}^{t}\sum_{n=1}^N \alpha_n\mathbf{1}_{A_n}(S_s(x,y))\,ds\bigg|^p d\mu\\ &\leq \int_{\mathbb{R}^2}\bigg(\frac{1}{2t}\int_{-t}^{t}\sum_{n=1}^N \alpha_n^p\mathbf{1}_{A_n}(S_s(x,y))\,ds\bigg) d\mu = \frac{1}{2t}\int_{-t}^{t}\sum_{n=1}^N \alpha_n^p\mu(S_s^{-1}(A_n))\,ds\\ &\leq \frac{1}{2t}\bigg(\int_{-t}^t 2\pi\big(\frac{1}{1-|s|K}\big)^2 ds\bigg)\sum_{n=1}^N \alpha_n^p\mu(A_n) = \frac{2\pi}{1-tK}\|F\|_p^p \end{aligned} \] The boundedness of the operators $M_t$ follows by approximation.
\vskip1ex The second part of the proposition is a consequence of the simple fact that for the dense set of continuous functions with compact support we have the pointwise and norm convergence of the operators $M_t.$ \end{proof} \subsection{A ``weak'' version of Zygmund's conjecture} \vskip1ex The next proposition is a ``weak'' version of Zygmund's conjecture in the sense that for each function $F\in L^1(\mu)$ there exists a set of $s$ of measure $T$ such that $$\lim_{t\rightarrow 0}\frac{1}{2t}\int_{-t}^t F[(x,y) + \beta v(S_s^{-1}(x,y))]d\beta= F(x,y)$$ for almost every $(x,y)\in \mathbb{R}^2.$ In other words, the set of $s$ and of Lipschitz vector fields $v\circ S_s^{-1}$ for which the differentiation occurs may depend on $F.$ The next proposition also gives us a path on how to approach Zygmund's conjecture, namely by considering the averages along the values of the function $F$ at $(x,y) + \beta v(S_s^{-1}(x,y))$ and by exploiting the invertibility of the maps $S_s.$ \begin{proposition} Let $v$ be a Lipschitz function from $\mathbb{R}^2$ to $\mathbb{R}^2$ with Lipschitz constant $K$ such that $\|v(x,y)\|_2=1$ for almost all $(x,y)\in \mathbb{R}^2.$ Then for every function $F\in L^1(\mathbb{R}^2)$, for almost every $s\in [-T/2, T/2]$ and almost every $(x,y) \in \mathbb{R}^2$, we have $$\lim_{t\rightarrow 0} \frac{1}{2t}\int_{-t}^t F[(x,y)+ (s+ \beta)v(x,y) ]d\beta = F[(x,y)+ sv(x, y)]$$ and $$\lim_{t\rightarrow 0} \frac{1}{2t}\int_{-t}^t F[(x,y)+ \beta v(S_{s}^{-1}(x,y))]d\beta = F(x,y).$$ \end{proposition} \begin{proof} For $t, s$ and $\beta$ small enough we consider the averages $$\frac{1}{2t}\int_{-t}^{t} F[(x,y)+ (s+\beta)v(x,y)]d\beta.$$ Because of the assumptions made on $v$, by Lemma 1, for each $s$ with $|s|\leq T<\frac{1}{K}$ and almost all $(x, y)\in \mathbb{R}^2$, $$G_{x,y}(s)= F[(x,y)+ s v(x,y)] $$ is well defined and $G_{x,y}\in L^1([-T,T]).$ By the Lebesgue differentiation theorem, for almost every $s\in [-T/2, T/2]$ we have $$\lim_t \frac{1}{2t}\int_{-t}^t F[(x,y)+ (s+ \beta) v(x,y)]d\beta = F[(x,y)+ sv(x,y)].$$
Let us consider the complement $E$ in $\mathbb{R}^2\times [-T/2, T/2]$ of the set $$ \{ (x, y, s): \lim_t \frac{1}{2t}\int_{-t}^t F[(x,y)+ (s+ \beta) v(x,y)]d\beta = F[(x,y)+ sv(x,y)]\}.$$ By Fubini's theorem this set has measure zero. Again by Fubini's theorem, for almost all $s$ the set $E_s = \{(x, y): (x,y, s)\in E\}$ also has measure zero. By Lemma 1 the corresponding sets $S_s(E_s)$ will also have measure zero. This proves the second part of the proposition. \vskip1ex \end{proof} \vskip1ex As indicated above, the maximal inequality allowing us to derive the conclusions of Proposition 3 is given by the following result. \begin{theorem} Let $K$ be the Lipschitz constant of the unit vector field $v.$ Then for each $T$, $0<T< 1/K,$ and all $\lambda>0$ \[ \begin{aligned} &\frac{1}{T}\int_{-T/2}^{T/2}\mu\bigg\{(x,y)\in \mathbb{R}^2: \sup_{0<t\leq T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y) + \beta v(S_s^{-1}(x,y))]|d\beta >\lambda\bigg\}dm(s)\\ &\leq \frac{4\pi^2 (1+ TK)^2}{(1-TK)^2}\frac{1}{\lambda}\int_{\mathbb{R}^2} |F(x,y)|d\mu, \end{aligned} \] where $m$ denotes Lebesgue measure on $[-T/2, T/2].$ \end{theorem} \begin{proof} \vskip1ex For a.e.\ $(x, y)$ the function $G_{x,y}$ defined by $G_{x,y}(s)= \mathbf{1}_{[-T , T]}(s)\, F[(x,y)+ s v(x,y)] $ belongs to $L^1.$ By the Hardy--Littlewood maximal inequality applied to this function we have \[ \begin{aligned} & m\bigg\{s\in [-T/2, T/2]: \sup_{0<t\leq T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y)+ (s + \beta )v(x,y)]| d\beta > \lambda \bigg\} \\ &\leq \frac{1}{\lambda} \int_{-T}^{T} |F[(x,y)+ \beta v(x,y)]|d\beta. \end{aligned} \] We can integrate both sides of this inequality with respect to Lebesgue measure $\mu$ on $\mathbb{R}^2$ and apply Fubini's theorem.
We obtain by using Lemma 1, \[ \begin{aligned} &\mu\times{m} \bigg\{(x,y,s)\in R^2\times [-T/2, T/2] : \sup_{0<t\leq T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y)+ (s+ \beta) v(x,y)]|d\beta > \lambda \bigg\} \\ &\leq \frac{1}{\lambda} \int_{-T}^{T}\int_{\mathbb{R}^2} |F[(x,y)+ \beta v(x,y)]|d\mu d\beta \\ &\leq \frac{2\pi}{(1-TK)^2\lambda}\int_{-T}^{T}\int_{\mathbb{R}^2}|F(x,y)|d\mu d\beta \\ &= \frac{2\pi T}{(1-TK)^2\lambda}\int_{\mathbb{R}^2} |F(x,y)|d\mu = \frac{CT}{\lambda} \int_{\mathbb{R}^2} |F(x,y)|d\mu \end{aligned} \] Dividing all expressions above by $T$ and rewriting $$\mu\times{m}\bigg\{(x,y,s)\in R^2\times [-T/2, T/2] : \sup_{0<t\leq T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y)+ (s+ \beta) v(x,y)]|d\beta > \lambda \bigg\} $$ as $$\int_{-T/2}^{T/2} \mu\bigg\{(x,y)\in R^2:\sup_{0<t\leq T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y)+ (s+ \beta) v(x,y)]| d\beta > \lambda \bigg\}dm(s),$$ we derive the following inequality: \[ \begin{aligned} &\frac{1}{T}\int_{-T/2}^{T/2} \mu\bigg\{(x,y)\in R^2:\sup_{0<t\leq T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y)+ (s+ \beta) v(x,y)]|d\beta >\lambda \bigg\}dm(s) \\ &\leq \frac{C}{\lambda} \int_{\mathbb{R}^2} |F(x,y)|d\mu. \end{aligned} \] Using Lemma 1 we can observe that \[ \begin{aligned} &\mu\bigg\{(x,y)\in R^2:\sup_{0<t\leq T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y)+ (s+ \beta) v(x,y)]|d\beta >\lambda\bigg\}\\ &\geq \frac{1}{2\pi(1+ TK)^2}\mu\bigg\{(x,y)\in R^2: \sup_{0<t\leq T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y)+ \beta v(S_s^{-1}(x,y))]|d\beta >\lambda\bigg\}. \end{aligned} \] Therefore, for all $\lambda>0,$ we have the inequality \[ \begin{aligned} &\frac{1}{T}\int_{-T/2}^{T/2}\mu\bigg\{(x,y)\in R^2: \sup_{0<t\leq T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y) + \beta v(S_s^{-1}(x,y))]|d\beta >\lambda\bigg\}dm(s)\\ &\leq \frac{4\pi^2 (1+ TK)^2}{(1-TK)^2}\frac{1}{\lambda}\int_{\mathbb{R}^2} |F(x,y)|d\mu. 
\end{aligned} \] \end{proof} \subsection{A universal set of unit Lipschitz vector fields satisfying Zygmund's conjecture in all $L^p$ spaces.} \vskip1ex As indicated in the introduction, we want to strengthen Proposition 2 by showing that a universal set of vector fields $v\circ S_s^{-1}$ satisfy Zygmund's conjecture. More precisely we want to prove the following result. \begin{theorem} Let $v$ be a unit Lipschitz vector field with Lipschitz constant $K= 1/T$. Then there exists a set $\mathcal{T}\subset [-T/2, T/2]$ of measure $T$ such that for each $s\in \mathcal{T}$ the unit Lipschitz vector field $v\circ S_s^{-1}$ satisfies Zygmund's conjecture in all $L^p$ spaces for $1\leq p<\infty$. More precisely, for all $F\in L^p(\mathbb{R}^2)$ the averages $$\frac{1}{2t}\int_{-t}^t F[(x,y) + \beta v(S_s^{-1}(x,y))]d\beta$$ converge a.e. to $F(x,y).$ \end{theorem} To prove this theorem we introduce some notation. We denote by $\mathcal{D}_N$ the disk $\{(x,y)\in \mathbb{R}^2: \|(x,y)\|\leq N\}$ and by $\mathcal{E}$ a countable set of continuous functions with compact support dense in the closed unit ball of $L^1(\mathbb{R}^2).$ We will use the notation $\displaystyle M_t^s(F)(x,y)$ for the averages $$\frac{1}{2t}\int_{-t}^t |F[(x,y) + \beta v(S_s^{-1}(x,y))]|d\beta.$$ \vskip1ex Theorem 4 is a consequence of the following result. \begin{theorem} Under the assumptions of Theorem 4, we have for each $p$, $ 1\leq p<\infty,$ and for a.e. $s\in [-T/2, T/2]$ $$ \lim_{n\rightarrow \infty}\sup_{\|F\|_p\leq 1}\mu\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}M_t^s(F)(x,y)> n\} = 0.$$ \end{theorem} Our proof of these theorems will require several lemmas. We only give the proof for the case $p=1.$ The case $p>1$ can be obtained similarly without difficulty, since differentiation is a local property. Given any function $F\in L^1(\mu)$ with $\|F\|_1\leq 1,$ we can pick functions $F_N\in \mathcal{E}$ with $\|F-F_N\|_1\rightarrow 0$ and then a subsequence $\displaystyle G_j= F_{N_j}$ such that $\lim_j G_j(x,y) = F(x,y)$ except on a set of measure zero $\mathcal{N}$.
The next lemma is a consequence of Lemma 1. It shows that for almost every $(x,y)$ we can keep this convergence along the line segments $[(x,y)- \beta w(x,y), (x,y) + \beta w(x,y)]$, where $\beta$ is in absolute value smaller than the reciprocal of the Lipschitz constant of the unit vector field $w$. \begin{lemma} Let $F\in L^1(\mu)$ and $G_j$ a sequence of continuous functions with compact support converging a.e. to $F$. Let $w$ be a unit vector field with Lipschitz constant $\mathbf{K}$. Then for almost all $(x,y)\in \mathbb{R}^2$, for all $t\in (0, T/2],$ we have \[ \begin{aligned} &\frac{1}{2t}\int_{-t}^t |F[(x,y)+ \beta w(x,y)]|d\beta= \frac{1}{2t}\int_{-t}^t \liminf_j|G_j|[(x,y)+ \beta w(x,y)]d\beta \\ & = \sup_j\frac{1}{2t}\int_{-t}^t \inf_{k\geq j} |G_k|[(x,y)+ \beta w(x,y)]d\beta \end{aligned} \] \end{lemma} \begin{proof} Let us consider the null set $\mathcal{N}$ off which the sequence $G_j$ converges to $F$. We can assume that this set is measurable. Hence by Fubini we have \[ \begin{aligned} &\int_{\mathbb{R}^2}\int_{-T}^T \mathbf{1}_{\mathcal{N}}[(x, y) + \beta w(x,y)]d\beta d\mu = \int_{-T}^T \int_{\mathbb{R}^2}\mathbf{1}_{\mathcal{N}}[(x, y) + \beta w(x,y)]d\mu d\beta \\ &\leq \big(\frac{1}{1-TK}\big)^2 2T\mu (\mathcal{N})= 0. \end{aligned} \] Therefore there exists a set $A$ of zero measure such that for $(x,y)\in A^c$ we have \vskip1ex $\displaystyle \int_{-T}^T \mathbf{1}_{\mathcal{N}}[(x, y) + \beta w(x,y)]d\beta= 0$.
Hence for all $t\in (0, T/2]$ we also have $\displaystyle \int_{-t}^t \mathbf{1}_{\mathcal{N}}[(x, y) + \beta w(x,y)]d\beta= 0.$ Writing the function $|F|$ as $\mathbf{1}_{\mathcal{N}}|F| + \mathbf{1}_{\mathcal{N}^c}|F|$, we then have for $(x, y)\in A^c$ \[ \begin{aligned} & \frac{1}{2t}\int_{-t}^t |F[(x, y) + \beta w(x,y)]|d\beta = \frac{1}{2t}\int_{-t}^t \mathbf{1}_{\mathcal{N}^c}[(x,y) + \beta w(x, y)]|F[(x,y) + \beta w(x,y)]|d\beta\\ & = \frac{1}{2t}\int_{-t}^t\mathbf{1}_{\mathcal{N}^c}[(x,y) + \beta w(x, y)]\liminf_j|G_j|[(x,y)+ \beta w(x,y)]d\beta \\ & = \lim_j\frac{1}{2t}\int_{-t}^t \mathbf{1}_{\mathcal{N}^c}[(x,y) + \beta w(x, y)]\inf_{k\geq j} |G_k|[(x,y)+ \beta w(x,y)]d\beta \\ & \text{by the monotone convergence theorem} \\ & = \lim_j\frac{1}{2t}\int_{-t}^t \inf_{k\geq j} |G_k|[(x,y)+ \beta w(x,y)]d\beta \end{aligned} \] \end{proof} Next we want to check that the preceding lemma applies to all Lipschitz unit vector fields $v\circ S_s^{-1}.$ \begin{lemma} Let $v$ be a Lipschitz unit vector field with Lipschitz constant $K$. Consider $0<T<1/K.$ Then for all $s$ such that $0<|s|< T/2$ the unit vector fields $v\circ S_s^{-1}$ are Lipschitz vector fields with Lipschitz constant $2K.$ \end{lemma} \begin{proof} As $v$ is a Lipschitz vector field with Lipschitz constant $K$ we have for all $X$, $Y$ in $\mathbb{R}^2$ $$\|v(S_s^{-1}(X)) - v(S_s^{-1}(Y))\|\leq K\|S_s^{-1}(X) - S_s^{-1}(Y)\|.$$ We denote $Z_1= S_s^{-1}(X)$ and $Z_2 = S_s^{-1}(Y).$ Then $\displaystyle X = Z_1 + sv(Z_1)$ and $\displaystyle Y = Z_2 + sv(Z_2).$ Therefore $\displaystyle \|Z_1- Z_2\|\leq \|X - Y\| + |s|K\|Z_2- Z_1\|$ and we obtain \vskip1ex $\displaystyle \|Z_1 - Z_2\|\leq \frac{1}{1-|s|K}\|X-Y\|.$ We conclude then that $$\|v(S_s^{-1}(X)) - v(S_s^{-1}(Y))\|\leq \frac{K}{1-|s|K}\|X-Y\|.$$ Noticing that for $0<|s|<T/2$ we have $\displaystyle \frac{1}{1-|s|K}\leq 2,$ this concludes the proof of the lemma.
\end{proof} \vskip1ex Thus we can apply Lemma 2 with the constant $\mathbf{K}= 2K.$ \begin{lemma} For each $\lambda>0$ and each $s\in [-T/2, T/2]$ we have \begin{equation} \begin{aligned} & \sup_{\|F\|_1\leq 1}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y) + \beta v(S_s^{-1}(x,y))]|d\beta >\lambda \bigg\} \\ &=\sup_{\Phi_j\in \mathcal{E}}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t |\Phi_j[(x,y) + \beta v(S_s^{-1}(x,y))]|d\beta >\lambda \bigg\} \end{aligned} \end{equation} \end{lemma} \begin{proof} Let us fix $\epsilon>0$. We can find a function $F\in L^1$ with $\|F\|_1\leq 1$ such that \begin{equation} \begin{aligned} & \sup_{\|G\|_1\leq 1}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t |G[(x,y) + \beta v(S_s^{-1}(x,y))]|d\beta >\lambda \bigg\} \\ &\leq \mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y) + \beta v(S_s^{-1}(x,y))]|d\beta >\lambda \bigg\} + \epsilon \end{aligned} \end{equation} For the function $F$ we can find a subsequence $G_j= F_{n_j}$ of continuous functions in $\mathcal{E}$ which converges a.e.
to $F.$ Applying Lemma 2 with $w= v\circ S_s^{-1}$, off a null set $\mathcal{N}_s$ we have \[ \begin{aligned} &\sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y)+ \beta w(x,y)]|d\beta= \sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t \liminf_j|G_j|[(x,y)+ \beta w(x,y)]d\beta \\ & = \sup_{0<t<T/2}\sup_j\frac{1}{2t}\int_{-t}^t \inf_{k\geq j} |G_k|[(x,y)+ \beta w(x,y)]d\beta\\ & = \sup_{j}\sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t \inf_{k\geq j} |G_k|[(x,y)+ \beta w(x,y)]d\beta \\ & = \lim_j \sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t \inf_{k\geq j} |G_k|[(x,y)+ \beta w(x,y)]d\beta \\ & \text{(noticing that the sup over $j$ is a limit because the sequence is increasing in $j$).} \end{aligned} \] Hence we have \[ \begin{aligned} &\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t |F[(x,y) + \beta v(S_s^{-1}(x,y))]|d\beta >\lambda \bigg\} \\ &= \mu\bigg\{(x,y)\in \mathcal{D}_N: \lim_j \sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t \inf_{k\geq j} |G_k|[(x,y)+ \beta v(S_s^{-1}(x,y))]d\beta >\lambda\bigg\} \\ &= \lim_j\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t \inf_{k\geq j} |G_k|[(x,y)+ \beta v(S_s^{-1}(x,y))]d\beta>\lambda\bigg\}\\ &\leq \limsup_j\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t \inf_{k\geq j} |G_k|[(x,y)+ \beta v(S_s^{-1}(x,y))]d\beta>\lambda\bigg\} \\ &\leq\sup_{\Phi\in\mathcal{E}}\mu\bigg\{(x,y)\in \mathcal{D}_N:\sup_{0<t<T/2}\frac{1}{2t}\int_{-t}^t |\Phi|[(x,y)+ \beta v(S_s^{-1}(x,y))]d\beta >\lambda\bigg\} \end{aligned} \] Since $\epsilon>0$ was arbitrary, this last inequality combined with (2) proves Lemma 4. \end{proof} \begin{lemma} For $F$ continuous with compact support and each $\lambda>0$ the map $$s\in [-T/2, T/2]\mapsto \mu\bigg\{(x,y)\in \mathcal{D}_N; \sup_{0<t<T/2} M_t^s(F)(x,y)>\lambda \bigg\}$$ is continuous. \end{lemma} \begin{proof} Again we denote by $X$ the vector $(x,y)\in \mathbb{R}^2$.
For all $|\beta|\leq T/2$ and for all $|s_1|, |s_2|\leq T/2$ we have $$\|(X + \beta v(S_{s_1}^{-1}X)) - (X + \beta v(S_{s_2}^{-1}X))\|\leq \frac{T}{2}K\|S_{s_1}^{-1}(X) - S_{s_2}^{-1}(X)\|.$$ For $Z_1 = S_{s_1}^{-1}(X)$ and $Z_2= S_{s_2}^{-1}(X)$ we have $$Z_1 + s_1 v(Z_1)= X = Z_2 + s_2v(Z_2).$$ Therefore we have $$Z_1 - Z_2 = s_2 v(Z_2) - s_1v(Z_1) = (s_2-s_1)v(Z_2) + s_1(v(Z_2)-v(Z_1)).$$ As a consequence we obtain $$\|Z_1- Z_2\| \leq |s_2- s_1| + \frac{T}{2}K \|Z_1- Z_2\|$$ and this gives us the uniform estimate $$\|(X + \beta v(S_{s_1}^{-1}X)) - (X + \beta v(S_{s_2}^{-1}X))\|\leq \frac{KT}{2\big(1- \frac{TK}{2}\big)}|s_1- s_2|= C|s_1- s_2|.$$ Now we can conclude by using the uniform continuity of the function $F.$ Given $\epsilon>0$, let $\delta(\epsilon)$ be the corresponding modulus of uniform continuity of $F$; if $|s_1-s_2|<\frac{\delta(\epsilon)}{C},$ then for all $X\in \mathbb{R}^2$ we have $$\bigg|\sup_{0<t<T/2}M_t^{s_1}F(X) - \sup_{0<t<T/2}M_t^{s_2}F(X)\bigg|<\epsilon.$$ \end{proof} The following lemma is well known and can be found in \cite{Folland}. We state it here to make the paper easier to read. \begin{lemma} Let $\mathcal{C}$ be any collection of open intervals $B$ in $\mathbb{R}$ and let $U$ be the union of all these open intervals. If $c< m(U),$ then there exist disjoint $B_1, ...,B_k\in \mathcal{C}$ such that $\displaystyle \sum_{j=1}^k m(B_j)>\frac{1}{3} c.$ \end{lemma} Now we can proceed with the proof of Theorem 5. For simplicity we will denote by $M_*^s(F)(X)$ the maximal function $$\sup_{0<t<T/2}M_t^s(F)(x,y).$$ \vskip1ex \noindent{\bf{Proof of Theorem 5}} \vskip1ex We will argue by contradiction. Because of Lemma 4 the functions $$H_n: s\in [-T/2, T/2]\rightarrow H_n(s)=\sup_{\|F\|_1\leq 1}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}M_t^s(F)(x,y)> n\bigg\},$$ being equal for each $s$ to $$\sup_{F_i\in \mathcal{E}}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}M_t^s(F_i)(x,y)> n\bigg\},$$ are measurable and decreasing with $n$.
If the conclusion of Theorem 5 were false, then we could find a measurable set $A\subset (-T/2, T/2)$ with positive measure and a positive number $\delta$ such that for each $s\in A$ and for each $n\in \mathbb{N}$ we would have \begin{equation} H_n(s) > \delta \end{equation} We can observe that the set $A$ can be written as $$A = \bigcap_{n=1}^{\infty}\bigcup_{i=1}^{\infty}\bigg\{s\in(-T/2, T/2): \mu\bigg\{(x,y)\in \mathcal{D}_N: M_*^s(F_i)(x,y)> n\bigg\}>\delta\bigg\}.$$ For each $n$ the set $$\bigcup_{i=1}^{\infty}\bigg\{s\in(-T/2, T/2): \mu\bigg\{(x,y)\in \mathcal{D}_N: M_*^s(F_i)(x,y)> n\bigg\}>\delta\bigg\},$$ being open by Lemma 5, is a countable union of disjoint open intervals. Therefore the collection (over $n$) of all these intervals is countable. Because the sets $$\bigcup_{i=1}^{\infty}\bigg\{s\in(-T/2, T/2): \mu\bigg\{(x,y)\in \mathcal{D}_N: M_*^s(F_i)(x,y)> n\bigg\}>\delta\bigg\}$$ decrease with $n,$ the intervals obtained at stage $k+1$ are included in those corresponding to stage $k.$ \vskip1ex Our goal is to find a more appropriate countable covering of $A.$ First we can pick an integer $N_1$ large enough and an increasing sequence of integers $(N_k)_{k>1}$ such that the following conditions are satisfied. \begin{enumerate} \item $$A\subset V_{N_1}=\bigcup_{i=1}^{\infty}\bigg\{s\in (-T/2, T/2): \mu\bigg\{(x,y)\in \mathcal{D}_N: M_*^s(F_i)(x,y)> N_1\bigg\}>\delta\bigg\}.$$ \item $$m \bigg\{\bigcup_{i=1}^{\infty}\bigg\{s\in(-T/2, T/2): \mu\bigg\{(x,y)\in \mathcal{D}_N: M_*^s(F_i)(x,y)> N_1\bigg\}>\delta\bigg\}\bigg\}\leq 2m(A)$$ \item $\displaystyle \sum_{k=1}^{\infty} \frac{k}{N_k}\leq (\frac{\delta}{3})^2 m(A)\gamma,$ where the constant $\gamma$ will be specified later in order to establish a contradiction.
\end{enumerate} To start the selection process we pick any $s_1\in A.$ Then there exists an open interval $I_{1, N_1}\subset V_{N_1}$ that contains $s_1.$ Then we pick $s_2\in A\cap I_{1, N_1}^c$ and select $I_{1, N_2}$ containing $s_2.$ By induction we can obtain a countable collection of open intervals $\displaystyle \mathcal{J}_1=\bigcup_{k=1}^{\infty} I_{1, N_k}\subset V_{N_1}.$ If this collection does not cover $A$ then we continue the selection process by picking $s'\in A\cap \mathcal{J}_1^c$ and an open interval \vskip1ex $\displaystyle I_{2, N_2}\subset V_{N_2}= \bigcup_{i=1}^{\infty}\bigg\{s\in (-T/2, T/2): \mu\bigg\{(x,y)\in \mathcal{D}_N: M_*^s(F_i)(x,y)> N_2\bigg\}>\delta\bigg\}\subset V_{N_1}$ that contains $s'.$ The difference between the collections $\mathcal{J}_1$ and $\mathcal{J}_2$ is that the first is built with the sequence $(N_k)_{k\geq 1}$ while the second, starting with $N_2$, is built with the sequence $(N_{k+1})_{k\geq 1}.$ Because, as we noticed above, we started with at most countably many open intervals and at each step we picked a different open interval, the selection process has to stop after at most countably many iterations. So we obtain after induction at most a countable number of collections $\mathcal{J}_r,$ $r\in \mathbb{N},$ that will cover $A$ and will all be contained in $V_{N_1}.$ \vskip1ex We denote the union of these collections of sets by $\displaystyle \mathcal{R} = \bigcup_{r=1}^{\infty}\mathcal{J}_r.$ We can observe that with this selection process we have at most one interval associated with $N_1$, two with $N_2$, and generally at most $k$ with $N_k.$ Now we can use Lemma 6 to extract from this collection of open intervals disjoint open intervals $G_1$, $G_2, \dots, G_R$ such that \vskip1ex $(**) \displaystyle \sum_{h=1}^R m(G_h)> \frac{1}{3} m(A).$ As all these intervals are disjoint subsets of $V_{N_1}$ we also have \vskip1ex $\displaystyle(***)\, \sum_{h=1}^R m(G_h)\leq 2m(A).$ \vskip1ex Now we can reach a contradiction.
We combine what we obtained so far to make our choice of $\gamma.$ We have \[\begin{aligned} &\frac{\delta}{3} m(A)\leq \int_{-T/2}^{T/2} \sum_{h=1}^R \mathbf{1}_{G_h}(s)\mu\bigg\{ X\in\mathcal{D}_N; M_*^{s}(F_{m_h})(X)> \Gamma_h\bigg\}ds \\ &\text{for some integers $m_h$ and $\Gamma_h,$} \\ & \leq \sum_{h=1}^R (m(G_h))^{1/2}\bigg(\int_{-T/2}^{T/2}\bigg(\mu\bigg\{X\in\mathcal{D}_N; M_*^{s}(F_{m_h})(X)> \Gamma_h\bigg\}\bigg)^2 ds\bigg)^{1/2} \\ &\text{by the Cauchy-Schwarz inequality,} \\ & \leq \sum_{h=1}^R (m(G_h))^{1/2}(\mu(\mathcal{D}_N))^{1/2} \bigg(\int_{-T/2}^{T/2}\bigg(\mu\bigg\{X\in\mathcal{D}_N; M_*^{s}(F_{m_h})(X)> \Gamma_h\bigg\}\bigg)ds\bigg)^{1/2} \\ & \leq \big(\sum_{h=1}^R m(G_h)\big)^{1/2}(\mu(\mathcal{D}_N))^{1/2}\bigg(\sum_{h=1}^R \int_{-T/2}^{T/2}\bigg(\mu\bigg\{X\in\mathcal{D}_N; M_*^{s}(F_{m_h})(X)> \Gamma_h\bigg\}\bigg)ds\bigg)^{1/2} \\ &\text{by the Cauchy-Schwarz inequality,}\\ & \leq (2m(A))^{1/2}\mu(\mathcal{D}_N)^{1/2}T^{1/2}\bigg(\sum_{h=1}^R \frac{4\pi^2 (1+ TK)^2}{(1-TK)^2}\frac{1}{\Gamma_h} \bigg)^{1/2}\\ &\text{by using (***), Theorem 3 and the fact that $\|F_{m_h}\|_1\leq 1$.} \\ \end{aligned} \] Therefore we have \[\begin{aligned} &\frac{\delta}{3} (m(A))^{1/2}\leq T^{1/2}\mu(\mathcal{D}_N)^{1/2}(\frac{8\pi^2 (1+ TK)^2}{(1-TK)^2})^{1/2}\bigg(\sum_{h=1}^R \frac{1}{\Gamma_h}\bigg)^{1/2} \\ &\leq T^{1/2}\mu(\mathcal{D}_N)^{1/2}(\frac{8\pi^2 (1+ TK)^2}{(1-TK)^2})^{1/2}\bigg(\sum_{h=1}^{\infty} \frac{h}{N_h}\bigg)^{1/2} \\ & \text{ because we had for each $k$ at most $k$ intervals corresponding to $N_k$}\\ &< T^{1/2}\mu(\mathcal{D}_N)^{1/2}(\frac{8\pi^2 (1+ TK)^2}{(1-TK)^2})^{1/2}\frac{\delta}{3}(m(A))^{1/2}\gamma^{1/2}\\ &\text{by condition 3 in the choice of the sequence $(N_k)$.} \end{aligned} \] To establish a contradiction it is now enough to pick $$\gamma< \frac{1}{T\mu(\mathcal{D}_N)\frac{8\pi^2 (1+ TK)^2}{(1-TK)^2}},$$ a choice that could have been made independently of the selection process. This ends the proof of Theorem 5.
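The extraction step above leaned on the covering Lemma 6. As a side illustration, not part of the paper's argument, the standard greedy proof of that lemma can be sketched in a few lines; the function name and the longest-first strategy below are our own choices, made only to exhibit the $1/3$ factor.

```python
def select_disjoint(intervals):
    """Greedy selection behind Lemma 6: scan the open intervals from
    longest to shortest, keeping each one that is disjoint from all
    intervals kept so far.  Every discarded interval meets a kept
    interval at least as long, so it lies inside that interval's
    threefold dilation; hence the kept intervals have total length
    greater than one third of the measure of the union."""
    chosen = []
    for a, b in sorted(intervals, key=lambda iv: iv[1] - iv[0], reverse=True):
        if all(b <= c or d <= a for c, d in chosen):
            chosen.append((a, b))
    return chosen
```

For instance, for the family $(0,1), (0.5,1.5), (1.4,2), (3,3.2)$, whose union has measure $2.2$, the selected disjoint intervals have total length well above $2.2/3$.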
\vskip1ex Because of Lemma 1, Theorem 4 can be reformulated in the following way. \begin{corollary} Let $v$ be a unit Lipschitz vector field with Lipschitz constant $K= 1/T$. Then there exists a set $\mathcal{T}\subset [-T/2, T/2]$ of measure $T$ such that for each $s\in \mathcal{T}$, for all $F\in L^p(\mathbb{R}^2)$, $1\leq p<\infty$, the averages $$\frac{1}{2t}\int_{-t}^t F[(x,y) + (\beta +s)v(x,y)]d\beta$$ converge a.e. to $F[(x,y)+ sv(x,y)].$ \end{corollary} \noindent{\bf Proof of Theorem 4} \vskip1ex Theorem 5 provides us with a set $\mathcal{T}$ of measure $T$ such that for each $s\in \mathcal{T}$ we have $$\lim_{n}\sup_{\|F\|_1\leq 1}\mu\bigg\{X\in \mathcal{D}_N: M_*^s(F)(X)> n \bigg\} = 0.$$ We can conclude, by arguments similar to those displayed in \cite{Folland}, that for each $s\in \mathcal{T}$ the set of functions in $L^1(\mathbb{R}^2)$ for which the pointwise convergence holds on $\mathcal{D}_N$ is closed in $L^1$. As the differentiation obviously holds on the dense set of continuous functions, we have proved Theorem 4 for values of $X\in \mathcal{D}_N.$ The general case follows by letting $N$ tend to infinity. \subsection{A maximal inequality for the unit vector field $v\circ S_s^{-1}$.} First we want to refine the proof of Theorem 5 in order to evaluate the rate of convergence to zero (in $n$) of the maximal function $\displaystyle \sup_{\|F\|_1\leq 1}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}M_t^s(F)(x,y)> n\bigg\}.$ \begin{theorem} For each $0<\alpha< 1/2,$ for a.e. $s$ in a set of measure $T$ in $[-T/2, T/2]$ we have $$\lim_n n^{\alpha}\sup_{\|F\|_1\leq 1}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}M_t^s(F)(x,y)> n\bigg\}=0.$$ \end{theorem} \begin{proof} As in Theorem 5 we argue by contradiction.
Instead of the functions $H_n$ we use this time the functions $$O_n: s\in [-T/2, T/2]\rightarrow O_n(s) = n^{\alpha}\sup_{\|F\|_1\leq 1}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}M_t^s(F)(x,y)> n\bigg\},$$ which for each $s$ are equal to $$n^{\alpha}\sup_{F_i\in \mathcal{E}}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}M_t^s(F_i)(x,y)> n\bigg\}$$ (by Lemma 4). The set that replaces $A$ is $$B= \bigg\{s \in (-T/2, T/2): \limsup_n O_n(s)>\delta\bigg\}$$ By Lemma 5, for each positive integer $L$ the set $$ W_L= \bigcap_{n=1}^L\bigcup_{j\geq n}\bigcup_{i=1}^{\infty}\bigg\{s\in (-T/2, T/2): j^{\alpha}\mu\bigg\{(x,y)\in \mathcal{D}_N: M_*^s(F_i)(x,y)>j\bigg\}>\delta \bigg\},$$ being a finite intersection of open sets, is open and hence a countable union of disjoint open intervals. By taking the collection (over $L$) of all these open intervals we obtain a countable number of such intervals. As before, the intervals obtained at stage $L+1$ are subsets of those corresponding to stage $L.$ Having a countable number of intervals, we proceed with an increasing sequence of integers $N_k$ such that \begin{equation} \sum_{k=1}^{\infty}\frac{k}{N_k^{1-2\alpha}}\leq (\frac{\delta}{3})^2m(B)\gamma' \end{equation} We can start the selection process with the additional conditions \begin{equation} B\subset W_{N_1}= \bigcap_{n=1}^{N_1}\bigcup_{j\geq n}\bigcup_{i=1}^{\infty} \bigg\{s\in (-T/2, T/2): j^{\alpha}\mu\bigg\{(x,y)\in \mathcal{D}_N: M_*^s(F_i)(x,y)> j\bigg\}>\delta\bigg\} \end{equation} \begin{equation} m (W_{N_1})\leq 2m(B) \end{equation} As before we select by induction a covering $\displaystyle \mathcal{R}'= \bigcup_{r'=1}^{\infty}\mathcal{I}_{r'}$ of $B$ by open intervals, subsets of those composing $\displaystyle W_{N_1}.$ Furthermore, in this entire collection $\displaystyle \mathcal{R}'$ of open intervals we have at most one interval associated with $N_1$, two with $N_2$, and more generally at most $k$ with $N_k.$ Next we use Lemma 6 to extract from this collection disjoint open intervals
$G_1',G_2',\dots,G_{R}'$ such that \vskip1ex $\displaystyle (+++)\, \frac{1}{3} m(B)<\sum_{h=1}^R m(G_h') \leq 2m(B),$ the upper bound holding because these intervals are disjoint subsets of $W_{N_1}.$ To establish the contradiction we will choose $\gamma'$ appropriately later. We have \[ \begin{aligned} &\frac{\delta}{3} m(B)\leq \int_{-T/2}^{T/2} \sum_{h=1}^R \mathbf{1}_{G_h'}(s)(\Gamma_h')^{\alpha}\mu\bigg\{ X\in\mathcal{D}_N; M_*^{s}(F_{m_h'})(X)> \Gamma_h'\bigg\}ds \\ &\text{for some integers $m_h'$ and $\Gamma_h'.$} \\ \end{aligned} \] We can use the Cauchy-Schwarz inequality to dominate this last term. \[\begin{aligned} & \leq \sum_{h=1}^R (m(G_h'))^{1/2}(\Gamma_h')^{\alpha}\bigg(\int_{-T/2}^{T/2}\bigg(\mu\bigg\{X\in\mathcal{D}_N; M_*^{s}(F_{m_h'})(X)> \Gamma_h'\bigg\}\bigg)^2 ds\bigg)^{1/2} \\ & \leq \sum_{h=1}^R (m(G_h'))^{1/2}(\Gamma_h')^{\alpha}(\mu(\mathcal{D}_N))^{1/2} \bigg(\int_{-T/2}^{T/2}\bigg(\mu\bigg\{X\in\mathcal{D}_N; M_*^{s}(F_{m_h'})(X)> \Gamma_h'\bigg\}\bigg)ds\bigg)^{1/2} \\ & \leq \big(\sum_{h=1}^R m(G_h')\big)^{1/2}(\mu(\mathcal{D}_N))^{1/2}\bigg(\sum_{h=1}^R (\Gamma_h')^{2\alpha}\int_{-T/2}^{T/2}\bigg(\mu\bigg\{X\in\mathcal{D}_N; M_*^{s}(F_{m_h'})(X)> \Gamma_h'\bigg\}\bigg)ds\bigg)^{1/2} \\ &\text{by the Cauchy-Schwarz inequality,}\\ & \leq (2m(B))^{1/2}\mu(\mathcal{D}_N)^{1/2}T^{1/2}\bigg(\sum_{h=1}^R \frac{4\pi^2 (1+ TK)^2}{(1-TK)^2} \frac{(\Gamma_h')^{2\alpha}}{\Gamma_h'} \bigg)^{1/2}\\ &\text{by using (+++) and Theorem 3} \\ &=(2m(B))^{1/2}\mu(\mathcal{D}_N)^{1/2}T^{1/2}\bigg(\sum_{h=1}^R \frac{4\pi^2 (1+ TK)^2}{(1-TK)^2} \frac{1}{(\Gamma_h')^{1-2\alpha}}\bigg)^{1/2}.
\\ &\text{Hence we have}\\ &\frac{\delta}{3}(m(B))^{1/2}\leq T^{1/2}\mu(\mathcal{D}_N)^{1/2}(\frac{8\pi^2 (1+ TK)^2}{(1-TK)^2})^{1/2}\bigg(\sum_{h=1}^R \frac{1}{(\Gamma_h')^{1-2\alpha}}\bigg)^{1/2} \\ &\leq T^{1/2}\mu(\mathcal{D}_N)^{1/2}(\frac{8\pi^2 (1+ TK)^2}{(1-TK)^2})^{1/2}\bigg(\sum_{h=1}^{\infty} \frac{h}{N_h^{1-2\alpha}}\bigg)^{1/2} \\ & \text{ because we had for each $k$ at most $k$ intervals corresponding to $N_k$}\\ &< T^{1/2}\mu(\mathcal{D}_N)^{1/2}(\frac{8\pi^2 (1+ TK)^2}{(1-TK)^2})^{1/2}\frac{\delta}{3}(m(B))^{1/2}\gamma'^{1/2}\\ &\text{by using (4).} \end{aligned} \] The contradiction is obtained for $$ \gamma'<\frac{1}{T\mu(\mathcal{D}_N)\frac{8\pi^2 (1+ TK)^2}{(1-TK)^2}}.$$ \end{proof} The following result can be derived from Theorem 6. \vskip1ex \begin{theorem} For each $0<\alpha<1/2$ there exists a function $C_{\alpha}$, almost everywhere finite on $[-T/2, T/2],$ such that for all $\lambda >1$ and all $F\in L^1$ with $\|F\|_1<\lambda$ we have \begin{equation} \mu\bigg\{X\in \mathcal{D}_N: M_*^s(F)(X)>\lambda\bigg\}\leq 2^{\alpha}C_{\alpha}(s)\bigg(\frac{\|F\|_1}{\lambda}\bigg)^{\alpha} \end{equation} \end{theorem} \begin{proof} For a fixed $\alpha$ we denote by $C_{n,\alpha}(s)$ the function $$n^{\alpha}\sup_{\|F\|_1\leq 1}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}M_t^s(F)(x,y)> n\bigg\}.$$ By Theorem 6 we have $\displaystyle \lim_n C_{n,\alpha}(s) = 0 $ for a.e. $s\in [-T/2, T/2].$ Hence the function $\displaystyle C_{\alpha}: C_{\alpha}(s)= \sup_{n} C_{n,\alpha}(s)$ is a.e.
finite on $[-T/2, T/2].$ Furthermore for each $\lambda >1$ we have \[\begin{aligned} &\sup_{\|F\|_1\leq 1}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}M_t^s(F)(x,y)> \lambda\bigg\}\\ &\leq \sup_{\|F\|_1\leq 1}\mu\bigg\{(x,y)\in \mathcal{D}_N: \sup_{0<t<T/2}M_t^s(F)(x,y)> [\lambda]\bigg\}\\ &\leq \frac{C_{\alpha}(s)}{[\lambda]^{\alpha}}\\ \end{aligned} \] Therefore, by using the inequality $\lambda <2 [\lambda]$, we have $$ \sup_{\|F\|_1\leq 1}\mu\bigg\{X\in \mathcal{D}_N: M_*^s(F)(X)>\lambda\bigg\} \leq \frac{2^{\alpha}C_{\alpha}(s)}{\lambda^{\alpha}}$$ Now, applying this estimate to the function $F/\|F\|_1$ at the level $\lambda/\|F\|_1>1,$ we can derive (7). \end{proof} \section{Differentiation in $\mathbb{R}^n$} The results obtained in the previous section can be extended without difficulty to $\mathbb{R}^n.$ In fact, the only place where we used that the dimension is two is the proof of Lemma 1, and the dimension enters only through the constant of that lemma. We only state the lemma that would replace Lemma 1 in $\mathbb{R}^n.$ We consider a Lipschitz unit vector field $v$ on $\mathbb{R}^n$ with constant $K$ and simply denote by $\displaystyle M_t(F)(X)$ the averages $$\frac{1}{2t}\int_{-t}^t F[X+ \beta v(X)]d\beta$$ where $X = (x_1,x_2, ...,x_n).$ We use the same notation $S_s(X) = X + s v(X)$ for the corresponding map from $\mathbb{R}^n$ to $\mathbb{R}^n.$ We denote by $\mu_n$ Lebesgue measure on $\mathbb{R}^n.$ \begin{lemma} For all $|s| \leq T$ where $T<1/K$ the maps $S_s$ are one to one and onto. Furthermore there exist constants $c_n$ and $C_n$ depending only on $n$, $K$ and $T$ such that for all measurable sets $A$ in $\mathbb{R}^n$ we have $$c_n\mu_n(S_s(A))\leq \mu_n(A) \leq C_n\mu_n(S_s(A))$$ \end{lemma} \begin{proof} The invertible character of the maps $S_s$ for small $s$ can be established in the same way.
The inequalities $$c_n\mu_n(S_s(A))\leq \mu_n(A) \leq C_n\mu_n(S_s(A))$$ follow from the inequalities $\|Z_1- Z_2\|\leq (1+ |s|K)\|X_1- X_2\|$ and $\|X_1-X_2\|\leq \frac{1}{1-|s|K} \|S_s(X_1)-S_s(X_2)\|,$ where $Z_1 = S_s(X_1)$ and $Z_2= S_s(X_2).$ \end{proof} The maximal inequality that replaces Theorem 3 is the following. \begin{theorem} Let $K$ be the Lipschitz constant for the unit vector field $v.$ Then for each $T$, $0<T< 1/K,$ there exists a constant $\mathcal{C}_n$ such that for all $\lambda>0$ \[ \begin{aligned} &\frac{1}{T}\int_{-T/2}^{T/2}\mu_n\bigg\{X\in \mathbb{R}^n: \sup_{0<t\leq T/2}\frac{1}{2t}\int_{-t}^t |F[ X + \beta v(S_s^{-1}(X))]|d\beta >\lambda\bigg\}dm(s)\\ &\leq \frac{\mathcal{C}_n}{\lambda}\int_{\mathbb{R}^n} |F(X)|d\mu_n, \end{aligned} \] where $m$ denotes Lebesgue measure on $[-T/2, T/2].$ \end{theorem} The proof is identical to the one given for Theorem 3, so we omit it. From this maximal inequality on, the reasoning is identical; the only difference is the constant $\mathcal{C}_n$, which depends on $T,$ $K$ and $n.$ Following the same path, one can then extend Theorem 4 and Theorem 5 to the case of $\mathbb{R}^n.$
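As a numerical sanity check, not part of the proofs, the two-sided Lipschitz estimates behind Lemma 7 can be tested on a concrete example. The sketch below assumes the unit vector field $v(x,y)=(\cos x, \sin x)$, which is Lipschitz with constant $K=1$, and verifies $(1-|s|K)\|X-Y\|\leq \|S_s(X)-S_s(Y)\|\leq (1+|s|K)\|X-Y\|$ at random pairs of points.

```python
import math
import random

K = 1.0  # Lipschitz constant of the example field v(x, y) = (cos x, sin x)

def v(x, y):
    # a unit vector field: cos(x)^2 + sin(x)^2 = 1
    return (math.cos(x), math.sin(x))

def S(s, x, y):
    # the point transformation S_s(X) = X + s v(X)
    vx, vy = v(x, y)
    return (x + s * vx, y + s * vy)

s = 0.25  # |s| < 1/K, the regime where S_s is invertible
random.seed(0)
for _ in range(1000):
    X = (random.uniform(-5, 5), random.uniform(-5, 5))
    Y = (random.uniform(-5, 5), random.uniform(-5, 5))
    d = math.hypot(X[0] - Y[0], X[1] - Y[1])
    SX, SY = S(s, *X), S(s, *Y)
    ds = math.hypot(SX[0] - SY[0], SX[1] - SY[1])
    # two-sided bounds: S_s distorts distances by a factor between 1-sK and 1+sK
    assert (1 - s * K) * d - 1e-9 <= ds <= (1 + s * K) * d + 1e-9
```

These are exactly the distance distortion bounds from which the measure estimates $c_n\mu_n(S_s(A))\leq \mu_n(A)\leq C_n\mu_n(S_s(A))$ follow.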
https://arxiv.org/abs/math/0609827
On A. Zygmund differentiation conjecture
Consider $v$ a Lipschitz unit vector field on $\mathbb{R}^n$ and $K$ its Lipschitz constant. We show that the maps $S_s: S_s(X) = X + sv(X)$ are invertible for $0\leq |s|<1/K$ and define nonsingular point transformations. We use these properties to prove first the differentiation in $L^p$ norm for $1\leq p<\infty.$ Then we show the existence of a universal set of values $s\in [-1/2K, 1/2K]$ of measure $1/K$ for which the Lipschitz unit vector fields $v\circ S_s^{-1}$ satisfy Zygmund's conjecture for all functions in $L^p(\mathbb{R}^n)$ and for each $p$, $1\leq p<\infty.$
https://arxiv.org/abs/2108.12514
A Bisection Method Like Algorithm for Approximating Extrema of a Continuous Function
For a continuous function $f$ defined on a closed and bounded domain, there is at least one maximum and one minimum. First, we introduce some preliminaries which are necessary throughout the paper. We then present an algorithm, similar to the bisection method, to approximate those maximum and minimum values. We analyze the order of convergence of the method and the error at the $k$-th step. Then we discuss the pros and cons of the method. Finally, we apply our method to some special classes of functions to obtain nicer results. At the end, we write a Matlab script which implements our algorithm.
\section{Introduction} Unfortunately, there is no general algorithm or method that computes an exact root of a nonlinear equation. However, in science and engineering, one often encounters a situation in which one needs to compute a root of a nonlinear equation. For this reason, there are several methods that approximate a root of a nonlinear equation numerically. One of those methods is the bisection method, which is alternatively called the binary chopping, interval halving, or bracketing method. Before introducing and proving the bisection method, we need to recall some concepts and theorems from calculus. \\ {\bf 1.1. Monotone Convergence Theorem (Ross [1]).} All bounded monotone sequences converge. \begin{proof} See [1, Theorem 10.2.] \end{proof} \par \noindent {\bf 1.2. The Squeeze (Sandwich) Theorem ([2]).} Suppose that $g(x)\leq f(x) \leq h(x)$ for all $x$ in some open interval containing $c$, except possibly at $x=c$ itself. Suppose also that $\displaystyle{\lim_{x \to c} g(x)=\lim_{x \to c} h(x) = L}$. Then $\displaystyle{\lim_{x \to c} f(x) = L}$. \begin{proof} See [2, Theorem 4] \end{proof} \par \noindent {\bf 1.3. Intermediate-Value Theorem (Lang [3]).} Let $f$ be a continuous function on a closed interval $[a,b]$. Let $\alpha = f(a)$ and $\beta = f(b)$. Let $\gamma$ be a number such that $\alpha < \gamma < \beta$ or $\beta < \gamma < \alpha$. Then there exists a number $c$, $a<c<b$, such that $f(c)=\gamma$. \begin{proof} See [3, Theorem 4.4.] \end{proof} \par \noindent {\bf 1.4. Extreme Value Theorem (Lang [4]).} Let $f$ be a continuous function on a closed interval $[a,b]$. Then there exists an element $c\in [a,b]$ such that $f(c)\geq f(x)$ for all $x\in [a,b]$. That is, $c$ is a maximum for $f$ on $[a,b]$. Also, there exists an element $d\in[a,b]$ such that $f(x)\geq f(d)$ for all $x\in[a,b]$. That is, $d$ is a minimum for $f$ on $[a,b]$. \begin{proof} See [4, Theorem 4.3.] \end{proof} \par \noindent {\bf 1.5.
The Bisection Method (Kincaid, Cheney [5]).} If $f$ is a continuous function on a closed interval $[a,b]$ and if $f(a)f(b)<0$, then $f$ must have a zero in $(a,b)$. Since $f(a)f(b)<0$, the function $f$ changes sign on the interval $[a,b]$ and, therefore, it has at least one zero in the interval. This is a consequence of the Intermediate-Value Theorem. The bisection method exploits this idea in the following way. If $f(a)f(b)<0$, then we compute $\displaystyle{c=\frac{a+b}{2}}$ and test whether $f(a)f(c)<0$. If this is true, then $f$ has a zero in $[a,c]$. So we rename $c$ as $b$ and start again with the new interval $[a,b]$, which is half as large as the original interval. If $f(a)f(c)>0$, then $f(c)f(b)<0$, and in this case we rename $c$ as $a$. In either case, a new interval containing a zero of $f$ has been produced, and the process can be repeated. The bisection method finds one zero but not all the zeros in the interval $[a,b]$. Of course, if $f(a)f(c)=0$, then $f(c)=0$ and a zero has been found. However, it is quite unlikely that $f(c)$ is exactly $0$ in the computer because of roundoff errors. Thus, the stopping criterion should not be whether $f(c)=0$. A reasonable tolerance must be allowed depending on the situation. \\ {\bf An example Matlab code for solving $e^x-4x=0$ in the interval $[0,1]$ by using the bisection method:} \begin{lstlisting}[frame=single]
% bisection method for f(x) = exp(x) - 4x on [0,1]
f = @(x) exp(x)-4*x;
fplot(f, [0,1])          % visualize the sign change
epsilon = 1e-7;          % error tolerance
a = 0.0;
b = 1;
while abs(b - a)/2 > epsilon
    c = (a+b)/2;         % midpoint of the current bracket
    if f(b)*f(c) > 0     % f(b), f(c) have the same sign: root in [a,c]
        b = c;
    else                 % otherwise the root lies in [c,b]
        a = c;
    end
end
estRoot = (a+b)/2
display(f(estRoot))
\end{lstlisting} \par \noindent {\bf Theorem 1.6.} If $[a_0,b_0 ],[a_1,b_1 ],\dots,[a_n,b_n ],\dots$ denote the intervals in the bisection method, then the limits $\displaystyle{\lim_{n \to \infty} a_n}$ and $\displaystyle{\lim_{n \to \infty} b_n}$ exist, are equal, and represent a zero of $f$.
If $r=\displaystyle{\lim_{n \to \infty} c_n}$ and $c_n=\displaystyle{\frac{a_n+b_n}{2}}$, then \begin{equation*} |r-c_n|\leq 2^{-(n+1)}(b_0-a_0) \end{equation*} \begin{proof} See [5, pp. 74-78] \end{proof} \section{Construction of Our Algorithm} Let $f$ be a continuous function on a closed interval $[a,b]$ and let $\alpha = f(a)$ and $\beta = f(b)$. By the Extreme Value Theorem, $f$ has at least one maximum value and one minimum value. Therefore, $f$ is bounded on the interval $[a,b]$. Let $U$ be an upper bound and $L$ be a lower bound for $f$ on the interval $[a,b]$. That is, $L\leq f(x)\leq U$ for all $x\in[a,b]$. Let $f$ attain its maximum value at $c\in[a,b]$ and let $M=f(c)$. Since $f$ is continuous on $[a,b]$, $f$ is also continuous on $[a,c]$ and $[c,b]$. Let $m$ be the minimum of $\alpha=f(a)$ and $\beta=f(b)$, that is, $m=\text{min}\{\alpha,\beta\}$. Then there are two cases: 1) $m=\alpha$ or 2) $m=\beta$. \par \noindent {\bf Case 1.} $m=\alpha$. If $m=\alpha$, then, by the Intermediate-Value Theorem applied to $f$ on $[a,c]$, for every $y\in(m,M)$ there is a number $x\in(a,c)$ such that $y=f(x)$. In other words, there is no number $y$ between $m$ and $M$ for which the equation $y=f(x)$ has no solution in the interval $(a,c)$. Since $f(x)\leq U$ for all $x\in[a,b]$, we can conclude that $\alpha=m\leq M=f(c)\leq U$. Now, let $N$ be any number between $\alpha=m$ and $M=f(c)$. Then $N\leq M=f(c)\leq U$. Let $\displaystyle{A=\frac{N+U}{2}}$. If the equation $f(x)=A$ has a solution in the interval $[a,b]$, then $A\leq M=f(c)\leq U$ and we rename $A$ as $N$. So we start again with the new interval $[N,U]$, which is half as large as the original interval. If the equation $f(x)=A$ has no solution in the interval $[a,b]$, then $A$ is a new upper bound for the function $f$ on the interval $[a,b]$: we have shown that every value between $m$ and $M$ is attained on $(a,c)$, so a value that is not attained must exceed $M$, and $A$ is an upper bound smaller than $U$. Then we rename $A$ as $U$.
In either case, a new interval containing the maximum value of $f$ has been produced, and the process can be repeated. By repeating this process, we obtain smaller and smaller intervals containing the maximum value of $f$. \par \noindent {\bf Case 2.} $m=\beta$. When $m=\beta$, an argument similar to that of Case 1 $(m=\alpha)$ can be carried out. Thus, in either case, our algorithm converges to the maximum value of the function. \begin{algorithm}[h] \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{1) Function $f$ \\ 2) Tolerance (error) $\epsilon_{tol}$ \\ 3) Points $m$ and $u$ such that $m=f(x)$ has a solution $x\in [a,b]$ and $u=f(x)$ \\ has no solution in $[a,b]$ \\ 4) Maximum iterations $N_{MAX}$ to prevent infinite loop} \Output{Value which differs from the maximum of $f(x)$ in the interval $[a,b]$ by less than $\epsilon_{tol}$} \caption{Our Constructed Algorithm\label{our algorithm}} \BlankLine $N\leftarrow 1$ \\ \While{$N \leq N_{MAX}$}{ $c\leftarrow (u+m)/2$\; \If{$(u-m)/2 \leq \epsilon_{tol}$}{ Output($c$)\; \par \textbf{Stop} } $N\leftarrow N+1$\; \If{$c=f(x)$ has a solution in $[a,b]$}{ $m\leftarrow c$\; } \Else{ $u\leftarrow c$\; } } Output(``Maximum iterations reached without the desired tolerance. Input a bigger $N_{MAX}$") \end{algorithm} \section{Error Analysis} Let $m_0$ be a number such that $m\leq m_0\leq M=f(c)$ and let $u_0$ be an upper bound for the function $f$ on the interval $[a,b]$. Let $[m_n,u_n ]$ be the interval at the $n$-th step. Then \begin{equation*} \label{eq1} \begin{split} m_0 & \leq m_1 \leq m_2 \leq \dots \leq u_0 \\ u_0 & \geq u_1 \geq u_2 \geq \dots \geq m_0 \end{split} \end{equation*} \begin{flalign}\label{eq:1} & & & \ \ \ u_{n+1}-m_{n+1} = \frac{u_n-m_n}{2} & & (n\geq 0) \end{flalign} Since the sequence $(m_n)$ is nondecreasing (i.e.\ monotone) and bounded, it converges by the Monotone Convergence Theorem. Likewise, $(u_n)$ converges as it is nonincreasing (i.e.\ monotone) and bounded.
Applying Equation (\ref{eq:1}) repeatedly, we find that \begin{equation*} u_n-m_n=2^{-n}(u_0-m_0) \end{equation*} Thus, \begin{equation*} \lim_{n \to \infty} u_n - \lim_{n \to \infty} m_n = \lim_{n \to \infty} 2^{-n}(u_0-m_0)=0 \end{equation*} So, $\displaystyle{\lim_{n \to \infty} u_n = \lim_{n \to \infty} m_n}$. If we put \begin{equation*} M = \lim_{n \to \infty} u_n = \lim_{n \to \infty} m_n \end{equation*} then, since the maximum value of $f$ lies in $[m_n,u_n]$ for every $n$, the Squeeze Theorem shows that $M$ is the maximum value of the function $f$ on the interval $[a,b]$. Suppose that, at a certain stage in the process, the interval $[m_n,u_n]$ has just been defined. If the process is now stopped, the maximum value of $f$ is certain to lie in this interval. The best estimate of the maximum value at this stage is not $m_n$ or $u_n$ but the midpoint of the interval: \begin{equation*} c_n=\frac{m_n+u_n}{2} \end{equation*} The error is then bounded as follows: \begin{equation*} |M-c_n|\leq \frac{u_n-m_n}{2} =2^{-(n+1)}(u_0-m_0) \end{equation*} Summarizing this discussion, we have the following proposition. {\bf Proposition 3.1.} If $[m_0,u_0 ],[m_1,u_1 ],\dots,[m_n,u_n ],\dots$ denote the intervals in the algorithm, then the limits $\displaystyle{\lim_{n \to \infty} u_n}$ and $\displaystyle{\lim_{n \to \infty} m_n}$ exist, are equal, and represent the maximum value of the function $f$ on the interval $[a,b]$. If $M=\displaystyle{\lim_{n \to \infty} c_n}$ and $c_n=\displaystyle{\frac{m_n+u_n}{2}}$, then \begin{equation}\label{eq:2} |M-c_n|\leq 2^{-(n+1)}(u_0-m_0) \end{equation} \begin{proof} The proof is explained in the discussion above.
\end{proof} {\bf Proposition 3.2.} The number of iterations $n$ needed to achieve a given error (or tolerance) $\epsilon$ satisfies \begin{equation*} n\geq \log_{2}{\frac{u_0-m_0}{\epsilon}}-1 = \frac{\log {(u_0-m_0)} - \log \epsilon}{\log 2} - 1 \end{equation*} \begin{proof} By Inequality (\ref{eq:2}), we have \begin{equation*} |M-c_n|\leq 2^{-(n+1)}(u_0-m_0) \end{equation*} We want $\displaystyle{2^{-(n+1)}(u_0-m_0)\leq \epsilon}$. Solving this inequality for $n$, we obtain \begin{equation*} n\geq \log_{2}{\frac{u_0-m_0}{\epsilon}}-1 = \frac{\log {(u_0-m_0)} - \log \epsilon}{\log 2} - 1 \end{equation*} \end{proof} {\bf Proposition 3.3.} The order of convergence of our algorithm is $1$. That is, the algorithm converges linearly. \begin{proof} Taking the bound $2^{-(n+1)}(u_0-m_0)$ of \eqref{eq:2} as the measure of the error at the $n$-th step, we compute \begin{equation*} \lim_{n \to \infty} \frac{|M-c_{n+1}|}{|M-c_n|^p}=\lim_{n \to \infty}\frac{2^{-(n+2)}(u_0-m_0)}{{[2^{-(n+1)}(u_0-m_0)]}^p}=\lim_{n \to \infty}\frac{1}{2}{\left( \frac{2^{n+1}}{u_0-m_0} \right)}^{p-1} \end{equation*} This limit is finite and nonzero only when $p=1$, in which case it equals $\frac{1}{2}$. Thus we can conclude that the order of convergence $p$ is $1$ and the error constant $C$ is $\frac{1}{2}$. \end{proof} {\bf Corollary 3.4.} The algorithm can be used to approximate the minimum value of a function $f$ which is continuous on the interval $[a,b]$. \begin{proof} If we approximate the maximum value of the function $y=-f(x)$ on the interval $[a,b]$ and change the sign of the approximated maximum value, then we obtain the minimum value of $f$. \end{proof} \section{Pros and Cons of the Algorithm} There are many algorithms and methods for unconstrained optimization. However, most of those methods and algorithms require differentiability. Our algorithm does not require differentiability; thus, it can be applied to any continuous function.
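To make the construction concrete, here is a minimal Python sketch of the algorithm. The function names and the sampling-based solvability test are our own simplification for illustration; the appendix gives a Matlab implementation that uses a symbolic solver instead.

```python
# Illustrative sketch of the algorithm in Python.  The solvability test
# "does f(x) = c have a solution in [a,b]?" is approximated here by dense
# sampling plus the Intermediate-Value Theorem; deciding solvability
# exactly is the hard part of the method in general, as discussed below.

def has_solution(f, c, a, b, samples=10001):
    """Approximate test of whether f(x) = c is solvable in [a,b]."""
    xs = [a + (b - a) * i / (samples - 1) for i in range(samples)]
    values = [f(x) for x in xs]
    # If c lies between the sampled minimum and maximum, continuity
    # guarantees a solution between two sample points.
    return min(values) <= c <= max(values)

def approximate_max(f, a, b, m, u, tol=1e-7, n_max=100):
    """Bisect on the range of f: m must be attained, u an upper bound."""
    for _ in range(n_max):
        c = (m + u) / 2
        if (u - m) / 2 <= tol:
            break
        if has_solution(f, c, a, b):
            m = c   # c is attained, so the maximum is at least c
        else:
            u = c   # c is not attained, so c is a new upper bound
    return (m + u) / 2

# f(x) = x(1 - x) on [0,1]: maximum value 1/4, attained at x = 1/2
M = approximate_max(lambda x: x * (1 - x), 0.0, 1.0, m=0.0, u=1.0)
print(M)  # approximately 0.25
```

Note that the bisection happens in the *range* of $f$, not in its domain, which is exactly why no derivative information is needed.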
What is more, we can relax the continuity requirement, because all we need is the Darboux property, which can be stated as \begin{center} whenever $f(x_1)$ and $f(x_2)$ are different and $u$ is any number between them, then $f(x) = u$ for at least one $x$ between $x_1$ and $x_2$ \end{center} Since continuous functions satisfy the Darboux property, we develop our algorithm over continuous functions. However, there are functions that are discontinuous at some points to which the algorithm is still applicable. For example, consider the following piecewise function. \[ f(x) = \begin{cases} x+5 & \text{if \ $-4\leq x \leq -1$} \\ 4 & \text{if \ $-1 < x < 0$} \\ 3 & \text{if \ \ \ $0 \leq x \leq 1$} \end{cases} \] \begin{figure}[h] \centering \includegraphics[width=82mm,scale=0.5]{func.png} \caption{Graph of the piecewise function $f$} \label{graph} \end{figure} The function is clearly discontinuous at some points, yet every value between its minimum and its maximum is attained somewhere in $[-4,1]$, which is all the algorithm requires. Therefore, our algorithm is still valid for the piecewise function $f$. Another advantage of the algorithm is that it always converges, without any restriction on the initial starting points. Now let us look at some problematic aspects of the algorithm. The most restrictive condition is that deciding whether the equation $f(x)=c$ has a solution may not be easy. If we encounter a situation in which we cannot decide whether the equation $f(x)=c$ has a solution, then we cannot move on to the next step. Another issue is that there is no stopping criterion for detecting that the exact maximum value has been found, unless the given function is differentiable. Thus, we need to set an error tolerance to stop the algorithm. If the given function is differentiable, however, we can detect that the algorithm has found the exact maximum value; this topic will be discussed in the next section. The last handicap is that the order of convergence of the algorithm is $1$; therefore, the algorithm converges slowly.
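The slow convergence is easy to quantify: by Proposition 3.2, each halving of the tolerance costs exactly one extra iteration. A quick check of the bound (the numbers here are purely illustrative):

```python
import math

def iterations_needed(bracket, eps):
    """Smallest n with 2**-(n+1) * bracket <= eps (Proposition 3.2)."""
    return math.ceil(math.log2(bracket / eps) - 1)

# Linear convergence in practice: each extra correct binary digit of the
# maximum costs one more iteration, so halving the tolerance adds one step.
print(iterations_needed(1.0, 1e-3))   # 9
print(iterations_needed(1.0, 5e-4))   # 10
```

So, starting from a bracket $u_0-m_0=1$, roughly $3.3$ iterations are needed per decimal digit of accuracy.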
\section{Analyzing Differentiable Functions} Let $f$ be a differentiable function on the interval $[a,b]$. In the previous section, we emphasized that, in general, there is no stopping criterion for detecting that the exact maximum value has been found. However, if the function is differentiable, we can set such a stopping criterion. If a function $f$ has a global or local maximum value at an interior point $x=c$, then $f'(c)$ must be equal to $0$. So if $f(x)=c$ has a solution in the interval $[a,b]$ and we check whether the line $y=c$ is tangent to the graph of $f$ (that is, whether $f'$ vanishes at the solution), then we can decide whether we have found the exact maximum value. \section{Further Works} There is some further work that would improve the algorithm. Since I did not have enough time, I could not work on these subjects; however, anyone who is interested in this topic can study the following: \begin{enumerate} \item It seems that the algorithm can be generalized to higher dimensions. Assume that $f:D\rightarrow \mathbb{R}$ is a function from $D\subseteq \mathbb{R} ^n$ to $\mathbb{R}$, where $D$ is a bounded, closed, and path-connected subset of $\mathbb{R} ^n$. Then we can apply the Intermediate-Value Theorem and the Extreme Value Theorem to the function $f$, and we may obtain an analogous version of the algorithm. \item The algorithm works for unconstrained optimization. However, it may be modified in such a way that the algorithm also works for constrained optimization. \item If the given function belongs to a special class of functions (e.g.\ differentiable, convex), then the algorithm may be modified to work more efficiently. \item If someone needs just an integer-valued output, then the algorithm may be modified to produce the largest possible integer output. \end{enumerate} \newpage \section{References} [1]. Kenneth A. Ross. \emph{Elementary Analysis}. 2nd Edition. 2013. Springer. New York. p 57. [2]. George B. Thomas, Joel Hass, Christopher Heil, and Maurice D. Weir. \emph{Thomas' Calculus: Early Transcendentals}. 14th Edition. 2018.
Pearson, Boston, p 70. [3]. Serge Lang. \emph{Undergraduate Analysis}. 2nd Edition. 2005. Springer. New York. p 62. [4]. Serge Lang. \emph{Undergraduate Analysis}. 2nd Edition. 2005. Springer. New York. p 61. [5]. David Kincaid, Ward Cheney. \emph{Numerical Analysis: Mathematics of Scientific Computing}. 3rd Revised Edition. 2010. American Mathematical Society. Rhode Island. pp 74-78. \newpage \section{Appendix} {\bf An example Matlab code which implements our algorithm:} \begin{lstlisting}[frame=single]
syms x
eqn = input('Enter a continuous function as a function of x: ');
a = input('Enter the lower bound of the interval [a,b]: ');
b = input('Enter the upper bound of the interval [a,b]: ');
if b<=a
    fprintf('Error: b must be bigger than a\n')
    return
end
u = input('Enter a number u such that f(x)=u has a solution in [a,b]: ');
k = solve(eqn==u,x,'Real',true);
z = 0;
if isempty(k)
    fprintf('Error: f(x)=u must have a solution in the interval [a,b]\n')
    return
end
for i=1:max(size(k))
    if k(i)>=a && k(i)<=b
        z = 1;
    end
end
if z==0
    fprintf('Error: f(x)=u must have a solution in the interval [a,b]\n')
    return
end
v = input('Enter a number v bigger than u such that f(x)=v has no solution: ');
if v<=u
    fprintf('Error: v must be bigger than u\n')
    return
end
l = solve(eqn==v,x,'Real',true);
o = 0;
if ~isempty(l)
    for i=1:max(size(l))
        if l(i)>=a && l(i)<=b
            o = 1;
        end
    end
end
if o==1
    fprintf('Error: f(x)=v must have no solution in the interval [a,b]\n')
    return
end
nmax = input('Enter the maximum number of iterations: ');
e = input('Enter the maximum error: ');
if nmax<0 || e<0
    fprintf('Error: Enter a positive number for the number of iterations and the error\n')
    return
end
n = 1;
while n<=nmax
    p = (u+v)/2;
    k = solve(eqn==p,x,'Real',true);
    z = 0;
    if ~isempty(k)
        for i=1:max(size(k))
            if k(i)>=a && k(i)<=b
                z = 1;   % f(x)=p is attained in [a,b]
            end
        end
    end
    if z==1
        u = p;           % p is attained: new lower estimate
    else
        v = p;           % p is not attained: new upper bound
    end
    if v-u<e
        fprintf('approximate maximum is %f\n',(u+v)/2)
        return
    end
    n = n+1;
end
fprintf('Error: please enter a bigger maximum number of iterations to approximate the maximum with an error less than %f\n',e)
\end{lstlisting} \end{document}
https://arxiv.org/abs/1712.06896
Integrable geodesic flows on tubular sub-manifolds
In this paper we construct a new class of surfaces whose geodesic flow is integrable (in the sense of Liouville). We do so by generalizing the notion of tubes about curves to 3-dimensional manifolds, and using Jacobi fields we derive conditions under which the metric of the generalized tubular sub-manifold admits an ignorable coordinate. Some examples are given, demonstrating that these special surfaces can be quite elaborate and varied.
\section{Introduction} Let $(\mathcal{M}^n,g)$ be a Riemannian manifold. We may write the geodesic equations as a Hamiltonian system on the cotangent bundle $T^*\mathcal{M}$. The integral curves of this vector field constitute a flow, called the geodesic flow. If this Hamiltonian system has $n$ first integrals, functionally independent and in involution, we say the geodesic flow is Liouville integrable (see, for example, \cite{kling},\cite{josesaletan},\cite{sakai},\cite{docarmoriemm}). Throughout this paper we will describe this property with phrases such as ``the manifold has integrable geodesic flow'', or ``the manifold is integrable''. The list of known surfaces and manifolds with integrable geodesic flow is surprisingly short. The classic examples are surfaces of revolution and ellipsoids (see \cite{kling} for a detailed treatment or \cite{tabach} for an unusual approach), both known for many years (18th and 19th centuries respectively). The case of ellipsoids has been extended to quadratic manifolds in general \cite{tabach}, and there has been a recent series of new examples of integrable manifolds in the form of Lie groups (see \cite{bolsinov},\cite{bols2} for reviews and references). On the other hand, in some recent papers \cite{TWspherical},\cite{TWmonge},\cite{TWetds} the author has shown that there are classes of surfaces and manifolds whose geodesic flow is {\it not} integrable. It would seem that surfaces/manifolds with integrable geodesic flow are indeed very rare and special, and often those that are known are not of a type which can be visualized easily \cite{dullin}. In this paper we will add to the list of integrable surfaces, by first considering the geodesic flow on tubes about curves and then generalizing this notion.
We will show that in the examples we construct the Hamiltonian has an ignorable coordinate and hence by Noether's theorem \cite{josesaletan} admits a linear integral; together with the Hamiltonian itself this means the surface has integrable geodesic flow. We begin with an observation about curves in $\mathbb{R}^3$. Let $\bmt{\gamma}:I\subset\mathbb{R}\to\mathbb{R}^3$ be a simple smooth regular curve, and let $\{\bmt{T},\bmt{N},\bmt{B}\}$ be the associated Frenet frame. The tube of radius $\rho_0$ about $\bmt{\gamma}$ has parameterisation \[ \bmt{\beta}:I\times\mathbb{S}^1\to\mathbb{R}^3:\bmt{\beta}(s,\psi;\rho_0)=\bmt{\gamma}(s)+\rho_0\cos\psi\bmt{N}(s)+\rho_0\sin\psi\bmt{B}(s).\] The line element is found to be \begin{equation} d\varsigma^2=\big[(1-k_1(s)\rho_0\cos\psi)^2+k_2(s)^2\rho_0^2\big]ds^2+2k_2(s)\rho_0^2 dsd\psi+\rho_0^2d\psi^2, \label{ler3} \end{equation} where $k_1(s),k_2(s)$ are the curvature scalars (curvature and torsion) associated with $\bmt{\gamma}$, and we restrict $\rho_0$ such that the tube is a well-defined surface (see later). From the line element we can find the Hamiltonian for the geodesic flow, $\mathcal{H}=\tfrac{1}{2}g^{ij}p_ip_j$, and we observe that $s$ only enters the Hamiltonian explicitly via the curvature scalars $k_{1,2}$. Thus if $k_{1,2}$ are constants, $s$ will be an ignorable coordinate leading to a linear integral, $p_s$, and the geodesic flow will be integrable. Hence the geodesic flow on the tube about a helix in $\mathbb{R}^3$ is integrable. Let us state this explicitly: the tube around a curve in $\mathbb{R}^3$ has integrable geodesic flow if the curve has constant curvature scalars. There are a number of ways of generalizing this statement which spring to mind, and some are discussed in the conclusions. 
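The line element \eqref{ler3} and the constancy of $k_{1,2}$ for a helix can be checked symbolically. The following sketch uses Python's sympy with a concrete helix ($a=2$, $b=1$, chosen purely for illustration), building $\bmt{N},\bmt{B}$ from the Frenet formulas and computing the first fundamental form of the tube:

```python
import sympy as sp

s, psi, rho = sp.symbols('s psi rho', positive=True)
a, b = sp.Integer(2), sp.Integer(1)   # a concrete helix, for illustration
w = 1 / sp.sqrt(a**2 + b**2)          # makes gamma unit speed

# Arc-length parameterized circular helix in R^3
gamma = sp.Matrix([a*sp.cos(w*s), a*sp.sin(w*s), b*w*s])

T = gamma.diff(s)
Tp = T.diff(s)
k1 = sp.sqrt(sp.trigsimp(Tp.dot(Tp)))   # curvature: a/(a^2+b^2) = 2/5
N = sp.simplify(Tp / k1)
B = T.cross(N)
k2 = sp.trigsimp(-B.diff(s).dot(N))     # torsion: b/(a^2+b^2) = 1/5

# Tube about the helix and its first fundamental form
beta = gamma + rho*sp.cos(psi)*N + rho*sp.sin(psi)*B
E = beta.diff(s).dot(beta.diff(s))
F = beta.diff(s).dot(beta.diff(psi))
G = beta.diff(psi).dot(beta.diff(psi))

# The coefficients agree with the line element above, and s is absent:
assert sp.simplify(E - ((1 - k1*rho*sp.cos(psi))**2 + k2**2*rho**2)) == 0
assert sp.simplify(F - k2*rho**2) == 0
assert sp.simplify(G - rho**2) == 0
print(k1, k2)
```

The asserts confirm that $s$ enters the metric only through $k_1$ and $k_2$, which for the helix are constants.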
In this work we will focus on the following: suppose the curve is instead embedded in some manifold $\mathcal{M}$ of dimension 3, and the curve has constant curvature scalars with respect to $\mathcal{M}$; is the geodesic flow on the tube about this curve integrable? We will see that the answer is yes when $\mathcal{M}$ is a space form, but also in the case that $\mathcal{M}$ is of a more general class and {\it not} of constant curvature. First some background theory which will be useful in what follows. The Frenet-Serret equations are generalized to curves in Riemannian manifolds in the following way: let $\bmt{\gamma}=\bmt{\gamma}(s):I\subset\mathbb{R}\to\mathcal{M}$ be a simple smooth regular curve parameterized by arc-length. Letting $\bmt{T}=\dot{\bmt{\gamma}}$ and $\{\bmt{T},\bmt{N},\bmt{B}\}$ be an orthonormal frame in $T_{\bmt{\gamma}} \mathcal{M}$, the frame evolves along $\bmt{\gamma}$ according to (\cite{Gutkin},\cite{Barros},\cite{Tamura}): \[ \frac{D}{ds}\begin{pmatrix} \bmt{T}\\ \bmt{N}\\ \bmt{B} \end{pmatrix}=\begin{pmatrix} 0 & k_1(s) & 0 \\ -k_1(s) & 0 & k_2(s) \\ 0 & -k_2(s) & 0 \end{pmatrix}\begin{pmatrix} \bmt{T}\\ \bmt{N}\\ \bmt{B} \end{pmatrix}. \] Here and throughout we will let $D/ds$ denote the covariant derivative with respect to $\dot{\bmt{\gamma}}$, as opposed to $D$ in \cite{Gutkin}, $\bar{\nabla}_{\bmt{T}}$ in \cite{Barros} and $D_{\dot{\bmt{\gamma}}}$ in \cite{Tamura}. $k_1$ is often known as the `geodesic curvature', however we will simply refer to $k_{1,2}$ as the `curvature scalars'. If $k_{1,2}$ are constants we will refer to $\bmt{\gamma}$ as a `constant curvature curve' (as opposed to `helix', `proper helix' etc. \cite{Barros},\cite{Tamura}). It will be convenient to denote by $T_{\bmt{\gamma}}^{\perp}\mathcal{M}$ the orthogonal complement of $\dot{\bmt{\gamma}}$ in $T_{\bmt{\gamma}}\mathcal{M}$ (i.e.\ span$(\bmt{N},\bmt{B})$). 
In Euclidean space, the tube about a curve $\bmt{\gamma}$ is the locus of circles of fixed radius orthogonal to $\bmt{\gamma}$. However, in $\mathcal{M}$ the tube about $\bmt{\gamma}$ is the locus of {\it geodesic} circles whose radial geodesics have tangent vectors in $T_{\bmt{\gamma}}^{\perp}\mathcal{M}$ at $\bmt{\gamma}$. As such, a parameterisation of the tube of radius $\rho_0$ about $\bmt{\gamma}\in\mathcal{M}$ would be \[ \bmt{\beta}:I\times\mathbb{S}^1\to\mathcal{M}:\bmt{\beta}(s,\psi;\rho_0)=\textnormal{exp}_{\bmt{\gamma}(s)}\big(\rho_0\cos\psi\bmt{N}(s)+\rho_0\sin\psi\bmt{B}(s)\big), \] where $\textnormal{exp}_p:T_p\mathcal{M}\to\mathcal{M}$ denotes the exponential map defined as follows: let $\bmt{\Gamma}$ be the unique unit-speed geodesic through $p$ with tangent vector $\bmt{v}/\|\bmt{v}\|$ at $p$; then $\textnormal{exp}_p(\bmt{v})=\bmt{\Gamma}(\|\bmt{v}\|)$. As alluded to previously, we restrict $\bmt{\gamma}$ and $\rho_0$ as required in order for the tube to be well defined, by avoiding either the curvature of $\bmt{\gamma}$ being too high at a certain point, reaching conjugate/focal points along radial geodesics (see \cite{sakai},\cite{TWbif}), or $\bmt{\gamma}$ passing close to itself leading to self-intersections. The question we wish to address is: for what $\mathcal{M}$ does the tube $\bmt{\beta}(s,\psi;\rho_0)$ have integrable geodesic flow? While tubes as sub-manifolds have been much studied previously (see for example \cite{gray}), it is typically in the context of the area and volume of the tube, and not in terms of the geodesic flow {\it on} the tube. In the following section we formulate the problem using Jacobi fields and prove that if $\mathcal{M}$ is a space form then the geodesic flow on the tube about a constant curvature curve is integrable; we also give some examples. An advantage to formulating the problem in terms of Jacobi fields is that we lay the foundations for further study of evolutes, focal surfaces, etc.\ (see \cite{TWbif} for example).
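As a concrete illustration of the exponential map: on the round unit sphere $\mathbb{S}^n\subset\mathbb{R}^{n+1}$, unit-speed geodesics are great circles solving $x''=-x$, so $\textnormal{exp}_p(\rho\bmt{v})=\cos(\rho)\,p+\sin(\rho)\,\bmt{v}$ for a unit $\bmt{v}$ orthogonal to $p$. The following numerical sketch (our own check) verifies this closed form on $\mathbb{S}^3$ by integrating the geodesic equation directly:

```python
import math

# On the unit sphere S^n in R^(n+1), unit-speed geodesics are great
# circles solving x'' = -x, so exp_p(rho*v) = cos(rho)*p + sin(rho)*v
# for a unit vector v orthogonal to p.  We compare this closed form
# with a direct Runge-Kutta integration of the geodesic ODE on S^3.

def exp_sphere(p, v, rho):
    """Closed-form exponential map of the round unit sphere."""
    return [math.cos(rho)*pi + math.sin(rho)*vi for pi, vi in zip(p, v)]

def integrate_geodesic(p, v, rho, steps=2000):
    """RK4 integration of x'' = -x from x(0)=p, x'(0)=v up to t=rho."""
    x, u = list(p), list(v)
    h = rho / steps

    def deriv(x, u):                      # state derivative: (x', u')
        return u, [-xi for xi in x]

    for _ in range(steps):
        k1x, k1u = deriv(x, u)
        k2x, k2u = deriv([xi + h/2*d for xi, d in zip(x, k1x)],
                         [ui + h/2*d for ui, d in zip(u, k1u)])
        k3x, k3u = deriv([xi + h/2*d for xi, d in zip(x, k2x)],
                         [ui + h/2*d for ui, d in zip(u, k2u)])
        k4x, k4u = deriv([xi + h*d for xi, d in zip(x, k3x)],
                         [ui + h*d for ui, d in zip(u, k3u)])
        x = [xi + h/6*(d1 + 2*d2 + 2*d3 + d4)
             for xi, d1, d2, d3, d4 in zip(x, k1x, k2x, k3x, k4x)]
        u = [ui + h/6*(d1 + 2*d2 + 2*d3 + d4)
             for ui, d1, d2, d3, d4 in zip(u, k1u, k2u, k3u, k4u)]
    return x

p = [1.0, 0.0, 0.0, 0.0]    # a point of S^3 in R^4
v = [0.0, 0.6, 0.8, 0.0]    # unit tangent vector orthogonal to p
rho = 1.3
err = max(abs(c - n) for c, n in
          zip(exp_sphere(p, v, rho), integrate_geodesic(p, v, rho)))
print(err)  # very small
```

In a general manifold no such closed form exists and the radial geodesics must be obtained from the geodesic ODE, which is exactly where the Jacobi field formulation of the next section earns its keep.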
In Section 3 we consider curves in more general manifolds not of constant curvature, and find some conditions for the geodesic flow on tubular sub-manifolds to be integrable; again we give some examples. In each case considered we show the existence of an ignorable coordinate, i.e. the metric admits a Killing vector field and the sub-manifold is therefore invariant under a 1-parameter group of isometries; integrability follows naturally. Section 4 contains some conclusions and further comments. \section{Curves in space forms} Consider the following parameterisation of the neighbourhood of $\bmt{\gamma}$: \[ \bmt{\beta}:I\times\mathbb{S}^1\times \mathbb{R}^+\to\mathcal{M}:\bmt{\beta}(s,\psi,\rho)=\textnormal{exp}_{\bmt{\gamma}(s)}\big(\rho\cos\psi\bmt{N}+\rho\sin\psi\bmt{B}\big) \] with $\bmt{\beta}(\rho=0)=\bmt{\gamma}$ and the appropriate restriction on $\rho$ as discussed previously. We could describe this as a `geodesic cylindrical coordinate system', with $\bmt{\gamma}$ as the axis and $\rho=\rho_0$ defining the tube of radius $\rho_0$ about $\bmt{\gamma}$. We will show that under certain conditions the line element on $\rho=\rho_0$ is independent of $s$; this will imply the geodesic flow on $\rho=\rho_0$ is integrable. For this we require the pairwise inner products of $d\bmt{\beta}/ds$ and $d\bmt{\beta}/d\psi$. \begin{figure} \scalebox{0.8}{ \begin{tikzpicture} \clip (2,0.6) rectangle (14,8); \draw[thick] (1,0) .. controls (9,3) and (1,6) .. (3,8); \draw (4,4.8) arc (130:111:7); \draw (4.38,4.15) arc (120:100:7); \draw (4.6,3.5) arc (110:91:7); \draw[thick] (8,0) .. controls (16,3) and (8,6) .. 
(10,8); \draw (5,2) node {$\bmt{\gamma}(s)$}; \draw (4.1,3.3) node {\footnotesize{$s_{-1}$}}; \draw (4,4) node {\footnotesize{$s_0$}}; \draw (3.7,4.6) node {\footnotesize{$s_1$}}; \draw (8,5) node {$\bmt{\Gamma}(\rho;s_0,\psi_0)$}; \draw[->,thick] (4.38,4.15) -- (3.5,6); \draw[->,thick] (6.65,4.98) -- (6,6.85); \draw (4.5,6) node {$\bmt{T}(s_0)$}; \draw (7.6,6.8) node {$\bmt{J}_s(\rho;s_0,\psi_0)$}; \draw (11.38,4.15) arc (160:135:2.5); \draw (11.38,4.15) arc (128:106.5:6); \draw (11.38,4.15) arc (90:72.5:5); \draw[dashed,rotate around={20:(11.38,4.15)}] (11,3.3) arc (-100:100:2.4cm and 0.8cm); \draw[->, thick] (13.4,5.175) -- (11.5,5.9); \draw (12.7,6.5) node {$\bmt{J}_\psi(\rho;s_0,\psi_0)$}; \draw (13.2,4.8) node {\footnotesize{$\psi_0$}}; \draw (11.8,5.3) node {\footnotesize{$\psi_1$}}; \draw (13.2,3.7) node {\footnotesize{$\psi_{-1}$}}; \draw[->,thick] (4.26,2) -- (4.325,2.1); \draw[->,thick] (11.26,2) -- (11.325,2.1); \draw[->,thick] (2.825,6.6) -- (2.795,6.65); \draw[->,thick] (9.825,6.6) -- (9.795,6.65); \draw[dashed] (7,3) .. controls (6.88,5) and (6.7,5) .. (5.4,7); \draw[->] (7.5,4.7) arc (-30:-150:0.77); \end{tikzpicture} } \caption{The radial geodesics along $\bmt{\gamma}$ give two variations through geodesics by letting either $s$ or $\psi$ vary; the variation fields $\bmt{J}_s$ and $\bmt{J}_{\psi}$ are shown.}\label{varfields} \end{figure} If we fix $s=s_0,\psi=\psi_0$ and consider the radial geodesic \[ \bmt{\Gamma}(\rho;s_0,\psi_0)=\textnormal{exp}_{\bmt{\gamma}(s_0)}\big(\rho\cos\psi_0\bmt{N}(s_0)+\rho\sin\psi_0\bmt{B}(s_0)\big), \] we may describe two variation fields along this geodesic (see Figure \ref{varfields}) \[ \bmt{J}_s(\rho)=\frac{\partial \bmt{\beta}}{\partial s},\quad\textrm{and}\quad\bmt{J}_{\psi}(\rho)=\frac{\partial \bmt{\beta}}{\partial \psi} \] where we will suppress cumbersome notation such as $\bmt{J}_s(\rho;s_0,\psi_0)$.
Both $\bmt{J}_s$ and $\bmt{J}_{\psi}$ solve the Jacobi equation (here and throughout a prime denotes $d/d\rho$) \begin{equation} \frac{D^2}{d\rho^2}\bmt{J}_*=R(\bmt{J}_*,\bmt{\Gamma}')\bmt{\Gamma}',\qquad (*=s,\psi), \label{jaceq}\end{equation} as indeed do all Jacobi fields; what distinguishes one Jacobi field from the next is the initial conditions. Focussing on $\bmt{J}_s$ first, we note \[ \bmt{J}_s(\rho=0)=\frac{\partial \bmt{\beta}}{\partial s}(\rho=0)=\frac{\partial}{\partial s}\big( \bmt{\beta}(\rho=0)\big)=\frac{\partial}{\partial s}\big(\bmt{\gamma}\big)=\bmt{T}. \] Also (letting $|_0$ denote $\rho=0$), \[ \left.\frac{D\bmt{J}_s}{d\rho}\right|_0=\left.\left[\frac{D}{d\rho}\left(\frac{\partial \bmt{\beta}}{\partial s}\right)\right]\right|_0=\left.\left[\frac{D}{ds}\left(\frac{\partial \bmt{\beta}}{\partial \rho}\right)\right]\right|_0=\frac{D}{ds}\left[\left.\left(\frac{\partial \bmt{\beta}}{\partial \rho}\right)\right|_0\right]. \] Remembering that $\bmt{\beta}(s,\psi,\rho)$ denotes the unit speed geodesic, parameterized by $\rho$, passing through $\bmt{\gamma}(s)$ with tangent vector $\cos\psi\bmt{N}(s)+\sin\psi\bmt{B}(s)$, we have \[ \frac{D}{ds}\left[\left.\left(\frac{\partial \bmt{\beta}}{\partial \rho}\right)\right|_0\right]=\frac{D}{ds}\left(\cos\psi\bmt{N}+\sin\psi\bmt{B}\right)=\cos\psi(-k_1\bmt{T}+k_2\bmt{B})+\sin\psi(-k_2\bmt{N}). \] In summary both $\bmt{J}_s$ and $\bmt{J}_\psi$ solve the Jacobi equation \eqref{jaceq} with initial conditions \begin{equation} \bmt{J}_s(0)=\bmt{T},\quad \frac{D\bmt{J}_s}{d\rho}(0)=-k_1\cos\psi\bmt{T}-k_2\sin\psi\bmt{N}+k_2\cos\psi\bmt{B}, \label{jsdata} \end{equation} and \begin{equation} \bmt{J}_\psi(0)=\bmt{0},\quad \frac{D\bmt{J}_\psi}{d\rho}(0)=-\sin\psi\bmt{N}+\cos\psi\bmt{B}. \label{jpsidata} \end{equation} We are now ready to prove the first theorem. 
\begin{thm} Let $\bmt{\gamma}=\bmt{\gamma}(s):I\subset \mathbb{R}\to\mathcal{M}$ be a simple smooth regular curve in the 3-dimensional Riemannian manifold $\mathcal{M}$. If $\mathcal{M}$ is a space form $(\mathbb{R}^3/\mathbb{S}^3/\mathbb{H}^3)$ and $\bmt{\gamma}$ is a constant curvature curve, then the geodesic flow on the tube about $\bmt{\gamma}$ is integrable. \end{thm} \begin{proof} If $\mathcal{M}$ is a space form with constant sectional curvature $K_0$, $\bmt{\Gamma}$ is a unit speed geodesic of $\mathcal{M}$, and $\bmt{J}_*$ is a Jacobi field orthogonal to $\bmt{\Gamma}'$, then \cite{docarmoriemm} \[ R(\bmt{J}_*,\bmt{\Gamma}')\bmt{\Gamma}'=-K_0\bmt{J}_* \] and hence the Jacobi equation \eqref{jaceq} becomes \[ \frac{D^2\bmt{J}_*}{d\rho^2}=-K_0\bmt{J}_*. \] Given w.l.o.g.\ $K_0=-1,0,1$ we may simply solve this Jacobi equation for $\bmt{J}_s$ and $\bmt{J}_\psi$ with the data given in \eqref{jsdata},\eqref{jpsidata}, then take their pairwise inner products to find the line element on the tube $\rho=\rho_0$, which is \[ d\varsigma^2=\big[(F_0-k_1(s)G_0\cos\psi)^2+k_2(s)^2G_0^2\big]ds^2+2k_2(s)G_0^2 ds d\psi+G_0^2 d\psi^2 \] where \[ \left. \begin{array}{c} F_0=\cos(\rho_0),\ G_0=\sin(\rho_0) \\ F_0=1,\ G_0=\rho_0 \\ F_0=\cosh(\rho_0),\ G_0=\sinh(\rho_0) \end{array} \right\} \quad \textrm{if}\quad \left\{\begin{array} {c} K_0=1,\\ K_0=0, \\ K_0=-1. \end{array} \right. \] In each case the coordinate $s$ only enters the line element via the curvature scalars $k_1$ and $k_2$, and hence if $\bmt{\gamma}$ is a constant curvature curve $s$ will be an ignorable coordinate and the geodesic flow on the tube will be integrable. 
\end{proof} Before giving an example we can extend this result by observing that a `tube', defined as an equidistant surface, consists of a circle in $T_{\bmt{\gamma}}^{\perp}\mathcal{M}$ projected under exp into $\mathcal{M}$; however we can also consider the `generalized tube' where any simple closed curve, fixed w.r.t.\ $s$, in $T_{\bmt{\gamma}}^{\perp}\mathcal{M}$ is projected into $\mathcal{M}$. That is, if $(f(\psi),g(\psi))$ is a simple closed curve in the plane, then the generalized tube (intuitively understood as this simple closed curve being ``carried along'' by $\bmt{\gamma}$) also has integrable geodesic flow. \begin{corr} The generalized tube \[ \bmt{\beta}(s,\psi,\rho_0)=\textnormal{exp}_{\bmt{\gamma}(s)}\big(\rho_0 f(\psi)\bmt{N}(s)+\rho_0 g(\psi)\bmt{B}(s) \big) \] has integrable geodesic flow if $\bmt{\gamma}\subset\mathcal{M}$ has constant curvature scalars and $\mathcal{M}$ is $\mathbb{R}^3,\mathbb{S}^3$ or $\mathbb{H}^3$. \end{corr} We give an example. Using the Hopf parameterisation of $\mathbb{S}^3$, \[ \bmt{\sigma}(\eta,\theta,\phi)=(\sin\eta\cos\theta,\sin\eta\sin\theta,\cos\eta\cos\phi,\cos\eta\sin\phi), \] we consider the curve $(\eta,\theta,\phi)=(\eta_0,\alpha t,\beta t)$. We require $\alpha/\beta\in\mathbb{Q}$ (in order to avoid the tube self-intersecting for $t\in\mathbb{R}$), in which case $\bmt{\gamma}$ is a knot on the Clifford torus in $\mathbb{S}^3$. Note that $t$ is the arc-length only if $\sqrt{\alpha^2\sin^2\eta_0+\beta^2\cos^2\eta_0}\equiv p=1$. The curvature scalars are found to be (correcting \cite{Tamura}) \[ k_1=\frac{(\alpha^2-\beta^2)\sin\eta_0\cos\eta_0}{p^2}, \quad k_2=\frac{\alpha \beta}{p^2}. \] Note this curve has constant curvature scalars, and hence its tube has integrable geodesic flow.
In Fig \ref{stereo} we show the stereographic projection from $\mathbb{S}^3$ into $\mathbb{R}^3$ of the generalized tube with $f=(1+0.3\cos(3\psi))\cos\psi,\ g=(1+0.3\cos(3\psi))\sin\psi,\rho_0=0.2$ about the constant curvature curve described in the preceding paragraph with $\alpha=5,\beta=2,\eta_0=\pi/4$. \begin{figure}[h!] \begin{center} \includegraphics[width=0.75\textwidth]{3sphplot2}\caption{The stereographic projection from $\mathbb{S}^3$ into $\mathbb{R}^3$ of a generalized tube about a constant curvature curve (see text for details). This surface has integrable geodesic flow.} \label{stereo} \end{center} \end{figure} \section{Curves in more general manifolds} We suspect that there may be manifolds more complicated than the space forms which admit integrable sub-manifolds in the form of (generalized) tubes about curves, if the curves take advantage of some symmetry in the manifold. Indeed this is the case, as we will show. \begin{thm} Let $\bmt{\gamma}=\bmt{\gamma}(s):I\subset \mathbb{R}\to\mathcal{M}$ be a simple smooth regular curve in the 3-dimensional Riemannian manifold $\mathcal{M}$. The neighbourhood of $\bmt{\gamma}\subset\mathcal{M}$ can be parameterized as follows: \[ \bmt{\beta}:I\times\mathbb{S}^1\times \mathbb{R}^+\to\mathcal{M}:\bmt{\beta}(s,\psi,\rho)=\textnormal{exp}_{\bmt{\gamma}(s)}\big(\rho\cos\psi\bmt{N}+\rho\sin\psi\bmt{B}\big) \] with $\bmt{\beta}(\rho=0)=\bmt{\gamma}$ and the appropriate restriction on $\rho$. If in the tubular neighbourhood of $\bmt{\gamma}$ the sectional curvatures are independent of $s$, and $\bmt{\gamma}$ is a constant curvature curve, then the tube $\bmt{\beta}(s,\psi;\rho_0)$ will have integrable geodesic flow.
\end{thm} \begin{proof} We begin by extending the $\bmt{T},\bmt{N},\bmt{B}$ frame into the neighbourhood of $\bmt{\gamma}$ by parallel transport along the radial geodesics $\bmt{\Gamma}$, i.e.\ define $(\tilde{\bmt{T}},\tilde{\bmt{N}},\tilde{\bmt{B}})(s,\psi,\rho)$ via \[ \nabla_{\bmt{\Gamma}'}(\tilde{\bmt{T}},\tilde{\bmt{N}},\tilde{\bmt{B}})\equiv\frac{D}{d\rho}(\tilde{\bmt{T}},\tilde{\bmt{N}},\tilde{\bmt{B}})=\bmt{0},\quad (\tilde{\bmt{T}},\tilde{\bmt{N}},\tilde{\bmt{B}})(\rho=0)=(\bmt{T},\bmt{N},\bmt{B}). \] Parallel transport preserves lengths and angles so $(\tilde{\bmt{T}},\tilde{\bmt{N}},\tilde{\bmt{B}})$ is an orthonormal basis in the neighbourhood of $\bmt{\gamma}$, and we can write \[ \bmt{J}_*=t_*\tilde{\bmt{T}}+n_*\tilde{\bmt{N}}+b_*\tilde{\bmt{B}} \] where everything depends on $s,\psi,\rho$, and the Jacobi equation separates into three ODE's: \begin{align*} t_*''+\langle R(\bmt{\Gamma}',\bmt{J}_*)\bmt{\Gamma}',\tilde{\bmt{T}}\rangle=0, \\ n_*''+\langle R(\bmt{\Gamma}',\bmt{J}_*)\bmt{\Gamma}',\tilde{\bmt{N}}\rangle=0, \\ b_*''+\langle R(\bmt{\Gamma}',\bmt{J}_*)\bmt{\Gamma}',\tilde{\bmt{B}}\rangle=0. \end{align*} Note $\bmt{\Gamma}'(s,\psi,\rho)=\cos\psi\tilde{\bmt{N}}+\sin\psi\tilde{\bmt{B}}$ (since $\nabla_{\bmt{\Gamma}'}(\cos\psi\tilde{\bmt{N}}+\sin\psi\tilde{\bmt{B}})=\bmt{0}$ and $\bmt{\Gamma}'(s,\psi,0)=\cos\psi\bmt{N}+\sin\psi\bmt{B}$), thus each of these equations has 12 terms. 
Many are zero however (using the symmetries of the Riemann tensor and \cite{sakai}), reducing to \begin{align} t_*''+t_*\cos^2\psi(\tilde{\bmt{N}},\tilde{\bmt{T}},\tilde{\bmt{N}},\tilde{\bmt{T}})+t_*\sin^2\psi(\tilde{\bmt{B}},\tilde{\bmt{T}},\tilde{\bmt{B}},\tilde{\bmt{T}})=0,\nonumber \\ n_*''+(\tilde{\bmt{N}},\tilde{\bmt{B}},\tilde{\bmt{N}},\tilde{\bmt{B}})[n_*\sin^2\psi-b_*\cos\psi\sin\psi]=0, \\ b_*''+(\tilde{\bmt{N}},\tilde{\bmt{B}},\tilde{\bmt{N}},\tilde{\bmt{B}})[b_*\cos^2\psi-n_*\cos\psi\sin\psi]=0,\nonumber \end{align} (using the notation for sectional curvature in \cite{docarmoriemm}) with initial data \begin{align*} t_s(0)=1,\ n_s(0)=0,\ b_s(0)=0,\\ t_s'(0)=-k_1(s)\cos\psi,\ n_s'(0)=-k_2(s)\sin\psi,\ b_s'(0)=k_2(s)\cos\psi, \end{align*} and \begin{align*} t_{\psi}(0)=0,\ n_{\psi}(0)=0,\ b_{\psi}(0)=0,\\ t_{\psi}'(0)=0,\ n_{\psi}'(0)=-\sin\psi,\ b_{\psi}'(0)=k_2(s)\cos\psi. \end{align*} In general, the coefficients of these differential equations depend on $s,\psi,\rho$, and so the solutions will depend on $s,\psi,\rho$ as well. However, if the sectional curvatures are independent of $s$, then the coefficients in these differential equations do not depend explicitly on $s$. Hence the dependence of $t_*$ etc.\ on $s$ only manifests itself via the initial conditions, which in turn only depend on $s$ via the curvature scalars $k_{1,2}$. Thus if these scalar functions are in fact constants, then $t_*$ etc.\ will not depend explicitly on $s$ and therefore $\langle\bmt{J}_s,\bmt{J}_s\rangle=t_s^2+n_s^2+b_s^2$ etc.\ will not depend on $s$. Hence $s$ will be an ignorable coordinate for the Hamiltonian function describing the geodesic flow on the tubes $\bmt{\beta}(s,\psi;\rho_0)$, which will therefore be integrable. \end{proof} \noindent We give an example. Consider the manifold (a degenerate 3-ellipsoid) \[ \frac{x_1^2}{a^2}+\frac{x_2^2}{a^2}+\frac{x_3^2}{b^2}+\frac{x_4^2}{b^2}=1.
\] Modifying the Hopf coordinates in the obvious way, the metric tensor is \[ diag\big(a^2\cos^2\eta+b^2\sin^2\eta,a^2\sin^2\eta,b^2\cos^2\eta \big) \] and the curve $(\eta,\theta,\phi)=(\eta_0,\alpha t,\beta t)$ has constant curvatures \[ k_1=\left(\frac{b^2\beta^2}{p^2}-\frac{a^2\alpha^2}{p^2}\right)\ \frac{\cos\eta_0\sin\eta_0}{q},\quad k_2=\frac{a b \alpha \beta}{p^2 q} \] and Frenet frame \[ \bmt{T}=(0,\alpha/p,\beta/p),\quad \bmt{N}=(1/q,0,0),\quad \bmt{B}=\left(0,\frac{b \beta\cos\eta_0}{a p},-\frac{a \alpha \tan\eta_0}{b p}\right), \] where $p=\sqrt{a^2\alpha^2\sin^2\eta_0+b^2\beta^2\cos^2\eta_0}$ and $q=\sqrt{a^2\cos^2\eta_0+b^2\sin^2\eta_0}$. Now to show that the tube around $(\eta,\theta,\phi)=(\eta_0,\alpha s/p,\beta s/p)$ is integrable we need to show that the sectional curvatures in the neighbourhood of $\bmt{\gamma}$ do not depend on $s$. The difficulty is that Theorem 2 is phrased in terms of the geodesic cylindrical coordinates $s,\psi,\rho$, whereas our manifold is parameterised by $\eta,\theta,\phi$ and the connection between these two coordinate systems is not known explicitly. However, we do know that if we let $R$ denote a generic component of the Riemann tensor, then \begin{equation} \frac{\partial R}{\partial s}=\frac{\partial R}{\partial \eta}\frac{\partial \eta}{\partial s}+\frac{\partial R}{\partial \theta}\frac{\partial \theta}{\partial s}+\frac{\partial R}{\partial \phi}\frac{\partial \phi}{\partial s}. \label{genR} \end{equation} The last two terms are zero as the metric tensor depends only on $\eta$. To show the first term is zero we consider $\partial \eta/\partial s$. 
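In fact the $\eta$-only dependence of the curvature data can be confirmed symbolically: every Christoffel symbol of the metric above involves $\eta$ alone, and only the symbols that appear in the radial geodesic system survive. The following sketch of this computation is ours, not part of the paper; it assumes the sympy library.

```python
import sympy as sp

eta, theta, phi = sp.symbols('eta theta phi')
a, b = sp.symbols('a b', positive=True)
coords = [eta, theta, phi]

# metric of the degenerate 3-ellipsoid in the modified Hopf coordinates
g = sp.diag(a**2 * sp.cos(eta)**2 + b**2 * sp.sin(eta)**2,
            a**2 * sp.sin(eta)**2,
            b**2 * sp.cos(eta)**2)
ginv = g.inv()

def christoffel(k, i, j):
    """Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})."""
    expr = sum(ginv[k, l] * (sp.diff(g[l, i], coords[j])
                             + sp.diff(g[l, j], coords[i])
                             - sp.diff(g[i, j], coords[l]))
               for l in range(3))
    return sp.simplify(expr / 2)

nonzero = set()
for k in range(3):
    for i in range(3):
        for j in range(3):
            G = christoffel(k, i, j)
            # no Christoffel symbol involves theta or phi
            assert not ({theta, phi} & G.free_symbols)
            if G != 0:
                nonzero.add((k + 1, i + 1, j + 1))  # 1-based indices, as in the text

# exactly the symbols of the radial geodesic system survive
assert nonzero == {(1, 1, 1), (1, 2, 2), (1, 3, 3),
                   (2, 1, 2), (2, 2, 1), (3, 1, 3), (3, 3, 1)}
```

Since the metric components contain no $\theta$ or $\phi$, the same is automatically true of the Riemann tensor built from these symbols, which is the fact used in the chain rule \eqref{genR} above.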
Suppose we take a point $s=s_0$ on $\bmt{\gamma}$ and consider the radial geodesic whose initial tangent vector makes the angle $\psi_0$ in the $\bmt{N}(s_0),\bmt{B}(s_0)$ plane; this geodesic will solve the initial value problem \[ \ddot{\eta}+\Gamma^1_{11}\dot{\eta}^2+\Gamma^1_{22}\dot{\theta}^2+\Gamma^1_{33}\dot{\phi}^2=0,\quad \ddot{\theta}+2\Gamma^2_{12}\dot{\eta}\dot{\theta}=0,\quad \ddot{\phi}+2\Gamma^3_{13}\dot{\eta}\dot{\phi}=0, \] with \begin{align*} \big(\eta,\theta,\phi,\dot{\eta},\dot{\theta},\dot{\phi}\big)(0)=\left(\eta_0,\frac{\alpha s_0}{p},\frac{\beta s_0}{p},\frac{\cos\psi_0}{q},\frac{\sin\psi_0 b \beta\cos\eta_0}{a p},-\frac{\sin\psi_0 a \alpha \tan\eta_0}{b p}\right). \end{align*} However, since the Christoffel symbols do not depend on $\theta,\phi$, we see the following four-dimensional first-order system decouples: \[ \dot{\eta}=u,\ \dot{u}=-\Gamma^1_{11}u^2-\Gamma^1_{22}v^2-\Gamma^1_{33}w^2,\ \dot{v}=-2\Gamma^2_{12}uv,\ \dot{w}=-2\Gamma^3_{13}uw, \] with \[ \eta(0)=\eta_0,\ u(0)=\frac{\cos\psi_0}{q},\ v(0)=\frac{\sin\psi_0 b \beta\cos\eta_0}{a p},\ w(0)=-\frac{\sin\psi_0 a \alpha \tan\eta_0}{b p}. \] This system is completely independent of $s$, as moving along $\bmt{\gamma}$ only changes $\theta(0),\phi(0)$, which are not part of this decoupled system. As such, the solutions for $(\eta,u,v,w)$ are independent of $s$. Since $\partial\eta/\partial s=0$, this means via \eqref{genR} that the elements of the Riemann tensor are independent of $s$ and hence the sectional curvatures in the neighbourhood of $\bmt{\gamma}$ are independent of $s$. Hence Theorem 2 implies the tube around this curve is integrable. \section{Conclusion} We have shown that there is a broad class of sub-manifolds (which could be described as `generalized tubes') whose geodesic flow is integrable in the sense of Liouville.
This adds to the very limited number of integrable surfaces known to date, and we can see from Figure \ref{stereo} that these sub-manifolds can be quite varied and elaborate. Moreover, our formulation in terms of Jacobi fields provides a foundation for further study of caustics and envelopes in curved manifolds. Our results arose out of generalizing to $\mathcal{M}$ the following observation in $\mathbb{R}^3$: the tube around a curve in $\mathbb{R}^3$ is integrable if the curve has constant curvature scalars. As mentioned in the Introduction there are other possibilities in generalizing this result. First, we could consider changing the previous statement to: the tube around a curve in $\mathbb{R}^3$ is integrable {\it if and only if} the curve has constant curvature scalars. An obvious place to begin would be to consider the tube around an ellipse. Rigorous proofs of non-integrability using differential Galois theory could be attempted as in \cite{TWspherical},\cite{TWmonge},\cite{TWetds}; however, strong evidence of non-integrability can be provided much more quickly by the method of Poincar\'{e} section (see Appendix A). While it seems very likely the ``if and only if'' statement is true, it would be very hard to prove: there is no topological obstruction to integrability (the tubes are either cylinders or tori) and as such the geometry of the tube would need to be specified exactly (or at best within a large family of curves with parameters). A second generalization would be to consider curves in $\mathbb{R}^n$, and construct the following manifold: at each point on the curve take the orthogonal complement to the tangent vector and place an $(n-2)$-dimensional sphere. This would generate an $(n-1)$-dimensional manifold, and we would suspect that this manifold might be integrable if the $n-1$ curvature scalars associated with the curve are constants.
This is however too much to ask: we find that again $s$ is an ignorable coordinate leading to an integral of the motion, which together with the ``energy'' gives two integrals, but for curves in $\mathbb{R}^4$ or above this would not be enough to imply integrability. Again, we offer numerical evidence: for the 3-dimensional manifold around a constant curvature curve in $\mathbb{R}^4$, we use the integral $p_s$ to reduce the geodesic flow to a 2-dimensional Hamiltonian system and take a Poincar\'{e} section. The results are very similar to those of Appendix A and will not be shown. Nonetheless there is some numerical evidence to suggest that there may be families of integrable manifolds in this broad class, and this may be worth further investigation.
https://arxiv.org/abs/1712.06896
Integrable geodesic flows on tubular sub-manifolds
In this paper we construct a new class of surfaces whose geodesic flow is integrable (in the sense of Liouville). We do so by generalizing the notion of tubes about curves to 3-dimensional manifolds, and using Jacobi fields we derive conditions under which the metric of the generalized tubular sub-manifold admits an ignorable coordinate. Some examples are given, demonstrating that these special surfaces can be quite elaborate and varied.
https://arxiv.org/abs/2003.08190
Random triangles on flat tori
Inspired by classical puzzles in geometry that ask about probabilities of geometric phenomena, we give an explicit formula for the probability that a random triangle on a flat torus is homotopically trivial. Our main tool for this computation involves reducing the problem to a new invariant of measurable sets in the plane that is unchanged under area-preserving affine transformations. Our results show that this probability is minimized at all rectangular tori and maximized at the regular hexagonal torus.
\section{Introduction} A classical problem, recorded as problem 58 in an 1893 book of puzzles by Charles Dodgson \cite{Dodgson}, is to determine the probability that a triangle on the plane is obtuse. Since the plane has no finite measure invariant under translation, this problem is not well defined and several answers can be given. For triangles in finite area regions, this question has been studied by Kendall \cite{Kendall}, Guy \cite{Guy}, and Portnoy \cite{Portnoy}, among others. On the sphere the problem has been treated by Cooper-Eum-Li \cite{LiCooperEum}, and Isokawa looks at similar problems on the hyperbolic plane \cite{Isokawa}. Our interest lies in extending such probabilistic puzzles to finite area surfaces.\\ For us, a triangle is a choice of three points of a surface with vertices connected by shortest geodesic segments. The first thing to note is that such triangles fall into two classes: homotopically trivial and homotopically non-trivial. With this dichotomy in mind, we ask the following question for triangles on surfaces: \begin{itemize} \item What is the probability of a random triangle being homotopically trivial on a surface? \end{itemize} In this note, we answer this question for flat tori. Let $\Mod_1$ be the moduli space of flat structures of the torus $T^2$. We identify $\Mod_1$ with the set of points in the modular domain $$\mathcal M = \{a+i b \in \mathbb{C} \mid a \in (-1/2,1/2], \; b > \sqrt{1-a^2}\text{ or } b = \sqrt{1-a^2} \text{ and } a \in [0,1/2] \}.$$ For $\tau \in \mathcal M$, we let $\Sigma_\tau$ be the flat structure $\mathbb{R}^2/ \langle 1, \tau \rangle$ in $\Mod_1$. Define $P(\tau)$ to be the probability that three points, sampled uniformly on $\Sigma_\tau$, give rise to a homotopically trivial triangle by connecting them by shortest geodesic segments. Note, since shortest geodesic segments between points are unique away from a set of measure zero, $P(\tau)$ is well-defined.
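The probability $P(\tau)$ lends itself to direct simulation, which is a useful sanity check on everything that follows. The sketch below is ours, not part of the paper: it uses the fact, established below via Dirichlet domains, that after lifting one vertex to the origin and lifting the other two vertices to their nearest lattice representatives, the triangle is homotopically trivial precisely when the difference of those two lifts is again a nearest representative of its lattice coset.

```python
import random

def nearest_rep(x, y, ta, tb):
    """Nearest representative of (x, y) modulo the lattice Z*(1,0) + Z*(ta, tb)."""
    best = None
    for l in range(-2, 3):
        for k in range(-2, 3):
            vx, vy = x - k - l * ta, y - l * tb
            d = vx * vx + vy * vy
            if best is None or d < best[0]:
                best = (d, vx, vy)
    return best[1], best[2]

def estimate_P(ta, tb, n=20000, seed=1):
    """Monte Carlo estimate of P(tau) for tau = ta + tb*i."""
    rng = random.Random(seed)
    trivial = 0
    for _ in range(n):
        # two uniform points on the torus, lifted into the Dirichlet domain of 0
        u1, v1, u2, v2 = (rng.random() for _ in range(4))
        x2, y2 = nearest_rep(u1 + v1 * ta, v1 * tb, ta, tb)
        x3, y3 = nearest_rep(u2 + v2 * ta, v2 * tb, ta, tb)
        # trivial iff the third lift lies in the Dirichlet domain centred at the
        # second, i.e. their difference is already its own nearest representative
        dx, dy = nearest_rep(x3 - x2, y3 - y2, ta, tb)
        if abs(dx - (x3 - x2)) < 1e-12 and abs(dy - (y3 - y2)) < 1e-12:
            trivial += 1
    return trivial / n
```

With a fixed seed, `estimate_P(0.0, 1.0)` (square torus) should land within a few standard errors of $9/16 = 0.5625$, and `estimate_P(0.5, 3**0.5/2)` (hexagonal torus) near $7/12 \approx 0.5833$.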
Our main result gives $P(\tau)$ as a rational function of $a, b$ where $a + b i = \tau \in \mathcal M$. \begin{theorem}\label{thm:flat_torus} For $\tau \in \mathcal M$ one has $$P(\tau) = \frac{9}{16} + \frac{3|a|^2}{8b^2} - \frac{|a|^3}{2b^2} - \frac{|a|^3}{2b^4} + \frac{17 |a|^4}{16b^4} - \frac{|a|^5}{2b^4}.$$ \end{theorem} \begin{corollary} $$\frac{1}{vol(\Mod_1)}\int_{\tau \in \Mod_1} P(\tau) \, d \tau = \frac{1}{20} \left(13 - \frac{3\,\sqrt{3}}{\pi}\right)$$ where $d\tau$ is the Teichm\"uller (i.e.\ hyperbolic) metric on $\Mod_1$, and the volume is computed with respect to it. \end{corollary} \begin{corollary} $P(\tau) = 9/16$ if and only if the two shortest closed geodesics on $\Sigma_\tau$ meet at a right angle. Further, $P(\tau) = 7/12$ if and only if $\Isom(\Sigma_\tau)$ contains $D_6$ as a subgroup. \end{corollary} Our main tool for this computation involves reducing the problem to computing a new invariant of measurable sets in the plane that is preserved under the action of $\mathbb{R}^2 \rtimes \mathrm{SL}_2(\mathbb{R})$. This reduction arises by rephrasing the question in terms of the overlap of the Dirichlet domains of the sampled points. Even though the observation about Dirichlet domains extends to surfaces of higher genus with a hyperbolic or even Riemannian metric, the process of computing an exact value in that setting seems rather difficult. For example, for the complete finite-area hyperbolic metric on the thrice punctured sphere, the computation of $P(\tau)$ begins to involve elliptic functions and their integrals. The asymptotic behavior of $P(\tau)$ over the moduli space of higher genus surfaces is more amenable to analysis, which we address in a forthcoming paper \cite{forthcoming}. \section{Reduction to the overlap invariant} In this section, we reduce the computation of $P(\tau)$ to that of an invariant of triples of measurable sets in $\mathbb{R}^2$ that is preserved under the action of $\mathbb{R}^2 \rtimes \mathrm{SL}_2(\mathbb{R})$.
\begin{definition} Let $A\subset \mathbb{R}^2$ be a subset. For $w \in \mathbb{R}^2$, define $$A_w := \{ x+ w\,|\, x\in A\}.$$ Given measurable $A,B,C\subset \mathbb{R}^2$, consider the function $$F(A,B,C) := \int_{a \in A} area(B\cap C_a) \, da.$$ \end{definition} For $\tau \in \mathcal M$, we have $\Sigma_\tau = \mathbb{R}^2/ \Gamma_\tau$ where $\Gamma_\tau = \langle 1, \tau\rangle$. Recall that the Dirichlet domain of a point $x \in \mathbb{R}^2$ is the set $D_{\tau, x} = \{ y \in \mathbb{R}^2 \mid d(x,y) \leq d(x, \gamma y) \text{ for all } \gamma \in \Gamma_\tau\}$. Since $\Gamma_\tau$ is a subgroup of translations in the plane, $D_{\tau, x_1}$ and $D_{\tau, x_2}$ are translates of each other for any $x_1, x_2 \in \mathbb{R}^2$. It therefore makes sense to fix the origin $o = (0,0) \in \mathbb{R}^2$ and let $D_\tau = D_{\tau, o}$. A key property of Dirichlet domains is the fact that the distance between $y\in D_{\tau,x}$ and $x$ on $\mathbb{R}^2$ is equal to the distance between $x+\Gamma_\tau$ and $y+\Gamma_\tau$ on $\Sigma_\tau$. In particular, the geodesic line between $x$ and $y\in D_{\tau,x}$ in $\mathbb{R}^2$ projects to the minimizing geodesic on the torus. \begin{proposition} $P(\tau) = F(D_\tau,D_\tau,D_\tau)/area(D_\tau)^2$ \end{proposition} \begin{proof} Pick $3$ points $\{x_1,x_2, x_3\}$ on the flat torus $\Sigma_\tau$. Connecting these by shortest geodesics gives a triangle. Lifting to the universal cover, we can suppose that $\widetilde{x}_1 = o$ is the center of $D_\tau$ and choose $\widetilde{x}_2, \widetilde{x}_3 \in D_\tau$. Now, by lifting the geodesic from $x_1$ to $x_2$, we see that the triangle formed by $\{x_1,x_2,x_3\}$ on $\Sigma_\tau$ is homotopically trivial if and only if $\widetilde{x}_3$ is contained in the Dirichlet domain centered at $\widetilde{x}_2$.
Thus, given $\widetilde{x}_2$, the set of $\widetilde{x}_3$ that give a homotopically trivial triangle is precisely $D_\tau \cap D_{\tau, \widetilde{x}_2} = D_\tau \cap (D_\tau)_{\widetilde{x}_2}$. Integrating this over all possible choices of $\widetilde{x}_2 \in D_\tau$ gives $F(D_\tau,D_\tau,D_\tau)$. Lastly, normalizing to unit area gives the scaling factor of $1/area(D_\tau)^2$ and the desired result. \end{proof} To compute $P(\tau)$, we analyze the behavior of $F$ and the shape of $D_\tau$. \begin{proposition}[Properties of $F$]\label{prop:prop_of_F}\hspace*{0.1in} \begin{enumerate} \item $F$ is invariant under $\mathbb{R}^2 \rtimes \mathrm{SL}_2(\mathbb{R})$ acting diagonally by affine transformations. \item $F(t \, A, t \, B, t \, C) = t^4 \, F(A,B,C)$ where $t X$ denotes scaling $X$ by $t \in \mathbb{R}^+$. \item $F$ is additive with respect to disjoint union in $A, B,$ and $C$. \item $F(Q,Q,Q)=9/16$ where $Q = [-1/2,1/2] \times [-1/2,1/2]$ is the unit square. \item $F(\mathbb{R}^2,X,X)= area(X)^2$ for all measurable sets $X$. \end{enumerate} \end{proposition} \begin{proof} Statements (1), (2), and (3) follow from the definition of $F$. For (4) we can use the symmetry of the unit square and compute directly in each quadrant as follows. \[F(Q,Q,Q) = 4\int_{0}^{1/2}\int_{0}^{1/2} (1-x)(1-y)dxdy = \frac{9}{16}.\] For (5), we observe that $F$ is just the integral of the $\mathbb{R}^2$ convolution of indicator functions.
\begin{eqnarray*} F(\mathbb{R}^2,X,X) &=& \int_{w\in \mathbb{R}^2} area( X\cap X_w) \, dw\\ &=& \int_{w \in \mathbb{R}^2} \int_{s\in \mathbb{R}^2}\mathds{1}_X(s)\mathds{1}_X(s+w) \,ds dw\\ &=& \int_{s\in \mathbb{R}^2}\left(\mathds{1}_X(s)\int_{w\in \mathbb{R}^2}\mathds{1}_X(s+w) \,dw \right) \,ds\\ &=& \int_{s\in \mathbb{R}^2}\mathds{1}_X(s)\left(\int_{w\in \mathbb{R}^2}\mathds{1}_{X_{-s}}(w) \, dw\right) ds \\ &=& area(X)^2 \end{eqnarray*} \end{proof} \section{The shape of Dirichlet domains} Whenever $a = \mathrm{Re}\, \tau \neq 0$ the shape of $D_\tau$ is a hexagon with parallel sides. Otherwise, it is a rectangle. It is clear that $D_\tau$ and $D_{-\overline{\tau}}$ are mirror images of each other across the vertical axis. We will therefore assume that $a \in [0,1/2]$ for the rest of this section. \begin{proposition} Let $\tau= a+ib \in \mathcal M$ with $a \in [0,1/2]$. Then $D_\tau$ is given by the convex hull of the following six points $A = (1/2,\alpha)$, $B=(a-1/2,\beta)$, $C=(-1/2,\alpha)$, $A'=-A$, $B'=-B$, and $C'=-C$, where $\displaystyle{\alpha = \frac{b^2+a^2-a}{2b}}$ and $\displaystyle{\beta= \frac{b^2 - a^2 +a}{2b}}$. \end{proposition} \begin{proof} The Dirichlet domain is given by the intersection of the half-planes $$ H_\gamma =\{x \in \mathbb{R}^2 \, |\, d(x, o) \leq d(\gamma x, o)\} \text{ for } \gamma \in \Gamma_\tau.$$ Since $\Gamma_\tau = \langle 1, \tau \rangle$, most intersections are redundant. Indeed, a simple computation shows that since $a \in [0,1/2]$, the Dirichlet domain is given by the intersection of only $6$ half-planes corresponding to $\gamma = (1,0), (a,b), (a-1,b)$ and their inverses. Therefore $D_\tau$ is bounded by the $6$ lines whose equations are $$L_{(1,0)} := \{x=1/2\}$$ $$L_{(a,b)} := \{ax+by = \frac{a^2+b^2}{2}\}$$ $$L_{(a-1,b)} := \{(a-1)x+by = \frac{(a-1)^2+b^2}{2}\},$$ and their reflections around the origin.
It is then easy to compute that \begin{align*} & L_{(1,0)}\cap L_{(a,b)} = (1/2,\alpha)=A \\ & L_{(a,b)}\cap L_{(a-1,b)} = (a-1/2,\beta)=B \\ & L_{(-1,0)}\cap L_{(a-1,b)} = (-1/2, \alpha) = C. \end{align*} \end{proof} We will apply an element of $\mathrm{GL}_2(\mathbb{R})$ to map $D_\tau$ to a particular hexagon family. Let $H(s,t)$ be the hexagon obtained from the square $[-1/2, 1/2] \times [-1/2,1/2]$ by removing the two right angled triangles $T$ and $-T$ from the corners. The edge lengths of $T$ and $-T$ will be $s$ and $t$ as seen in Figure \ref{fig:H_st}. \begin{figure}[hbt] \begin{center} \begin{overpic}[scale=1]{H_st} \put(20,97){$s$} \put(0,82){$t$} \put(40,45){$H(s,t)$} \put(13,84){$T$} \put(77,10){$-T$} \end{overpic} \end{center} \caption{The $H(s,t)$ hexagon inside the unit square $Q$.} \label{fig:H_st} \end{figure} \begin{proposition}\label{prop:st_shape} Let $\tau= a+ib \in \mathcal M$ with $a \in [0,1/2]$. Then $D_\tau$ is $\mathrm{GL}_2(\mathbb{R})$ conjugate to $H(s,t)$ for $s= a$ and $t= \frac{a}{a^2+b^2}$. \end{proposition} \begin{proof} The element $$\gamma=\begin{pmatrix} 1&0 \\ \frac{a}{a^2+b^2} & \frac{b}{a^2+b^2} \end{pmatrix}$$ sends $D_\tau$ to $H(s,t)$ as $$\gamma A = \gamma \left(\frac{1}{2}, \frac{b^2+a^2-a}{2b}\right)= \left(\frac{1}{2},\frac{1}{2}\right)$$ $$\gamma B = \gamma \left(a-\frac{1}{2}, \frac{b^2 - a^2 +a}{2b}\right) = \left(a-\frac{1}{2}, \frac{1}{2}\right)$$ $$ \gamma C = \gamma \left(-\frac{1}{2}, \frac{b^2+a^2-a}{2b}\right)= \left(-\frac{1}{2},\frac{1}{2}-\frac{a}{a^2+b^2}\right)$$ \end{proof} \begin{corollary}\label{cor:prob} $P(\tau) = F(H(s,t),H(s,t),H(s,t))/area(H(s,t))^2$ \end{corollary} \section{Computations} Fix $s,t \in [0,1/2]$ and let $H=H(s,t)$. We denote by $T$ (resp. $-T$) the upper left (resp. the bottom right) triangle. Let $Q = H \cup T \cup (-T)$ be the unit square containing $H$.
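Both propositions above can be verified exactly in rational arithmetic: the vertices solve the stated line equations (with $C$ lying on $L_{(-1,0)}$ and $L_{(a-1,b)}$), and $\gamma$ carries them to the corners of $H(s,t)$. A small sketch of such a check (the sample value of $\tau$ is ours):

```python
from fractions import Fraction as F

a, b = F(2, 5), F(6, 5)        # sample tau = a + b*i in the fundamental domain
alpha = (b*b + a*a - a) / (2*b)
beta = (b*b - a*a + a) / (2*b)
A = (F(1, 2), alpha)
B = (a - F(1, 2), beta)
C = (-F(1, 2), alpha)

def on_line(P, g1, g2):
    # P lies on the perpendicular bisector L_(g1,g2): g1*x + g2*y = (g1^2+g2^2)/2
    return g1 * P[0] + g2 * P[1] == (g1 * g1 + g2 * g2) / 2

assert on_line(A, F(1), F(0)) and on_line(A, a, b)
assert on_line(B, a, b) and on_line(B, a - 1, b)
assert on_line(C, F(-1), F(0)) and on_line(C, a - 1, b)

# the shear gamma sends the vertices of D_tau to the corners of H(s, t)
d = a * a + b * b
s, t = a, a / d

def gamma(P):
    return (P[0], (a * P[0] + b * P[1]) / d)

assert gamma(A) == (F(1, 2), F(1, 2))
assert gamma(B) == (a - F(1, 2), F(1, 2))
assert gamma(C) == (-F(1, 2), F(1, 2) - t)
```

Because all quantities are rational in $a$, $b$, the equalities hold exactly, with no floating-point tolerance needed.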
\begin{lemma}\label{lem:sep} $F(H,H,H) = F(Q,Q,Q) -4F(H,H,T) - 2F(H,T,T) -2F(T,Q,Q).$ \end{lemma} \begin{proof} We have $$ F(Q,Q,Q) = F(H,Q,Q) +2F(T,Q,Q)$$ by symmetry of $Q$, $T$, and $-T$, and $$F(H,Q,Q) = F(H,H,Q) + 2F(H,T,Q)$$ by symmetry of $H$, $T$ and $-T$. Similarly, $$F(H,H,Q)= F(H,H,H) +2F(H,H,T)$$ $$F(H,T,Q) = F(H,T,-T) + F(H,T,T)+F(H,T,H).$$ Therefore \begin{eqnarray*} F(Q,Q,Q) &=& F(H,H,H)+ 2F(H,H,T) +\\ &+& 2(F(H,T,-T) + F(H,T,T)+F(H,T,H))+ 2 F(T,Q,Q)\\ F(Q,Q,Q) &= &F(H,H,H) + 4F(H,H,T) + 2F(H,T,T) +2F(T,Q,Q) \end{eqnarray*} using that $F(H,H,T)=F(H,T,H)$ and $F(H,T,-T)=0$; the last equality follows from the fact that $(-T) \cap T_w = \emptyset$ for all $w \in H$. \end{proof} \begin{lemma}\label{lem:TQQ} $F(T,Q,Q)= \frac{st}{24}\left(3 + 2 s + 2 t + s t \right).$ \end{lemma} \begin{proof} We can assume, by symmetry across the vertical axis, that $T$ is in the first quadrant. For $(u,v)\in T$, $Q \cap Q_{(u,v)}$ is a rectangle with sides of length $(1-u)$ and $(1-v)$. Therefore $$F(T,Q,Q) = \int_{(u,v)\in T} (1-u)(1-v) \, du dv.$$ $T$ is parametrized by $u \in [1/2 - s,1/2]$ and $v\in \left[\frac{1}{2} - \frac{t}{s}\left(u- \frac{1}{2} + s \right), 1/2\right].$ Therefore \begin{align*} F(T,Q,Q) &= \int_{u=1/2-s}^{1/2} \int_{v= \frac{1}{2} - \frac{t}{s}\left(u- \frac{1}{2} + s \right)}^{1/2} (1-u)(1-v) \, d u dv\\ &= \int_{x=0}^{s}\int_{y=0}^{ \frac{t}{s}\left(s-x\right)}\left(\frac{1}{2}+x\right)\left(\frac{1}{2}+y\right) \,d x d y \end{align*} where we make a change of variables $x = 1/2-u, y = 1/2-v$.
Computing further, \begin{align*} F(T,Q,Q) & = \int_{x = 0}^s \frac{1}{2}\left(\frac{1}{2}+x\right)\left(\left(\frac{1}{2}+\frac{t}{s}\left(s-x\right)\right)^2 - \frac{1}{4}\right) \, dx\\ & = \frac{t}{4 s^2} \int_{x = 0}^s \left(1+ 2 x\right)\left(s-x\right)\left(s(1+t) - tx\right) \, dx\\ & = \frac{t}{4 s^2} \int_{x = 0}^s 2 t x^3 + \left( t - 2s - 4 st\right) x^2 - s \left(1 + 2t - 2s - 2s t\right) x + s^2(1+t) \, dx \\ & = \frac{t}{4 s^2} \left( \frac{t s^4}{2} + \frac{\left(t - 2s - 4 st\right) s^3}{3} - \frac{\left(1 + 2t - 2s - 2s t\right)s^3}{2}+ s^3(1+t)\right)\\ & = \frac{st}{24} \left( 3 s t + 2 \left(t - 2s - 4 st\right) - 3\left(1 + 2t - 2s - 2s t\right)+ 6 (1+t)\right)\\ & = \frac{st}{24}\left(3 + 2 s + 2 t + s t\right) \end{align*} \end{proof} For the remaining computations we will need to introduce the hexagon $V = V(s,t) = \{ u - v \mid u, v \in T \}$. Note that $V$ is the set of all vectors $w$ such that $T \cap T_w \neq \emptyset$. It is not hard to see that $V$ is the hexagon depicted in Figure \ref{fig:V_st}. \begin{figure}[hbt] \begin{center} \begin{overpic}[scale=1]{V_st} \put(67,74){$s$} \put(88,58){$t$} \put(41,45){$V(s,t)$} \end{overpic} \end{center} \caption{The hexagon $V = V(s,t)$ of all vectors $w$ such that $T \cap T_w \neq \emptyset$.} \label{fig:V_st} \end{figure} \begin{lemma}\label{lem:HTT} $ F(H,T,T)=area(T)^2 = \frac{s^2t^2}{4}$ \end{lemma} \begin{proof} Notice that $V(s,t) \subseteq H(s,t)$ for all $s,t \in [0,1/2]$. Therefore $F(H,T,T) = F(V,T,T) = F(\mathbb{R}^2,T,T) = area(T)^2$ by (5) of Proposition \ref{prop:prop_of_F}. \end{proof} \begin{lemma}\label{lem:HHT} $F(H,H,T)= \frac{st}{24} \left( 3 + 2s + 2t - 11 st\right)$ \end{lemma} \begin{proof} For this computation, we will divide $H$ into 7 disjoint regions $H = \bigcup_{i = 0}^6 P_i$. Consider the line in the $u,v$-plane given by $v = t \, u / s$. 
Let $$P_0 = \{ (u,v) \in H \mid (v > t \, u / s) \vee ( (u > s) \wedge (v > t) ) \vee ((u < -s) \wedge (v < -t))\}$$ $$P_1 = \{ (u,v) \in H \mid (u > 0) \wedge (v < 0) \wedge (v + t < t \, u /s) \}$$ $$P_2 = (s,1/2) \times (0, t)$$ $$P_3 = (-s,0) \times (-t, -1/2)$$ $$P_4 = \{ (u,v) \in H \mid (0 < u < s) \wedge (0 < v < t) \wedge (v < t \, u /s) \}$$ $$P_5 = \{ (u,v) \in H \mid (-s < u < 0) \wedge (-t < v < 0) \wedge (v < t \, u /s) \}$$ $$P_6 = \{ (u,v) \in H \mid (0 < u < s) \wedge (0 < v < t) \wedge (v + t > t \, u /s) \}$$ Here, $\vee$ denotes ``or'' and $\wedge$ denotes ``and.'' The labeled regions are shown in Figure \ref{fig:HHT}. \begin{figure}[hbt] \begin{center} \begin{overpic}[scale=1.5]{HHT_all} \put(35,65){$P_0$} \put(72,25){$P_1$} \put(88.9,59){$P_2$} \put(29.5,14){$P_3$} \put(73,56){$P_4$} \put(36,33){$P_5$} \put(58,40){$P_6$} \end{overpic} \end{center} \caption{Partition of $H$ into seven region for computations in Lemma \ref{lem:HHT}.} \label{fig:HHT} \end{figure} \begin{itemize} \item ($P_0$) It is easy to see that for $w \in P_0$ one has $H \cap T_w = \emptyset$, so $F(P_0, H, T) = 0$. \item ($P_1$) For $w \in P_1$, one has $T_w \subset H$. Therefore, $$F(P_1, H, T) = area(T) \cdot area(P_1) = area(T) \left(\frac{1}{4} - 2 \, area(T)\right) = \frac{st}{8}\left(1 - 4 st\right) .$$ \item ($P_2$ and $P_3$) Note that $F(P_2,H,T)$ and $F(P_3,H,T)$ are symmetric up to a switch of $s$ and $t$. Let us compute $F(P_2, H, T)$. For every $(u,v) \in P_2$, the intersection $H \cap T_{(u,v)}$ is a triangle similar to $T$. In fact, the height of this triangle is exactly $t - v$. Thus, $$F(P_2, H, T) = \int_{u = s}^{1/2} \int_{v = 0}^t \frac{st}{2} \frac{(t-v)^2}{t^2} \, du dv = \frac{st^2}{6}\left(\frac{1}{2} - s\right) = \frac{st^2}{12}\left(1 - 2s\right).$$ By flipping $s$ and $t$, we obtain $$F(P_3, H, T) = \frac{s^2t}{12}\left(1 - 2t\right).$$ \item ($P_4$ and $P_5$) Similarly, $F(P_4,H,T)$ and $F(P_5,H,T)$ are symmetric up to a switch of $s$ and $t$.
Let us compute $F(P_4, H, T)$. For $(u,v) \in P_4$, $area(H \cap T_{(u,v)})$ is the area of a similar triangle of height $t-v$ {\it minus} $area(T \cap T_{(u,v)})$, as seen in Figure \ref{fig:T_cap_H_1}. \begin{figure}[hbt] \begin{center} \begin{overpic}[scale=1]{T_cap_H_1} \end{overpic} \end{center} \caption{Partition of $H \cap T_{(u,v)}$ for $(u,v) \in P_4$. The orange part is outside of $H$.} \label{fig:T_cap_H_1} \end{figure} Therefore, \begin{align*} F(P_4, H, T) &= \int_{u = 0}^s \int_{v = 0}^{t \, u /s} \frac{st}{2} \frac{(t-v)^2}{t^2} \, du dv - \int_{(u,v) \in P_4} area(T \cap T_{(u,v)}) \, du dv\\ & = \frac{st^2}{6} \int_{u = 0}^s 1 - \left(1 - \frac{u}{s}\right)^3 \, du - F(P_4, T,T)\\ & = \frac{st^2}{6}\left(s - \frac{s}{4}\right) - \frac{1}{6} \, area(T)^2 = \frac{st^2}{6}\left(s - \frac{s}{4}\right) - \frac{s^2 t^2}{24} = \frac{s^2t^2}{12}. \end{align*} We observe that $F(P_4, T,T) = \frac{1}{6} \, area(T)^2$ as follows. Apply $A \in \mathrm{SL}_2(\mathbb{R})$ that sends $T$ to an equilateral triangle. Then $A \cdot P_4$ is a sixth of the regular hexagon $A \cdot V$. By symmetry, $F(P_4, T,T) = \frac{1}{6} F(V,T,T) = \frac{1}{6} \, area(T)^2$. Finally, switching $s$ and $t$, $$F(P_5, H, T) = \frac{s^2t^2}{12}.$$ \item ($P_6$) For $(u,v) \in P_6$, observe that $area(H \cap T_{(u,v)}) = area(T) - area(T \cap T_{(u,v)})$ as in Figure \ref{fig:T_cap_H_2}. \begin{figure}[hbt] \begin{center} \begin{overpic}[scale=1]{T_cap_H_2} \end{overpic} \end{center} \caption{Partition of $H \cap T_{(u,v)}$ for $(u,v) \in P_6$.} \label{fig:T_cap_H_2} \end{figure} \end{itemize} We can therefore compute \begin{align*} F(P_6, H, T) & = \int_{(u,v) \in P_6} area(T) - area(T \cap T_{(u,v)}) \, du dv = area(T)^2 - F(P_6, T,T)\\ & = \frac{s^2t^2}{4} - \frac{s^2t^2}{24} = \frac{5\, s^2 t^2}{24}.
\end{align*} We can now combine all of our computations to obtain \begin{align*} F(H,H,T) &= \sum_{i = 0}^6 F(P_i,H,T)\\ &= 0 + \frac{st}{8}\left(1 - 4 st\right) + \frac{st^2}{12}\left(1 - 2s\right)+\frac{s^2t}{12}\left(1 - 2t\right) + \frac{s^2t^2}{12} + \frac{s^2t^2}{12} + \frac{5\, s^2 t^2}{24}\\ & = \frac{1}{24}\left( 3st(1-4st) + 2st^2(1-2s) + 2 s^2t(1-2t) + 9 s^2 t^2 \right)\\ & = \frac{1}{24}\left(3st + 2 st^2 + 2s^2 t - 11 s^2t^2\right) = \frac{st}{24} \left( 3 + 2s + 2t - 11 st\right). \end{align*} \end{proof} \begin{lemma} $F(H,H,H) = \frac{9}{16} - \frac{st}{12} \left(9 + 6 s + 6 t - 15 st\right) = \frac{9}{16} - \frac{3st}{4} - \frac{s^2t}{2} - \frac{st^2}{2} + \frac{5 s^2t^2}{4}.$ \end{lemma} \begin{proof} We now combine Lemmas \ref{lem:sep}, \ref{lem:HHT}, \ref{lem:HTT}, \ref{lem:TQQ} and Proposition \ref{prop:prop_of_F} (4) to compute \begin{align*} F(H,H,H) &= F(Q,Q,Q) -4F(H,H,T) - 2F(H,T,T) -2F(T,Q,Q)\\ & = \frac{9}{16} - \frac{st}{6} \left( 3 + 2s + 2t - 11 st\right) - \frac{s^2t^2}{2} - \frac{st}{12}\left(3 + 2 s + 2 t + s t \right)\\ & = \frac{9}{16} - \frac{st}{12} \left( 6 + 4s + 4t - 22 st + 6 st + 3 + 2s + 2t + st\right)\\ & = \frac{9}{16} - \frac{st}{12} \left(9 + 6 s + 6 t - 15 st\right)\\ & = \frac{9}{16} - \frac{3st}{4} - \frac{s^2t}{2} - \frac{st^2}{2} + \frac{5 s^2t^2}{4}. \end{align*} \end{proof} {\it Proof of Theorem \ref{thm:flat_torus}.} By construction, $area(H) = (1- s t)$. For $\tau = a + ib \in \mathcal M$ with $a \in [0,1/2]$, Corollary \ref{cor:prob} gives $$P(\tau) = F(H,H,H)/area(H)^2 = \frac{1}{(1- st)^2} \left( \frac{9}{16} - \frac{3st}{4} - \frac{s^2t}{2} - \frac{st^2}{2} + \frac{5 s^2t^2}{4} \right).$$ By Proposition \ref{prop:st_shape}, $s= a$ and $t= \frac{a}{a^2+b^2}$, so $1-st = \frac{b^2}{a^2+b^2}$.
Computing, \begin{align*} P(\tau) & = \frac{(a^2+b^2)^2}{b^4} \left(\frac{9}{16} - \frac{3 a^2}{4(a^2+b^2)} - \frac{a^3}{2(a^2+b^2)} - \frac{a^3}{2(a^2+b^2)^2} + \frac{5 a^4}{4(a^2+b^2)^2}\right)\\ & = \frac{1}{16 b^4} \left(9(a^2+b^2)^2 -12 a^2(a^2+b^2) - 8 a^3(a^2+b^2) - 8a^3 + 20 a^4\right)\\ & = \frac{1}{16 b^4}\left((9-12+20)a^4+9b^4 + (18 -12)a^2b^2-8a^5-8a^3b^2-8a^3 \right)\\ & = \frac{9}{16} + \frac{3a^2}{8b^2} - \frac{a^3}{2b^2} - \frac{a^3}{2b^4} + \frac{17 a^4}{16b^4} - \frac{a^5}{2b^4}. \end{align*} Since $D_\tau$ and $D_{-\overline{\tau}}$ are mirror images of each other across the vertical axis, the above formula holds for all $\tau = a + i b \in \mathcal M$ if $a$ is replaced by $|a|$. \qed \bibliographystyle{alpha}
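Since only even powers of $b$ occur, the closed formula and the $(s,t)$ route through $F(H,H,H)/area(H)^2$ can both be evaluated exactly over $\mathbb{Q}$. The following sketch (ours, with names of our choosing) confirms the two special values from the corollaries and the agreement of the two routes:

```python
from fractions import Fraction as Fr

def P_ab(a, b2):
    """P(tau) for tau = a + b*i, written in terms of a and b^2."""
    a = abs(a)
    b4 = b2 * b2
    return (Fr(9, 16) + 3 * a**2 / (8 * b2) - a**3 / (2 * b2) - a**3 / (2 * b4)
            + 17 * a**4 / (16 * b4) - a**5 / (2 * b4))

def P_st(a, b2):
    """The same probability via s = a, t = a/(a^2+b^2) and F(H,H,H)/area(H)^2."""
    s, t = a, a / (a * a + b2)
    FH = Fr(9, 16) - 3 * s * t / 4 - s * s * t / 2 - s * t * t / 2 + 5 * (s * t)**2 / 4
    return FH / (1 - s * t)**2

# rectangular torus (a = 0) and hexagonal torus (tau = 1/2 + i*sqrt(3)/2, b^2 = 3/4)
assert P_ab(Fr(0), Fr(1)) == Fr(9, 16)
assert P_ab(Fr(1, 2), Fr(3, 4)) == Fr(7, 12)

# the two routes agree exactly at a sample of points of the fundamental domain
for a in [Fr(k, 10) for k in range(6)]:
    for b2 in [Fr(1), Fr(3, 2), Fr(2)]:
        assert P_ab(a, b2) == P_st(a, b2)
```

For instance, $\tau = 1/4 + i$ gives $P(\tau) = 2351/4096$ by either route, and the hexagonal value $7/12$ matches the $D_6$-symmetric case of the second corollary.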
https://arxiv.org/abs/1201.1851
Enumerating Trees
In this note we discuss trees similar to the Calkin-Wilf tree, a binary tree that enumerates all positive rational numbers in a simple way. The original construction of Calkin and Wilf is reformulated in a more algebraic language, and an elementary application of methods from analytic number theory gives restrictions on possible analogues.
\section{The Calkin-Wilf Tree} \noindent In \cite{CalkinWilf}, Neil Calkin and Herbert Wilf introduced a remarkably beautiful\footnote{It was considered worthy by the authors of \cite{AignerZiegler09} to be included into their BOOK.} way to enumerate the positive rational numbers, drawing together several observations by Stern \cite{Stern1858} and Reznick \cite{Reznick90}. The enumeration is along a binary tree in the sense of computer science, i.e. an infinite rooted tree in which each node has two children\footnote{By the recursive procedure for constructing the tree, it seems natural to use the family metaphor in this direction. Since this is the usual terminology, we stick to it. The reverse direction would be somewhat more fitting, though, since (at least by the current state of art in reproductive medicine) everybody has precisely two parents, one of which is ``male'' and one of which is ``female''. But to produce children, you need a partner, and their number is generally not fixed to two. In either direction, an infinite chain appears problematic, although there can be little doubt that Thomas Aquinas would have preferred an infinite sequence of children.}, one of which is called ``left'' and the other ``right''. This naming should be considered not just as a device for drawing the tree, but rather as part of the mathematical structure. Here comes its construction. The nodes of the tree are labelled by positive rational numbers. For ease of notation, we write each such number in the form $\frac{p}{q}$ with $p,q\in\mathbb{N}\smallsetminus \{ 0\}$ coprime. The rule for labelling is recursive: the tree's root is labelled by $\frac{1}{1}$. If a node is labelled $\frac{p}{q}$, then its left child bears the label $\frac{p}{p+q}$ and its right child bears the label $\frac{p+q}{q}$. By induction we directly see that these are reduced fractions as written. 
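The recursive labelling rule is immediate to implement. The following sketch (ours; it uses Python's \texttt{Fraction}, which keeps numerator and denominator coprime automatically) generates the layers of the tree and reproduces the rows of Table \ref{CWBild}:

```python
from fractions import Fraction

def children(x):
    """Left child p/(p+q) and right child (p+q)/q of the node labelled x = p/q."""
    p, q = x.numerator, x.denominator
    # gcd(p, p+q) = gcd(p+q, q) = gcd(p, q) = 1, so both labels are already reduced
    return Fraction(p, p + q), Fraction(p + q, q)

def layer(n):
    """Labels of the n-th layer of the tree (the root is layer 0), left to right."""
    row = [Fraction(1, 1)]
    for _ in range(n):
        row = [c for x in row for c in children(x)]
    return row

# the layers reproduce the rows of the table
assert layer(1) == [Fraction(1, 2), Fraction(2, 1)]
assert layer(2) == [Fraction(1, 3), Fraction(3, 2), Fraction(2, 3), Fraction(3, 1)]
# consistent with the proposition below: no label repeats within a layer
assert len(set(layer(5))) == 32
```

Each layer doubles in size, so layer $n$ holds $2^n$ reduced fractions.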
Before stating and proving the basic properties of this tree, we encourage the reader to contemplate Table \ref{CWBild}, where the first few layers are shown. \begin{table} \begin{center} \caption{The first five layers of the Calkin-Wilf tree}\label{CWBild} \begin{tikzpicture} [every node/.style={draw,circle,inner sep=2pt}, level 1/.style={sibling distance=80mm}, level 2/.style={sibling distance=40mm}, level 3/.style={sibling distance=20mm}, level 4/.style={sibling distance=10mm}] \node {$\dfrac{1}{1}$} child {node {$\dfrac{1}{2}$} child {node {$\dfrac{1}{3}$} child {node {$\dfrac{1}{4}$} child {node {$\frac{1}{5}$}} child {node {$\frac{5}{4}$}} } child {node {$\dfrac{4}{3}$} child {node {$\frac{4}{7}$}} child {node {$\frac{7}{3}$}} } } child {node {$\dfrac{3}{2}$} child {node {$\dfrac{3}{5}$} child {node {$\frac{3}{8}$}} child {node {$\frac{8}{5}$}} } child {node {$\dfrac{5}{2}$} child {node {$\frac{5}{7}$}} child {node {$\frac{7}{2}$}} } } } child {node {$\dfrac{2}{1}$} child {node {$\dfrac{2}{3}$} child {node {$\dfrac{2}{5}$} child {node {$\frac{2}{7}$}} child {node {$\frac{7}{5}$}} } child {node {$\dfrac{5}{3}$} child {node {$\frac{5}{8}$}} child {node {$\frac{8}{3}$}} } } child {node {$\dfrac{3}{1}$} child {node {$\dfrac{3}{4}$} child {node {$\frac{3}{7}$}} child {node {$\frac{7}{4}$}} } child {node {$\dfrac{4}{1}$} child {node {$\frac{4}{5}$}} child {node {$\frac{5}{1}$}} } } }; \end{tikzpicture} \end{center} \end{table} \begin{Proposition}[Calkin-Wilf]\label{CWProposition} In the Calkin-Wilf tree, every positive rational appears exactly once. \end{Proposition} \begin{proof} For ease of parlance, we confuse nodes with their labels. Writing a positive rational as $p/q$ with $p,q$ coprime positive integers, we proceed by induction on $m=\max (p,q)$. For $m=1$ there is only $p=q=1$ to consider.
The rational number $1/1=1$ does appear in the tree, namely at its root; it cannot occur anywhere else, since each left child $p/(p+q)$ is smaller than $1$ and each right child $(p+q)/q$ is bigger than $1$. Assume now that the statement is proved for all $m<m_0$, and let $x=p/q$ with $\max (p,q)=m_0$. Then either $x<1$ or $x>1$. In the first case, we have $m_0=q>p$, hence $x$ is the left child of the (by assumption) unique node labelled $p/(q-p)$, and since it cannot be a right child (else $x>1$), it cannot occur at any other place. Similarly, if $x>1$, it must be a right child, and it must be the right child of $(p-q)/q$ which, by assumption, does occur exactly once. \end{proof} The proof already shows that the position of a positive rational $p/q$ can be determined by performing the Euclidean algorithm on $p$ and $q$. It is also clear that the continued fraction expansion of $p/q$ and the sequence of left/right moves one has to make from $1$ in order to get to $p/q$ are easily translated into one another. So, if we write down the first layer, then the second layer, then the third layer of the Calkin-Wilf tree, and so on, we obtain a list of the positive rationals in which each of them appears exactly once, i.e. a bijection $\mathbb{N}_0\to\mathbb{Q}_{>0}$. As can be checked from Table \ref{CWBild}, this list begins with \begin{equation}\label{CWList} \frac{1}{1},\frac{1}{2},\frac{2}{1},\frac{1}{3},\frac{3}{2},\frac{2}{3},\frac{3}{1},\frac{1}{4},\frac{4}{3},\frac{3}{5},\frac{5}{2},\frac{2}{5},\frac{5}{3},\frac{3}{4},\frac{4}{1},\frac{1}{5},\frac{5}{4},\frac{4}{7},\frac{7}{3},\frac{3}{8},\frac{8}{5},\frac{5}{7},\frac{7}{2},\frac{2}{7},\frac{7}{5},\frac{5}{8},\frac{8}{3},\frac{3}{7},\frac{7}{4},\frac{4}{5},\frac{5}{1},\ldots \end{equation} The attentive reader will already have noticed that the denominator of each term is equal to the numerator of its successor. This can easily be proved by induction.
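As remarked above, the proof is effectively an algorithm: running the Euclidean algorithm on $(p,q)$ and recording the steps in reverse yields the sequence of left/right moves from the root. A sketch in Python; the function name \emph{path\_to} is ours.

```python
from math import gcd

def path_to(p, q):
    """Return the word of left/right moves from the root 1/1 to p/q,
    read off by running the Euclidean algorithm backwards on (p, q)."""
    assert p > 0 and q > 0 and gcd(p, q) == 1
    moves = []
    while (p, q) != (1, 1):
        if p < q:          # p/q < 1: it is the left child of p/(q-p)
            moves.append("L")
            q -= p
        else:              # p/q > 1: it is the right child of (p-q)/q
            moves.append("R")
            p -= q
    moves.reverse()        # we collected the moves from the node up to the root
    return "".join(moves)
```

For instance, the word for $4/7$ comes out as LLRL, matching the path $1\to 1/2\to 1/3\to 4/3\to 4/7$ in Table \ref{CWBild}.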
Hence there must be a function $f :\mathbb{N}_0\to\mathbb{N}$ such that $f(n)$ and $f(n+1)$ are coprime, and the $n$-th element of the sequence (\ref{CWList}) is equal to $f(n)/f(n+1)$. It is proved in \cite{CalkinWilf} that $f(n)$ is the number of ways to partition $n$ into powers of two, each power occurring at most twice. Moshe Newman has also found a simple recursive construction of the sequence (\ref{CWList}) that does not make reference to the tree anymore: it is the sequence $(a_n)$ with $a_0=1$ and \begin{equation*} a_{n+1}=\frac{1}{1+\lfloor a_n\rfloor -\{ a_n\} }. \end{equation*} Here, $\lfloor a_n\rfloor$ is the largest integer $\le a_n$ and $\{ a_n\} =a_n-\lfloor a_n\rfloor$ is the ``fractional part'' of $a_n$. This was a solution to a problem raised by Donald Knuth in the American Mathematical Monthly, see \cite{Knuth}. For more details, and further interesting developments in directions not touched upon in this paper, see \cite{BatesMansour}, \cite{BergstraTucker}, and \cite{Northshield}. We wish to look upon the Calkin-Wilf tree from another point of view: that of M\"{o}bius transformations. Recall that the group of M\"{o}bius transformations over a field $K$ is the group $\operatorname{PGL}_2(K)=\operatorname{GL}_2(K)/K^{\times }$. We introduce the following notation: $$\begin{bmatrix} a&b\\ c&d \end{bmatrix}$$ is the element of $\operatorname{PGL}_2(K)$ represented by $$\begin{pmatrix} a&b\\ c&d \end{pmatrix}\in\operatorname{GL}_2(K).$$ These M\"{o}bius transformations operate upon $\mathbb{P}^1(K) =K\cup\{\infty\}$ in the well-known way $$\begin{bmatrix} a&b\\ c&d \end{bmatrix}\cdot z=\frac{az+b}{cz+d}.$$ The subgroup $\operatorname{PSL}_2(\mathbb{Z} )=\operatorname{SL}_2(\mathbb{Z} )/\{\pm\mathbf{1}\}$ of $\operatorname{PGL}_2(\mathbb{Q} )$ has been much investigated, and it operates transitively on $\mathbb{P}^1(\mathbb{Q} )$.
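Newman's recursion is easy to test with exact rational arithmetic; the following Python sketch (the function name \emph{newman\_sequence} is ours) regenerates the beginning of the list (\ref{CWList}).

```python
from fractions import Fraction
from math import floor

def newman_sequence(n):
    """First n terms of the Calkin-Wilf list via Newman's recursion
    a_{k+1} = 1 / (1 + floor(a_k) - {a_k}), starting from a_0 = 1."""
    a = Fraction(1)
    terms = [a]
    for _ in range(n - 1):
        fl = floor(a)                  # integer part of a_k
        a = 1 / (1 + fl - (a - fl))    # Newman's formula, exactly
        terms.append(a)
    return terms
```

Note that no reduction of fractions and no memory of the tree is needed: each term is computed from its predecessor alone.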
A closer look at the rules generating the Calkin-Wilf tree shows that if a node is labelled by $x\in\mathbb{Q}_{>0}\subset\mathbb{P}^1(\mathbb{Q} )$, then its left child is labelled by $L(x)$ and its right child by $R(x)$, where \begin{equation} L=\begin{bmatrix}1&0\\ 1&1\end{bmatrix}\text{ and }R=\begin{bmatrix}1&1\\ 0&1\end{bmatrix}. \end{equation} These choices may at first glance look arbitrary, but we shall argue in the next section that they are not. \section{The Monoid $\operatorname{SL}_2(\mathbb{N}_0)$} \noindent Most of the literature on M\"{o}bius transformations deals with groups of them, but here we shall be concerned with monoids. Since this term is somewhat ambiguous, let us fix a definition: \begin{Definition} A \emph{monoid} is a set $M$ together with a binary operation $\cdot :M\times M\to M$ with the following properties: \begin{enumerate} \item it is associative, i.e. $x(yz)=(xy)z$ for any $x,y,z\in M$, and \item there exists an identity element, i.e. an element $e\in M$ such that $ex=x=xe$ for all $x\in M$. \end{enumerate} \end{Definition} Such an identity element is necessarily unique. As usual in algebra, one can now introduce free monoids. If $A$ is a set (considered as an ``alphabet''), then the free monoid\footnote{Friends of abstract nonsense will immediately recognize that this is equivalent to the definition in terms of an adjoint functor to the forgetful functor to sets that they sure would have proposed.} $\mathrsfs{F}(A)$ generated by $A$ consists of all formal words of finite length in the alphabet $A$. Multiplication is given by concatenation. The empty word $\varnothing$ is allowed and serves as the identity element in $\mathrsfs{F}(A)$. If $M$ is a monoid and $A\subseteq M$ a subset, we say that $M$ is \emph{free on} $A$ or \emph{freely generated by} $A$ if the obvious map $\mathrsfs{F}(A)\to M$ is an isomorphism of monoids; in other words, if each element of $M$ can be written in a unique way as a product of elements of $A$.
What do free monoids look like? Certainly the free monoid on one element is isomorphic to $\mathbb{N}_0$ with addition. The free monoid on two generators is much richer in structure. It is tempting to think of it as similar to the free group on two generators; but it is in fact much more rigid. Namely: \begin{Lemma}\label{AutOfFreeMonoids} Let $X=\{ x_1,\ldots ,x_n\}$ be a finite set with $n$ elements, and set $\mathrsfs{F}_n=\mathrsfs{F}(X)$. Then any automorphism of $\mathrsfs{F}_n$ is obtained from a permutation of the $x_i$. \end{Lemma} \begin{proof} Consider $X$ as a subset of $\mathrsfs{F}_n$. Then an element $\gamma\in\mathrsfs{F}_n$ is in $X$ if and only if $\gamma\neq 1$ and whenever $\gamma =\delta\varepsilon$, then at least one of $\delta$, $\varepsilon$ is equal to $1$. Hence any automorphism of $\mathrsfs{F}_n$ takes $X$ to itself. In particular, $\mathrsfs{F}_n\simeq\mathrsfs{F}_m$ if and only if $m=n$. \end{proof} An automorphism of $\mathrsfs{F}_n$ is of course determined by what it does on $X$, and so we get an isomorphism $\operatorname{Aut}\mathrsfs{F}_n\simeq\mathfrak{S}_n$ with the symmetric group. By contrast, if $F_n$ denotes the free \emph{group} on $n$ letters, the automorphism group $\operatorname{Aut} F_n$ is huge. But the picture becomes clearer when one notices that the analogue of $\operatorname{Aut} F_n$ should not be the group $\operatorname{Aut}\mathrsfs{F}_n$, but the monoid $\operatorname{End}\mathrsfs{F}_n$, which is much larger. But now enough abstract algebra; we finally introduce the object announced in the section title. As one would expect from the notation, the monoid $\operatorname{SL}_2(\mathbb{N}_0)$ consists of all $(2\times 2)$-matrices with entries in $\mathbb{N}_0$ having determinant one, with matrix multiplication as the monoid operation. In other words, $\operatorname{SL}_2(\mathbb{N}_0)$ is the sub-monoid of $\operatorname{SL}_2(\mathbb{Z} )$ consisting of all matrices with nonnegative entries.
Note that the composition \begin{equation} \operatorname{SL}_2(\mathbb{N}_0)\to\operatorname{SL}_2(\mathbb{Z} )\to\operatorname{PSL}_2(\mathbb{Z} ) \end{equation} is injective, so that we can and will view $\operatorname{SL}_2(\mathbb{N}_0)$ as a submonoid of $\operatorname{PSL}_2(\mathbb{Z} )$. Hence the M\"{o}bius transformations $L$ and $R$ introduced above can be viewed as elements of $\operatorname{SL}_2(\mathbb{N}_0)$. \begin{Proposition}[Folklore]\label{SLZwoNFrei} The monoid $\operatorname{SL}_2(\mathbb{N}_0)$ is freely generated by the elements \begin{equation} L=\begin{pmatrix}1&0\\ 1&1\end{pmatrix}\text{ and }R=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}. \end{equation} \end{Proposition} \begin{proof} We first show that $\operatorname{SL}_2(\mathbb{N}_0)$ is generated by $L$ and $R$. So let $$\gamma =\begin{pmatrix} a&b\\ c&d \end{pmatrix} \in\operatorname{SL}_2(\mathbb{N}_0).$$ We set $\Sigma (\gamma )=a+b+c+d$ and proceed by induction on $\Sigma (\gamma )$. It is clear that $\Sigma (\gamma )\ge 2$, with equality if and only if $\gamma =\mathbf{1}$. Hence we may assume that $\Sigma (\gamma )\ge 3$ and $\gamma\neq\mathbf{1}$. Consider the two products in $\operatorname{SL}_2(\mathbb{Z} )$: $$L^{-1}\gamma =\begin{pmatrix} a&b\\ c-a&d-b \end{pmatrix}\text{ and }R^{-1}\gamma= \begin{pmatrix} a-c&b-d\\ c&d \end{pmatrix}.$$ By Lemma \ref{LemmaInequality} below, $(a-c)(b-d)\ge 0$, so at least one of these is in $\operatorname{SL}_2(\mathbb{N}_0)$. For the sake of simplicity, assume that $L^{-1}\gamma\in\operatorname{SL}_2(\mathbb{N}_0)$; the other case is treated analogously. Then $\Sigma (L^{-1}\gamma )<\Sigma (\gamma )$, so by the induction hypothesis $L^{-1}\gamma$ is a product of $L$ and $R$. Hence so is $\gamma$. Now we have proved that $L$ and $R$ generate $\operatorname{SL}_2(\mathbb{N}_0)$.
As to freedom, we show that $\operatorname{SL}_2(\mathbb{N}_0)$ is the disjoint union of the sets $\{\mathbf{1}\}$, $L\cdot\operatorname{SL}_2(\mathbb{N}_0)$ and $R\cdot\operatorname{SL}_2(\mathbb{N}_0)$. That it is their union follows from the fact already proved (that $L$ and $R$ generate $\operatorname{SL}_2(\mathbb{N}_0)$), and the disjointness follows by contemplating the equations $$ L\cdot\begin{pmatrix} a&b\\ c&d \end{pmatrix}=\begin{pmatrix} a&b\\ a+c&b+d \end{pmatrix}\text{ and }R\cdot\begin{pmatrix} a&b\\ c&d \end{pmatrix}=\begin{pmatrix} a+c&b+d\\ c&d \end{pmatrix}. $$ (Just consider the possible order relations between entries.) But this observation gives an induction proof on word length for the uniqueness of a word defining an element. \end{proof} We should remark that $L$ and $R$ do \emph{not} generate a free group of matrices, nor of M\"{o}bius transformations. To be more specific, the subgroup of $\operatorname{GL}_2(\mathbb{Q} )$ they generate is $\operatorname{SL}_2(\mathbb{Z} )$, and correspondingly the subgroup of $\operatorname{PGL}_2(\mathbb{Q} )$ they generate is $\operatorname{PSL}_2(\mathbb{Z} )$. Both groups are well-known to contain nontrivial torsion elements. For instance, we have the equations $(RL^{-1}R)^2=\mathbf{1}$ in $\operatorname{PSL}_2(\mathbb{Z} )$ and $(RL^{-1}R)^4=\mathbf{1}$ in $\operatorname{SL}_2(\mathbb{Z} )$. \begin{Lemma}\label{LemmaInequality} Let $$\begin{pmatrix} a&b\\ c&d\end{pmatrix}\in\operatorname{SL}_2(\mathbb{N}_0)$$ be different from the identity matrix. Then $(a-c)(b-d)\ge 0$. \end{Lemma} \begin{proof} Assume that $(a-c)(b-d)<0$, i.e. that $a-c$ and $b-d$ are both nonzero and have opposite signs. There are two cases. The first case is that $a>c$ and $d>b$. Then $a\ge c+1$ and $d\ge b+1$, whence $$1=ad-bc\ge (c+1)(b+1)-bc =b+c+1\ge 1,$$ so equality has to hold everywhere, and $b=c=0$. From $ad-bc=1$ we get that $a=d=1$, hence the matrix in question is the identity matrix. 
The second case is that $c>a$ and $b>d$. Then $c\ge a+1$ and $b\ge d+1$, so that $$-1=bc-ad \ge (a+1)(d+1)-ad =a+d+1\ge 1,$$ contradiction. \end{proof} We can now reinterpret the Calkin-Wilf tree in a new light: it is the directed Cayley graph of $\operatorname{SL}_2(\mathbb{N}_0)$. Let us make this precise. \begin{Definition} A \emph{directed graph} is a quadruple $(V,E,s,t)$, where $V$ and $E$ are sets (of ``vertices'' and ``edges'', respectively) and $s$ and $t$ are maps $E\to V$ (designating ``source'' and ``target''). \end{Definition} When we draw (or imagine) a directed graph, we draw a node for each $v\in V$, and for each $e\in E$ an arrow originating in $s(e)$ and ending in $t(e)$. Forgetting the orientations of the arrows gives a graph in the usual sense, and we say that a directed graph is a (directed) tree if this underlying undirected graph is a tree. \begin{Definition} Let $M$ be a monoid and $A\subseteq M$ a generating set. The \emph{directed Cayley graph} $C(M,A)$ is the directed graph $(V,E,s,t)$ with $V=M$ and $E=M\times A$, such that $s(\mu ,\alpha )=\mu$ and $t(\mu ,\alpha )=\alpha\mu$. \end{Definition} In less formal terms, the vertices are in bijection with $M$, and for each $\mu\in M$ and each $\alpha\in A$ we draw an arrow from $\mu$ to $\alpha\mu$. Note that if $A$ freely generates $M$, then $C(M,A)$ is a directed tree where every arrow points away from the ``root'' $e\in M$. When treating Cayley graphs of groups, there is often a nasty ambiguity involved in choosing a set of generators. As a consequence, one is mainly interested in properties of the Cayley graph that do not depend on the choice of a particular set of generators. Here, however, we are in a much nicer situation. 
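The generation argument in the proof of Proposition \ref{SLZwoNFrei} is constructive: as long as the matrix is not the identity, exactly one of $L^{-1}\gamma$ and $R^{-1}\gamma$ lies in $\operatorname{SL}_2(\mathbb{N}_0)$, and peeling off generators from the left recovers the unique word. A sketch in Python; the function name \emph{decompose} is ours.

```python
def decompose(a, b, c, d):
    """Express the matrix ((a, b), (c, d)) in SL_2(N_0) as the unique word in
    L = ((1,0),(1,1)) and R = ((1,1),(0,1)), by the descent on a+b+c+d used
    in the proof: peel off L if L^{-1} times the matrix has nonnegative
    entries, otherwise peel off R."""
    assert a * d - b * c == 1 and min(a, b, c, d) >= 0
    word = []
    while (a, b, c, d) != (1, 0, 0, 1):
        if c >= a and d >= b:      # L^{-1} * M = ((a, b), (c-a, d-b))
            word.append("L")
            c, d = c - a, d - b
        else:                      # R^{-1} * M = ((a-c, b-d), (c, d))
            word.append("R")
            a, b = a - c, b - d
    return "".join(word)
```

For instance, the matrix with rows $(1,1)$ and $(1,2)$ decomposes as $LR$; the total entry sum $\Sigma$ strictly decreases in every step, so the loop terminates.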
Proposition \ref{SLZwoNFrei} gives us an explicit isomorphism between $\mathrsfs{F}_2$ and $\operatorname{SL}_2(\mathbb{N}_0)$, and from Lemma \ref{AutOfFreeMonoids} we learn that $\{ L,R\}$ is \emph{the only} subset that freely generates $\operatorname{SL}_2(\mathbb{N}_0)$. In other words, if we want a tree, we have no other choice for our generators. \begin{Proposition}\label{BijektionSLZwoQ} Consider $\operatorname{SL}_2(\mathbb{N}_0)$ as a submonoid of the group $\operatorname{PSL}_2(\mathbb{Z} )$, acting on $\mathbb{P}^1(\mathbb{Q} )$ by M\"{o}bius transformations. The orbit map $\gamma\mapsto \gamma (1)$ defines a bijection $\Omega :\operatorname{SL}_2(\mathbb{N}_0)\to\mathbb{Q}_{>0}$. Furthermore, $\Omega$ defines an isomorphism of directed graphs between the directed Cayley tree $C(\operatorname{SL}_2(\mathbb{N}_0),\{ L,R\} )$ and the Calkin-Wilf tree. Here we identify the vertex set of the Calkin-Wilf tree with $\mathbb{Q}_{>0}$, and we orient each of its edges as pointing away from $1$. In view of Propositions \ref{CWProposition} and \ref{SLZwoNFrei}, this is merely a restatement of what we have already proved.\hfill $\square$ \end{Proposition} This has an amusing and simple consequence in terms of Diophantine equations: \begin{Corollary} Let $p,q$ be coprime positive integers. Then there exist unique $a,b,c,d\in\mathbb{N}_0$ with $a+b=p$, $c+d=q$ and $ad-bc=1$. \end{Corollary} \begin{proof} Set $x=p/q$. If $\gamma\in\operatorname{SL}_2(\mathbb{N}_0)$ has entries $a,b,c,d$, then $\gamma (1)=(a+b)/(c+d)$, and $a+b$ and $c+d$ are automatically coprime, since their greatest common divisor divides $a(c+d)-c(a+b)=ad-bc=1$. Hence the system of equations above translates into $\gamma (1)=x$, and the claim follows from Proposition \ref{BijektionSLZwoQ}. \end{proof} \section{Injective Families} \noindent We are looking for generalisations of the Calkin-Wilf tree; we first generalise the original construction in four different respects and then ask ourselves if we get any new examples with comparably nice properties. \begin{enumerate} \item Replace $2$ by any positive integer $n$: consider directed trees in which every node has $n$ (ordered) children. \item Replace $\mathbb{Q}$ by any number field. \item Replace the initial value $1\in\mathbb{P}^1(\mathbb{Q} )$ by any $x_0\in\mathbb{P}^1(K)$.
\item Replace the two M\"{o}bius transformations $L$ and $R$ by $n$ rational maps $f_1,\ldots ,f_n\in K(t)$. \end{enumerate} These data (1)--(4) should fit together in the following way: label the tree from (1) so that its root is labelled $x_0$, and so that if a node is labelled by $x\in\mathbb{P}^1(K)$, then its $n$ children are labelled $f_1(x),\ldots ,f_n(x)$, in this order. Then every element $x\in\mathbb{P}^1(K)$ should appear at most once in the tree, and the set of those that do occur should be some ``simple'' subset of $\mathbb{P}^1(K)$ (in the Calkin-Wilf tree, it would be $\mathbb{Q}_{>0}$ which is arguably quite simple). Of course, what we mean by ``simple'' has to become clear in the course of the discussion. Let us first consider the tree. The description can be made more conceptual by saying that it should be the Cayley tree $C(\mathrsfs{F}(X),X)$, where $X=\{ x_1,\ldots ,x_n\}$ with the $x_i$ pairwise distinct. As above, we set $\mathrsfs{F}_n=\mathrsfs{F}(X)$ and write $C(\mathrsfs{F}_n)$ as shorthand for $C(\mathrsfs{F}(X),X)$. Our rational maps should, of course, be nonconstant; hence they should live in the monoid $\mathrsfs{R}(K)$ which consists of all nonconstant rational maps $f\in K(t)$, with composition $f\circ g$ as multiplication. This may be viewed as a submonoid of the monoid of endomorphisms $\operatorname{End}\mathbb{P}_K^1=\operatorname{Hom}_K(\mathbb{P}^1_K,\mathbb{P}^1_K)$. Here $\mathbb{P}^1_K$ is considered as a $K$-variety. The invertible elements in this monoid are precisely the M\"{o}bius transformations, so that we get a canonical identification $\mathrsfs{R}(K)^{\times }=\operatorname{PGL}_2(K)$. Note that, since $K$ is infinite, we need not distinguish between a rational function as a formal expression and the map $\mathbb{P}^1(K)\to\mathbb{P}^1(K)$ it induces. \begin{Proposition} For every number field $K$, the monoid $\mathrsfs{R}(K)$ is infinitely generated.
\end{Proposition} \begin{proof} First we show that certain groups are not finitely generated. To begin with, an abelian $2$-torsion group is the same as an $\mathbb{F}_2$-vector space; hence such an abelian group is finitely generated if and only if it is finite. For any number field, the group $K^{\times }/(K^{\times })^2$ is infinite\footnote{This can be seen, for instance, as follows: By Dirichlet's density theorem, see \cite[Chapter VII, Theorem 13.2]{Neukirch99}, there are infinitely many prime ideals in the ring of integers $\mathfrak{o}_K$ which are principal ideals. Let these be $\mathfrak{p}_1,\mathfrak{p}_2$ etc., and let $p_k$ be a generator of $\mathfrak{p}_k$. Then the elements $p_1,p_2$ etc. are all distinct modulo $(K^{\times})^2$.}. Hence it is infinitely generated, and therefore the group $\operatorname{PGL}_2(K)$, which surjects onto it via the determinant, must also be infinitely generated. But from this it follows that $\mathrsfs{R}(K)$ cannot be finitely generated. Suppose it were, say generated by $f_1,\ldots ,f_r,g_1,\ldots ,g_s$ with $\deg f_i=1$ and $\deg g_i>1$. Since $\deg (\varphi\circ\psi )=\deg\varphi\cdot\deg\psi$, we see that any composition containing at least one $g_i$ must have degree $>1$. So the monoid (and hence also the group) $\operatorname{PGL}_2(K)$ must be generated by $f_1,\ldots ,f_r$, which we have just seen to be impossible. \end{proof} It is all the more astonishing that we can express all $f\in\mathrsfs{R}(K)$ as compositions of just two admittedly strange maps $\mathbb{P}^1(K)\to\mathbb{P}^1(K)$. \begin{Theorem}[Sierpi\'{n}ski]\label{SierpinskiTheorem} Let $A$ be an infinite set, and let $\mathrsfs{M}(A)$ be the monoid of \emph{all} maps $A\to A$, with composition of maps as monoid composition. Let $X\subset\mathrsfs{M}(A)$ be any countable subset.
Then there exist elements $\varphi ,\psi\in\mathrsfs{M}(A)$ such that $X$ is contained in the submonoid of $\mathrsfs{M}(A)$ generated by $\varphi$ and $\psi$.\hfill $\square$ \end{Theorem} This Theorem was first proved in \cite{Sierpinski35}; shortly afterwards, Banach gave a very elegant proof, see \cite{Banach35}. \begin{Corollary} For any countable field $K$, there exist two maps $\varphi ,\psi$ from $\mathbb{P}^1(K)=K\cup\{\infty \}$ to itself such that \emph{every nonconstant rational map} $\mathbb{P}^1(K)\to\mathbb{P}^1(K)$ can be written as a finite composition involving only $\varphi$ and $\psi$. \end{Corollary} \begin{proof} Apply Theorem \ref{SierpinskiTheorem} to $A=\mathbb{P}^1(K)$ and $X=\mathrsfs{R}(K)$. \end{proof} So having chosen rational maps $f_1,\ldots ,f_n$, we consider the unique morphism of monoids $h:\mathrsfs{F}_n\to\mathrsfs{R}(K)$ with $h(x_i)=f_i$; then our tree is the Cayley tree $C(\mathrsfs{F}_n)$, where the node corresponding to $\gamma\in\mathrsfs{F}_n$ is labelled by $h(\gamma )(x_0)$. This defines an ``evaluation'' map \begin{equation}\label{Evaluation} \Omega :\mathrsfs{F}_n\to\mathbb{P}^1(K),\quad\gamma\mapsto h(\gamma )(x_0). \end{equation} \begin{Definition} Let $K$ be a number field, let $x_0\in\mathbb{P}^1(K)$ and let $f_1,\ldots ,f_n\in\mathrsfs{R}(K)$. The family $(f_1,\ldots ,f_n)\in\mathrsfs{R}(K)^n$ is called \emph{injective at $x_0$} if the map $\Omega$ as in (\ref{Evaluation}) is injective. \end{Definition} Clearly, a family $(f_1,\ldots ,f_n)\in\mathrsfs{R}(K)^n$ is injective at $x_0$ if and only if the $f_i$ generate a free submonoid $\Gamma\subset\mathrsfs{R}(K)$ and the orbit map $\Gamma\to\mathbb{P}^1(K)$ sending $\gamma$ to $\gamma (x_0)$ is injective. By conjugating with a suitable M\"{o}bius transformation, we can always assume that $x_0=1$. Some interesting injective families over $\mathbb{Q}$, all of whose members are M\"{o}bius transformations, have been found by S.H. Chan, see \cite{Chan2011}. 
These give forests with finitely many components rather than single trees. For the reader's convenience, we describe them in our terms. For every integer $k\ge 2$, a family $\mathrsfs{G}_k$ is defined, consisting of the following $2k$ M\"{o}bius transformations: \begin{equation*} \begin{bmatrix} 1&0\\ 2&1 \end{bmatrix},\begin{bmatrix} 2&1\\ 3&2 \end{bmatrix},\ldots ,\begin{bmatrix} k-1&k-2\\ k&k-1 \end{bmatrix}, \begin{bmatrix} k&k-1\\ k&k \end{bmatrix}, \end{equation*} \begin{equation*} \begin{bmatrix} k&k\\ k-1&k \end{bmatrix},\begin{bmatrix} k-1&k\\ k-2&k-1 \end{bmatrix},\ldots , \begin{bmatrix} 2&3\\ 1&2 \end{bmatrix},\begin{bmatrix} 1&2\\ 0&1 \end{bmatrix}. \end{equation*} It is injective at each of the initial values $x_1,\ldots ,x_{2k-1}$ given by \begin{equation*} \frac{1}{2},\frac{2}{3},\ldots ,\frac{k-1}{k},\frac{k}{k},\frac{k}{k-1},\ldots ,\frac{3}{2},\frac{2}{1}. \end{equation*} Furthermore, the orbits $\Gamma (x_1),\ldots ,\Gamma (x_{2k-1})$ are disjoint and their union is $\mathbb{Q}_{>0}$. All this is proved in \cite[Theorem 4]{Chan2011}. There is a similar infinite family of injective families; they enumerate the slightly more complicated set $\mathbb{Q}_{>0}^{\text{even}}$ of all positive rational numbers $\frac{p}{q}$ with $p,q$ coprime and $pq$ even. For every integer $k\ge 1$, let $\mathrsfs{H}_k$ be the family of $2k+1$ M\"{o}bius transformations: \begin{equation*} \begin{bmatrix} 1&0\\ 2&1 \end{bmatrix},\begin{bmatrix} 2&1\\ 3&2 \end{bmatrix},\ldots ,\begin{bmatrix} k&k-1\\ k+1&k \end{bmatrix},\begin{bmatrix} k+1&k\\ k&k+1 \end{bmatrix}, \end{equation*} \begin{equation*} \begin{bmatrix} k&k+1\\ k-1&k \end{bmatrix},\ldots ,\begin{bmatrix} 2&3\\ 1&2 \end{bmatrix}, \begin{bmatrix} 1&2\\ 0&1 \end{bmatrix}. \end{equation*} It is injective at each of the initial values $y_1,\ldots ,y_{2k}$ given as \begin{equation*} \frac{1}{2},\frac{2}{3},\ldots ,\frac{k}{k+1},\frac{k+1}{k},\ldots ,\frac{3}{2},\frac{2}{1}.
\end{equation*} The orbits $\Gamma (y_1),\ldots ,\Gamma (y_{2k})$ are disjoint and their union is $\mathbb{Q}_{>0}^{\text{even}}$. This can be found in \cite[Theorems 2 and 5]{Chan2011}. Theorem 2 in op. cit. is followed by a detailed discussion of the simplest case $k=1$. In analogy with the combinatorial interpretation of the numerators and denominators of the Calkin-Wilf sequence, \cite{Chan2011} also gives combinatorial interpretations of these forests. \section{Heights on $\mathbb{P}^1$ and the Distribution of Points} \noindent Let $K$ be a number field. A \emph{place} of $K$ is an equivalence class of valuations; denote the set of all places of $K$ by $\mathrsfs{P}(K)$. If $\mathfrak{p}$ is a place of $K$, write $K_{\mathfrak{p}}$ for the corresponding completion. For every place $\mathfrak{p}$ we choose a representing valuation $|\cdot |_{\mathfrak{p}}: K\to [0,\infty )$ in the following way: \begin{enumerate} \item If $\mathfrak{p}$ is real, there is a unique isomorphism of fields $K_{\mathfrak{p}}\simeq\mathbb{R}$, and we pull back along this isomorphism the usual absolute value $|x|=\max (x,-x)$ on the reals. \item If $\mathfrak{p}$ is complex, there are two isomorphisms $\tau, \overline{\tau }:K_{\mathfrak{p}}\simeq\mathbb{C}$ of \emph{topological} fields, and we set $|x|_{\mathfrak{p}}=\tau (x)\overline{\tau }(x)$. \item If $\mathfrak{p}$ is non-archimedean, let $q$ be the cardinality of the corresponding residue class field. Let $\pi\in K$ be a uniformising element; we normalise $|\cdot |_{\mathfrak{p}}$ in such a way that $|\pi |_{\mathfrak{p}}=\frac{1}{q}$. \end{enumerate} With these normalisations, we have the famous product formula, see \cite[Chapter III, Proposition 1.3]{Neukirch99}: for any $x\in K^{\times }$, all but a finite number of the $|x|_{\mathfrak{p}}$ are equal to $1$, and \begin{equation} \prod_{\mathfrak{p}\in\mathrsfs{P}(K)}|x|_{\mathfrak{p}}=1.
\end{equation} As a consequence, the following construction gives a well-defined function on $\mathbb{P}^n(K)$ which can be thought of as measuring the arithmetic complexity of a point. \begin{Definition} Let $K$ be a number field of degree $d$ and let $x\in\mathbb{P}^n(K)$. Choose $x_0,\ldots ,x_n\in K$ such that $x=(x_0:\cdots :x_n)$; the \emph{(absolute) height} of $x$ is the real number \begin{equation} H(x)=\sqrt[d]{\prod_{\mathfrak{p}\in\mathrsfs{P}(K)}\max (|x_0|_{\mathfrak{p}},\ldots ,|x_n|_{\mathfrak{p}})}. \end{equation} The \emph{(absolute) logarithmic height} of $x$ is the real number \begin{equation} h(x)=\log H(x). \end{equation} \end{Definition} We always have $H(x)\ge 1$ and therefore $h(x)\ge 0$, with equality if and only if $x$ admits a system of homogeneous coordinates all of which are zero or roots of unity; this is essentially Kronecker's theorem, see \cite[Theorem 3.8]{Silverman07}. The absolute height is defined in such a way that the functions $H\colon\mathbb{P}^n(K)\to [1,\infty )$ for varying $K$ glue together to a function $H\colon\mathbb{P}^n(\overline{\mathbb{Q} })\to [1,\infty )$, and similarly for $h$. For $K=\mathbb{Q}$, there is a description of the height which is much more intuitive and makes computations much easier: if $x\in\mathbb{P}^n(\mathbb{Q} )$, we can write it as $x=(x_0:\cdots :x_n)$ with $x_0,\ldots ,x_n\in\mathbb{Z}$ coprime. Then \begin{equation} H(x)=\max (|x_0|_{\infty },\ldots ,|x_n|_{\infty }). \end{equation} Here, of course, $|\cdot |_{\infty }$ is the usual absolute value on $\mathbb{Z}\subset\mathbb{R}$, i.e. $|a|_{\infty }=\max (a,-a)$. We now examine how $H(f(x))$ relates to $H(x)$, where $f$ is a rational function. First we consider the case of M\"{o}bius transformations. By identifying the matrix entries with coordinates, we can view $\operatorname{GL}_2(K)$ as a subset of $K^4$. This is compatible with the action of $K^{\times }$, on the matrix group by multiplication with scalar matrices, and on the linear space by multiplication with scalars.
So we can view $\operatorname{PGL}_2(K)=\operatorname{GL}_2(K)/K^{\times }$ as a subset of $\mathbb{P}^3(K)$ and define the height of an element of $\operatorname{PGL}_2(K)$ as the height of the corresponding point in $\mathbb{P}^3(K)$. By the simple description of heights for $K=\mathbb{Q}$, we get an equally simple description of the height of an element $\gamma\in\operatorname{PGL}_2(\mathbb{Q} )$: represent $\gamma$ by a matrix \begin{equation*} \begin{pmatrix} a&b\\ c&d \end{pmatrix}\in\operatorname{GL}_2(\mathbb{Q} ) \end{equation*} with $a,b,c,d\in\mathbb{Z}$ having greatest common divisor $1$. Then $$H(\gamma )=H((a:b:c:d))=\max (|a|_{\infty },|b|_{\infty },|c|_{\infty },|d|_{\infty }).$$ \begin{Lemma}\label{HGammaGleichHGammaInvers} Let $K$ be a number field and $\gamma\in\operatorname{PGL}_2( K)$. Then $H(\gamma )=H(\gamma^{-1})$. \end{Lemma} \begin{proof} If $\gamma\in\operatorname{PGL}_2(K)$ is represented by the matrix $A$, then $\gamma^{-1}$ is represented by the matrix $A^{-1}=(\det A)^{-1}A^{\sharp }$, where the matrix $A^{\sharp }$ is obtained by permuting the entries of $A$ in a well-known fashion and multiplying two of them with $-1$. But by the definition of $\operatorname{PGL}_2$, we see that $\gamma^{-1}$ is also represented by $A^{\sharp }$, whence $H(\gamma )=H(\gamma^{-1})$. \end{proof} \begin{Proposition}\label{DistortionOfHeightByMoebius} Let $K$ be a number field of degree $d$, let $x\in\mathbb{P}^1(K)$ and $\gamma\in\operatorname{PGL}_2(K)$. Then $$\frac{1}{2H(\gamma )}H(x)\le H(\gamma (x))\le 2H(\gamma )H(x).$$ \end{Proposition} \begin{proof} We only need to show the second inequality; the first will follow by replacing $\gamma$ by $\gamma^{-1}$ and $x$ by $\gamma (x)$, using Lemma \ref{HGammaGleichHGammaInvers}. So choose a representative matrix $$\begin{pmatrix} a&b\\ c&d \end{pmatrix}\in\operatorname{GL}_2(K)$$ for $\gamma$. Write $x=(x_0:x_1)$.
Then for any place $\mathfrak{p}$ of $K$ we get $$\max (|ax_0+bx_1|_{\mathfrak{p}},|cx_0+dx_1|_{\mathfrak{p}})\le t_{\mathfrak{p}}\cdot \max (|a|_{\mathfrak{p}},|b|_{\mathfrak{p}},|c|_{\mathfrak{p}},|d|_{\mathfrak{p}})\cdot\max (|x_0|_{\mathfrak{p}},|x_1|_{\mathfrak{p}})$$ by the triangle inequality; here $t_{\mathfrak{p}}$ is $1$ if $\mathfrak{p}$ is non-archimedean, $2$ if $\mathfrak{p}$ is real and $4$ if $\mathfrak{p}$ is complex. Taking the product over all $\mathfrak{p}$, noting that $\prod_{\mathfrak{p}}t_{\mathfrak{p}}=2^d$ since the local degrees of the archimedean places sum to $d$, and then taking $d$-th roots yields the desired result. \end{proof} Thus M\"{o}bius transformations can only change the height by a multiplicative factor. With some more effort, one obtains the following special case of \cite[Theorem 3.11]{Silverman07}: \begin{Theorem}\label{TheoremSilverman} Let $K$ be a number field and $f\in\mathrsfs{R}(K)$ a rational map of degree $d$. Then there exist constants $c_1,c_2>0$ such that for all $x\in\mathbb{P}^1(K)$, \begin{equation*} c_1\cdot H(x)^d\le H(f(x))\le c_2\cdot H(x)^d. \end{equation*} \end{Theorem} We now turn to counting points of bounded height in a fixed field. \begin{Theorem}\label{AsymptoticsForQ} We have the following asymptotics as $N\to\infty $: \begin{equation*} \operatorname{card}\{ x\in\mathbb{P}^1(\mathbb{Q} )\mid H(x)\le N\} =\frac{12}{\pi^2}N^2+O(N\log N). \end{equation*} \end{Theorem} \begin{proof} This is classical and can, up to reformulation into more elementary language, be found in \cite{Apostol76}, in the proof of Theorem 3.9. \end{proof} For other number fields, there is a similar estimate: \begin{Theorem}[Schanuel]\label{AsymptoticsForK} Let $K$ be a number field of degree $d_K>1$.
Then for $N\to\infty$ we have \begin{equation*} \operatorname{card}\{ x\in\mathbb{P}^1(K)\mid H(x)\le N\}=c_K\cdot N^{2d_K}+O(N^{2d_K-1}), \end{equation*} with the constant \begin{equation*} c_K=\frac{2^{2r_1+r_2-1}(2\pi )^{r_2}}{\sqrt{|\Delta_K|}}\cdot\frac{\operatorname{Res}_{s=1}\zeta_K(s)}{\zeta_K(2)}=\frac{h_K\cdot R_K\cdot 2^{3r_1+r_2-1}\cdot (2\pi )^{2r_2}}{w_K\cdot |\Delta_K|\cdot \zeta_K(2)}. \end{equation*} Here, as usual, $r_1$ is the number of real places, $r_2$ the number of complex places, $\Delta_K$ the discriminant, $\zeta_K$ the Dedekind zeta function, $h_K$ the class number, $R_K$ the regulator and $w_K$ the number of roots of unity in $K$. \end{Theorem} \begin{proof} This is a special case of the main result in \cite{Schanuel79}; the equality of the two expressions for $c_K$ follows from the class number formula. Note that Schanuel uses a different normalisation for the height, whence the different exponent. \end{proof} Note that for $K=\mathbb{Q}$, the formula for $c_K$ gives $12/\pi^2$, as above; the only reason that we have to treat this case separately is that the error term has a different shape. And, of course, Theorem \ref{AsymptoticsForQ} is much more elementary than Theorem \ref{AsymptoticsForK}. The notion of height helps us to measure the ``size'' of a subset $A\subseteq\mathbb{P}^1(K)$. \begin{Definition} Let $K$ be a number field and $A\subseteq\mathbb{P}^1(K)$. Its \emph{lower height density} is the number \begin{equation*} \delta_h^{-}(A)=\liminf_{N\to\infty }\frac{\operatorname{card}\{ x\in A\mid H(x)\le N\} }{\operatorname{card}\{ x\in\mathbb{P}^1(K)\mid H(x)\le N\} }\in [0,1]; \end{equation*} its \emph{upper height density} is the number \begin{equation*} \delta_h^{+}(A)=\limsup_{N\to\infty }\frac{\operatorname{card}\{ x\in A\mid H(x)\le N\} }{\operatorname{card}\{ x\in\mathbb{P}^1(K)\mid H(x)\le N\} }\in [0,1]. 
\end{equation*} If these two are equal, we say that ``$A$ has a height density'' and call the quantity $\delta_h(A)=\delta_h^-(A)=\delta_h^+(A)$ the height density of $A$. \end{Definition} By Theorems \ref{AsymptoticsForQ} and \ref{AsymptoticsForK}, we see that $A$ has a height density if and only if the limit \begin{equation*} \lim_{N\to\infty }\frac{\operatorname{card}\{ x\in A\mid H(x)\le N\} }{N^{2d_K}} \end{equation*} exists, and the height density is then this limit divided by the constant $c_K$. We now give some examples of height densities. \begin{enumerate} \item If $K$ is given as a subfield of $\mathbb{R}$, then the set of all positive $x\in K$ has height density $\frac{1}{2}$. This is because $H(x)=H(-x)$. \item If $K$ is a number field of degree $d$, $\gamma\in\operatorname{PGL}_2(K)$ is a M\"{o}bius transformation and $A\subseteq\mathbb{P}^1(K)$ is any subset, then $$\delta_h^-(\gamma (A))\ge\frac{\delta_h^-(A)}{(2H(\gamma ))^{2d}}\text{ and }\delta_h^+(\gamma (A))\le (2H(\gamma ))^{2d}\delta_h^+(A).$$ This follows from Proposition \ref{DistortionOfHeightByMoebius} together with the observation that the number of points of height below $N$ grows like $N^{2d}$. In particular if $A$ has nonzero lower height density, then so does $\gamma (A)$. \item Combining the two previous examples, we see: if $K\subset\mathbb{R}$ is a number field and $a<b$, then the subset $K\cap [a,b]\subset\mathbb{P}^1(K)$ has positive lower height density, since there exists a M\"{o}bius transformation in $\operatorname{PGL}_2(K)$ which maps $[0,\infty )$ into $[a,b]$. \item The set $\mathbb{Q}_{>0}^{\text{even}}$ introduced before has positive height density in $\mathbb{P}^1(\mathbb{Q} )$. This can be seen as follows. Let us estimate the number of pairs $(p,q)\in\mathbb{N}^2$ with $p,q$ coprime, $q\le p\le N$ and $p$ even. If we can show that this number is bounded below by some positive constant times $N^2$, we are done. 
Now this number is equal to \begin{equation*} \sum_{\substack{1<p\le N\\ p\text{ even}}}\varphi (p)\ge\sum_{\substack{1<p\le N\\ p\text{ even}}}\varphi\left(\frac{p}{2}\right) =\sum_{n=1}^{\lfloor N/2\rfloor }\varphi (n)=\frac{3}{\pi^2}\cdot\left(\frac{N}{2}\right)^2+O(N\log N). \end{equation*} The first inequality is derived from the elementary inequality $\varphi (2n)\ge \varphi (n)$, and the final equality follows from \cite[Theorem 3.7]{Apostol76}. \end{enumerate} \section{Constraints on Injective Families} In this final section we shall show that if an injective family consists only of maps of degree at least two, then its image in $\mathbb{P}^1(K)$ must have height density zero. So to get started, assume that $K$ is a number field and $(f_1,\ldots ,f_n)\in\mathrsfs{R}(K)^n$ is an injective family for some initial value $x_0\in\mathbb{P}^1(K)$, where $\deg f_i\ge 2$ for all $i$. Denote by $\Gamma$ the free monoid generated by the $f_i$ in $\mathrsfs{R}(K)$, and let $\lVert\gamma\rVert$ be the word norm on $\Gamma$. That is, for $\gamma =f_{i_1}f_{i_2}\cdots f_{i_r}$ set $\lVert\gamma\rVert =r$. We prefer to work with logarithmic heights in this section. By Theorem \ref{TheoremSilverman}, we find a constant $c>0$ such that for all $1\le i\le n$ and all $x\in\mathbb{P}^1(K)$, the inequality \begin{equation}\label{WachstumAusserhalbS} h(f_i(x))\ge 2h(x)-c \end{equation} holds. By replacing $c$ with a larger constant if necessary, we may also assume that $$c\ge 1.$$ Hence $\Gamma$ ``explodes'' heights outside the \emph{exceptional set} \begin{equation*} S=\{ x\in\mathbb{P}^1(K)\mid h(x)\le 2c\} . \end{equation*} By Theorem \ref{AsymptoticsForQ} or \ref{AsymptoticsForK}, depending on whether $K=\mathbb{Q}$ or not, this is a finite set. \begin{Lemma}\label{LemmaUeberWachstumAusserhalbS} Under these assumptions, every element of $\Gamma$ takes the complement of $S$ to itself. 
In formul\ae : \begin{equation} \Gamma (\mathbb{P}^1(K)\smallsetminus S)\subseteq \mathbb{P}^1(K)\smallsetminus S. \end{equation} Furthermore, for any $x\in\mathbb{P}^1(K)\smallsetminus S$ and $\gamma\in\Gamma$ we have the inequality \begin{equation} h(\gamma (x))\ge\left(\frac{3}{2}\right)^{\lVert\gamma\rVert }\cdot h(x). \end{equation} \end{Lemma} \begin{proof} Let $x$ be in the complement of $S$, i.e. $h(x)>2c$. Then from (\ref{WachstumAusserhalbS}), we obtain $$h(f_i(x))\ge 2h(x)-c>4c-c=3c>2c.$$ In particular, $f_i(x)\notin S$. Since the $f_i$ generate $\Gamma$, this shows the first part. For the second inequality, it suffices by induction on $\lVert\gamma\rVert$ (using the first part) to treat the case $\gamma =f_i$, that is, $\lVert\gamma\rVert =1$. But using that $c<\frac{1}{2}h(x)$, we find that $$h(f_i(x))\ge 2h(x)-c>2h(x)-\frac{1}{2}h(x)=\frac{3}{2}h(x),$$ which is just what is to be proved for $\lVert\gamma\rVert =1$. \end{proof} If one enlarges $S$ suitably, the estimate can of course be sharpened in such a way that the constant $\frac{3}{2}$ can be replaced by any $2-\varepsilon$ with $\varepsilon >0$. Because the orbit map $\gamma\mapsto\gamma (x_0)$ is injective and $S$ is finite, only finitely many $\gamma$ can hit $S$. So there exists some $n_0\in\mathbb{N}$ with the property that whenever $\lVert\gamma\rVert \ge n_0$, then $\gamma (x_0)\notin S$ (and consequently $h(\gamma (x_0))>2c$). \begin{Lemma}\label{LemmaOnExponentialGrowth} Let $\gamma\in\Gamma$ with $\lVert\gamma\rVert >n_0$. Then \begin{equation*} h(\gamma (x_0))>\left(\frac{3}{2}\right)^{\lVert\gamma\rVert -n_0}. \end{equation*} \end{Lemma} \begin{proof} Set $N=\lVert\gamma\rVert -n_0$. Write $\gamma =\gamma_1\gamma_2$ with $\lVert\gamma_1\rVert =N$ and $\lVert\gamma_2\rVert =n_0$. Then \begin{equation*} h(\gamma (x_0))=h(\gamma_1(\gamma_2(x_0)))\ge\left(\frac{3}{2}\right)^{\lVert\gamma_1\rVert }\cdot h(\gamma_2(x_0))>\left(\frac{3}{2}\right)^N\cdot 2c>\left(\frac{3}{2}\right)^N. 
\end{equation*} The ``$\ge $'' sign is obtained from Lemma \ref{LemmaUeberWachstumAusserhalbS}, setting $x=\gamma_2(x_0)\notin S$ (by assumption on $\gamma_2$). The first ``$>$'' is justified again by the observation that $\gamma_2(x_0)\notin S$ and the definition of $S$. The second ``$>$'' sign finally is justified by $c\ge 1$ (remember we made it that way). \end{proof} \begin{Proposition}\label{PropositionOnLogarithmicGrowth} Under the above assumptions, there exist constants $c'>0$ and $k\in\mathbb{N}$ such that for all sufficiently large positive reals $B$ one has \begin{equation} \operatorname{card}\{\gamma\in\Gamma\mid h(\gamma (x_0))\le B\}\le c'\cdot B^k. \end{equation} \end{Proposition} \begin{proof} Since $\Gamma$ is free on $n$ generators, we get that \begin{equation*} \operatorname{card}\{\gamma\in\Gamma\mid\lVert\gamma\rVert\le C\}=\sum_{\nu =0}^{\lfloor C\rfloor } n^{\nu }\le n^{C+1} \end{equation*} if $n\ge 2$; for $n=1$ the even simpler estimate $\lfloor C\rfloor +1$ will also do the job, so we assume from now on that $n\ge 2$. By Lemma \ref{LemmaOnExponentialGrowth}, we find \begin{equation*} \begin{split} \operatorname{card}\{\gamma\in\Gamma\mid h(\gamma (x_0))\le B\} &\le\operatorname{card}\{\gamma\in\Gamma\mid\left(\frac{3}{2}\right)^{\lVert\gamma\rVert -n_0}\le B\}\\ &=\operatorname{card}\{\gamma\in\Gamma\mid (\lVert\gamma\rVert -n_0)\log\frac{3}{2}\le\log B\}\\ &=\operatorname{card}\{\gamma\in\Gamma\mid \lVert\gamma\rVert\le n_0+\frac{\log B}{\log\frac{3}{2}}\}\\ &\le n^{n_0+\log B /\log\frac{3}{2}+1}=n^{n_0+1}\cdot B^{\log n/\log\frac{3}{2}}, \end{split} \end{equation*} so that setting $c'=n^{n_0+1}$ and $k=\lceil\log n/\log\frac{3}{2}\rceil$ will yield the desired estimate. \end{proof} \begin{Theorem}\label{LastTheorem} Let $K$ be a number field and $(f_1,\ldots ,f_n)\in\mathrsfs{R}(K)^n$ an injective family for the initial value $x_0\in\mathbb{P}^1(K)$. 
Assume that $\deg f_i\ge 2$ for all $1\le i\le n$. Let $\Gamma\subset\mathrsfs{R}(K)$ be the submonoid generated by the $f_i$. Then the image $\Gamma (x_0)\subseteq\mathbb{P}^1(K)$ has height density zero. \end{Theorem} \begin{proof} We translate the previous considerations back from statements about logarithmic heights into statements about heights. Since $H(x)\le N$ if and only if $h(x)\le\log N$, we see from Proposition \ref{PropositionOnLogarithmicGrowth} that there exists a positive integer $k$ with \begin{equation*} \operatorname{card}\{ x\in\Gamma (x_0)\mid H(x)\le N\} =O((\log N)^k). \end{equation*} Comparing this with Theorems \ref{AsymptoticsForQ} and \ref{AsymptoticsForK}, we see that $\Gamma (x_0)$ must have height density zero. \end{proof} We have seen before that in the case $K=\mathbb{Q}$, for every $n\ge 2$ there exists an injective family whose orbit has positive height density and which consists of $n$ M\"{o}bius transformations. It is easy to see that we cannot get positive height density for a family consisting of just one M\"{o}bius transformation. Note, however, that Newman's map \begin{equation*} x\mapsto\frac{1}{1+\lfloor x\rfloor -\{ x\} }, \end{equation*} being not terribly far apart from a M\"{o}bius transformation, gives an ``injective family'' with just one element, whose orbit $\mathbb{Q}_{>0}$ has height density $\frac{1}{2}$. The last theorem tells us that we cannot get positive height density if we only work with maps of higher degree. So there remain two open questions: what about the mixed case, i.e. injective families consisting of both M\"{o}bius transformations and higher degree maps, and what about M\"{o}bius transformations in general number fields? We conjecture that the condition ``$\deg f_i\ge 2$ for all $i$'' in Theorem \ref{LastTheorem} can be relaxed to the weaker condition ``$\deg f_i\ge 2$ for at least one $i$''. 
In other words, we conjecture that if the orbit of an injective family has positive upper height density, then the family must consist entirely of M\"{o}bius transformations. Note that then the injectivity of the family would be a crucial condition since otherwise we could just add some higher degree maps to the Calkin-Wilf family. As to the second question, there might be interesting trees similar to the Calkin-Wilf tree already over quadratic number fields.
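As a concrete illustration of the one-element ``injective family'' above (an illustration outside the paper, in Python rather than the paper's notation), Newman's map can be iterated with exact rational arithmetic; the sketch below reproduces the first terms of the Calkin-Wilf enumeration of $\mathbb{Q}_{>0}$.

```python
from fractions import Fraction
from math import floor

def newman(x: Fraction) -> Fraction:
    """Newman's map x -> 1/(1 + floor(x) - {x}) = 1/(1 + 2*floor(x) - x)."""
    return 1 / (1 + 2 * floor(x) - x)

def calkin_wilf(n: int) -> list:
    """First n terms of the orbit of 1 under Newman's map."""
    seq = [Fraction(1)]
    for _ in range(n - 1):
        seq.append(newman(seq[-1]))
    return seq

# the orbit begins 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ...
```

Since `Fraction` arithmetic is exact, the injectivity of the orbit can be checked directly as far as one cares to iterate.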
https://arxiv.org/abs/1403.0053
Bootstrapping and Askey-Wilson polynomials
The mixed moments for the Askey-Wilson polynomials are found using a bootstrapping method and connection coefficients. A similar bootstrapping idea on generating functions gives a new Askey-Wilson generating function. An important special case of this hierarchy is a polynomial which satisfies a four term recurrence, and its combinatorics is studied.
\section{Introduction} The Askey-Wilson polynomials \cite{AskeyWilson} $p_n(x;a,b,c,d|q)$ are orthogonal polynomials in $x$ which depend upon five parameters: $a$, $b$, $c$, $d$ and $q$. In \cite[\S2]{BI} Berg and Ismail use a bootstrapping method to prove orthogonality of Askey-Wilson polynomials by starting with the orthogonality of the $a=b=c=d=0$ case, the continuous $q$-Hermite polynomials, and successively proving more general orthogonality relations, adding parameters along the way. In this paper we implement this idea in two different ways. First, using successive connection coefficients for two sets of orthogonal polynomials, we will find explicit formulas for generalized moments of Askey-Wilson polynomials, see Theorem~\ref{thm:xnP}. This method also gives a heuristic for a relation between the two measures of the two polynomial sets, see Remark~\ref{remark:heur}, which is correct for the Askey-Wilson hierarchy. Using this idea we give a new generating function (Theorem~\ref{thm:dual_q_Hahn}) for Askey-Wilson polynomials when $d=0.$ The second approach is to assume the two sets of polynomials have generating functions which are closely related, up to a $q$-exponential factor. We prove in Theorem~\ref{thm:main} that if one set is an orthogonal set, the second set has a recurrence relation of predictable order, which may be greater than three. We give several examples using the Askey-Wilson hierarchy. Finally we consider a more detailed example of the second approach, using a generating function to define a set of polynomials called the discrete big $q$-Hermite polynomials. These polynomials satisfy a 4-term recurrence relation. We give the moments for the pair of measures for their orthogonality relations. Some of the combinatorics for these polynomials is given in \S~\ref{sec:comb-discr-big}. We also record in Proposition~\ref{prop:addthm} a possible $q$-analogue of the Hermite polynomial addition theorem. 
We shall use basic hypergeometric notation, which is in Gasper-Rahman \cite{GR} and Ismail \cite{Is}. \section{Askey-Wilson polynomials and connection coefficients} \label{sec:comp-line-funct} The connection coefficients are defined as the constants obtained when one expands one set of polynomials in terms of another set of polynomials. For the Askey-Wilson polynomials \cite[15.2.5,~p.~383]{Is} \[ p_n(x;a,b,c,d|q)= \frac{(ab,ac,ad)_n}{a^n} \hyper43{q^{-n},abcdq^{n-1},ae^{i\theta},ae^{-i\theta}} {ab,ac,ad}{q;q}, \quad x=\cos\theta \] we shall use the connection coefficients obtained by successively adding a parameter \[ (a,b,c,d)=(0,0,0,0)\rightarrow (a,0,0,0)\rightarrow (a,b,0,0)\rightarrow (a,b,c,0) \rightarrow (a,b,c,d). \] Using a simple general result on orthogonal polynomials, we derive an almost immediate proof of an explicit formula for the mixed moments of Askey-Wilson polynomials. First we set the notation for an orthogonal polynomial set $p_n(x).$ Let $\LL_p$ be the linear functional on polynomials for which orthogonality holds \[ \LL_p(p_m(x)p_n (x)) =h_n \delta_{mn}, \quad 0\le m,n. \] \begin{defn} The mixed moments of $\LL_p$ are $\LL_p(x^np_m(x)),\quad 0\le m,n.$ \end{defn} The main tool is the following Proposition, which allows the computation of mixed moments of one set of orthogonal polynomials from another set if the connection coefficients are known. \begin{prop} \label{prop:bootstrap} Let $R_n(x)$ and $S_n(x)$ be orthogonal polynomials with linear functionals $\LL_R$ and $\LL_S$, respectively, such that $\LL_R(1)=\LL_S(1) = 1$. Suppose that the connection coefficients are \begin{equation} \label{eq:conncoef1} R_k(x) = \sum_{i=0}^k c_{k,i} S_i(x). \end{equation} Then \[ \LL_S(x^n S_m(x)) = \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} c_{k,m} \LL_S(S_m(x)^2). 
\] \end{prop} \begin{proof} If we multiply both sides of \eqref{eq:conncoef1} by $S_m(x)$ and apply $\LL_S$, we have \[ \LL_S(R_k(x)S_m(x)) = c_{k,m} \LL_S(S_m(x)^2). \] Then by expanding $x^n$ in terms of $R_k(x)$ \[ x^n=\sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} R_k(x) \] we find \begin{align*} \LL_S(x^n S_m(x)) = \LL_S\left( \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} R_k(x) S_m(x)\right) = \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} c_{k,m} \LL_S(S_m(x)^2). \end{align*} \end{proof} \begin{remark} \label{remark:heur} One may also use the idea of Proposition~\ref{prop:bootstrap} to give a heuristic for representing measures of the linear functionals. Putting $m=0,$ if representing measures were absolutely continuous, say $w_R(x)dx$ for $R_n(x)$, and $w_S(x)dx$ for $S_n(x)$ then one might guess that \[ w_S(x) = w_R(x) \sum_{k=0}^\infty \frac{R_k(x)}{\LL_R(R_k(x)^2)} c_{k,0}. \] \end{remark} For the rest of this section we will compute the mixed moments $\LL_p(x^n p_m(x))$ for the Askey-Wilson polynomials using Proposition~\ref{prop:bootstrap} starting from the $q$-Hermite polynomials. Let $\LL_{a,b,c,d}$ be the linear functional for $p_n(x;a,b,c,d|q)$ satisfying $\LL_{a,b,c,d}(1)=1$. Then $\LL=\LL_{0,0,0,0}$, $\LL_{a}=\LL_{a,0,0,0}$, $\LL_{a,b}=\LL_{a,b,0,0}$, and $\LL_{a,b,c}=\LL_{a,b,c,0}$ are the linear functionals for these polynomials: $q$-Hermite, $H_n(x|q)=p_n(x;0,0,0,0|q)$, the big $q$-Hermite $H_n(x;a|q)=p_n(x;a,0,0,0|q)$, the Al-Salam-Chihara $Q_n(x;a,b|q)=p_n(x;a,b,0,0|q)$, and the dual $q$-Hahn $p_n(x;a,b,c|q)=p_n(x;a,b,c,0|q)$. 
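Proposition~\ref{prop:bootstrap} is elementary enough to check mechanically. The sketch below (a SymPy illustration, not part of the paper; the rational test values $q=1/2$, $a=1/3$ are arbitrary) builds the continuous $q$-Hermite and continuous big $q$-Hermite families from their standard three-term recurrences, realizes each linear functional as the coefficient of $p_0$ in the corresponding expansion, and compares both sides of the proposition for small $n$ and $m$.

```python
import sympy as sp

x = sp.symbols('x')
q, a = sp.Rational(1, 2), sp.Rational(1, 3)   # arbitrary rational test values
M = 8                                          # highest degree needed below

def family(bfun, lfun):
    """Polynomials from the recurrence p_{n+1} = (2x - b_n) p_n - lam_n p_{n-1}."""
    ps = [sp.Integer(1), sp.expand(2*x - bfun(0))]
    for n in range(1, M):
        ps.append(sp.expand((2*x - bfun(n))*ps[n] - lfun(n)*ps[n-1]))
    return ps

# continuous q-Hermite H_n(x|q) and continuous big q-Hermite H_n(x;a|q)
R = family(lambda n: 0,        lambda n: 1 - q**n)
S = family(lambda n: a * q**n, lambda n: 1 - q**n)

def expand_in(f, basis):
    """Coefficients of the polynomial f in the given graded basis."""
    f, out = sp.Poly(f, x), [sp.Integer(0)] * (M + 1)
    while not f.is_zero:
        d = f.degree()
        out[d] = f.LC() / sp.Poly(basis[d], x).LC()
        f = f - sp.Poly(sp.expand(out[d] * basis[d]), x)
    return out

# each linear functional sends a polynomial to its coefficient of p_0,
# so that L(1) = 1 and L(p_k) = 0 for k >= 1
def LR(f): return expand_in(f, R)[0]
def LS(f): return expand_in(f, S)[0]

def bootstrap(n, m):
    """Right-hand side of the proposition for the pair (R, S)."""
    total = sp.Integer(0)
    for k in range(m, n + 1):
        c_km = expand_in(R[k], S)[m]          # connection coefficient c_{k,m}
        total += LR(x**n * R[k]) / LR(R[k]**2) * c_km * LS(S[m]**2)
    return sp.simplify(total)
```

Since the proposition holds for any pair of orthogonal families with normalized functionals, agreement here also quietly confirms the two recurrences used.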
The $L^2$-norms are given by \cite[15.2.4~p.383]{Is} \begin{align} \label{eq:orth_hermit} \LL(H_n(x|q) H_m(x|q)) &= (q)_n\delta_{mn},\\ \label{eq:orth_bighermit} \LL_a(H_n(x;a|q) H_m(x;a|q)) &= (q)_n\delta_{mn},\\ \label{eq:orth_ASC} \LL_{a,b}(Q_n(x;a,b|q) Q_m(x;a,b|q)) &= (q,ab)_n\delta_{mn},\\ \label{eq:orth_dual_q_Hahn} \LL_{a,b,c}(p_n(x;a,b,c|q) p_m(x;a,b,c|q)) &= (q,ab,ac,bc)_n\delta_{mn},\\ \label{eq:orth_AW} \LL_{a,b,c,d}(p_n(x;a,b,c,d|q) p_m(x;a,b,c,d|q)) &= \frac{(q,ab,ac,ad,bc,bd,cd,abcdq^{n-1})_n}{(abcd)_{2n}}\delta_{mn}. \end{align} To apply Proposition~\ref{prop:bootstrap}, we need the following connection coefficient formula for the Askey-Wilson polynomials given in \cite[(6.4)]{AskeyWilson} \begin{equation} \label{eq:cc} \frac{p_n(x;A,b,c,d|q)}{(q,bc,bd,cd)_n} =\sum_{k=0}^n \frac{p_k(x;a,b,c,d|q)}{(q,bc,bd,cd)_k} \times\frac{a^{n-k}(A/a)_{n-k}(A bcdq^{n-1})_k} {(abcdq^{k-1})_k (q,abcdq^{2k})_{n-k}}. \end{equation} The following four identities are special cases of \eqref{eq:cc}: \begin{align} \label{eq:cc0} H_n(x|q) &= \sum_{k=0}^n \qbinom{n}{k} H_k(x;a|q) a^{n-k},\\ \label{eq:cca} H_n(x;a|q) &=\sum_{k=0}^n \qbinom nk Q_k(x;a,b|q) b^{n-k},\\ \label{eq:ccab} Q_n(x;a,b|q) &=(ab)_n \sum_{k=0}^n \qbinom nk \frac{p_k(x;a,b,c|q)}{(ab)_k} c^{n-k},\\ \frac{p_n(x;b,c,d|q)}{(q,bc,bd,cd)_n} \label{eq:ccabc} &=\sum_{k=0}^n \frac{p_k(x;a,b,c,d|q)}{(q,bc,bd,cd)_k} \cdot\frac{a^{n-k}} {(abcdq^{k-1})_k (q,abcdq^{2k})_{n-k}}. \end{align} For the initial mixed moment we need the following result proved independently by \JV. \cite[Proposition 5.1]{JV_rook} and Cigler \cite[Proposition 15]{Cigler2011} \[ \LL(x^n H_m(x|q)) = \frac{(q)_m}{2^n} \op(n,m), \] where \[ \overline P(n,m) = \sum_{k=m}^n \left( \binom{n}{\frac{n-k}2} - \binom{n}{\frac{n-k}2-1}\right) (-1)^{(k-m)/2} q^{\binom{(k-m)/2+1}2} \qbinom{\frac{k+m}2}{\frac{k-m}2}. \] We shall use the convention $\binom nk = \qbinom nk = 0$ if $k<0$, $k>n$, or $k$ is not an integer. 
Thus $\overline P(n,m)=0$ if $n\not\equiv m \mod 2$. \begin{thm} \label{thm:xnP} We have \begin{align} \label{eq:big} \LL_a(x^n H_m(x;a|q)) &= \frac{(q)_m}{2^n}\sum_{\alpha\ge0} \op(n,\alpha+m) \qbinom{\alpha+m}{m} a^{\alpha}, \\ \label{eq:ASC} \LL_{a,b}(x^n Q_m(x;a,b|q)) &= \frac{(q,ab)_m}{2^n} \sum_{\alpha,\beta\ge0} \op(n,\alpha+\beta+m) \qbinom{\alpha+\beta+m}{\alpha,\beta,m} a^{\alpha} b^{\beta}, \\ \label{eq:dualqHahn} \LL_{a,b,c}(x^n p_m(x;a,b,c|q)) &= \frac{(q,ac,bc)_m}{2^n} \sum_{\alpha,\beta,\gamma\ge0} \op(n,\alpha+\beta+\gamma+m) \qbinom{\alpha+\beta+\gamma+m}{\alpha,\beta,\gamma,m} \\ \notag &\quad \times a^{\alpha} b^{\beta} c^{\gamma} (ab)_{\gamma+m},\\ \label{eq:AW} \LL_{a,b,c,d}(x^n p_m(x;a,b,c,d|q)) &= \frac{1}{2^n}\sum_{\abcd,\ge0}\abcdpower \op(n,\abcd+) \qbinom{\abcd+}{\abcd,} \\ \notag &\quad \times \frac{(bd)_{\alpha}(cd)_{\alpha}(bc)_{\alpha+\delta}}{(abcd)_\alpha} \cdot \frac{(ab,ac,ad)_m (q^{\alpha};q^{-1})_m}{a^m (abcdq^{\alpha})_m}. \end{align} \end{thm} \begin{proof} By \eqref{eq:cc0}, Proposition~\ref{prop:bootstrap} and \eqref{eq:orth_hermit}, \begin{align*} \LL_a(x^n H_m(x;a|q)) &= \sum_{k=0}^n \frac{\LL(x^n H_k(x|q))}{\LL(H_k(x)^2)} \qbinom{k}{m} a^{k-m}\LL_a(H_m(x;a|q)^2)\\ &= \frac{(q)_m}{2^n}\sum_{k=0}^n \op(n,k) \qbinom{k}{m} a^{k-m}. \end{align*} Equations~\eqref{eq:ASC}, \eqref{eq:dualqHahn}, and \eqref{eq:AW} can be proved similarly using the connection coefficient formulas \eqref{eq:cca}, \eqref{eq:ccab}, and \eqref{eq:ccabc}. \end{proof} Letting $m=0$ in \eqref{eq:AW} we obtain a formula for the $n$th moment of the Askey-Wilson polynomials. \begin{cor} \label{cor:AWmoment} We have \begin{equation} \label{eq:AWmoment2} \LL_{a,b,c,d}(x^n) = \frac{1}{2^n}\sum_{\abcd,\ge0}\abcdpower \op(n,\abcd+) \qbinom{\abcd+}{\abcd,} \frac{(bd)_{\alpha}(cd)_{\alpha}(bc)_{\alpha+\delta}}{(abcd)_\alpha}. 
\end{equation} \end{cor} In \cite{KimStanton} the authors found a slightly different formula \[ \LL_{a,b,c,d}(x^n)= \frac{1}{2^n}\sum_{\abcd,\ge0}\abcdpower \op(n,\abcd+) \qbinom{\abcd+}{\abcd,} \frac{(ad)_{\beta+\gamma}(ac)_{\beta}(bd)_{\gamma}}{(abcd)_{\beta+\gamma}}, \] which can be rewritten using the symmetry in $a,b,c,d$ as \begin{equation} \label{eq:AWmoment1} \LL_{a,b,c,d}(x^n)= \frac{1}{2^n}\sum_{\abcd,\ge0}\abcdpower \op(n,\abcd+) \qbinom{\abcd+}{\abcd,} \frac{(bc)_{\alpha+\delta}(bd)_{\alpha}(ac)_{\delta}}{(abcd)_{\alpha+\delta}}. \end{equation} One can obtain \eqref{eq:AWmoment1} from \eqref{eq:AWmoment2} by applying the $_3\phi_1$-transformation \cite[(III.8)]{GR} to the $\alpha$-sum after fixing $\gamma$, $\delta$, and $N=\alpha+\beta$. We next check if the heuristic in Remark~\ref{remark:heur} leads to correct results in these cases. The absolutely continuous Askey-Wilson measure $w(x;a,b,c,d|q)$ with total mass $1$ for $0<q<1$, $\max(|a|,|b|,|c|,|d|)<1$ is, if $x=\cos\theta$, $\theta\in [0,\pi]$, \begin{align} \label{eq:wabcd} w(\cos\theta;a,b,c,d|q) &= \frac{(q,ab,ac,ad,bc,bd,cd)_\infty}{2\pi(abcd)_\infty} \\ \notag &\quad \times \frac{(e^{2i\theta},e^{-2i\theta})_\infty} {(ae^{i\theta},ae^{-i\theta}, be^{i\theta},be^{-i\theta},ce^{i\theta},ce^{-i\theta},de^{i\theta},de^{-i\theta})_\infty}. \end{align} Then the measures for the $q$-Hermite $H_n(x|q)$, the big $q$-Hermite $H_n(x;a|q)$, the Al-Salam-Chihara $Q_n(x;a,b|q)$, and the dual $q$-Hahn $p_n(x;a,b,c|q)$ are, respectively, $w(\cos\theta;0,0,0,0|q)$, $w(\cos\theta;a,0,0,0|q)$, $w(\cos\theta;a,b,0,0|q)$, and $w(\cos\theta;a,b,c,0|q)$. Notice that each successive measure comes from the previous measure by inserting infinite products. \begin{example} Let $R_k(x) = H_k(x|q)$ and $S_k(x)=H_k(x;a|q)$ so that \[ w_S(\cos \theta)=w_R(\cos \theta) \frac{1}{(ae^{i\theta},ae^{-i\theta})_\infty}. 
\] In this case, we have $\LL_R(R_k(x)^2) =(q)_k$ and \[ R_k(x) = \sum_{i=0}^k c_{k,i} S_i(x), \] where $c_{k,i} = \qbinom{k}{i}a^{k-i}$. By the heuristic in Remark~\ref{remark:heur}, \[ w_S(x) = w_R(x) \sum_{k=0}^\infty \frac{R_k(x)}{(q)_k} a^k= w_R(x)\frac{1}{(ae^{i\theta} , ae^{-i\theta})_\infty}, \] where we have used the $q$-Hermite generating function \cite[(14.26.11), p.542]{KLS}. \end{example} \begin{example} Let $R_k(x) = H_k(x;a|q)$ and $S_k(x)=Q_k(x;a,b|q)$ so that \[ w_S(\cos \theta)=w_R(\cos \theta) \frac{(ab)_\infty}{(be^{i\theta},be^{-i\theta})_\infty}. \] In this case, we have $\LL_R(R_k(x)^2) =(q)_k$ and \[ R_k(x) = \sum_{i=0}^k c_{k,i} S_i(x), \] where $c_{k,i} = \qbinom{k}{i}b^{k-i}$. By the heuristic in Remark~\ref{remark:heur}, \[ w_S(x) = w_R(x) \sum_{k=0}^\infty \frac{R_k(x)}{(q)_k} b^k= w_R(x)\frac{(ab)_\infty}{(be^{i\theta} , be^{-i\theta})_\infty}, \] where we have used the big $q$-Hermite generating function \cite[(14.18.13), p.512]{KLS}. \end{example} \begin{example} Let $R_k(x) = Q_k(x;a,b|q)$ and $S_k(x)=p_k(x;a,b,c|q)$ so that \[ w_S(\cos \theta)=w_R(\cos \theta) \frac{(ac,bc)_\infty}{(ce^{i\theta},ce^{-i\theta})_\infty}. \] In this case, we have $\LL_R(R_k(x)^2) =(q,ab)_k$ and \[ R_k(x) = \sum_{i=0}^k c_{k,i} S_i(x), \] where $c_{k,i} = \qbinom{k}{i}\frac{(ab)_k}{(ab)_i}c^{k-i}$. By the heuristic in Remark~\ref{remark:heur}, \[ w_S(x) = w_R(x) \sum_{k=0}^\infty \frac{R_k(x)}{(q,ab)_k} (ab)_k c^k= w_R(x)\frac{(ac,bc)_\infty}{(ce^{i\theta} , ce^{-i\theta})_\infty}, \] where we have used the Al-Salam-Chihara generating function \cite[(14.8.13), p.458]{KLS}. \end{example} Notice that in the above example we used the known generating function for the Al-Salam-Chihara polynomials $Q_n(x;a,b|q)$. If we apply the same steps to $R_k(x)=p_k(x;a,b,c,0|q)$ and $S_k(x)=p_k(x;a,b,c,d|q)$, a new generating function appears. 
\begin{thm} \label{thm:dual_q_Hahn} We have \[ (abct)_\infty \sum_{k=0}^\infty \frac{p_k(x;a,b,c,0|q)}{(q,abct)_k} t^k = \frac{(at,bt,ct)_\infty}{(te^{i\theta} , te^{-i\theta})_\infty}. \] \end{thm} \begin{proof} We must show \begin{equation} \label{eq:1} (abct)_\infty \sum_{n=0}^\infty \frac{t^n}{(q,abct)_n} p_n(x;a,b,c,0|q)= \frac{(bt,ct)_\infty}{(te^{i\theta},te^{-i\theta})_\infty}(at)_\infty. \end{equation} Using the Al-Salam-Chihara generating function and the $q$-binomial theorem \cite[(II.3), p. 354]{GR}, \eqref{eq:1} is equivalent to \begin{equation} \label{eq:3} \sum_{n=0}^N \frac{p_n(x;b,c,0,0|q)}{(q)_n} \frac{(-a)^{N-n}q^{\binom{N-n}{2}}}{(q)_{N-n}}= \sum_{n=0}^N \frac{p_n(x;a,b,c,0|q)}{(q)_n} \frac{(-abcq^n)^{N-n}q^{\binom{N-n}{2}}}{(q)_{N-n}}. \end{equation} Now use the connection coefficients \[ p_n(x;b,c,0,0|q)= (bc)_n \sum_{k=0}^n \qbinom nk p_k(x;a,b,c,0|q)\frac{a^{n-k}}{(bc)_{k}}, \] to show that \eqref{eq:3} follows from \[ \sum_{n=k}^N \frac{(bc)_n}{(q)_n} \qbinom nk \frac{a^{N-k}}{(bc)_k} \frac{(-1)^{N-n}q^{\binom{N-n}{2}}}{(q)_{N-n}}= \frac{1}{(q)_k}\frac{(-abcq^k)^{N-k}}{(q)_{N-k}} q^{\binom{N-k}{2}}. \] This summation is a special case of the $q$-Vandermonde theorem \cite[(II.6), p. 354]{GR}. \end{proof} A generalization of Theorem~\ref{thm:dual_q_Hahn} to Askey-Wilson polynomials is given in \cite{IS2013}. A natural generalization of the mixed moments in \eqref{eq:AW} is \[ \LL_{a,b,c,d}(x^n p_m(x;a,b,c,d|q) p_\ell(x;a,b,c,d|q)). \] For general orthogonal polynomials Viennot has given a combinatorial interpretation for $\LL(x^np_mp_\ell)$ in terms of weighted Motzkin paths. An explicit formula when $p_n= p_n(x;a,b,c,d|q)$ may be given using \eqref{eq:cc} and a $q$-Taylor expansion \cite{IS2003}, but we do not state the result here. 
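Theorem~\ref{thm:dual_q_Hahn} is a formal power-series identity in $t$, so it can be verified coefficient-by-coefficient in exact arithmetic. The SymPy sketch below (an independent check, not part of the paper) takes the arbitrary rational test values $q=1/2$, $a=1/3$, $b=1/5$, $c=1/7$ and the rational point $e^{i\theta}=(3+4i)/5$ on the unit circle, expands both sides as truncated series in $t$ via Euler's $q$-exponential expansions, and compares the coefficients.

```python
import sympy as sp

q = sp.Rational(1, 2)
a, b, c = sp.Rational(1, 3), sp.Rational(1, 5), sp.Rational(1, 7)
z = sp.Rational(3, 5) + sp.Rational(4, 5)*sp.I    # e^{i*theta} with cos(theta) = 3/5
zb = sp.conjugate(z)
K = 8                                             # truncation order in t

def qpoch(u, n):
    """(u; q)_n."""
    r = sp.Integer(1)
    for j in range(n):
        r *= 1 - u * q**j
    return r

def p_dual_q_hahn(n):
    """p_n(x; a, b, c, 0 | q) from the Askey-Wilson series with d = 0."""
    s = sum(qpoch(q**(-n), k) * qpoch(a*z, k) * qpoch(a*zb, k) * q**k
            / (qpoch(a*b, k) * qpoch(a*c, k) * qpoch(q, k))
            for k in range(n + 1))
    return sp.expand(qpoch(a*b, n) * qpoch(a*c, n) * a**(-n) * s)

def mul(s1, s2):
    """Product of two power series in t, truncated at order K."""
    return [sp.expand(sum(s1[i]*s2[n - i] for i in range(n + 1)))
            for n in range(K + 1)]

def euler_inf(u):
    """(u t; q)_infty = sum_n (-1)^n q^{n(n-1)/2} u^n t^n / (q)_n."""
    return [(-1)**n * q**(n*(n - 1)//2) * u**n / qpoch(q, n) for n in range(K + 1)]

def inv_euler_inf(u):
    """1/(u t; q)_infty = sum_n u^n t^n / (q)_n."""
    return [u**n / qpoch(q, n) for n in range(K + 1)]

def geom(w):
    """1/(1 - w t)."""
    return [w**n for n in range(K + 1)]

# left-hand side: (abct)_infty * sum_k p_k(x;a,b,c,0|q) t^k / ((q)_k (abct)_k)
lhs = [sp.Integer(0)] * (K + 1)
for k in range(K + 1):
    term = [sp.Integer(0)] * (K + 1)
    term[k] = p_dual_q_hahn(k) / qpoch(q, k)
    for j in range(k):                            # 1/(abc t; q)_k
        term = mul(term, geom(a*b*c*q**j))
    lhs = [sp.expand(u + v) for u, v in zip(lhs, term)]
lhs = mul(lhs, euler_inf(a*b*c))

# right-hand side: (at, bt, ct)_infty / (t e^{i theta}, t e^{-i theta})_infty
rhs = mul(mul(mul(mul(euler_inf(a), euler_inf(b)), euler_inf(c)),
              inv_euler_inf(z)), inv_euler_inf(zb))
```

All quantities are Gaussian rationals, so agreement of the first $K+1$ coefficients is an exact (if finite) confirmation of the identity at this test point.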
\section{Generating functions} \label{sec:mult-yt_infty-or} In \S~\ref{sec:comp-line-funct} we noted the following generating functions for our bootstrapping polynomials: continuous $q$-Hermite $H_n(x|q)$, continuous big $q$-Hermite $H_n(x;a|q)$, and Al-Salam-Chihara $Q_n(x;a,b|q)$ \begin{equation} \label{eq:gf_Hermite} \sum_{n=0}^\infty \frac{H_n(x|q)}{(q)_n} t^n = \frac{1}{(te^{i\theta},te^{-i\theta})_\infty}, \end{equation} \begin{equation} \label{eq:gf_big_Hermite} \sum_{n=0}^\infty \frac{H_n(x;a|q)}{(q)_n} t^n = \frac{(at)_\infty}{(te^{i\theta} , te^{-i\theta})_\infty}, \end{equation} \begin{equation} \label{eq:gf_ASC} \sum_{n=0}^\infty \frac{Q_n(x;a,b|q)}{(q)_n}t^n = \frac{(at,bt)_\infty}{(te^{i\theta} , te^{-i\theta})_\infty}. \end{equation} Note that \eqref{eq:gf_big_Hermite} is obtained from \eqref{eq:gf_Hermite} by multiplying by $(at)_\infty$ and \eqref{eq:gf_ASC} is obtained from \eqref{eq:gf_big_Hermite} by multiplying by $(bt)_\infty$. However, if we multiply \eqref{eq:gf_ASC} by $(ct)_\infty,$ we no longer have a generating function for orthogonal polynomials. It is the generating function for polynomials which satisfy a recurrence relation of finite order, but of order greater than three, the order that orthogonal polynomials have. The purpose of this section is to explain this phenomenon. We consider polynomials whose generating functions are obtained by multiplying the generating function of orthogonal polynomials by $(yt)_\infty$ or $1/(-yt)_\infty.$ We say that polynomials $p_n(x)$ \emph{satisfy a $d$-term recurrence relation} if there exist a real number $A$ and sequences $\{b_{n}^{(0)}\}_{n\ge0}, \{b_{n}^{(1)}\}_{n\ge1},\dots, \{b_{n}^{(d-2)}\}_{n\ge d-2}$ such that, for $n\ge0$, \[ p_{n+1}(x) = (Ax - b_n^{(0)})p_n(x) - b_n^{(1)}p_{n-1}(x)-\dots-b_n^{(d-2)}p_{n-d+2}(x), \] where $p_{i}(x)=0$ for $i<0$. 
\begin{thm} \label{thm:main} Let $p_n(x)$ be polynomials satisfying $p_{n+1}(x) = (Ax-b_n)p_n(x)-\lambda_n p_{n-1}(x)$ for $n\ge0$, where $p_{-1}(x)=0$ and $p_0(x)=1$. If $b_{k}$ and $\frac{\lambda_{k}}{1-q^{k}}$ are polynomials in $q^k$ of degree $r$ and $s$, respectively, which are independent of $y$, then the polynomials $P^{(1)}_n(x,y)$ in $x$ defined by \[ \sum_{n=0}^\infty P^{(1)}_{n} (x,y) \frac{t^n}{(q)_n} =(yt)_\infty\sum_{n=0}^\infty p_{n} (x) \frac{t^n}{(q)_n} \] satisfy a $d$-term recurrence relation for $d=\max(r+2,s+3)$. \end{thm} We use two lemmas to prove Theorem~\ref{thm:main}. In the following lemmas we use the same notations as in Theorem~\ref{thm:main}. \begin{lem} \label{lem:yq} We have \[ P^{(1)}_n(x,y) = P^{(1)}_n(x,yq) - y(1-q^n) P^{(1)}_{n-1}(x,yq). \] \end{lem} \begin{proof} This is obtained by equating the coefficients of $t^n$ in \[ \sum_{n=0}^\infty P^{(1)}_n(x,y) \frac{t^n}{(q)_n} = (1-yt) \sum_{n=0}^\infty P^{(1)}_n(x,yq) \frac{t^n}{(q)_n}. \] \end{proof} \begin{lem} \label{lem:rec1} Suppose that $b_{k}$ and $\frac{\lambda_{k}}{1-q^{k}}$ are polynomials in $q^k$ of degree $r$ and $s$, respectively, i.e., \[ b_k = \sum_{j=0}^r c_j (q^k)^j,\qquad \frac{\lambda_{k}}{1-q^{k}} = \sum_{j=0}^s d_j (q^k)^j. \] Then \[ P_{n+1}^{(1)}(x,y) = (Ax-y) P_{n}^{(1)}(x,yq) -\sum_{j=0}^r c_j q^{nj} P^{(1)}_n (x,yq^{1-j}) -(1-q^n)\sum_{j=0}^s d_j q^{nj} P^{(1)}_{n-1} (x,yq^{1-j}). \] \end{lem} \begin{proof} Expanding $(yt)_\infty$ using the $q$-binomial theorem, we have \[ P^{(1)}_n(x,y) = \sum_{k=0}^n \qbinom nk (-1)^k y^k q^{\binom k2} p_{n-k}(x). \] Using the relation $\qbinom{n+1}k = \qbinom{n}{k-1}+q^k\qbinom nk$, we have \begin{align*} P^{(1)}_{n+1}(x,y) &= \sum_{k=0}^{n+1} \left(\qbinom{n}{k-1}+q^k\qbinom nk\right) (-1)^k y^k q^{\binom k2} p_{n+1-k}(x)\\ &= -y P^{(1)}_n(x,yq) +\sum_{k=0}^n \qbinom nk (-1)^k (yq)^k q^{\binom k2} p_{n+1-k}(x). 
\end{align*} By $\qbinom nk = \frac{1-q^n}{1-q^{n-k}}\qbinom {n-1}{k}$ and the 3-term recurrence \[ p_{n+1-k}(x)=(Ax-b_{n-k})p_{n-k}(x)-\lambda_{n-k}p_{n-1-k}(x), \] we get \begin{multline}\label{eq:pn+1} P^{(1)}_{n+1}(x,y) = (Ax-y) P^{(1)}_n(x,yq) -\sum_{k=0}^n \qbinom nk (-1)^k (yq)^k q^{\binom k2} p_{n-k}(x) b_{n-k}\\ -(1-q^n) \sum_{k=0}^{n-1} \qbinom {n-1}k (-1)^k (yq)^k q^{\binom k2} p_{n-1-k}(x) \frac{\lambda_{n-k}}{1-q^{n-k}}. \end{multline} Since \[ b_{n-k} = \sum_{j=0}^r c_j q^{nj} (q^k)^{-j},\qquad \frac{\lambda_{n-k}}{1-q^{n-k}} = \sum_{j=0}^s q^{nj} d_j (q^k)^{-j}, \] and \[ P^{(1)}_n(x,yq^{1-j}) = \sum_{k=0}^n \qbinom nk (-1)^k (yq)^k q^{\binom k2} p_{n-k}(x) (q^{k})^{-j}, \] we obtain the desired recurrence relation. \end{proof} Now we can prove Theorem~\ref{thm:main}. \begin{proof}[Proof of Theorem~\ref{thm:main}] By Lemma~\ref{lem:rec1}, we can write \[ P_{n+1}^{(1)}(x,y) = (Ax-y) P_{n}^{(1)}(x,yq) -\sum_{j=0}^r c_j q^{nj} P^{(1)}_n (x,yq^{1-j}) -(1-q^n)\sum_{j=0}^s d_j q^{nj} P^{(1)}_{n-1} (x,yq^{1-j}). \] Using Lemma~\ref{lem:yq} we can express $P^{(1)}_k (x,yq^{1-j})$ as a linear combination of \[ P^{(1)}_k (x,yq), P^{(1)}_{k-1} (x,yq), \dots, P^{(1)}_{k-j} (x,yq). \] Replacing $y$ by $y/q$, we obtain a $\max(r+2,s+3)$-term recurrence relation for $P^{(1)}_n(x,y)$. \end{proof} \begin{remark} One may verify that the order of recurrence for $P_{n}^{(1)}(x,y)$ is exactly $\max(2+r,3+s)$ in the following way. 
Lemma~\ref{lem:yq} is applied $s$ times to the term $P^{(1)}_{n-1} (x,yq^{1-s})$ to obtain a linear combination of
$P^{(1)}_{n-1} (x,yq), P^{(1)}_{n-2} (x,yq), \dots, P^{(1)}_{n-s-1} (x,yq)$.
The coefficient of $P^{(1)}_{n-s-1} (x,yq)$ in this expansion is
$(-1)^s (q^{n-1};q^{-1})_s y^s q^{\binom{s}{2}}.$
Similarly, considering $P^{(1)}_{n} (x,yq^{1-r})$, the coefficient of
$P^{(1)}_{n-r} (x,yq)$ in the expansion is
$(-1)^r (q^{n};q^{-1})_r y^r q^{\binom{r}{2}}.$
These terms are non-zero, give a recurrence of order $\max(r+2,s+3)$, and
could only cancel if $r=s+1.$ In this case, the coefficient of
$P^{(1)}_{n-s-1} (x,yq)$ is
\[
(q^n;q^{-1})_{s+1} (-1)^{s+1}y^s q^{\binom{s}{2}}q^{ns}\left( d_s-yc_{r}q^{r+s}\right).
\]
Since $d_s$ and $c_r$ are non-zero and independent of $y$, this is non-zero.
\end{remark}

\begin{remark}
Theorem~\ref{thm:main} can be generalized to polynomials $p_n(x)$ satisfying a finite-term recurrence relation of order greater than $3$. For instance, if $p_{n+1}(x) = (Ax-b_n)p_n(x)-\lambda_n p_{n-1}(x) - \nu_n p_{n-2}(x)$, then using $\qbinom nk = \frac{1-q^n}{1-q^{n-k}}\qbinom {n-1}{k}$ twice one can see that Equation~\eqref{eq:pn+1} has the following extra sum on the right-hand side:
\[
-(1-q^n)(1-q^{n-1}) \sum_{k=0}^{n-1} \qbinom {n-1}k (-1)^k (yq)^k q^{\binom k2} p_{n-2-k}(x) \frac{\nu_{n-k}}{(1-q^{n-k})(1-q^{n-k-1})}.
\]
Thus if $\frac{\nu_k}{(1-q^k)(1-q^{k-1})}$ is a polynomial in $q^k$ then $P_n^{(1)}(x,y)$ satisfies a finite-term recurrence relation.
\end{remark}

Note that by using Lemmas~\ref{lem:yq} and \ref{lem:rec1}, one can find a recurrence relation for $P_n^{(1)}(x,y)$ in Theorem~\ref{thm:main}.

An analogous theorem holds for polynomials in $q^{-k}.$ We state the result without proof.

\begin{thm} \label{thm:mainflip}
Let $p_n(x)$ be polynomials satisfying $p_{n+1}(x) = (Ax-b_n)p_n(x)-\lambda_n p_{n-1}(x)$ for $n\ge0$, where $p_{-1}(x)=0$ and $p_0(x)=1$.
If $b_{k}$ and $\frac{\lambda_{k}}{1-q^{k}}$ are polynomials in $q^{-k}$ of degree $r$ and $s$, respectively, which are independent of $y$, and the constant term of $\frac{\lambda_{k}}{1-q^{k}}$ is zero, then the polynomials $P^{(2)}_n(x,y)$ defined by
\[
\sum_{n=0}^\infty P^{(2)}_{n} (x,y) \frac{q^{\binom n2}t^n}{(q)_n}
=\frac1{(-yt)_\infty}\sum_{n=0}^\infty p_{n} (x) \frac{q^{\binom n2}t^n}{(q)_n}
\]
satisfy a $d$-term recurrence relation for $d= \max(r+1,s+2)$.
\end{thm}

We now give several applications of Theorem~\ref{thm:main} and Theorem~\ref{thm:mainflip}. In the following examples, we use the notation in these theorems.

\begin{example}
Let $p_n(x)$ be the continuous $q$-Hermite polynomials $H_n(x|q)$. Then $A=2, b_n =0$, and $\lambda_n=1-q^n$. Since $r=-\infty$ and $s=0$, $P^{(1)}_n(x,y)$ satisfies a 3-term recurrence relation. By Lemma~\ref{lem:rec1}, we have
\[
P^{(1)}_{n+1}(x,y) = (2x-y) P^{(1)}_{n}(x,yq) - (1-q^n) P^{(1)}_{n-1}(x,yq).
\]
By Lemma~\ref{lem:yq} we have
\[
P^{(1)}_{n+1}(x,y) = P^{(1)}_{n+1}(x,yq) - y(1-q^n) P^{(1)}_{n}(x,yq).
\]
Thus
\[
P^{(1)}_{n+1}(x,yq) =(2x-yq^n) P^{(1)}_{n}(x,yq) - (1-q^n) P^{(1)}_{n-1}(x,yq).
\]
Replacing $y$ by $y/q$ we obtain
\[
P^{(1)}_{n+1}(x,y) =(2x-yq^{n-1}) P^{(1)}_{n}(x,y) - (1-q^n) P^{(1)}_{n-1}(x,y).
\]
Thus $P^{(1)}_n(x,y)$ are orthogonal polynomials, which are the continuous big $q$-Hermite polynomials $H_n(x;y|q)$.
\end{example}

\begin{example}
Let $p_n(x)$ be the continuous big $q$-Hermite polynomials $H_n(x;a|q)$. Then $A=2, b_n = aq^n$, and $\lambda_n = 1-q^n$. Since $r=1$ and $s=0$, $P^{(1)}_n(x,y)$ satisfies a 3-term recurrence relation. Using the same method as in the previous example, we obtain
\[
P^{(1)}_{n+1}(x,y) =(2x-(a+y)q^{n}) P^{(1)}_{n}(x,y) - (1-q^n)(1-ayq^{n-1}) P^{(1)}_{n-1}(x,y).
\]
Thus $P^{(1)}_n(x,y)$ are orthogonal polynomials, which are the Al-Salam-Chihara polynomials $Q_n(x;a,y|q)$.
\end{example}

\begin{example}
Let $p_n(x)$ be the Al-Salam-Chihara polynomials $Q_n(x;a,b|q)$.
Then $A=2, b_n= (a+b)q^n$, and $\lambda_n = (1-q^n) (1-abq^{n-1})$. Since $r=1$ and $s=1$, $P^{(1)}_n(x,y)$ satisfies a 4-term recurrence relation. By Lemma~\ref{lem:rec1}, we have
\[
P^{(1)}_{n+1}(x,y) = (2x-y)P^{(1)}_n(x,yq) - (a+b)q^n P^{(1)}_n(x,y)
-(1-q^n)(-abq^{n-1}P^{(1)}_{n-1}(x,y) + P^{(1)}_{n-1}(x,yq)).
\]
Using Lemma~\ref{lem:yq} we get
\[
P^{(1)}_{n+1} = (2x-(a+b+y)q^n)P^{(1)}_n
-(1-q^n)(1-(ab+ay+by)q^{n-1}) P^{(1)}_{n-1}
-abyq^{n-2}(1-q^n)(1-q^{n-1}) P^{(1)}_{n-2}.
\]
\end{example}

\begin{example}
Let $p_n(x)$ be the continuous dual $q$-Hahn polynomials $p_n(x;a,b,c|q)$. Then $A=2$ and
\begin{align*}
b_n &= (a+b+c)q^n -abcq^{2n}-abcq^{2n-1}, \\
\lambda_n & = (1-q^n) (1-abq^{n-1}) (1-bcq^{n-1}) (1-caq^{n-1}).
\end{align*}
Since $r=2$ and $s=3$, $P^{(1)}_n(x,y)$ satisfies a 6-term recurrence relation. It is possible to find an explicit recurrence relation using the same idea as in the previous example.
\end{example}

\begin{example} \label{ex:dqh1}
Let $p_n(x)$ be the discrete $q$-Hermite I polynomial $h_n(x;q)$. Then $A=1, b_n =0$, and $\lambda_n=q^{n-1}(1-q^n)$. Since $r=-\infty$ and $s=1$, $P^{(1)}_n(x,y)$ satisfies a 4-term recurrence relation which is
\[
P^{(1)}_{n+1}(x,y) = (x-yq^{n}) P^{(1)}_n (x,y)
-q^{n-1}(1-q^n) P^{(1)}_{n-1}(x,y)
+yq^{n-2}(1-q^n)(1-q^{n-1}) P^{(1)}_{n-2}(x,y).
\]
In \S~\ref{sec:discrete-big-q} we will study $P^{(1)}_n(x,y)=h_n(x,y;q)$, the discrete big $q$-Hermite I polynomials. This computation is a proof of Theorem~\ref{thm:4term}.
\end{example}

\begin{example} \label{ex:dqh2}
Let $p_n(x)$ be the discrete $q$-Hermite II polynomial $\HT_n(x;q)$. Then $A=1, b_n =0$, and $\lambda_n=q^{-2n+1}(1-q^n)$. Since $b_n$ and $\lambda_n/(1-q^n)$ are polynomials in $q^{-n}$ of degrees $-\infty$ and $2$, respectively, and the constant term of $\lambda_n/(1-q^n)$ is 0, $P^{(2)}_n(x,y)$ satisfies a 4-term recurrence relation. It is
\[
P^{(2)}_{n+1}(x,y) = (x-yq^{-n}) P^{(2)}_n (x,y)
-q^{-2n+1}(1-q^n) P^{(2)}_{n-1}(x,y)
-yq^{3-3n}(1-q^n)(1-q^{n-1}) P^{(2)}_{n-2}(x,y).
\]
$P^{(2)}_n(x,y)$ are the discrete big $q$-Hermite II polynomials $\HT_n(x,y;q)$ of \S~\ref{sec:comb-discr-big}.
\end{example}

\begin{example}
The \emph{Al-Salam--Carlitz I} polynomials $U_n^{(a)}(x;q)$ are defined by
\[
\sum_{n=0}^\infty \frac{U_n^{(a)}(x;q)}{(q)_n}t^n
=\frac{(t)_\infty (at)_\infty}{(xt)_\infty }.
\]
They have the 3-term recurrence relation
\[
U_{n+1}^{(a)}(x;q) = (x-(1+a)q^{n}) U_{n}^{(a)}(x;q)
+aq^{n-1}(1-q^n) U_{n-1}^{(a)}(x;q).
\]
Let $p_n(x)$ be the polynomials with generating function
\[
\sum_{n=0}^\infty \frac{p_n(x)}{(q)_n} t^n
=\frac{(t)_\infty}{(xt)_\infty}
=\sum_{n=0}^\infty \frac{x^n(1/x)_n}{(q)_n} t^n.
\]
Then $p_n(x) = x^n (1/x)_n$. Thus $p_{n+1}(x) = (x-q^{n}) p_n(x)$, so that $A=1$, $b_n = q^{n}$, $\lambda_n =0$, and $U_n^{(a)}(x;q) = P^{(1)}_n(x,a)$.
\end{example}

\begin{example}
The \emph{Al-Salam--Carlitz II} polynomials $V_n^{(a)}(x;q)$ are defined by
\[
\sum_{n=0}^\infty \frac{(-1)^n q^{\binom n2}}{(q)_n} V_n^{(a)}(x;q) t^n
=\frac{(xt)_\infty}{(t)_\infty (at)_\infty}.
\]
They have the 3-term recurrence relation
\begin{equation} \label{eq:2}
V_{n+1}^{(a)}(x;q) = (x-(1+a)q^{-n}) V_{n}^{(a)}(x;q)
-aq^{-2n+1}(1-q^n) V_{n-1}^{(a)}(x;q).
\end{equation}
Let $p_n(x)$ be the polynomials with generating function
\[
\sum_{n=0}^\infty \frac{q^{\binom n2}}{(q)_n}p_n(x) t^n
=\frac{(xt)_\infty}{(t)_\infty}
=\sum_{n=0}^\infty \frac{(x)_n}{(q)_n} t^n
=\sum_{n=0}^\infty \frac{(-1)^nq^{\binom n2}x^n(1/x)_n}{(q)_n} t^n.
\]
Then $p_n(x) = (-1)^n x^n (1/x)_n$. Thus $p_{n+1}(x) = (-x+q^{-n}) p_n(x)$, so that $A=-1$, $b_n = -q^{-n}$, and $\lambda_n =0$; we obtain $V_n^{(a)}(x;q) = (-1)^n P^{(2)}_n(-x,-a)$ and \eqref{eq:2}.
\end{example} Garrett, Ismail, and Stanton \cite[Section 7]{GIS} considered the polynomials $\hat H_n(x|q)$ defined by the generating function \[ \sum_{n=0}^\infty \hat H_n(x|q) \frac{t^n}{(q)_n}= \frac{(t^2;q)_\infty}{(te^{i\theta},te^{-i\theta};q)_\infty}= (t^2;q)_\infty \sum_{n=0}^\infty H_n(x|q) \frac{t^n}{(q)_n}. \] It turns out that $p_n=\hat H_n(x|q)$ satisfies the 5-term recurrence relation \[ p_{n+1}= 2xp_n +(q^{2n}+q^{2n-1}-q^{n-1}-1)p_{n-1}+ q^{n-2}(1-q^n)(1-q^{n-1})(1-q^{n-2})p_{n-3}. \] The following generalization of Theorem~\ref{thm:main} explains this phenomenon for $m=2$, $r=0$, and $s=0$. We omit the proof, which is similar to that of Theorem~\ref{thm:main}. \begin{thm} \label{thm:main_gen} Let $m$ be a positive integer. Let $p_n(x)$ be polynomials satisfying $p_{n+1}(x) = (Ax-b_n)p_n(x)-\lambda_n p_{n-1}(x)$ for $n\ge0$, where $p_{-1}(x)=0$ and $p_0(x)=1$. If $b_{k}$ and $\frac{\lambda_{k}}{1-q^{k}}$ are polynomials in $q^k$ of degree $r$ and $s$, respectively, which are independent of $y$, then the polynomials $P_n(x,y)$ in $x$ defined by \[ \sum_{n=0}^\infty P_{n} (x,y) \frac{t^n}{(q)_n} =(yt^m)_\infty\sum_{n=0}^\infty p_{n} (x) \frac{t^n}{(q)_n} \] satisfy a $d$-term recurrence relation for $d= \max(rm^2+2,sm^2+3,m^2+1)$. \end{thm} \section{Discrete big $q$-Hermite polynomials} \label{sec:discrete-big-q} In this section we study a set of polynomials which satisfy a 4-term recurrence relation, called the discrete big $q$-Hermite polynomials (see Definition~\ref{defn:big}). These polynomials generalize the discrete $q$-Hermite polynomials and appear in Example~\ref{ex:dqh1}. 
Recall \cite{Is} that the \emph{continuous $q$-Hermite polynomials} $H_n(x|q)$ are defined by \[ \sum_{n=0}^\infty \frac{H_n(x|q)}{(q)_n} t^n = \frac{1}{(te^{i\theta},te^{-i\theta})_\infty}, \] and the \emph{continuous big $q$-Hermite polynomials} $H_n(x;a|q)$ are defined by \[ \sum_{n=0}^\infty \frac{H_n(x;a|q)}{(q)_n} t^n = \frac{(at)_\infty}{(te^{i\theta},te^{-i\theta})_\infty}. \] Observe that the generating function for $H_n(x;a|q)$ is the generating function for $H_n(x|q)$ multiplied by $(at)_\infty$. In this section we introduce \emph{discrete big $q$-Hermite polynomials} in an analogous way. The \emph{discrete $q$-Hermite I polynomials} $h_n(x;q)$ have generating function \[ \sum_{n=0}^\infty \frac{h_n(x;q)}{(q;q)_n} t^n = \frac{(t^2;q^2)_\infty}{(xt)_\infty}. \] \begin{defn} \label{defn:big} The \emph{discrete big $q$-Hermite I polynomials} $h_n(x,y;q)$ are given by \begin{equation} \label{eq:hn} \sum_{n=0}^\infty h_n(x,y;q) \frac{t^n}{(q;q)_n} = \frac{(t^2;q^2)_\infty (yt)_\infty}{(xt)_\infty}. \end{equation} \end{defn} Expanding the right hand side of \eqref{eq:hn} using the $q$-binomial theorem, we find the following expression for $h_n(x,y;q)$. \begin{prop} For $n\ge 0,$ \[ h_n(x,y;q) = \sum_{k=0}^{\flr{n/2}} \qbinom{n}{2k} (q;q^2)_k q^{2\binom k2} (-1)^k x^{n-2k} (y/x;q)_{n-2k}. \] \end{prop} The polynomials $h_n(x,y;q)$ are orthogonal polynomials in neither $x$ nor $y$. However they satisfy the following simple 4-term recurrence relation which was established in Example~\ref{ex:dqh1}. \begin{thm} \label{thm:4term} For $n\ge 0,$ \[ h_{n+1}(x,y;q) = (x-yq^{n}) h_n (x,y;q) -q^{n-1}(1-q^n)h_{n-1}(x,y;q) +yq^{n-2}(1-q^n)(1-q^{n-1})h_{n-2}(x,y;q). \] \end{thm} Note that when $y=0$, the 4-term recurrence relation reduces to the 3-term recurrence relation for the discrete $q$-Hermite I polynomials. The polynomials $h_n(x,y;q)$ are not symmetric in $x$ and $y$. 
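As a quick sanity check of Theorem~\ref{thm:4term} (an illustrative computation added here; it is not needed elsewhere), expanding the generating function \eqref{eq:hn} to order $t^2$ gives the first few polynomials, and the case $n=1$ of the recurrence, in which the $h_{n-2}$ term vanishes because of the factor $1-q^{n-1}=1-q^0=0$, can be verified directly:

```latex
% Worked example: the first few discrete big q-Hermite I polynomials,
% read off from the generating function (t^2;q^2)_\infty (yt)_\infty / (xt)_\infty.
\[
h_0(x,y;q)=1,\qquad h_1(x,y;q)=x-y,\qquad h_2(x,y;q)=(x-y)(x-qy)-(1-q).
\]
% The n=1 case of the 4-term recurrence (the h_{-1} term drops out):
\[
h_2(x,y;q)=(x-yq)\,h_1(x,y;q)-(1-q)\,h_0(x,y;q)=(x-qy)(x-y)-(1-q).
\]
```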
If we consider $h_n(x,y;q)$ as a polynomial in $y$, then it does not satisfy a finite term recurrence relation, see Proposition~\ref{prop:rr_HH}. Since $h_n(x,y;q)$ satisfies a 4-term recurrence, it is a multiple orthogonal polynomial in $x.$ Thus there are two linear functionals $\LL^{(0)}$ and $\LL^{(1)}$ such that, for $i\in\{0,1\}$, \[ \LL^{(i)}(h_m)=\delta_{mi}, \quad m \ge 0, \] \[ \LL^{(i)}(h_m (x,y;q) h_n (x,y;q)) = 0 \quad \mbox{if $m>2n+i$, and} \quad \LL^{(i)}(h_{2n+i}(x,y;q) h_n (x,y;q)) \ne 0. \] We have explicit formulas for the moments for $\LL^{(0)}$ and $\LL^{(1)}$. \begin{thm} \label{thm:hermite_moments} The moments for the discrete big $q$-Hermite polynomials are \[ \LL^{(0)}(x^n) = \sum_{k=0}^{\flr{n/2}} \qbinom{n}{2k} (q;q^2)_k y^{n-2k}, \] \[ \LL^{(1)}(x^n) = (1-q^n) \sum_{k=0}^{\flr{n/2}} \qbinom{n-1}{2k} (q;q^2)_k y^{n-2k-1}. \] \end{thm} Before proving Theorem~\ref{thm:hermite_moments} we show that in general there is a way to find the linear functionals of $d$-orthogonal polynomials if we know how to expand certain orthogonal polynomials in terms of these $d$-orthogonal polynomials. This is similar to Proposition~\ref{prop:bootstrap}. \begin{thm} \label{thm:op_mop} Let $R_n(x)$ be orthogonal polynomials with linear functionals $\LL_R$ such that $\LL_R(1) = 1$. Let $S_n(x)$ be $d$-orthogonal polynomials with linear functionals $\{\LL_S^{(i)}\}_{i=0}^{d-1}$ such that $\LL_S^{(i)}(S_n(x)) = \delta_{n,i}$. Suppose \begin{equation} \label{eq:conncoef} R_k(x) = \sum_{m=0}^k c_{km} S_m(x). \end{equation} Then \[ \LL_S^{(i)}(x^n) = \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} d_{k,i}, \] where \[ d_{k,i}= \begin{cases} c_{k,i} {\text{ if }} k\ge i,\\ 0 {\text{ \quad if }} k<i. \end{cases} \] \end{thm} \begin{proof} If we apply $\LL_S^{(i)}$ to both sides of \eqref{eq:conncoef}, we have \[ \LL_S^{(i)}(R_k(x)) = d_{k,i}. 
\]
Then by expanding $x^n$ in terms of $R_k(x)$ we get
\begin{align*}
\LL_S^{(i)}(x^n)
= \LL_S^{(i)}\left( \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} R_k(x)\right)
= \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)}d_{k,i}.
\end{align*}
\end{proof}

We will apply Theorem~\ref{thm:op_mop} with $R_n(x) = h_n(x;q)$ and $S_n(x) = h_n(x,y;q)$ to prove Theorem~\ref{thm:hermite_moments}. The first ingredient is \eqref{eq:conncoef}, which follows from the generating function \eqref{eq:hn}:
\[
h_k(x;q) = \sum_{m=0}^k \qbinom{k}{m} y^{k-m} h_m(x,y;q).
\]
The second ingredient is the value of $\LL_h(x^nh_k).$

\begin{prop}\label{prop:xnh}
Let $\LL_h$ be the linear functional for $h_n(x;q)$ with $\LL_h(1)=1$. Then
\[
\LL_h(x^n h_{m}(x;q)) =
\begin{cases}
0 {\text{ if }} m>n {\text{ or }}n\not\equiv m\mod 2,\\
\frac{q^{\binom{m}{2}}(q)_n}{(q^2;q^2)_{\frac{n-m}{2}}} {\text{ if }} n\ge m, n\equiv m\mod 2.
\end{cases}
\]
\end{prop}
\begin{proof}
Clearly we may assume that $n\ge m$ and $n\equiv m\mod 2.$ Using the explicit formula
\[
h_m(x;q) =x^m \hyper20{q^{-m},q^{-m+1}}{-}{q^2, \frac{q^{2m-1}}{x^2}},
\]
and the fact
\[
\LL_h(x^k)=
\begin{cases}
0 {\text{\qquad\qquad if $k$ is odd,}}\\
(q;q^2)_{k/2} {\text{ if $k$ is even,}}
\end{cases}
\]
we obtain
\[
\LL_h(x^n h_{m}(x;q)) =
(q;q^2)_{\frac{n+m}2} \hyper21{q^{-m},q^{-m+1}}{q^{-n-m+1}}{q^2,q^{m-n}},
\]
which is evaluable by the $q$-Vandermonde theorem \cite[(II.5), p.~354]{GR}.
\end{proof}

The discrete $q$-Hermite polynomials have the following orthogonality:
\begin{equation} \label{eq:orthogonality}
\LL_h(h_m(x;q) h_n(x;q)) = q^{\binom n2} (q)_n \delta_{mn}.
\end{equation}
Using Theorem~\ref{thm:op_mop}, Proposition~\ref{prop:xnh}, and \eqref{eq:orthogonality} we have proven Theorem~\ref{thm:hermite_moments}.

We do not know representing measures for the moments in Theorem~\ref{thm:hermite_moments}.
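For illustration (a consistency check added here, not part of the original argument), the first few moments from Theorem~\ref{thm:hermite_moments} are $\LL^{(0)}(1)=1$, $\LL^{(0)}(x)=y$, and $\LL^{(0)}(x^2)=y^2+(1-q)$; with $h_1=x-y$ and $h_2=x^2-(1+q)xy+qy^2-(1-q)$ these give

```latex
% Consistency check of the L^{(0)} moments against h_1 and h_2:
\[
\LL^{(0)}(h_1) = \LL^{(0)}(x)-y\,\LL^{(0)}(1) = y-y = 0,
\]
\[
\LL^{(0)}(h_2) = \bigl(y^2+(1-q)\bigr)-(1+q)y\cdot y+qy^2-(1-q) = 0,
\]
```

consistent with $\LL^{(0)}(h_1)=\LL^{(0)}(h_2)=0$.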
One may also find a recurrence relation for $h_n(x,y;q)$ as a polynomial in $y$; the proof of the following proposition is routine.

\begin{prop} \label{prop:rr_HH}
For $n\ge 0$, we have
\[
yq^nh_n(x,y;q)=-h_{n+1}(x,y;q)+
\sum_{k=0}^n (q^n;q^{-1})_k (-1)^k h_{n-k}(x,y;q)
\times\begin{cases}
x {\text{ if $k$ is even}}\\
1 {\text{ if $k$ is odd.}}
\end{cases}
\]
\end{prop}

We can also consider discrete $q$-Hermite II polynomials. The \emph{discrete $q$-Hermite II polynomials} $\HT_n(x;q)$ have the generating function
\[
\sum_{n=0}^\infty \frac{q^{\binom n2} \HT_n(x;q)}{(q)_n} t^n
= \frac{(-xt)_\infty }{(-t^2;q^2)_\infty}.
\]
We define the \emph{discrete big $q$-Hermite II polynomials} $\HT_n(x,y;q)$ by
\[
\sum_{n=0}^\infty \HT_n(x,y;q) \frac{q^{\binom n2} t^n}{(q;q)_n}
= \frac{1}{(-t^2;q^2)_\infty} \frac{(-xt;q)_\infty}{(-yt;q)_\infty}.
\]
Then $\HT_n(x,0;q)$ is the discrete $q$-Hermite II polynomial. The following proposition is straightforward to check.

\begin{prop}
For $n\ge 0$, we have
\[
\HT_n(x,y;q) = i^{-n} h_n(ix, iy;q^{-1}).
\]
\end{prop}

\section{Combinatorics of the discrete big $q$-Hermite polynomials}
\label{sec:comb-discr-big}

In this section we give some combinatorial information about the discrete big $q$-Hermite polynomials. This includes a combinatorial interpretation of the polynomials (Theorem~\ref{thm:comb}), and a combinatorial proof of the 4-term recurrence relation. Viennot's interpretation of the moments as weighted generalized Motzkin paths is also considered.

For the purpose of studying $h_n(x,y;q)$ combinatorially we will consider the following rescaled discrete big $q$-Hermite polynomials $h^*_n(x,y;q)$:
\[
h^*_n(x,y;q) = (1-q)^{-n/2} h_n(x\sqrt{1-q},y\sqrt{1-q};q).
\]
By \eqref{eq:hn} we have
\begin{equation} \label{eq:hn*}
h^*_n(x,y;q) = \sum_{k=0}^{\flr{n/2}} (-1)^k q^{2\binom k2} [2k-1]_q!!
\qbinom{n}{2k} x^{n-2k} (y/x;q)_{n-2k}.
\end{equation} Because $h^*_n(x,y;1) = H_n(x-y),$ which is a generating function for bicolored matchings of $[n]:=\{1,2,\dots,n\},$ we need to consider $q$-statistics on matchings. A \emph{matching} of $[n]=\{1,2,\dots,n\}$ is a set partition of $[n]$ in which every block is of size 1 or 2. A block of a matching is called a \emph{fixed point} if its size is $1$, and an \emph{edge} if its size is 2. When we write an edge $\{u,v\}$ we will always assume that $u<v$. A \emph{fixed point bi-colored matching} or \emph{FB-matching} is a matching for which every fixed point is colored with $x$ or $y$. Let $\fbm(n)$ be the set of FB-matchings of $[n]$. Let $\pi\in \fbm(n)$. A \emph{crossing} of $\pi$ is a pair of two edges $\{a,b\}$ and $\{c,d\}$ such that $a<c<b<d$. A \emph{nesting} of $\pi$ is a pair of two edges $\{a,b\}$ and $\{c,d\}$ such that $a<c<d<b$. An \emph{alignment} of $\pi$ is a pair of two edges $\{a,b\}$ and $\{c,d\}$ such that $a<b<c<d$. The \emph{block-word} $\bw(\pi)$ of $\pi$ is the word $w_1w_2\dots w_n$ such that $w_i = 1$ if $i$ is a fixed point and $w_i=0$ otherwise. An \emph{inversion} of a word $w_1w_2\dots w_n$ is a pair of integers $i<j$ such that $w_i>w_j$. The number of inversions of $w$ is denoted by $\inv(w)$. Suppose that $\pi$ has $k$ edges and $n-2k$ fixed points. The \emph{weight} $\wt(\pi)$ of $\pi$ is defined by \begin{equation} \label{eq:wt} \wt(\pi) = (-1)^k q^{2\binom k2+2\ali(\pi)+\cro(\pi) + \inv(\bw(\pi))} z_1z_2\dots z_{n-2k}, \end{equation} where $z_i=x$ if the $i$th fixed point is colored with $x$, and $z_i=-yq^{i-1}$ if the $i$th fixed point is colored with $y$. A \emph{complete matching} is a matching without fixed points. Let $\CM(2n)$ denote the set of complete matchings of $[2n]$. \begin{prop} We have \[ \sum_{\pi\in\CM(2n)} q^{2\ali(\pi)+\cro(\pi)} = [2n-1]_q!!. \] \end{prop} \begin{proof} It is known that \[ \sum_{\pi\in\CM(2n)} q^{\cro(\pi)+2\nes(\pi)} = \sum_{\pi\in\CM(2n)} q^{2\cro(\pi)+\nes(\pi)} = [2n-1]_q!!. 
\]
Since a pair of two edges is either an alignment, a crossing, or a nesting, we have $\ali(\pi)+\nes(\pi)+\cro(\pi)=\binom n2$. Thus
\[
\sum_{\pi\in\CM(2n)} q^{2\ali(\pi)+\cro(\pi)}
= q^{2\binom n2}\sum_{\pi\in\CM(2n)} q^{-2\nes(\pi)-\cro(\pi)}
= q^{2\binom n2} [2n-1]_{q^{-1}}!! = [2n-1]_q!!.
\]
\end{proof}

\begin{thm} \label{thm:comb}
We have
\[
h^*_n(x,y;q) = \sum_{\pi\in\fbm(n)} \wt(\pi).
\]
\end{thm}
\begin{proof}
Let $M(n)$ be the set of 4-tuples $(k,w,\sigma,Z)$ such that $0\le k\le \flr{n/2}$, $w$ is a word of length $n$ consisting of $2k$ 0's and $n-2k$ 1's, $\sigma\in \CM(2k)$, and $Z=(z_1,z_2,\dots,z_{n-2k})$ is a sequence such that $z_i$ is either $x$ or $-yq^{i-1}$ for each $i$.

For $\pi\in\fbm(n)$ we define $g(\pi)$ to be the 4-tuple $(k,w,\sigma,Z)\in M(n)$, where $k$ is the number of edges of $\pi$, $w=\bw(\pi)$, $\sigma$ is the induced complete matching of $\pi$, and $Z=(z_1,z_2,\dots, z_{n-2k})$ is the sequence such that $z_i=x$ if the $i$th fixed point is colored with $x$, and $z_i=-yq^{i-1}$ if the $i$th fixed point is colored with $y$. Here, the \emph{induced complete matching} of $\pi$ is the complete matching of $[2k]$ for which $i$ and $j$ form an edge if and only if the $i$th non-fixed point and the $j$th non-fixed point of $\pi$ form an edge. It is easy to see that $g$ is a bijection from $\fbm(n)$ to $M(n)$ such that if $g(\pi)=(k,w,\sigma,Z)$ with $Z=(z_1,z_2,\cdots,z_{n-2k})$ then
\[
\wt(\pi) = (-1)^k q^{2\binom k2} q^{2\ali(\sigma)+\cro(\sigma)} q^{\inv(w)} z_1z_2\cdots z_{n-2k}.
\]
Thus
\begin{align*}
\sum_{\pi\in\fbm(n)} \wt(\pi)
&= \sum_{(k,w,\sigma,Z)\in M(n)} (-1)^k q^{2\binom k2} q^{2\ali(\sigma)+\cro(\sigma)} q^{\inv(w)}z_1z_2\cdots z_{n-2k}.
\end{align*}
Here once $k$ is fixed $\sigma$ can be any complete matching of $[2k]$, $w$ can be any word consisting of $2k$ 0's and $n-2k$ 1's, and for $Z=(z_1,z_2,\cdots,z_{n-2k})$ each $z_i$ can be either $x$ or $-yq^{i-1}$.
Thus the sum of $q^{2\ali(\sigma)+\cro(\sigma)}$ for all such $\sigma$'s gives $[2k-1]_q!!$, the sum of $q^{\inv(w)}$ for all such $w$ gives $\qbinom n{2k}$, and the sum of $z_1z_2\cdots z_{n-2k}$ for all such $Z$ gives $x^{n-2k}(y/x;q)_{n-2k}$. This finishes the proof.
\end{proof}

\begin{prop} \label{prop:4-term*}
For $n\ge0$, we have
\[
h^*_{n+1} = (x-yq^n) h^*_n - q^{n-1}[n]_q h^*_{n-1}
+y q^{n-2}[n-1]_q(1-q^n) h^*_{n-2}.
\]
\end{prop}

\begin{proof}[Proof of Proposition~\ref{prop:4-term*}]
Let $W_-(n)$ be the sum of $\wt(\pi)$ for all $\pi\in\fbm(n)$ such that $n$ is not a fixed point. Let $W_x(n)$ (respectively $W_y(n)$) be the sum of $\wt(\pi)$ for all $\pi\in\fbm(n)$ such that $n$ is a fixed point colored with $x$ (respectively $y$). Then
\[
h^*_{n+1}(x,y;q) = \sum_{\pi\in\fbm(n+1)} \wt(\pi) = W_-(n+1) + W_x(n+1) + W_y(n+1).
\]
We claim that
\begin{align}
\label{eq:c1} W_x(n+1) &= x h^*_{n}(x,y;q), \\
\label{eq:c2} W_y(n+1) &= -yq^n (W_x(n)+W_y(n)) - yW_-(n),\\
\label{eq:c3} W_-(n+1) &= -q^{n-1}[n]_q h^*_{n-1}(x,y;q).
\end{align}

From \eqref{eq:wt} we easily get \eqref{eq:c1}.

For \eqref{eq:c3}, consider a matching $\pi\in\fbm(n+1)$ such that $n+1$ is connected with $i$ where $1\le i\le n$. Suppose that $\pi$ has $k$ edges and $n+1-2k$ fixed points. Let us compute the contribution of an edge or a fixed point, together with the edge $\{i,n+1\}$, to $2\ali(\pi)+\cro(\pi) + \inv(\bw(\pi))$. An edge with two integers less than $i$ contributes $2$ to $2\ali(\pi)$. An edge with exactly one integer less than $i$ contributes $1$ to $\cro(\pi)$. An edge with two integers greater than $i$ contributes nothing. Each fixed point of $\pi$ less than $i$ contributes $2$ to $\inv(\bw(\pi))$ together with the edge $\{i,n+1\}$. Each fixed point of $\pi$ greater than $i$ contributes $1$ to $\inv(\bw(\pi))$ together with the edge $\{i,n+1\}$. Thus the contribution of the edge $\{i,n+1\}$ to $2\ali(\pi)+\cro(\pi) + \inv(\bw(\pi))$ is equal to $i-1 + (n+1-2k)$.
Let $\sigma$ be the matching obtained from $\pi$ by removing the edge $\{i,n+1\}$. Then
\[
2\ali(\pi)+\cro(\pi) + \inv(\bw(\pi)) = 2\ali(\sigma)+\cro(\sigma) + \inv(\bw(\sigma)) +i-1 + (n+1-2k).
\]
Thus, using \eqref{eq:wt}, the above identity, and $2\binom k2 = 2\binom{k-1}2+2k-2$, we have $\wt(\pi) = -q^{n-1} q^{i-1} \wt(\sigma)$. Since $i$ can be any integer from $1$ to $n$ and $\sigma\in\fbm(n-1)$ we get \eqref{eq:c3}.

Now we prove \eqref{eq:c2}. Consider a matching $\pi\in\fbm(n+1)$ such that $n+1$ is a fixed point colored with $y$. Suppose that $\pi$ has $k$ edges with $2k$ non-fixed points $b_1<b_2<\dots<b_{2k}$. For $0\le i\le 2k+1$, let $a_i = b_i-b_{i-1}-1$, where $b_0=0$ and $b_{2k+1}=n$. Then $a_0+a_1+\cdots+a_{2k+1}=n-2k$. Let $\sigma$ be the matching obtained from $\pi$ by removing $n+1$. Then we have $\wt(\pi) = -yq^{n-2k}\wt(\sigma)$. We consider two cases.

Case 1: $a_0\ne 0$. Let $\tau$ be the matching obtained from $\sigma$ by changing $1$ into $n$ and decreasing the other integers by $1$. We color the $i$th fixed point of $\tau$ with the same color as the $i$th fixed point of $\sigma$. Then $\wt(\sigma) = q^{2k} \wt(\tau)$ and $\wt(\pi)=-yq^n\wt(\tau)$. Since $n$ is a fixed point in $\tau$ the sum of $\wt(\pi)$ in this case gives $-yq^n (W_x(n)+W_y(n))$.

Case 2: $a_0=0$. Note that
\[
\bw(\sigma)=0 \overbrace{1\cdots 1}^{a_1}0 \overbrace{1\cdots 1}^{a_2}0
\cdots 0\overbrace{1\dots 1}^{a_{2k}} 0\overbrace{1\dots 1}^{a_{2k+1}}.
\]
We define $\tau$ to be the matching with
\[
\bw(\tau)=\overbrace{1\cdots 1}^{a_1}0 \overbrace{1\cdots 1}^{a_2}0
\overbrace{1\cdots 1}^{a_3}0 \cdots 0\overbrace{1\dots 1}^{a_{2k+1}}0
\]
and the $i$th fixed point of $\tau$ is colored with the same color as the $i$th fixed point of $\sigma$. Then $\wt(\sigma) = q^{-n+2k}\wt(\tau)$ and $\wt(\pi) = -y\wt(\tau)$. Since $n$ is a non-fixed point in $\tau$, the sum of $\wt(\pi)$ in this case gives $- yW_-(n)$.
It is easy to see that \eqref{eq:c1}, \eqref{eq:c2}, and \eqref{eq:c3} imply the 4-term recurrence relation.
\end{proof}

Since the polynomials $h_n(x,y;q)$ satisfy a 4-term recurrence relation, they are 2-fold multiple orthogonal polynomials in $x$. By Viennot's theory, we can express the two moments $\LL^{(0)}(x^n)$ and $\LL^{(1)}(x^n)$ as a sum of weights of certain lattice paths.

A \emph{2-Motzkin path} is a lattice path consisting of an up step $(1,1)$, a horizontal step $(1,0)$, a down step $(1,-1)$, and a double down step $(1,-2)$, which starts at the origin and never goes below the $x$-axis. For $i=0,1$ let $\Mot_i(n)$ denote the set of 2-Motzkin paths of length $n$ with final height $i$. The \emph{weight} of $M\in\Mot_i(n)$ is the product of weights of all steps, where the weight of each step is defined as follows.
\begin{itemize}
\item An up step has weight $1$.
\item A horizontal step starting at level $i$ has weight $yq^i$.
\item A down step starting at level $i$ has weight $q^{i-1}(1-q^i)$.
\item A double down step starting at level $i$ has weight $-yq^{i-2}(1-q^i) (1-q^{i-1})$.
\end{itemize}
Then by Viennot's theory we have
\[
\LL^{(i)}(x^n) = \sum_{M\in\Mot_i(n)} \wt(M).
\]
Thus we obtain the following corollary from Theorem~\ref{thm:hermite_moments}.

\begin{cor}
For $n\ge 0$, we have
\begin{align*}
\sum_{M\in\Mot_0(n)} \wt(M) &= \sum_{k=0}^{\flr{n/2}} \qbinom{n}{2k} (q;q^2)_k y^{n-2k},\\
\sum_{M\in\Mot_1(n)} \wt(M) &= (1-q^n) \sum_{k=0}^{\flr{n/2}} \qbinom{n-1}{2k} (q;q^2)_k y^{n-2k-1}.
\end{align*}
\end{cor}

It would be interesting to prove the above corollary combinatorially.

\section{An addition theorem}
\label{sec:addition_theorem}

A Hermite polynomial addition theorem is
\begin{equation} \label{q=1}
H_n(x+y)=\sum_{k=0}^n \binom{n}{k} H_k(x/a)a^kH_{n-k}(y/b)b^{n-k}
\end{equation}
where $a^2+b^2=1$. We give a $q$-analogue of this result (Proposition~\ref{prop:addthm}) using the discrete big $q$-Hermite polynomials.
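For example (a verification added for illustration), the case $n=2$ of \eqref{q=1} shows exactly where the condition $a^2+b^2=1$ enters: with $H_0(t)=1$, $H_1(t)=2t$, and $H_2(t)=4t^2-2$,

```latex
% n=2 case of the Hermite addition theorem; the condition a^2+b^2=1
% absorbs the constant terms.
\[
\sum_{k=0}^2 \binom{2}{k} H_k(x/a)a^k H_{2-k}(y/b)b^{2-k}
= (4y^2-2b^2)+8xy+(4x^2-2a^2)
= 4(x+y)^2-2(a^2+b^2) = H_2(x+y).
\]
```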
We will use $h_n(x,y;q)$ as our $q$-version of $H_n(x-y)$,
\[
\lim_{q\to1} h^*_n(x,y;q)=\lim_{q\to1} \frac{h_n(x\sqrt{1-q},y\sqrt{1-q};q)}{(1-q)^{n/2}}=H_n(x-y),
\]
and $h_n(x/a,0;q),$ the discrete $q$-Hermite polynomial, as our version of $H_n(x/a)$,
\[
\lim_{q\to1} h^*_n(x,0;q)=\lim_{q\to1} \frac{h_n(x\sqrt{1-q},0;q)}{(1-q)^{n/2}}=H_n(x).
\]
Another $q$-version of $b^{n-k}H_{n-k}(y/b)$, where $a^2+b^2=1$, is given by $p_{n-k}(y,a;q)$, where
\[
p_{t}(y,a;q)=\sum_{m=0}^{[t/2]} \gauss{t}{2m}(q;q^2)_m a^{2m}(1/a^2;q^2)_m y^{t-2m} q^{\binom{t-2m}{2}},
\]
so that
\[
\lim_{q\to1} \frac{p_t(y\sqrt{1-q},a;q)} {(1-q)^{t/2}} =b^tH_t(y/b).
\]
The result is

\begin{prop} \label{prop:addthm}
For $n\ge 0$,
\[
h_n(x,y;q)=(-1)^n\sum_{k=0}^n \gauss{n}{k} h_k(x/a,0;q) (-a)^k p_{n-k}(y,a;q).
\]
\end{prop}
\begin{proof}
The generating function of $p_n$ is
\[
F(y,a,w)=\sum_{n=0}^\infty \frac{p_n(y,a;q)}{(q)_n} w^n=
\frac{(w^2;q^2)_\infty (-yw)_\infty}{(a^2w^2;q^2)_\infty}.
\]
If
\[
G(x,y,t)= \frac{(t^2;q^2)_\infty (yt)_\infty}{(xt)_\infty}
\]
is the discrete big $q$-Hermite generating function, then
\[
G(x,y,-t)= G(x/a,0,-at) F(y,a,t),
\]
which gives Proposition~\ref{prop:addthm}.
\end{proof}

\bibliographystyle{abbrv}
% arXiv:1403.0053 -- ``Bootstrapping and Askey-Wilson polynomials'' (math.CA; math.CO).
% Abstract: The mixed moments for the Askey-Wilson polynomials are found using a
% bootstrapping method and connection coefficients. A similar bootstrapping idea on
% generating functions gives a new Askey-Wilson generating function. An important
% special case of this hierarchy is a polynomial which satisfies a four term
% recurrence, and its combinatorics is studied.
https://arxiv.org/abs/1605.03322
On tiling the integers with $4$-sets of the same gap sequence
Partitioning a set into similar, if not identical, parts is a fundamental research topic in combinatorics. The question of partitioning the integers in various ways has been considered throughout history. Given a set $\{x_1, \ldots, x_n\}$ of integers where $x_1<\cdots<x_n$, let the {\it gap sequence} of this set be the nondecreasing sequence $d_1, \ldots, d_{n-1}$ where $\{d_1, \ldots, d_{n-1}\}$ equals $\{x_{i+1}-x_i:i\in\{1,\ldots, n-1\}\}$ as a multiset. This paper addresses the following question, which was explicitly asked by Nakamigawa: can the set of integers be partitioned into sets with the same gap sequence? The question is known to be true for any set where the gap sequence has length at most two. This paper provides evidence that the question is true when the gap sequence has length three. Namely, we prove that given positive integers $p$ and $q$, there is a positive integer $r_0$ such that for all $r\geq r_0$, the set of integers can be partitioned into $4$-sets with gap sequence $p, q$, $r$.
\section{Introduction}

Let $[n]$ denote the set $\{1, \ldots, n\}$ and let $[a, b]$ denote the set $\{a, \ldots, b\}$. Note that $[1, 0]=\emptyset$. An $n$-set is a set of size $n$.

Partitioning a set into similar, if not identical, parts is a fundamental research topic in combinatorics. In the literature, it is typically said that $T$ {\it tiles} $S$ if the set $S$ can be partitioned into parts that are all ``similar'' to $T$ in some sense. For example, Golomb initiated the study of tilings of the checker board with ``polyominoes'' in 1954~\cite{1954Go}, and it has attracted a vast audience of both mathematicians and non-mathematicians. See the book by Golomb~\cite{1994Go} for recent developments regarding this particular problem.

The question of partitioning the integers $\ZZ$ (and the positive integers $\ZZ^+$) in various ways has been considered throughout history. For two sets $T$ and $S$ where $T\subseteq S$ and a group $G$ acting on $S$, we say that ``$T$ tiles $S$ under $G$'' if $S$ can be partitioned into copies that are obtainable from $T$ via $G$; namely, there is a subset $X$ of $G$ such that $S=\amalg_{\gamma\in X}\gamma(T)$. Tilings of $\ZZ$ and $\ZZ^+$ under translation have already been extensively studied~\cite{1950Br,1967Lo}. It is known that a set $S$ of integers tiles $\ZZ^+$ under translation if and only if $S$ tiles some interval of $\ZZ$ under translation. In particular, a $3$-set $S$ tiles $\ZZ^+$ under translation if and only if the elements of $S$ form an arithmetic progression. It is easy to see that an arbitrary $2$-set of integers tiles an interval of $\ZZ$ (and therefore tiles $\ZZ$) under translation, and there are $3$-sets of integers that do not tile $\ZZ$ under translation.
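For instance (an example added here for concreteness; the fact itself is classical), the $3$-set $\{0,1,3\}$ does not tile $\ZZ$ under translation alone, which can be seen by a short case analysis:

```latex
% Case analysis showing that {0,1,3} does not tile Z under translation.
\begin{itemize}
\item The translate covering $0$ must be $\{0,1,3\}$, $\{-1,0,2\}$, or $\{-3,-2,0\}$.
\item If it is $\{0,1,3\}$, then every translate covering $2$, namely $\{2,3,5\}$,
$\{1,2,4\}$, or $\{-1,0,2\}$, overlaps it.
\item If it is $\{-1,0,2\}$, then every translate covering $1$, namely $\{1,2,4\}$,
$\{0,1,3\}$, or $\{-2,-1,1\}$, overlaps it.
\item If it is $\{-3,-2,0\}$, then the only non-overlapping translate covering $1$
is $\{1,2,4\}$, after which no translate can cover $3$.
\end{itemize}
```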
However, if both translation and reflection are allowed, then Sands and Swierczkowski~\cite{1960SaSw} provided a short proof that an arbitrary $3$-set of real numbers tiles $\RR$ (simplifying a proof in~\cite{1958KoSe}), and on the way they also proved that an arbitrary $3$-set of integers tiles $\ZZ$. It is also known that not all $4$-sets of integers tile $\ZZ$ under translation and reflection. In his book~\cite{1976Ho}, Honsberger strengthened the previous result with a simple greedy algorithm by showing that an arbitrary $3$-set of integers tiles an interval of $\ZZ$ under translation and reflection. Meyerowitz~\cite{1988Me} analyzed this algorithm and gave a constructive proof that the algorithm produces a tiling of an interval of $\ZZ$, and also proved that a $3$-set of real numbers tiles $\RR^+$, strengthening an aforementioned result. This algorithm does not necessarily find the shortest interval of $\ZZ$ that a $3$-set of integers can tile; there has been effort in trying to determine the shortest such interval~\cite{1981AlHo,2000Na}, and in some cases the shortest such interval is known. Gordan~\cite{1980Go} generalized the problem to higher dimensions. He proved that a $3$-set of $\ZZ^n$ tiles $\ZZ^n$ under the Euclidean group actions (translation, reflection, and rotation), and that there is a set of size $4n-2\lfloor{n/2}\rfloor$ of $\ZZ^n$ that does not tile $\ZZ^n$ under the Euclidean group actions. More information regarding higher dimensions is in Section~\ref{sec:open}. There is also a paper~\cite{2015Na} that studies tilings of the cyclic group $\ZZ_n$. This paper focuses on partitioning $\ZZ$ into sets with the same ``gap sequence'' and ``gap length'', which is the term used in~\cite{2015Na} and~\cite{2005Na}, respectively. 
Given a set $\{x_1, \ldots, x_n\}$ of integers where $x_1<\cdots<x_n$, let the {\it gap sequence} of this set be the nondecreasing sequence $d_1, \ldots, d_{n-1}$ where $\{d_1, \ldots, d_{n-1}\}$ equals $\{x_{i+1}-x_i:i\in\{1,\ldots, n-1\}\}$ as a multiset. Note that the gap sequence of a set with $n$ elements has length $n-1$. Roughly speaking, in addition to reflecting the order of the gaps of a given set, any permutation of the order of the gaps of the set is allowed. In~\cite{2005Na}, the following question was explicitly asked: \begin{ques}[\cite{2005Na}]\label{ques} For a gap sequence $S$ of length $n-1$, can $\ZZ$ be partitioned into $n$-sets with the same gap sequence $S$? \end{ques} Since allowing permutations of the order of the gaps of a given set does not provide additional help (when reflections of the gaps are already allowed), the previous results imply that the answer to this question is positive when $n\in\{1, 2, 3\}$. In this paper, we prove the following theorem, which provides evidence that the answer is also positive when $n=4$. Corollary~\ref{cor:easy} is an immediate consequence of the theorem. \begin{thm}\label{thm:main} There is an interval of the integers that can be partitioned into $4$-sets with the same gap sequence $p, q, r$, if $q\geq p$ and $r\geq \max\{4q(4q-1), {\frac{1}{\gcd(p, q)}}({5p+4q}-\gcd(p,q))({4p+3q}-\gcd(p,q)) \}$. \end{thm} \begin{cor}\label{cor:easy} There is an interval of the integers that can be partitioned into $4$-sets with the same gap sequence $p, q, r$, if $r\geq 63(\max\{p, q\})^2$. \end{cor} Note that for the sake of presentation, we omit some improvements on the constants of the threshold on $r$. Our proof follows the ideas in~\cite{2000Na,2005Na}, where partitions of $\ZZ^2$ are used to aid the partition of $\ZZ$. We develop and push the method further and generalize it to $\ZZ^3$. In Section~\ref{sec:lemmas}, we show that we can partition certain subsets of $\ZZ^3$ into smaller subsets of $\ZZ^3$ that we call {\it blocks}.
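The coverings in Section~\ref{sec:lemmas} are certified by explicit lists of blocks, and such certificates are easy to check by machine. An illustrative sketch, not part of the paper; the three blocks below are copied from case $(i)$ of Lemma~\ref{lem:block1}, which covers $S_1=\{(1,1),(1,2),(2,2)\}$ with height $4$:

```python
from itertools import permutations

E1, E2, E3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def is_block(points, dirs):
    """True if `points` can be ordered v1, ..., v4 so that the consecutive
    differences are exactly the three vectors in `dirs`, in some order."""
    dirs = set(dirs)
    for order in permutations(points):
        diffs = {tuple(b - a for a, b in zip(u, v))
                 for u, v in zip(order, order[1:])}
        if diffs == dirs:
            return True
    return False

# The three blocks listed for case (i) of the first lemma:
B1 = [(1, 1, 1), (1, 2, 1), (2, 2, 1), (2, 2, 2)]
B2 = [(1, 1, 2), (1, 2, 2), (1, 2, 3), (2, 2, 3)]
B3 = [(1, 1, 3), (1, 1, 4), (1, 2, 4), (2, 2, 4)]

assert all(is_block(B, [E1, E2, E3]) for B in (B1, B2, B3))
# Together they partition S1 x [4]:
S1 = [(1, 1), (1, 2), (2, 2)]
target = {(x, y, z) for (x, y) in S1 for z in range(1, 5)}
assert set(B1) | set(B2) | set(B3) == target
assert len(set(B1)) + len(set(B2)) + len(set(B3)) == 12
print("certificate for the first covering verified")
```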
In Section~\ref{sec:main}, we demonstrate how to use the lemmas in Section~\ref{sec:lemmas} to tile an interval of $\ZZ$ with $4$-sets with the desired gap sequence. We finish the paper with some open questions in Section~\ref{sec:open}. \section{Lemmas}\label{sec:lemmas} Given three vectors $d_1, d_2, d_3$ in $\ZZ^3$, a $4$-set $\{v_1, v_2, v_3, v_4\}$ of $\ZZ^3$ in which $\{v_4-v_3, v_3-v_2, v_2-v_1\}=\{d_1, d_2, d_3\}$ is called a {\it $(d_1, d_2, d_3)$-block}. For a set $V$ of triples of vectors in $\ZZ^3$, we say a set $S$ of $\ZZ^2$ can be {\it covered} (with {\it height $h(S)$}) by $V$-blocks if there exists an integer $h(S)$ such that $S\times h(S)=\{(x, y, z):(x, y)\in S, z\in[h(S)]\}$ can be partitioned into blocks from $V$. If $V$ only has one vector $v$, then we simply write ``covered by $v$-blocks'' instead of ``covered by $\{v\}$-blocks''. Let $e_1=(1, 0, 0)$, $e_2=(0, 1, 0)$, and $e_3=(0, 0, 1)$ be unit vectors in $\ZZ^3$. By stretching $X\subset \ZZ^3$ in the $e_1$, $e_2$, and $e_3$ direction by a real number $w$, we obtain $\{(wx, y, z): (x, y, z)\in X\}$, $\{(x, wy, z): (x, y, z)\in X\}$, and $\{(x, y, wz): (x, y, z)\in X\}$, respectively. \subsection{When $q\geq 2p$}\label{subsec:qgeq2p} \begin{lem}\label{lem:block1} The following sets of $\ZZ^2$ can be covered by $(e_1, e_2, e_3)$-blocks: \begin{enumerate}[$(i)$] \item $S_1=\{(1, 1), (1, 2), (2, 2)\}$ with $h(S_1)=4$ \item $S_2=\{(1, 1), (2, 1), (2, 2)\}$ with $h(S_2)=4$ \item $S_3=[3]\times [2]$ with $h(S_3)=4$ \item $S_4=[k]\times[4]$ for $k\geq 2$ with $h(S_4)=20$ \item $S_5=([2]\times [4])\cup\{(3, 1),(3,2)\}$ with $h(S_5)=4$ \item $S_6=([2]\times [4])\cup\{(3, 4)\}$ with $h(S_6)=4$ \item $S_7=([ k]\times[4])\cup\{(k+1, 4)\}$ for $k\geq 2$ with $h(S_7)=20$ \needed{ \item $S_{10}=[4]\times [2]$\hfill NECESSARY? \item $S_{11}=[5]\times [2]$\hfill NECESSARY? 
\item $S_{12}=[3]\times [3]$\hfill NECESSARY?} \end{enumerate} \end{lem} \sout{ \begin{proof} See Figure~\ref{fig:block1} for an illustration of some cases. $(i)$: \begin{center} $B_1=\{(1, 1, 1), (1, 2, 1), (2, 2, 1), (2, 2, 2)\}$, $B_2=\{(1, 1, 2), (1, 2, 2), (1, 2, 3), (2, 2, 3)\}$, $B_3=\{(1, 1, 3), (1, 1, 4), (1, 2, 4), (2, 2, 4)\}$. \end{center} $(ii)$: \begin{center} $B_1=\{(1, 1, 1), (2, 1, 1), (2, 2, 1), (2, 2, 2)\}$, $B_2=\{(1, 1, 2), (2, 1, 2), (2, 1, 3), (2, 2, 3)\}$, $B_3=\{(1, 1, 3), (1, 1, 4), (2, 1, 4), (2, 2, 4)\}$. \end{center} $(iii)$: Combine $S_1$ and the block obtained by shifting $S_2$ by $e_1$. $(iv)$: If $k$ is even, then we can do better and obtain $h(S_4)=5$. It is not hard to see we can fill $S_4$ with blocks of $[ 2]\times [ 4]$ by putting them side by side, so it is sufficient to show how to fill $[ 2]\times [ 4]$. \begin{center} $B_1=\{(1, 1, 1), (2, 1, 1), (2, 1, 2), (2, 2, 2)\}$, $B_2=\{(1, 2, 1), (2, 2, 1), (2, 3, 1), (2, 3, 2)\}$, $B_3=\{(1, 3, 1), (1, 4, 1), (2, 4, 1), (2, 4, 2)\}$, $B_4=\{(1, 1, 2), (1, 2, 2), (1, 2, 3), (2, 2, 3)\}$, $B_5=\{(1, 3, 2), (1, 4, 2), (1, 4, 3), (2, 4, 3)\}$, $B_6=\{(1, 1, 3), (2, 1, 3), (2, 1, 4), (2, 2, 4)\}$, $B_7=\{(1, 3, 3), (2, 3, 3), (2, 3, 4), (2, 4, 4)\}$, $B_8=\{(1, 1, 4), (1, 1, 5), (2, 1, 5), (2, 2, 5)\}$, $B_9=\{(1, 2, 4), (1, 2, 5), (1, 3, 5), (2, 3, 5)\}$, $B_{10}=\{(1, 3, 4), (1, 4, 4), (1, 4, 5), (2, 4, 5)\}$. \end{center} If $k$ is odd, then $h(S_4)=20$. It is not hard to see we can fill $S_4$ with two copies of $S_3$ (which was already shown to be covered in $(iii)$) by putting one on top of another and then using blocks of $[2]\times [4]$ side by side. Note that the least common multiple of $h(S_3)=4$ and $h([2]\times[4])=5$ is $20$. 
$(v)$: \begin{center} $B_1=\{(1, 1, 1), (1, 1, 2), (1, 2, 2), (2, 2, 2)\}$, $B_2=\{(1, 2, 1), (2, 2, 1), (2, 3, 1), (2, 3, 2)\}$, $B_3=\{(2, 1, 1), (3, 1, 1), (3, 2, 1), (3, 2, 2)\}$, $B_4=\{(1, 3, 1), (1, 4, 1), (2, 4, 1), (2, 4, 2)\}$, $B_5=\{(2, 1, 2), (3, 1, 2), (3, 1, 3), (3, 2, 3)\}$, $B_6=\{(1, 3, 2), (1, 4, 2), (1, 4, 3), (2, 4, 3)\}$, $B_7=\{(1, 1, 3), (1, 1, 4), (1, 2, 4), (2, 2, 4)\}$, $B_8=\{(1, 2, 3), (2, 2, 3), (2, 3, 3), (2, 3, 4)\}$, $B_9=\{(2, 1, 3), (2, 1, 4), (3, 1, 4), (3, 2, 4)\}$, $B_{10}=\{(1, 3, 3), (1, 3, 4), (1, 4, 4), (2, 4, 4)\}$. \end{center} $(vi)$: \begin{center} $B_1=\{(1, 1, 1), (2, 1, 1), (2, 2, 1), (2, 2, 2)\}$, $B_2=\{(1, 2, 1), (1, 2, 2), (1, 3, 2), (2, 3, 2)\}$, $B_3=\{(1, 3, 1), (1, 4, 1), (1, 4, 2), (2, 4, 2)\}$, $B_4=\{(2, 3, 1), (2, 4, 1), (3, 4, 1), (3, 4, 2)\}$, $B_5=\{(1, 1, 2), (2, 1, 2), (2, 1, 3), (2, 2, 3)\}$, $B_6=\{(1, 1, 3), (1, 1, 4), (2, 1, 4), (2, 2, 4)\}$, $B_7=\{(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)\}$, $B_8=\{(1, 3, 3), (1, 4, 3), (1, 4, 4), (2, 4, 4)\}$, $B_9=\{(2, 3, 3), (2, 4, 3), (3, 4, 3), (3, 4, 4)\}$. \end{center} $(vii)$: Assume $k$ is even. It is not hard to see we can fill $S_7$ with one $S_6$ (which was already shown to be covered in $(vi)$) and then using blocks of $[ 2]\times [ 4]$ side by side. Note that the least common multiple of $h(S_6)=4$ and $h([2]\times[4])=5$ is $20$. Assume $k$ is odd. When $k=3$, it is not hard to see that we can fill $S_7$ with one $S_1$ (which was already shown to be covered in $(i)$) and one $S_5$ (which was already shown to be covered in $(v)$). When $k>3$, attach blocks of $[2]\times [4]$ side by side to the configuration when $k=3$. Note that the least common multiple of $h(S_6)=4$ and $h([2]\times[4])=5$ is $20$. \needed{ \\$(x)$: $h(S_{10})=5$. NECESSARY?\\ $(xi)$: $h(S_{11})=8$. NECESSARY?\\ $(xii)$: $h(S_{12})=4$. 
NECESSARY?} \end{proof} } \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{fig-block1.pdf} \caption{Illustration for some cases of Lemma~\ref{lem:block1}.} \label{fig:block1} \end{center} \end{figure} \begin{lem}\label{lem:layer1} Given $q\geq 2p$, the following sets can be covered by $(pe_1, e_2, e_3)$-blocks: \begin{enumerate}[(i)] \item $X_1=[ q]\times[4]$ with $h(X_1)=20$ \item $X_2=([ q]\times[4])\cup\{(q+1, 4)\}$ with $h(X_2)=20$ \end{enumerate} \end{lem} \sout{ \begin{proof} It is sufficient to show that $X_1$ and $X_2$ can be covered by $(e_1, e_2, e_3)$-blocks, but stretched by $p$. Let $a=\lfloor{\frac{q}{p}}\rfloor$ and $b=q-ap$ so that $b\in[0, p-1]$. Note that $a\geq 2$ since $q\geq 2p$. Obtain $P^{a+1}_4$, $P^{a}_4$, and $P^{a}_7$ by stretching $S_4$ with $k=a+1$, $S_4$ with $k=a$, and $S_7$ with $k=a$, respectively, from Lemma~\ref{lem:block1} in the $e_1$ direction by $p$; in other words, $P^{a+1}_4=\{(1+ip, y):i\in[0, a], y\in[4]\}$, $P^{a }_4=\{(1+ip, y):i\in[0, a-1], y\in[4]\}$, and $P^{a}_7=P^{a}_4\cup\{(1+ap, 4)\}$. Let $P^*_4=\{P^{a+1}_4+(i,0): i\in[0, b-1]\}$ and $P^{**}_4=\{P^a_4+(i,0):i\in[b+1, p-1]\}$. $(i)$: Now $P^*_4$, $P^a_4+(b,0)$, $P^{**}_4$ is a partition of $X_1$. Since $S_4$ can be covered with height $20$, we conclude that $X_1$ can be covered with height $20$. $(ii)$: Now $P^*_4$, $P^a_7+(b,0)$, $P^{**}_4$ is a partition of $X_2$. Since $S_4$ and $S_7$ can be covered with height $20$, we conclude that $X_2$ can be covered with height $20$. See Figure~\ref{fig:layer1} for an illustration. \end{proof} } \begin{figure}[h] \begin{center} \includegraphics[scale=1]{fig-layer1.pdf} \caption{Figure for Lemma~\ref{lem:layer1}. Same shape and same shade means same block. 
Same shape and different shade means same type of block, but different block.} \label{fig:layer1} \end{center} \end{figure} \subsection{When $p\leq q\leq 2p$}\label{subsec:qleq2p} \begin{lem}\label{lem:block2} The following sets of $\ZZ^2$ can be covered by $(e_1, e_2-e_1, e_3)$-blocks: \begin{enumerate}[$(i)$] \item $T_1=\{(1, 1), (1, 2), (2, 1)\}$ with $h(T_1)=4$ \item $T_2=\{(1, 2), (2, 1), (2, 2)\}$ with $h(T_2)=4$ \item $T_3=\{(1, 2), (1,3), (2, 1), (2, 2)\}$ with $h(T_3)=2$ \item $T_4=[1,2]\times [1,2]$ with $h(T_4)=2$ \item $T_5=\{(1,1), (1,2), (1,3), (2,1), (2,2), (3,1)\}$ with $h(T_5)=2 \needed{ \item $T_5=[1,3]\times [1,3]$\hfill NECESSARY?} \end{enumerate} \end{lem} \sout{ \begin{proof} See Figure~\ref{fig:block2} for an illustration. $(i)$: \begin{center} $B_1=\{(1, 1, 1), (2, 1, 1), (1, 2, 1), (1, 2, 2)\}$, $B_2=\{(1, 1, 2), (2, 1, 2), (2, 1, 3), (1, 2, 3)\}$, $B_3=\{(1, 1, 3), (1, 1, 4), (2, 1, 4), (1, 2, 4)\}$. \end{center} $(ii)$: \begin{center} $B_1=\{(2, 1, 1), (1, 2, 1), (2, 2, 1), (2, 2, 2)\}$, $B_2=\{(2, 1, 2), (1, 2, 2), (1, 2, 3), (2, 2, 3)\}$, $B_3=\{(2, 1, 3), (2, 1, 4), (1, 2, 4), (2, 2, 4)\}$. \end{center} $(iii)$: \begin{center} $B_1=\{(2, 1, 1), (2, 1, 2), (1, 2, 2), (2, 2, 2)\}$, $B_2=\{(1, 2, 1), (2, 2, 1), (1, 3, 1), (1, 3, 2)\}$. \end{center} $(iv)$: \begin{center} $B_1=\{(1, 1, 1), (1, 1, 2), (2, 1, 2), (1, 2, 2)\}$, $B_2=\{(2, 1, 1), (1, 2, 1), (2, 2, 1), (2, 2, 2)\}$. \end{center} $(v)$: \begin{center} $B_1=\{(1, 1, 1), (1, 1, 2), (2, 1, 2), (1, 2, 2)\}$, $B_2=\{(2, 1, 1), (3, 1, 1), (3, 1, 2), (2, 2, 2)\}$, $B_3=\{(1,2,1), (2,2,1), (1,3,1), (1,3,2)\}$. \end{center} \needed{ $(v)$: $h(T_5)=4$. 
NECESSARY?} \end{proof} } \begin{figure}[h] \begin{center} \includegraphics[scale=0.85]{fig-block2.pdf} \caption{Illustration for Lemma~\ref{lem:block2}.} \label{fig:block2} \end{center} \end{figure} \begin{lem}\label{lem:layer2} Given $p\leq q\leq 2p$, the following sets can be covered by $\{(pe_1, e_2-pe_1, e_3), (qe_1, e_2-qe_1, e_3)\}$-blocks: \begin{enumerate}[(i)] \item $Y_1=([p+q]\times[4])\cup \{(x,5): x\in[p]\}$ with $h(Y_1)=4$ \item $Y_2=([p+q]\times[3])\cup \{(x, 4): x\in[p]\}$ with $h(Y_2)=4$ \end{enumerate} \end{lem} \sout{ \begin{proof} It is sufficient to show that $Y_1$ and $Y_2$ can be covered by $(e_1, e_2-e_1, e_3)$-blocks, but stretched by either $p$ or $q$. Let $q=p+t$ so that $t\in[0, p]$. Obtain $P_1$, $P_2$, $P_3$, $P_4$, and $P_5$ by stretching $T_1$, $T_2$, $T_3$, $T_4$, and $T_5$ respectively, from Lemma~\ref{lem:block2} in the $e_1$ direction by $p$; in other words $P_1=\{(1, 1), (p+1, 1), (1, 2)\}$, $P_2=\{(1, 2), (p+1, 1), (p+1, 2)\}$, $P_3=\{(1, 2), (1, 3), (1+p, 1), (1+p, 2)\}$, $P_4=\{(1, 1), (1, 2), (1+p, 1), (1+p, 2)\}$, $P_5=\{(1,1), (1,2), (1,3), (1+p,1), (1+p,2), (1+2p,1)\}$. Obtain $Q_1$ by stretching $T_1$ from Lemma~\ref{lem:block2} in the $e_1$ direction by $q$; in other words $Q_1=\{(1, 1), (q+1, 1), (1, 2)\}$. $(i)$: Let $P^*_1=\{P_1+(i,0):i\in[t,p-1]\}$, $P^*_2=\{P_2+(i,1):i\in[t,p-1]\}$, $P^*_3=\{P_3+(i,1):i\in[p,p+t-1]\}$, $P^*_5=\{P_5+(i,0):i\in[0,t-1]\}$, and $Q^*_1=\{Q_1+(i,3):i\in[0,p-1]\}$. Now $P^*_1, P^*_2, P^*_3, P^*_5, Q^*_1$ is a partition of $Y_1$. $(ii)$: Let $P^{**}_1=\{P_1+(i,0):i\in[0,t-1]\}$, $P^{**}_3=\{P_3+(i,0):i\in[p,p+t-1]\}$, $P^{**}_4=\{P_4+(i,0):i\in[t,p-1]\}$, and $Q^{**}_1=\{Q_1+(i,2):i\in[0,p-1]\}$. Now $P^{**}_1, P^{**}_3, P^{**}_4, Q^{**}_1$ is a partition of $Y_2$. See Figure~\ref{fig:layer2} for an illustration. \end{proof} } \begin{figure}[h] \begin{center} \includegraphics[scale=1]{fig-layer2.pdf} \caption{Figure for Lemma~\ref{lem:layer2}. Same shape and same shade means same block. 
Same shape and different shade means same type of block, but different block.} \label{fig:layer2} \end{center} \end{figure} \section{Main result}\label{sec:main} A set $S\subset \ZZ^2$ is called a {\it layer}, and $S$ is an {\it $a$-nice layer} if $S$ is of the form $([a]\times[b])\cup\{(i,b+1):i\in[c]\}$ where $a, b, c$ are integers. Given an element $(x, y)\in S$, the {\it row} of $(x, y)$ is the set of elements in $S$ with the same second coordinate. Note that the sets $X_1$ and $X_2$ from Lemma~\ref{lem:layer1} are $q$-nice layers and the sets $Y_1$ and $Y_2$ from Lemma~\ref{lem:layer2} are $(p+q)$-nice layers. For convenience, we will say a set with gap sequence $d_1, \ldots, d_{n-1}$ is a $(d_{\sigma(1)}, \ldots, d_{\sigma(n-1)})$-set for any permutation $\sigma$ of $[n-1]$. \begin{lem}\label{lem:new} For each $i\in[n]$, let $S_i$ be an $a$-nice layer that can be covered with height $h(S_i)$ by $V$-blocks, and let $l=\lcm{_{i\in[n]}\{h(S_i)\}}$. For $r\geq 1-d+d\sum_{i\in[n]}|S_i|$ and positive integers $d, p, q$ with $q\geq p$, the set $\bigcup_{j\in[l]}(d[\sum_{i\in[n]}|S_i|]+(j-1)r)$ can be partitioned into \begin{enumerate}[$(i)$] \item $(dp, da, r)$-sets when $V=\{(pe_1, e_2, e_3)\}$. \item $(dp, d(a-p), r)$-sets and $(dq,d(a-q),r)$-sets when $V=\{(pe_1, e_2-pe_1, e_3),(qe_1, e_2-qe_1, e_3)\}$. \end{enumerate} \end{lem} \sout { \begin{proof} Let $<$ be an ordering of the elements of $\{(S_i\times l, i): i\in[n]\}$ such that $((x_1, y_1, z_1), i_1)<((x_2, y_2, z_2), i_2)$ if $(a)$ $z_1<z_2$ or $(b)$ $z_1=z_2$ and $i_1<i_2$ or $(c)$ $z_1=z_2$, $i_1=i_2$, and $y_1<y_2$ or $(d)$ $z_1=z_2$, $i_1=i_2$, $y_1=y_2$, and $x_1<x_2$. This ordering $<$ gives a natural bijection $\varphi$ between $\{(S_i\times l, i): i\in[n]\}$ and $\bigcup_{j\in[l]}(d[\sum_{i\in[n]}|S_i|]+(j-1)r)$. Note that the condition $r\geq 1-d+d\sum_{i\in[n]}|S_i|$ is needed to ensure that $\varphi$ is a bijection. 
$(i)$ Assume each $S_i$ can be covered by $(pe_1, e_2, e_3)$-blocks, and let $u$ and $v$ be two elements of one particular block. If $|u-v|=pe_1$, then $|\varphi(u)-\varphi(v)|=dp$ since $u$ and $v$ are in the same row. If $|u-v|=e_2$, then $|\varphi(u)-\varphi(v)|=da$ since the lower row of $u$ and $v$ has exactly $a$ elements. If $|u-v|=e_3$, then $|\varphi(u)-\varphi(v)|=r$ since $u$ and $v$ must be in different layers. Therefore, $\bigcup_{j\in[l]}(d[\sum_{i\in[n]}|S_i|]+(j-1)r)$ can be partitioned into $(dp, da, r)$-sets. $(ii)$ Assume each $S_i$ can be covered by $\{(pe_1, e_2-pe_1, e_3),(qe_1,e_2-qe_1,e_3)\}$-blocks, and let $u$ and $v$ be two elements of a $(pe_1,e_2-pe_1,e_3)$-block. If $|u-v|=pe_1$, then $|\varphi(u)-\varphi(v)|=dp$ since $u$ and $v$ are in the same row. If $|u-v|=e_2-pe_1$, then $|\varphi(u)-\varphi(v)|=d(a-p)$ since the lower row of $u$ and $v$ has exactly $a$ elements. If $|u-v|=e_3$, then $|\varphi(u)-\varphi(v)|=r$ since $u$ and $v$ must be in different layers. The case when $u$ and $v$ are two elements of a $(qe_1,e_2-qe_1,e_3)$-block is analogous. Therefore, $\bigcup_{j\in[l]}(d[\sum_{i\in[n]}|S_i|]+(j-1)r)$ can be partitioned into $(dp, d(a-p),r)$-sets and $(dq, d(a-q), r)\}$-sets. \end{proof} } \begin{lem}\label{lem:new2} Given positive integers $r_1$ and $r_2$, let $d=\gcd(r_1, r_2)$, and also let $p$ and $q$ be positive integers such that $q\geq p$ and $p/d$ and $q/d$ are also integers. For all integers $r\geq d(r_1/d-1)(r_2/d-1)$, there is an interval of $\ZZ$ that can be partitioned into $(p, q, r)$-sets if $L_1$ and $L_2$ is a layer of size $r_1/d$ and $r_2/d$, respectively, and both $L_1$ and $L_2$ are \begin{enumerate}[$(i)$] \item $(q/d)$-nice layers that can be covered by $(pe_1/d, e_2,e_3)$-blocks. \item $(p/d+q/d)$-nice layers that can be covered by $\{(pe_1/d, e_2-pe_1/d,e_3),(qe_1/d, e_2-qe_1/d, e_3)\}$-blocks. \end{enumerate} \end{lem} { \begin{proof} Let $l=\lcm\{h(L_1),h(L_2)\}$. 
An integer $s$ is {\it good} if $(r_1/d-1)(r_2/d-1)\leq s\leq {\frac{r-1+d}{ d}}$. Note that a good integer $s$ satisfies $r\geq 1-d+ds$. Since $r_1/d$ and $r_2/d$ are coprime, a good $s$ can be expressed as a linear combination of $r_1/d$ and $r_2/d$ with nonnegative coefficients. Therefore, given a good $s$, the set $T(s)=\bigcup_{j\in[l]}(d[s]+(j-1)r)$ can be partitioned into $(p, q, r)$-sets by Lemma~\ref{lem:new} since $(p, q, r)$-sets are also $(q, p, r)$-sets, for both $(i)$ and $(ii)$. Let $r'=r-d\lfloor{r/d}\rfloor$ so that $r'\in[0,d-1]$. If $r'\neq 0$, then both $\lfloor{r\over d}\rfloor$ and $\lfloor{r\over d}\rfloor+1$ are good, and therefore by the above paragraph, both $T(\lfloor{r\over d}\rfloor)$ and $T(\lfloor{r\over d}\rfloor+1)$ can be partitioned into $(p, q, r)$-sets. Now, $\bigcup_{i\in[r']}(T(\lfloor{r\over d}\rfloor+1)+i)\cup\bigcup_{i\in[r'+1,d-1]}(T(\lfloor{r\over d}\rfloor)+i)=[lr]+d$. If $r'= 0$, then $\lfloor{r\over d}\rfloor$ is good, and therefore by the first paragraph, $T(\lfloor{r\over d}\rfloor)$ can be partitioned into $(p, q, r)$-sets. Now, $\bigcup_{i\in[1,d-1]}(T(\lfloor{r\over d}\rfloor)+i)=[lr]+d$. In both cases, $[lr]+d$ can be partitioned into $(p, q, r)$-sets. \end{proof} } \begin{thm}\label{thm:big} For positive integers $p, q$ with $q\geq 2p$, if $r\geq 4q(4q-1)$, then there is an interval of $\ZZ$ that can be partitioned into $4$-sets of the same gap sequence $p, q, r$. \end{thm} \begin{proof} Since $q\geq 2p$, the layers $X_1$ and $X_2$ in Lemma~\ref{lem:layer1} are $q$-nice layers of sizes $4q$ and $4q+1$, respectively, that can be covered by $(pe_1, e_2, e_3)$-blocks. Note that $\gcd(4q, 4q+1)=1$. Thus, by Lemma~\ref{lem:new2}, there is an interval of $\ZZ$ that can be partitioned into $(p, q, r)$-sets for all integers $r\geq 4q(4q-1)$.
\end{proof} \begin{thm}\label{thm:small} For positive integers $p, q$ with $q\in[p,2p]$, if $r\geq {1\over\gcd(p, q)}({5p+4q}-\gcd(p,q))({4p+3q}-\gcd(p,q))$, then there is an interval of $\ZZ$ that can be partitioned into $4$-sets of the same gap sequence $p, q, r$. \end{thm} \begin{proof} Since $q\in[p, 2p]$, the layers $Y_1$ and $Y_2$ in Lemma~\ref{lem:layer2} are $(p+q)$-nice layers of sizes $5p+4q$ and $4p+3q$, respectively, that can be covered by $\{(pe_1, e_2-pe_1, e_3), (qe_1, e_2-qe_1, e_3)\}$-blocks. Note that $\gcd(5p+4q, 4p+3q)=\gcd(p,q)$. Thus, by Lemma~\ref{lem:new2}, there is an interval of $\ZZ$ that can be partitioned into $(p, q, r)$-sets for all integers $r\geq\gcd(p, q)({5p+4q\over\gcd(p,q)}-1)({4p+3q\over\gcd(p,q)}-1)$. \end{proof} Theorem~\ref{thm:main} follows directly from Theorem~\ref{thm:big} and Theorem~\ref{thm:small}. \section{Future directions and open questions}\label{sec:open} As noted in the introduction, we omit some improvements on the constants of the threshold on $r$ in Theorem~\ref{thm:main}. For example, it is not hard to show that $([ q]\times[4])\cup\{(q+j, 4):j\in[i]\}$ can be covered by $(pe_1,e_2,e_3)$-blocks for all $i\in[0,p]$, but we only provided the proof when $i\in\{0, 1\}$. Finding more blocks in Lemma~\ref{lem:block1} and Lemma~\ref{lem:block2} will help find more layers that can be covered in Lemma~\ref{lem:layer1} and Lemma~\ref{lem:layer2}, and appropriate combinations will improve the constants on the threshold on $r$. \bigskip We approached Question~\ref{ques} with the mindset of allowing all gap sequences, but focusing on the case when $n=4$, which is the first open case. Another approach is to investigate the question for all $n$, but for special gap sequences.
The following conjecture was explicitly made in~\cite{2005Na}: \begin{conj}[\cite{2005Na}] There is an interval of $\ZZ$ that can be partitioned into $(k+l+1)$-sets with the same gap sequence $p_1, \ldots, p_k, q_1, \ldots, q_l$ where $p_1=\cdots=p_k$ and $q_1=\cdots=q_l$. \end{conj} The truth of Question~\ref{ques} when $n=3$ is equivalent to this conjecture when $k=l=1$. Some partial results on this conjecture were made in~\cite{2005Na}. \bigskip As mentioned in the introduction, Gordon~\cite{1980Go} investigated the question in higher dimensions. We reiterate some open questions for the $2$-dimensional case. As it is known that there is a $6$-set of $\ZZ^2$ that does not tile $\ZZ^2$ under the Euclidean group actions, the following statement was described as ``conceivable'' in~\cite{1980Go}: \begin{ques}[\cite{1980Go}] Does every set $S$ of $\ZZ^2$ with $|S|\leq 5$ tile $\ZZ^2$ under the Euclidean group actions? \end{ques} Gordon~\cite{1980Go} also proved that a $3$-set of $\ZZ^2$ tiles $\ZZ^+\times\ZZ$ under the Euclidean group actions, whereas there is a $4$-set of $\ZZ^2$ that does not. Actually, the same $4$-set does not even tile $\ZZ^+\times\ZZ^+$ under the Euclidean group actions, but Gordon~\cite{1980Go} proved that every $2$-set does tile $\ZZ^+\times\ZZ^+$ under the Euclidean group actions. To the authors' knowledge, the following question, which appeared in~\cite{1980Go}, is still open: \begin{ques}[\cite{1980Go}] Does every $3$-set of $\ZZ^2$ tile $\ZZ^+\times\ZZ^+$ under the Euclidean group actions? \end{ques} \section*{Acknowledgments} The authors thank Jae Baek Lee for introducing the problem to the authors. \bibliographystyle{alpha}
https://arxiv.org/abs/1903.06317
Limits of Sums for Binomial and Eulerian Numbers and their Associated Distributions
We provide a unified, probabilistic approach using renewal theory to derive some novel limits of sums for the normalized binomial coefficients and for the normalized Eulerian numbers. We also investigate some corresponding results for their associated distributions -- the binomial distributions for the binomial coefficients and the Irwin-Hall distributions (uniform B-splines) for the Eulerian numbers.
\section{Introduction}\label{sec:introduction} Start with Pascal's triangle, the binomial coefficients ${n \choose k}$ arranged in a triangular array as in Figure~\ref{fig:pascals triangle}. Now normalize each row to sum to one: for each $n$ divide the $n$th row by $2^n$. After this normalization we can ask: what are the sums of the columns? The entries in the first column form the geometric progression $1, 1/2, 1/4, \ldots$ which sums to 2. In fact, it happens that when all the rows are normalized to sum to one, all the columns sum to two. But why 2? Equivalently we could ask: what are the sums of the long diagonals going from the upper left to the lower right? By symmetry the sums along these long diagonals are the same as the sums of the columns, so if we can sum the columns we can sum these long diagonals. But what about the short diagonals -- that is, what about the sums along the short diagonals that go from lower left to upper right? If we compute the sums along these diagonals, we see the Fibonacci series: $1, 1, 2, 3, 5, 8, 13, \ldots$. But these Fibonacci numbers emerge before we normalize the rows. We could also ask: what are these sums after we normalize the rows to sum to one? Then the series becomes: $1, 1/2, 3/4, 5/8, 11/16, 21/32,\ldots$. In fact, we shall show in Section~\ref{sec:binomial} that this series converges to 2/3. But why 2/3? \begin{figure}[h!t!] \centering \begin{tabular}{ccccccccc} 1& & & & & & & & \\ 1& 1& & & & & & & \\ 1& 2& 1& & & & & & \\ 1& 3& 3& 1& & & & & \\ 1& 4& 6& 4& 1 & & & & \\ 1& 5& 10& 10& 5 & 1 & & & \\ 1& 6& 15& 20& 15 & 6 & 1& & \\ 1& 7& 21& 35& 35 & 21& 7 & 1& \\ 1& 8& 28& 56& 70 & 56& 28& 8& 1 \\ \multicolumn{9}{c}{$\vdots$} \end{tabular} \caption{Pascal's triangle -- levels 0 to 8 -- before normalization.} \label{fig:pascals triangle} \end{figure} Consider next the Eulerian numbers $\genfrac{\langle}{\rangle}{0pt}{}{n}{k}$ arranged again in a triangular array as in Figure~\ref{fig:eulers triangle}. 
For the Eulerian numbers we can also normalize each row to sum to one: for each $n$ divide the $n$th row by $n!$. Now once again we can ask: what are the sums of the columns? After normalization, the sum of the first column is the Taylor expansion of $e$, but the other columns certainly do not sum to $e$. What then can we say about the sums of these columns? By symmetry the sums along the long diagonals are the same as the sums of the columns, so if we can sum the columns we can sum these long diagonals. But again what about the short diagonals -- that is, what about the sums of the short diagonals that go from lower left to upper right after we normalize each row to sum to one? Are these sums in any way related to the corresponding sums in the normalized version of Pascal's triangle? \begin{figure}[h!t!] \centering \begin{tabular}{ccccccccc} 1 &&&&&&&&\\ 1 & 0&&&&&&&\\ 1 & 1 & 0&&&&&&\\ 1 & 4 & 1 & 0&&&&&\\ 1 & 11 & 11 & 1 & 0&&&&\\ 1 & 26 & 66 & 26 & 1 & 0&&&\\ 1 & 57 & 302 & 302 & 57 & 1 & 0&&\\ 1 & 120 & 1191 & 2416 & 1191 & 120 & 1 & 0&\\ 1 & 247 & 4293 & 15619 & 15619 & 4293 & 247 & 1 & 0 \\ \multicolumn{9}{c}{$\vdots$} \end{tabular} \caption{The Eulerian numbers -- levels 0 to 8 -- before normalization.} \label{fig:eulers triangle} \end{figure} The binomial coefficients ${n \choose k}$ count the number of subsets of $\{1,\ldots,n\}$ of exact order $k$, and satisfy the recurrence $$ {n \choose k} = {n - 1 \choose k} + {n - 1 \choose k - 1}. $$ In contrast, the Eulerian numbers $\genfrac{\langle}{\rangle}{0pt}{}{n}{k}$ count the number of permutations of $\{1,\ldots,n\}$ with exactly $k$ ascents~\cite{grahamconcrete} and satisfy the recurrence $$ \genfrac{\langle}{\rangle}{0pt}{}{n}{k} = (k + 1) \genfrac{\langle}{\rangle}{0pt}{}{n - 1}{k} + (n - k) \genfrac{\langle}{\rangle}{0pt}{}{n - 1}{k - 1}. 
$$ (Here we follow the standard conventions that ${n \choose k}$ and $\genfrac{\langle}{\rangle}{0pt}{}{n}{k}$ are both zero for $n < k$ and that ${0 \choose 0}$ and $\genfrac{\langle}{\rangle}{0pt}{}{0}{0}$ are both equal to one.) At first glance then there is no reason to suspect any deep connection between these two triangular arrays of integers. But initial impressions can be deceiving; perhaps we can see some hidden connections if we look at some pictures. We can depict 2-dimensional arrays of integers in the following fashion: represent each odd integer by a black square and each even integer by a white square. Applying this approach to the binomial coefficients and to the Eulerian numbers generates Figures~\ref{fig:fractal.binomial} and~\ref{fig:fractal.eulerian}. \begin{figure}[h!t!] \centering \begin{tabular}{c @{\hskip 0.5in} c} \includegraphics[width = 0.3\linewidth]{figures/image1.png} & \includegraphics[width = 0.3\linewidth]{figures/image2.png} \\ (a) Levels 0 to 3 & (b) Levels 0 to 127 \\ \end{tabular} \caption{Pascal's triangle depicted by representing each odd number with a black square and each even number with a white square: left -- levels 0 to 3, right -- levels 0-127. As the number of levels increases, the Sierpinski triangle appears to emerge.} \label{fig:fractal.binomial} \vspace{0.2in} \begin{tabular}{c @{\hskip 0.5in} c} \includegraphics[width = 0.3\linewidth]{figures/image3.png} & \includegraphics[width = 0.3\linewidth]{figures/image4.png} \\ (a) Levels 0 to 3 & (b) Levels 0 to 127 \\ \end{tabular} \caption{The Eulerian numbers depicted by representing each odd number with a black square and each even number with a white square: left -- levels 0 to 3, right -- levels 0-127. Once again as the number of levels increases, the Sierpinski triangle appears to emerge.} \label{fig:fractal.eulerian} \end{figure} Recurrences often generate fractals. 
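Both triangles are generated by their recurrences, which are easy to sanity-check numerically. As a quick illustrative check (not part of the paper), regenerating the rows of Figure~\ref{fig:eulers triangle} from the Eulerian recurrence:

```python
import math

def eulerian_row(n):
    """Row n of the Eulerian triangle, computed from the recurrence
    <n, k> = (k + 1) <n-1, k> + (n - k) <n-1, k-1>, for k = 0, ..., n-1.
    (The trailing zero column shown in the figure is omitted.)"""
    row = [1]                                    # <0, 0> = 1
    for m in range(1, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k >= 1 else 0)
               for k in range(m)]
    return row

# Rows 4 and 8 match the figure, and each row n sums to n!:
assert eulerian_row(4) == [1, 11, 11, 1]
assert eulerian_row(8) == [1, 247, 4293, 15619, 15619, 4293, 247, 1]
assert all(sum(eulerian_row(n)) == math.factorial(n) for n in range(1, 10))
```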
While level for level Figures~\ref{fig:fractal.binomial} and~\ref{fig:fractal.eulerian} are different -- compare, for example, levels 0 to 3 of the binomial coefficients with levels 0 to 3 of the Eulerian numbers -- in the large both arrays look very much the same: they both appear to incarnate the same fractal, the Sierpinski triangle. This long-term likeness suggests that in some limiting fashion these two arrays exhibit similar behaviors. \textit{The goal of this paper is to explore some limiting connections between sums of binomial coefficients and sums of Eulerian numbers along with corresponding results for their associated distributions: the binomial distributions for the binomial coefficients and the Irwin-Hall distributions (uniform B-splines) for the Eulerian numbers.} In particular, we are going to provide a unified, probabilistic approach using renewal theory to derive the following limiting identities: \begin{description} \item A. Binomial Coefficients \begin{description} \item[\namedlabel{eq:binomial.cs}{A1}] $\lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty }\frac{1}{2^n} {n \choose k} = 2$ \hfill (Columns) \item[\namedlabel{eq:binomial.sd}{A2}] $\lim_{n \rightarrow \infty} \sum_{k \geq 0} \frac{1}{2^{n - k}}{{n - k}\choose{k}} = \frac{2}{3}$ \hfill (Short Diagonals) \item[\namedlabel{eq:binomial.as}{A3}] $\lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} (-1)^n \frac{1}{2^n} {n \choose k} = 0$ \hfill (Alternating Sums) \end{description} \item B. 
Eulerian Numbers \begin{description} \item[\namedlabel{eq:eulerian.cs}{B1}] $\lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} = 2$ \hfill (Columns) \item[\namedlabel{eq:eulerian.sd}{B2}] $\lim_{n \rightarrow \infty} \sum_{k \geq 0} \frac{1}{(n - k)!}\genfrac{\langle}{\rangle}{0pt}{}{n - k}{k} = \frac{2}{3}$ \hfill (Short Diagonals) \item[\namedlabel{eq:eulerian.as}{B3}] $\lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} (-1)^n \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} = 0$ \hfill (Alternating Sums) \end{description} \item C. Bernstein Polynomials -- Binomial Distributions \begin{description} \item[\namedlabel{eq:bernstein.cs}{C1}] $\lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} B^n_k(t) = \frac{1}{t}$ for $t \in (0, 1)$ \hfill (Columns) \item[\namedlabel{eq:bernstein.sd}{C2}] $\lim_{n \rightarrow \infty} \sum_{k \geq 0} B_k^{n - k}(t) = \frac{1}{1 + t}$ for $t \in (0, 1)$ \hfill (Short Diagonals) \item[\namedlabel{eq:bernstein.as}{C3}] $\lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} (-1)^n B^n_k(t) = 0$ for $t \in (0, 1)$ \hfill (Alternating Sums) \end{description} \item D. $h$-Bernstein Polynomials -- P\'{o}lya-Eggenberger Distributions \begin{description} \item[\namedlabel{eq:hbernstein.cs}{D1}] $\lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} B_k^n(t; h) =\frac{1 - h}{t - h}$ for $0 < h < t < 1$ \hfill (Columns) \item[\namedlabel{eq:hbernstein.sd}{D2}] $\lim_{n \rightarrow \infty} \sum_{k \geq 0} B_k^{n - k}(t; h) = \int_0^1 \frac{x^{a - 1} (1 - x)^{b - 1}}{(1 + x) \mathrm{B}(a, b)} dx$ \hfill (Short Diagonals) \\ for $t \in (0, 1)$ and $h > 0$, where $a = t/h, b = (1 - t)/h$ \item D2a. $\lim_{n \rightarrow \infty} \sum_{k \geq 0} B_k^{n - k}(t; 1) = 2^{-t}$ for $t \in (0, 1)$ \hfill ($h$ = 1 in D2) \item[\namedlabel{eq:hbernstein.as}{D3}] $\lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} (-1)^n B_k^n(t; h) = 0 $ for $0 < h < t < 1$ \hfill (Alternating Sums) \end{description} \item E. 
Uniform B-splines -- Irwin-Hall Distributions \begin{description} \item[\namedlabel{eq:bsplines.cs}{E1}] $\lim_{t \rightarrow \infty} \sum_{n =0}^{\infty}N_{0, n}(t) = 2$ \hfill (Columns) \item[\namedlabel{eq:bsplines.sd}{E2}] $\lim_{n \rightarrow \infty} \sum_{k \geq 0} N_{0, n - k}(k + t) = \frac{2}{3}$ for $ t > 0$ \hfill (Short Diagonals) \item[\namedlabel{eq:bsplines.as}{E3}]$\lim_{t \rightarrow \infty} \sum_{n =0}^{\infty} (-1)^n N_{0, n}(t) = 0$ \hfill (Alternating Sums) \end{description} \end{description} For the identities in B and E, we shall show that the convergence rate is polynomial with an arbitrarily large order, utilizing additional results from renewal theory. For the identities in A, C, and D, we shall show that analogous results hold for any fixed $k$, again invoking a probabilistic argument based on an infinite sequence of random variables, the same framework we use for the asymptotic identities. Below is a list of the non-asymptotic identities we will derive: \begin{description} \item A*. Binomial Coefficients \begin{description} \item[\namedlabel{eq:binomial.cs.exact}{A1*}] $\sum_{n = 0}^{\infty }\frac{1}{2^n} {n \choose k} = 2$ \hfill (Columns) \item[\namedlabel{eq:binomial.sd.exact}{A2*}] $\sum_{k \geq 0} \frac{1}{2^{n - k}}{{n - k}\choose{k}} = \frac{2}{3} + \frac{1}{3}\cdot \left(-\frac{1}{2}\right)^n$ \hfill (Short Diagonals) \item[\namedlabel{eq:binomial.as.exact}{A3*}] $\sum_{n = 0}^{\infty} (-1)^n \frac{1}{2^n} {n \choose k} = (-1)^k \frac{2}{3^{k + 1}}$ \hfill (Alternating Sums) \end{description} \item C*.
Bernstein Polynomials -- Binomial Distributions \begin{description} \item[\namedlabel{eq:bernstein.cs.exact}{C1*}] $\sum_{n = 0}^{\infty} B^n_k(t) = \frac{1}{t}$ \hfill (Columns) \item[\namedlabel{eq:bernstein.sd.exact}{C2*}] $\sum_{k \geq 0} B^{n-k}_k(t) = \frac{1 - (-t)^{n + 1}}{1 + t}$ \hfill (Short Diagonals) \item[\namedlabel{eq:bernstein.as.exact}{C3*}] $\sum_{n = 0}^{\infty} (-1)^n B^n_k(t) = \frac{(-t)^k}{(2 - t)^{k + 1}}$ \hfill (Alternating Sums) \end{description} \item D*. $h$-Bernstein Polynomials -- P\'{o}lya-Eggenberger Distributions \begin{description} \item[\namedlabel{eq:hbernstein.cs.exact}{D1*}] $\sum_{n = 0}^{\infty} B_k^n(t; h) =\frac{1 - h}{t - h}$ for $0 < h < t < 1$ \hfill (Columns) \item[\namedlabel{eq:hbernstein.sd.exact}{D2*}] $\sum_{k \geq 0} B_k^{n - k}(t; h) = \int_0^1 \frac{x^{a - 1} (1 - x)^{b - 1}}{(1 + x) \mathrm{B}(a, b)} dx +$ \hfill (Short Diagonals) \\ \hspace*{0.3in}$(-1)^{n + 2} \int_0^1 \frac{x^{a + n} (1 - x)^{b - 1}}{(1 + x) \mathrm{B}(a, b)} dx$ \\ for $t \in (0, 1)$ and $h >0,$ where $a = t/h, b = (1 - t)/h$ \item[\namedlabel{eq:hbernstein.as.exact}{D3*}] $\sum_{n = 0}^{\infty} (-1)^n B_k^{n}(t; h) = (-1)^k \int_0^1 \frac{x^{a + k - 1} (1 - x)^{b - 1}}{(2 - x)^{k + 1} \mathrm{B}(a, b)} dx$ \hfill (Alternating Sums) \\ for $0 < h < t < 1$, where $a = t/h, b = (1 - t)/h$ \end{description} \end{description} Although it is reassuring that these identities involving binomial coefficients and Bernstein polynomials hold for any $k$ not only in the limit, the use of renewal theory provides an interesting link between binomial coefficients and Eulerian numbers and indicates the column sums in~\ref{eq:binomial.cs} and~\ref{eq:eulerian.cs} are expected to be 2, since 2 is the reciprocal mean of both the uniform distribution on $[0, 1]$ and the Bernoulli distribution with success probability 1/2. 
Likewise, the sums 2/3 of the short diagonals in~\ref{eq:binomial.sd} and~\ref{eq:eulerian.sd} are the reciprocal of the mean of a shifted version of these two distributions. The combinatorial interpretations of binomial coefficients and Eulerian numbers provide virtually no insight into revealing such a remarkable connection. Some identities such as~\ref{eq:eulerian.cs} are straightforward using the general theory of renewal processes but are far from obvious, and do not even appear promising, otherwise (including via generating functions or refined approximation of the summand at each $n$). The rows of the sequences we shall study form distributions; the columns do not. While it may seem unnatural to study the columns rather than the rows, there has been some recent work investigating similar column sums to good effect: \cite{simsek2014generating} introduces generating functions for the columns of the binomial distribution and uses these generating functions to derive a variety of identities for the Bernstein polynomials, including formulas for sums, alternating sums, differentiation, degree elevation, and subdivision, while \cite{goldman2012generating} introduces generating functions for the columns of the uniform B-splines and then applies these generating functions to derive a collection of identities for uniform B-splines, including the Schoenberg identity, formulas for sums and alternating sums, for moments and reciprocal moments, for differentiation, for Laplace transforms, and for convolutions with monomials. Thus learning about the columns can also provide rich insights into the rows. It is partly in this spirit that we investigate the identities in this paper. \section{Renewal theory} Before we proceed with our proofs, we provide a brief review of renewal theory in stochastic processes~\cite[pp.\;358-373]{feller2008introduction}. A renewal process is a stochastic model for events that occur at random times.
Let $X_1, X_2, \ldots$ be independent and identically distributed (IID) non-negative random variables following a distribution $F$ and let $S_n = \sum_{i = 1}^n X_i$ be their partial sum with $S_0 = 0$. We may interpret each $X_i$ as an \textit{interarrival time} and $S_n$ as the time of the $n$th arrival (or \textit{renewal}). The \textit{renewal measure} is defined by \begin{equation} \label{eq:renewal.measure} U(A) = \sum_{n = 0}^{\infty} \P(S_n \in A), \end{equation} for any measurable subset $A$ of $[0, \infty)$. The renewal measure $U(A)$ is the expected number of arrivals in $A$ since \begin{align} \label{eq:renewal.measure.expectation} \sum_{n = 0}^{\infty} \P(S_n \in A) & = \sum_{n = 0}^{\infty} \mathrm{E}\left(1\{S_n \in A\}\right) = \mathrm{E}\left(\sum_{n = 0}^{\infty} 1\{S_n \in A\}\right) \\ & = \mathrm{E}\{ {\text{number of arrivals in }} A\}, \end{align} where $1\{\cdot\}$ is the indicator function. Let $A = (x, x + \delta]$ where $\delta > 0$ is a fixed constant. The following renewal theorem, also known as Blackwell's theorem, states that the expected number of arrivals in $A$ is asymptotically proportional to the length of $A$ with proportionality constant $1/\mu$, where $\mu$ is the expectation of the random variables $X_i$. The precise statement of this theorem depends on whether or not the distribution $F$ is arithmetic: A distribution is called \textit{arithmetic} if it is supported on a set of the form $\{n\lambda: n \in \mathbb{N}\}$ for some $\lambda > 0$, and the largest such $\lambda$ is called the \textit{span} of the distribution. \begin{theorem}[Blackwell's renewal theorem] \label{thm:blackwell.non} If the distribution $F$ is not arithmetic, $$\lim_{x \rightarrow \infty} U(x, x + \delta] = \delta/\mu$$ for any $\delta > 0$.
If the distribution $F$ is arithmetic with span $\lambda$, $$\lim_{x \rightarrow \infty} U(x, x + \lambda] = \lambda/\mu.$$ \end{theorem} As we shall see in the next section, a list of identities involving special numbers and special distributions can be derived using a unified probabilistic argument via renewal theory by specifying $A$ in Equation~\eqref{eq:renewal.measure} and the distribution $F$ of interarrival times. We consider mainly two classes of distributions: the uniform distribution supported on $[a, b]$ where $b > a$ and the Bernoulli distribution with success probability $p \in (0, 1)$. Normalized Eulerian numbers and uniform B-splines correspond to uniform distributions, while normalized Binomial coefficients, Bernstein polynomials, and $h$-Bernstein polynomials correspond to Bernoulli distributions. \newcommand{\mathrm{Uniform}}{\mathrm{Uniform}} \newcommand{\mathrm{Bernoulli}}{\mathrm{Bernoulli}} \newcommand{\mathrm{Binomial}}{\mathrm{Binomial}} \textbf{Notation.} We write $X \sim F$ if $X$ is a random variable following a distribution $F$. We use $\mathrm{Uniform}(a, b)$ to denote the uniform distribution supported on $[a, b]$, whose probability density function is $f(x) = 1(x \in [a, b])$ and mean is $(a + b)/2$. We use $\mathrm{Bernoulli}(p)$ to denote the Bernoulli distribution with success probability $p$, where the probability mass function is $\P(X = 1) = p$ (the trial succeeds) and $\P(X = 0) = 1 - p$ (the trial fails) and the mean is $p$. The sum of $n$ IID Bernoulli trials drawn from $\mathrm{Bernoulli}(p)$ follows the binomial distribution $\mathrm{Binomial}(n, p)$. We place a superscript on $X_i, S_n, U$ in a renewal process to emphasize their dependence on the interarrival distribution $F$. 
In particular, we use the superscript ``$[a, b]$" when $X_i \sim \mathrm{Uniform}(a, b)$ and ``$(p)$" when $X_i \sim \mathrm{Bernoulli}(p)$; we use ``$(p) + 1$" when $X_i = X_i^* + 1$ where $X_i^* \sim \mathrm{Bernoulli}(p)$, i.e., $X_i$ follows $\mathrm{Bernoulli}(p)$ shifted by 1 satisfying $\P(X_i = 1) = 1 - p$ and $\P(X_i = 2) = p$. For example, the readers may see $X_i^{[0, 1]}$, $X_i^{[1, 2]}$, $X_i^{(1/2)}$ or $X_i^{(1/2) + 1}$, and similarly for $S_n$ and $U$. \section{Binomial coefficients and Eulerian numbers} \subsection{Normalized binomial coefficients} \label{sec:binomial} The column sums of the normalized binomial coefficients are closely related to a renewal process due to the well known probabilistic interpretation of the normalized binomial coefficients using Bernoulli trials. Consider a renewal process in which the interarrival times $X_i \sim \mathrm{Bernoulli}(1/2)$. Then \begin{equation} \frac{1}{2^n} {n \choose k} = \P(S^{(1/2)}_n = k) = \P(S^{(1/2)}_n \in (k - 1, k]), \end{equation} where $S_n^{(1/2)}$ is the time of the $n$th arrival and follows $\mathrm{Binomial}(n, 1/2)$. Since the Bernoulli distribution is arithmetic with mean $\mu = 1/2$ and span $\lambda = 1$, Theorem~\ref{thm:blackwell.non} gives \begin{equation} \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty }\frac{1}{2^n} {n \choose k} =\lim_{k \rightarrow \infty} U^{(1/2)}(k - 1, k] = \frac{1}{1/2} = 2. \tag{A1} \end{equation} For the sums of the short diagonals, we consider a new renewal process with shifted Bernoulli interarrival times $X^{(1/2) + 1}_i$ that have the same distribution as $X^{(1/2)}_i + 1$, i.e., $X^{(1/2) + 1}_i = 2$ if the $i$th trial succeeds and $X^{(1/2) + 1}_i = 1$ if the $i$th trial fails. 
Short diagonals are closely related to this new renewal process because \begin{align} \frac{1}{2^{n - k}} {{n - k}\choose{k}} & =\P(X^{(1/2)}_1 + \ldots + X^{(1/2)}_{n - k} \in (k - 1, k]) \\ & = \P((X^{(1/2)}_1 + 1) + \cdots + (X^{(1/2)}_{n - k} + 1) \in (n - 1, n]) \\ & = \P(X^{(1/2)+1}_1 + \cdots + X^{(1/2)+1}_{n - k} \in (n - 1, n]) \\ & = \P(S^{(1/2)+1}_{n - k} \in (n - 1, n]). \end{align} Therefore by Theorem~\ref{thm:blackwell.non} \begin{align} & \quad \; \lim_{n \rightarrow \infty} \sum_{k \geq 0} \frac{1}{2^{n - k}}{{n - k}\choose{k}} \\ & = \lim_{n \rightarrow \infty} \sum_{k \geq 0} \P(S^{(1/2)+1}_{n - k} \in (n - 1, n]) = \lim_{n \rightarrow \infty} \sum_{k \geq 0} \P(S^{(1/2)+1}_k \in (n - 1, n]) \label{eq:binomial.change.index} \\ & = \lim_{n \rightarrow \infty} U^{(1/2) +1}(n - 1, n] = \frac{1}{1 + \frac{1}{2}} = \frac{2}{3}, \tag{A2} \end{align} where the change of index in Equation~\eqref{eq:binomial.change.index} is guaranteed by the observation that $\P(S^{(1/2)+1}_k \in (n - 1, n]) = 0$ for $k \geq n$. Equation~\eqref{eq:binomial.cs}, as well as its variants in A, C, and D, actually holds for any $k$. We provide a probabilistic proof of their non-asymptotic counterparts in Section~\ref{sec:non.asymptotic}. The use of renewal theory unites binomial coefficients and Eulerian numbers under the same framework with two interarrival time distributions, and leads to a range of identities involving special distributions. In the next section, we shall elaborate on such connections. \subsection{Normalized Eulerian numbers} \label{sec:Eulerian} Consider a renewal process with random interarrival time $X^{[0, 1]}_i \sim \mathrm{Uniform}(0, 1)$. \cite{Tanny1973} provides a probabilistic interpretation for the normalized Eulerian numbers: for each integer $k$, \begin{equation} \label{eq:eulerian2unifom} \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} = \P(S^{[0, 1]}_n \in (k - 1, k]).
\end{equation} It follows by Equation~\eqref{eq:renewal.measure} that \begin{equation} \label{eq:Eulerian.renewal} \sum_{n = 0}^{\infty} \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} = \sum_{n = 0}^{\infty} \P(S^{[0, 1]}_n \in (k - 1, k]) = U^{[0, 1]}(k - 1, k]. \end{equation} Equation~\eqref{eq:Eulerian.renewal} bridges the quantity $\sum_{n = 0}^{\infty} \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k}$ originating from Eulerian numbers with a renewal process, revealing the probabilistic interpretation of this column sum as the expected number of arrivals in the interval $(k - 1, k]$ when the interarrival time is uniformly distributed on $[0, 1]$. The uniform distribution on $[0, 1]$ is continuous, thus non-arithmetic, and has mean $\mu = 1/2$. Substituting $x = k - 1$ and $\delta = 1$ in Theorem~\ref{thm:blackwell.non}, we obtain \begin{equation} \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} = \frac{1}{1/2} = 2. \tag{B1} \end{equation} \textbf{Rate of convergence.} We have evaluated $\sum_{n = 0}^{\infty} \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k}$ numerically and observed that this sum converges very rapidly to 2. If $X \sim \mathrm{Uniform}(0, 1)$, then $\mathrm{E}(X^{\alpha + 1}) = \int_0^1 x^{\alpha + 1} d x = \frac{1}{\alpha + 2} < \infty$ for any $\alpha \geq 0$. According to Corollary 5.2 in~\cite{Konstantopoulos1999}, \begin{equation} \sum_{n = 0}^{\infty} \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} - 2 = o(k^{-\alpha}) \quad \text{for any } \alpha \geq 0. \end{equation} Therefore, the convergence rate in Equation~\eqref{eq:eulerian.cs} is polynomial with an arbitrarily large order.
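The column sums are easy to examine numerically. The following Python sketch (the helper name is ours, not the paper's) builds the Eulerian triangle from the standard recurrence $\genfrac{\langle}{\rangle}{0pt}{}{n}{k} = (k + 1)\genfrac{\langle}{\rangle}{0pt}{}{n - 1}{k} + (n - k)\genfrac{\langle}{\rangle}{0pt}{}{n - 1}{k - 1}$ and evaluates the truncated column sums in~\ref{eq:eulerian.cs}:

```python
from math import factorial

def eulerian_rows(N):
    """Rows 0..N of the Eulerian triangle <n, k>, built from the recurrence
    <n, k> = (k + 1) <n-1, k> + (n - k) <n-1, k-1>, with <0, 0> = 1."""
    rows = [[1]]
    for n in range(1, N + 1):
        prev = rows[-1]
        rows.append([(k + 1) * (prev[k] if k < len(prev) else 0)
                     + (n - k) * (prev[k - 1] if 0 <= k - 1 < len(prev) else 0)
                     for k in range(n)])
    return rows

N = 80
rows = eulerian_rows(N)
for k in [0, 2, 5, 10]:
    # Truncated column sum sum_n <n, k> / n!; approaches 2 as k grows.
    col = sum(rows[n][k] / factorial(n)
              for n in range(N + 1) if k < len(rows[n]))
    print(k, col)
```

The $k = 0$ column sum is $\sum_{n} 1/n! = e$, which matches the entry $e - 2 \approx 0.71828$ in the table below.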
Below we calculate the difference $\sum_{n = 0}^{\infty} \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} - 2$ for the first several $k$'s using ${\it Mathematica}$: \begin{tabular}{ccccc} \hline $k$ & 0 & 1 & 2& 3 \\ $\sum_{n = 0}^{\infty} \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} - 2$ & 0.71828 & $-4.8 \times 10^{-2}$ & $-4.2 \times 10^{-3}$ & $3.9 \times 10^{-5}$ \\ \hline $k$ & 4 & 5 & $\cdots$ & 50\\ $\sum_{n = 0}^{\infty} \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} - 2$ & $5.7 \times 10^{-5}$ & $5.1 \times 10^{-6}$ & $\cdots$ & $< 1.0 \times 10^{-45}$ \\ \hline \end{tabular} \textbf{Short diagonals.} Short diagonals turn out to be related to a renewal process with interarrival times $X^{[1,2]}_i \sim \mathrm{Uniform}(1, 2)$, which is identically distributed as $X^{[0, 1]}_i + 1$. The mean of $X^{[1,2]}_i \sim \mathrm{Uniform}(1, 2)$ is $\mathrm{E} X^{[1,2]}_i = \mathrm{E} X^{[0, 1]}_i + 1 = 3/2.$ Using Equation~\eqref{eq:eulerian2unifom} we find that \begin{align} \frac{1}{(n - k)!}\genfrac{\langle}{\rangle}{0pt}{}{n - k}{k}& =\P(X^{[0, 1]}_1 + \ldots + X^{[0, 1]}_{n - k} \in (k - 1, k]) \\ & = \P((X^{[0, 1]}_1 + 1) + \cdots + (X^{[0, 1]}_{n - k} + 1) \in (n - 1, n]) \\ & = \P(X^{[1, 2]}_1 + \cdots + X^{[1, 2]}_{n - k} \in (n - 1, n]) = \P(S^{[1, 2]}_{n - k} \in (n - 1, n]). \end{align} Therefore, \begin{align} \sum_{k \geq 0} \frac{1}{(n - k)!}\genfrac{\langle}{\rangle}{0pt}{}{n - k}{k} & =\sum_{k \geq 0} \P(S^{[1, 2]}_{n - k} \in (n - 1, n]) \\ & = \sum_{k \geq 0} \P(S^{[1, 2]}_{k} \in (n - 1, n]) = U^{[1, 2]}(n-1, n], \end{align} so by Theorem~\ref{thm:blackwell.non} \begin{equation} \lim_{n \rightarrow \infty} \sum_{k \geq 0} \frac{1}{(n - k)!}\genfrac{\langle}{\rangle}{0pt}{}{n - k}{k} = \lim_{n \rightarrow \infty} U^{[1, 2]}(n-1, n] = \frac{1}{3/2} = \frac{2}{3}, \tag{B2} \end{equation} which is the same value as the analogous result for the binomial coefficients. 
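The common limit $2/3$ can also be observed directly from the exact binomial formula~\ref{eq:binomial.sd.exact}, derived later in Section~\ref{sec:non.asymptotic}. A small Python check of the short-diagonal sums (the function name is ours):

```python
from math import comb

def short_diag(n):
    """n-th short-diagonal sum of the normalized binomial coefficients:
    sum_k C(n - k, k) / 2^(n - k); nonzero terms require k <= n - k."""
    return sum(comb(n - k, k) / 2 ** (n - k) for k in range(n // 2 + 1))

for n in [1, 2, 5, 10, 30]:
    # Compare with the closed form 2/3 + (1/3)(-1/2)^n; approaches 2/3.
    print(n, short_diag(n), 2 / 3 + (-1 / 2) ** n / 3)
```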
\section{Bernstein polynomials and \textit{h}-Bernstein polynomials} \subsection{Bernstein polynomials} \label{sec:bernstein} We consider a renewal process with random interarrival time $X^{(t)}_i \sim \mathrm{Bernoulli}(t)$ where the success probability $t \in (0, 1)$. In view of the probabilistic interpretation of the Bernstein polynomials $B^n_k(t)$ \begin{equation}\label{eq:bernstein.renewal} B^n_k(t) = {n \choose k} t^k (1 - t)^{n - k} = \P(S_n^{(t)} = k) = \P(S_n^{(t)} \in (k - 1, k]), \end{equation} extensions of Equations~\eqref{eq:binomial.cs} and~\eqref{eq:binomial.sd} to Bernstein polynomials are immediately available. Since $\mathrm{Bernoulli}(t)$ is arithmetic with span $\lambda = 1$ and mean $\mu = t$, a direct application of Theorem~\ref{thm:blackwell.non} gives \begin{equation} \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} B^n_k(t) = \lim_{k \rightarrow \infty} U^{(t)}(k - 1, k] = \frac{1}{t}. \tag{C1} \end{equation} For the sums of the short diagonals in~\ref{eq:bernstein.sd}, we consider a new renewal process with shifted Bernoulli interarrival times $X^{(t) + 1}_i$ that have the same distribution as $X^{(t)}_i + 1$, i.e., $X^{(t) + 1}_i = 2$ if the $i$th trial succeeds and $X^{(t) + 1}_i = 1$ if the $i$th trial fails; the mean is $\mathrm{E} X^{(t) + 1}_i = \mathrm{E} X^{(t)}_i + 1 = t + 1.$ The sums of the short diagonals follow from the same argument as used in the proof of~\ref{eq:binomial.sd} and~\ref{eq:eulerian.sd}: \begin{align} B_k^{n - k}(t) & =\P(X^{(t)}_1 + \ldots + X^{(t)}_{n - k} \in (k - 1, k]) \\ & = \P((X^{(t)}_1 + 1) + \cdots + (X^{(t)}_{n - k} + 1) \in (n - 1, n]) \\ & = \P(X^{(t)+1}_1 + \cdots + X^{(t)+1}_{n - k} \in (n - 1, n]) = \P(S^{(t)+1}_{n - k} \in (n - 1, n]), \end{align} which by Theorem~\ref{thm:blackwell.non} gives \begin{align} \lim_{n \rightarrow \infty} \sum_{k \geq 0} B_k^{n - k}(t) & = \lim_{n \rightarrow \infty} \sum_{k \geq 0} \P(S^{(t)+1}_{n -k} \in (n - 1, n]) \\ & = \lim_{n \rightarrow \infty} \sum_{k \geq 0} \P(S^{(t)+1}_k
\in (n - 1, n]) = \lim_{n \rightarrow \infty} U^{(t) +1}(n-1, n] = \frac{1}{t + 1}, \tag{C2} \end{align} noting that $\P(S^{(t)+1}_k \in (n - 1, n]) = 0$ for $k \geq n$. \subsection{\textit{h}-Bernstein polynomials}\label{sec:h.bernstein} The $h$-Bernstein polynomials are defined by \begin{equation} B^n_k(t; h) = {n \choose k}\frac{\prod_{i = 0}^{k - 1} (t + i h) \prod_{i = 0}^{n - k - 1}(1 - t + i h)}{\prod_{i = 0}^{n - 1} (1 + i h)}, \quad k = 0, 1, 2, \ldots, n, \end{equation} where $h \geq 0$. This formula is actually the probability mass function of the P\'{o}lya-Eggenberger distribution~\cite{eggenberger1923statistik,polya1930quelques}. This distribution reduces to the ordinary binomial distribution when $h = 0$, which has been discussed in the preceding section. Now we focus on positive $h$, for which the P\'{o}lya-Eggenberger distribution is a beta-binomial distribution with parameters $a = t/h$ and $b = (1 - t)/h$~\cite[Ch 6]{johnson2005univariate}. A beta-binomial distribution with parameters $(a, b)$ is the marginal distribution of $X$ if $(X | p) \sim \mathrm{Binomial}(n, p)$ and $p \sim \mathrm{Beta}(a, b)$, where $\mathrm{Beta}(a, b)$ is the Beta distribution having the probability density function $f(x) = \frac{x^{a - 1} (1 - x)^{b - 1}}{\mathrm{B}(a, b)}$ for $x \in (0, 1)$, and $\mathrm{B}(a, b)$ is the Beta function evaluated at $(a, b)$. This interpretation of $B_k^n(t; h)$ using a mixture of binomial distributions means that $h$-Bernstein polynomials correspond to a renewal process with interarrival times $X^{(p)}_i$, where the success probability $p$ is a random draw from $\mathrm{Beta}(a, b)$. Therefore, \begin{equation}\label{eq:h.bernstein.renewal} B_k^n(t; h) = \mathrm{E}_{p \sim \mathrm{Beta}(a, b) } \P(S^{(p)}_n \in (k - 1, k]) = \int_0^1 \P(S^{(p)}_n \in (k - 1, k]) \frac{p^{a - 1} (1 - p)^{b - 1}}{\mathrm{B}(a, b)} dp.
\end{equation} Using the same argument as in the derivation for Bernstein polynomials but conditioning on the random success probability $p$, it follows that \begin{align} \label{eq:exchange.limit} \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} B_k^n(t; h) & = \mathrm{E}_{p \sim \mathrm{Beta}(a, b) } \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} \P(S^{(p)}_n \in (k - 1, k]) \\ & = \mathrm{E}_{p \sim \mathrm{Beta}(a, b) } \lim_{k \rightarrow \infty} U^{(p)}(k - 1, k] = \mathrm{E}_{p \sim \mathrm{Beta}(a, b) } \left(\frac{1}{p}\right) \\ & = \frac{a + b - 1}{a - 1} = \frac{1/h - 1}{t/h - 1} = \frac{1 - h}{t - h}, \tag{D1} \end{align} where the interchange of expectation and limit in Equation~\eqref{eq:exchange.limit} is guaranteed by the dominated convergence theorem~\cite[p.\;111]{feller2008introduction}. Here we require $a = t/h > 1$ to ensure that the expectation of $1/p$ exists, which means Equation~\eqref{eq:hbernstein.cs} holds for $0 < h < t < 1$. Asymptotic formulas for the sums of the short diagonals also hold for the $h$-Bernstein polynomials: \begin{align} \lim_{n \rightarrow \infty} \sum_{k \geq 0} B_k^{n - k}(t; h) & = \mathrm{E}_{p \sim \mathrm{Beta}(a, b) } \left[\lim_{n \rightarrow \infty} \sum_{k \geq 0} \P(S^{(p)+1}_k \in (n - 1, n])\right] \\ & = \mathrm{E}_{p \sim \mathrm{Beta}(a, b) } \left(\frac{1}{p + 1} \right) = \int_0^1 \frac{x^{a - 1} (1 - x)^{b - 1}}{(1 + x) \mathrm{B}(a, b)} dx, \label{eq:D2} \tag{D2} \end{align} where $0 < t < 1$, $h > 0$, $a = t/h$, $b = (1 - t)/h$. By Euler's integral representation of $_2F_1$~\cite[(1.6.1)]{koekoek2010hypergeometric}, the integral on the right hand side of Equation~\eqref{eq:D2} reduces to a hypergeometric function, so \begin{equation} \lim_{n \rightarrow \infty} \sum_{k \geq 0} B_k^{n - k}(t; h) =\; _2F_1(1, t/h; 1/h; -1). \end{equation} When $h = 1$, \begin{equation} _2F_1(1, t/h; 1/h ; -1) =\; _2F_1(1, t; 1 ; -1) =\; _1F_0(t; ; -1) = 2^{-t}, \tag{D2a} \end{equation} for $0 < t < 1$.
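Identity (D2a) can be checked numerically. For $h = 1$ the rising factorials in $B^n_k(t; 1)$ are ratios of gamma functions, $\prod_{i = 0}^{k - 1}(t + i) = \Gamma(t + k)/\Gamma(t)$, so the polynomials can be evaluated stably in log space; note that the oscillating remainder (made explicit in~\ref{eq:hbernstein.sd.exact}) decays only polynomially here, so large $n$ is needed. A Python sketch (the helper name is ours):

```python
from math import exp, lgamma

def h1_bernstein(n, k, t):
    """B^n_k(t; 1), evaluated in log space via gamma functions:
    B^n_k(t; 1) = Gamma(t+k) Gamma(1-t+n-k) / (k! (n-k)! Gamma(t) Gamma(1-t))."""
    if k < 0 or k > n:
        return 0.0
    logp = (-lgamma(k + 1) - lgamma(n - k + 1)
            + lgamma(t + k) - lgamma(t)
            + lgamma(1 - t + n - k) - lgamma(1 - t))
    return exp(logp)

t = 0.3
for n in [10, 100, 1000, 4000]:
    # Short-diagonal sum sum_k B_k^{n-k}(t; 1); approaches 2^(-t) slowly.
    s = sum(h1_bernstein(n - k, k, t) for k in range(n // 2 + 1))
    print(n, s, 2 ** (-t))
```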
\section{B-splines} The application of renewal theory to Eulerian numbers can be extended to uniform B-splines~\cite{wang+:2010eulerian,he:2012eulerian}. Let $N_{0, n}(t)$ denote the B-spline of degree $n$ with knots at the integers $0, 1, \ldots, n+1$ and support $[0, n + 1]$. Let $\chi_{[0, 1]}(t)$ be the characteristic function over $[0, 1]$, i.e., $\chi_{[0, 1]}(t) = 1\{t \in [0, 1]\}$. Then $N_{0, n}(t)$ is the $(n + 1)$-fold convolution of $\chi_{[0, 1]}(t)$ with itself. Since $\chi_{[0, 1]}(t)$ is the probability density function of $\mathrm{Uniform}(0, 1)$, the convolution $N_{0, n}(t)$ is actually the probability density function of $S^{[0,1]}_{n + 1}$, or in other words, the probability density function of the Irwin-Hall distribution. Furthermore, \begin{equation}\label{eq:bsplines.prob} N_{0,n}(t) = \int_{-\infty}^{\infty} N_{0, n - 1}(x) \chi_{[0, 1]}(t - x) dx = \int_{t - 1}^{t} N_{0, n - 1}(x) dx = \P(S^{[0, 1]}_{n} \in (t - 1, t]). \end{equation} It follows from Equations~\eqref{eq:eulerian2unifom} and~\eqref{eq:bsplines.prob} that the normalized Eulerian numbers are the uniform B-splines evaluated at the integers. Moreover, in view of Theorem~\ref{thm:blackwell.non} and Equation~\eqref{eq:bsplines.prob}, it follows that \begin{equation} \lim_{t \rightarrow \infty} \sum_{n =0}^{\infty}N_{0, n}(t) = 2. \tag{E1} \end{equation} Sums of short diagonals also have a counterpart for B-splines. Noting that \begin{align} & \quad \; N_{0, n - k}(k + t) =\P(X^{[0, 1]}_1 + \ldots + X^{[0, 1]}_{n - k} \in (k + t - 1, k + t]) \\ & = \P((X^{[0, 1]}_1 + 1) + \cdots + (X^{[0, 1]}_{n - k} + 1) \in (n + t - 1, n + t]) \\ & = \P(X^{[1, 2]}_1 + \cdots + X^{[1, 2]}_{n - k} \in (n + t - 1, n + t]) = \P(S^{[1, 2]}_{n - k} \in (n + t - 1, n + t]), \end{align} we have \begin{equation} \lim_{n \rightarrow \infty} \sum_{k \geq 0} N_{0, n - k}(k + t) = \lim_{n \rightarrow \infty}U^{[1, 2]}(n + t - 1, n + t] = \frac{1}{3/2} = \frac{2}{3}, \tag{E2} \end{equation} for all $t > 0$.
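Since $N_{0, n}(t) = \P(S^{[0, 1]}_n \in (t - 1, t])$, identity (E1) can be checked through the Irwin-Hall distribution function; exact rational arithmetic sidesteps the severe cancellation in its alternating sum for large $n$. A Python sketch (helper names are ours):

```python
from fractions import Fraction
from math import comb, factorial

def irwin_hall_cdf(n, x):
    """P(S_n <= x) for a sum of n iid Uniform(0,1) variables, via the
    Irwin-Hall formula (1/n!) sum_j (-1)^j C(n, j) (x - j)^n over j <= x.
    Exact rationals avoid the heavy cancellation of the alternating sum."""
    if n == 0:
        return Fraction(1 if x >= 0 else 0)
    x = Fraction(x)
    if x <= 0:
        return Fraction(0)
    if x >= n:
        return Fraction(1)
    s = sum((-1) ** j * comb(n, j) * (x - j) ** n for j in range(int(x) + 1))
    return s / factorial(n)

def bspline(n, t):
    """N_{0,n}(t) = P(S_n in (t - 1, t])."""
    return irwin_hall_cdf(n, t) - irwin_hall_cdf(n, t - 1)

for t in [5, Fraction(31, 2), 30]:
    # Truncated column sum sum_n N_{0,n}(t); approaches 2 as t grows.
    total = sum(bspline(n, t) for n in range(4 * int(t) + 8))
    print(float(t), float(total))
```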
\section{Contrasts and alternating sums} Theorem~\ref{thm:blackwell.non} also holds for a \textit{delayed renewal process}, where $S_0$ is a random variable other than a constant zero. This insight leads to an extension of Equation~\eqref{eq:eulerian.cs} to alternating sums, and more generally, \textit{contrasts}. We call a length $m$ vector $(c_0, c_1, \ldots, c_{m - 1})$ a {\it contrast} if \[ \sum_{i = 0}^{m-1} c_i = 0. \] We use the same notation $c_k$ to denote its periodic extension, which is a sequence $\{c_k: k = 0, 1, 2, \ldots\}$ such that \[ c_k = c_i, \quad\text{if } k \equiv i \;(\bmod\; m). \] For each $j = 0, 1, \ldots, m - 1$, consider a delayed renewal process with delay $S_0^{(j)} = S^{[0,1]}_j$ and interarrival time $X_i^{(j)} = \sum_{i' = 0}^{m - 1} X^{[0,1]}_{m(i - 1) + i' + j + 1} = S^{[0,1]}_{mi + j} - S^{[0,1]}_{m(i - 1) + j}$ for $i = 1, 2, \ldots$. The corresponding partial sum $S_n^{(j)}$ satisfies \[ S_n^{(j)} = \sum_{i = 1}^n X_i^{(j)} + S^{[0,1]}_j = \sum_{i = 1}^n (S^{[0,1]}_{mi + j} - S^{[0,1]}_{m(i - 1) + j}) + S^{[0,1]}_j = S^{[0,1]}_{mn + j}, \] and the expectation of interarrival times is $\mathrm{E} X_i^{(j)} = \sum_{i' = 0}^{m - 1} \mathrm{E} X^{[0,1]}_{m(i - 1) + i' + j + 1} = m/2$. A direct application of Theorem~\ref{thm:blackwell.non} to this delayed renewal process gives \begin{equation} \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} \P(S^{[0,1]}_{mn + j} \in (k - 1, k]) = \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} \P(S_n^{(j)} \in (k - 1, k]) = \frac{1}{m/2} = \frac{2}{m}. \end{equation} Now \begin{equation}\label{eq:dummy12.1} \sum_{n = 0}^{\infty} c_n \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} =\sum_{n = 0}^{\infty} c_n \P(S^{[0,1]}_n \in (k - 1, k])= \sum_{j = 0}^{m - 1} c_j \sum_{n = 0}^{\infty} \P(S^{[0,1]}_{mn + j} \in (k - 1, k]).
\end{equation} Taking the limit of both sides in Equation~\eqref{eq:dummy12.1}, we obtain \begin{align} \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} c_n \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} & = \sum_{j = 0}^{m - 1} c_j \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} \P(S^{[0,1]}_{mn + j} \in (k - 1, k]) \\ & = \sum_{j = 0}^{m - 1} c_j \cdot \frac{2}{m} = \frac{2}{m} \sum_{j = 0}^{m - 1} c_j = 0. \end{align} A special case is alternating sums of the normalized Eulerian numbers when $m = 2$ and $(c_0, c_1) = (1, -1)$, i.e., \begin{equation} \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} (-1)^n \frac{1}{n!} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} = 0. \tag{B3} \end{equation} Analogous results hold for normalized binomial coefficients following the same argument as above but replacing $\mathrm{Uniform}(0, 1)$ by $\mathrm{Bernoulli}(1/2)$. Thus \begin{equation} \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} c_n \frac{1}{2^n} {n \choose k} = 0, \end{equation} and \begin{equation} \lim_{k \rightarrow \infty} \sum_{n = 0}^{\infty} (-1)^n \frac{1}{2^n} {n \choose k} = 0. \tag{A3} \end{equation} Proofs of~\ref{eq:bernstein.as},~\ref{eq:hbernstein.as}, and~\ref{eq:bsplines.as} are similar to the proofs provided above either by varying the distribution of interarrival times (use $\mathrm{Bernoulli}(t)$ where $t \in (0, 1)$ for~\ref{eq:bernstein.as} and a mixture of $\mathrm{Bernoulli}(p)$ where $p \sim \mathrm{Beta}(a = t/h, b = (1 - t)/h)$ for~\ref{eq:hbernstein.as}) or by applying Theorem~\ref{thm:blackwell.non} to $(t - 1, t]$ for real-valued $t$ (for~\ref{eq:bsplines.as}). Thus we omit these proofs here. \section{Non-asymptotic identities for normalized binomial coefficients and ($h$-)Bernstein polynomials} \label{sec:non.asymptotic} Here we derive non-asymptotic identities for fixed $k$ as a counterpart to the limits in A, C, and D, also through probabilistic arguments.
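As a numerical preview, the alternating-sum formula~\ref{eq:bernstein.as.exact} derived at the end of this section (and its specialization~\ref{eq:binomial.as.exact} at $t = 1/2$) can be checked by truncating the series; a Python sketch (the function name is ours):

```python
from math import comb

def alt_bernstein_sum(k, t, N=600):
    """Truncation of sum_n (-1)^n B^n_k(t) = sum_n (-1)^n C(n,k) t^k (1-t)^(n-k);
    terms vanish for n < k and decay geometrically, so a moderate N suffices."""
    return sum((-1) ** n * comb(n, k) * t ** k * (1 - t) ** (n - k)
               for n in range(k, N + 1))

for k in [0, 1, 4]:
    t = 0.4
    # Compare with the closed form (-t)^k / (2 - t)^(k+1).
    print(k, alt_bernstein_sum(k, t), (-t) ** k / (2 - t) ** (k + 1))
```

At $t = 1/2$ the closed form becomes $(-1)^k \, 2/3^{k+1}$, the right-hand side of~\ref{eq:binomial.as.exact}.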
The random variable $\sum_{n = 0}^{\infty} 1\{S^{(t)}_n = k\}$ is equal to one plus the number of trials that fail after $S^{(t)}_n$ first reaches $k$. Because the subsequent trials are independent of the trials up to that time, $\sum_{n = 0}^{\infty} 1\{S^{(t)}_n = k\}$ has the same distribution as the number of trials needed for one success in an IID Bernoulli sequence, which is known to be a geometric distribution with parameter $t$ and mean $1/t$. Consequently by Equation~\eqref{eq:renewal.measure.expectation}, \begin{equation} \sum_{n = 0}^{\infty } {n \choose k} t^k (1 - t)^{n - k} = \frac{1}{t}, \tag{C1*} \end{equation} for any nonnegative integer $k$. Substituting $t = 1/2$ into Equation~\eqref{eq:bernstein.cs.exact} leads to the column sum of the normalized binomial coefficients \begin{equation} \sum_{n = 0}^{\infty }\frac{1}{2^n} {n \choose k} = \frac{1}{1/2} = 2, \tag{A1*} \end{equation} for any nonnegative integer $k$. Equation~\eqref{eq:hbernstein.cs.exact} holds by using the equivalence between $h$-Bernstein polynomials and Beta-Binomial distributions as in Section~\ref{sec:h.bernstein}. For the sum of short diagonals, $$\sum_{k \geq 0} B_k^{n - k}(t) = \sum_{k \geq 0} \P(S^{(t)+1}_{k} = n) = \mathrm{E} \left[ \sum_{k \geq 0} 1(S^{(t)+1}_{k} = n)\right] =: \mathrm{E} Z_n.$$ Since there exists at most one value of $k$ such that $S^{(t)+1}_{k} = n$ as $S^{(t)+1}_{k}$ is strictly increasing in $k$, the random variable $Z_n$ follows a Bernoulli distribution. Let the success probability be $a_n = \P(Z_n = 1)$, i.e., the probability that the walk $\{S^{(t)+1}_k\}$ visits the state $n$. Conditioning on whether the last step into $n$ has length 1 (the trial fails, probability $1 - t$) or length 2 (the trial succeeds, probability $t$), we conclude that $a_n = a_{n - 1} (1 - t) + a_{n - 2} t$ for $n \geq 2$, where $a_0 = 1$ and $a_1 = 1 - t$. Solving this recurrence for $a_n$ gives \begin{equation} \sum_{k \geq 0} B_k^{n - k}(t) = \frac{1 - (-t)^{n + 1}}{1 + t}.
\tag{C2*} \end{equation} Substituting $t = 1/2$ yields \begin{equation} \sum_{k \geq 0} \frac{1}{2^{n - k}}{{n - k}\choose{k}} = \frac{2}{3} + \frac{1}{3}\cdot \left(-\frac{1}{2}\right)^n. \tag{A2*} \end{equation} In view of $B_k^{n - k}(t; h) = \mathrm{E}_{p \sim \mathrm{Beta}(a, b) } B_k^{n - k}(p)$, which follows from Equations~\eqref{eq:bernstein.renewal} and~\eqref{eq:h.bernstein.renewal} with $a = t/h$ and $b = (1 - t)/h$, Equation~\eqref{eq:bernstein.sd.exact} leads to \begin{align} \sum_{k \geq 0} B_k^{n - k}(t;h) & = \mathrm{E}_{p \sim \mathrm{Beta}(a, b) } \sum_{k \geq 0} B_k^{n - k}(p) = \mathrm{E}_{p \sim \mathrm{Beta}(a, b) } \left[\frac{1 - (-p)^{n + 1}}{1 + p} \right] \\ &= \int_0^1 \frac{x^{a - 1} (1 - x)^{b - 1}}{(1 + x) \mathrm{B}(a, b)} dx + (-1)^{n + 2} \int_0^1 \frac{x^{a + n} (1 - x)^{b - 1}}{(1 + x) \mathrm{B}(a, b)} dx. \tag{D2*} \end{align} For the alternating sums, we just need to show the Bernstein polynomials in~\ref{eq:bernstein.as.exact}, then~\ref{eq:binomial.as.exact} and~\ref{eq:hbernstein.as.exact} will follow in the same way as we derive~\ref{eq:binomial.sd.exact} and~\ref{eq:hbernstein.sd.exact}. For Equation~\eqref{eq:bernstein.as.exact}, we first rewrite the left-hand side into an expectation \[ \sum_{n = 0}^{\infty} (-1)^n B_k^n(t) = \sum_{n = 0}^{\infty} (-1)^n \mathrm{E} 1\{S_n^{(t)} = k\} = \mathrm{E} \left[\sum_{n = 0}^{\infty} (-1)^n 1\{S_n^{(t)} = k\} \right] = \mathrm{E} Z_k. \] Consider an IID Bernoulli sequence with success probability $t$ and a derived random sequence $\{U_j: j = 1, \ldots\}$ where $U_j$ is the number of trials needed from the $(j - 1)$th success to the $j$th success. Then the $U_j$'s are IID following a geometric distribution with parameter $t$. Let $V_j$ be the number of the trial at which the $j$th success first occurs, i.e., $V_j = \sum_{j' = 1}^j U_{j'}$. 
Then, \begin{align} Z_k & = \sum_{n = V_k}^{V_{k + 1} - 1} (-1)^n 1\{S_n^{(t)} = k\} = \sum_{n = V_k}^{V_{k + 1} - 1} (-1)^n = (-1)^{V_k} \cdot 1\{ V_{k + 1} - V_k \text{ is odd}\} \\ & = \prod_{j = 1}^k (-1)^{U_j} \cdot 1\{ U_{k + 1} \text{ is odd}\}, \end{align} which combined with the fact that the $U_j$'s are IID leads to \begin{equation} \mathrm{E} Z_k = \prod_{j = 1}^{k} \mathrm{E}[(-1)^{U_j}] \P\{ U_{k + 1} \text{ is odd}\} = \{\mathrm{E}[(-1)^{U_1}]\}^k \P\{ U_{1} \text{ is odd}\}. \end{equation} Let $A_1 = \{ U_1 \text{ is even}\}$ and $B_1 = \{ U_1 \text{ is odd}\} = A_1^c$. Then $\mathrm{E}[(-1)^{U_1}] = \P(A_1) - \P(B_1)$. Moreover it follows easily from the definition of $U_1$ that $\P(A_1) = \sum_{i = 0}^{\infty} (1 - t)^{2i + 1} t = t(1 - t)/(1 - (1 - t)^2) = (1 - t)/(2 - t)$ and $\P(B_1) = 1 - \P(A_1) = 1/(2 - t)$. Consequently, \begin{equation} \mathrm{E} Z_k = [\P(A_1) - \P(B_1)]^k \P(B_1) = \frac{(-t)^k}{(2 - t)^{k + 1}}. \tag{C3*} \end{equation} \section*{Acknowledgments} We thank Professor Plamen Simeonov for pointing out the connection between~\ref{eq:hbernstein.sd} and hypergeometric functions. \bibliographystyle{siamplain}
https://arxiv.org/abs/0805.2707
Recurrence Formulas for Fibonacci Sums
In this article we present a new recurrence formula for a finite sum involving the Fibonacci sequence. Furthermore, we state an algorithm to compute the sum of a power series related to the Fibonacci series, without the use of the term-by-term differentiation theorem.
\section{Introduction} \renewcommand{\theequation}{1.\arabic{equation}} \setcounter{equation}{0} The Fibonacci sequence is one of the most famous numerical sequences in mathematics. It is defined recursively: the first two terms are given, and each subsequent term is the sum of the two preceding ones. Mathematically speaking: \[ F_0 = 0,~F_1 =1,~~F_r = F_{r-1} + F_{r-2},~r\geq 2 . \] The first terms are $0, 1, 1, 2, 3, 5, 8, 13, 21, \ldots$ This sequence arises from the problem of the progeny of a single pair of rabbits, first posed by Leonardo of Pisa (Fibonacci) in the Liber Abaci of 1202. An intriguing point is that this sequence appears in many problems in mathematics as well as in botany, crystallography, computer science, etc.\ \cite{Dunlap}. Consider the following finite sum involving the Fibonacci sequence, where $x$ is a real number and $m$ and $n$ are non-negative integers: \begin{equation} \label{1.1} \sum_{r=1}^{n} r^m F_r x^r. \end{equation} Many authors have sought to establish a sum formula for (\ref{1.1}) (see \cite{Harris}, \cite{Brousseau}, \cite{Ledin}, \cite{Gauthier}). In this article we state a sum formula for (\ref{1.1}) that we believe to be new. Consider now the power series associated with (\ref{1.1}): \begin{equation} \label{1.2} \sum_{r=1}^{+\infty} r^m F_r x^r. \end{equation} It is not difficult to show that (\ref{1.2}) converges for all $m$ and all $x \in (-1/\phi , 1/\phi)$, in which $\phi=(1 + \sqrt5)/2$ is the golden ratio, a well-known constant associated with the Fibonacci sequence \cite{Dunlap}. The question posed here is the following: within its convergence interval, is there a formula for the sum of the series (\ref{1.2})? An answer to this question is obtained by invoking the term-by-term differentiation theorem for power series.
Such a formula is obtained by applying the operator $D = x\,d/dx$ $m$ times to the known identity \[ \sum_{r=1}^{+\infty} F_r x^r = \fracd{x}{1 -x - x^2}. \] If we define \[ S(x,j)=\sum_{r=1}^{+\infty} r^j F_r x^r , \] a recurrence formula can be obtained in the following way: \begin{equation} \label{1.3} \left \{ \begin{array}{ll} \displaystyle{S(x,0)=\fracd{x}{1 -x - x^2}}, & \\ \displaystyle{S(x,j)=D[S(x,j-1)]}, & j=1, \ldots,m. \end{array} \right. \end{equation} {\bf Example 1.1} Using algorithm (\ref{1.3}), we can compute the sum of the numerical series \begin{equation} \label{1.4} \displaystyle{S=\sum_{r=1}^{+\infty} \fracd{r F_r}{3^r}.} \end{equation} In fact, if $S(x,0) = x/(1-x-x^2)$, then $S(x,1) = x S' (x,0) = (x + x^3)/{(1 - x - x^2)}^2$. Hence, taking $x =1/3$ in $S(x,1)$, we get the sum $S = 6/5$ for the series (\ref{1.4}). {\bf Example 1.2} Now try to compute the sum of the numerical series below using the same algorithm: \[ \displaystyle{\sum_{r=1}^{+\infty} \fracd{r^{50} F_r}{3^r}.} \] The drawback of algorithm (\ref{1.3}) is that each step requires differentiating an increasingly complicated function, at growing computational cost. Example 1.2 illustrates this difficulty. In this paper we obtain another recurrence formula to compute the sum of (\ref{1.2}). The article is organized as follows. In the second section we present our main result, a recurrence formula for the finite sum (\ref{1.1}), and we show that it recovers several known finite summation formulas involving the Fibonacci sequence. The third section is devoted to a rigorous proof of our formula. In the fourth section we state an algorithm to compute the sum of (\ref{1.2}) without the use of derivatives. Finally, in the fifth section we give some comments on the results and future possibilities.
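To make algorithm (\ref{1.3}) concrete, here is a minimal sketch in Python (ours, not part of the original article), using SymPy to carry out the symbolic differentiation; the function name is our own.

```python
# Minimal sketch (ours) of algorithm (1.3): apply the operator D = x d/dx
# m times to the Fibonacci generating function S(x, 0) = x / (1 - x - x^2).
import sympy as sp

x = sp.symbols('x')

def S_by_differentiation(m):
    """Return S(x, m) = sum_{r>=1} r^m F_r x^r as a symbolic expression."""
    S = x / (1 - x - x**2)                   # S(x, 0)
    for _ in range(m):
        S = sp.simplify(x * sp.diff(S, x))   # S(x, j) = D[S(x, j - 1)]
    return S

# Example 1.1: sum_{r>=1} r F_r / 3^r evaluates to 6/5
print(S_by_differentiation(1).subs(x, sp.Rational(1, 3)))  # -> 6/5
```

Each step differentiates an increasingly complicated rational function, which is exactly the cost that the derivative-free recurrence of Section 4 avoids.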
\section{Finite Sums} \renewcommand{\theequation}{2.\arabic{equation}} \setcounter{equation}{0} Our main result in this section is the theorem below: {\bf Theorem 2.1.} Let $x \in \Re,~1-x- x^2 \not = 0$ be given. Then the following finite recurrence formula holds: \begin{equation} \label{2.1} \begin{array}{ll} \displaystyle{\sum_{r=1}^{n} r^m F_r x^r = \fracd{1}{1 - x - x^2} \sum_{i=1}^m \pmatrix{m \cr i} {(-1)}^{i + 1} \sum_{r=1}^n r^{m - i} F_r x^r} + & \\ & \\ + \displaystyle{\fracd{x^2}{1 - x - x^2} \sum_{i=1}^m \pmatrix{m \cr i} \sum_{r=1}^{n-1} r^{m - i} F_r x^r - \fracd{n^m (F_{n+1} x^{n+1} + F_n x^{n+2})}{1 - x - x^2}} .& \end{array} \end{equation} \vspace*{0.5 cm} As a consequence of Theorem 2.1 we obtain many closed formulas for finite sums involving the Fibonacci sequence. In fact, taking $x=1$ in (\ref{2.1}) we obtain the following finite summation formula: \begin{equation} \label{2.2} \displaystyle{ \sum_{r=1}^{n} r^m F_r = \sum_{i=1}^m \pmatrix{m \cr i} {(-1)}^i \sum_{r=1}^n r^{m - i} F_r } - \displaystyle{ \sum_{i=1}^m \pmatrix{m \cr i} \sum_{r=1}^{n-1} r^{m - i} F_r + n^m F_{n+2} }. \end{equation} We believe that (\ref{2.2}) is a new formula for (\ref{1.1}). From (\ref{2.2}) we can derive closed formulas for some special cases of $m$. For instance, taking $m=1$ in (\ref{2.2}) we obtain \begin{equation} \label{2.3} \sum_{r=1}^{n} r F_r = \sum_{i=1}^1 \pmatrix{1 \cr i} {(-1)}^i \sum_{r=1}^n r^{1 - i} F_r - \sum_{i=1}^1 \pmatrix{1 \cr i} \sum_{r=1}^{n-1} r^{1 - i} F_r + n F_{n+2}. \end{equation} It is well known (see \cite{Ledin}) that \begin{equation} \label{2.4} \displaystyle{ \sum_{r=1}^{n - 1} F_r} = F_{n+1} - 1. \end{equation} Thus, from (\ref{2.3}) and (\ref{2.4}) we conclude that \[ \begin{array}{ll} \displaystyle{\sum_{r=1}^{n} r F_r} & = - \displaystyle{\sum_{r=1}^{n} F_r - \sum_{r=1}^{n - 1} F_r + n F_{n+2} } \\ & \\ & = -F_{n+2} + 1 - F_{n+1} + 1 + n F_{n+2}.
\end{array} \] Therefore \begin{equation} \label{2.5} \displaystyle{\sum_{r=1}^{n} r F_r = n F_{n+2} - F_{n+3} + 2}, \end{equation} which is formula (1) of \cite{Harris}. Now, taking $m=2$ in (\ref{2.2}) we can see that \[ \sum_{r=1}^{n}r^{2}F_{r}=\sum_{i=1}^{2}\pmatrix{2 \cr i}(-1)^{i}\sum_{r=1}^{n} \, \,r^{2-i}F_{r} -\sum_{i=1}^{2}\pmatrix{2 \cr i}\sum_{r=1}^{n-1}\,r^{2-i}F_{r} +n^{2}F_{n+2} , \] that is, \begin{equation} \label{2.6} \sum_{r=1}^{n}r^{2}F_{r}=-2\sum_{r=1}^{n}rF_{r}-2\sum_{r=1}^{n-1}rF_{r} +n^{2}F_{n+2}+\sum_{r=1}^{n}F_{r}-\sum_{r=1}^{n-1}F_{r}. \end{equation} Thus, using (\ref{2.4}) and (\ref{2.5}) in (\ref{2.6}), after some algebraic manipulation, we obtain \[ \sum_{r=1}^{n}r^2F_{r}=(n^2+2)F_{n+2}-(2n-3)F_{n+3}-8 , \] which is formula (17) in \cite{Harris}. In an analogous way we can recover other known identities by taking different values of $m$ in (\ref{2.2}). In fact, the recurrence formula (\ref{2.1}) can produce many identities simply by choosing special values of $x$ and $m$. For instance, the formula (\ref{2.1}) for $x=-1$ is \begin{equation} \label{2.7} \begin{array}{lll} \displaystyle{\sum_{r=1}^{n}(-1)^{r}r^{m}F_{r}} & = &\displaystyle{\sum_{i=1}^{m}\pmatrix{m \cr i}(-1)^{i+1}\sum_{r=1}^{n} \, (-1)^{r}\,r^{m-i}F_{r} }+\nonumber \\ &+&\displaystyle{ \sum_{i=1}^{m}\pmatrix{m \cr i}\sum_{r=1}^{n-1}\,(-1)^{r}\,r^{m-i}F_{r} -n^{m}\left[(-1)^{n+1}F_{n+1}+(-1)^{n+2}F_{n}\right ]}. \end{array} \end{equation} Taking $m=1$ in (\ref{2.7}) we obtain \begin{equation} \label{2.8} \sum_{r=1}^{n}(-1)^{r}\,r\,F_{r}=\sum_{r=1}^{n}(-1)^{r}F_{r}+\sum_{r=1}^{n-1}(-1)^{r}F_{r} -n\left[(-1)^{n+1}F_{n+1}+(-1)^{n+2}F_{n}\right] .
\end{equation} Recalling (see \cite{Harris}) that \begin{equation} \label{2.9} \sum_{r=1}^{n-1}(-1)^{r}F_{r} = (-1)^{n-1} F_{n-2} -1, \end{equation} and using (\ref{2.9}) in (\ref{2.8}) (with $n$ replaced by $n+1$ for the first sum), we conclude that \begin{equation} \label{2.10} \sum_{r=1}^{n}(-1)^{r}rF_{r}=(-1)^{n}F_{n-1} -1 + (-1)^{n-1}F_{n-2} -1 - n\left[(-1)^{n+1}F_{n+1}+(-1)^{n+2}F_{n}\right]. \end{equation} After some simplifications, (\ref{2.10}) becomes \begin{equation} \label{2.11} \sum_{r=1}^{n}(-1)^{r}rF_{r}=(-1)^{n} (n+1) F_{n-1} + (-1)^{n-1} F_{n-2} -2. \end{equation} Note that (\ref{2.11}) is formula (2) in \cite{Harris}. \section{Proof of Our Main Result} \renewcommand{\theequation}{3.\arabic{equation}} \setcounter{equation}{0} Before proving Theorem 2.1 we need to state some auxiliary results: {\bf Lemma 3.1.} Let non-negative integers $n\geq k$ be given. Suppose that $1 - x - x^2 \not =0$. Then \begin{equation} \label{3.1} F_k x^k + F_{k+1} x^{k+1} + \ldots + F_n x^n = \fracd{F_k x^k + F_{k-1} x^{k+1} - F_{n+1} x^{n+1} - F_n x^{n+2}}{1 - x - x^2} . \end{equation} {\sl Proof.} Consider the sum \begin{equation} \label{3.2} S = F_k x^k + F_{k+1} x^{k+1} + F_{k+2}x^{k + 2} + F_{k+3}x^{k+3} + \ldots + F_{n-1}x^{n-1} + F_n x^n. \end{equation} Multiplying (\ref{3.2}) by $-x$ and by $-x^2$ we obtain, respectively, \begin{equation} \label{3.3} -x S = -F_k x^{k +1} - F_{k+1} x^{k + 2} - F_{k+2}x^{k + 3}- \ldots - F_{n-2}x^{n - 1} - F_{n-1}x^{n } - F_n x^{n + 1}. \end{equation} \begin{equation} \label{3.4} -x^2 S = -F_k x^{k +2} - F_{k+1} x^{k+3} - \ldots - F_{n-3} x^{n-1} - F_{n-2}x^{n} - F_{n-1}x^{n + 1}- F_n x^{n +2}.
\end{equation} Adding (\ref{3.2}), (\ref{3.3}), and (\ref{3.4}), using the definition of the Fibonacci sequence and cancelling terms, we have \begin{equation} S -xS -x^2S =F_k x^k + F_{k+1} x^{k+1} -F_k x^{k +1} - F_n x^{n + 1} - F_{n-1}x^{n + 1}- F_n x^{n +2}. \end{equation} Using the definition of the Fibonacci sequence once more, we conclude that \[ S -xS -x^2S =F_k x^k + F_{k-1} x^{k+1} - F_{n+1} x^{n+1} - F_n x^{n+2}, \] that is, (\ref{3.1}) holds. {\bf Lemma 3.2.} Let $x \in \Re,~1-x- x^2 \not = 0$ be given. Then the following identity holds: \begin{equation} \begin{array}{ll} \displaystyle{\sum_{r=1}^n r^m F_r x^r} = & \displaystyle{\fracd{1}{1 - x - x^2} \sum_{r=1}^n ( r^m - {(r-1)}^m ) ( F_r x^r + F_{r-1} x^{r+1})} - \\ & \\ & - \displaystyle{\fracd{n^m (F_{n+1} x^{n+1} + F_n x^{n+2})}{1 - x - x^2}} . \end{array} \end{equation} {\sl Proof.} Consider the sum \[ \sum_{r=1}^n r^m F_r x^r= 1^m F_1 x + 2^m F_2 x^2 + 3^m F_3 x^3 + \ldots + n^m F_n x^n . \] It is easy to see that the sum above can be rearranged as follows: \[ \sum_{r=1}^n r^m F_r x^r = (F_1 x + F_2 x^2 + F_3 x^3 + \ldots + F_n x^n) + \] \[ + (2^m - 1)(F_2 x^2 + F_3 x^3 + \ldots + F_n x^n) + \] \[ + (3^m - 2^m)(F_3 x^3 + \ldots + F_n x^n) + \ldots + \] \[ + ((n-1)^m - (n-2)^m)(F_{n-1} x^{n-1}+ F_n x^n) + \] \[ + (n^m - (n-1)^m)( F_n x^n). \] By Lemma 3.1 we can write the last sum as \[ \sum_{r=1}^n r^m F_r x^r = \fracd{F_1 x^1 + F_0 x^2 - F_{n+1} x^{n+1} - F_n x^{n+2}}{1 - x - x^2} + \] \[ + (2^m - 1)(\fracd{F_2 x^2 + F_1 x^3 - F_{n+1} x^{n+1} - F_n x^{n+2}}{1 - x - x^2}) + \ldots + \] \[ + (n^m - (n-1)^m)(\fracd{F_n x^n + F_{n-1} x^{n+1} - F_{n+1} x^{n+1} - F_n x^{n+2}}{1 - x - x^2}).
\] Therefore, the sum can be expressed as \[ \sum_{r=1}^n r^m F_r x^r = \fracd{1}{1 - x - x^2} \sum_{r=1}^n ( r^m - {(r-1)}^m ) ( F_r x^r + F_{r-1} x^{r+1}) - \] \[ - [\fracd{F_{n+1} x^{n+1} + F_n x^{n+2}}{1 - x - x^2}] [ 1 + (2^m -1) + (3^m - 2^m) + \ldots + ( {(n-1)}^m - {(n-2)}^m ) + ( n^m - {(n-1)}^m ) ] . \] The bracketed sum telescopes to $n^m$, so we finally obtain \[ \sum_{r=1}^n r^m F_r x^r = \fracd{1}{1 - x - x^2} \sum_{r=1}^n ( r^m - {(r-1)}^m ) ( F_r x^r + F_{r-1} x^{r+1}) - \] \[ - \fracd{n^m (F_{n+1} x^{n+1} + F_n x^{n+2})}{1 - x - x^2}, \] which is the desired result. {\sl Proof of Theorem 2.1.} Using a suitable change of variables and the binomial theorem, we have \[ \fracd{1}{1 - x - x^2} \sum_{r=1}^n ( r^m - {(r-1)}^m ) ( F_r x^r + F_{r-1} x^{r+1}) \] \[ =\fracd{1}{1 - x - x^2} \sum_{r=1}^n ( r^m - {(r-1)}^m ) F_r x^r + \fracd{1}{1 - x - x^2} \sum_{\theta=1}^n ( \theta^m - {(\theta-1)}^m )F_{\theta-1} x^{\theta+1} \] \[ =\fracd{1}{1 - x - x^2}\sum_{r=1}^n ( r^m - {(r-1)}^m ) F_r x^r + \fracd{x^2}{1 - x - x^2} \sum_{r=1}^{n - 1} ( {(r+1)}^m - r^m ) F_r x^r \] \[ = \fracd{1}{1 - x - x^2}\sum_{r=1}^n \sum_{i=1}^m \pmatrix{m \cr i} {(-1)}^{i + 1} r^{m - i} F_r x^r + \fracd{x^2}{1 - x - x^2}\sum_{r=1}^{n-1} \sum_{i=1}^m \pmatrix{m \cr i} r^{m - i} F_r x^r \] \[ = \fracd{1}{1 - x - x^2} \sum_{i=1}^m \pmatrix{m \cr i} {(-1)}^{i + 1} \sum_{r=1}^n r^{m - i} F_r x^r + \fracd{x^2}{1 - x - x^2} \sum_{i=1}^m \pmatrix{m \cr i} \sum_{r=1}^{n-1} r^{m - i} F_r x^r . \] The theorem follows from the identity above and Lemma 3.2. \section{Power Series} \renewcommand{\theequation}{4.\arabic{equation}} \setcounter{equation}{0} In this section we state a result which provides an algorithm to compute the sum of (\ref{1.2}) without the use of the term-by-term differentiation theorem. This algorithm is a consequence of the theorem below: {\bf Theorem 4.1.} Let $x \in (-1/ \phi,1/ \phi)$ be given.
Then the following recurrence formula holds: \begin{equation} \label{4.1} \begin{array}{ll} \displaystyle{\sum_{r=1}^{+\infty} r^m F_r x^r} = & \displaystyle{\fracd{1}{1 - x - x^2} \sum_{i=1}^m \pmatrix{m \cr i} {(-1)}^{i + 1} \sum_{r=1}^{+\infty} r^{m - i} F_r x^r} + \\ & \\ & + \displaystyle{\fracd{x^2}{1 - x - x^2} \sum_{i=1}^m \pmatrix{m \cr i} \sum_{r=1}^{+\infty} r^{m - i} F_r x^r } . \end{array} \end{equation} {\sl Proof.} In view of Theorem 2.1, it suffices to let $n \rightarrow +\infty$ in (\ref{2.1}) and to note that $\displaystyle{\lim_{n \rightarrow +\infty} n^m F_n x^n = 0}$, since the series (\ref{1.2}) converges for every integer $m$ and every $x \in (-1/ \phi,1/ \phi)$. \vspace*{0.5 cm} By Theorem 4.1, we obtain the following algorithm for computing the sum of (\ref{1.2}): \begin{equation} \label{4.2} \left \{ \begin{array}{l} S(x,0)= \fracd{x}{1 -x - x^2}, \\ S(x,j)= \displaystyle{\fracd{1}{1 - x - x^2} \sum_{i=1}^j \pmatrix{j \cr i} {(-1)}^{i + 1} S(x,j - i)}+ \displaystyle{\fracd{x^2}{1 - x - x^2} \sum_{i=1}^j \pmatrix{j \cr i} S(x,j - i)}, \\ j=1, \ldots,m. \end{array} \right. \end{equation} This algorithm can be implemented efficiently, avoiding the expensive repeated differentiation of the standard approach. It answers, for instance, the question posed in Example 1.2: \[ \displaystyle{\sum_{n=1}^{+\infty}\fracd{n^{50} F_n}{3^n} \approx 6.526\times 10^{74}}. \] \section{Final Remarks} \renewcommand{\theequation}{5.\arabic{equation}} \setcounter{equation}{0} In this article we stated a new recurrence formula for a finite sum related to the Fibonacci sequence. This formula recovers many identities for Fibonacci sums. Moreover, it yields an algorithm to compute the sum of the Fibonacci power series without the use of derivatives. The scheme used to obtain these results can be extended to other series.
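The derivative-free algorithm (\ref{4.2}) is straightforward to implement. The following Python sketch (ours, not the article's; the function name is our own) uses exact rational arithmetic from the standard library to reproduce Example 1.1 and to answer Example 1.2 without any differentiation.

```python
# Derivative-free sketch (ours) of algorithm (4.2): build S(x, 0), ..., S(x, m)
# from the recurrence of Theorem 4.1, using exact rational arithmetic.
from fractions import Fraction
from math import comb

def fibonacci_power_sum(x, m):
    """Return S(x, m) = sum_{r>=1} r^m F_r x^r for |x| < 1/phi."""
    x = Fraction(x)
    d = 1 - x - x**2                       # the denominator 1 - x - x^2
    S = [x / d]                            # S(x, 0)
    for j in range(1, m + 1):
        a = sum(comb(j, i) * (-1)**(i + 1) * S[j - i] for i in range(1, j + 1))
        b = sum(comb(j, i) * S[j - i] for i in range(1, j + 1))
        S.append(a / d + x**2 * b / d)     # S(x, j) by the recurrence (4.2)
    return S[m]

# Example 1.1: sum_{r>=1} r F_r / 3^r
print(fibonacci_power_sum(Fraction(1, 3), 1))                  # -> 6/5
# Example 1.2: sum_{r>=1} r^50 F_r / 3^r (the article reports 6.526 x 10^74)
print(f"{float(fibonacci_power_sum(Fraction(1, 3), 50)):.3e}")
```

Each step is a linear combination of previously computed values, so the cost of step $j$ is $O(j)$ exact arithmetic operations, in contrast with the symbolic differentiation of algorithm (\ref{1.3}).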
The ideas presented here are part of a larger ongoing investigation concerning the series \begin{equation} \label{5.1} \displaystyle{\sum_{r=1}^{+\infty}}{r^{m}}x^{r}a_{r}, \end{equation} in which $\{a_r\}$ is an arbitrary sequence. In this article $\{a_r\}$ is the Fibonacci sequence. Nevertheless, we can extend our results to other types of sequences (see \cite{JB1}, \cite{JB2}). For example, if we take $a_r = 1$, (\ref{5.1}) becomes the generalized geometric series \begin{equation} \sum_{r=1}^{+\infty} r^{m} x^r, \end{equation} which converges for all $ x \in (-1,1)$. Using the same ideas developed in the last section, we can obtain a recurrence formula for this series: \[ \sum_{r=1}^{+\infty} r^{m} x^{r} = \fracd{1}{1 - x} \sum_{i=1}^m \pmatrix{m \cr i} {(-1)}^{i + 1} \sum_{r=1}^{+\infty} r^{m - i} x^r . \] We are also investigating extensions of the results presented here to other sequences, such as the Lucas, generalized Fibonacci, generalized Lucas, Pell, and Tribonacci sequences. It should be observed that in \cite{Filipponi} the author studied a series related to (\ref{1.2}), covering generalized Lucas and Fibonacci sequences. However, those results are valid only for positive rational $x$, and the technique employed there is quite different from ours. Additional references concerning Fibonacci numbers and the golden ratio can be found in \cite{Dunlap}.
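As a quick sanity check of the last recurrence (our own sketch; the function name is hypothetical), the case $a_r = 1$ can be implemented in a few lines and compared against the classical closed form $\sum_{r\geq 1} r x^r = x/(1-x)^2$.

```python
# Sketch (ours) of the Section 5 recurrence for T(x, m) = sum_{r>=1} r^m x^r:
# T(x, m) = (1/(1-x)) * sum_{i=1}^{m} C(m, i) (-1)^(i+1) T(x, m-i).
from fractions import Fraction
from math import comb

def geometric_power_sum(x, m):
    """Return T(x, m) = sum_{r>=1} r^m x^r for |x| < 1."""
    x = Fraction(x)
    T = [x / (1 - x)]                      # T(x, 0) = x / (1 - x)
    for j in range(1, m + 1):
        T.append(sum(comb(j, i) * (-1)**(i + 1) * T[j - i]
                     for i in range(1, j + 1)) / (1 - x))
    return T[m]

# Classical check: sum_{r>=1} r x^r = x/(1-x)^2, which equals 2 at x = 1/2.
print(geometric_power_sum(Fraction(1, 2), 1))  # -> 2
```

The same pattern, with the appropriate generating function in place of $T(x,0)$, is what the extensions to Lucas-type sequences would require.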